text stringlengths 8-5.74M | label stringclasses 3 values | educational_prob sequencelengths 3-3 |
---|---|---|
Joe Kinnear announced his arrival as Newcastle's director of football by blasting the club's fans and getting the names of his own players wrong. He left late on Monday with his tail between his legs as the first director of football in the history of the Premier League to go through two transfer windows without making a permanent signing. From the start, fantasy and reality merged for Kinnear. The 67-year-old's outlandish boasts during eight chaotic months in his position at St James' Park can be revealed here: How he privately claimed Newcastle were going to sign Mohamed Salah last month for £25 million AFTER a deal had been struck with Chelsea for less than half that figure. How he told punters in the Hilton Hotel the night before Newcastle were beaten by Manchester City in January that he had £20 million to spend on a striker, a boast that reached and infuriated owner Mike Ashley. How he struggled with technology and had endless problems with his mobile phone. How he got the name of his own defender Shane Ferguson muddled up with former midfielder Stephen Ireland and called him Shane Ireland. How he claimed on a train earlier this season, as he headed to watch a Newcastle match, that he had to call off a drinking night with Bobby Moore to wine and dine the Supremes. How he told fans not to worry about selling Tim Krul because he had three world-class goalkeepers lined up. And, more damagingly, how he then agreed a fee for midfielder Yohan Cabaye that was less than the £25 million the club wanted. The pair clashed last week following the sale of Cabaye for less than Ashley expected, believed to be around the £16 million mark. That was the tipping point for the owner. The 67-year-old had been Ashley's stunning appointment in the summer. The Sports Direct billionaire shocked football back in 2008 when he appointed Kinnear, who had been out of football for four years, as the manager of Newcastle United. A triple heart bypass operation before a match at West Brom ended his time at the club, but Ashley retained a soft spot for Kinnear and kept promising him a job at Newcastle. When Ashley's relationship with the then managing director Derek Llambias had problems, Kinnear was given a staggeringly senior role back in the richest league in the world. Chief scout Graham Carr offered to quit. Derek Llambias, who was called Lambeeze by Kinnear in that infamous radio interview, went a step further and left. The remit for Kinnear, then, was simple: keep an eye on and unsettle manager Alan Pardew, whom Ashley was not prepared to sack with a big payout after the club finished 16th, 12 months after it had come fifth, and carry out Ashley's plan to the letter by ending spending. Newcastle were never going to fork out a penny last summer on permanent deals after Ashley was forced to spend £33 million in the January 2013 transfer window, signing five French players, to keep the club in the Premier League. On that front at least, Kinnear delivered. Newcastle came out of a relegation fight and went through the summer spending just £2 million on the loan fee for Loic Remy from Queens Park Rangers. That riled supporters, but Kinnear's only failing was an inability to negotiate with Arsenal over the transfer of Yohan Cabaye after an initial bid from Arsene Wenger came in at £10.2 million.
That infuriated Cabaye, who criticised Kinnear while away on international duty with France in October. Cabaye was finally sold to Paris Saint-Germain last week, but the fee, negotiated by Kinnear, was not the £25 million the club were expecting. Newcastle also failed to react to a £6 million offer for forward Papiss Cisse from Qatar outfit Al-Rayyan, who were ready to increase their bid. Negotiations didn't happen. That saw Kinnear summoned to St James' Park on Monday. Newcastle could still sell Cisse, with the Russian transfer window open until the end of this month, but Kinnear was not there to discuss transfers. Instead, he was told his time at the club was over, and agreement was reached that he would go. He leaves a club in chaos. Even Pardew was in the dark about Kinnear going. He only found out about the latest stunning development late on Monday night. The Newcastle manager had admitted after the derby demolition by Sunderland that he would have done things differently if he had sole control of transfers. Pardew initially thought Kinnear was there for his job. He then thought he might help him land more physical players better suited to the Premier League, but in the end he had his best player sold on the eve of a Tyne-Wear clash that he could not afford to lose. Ashley, who had originally planned to attend (his burly bouncer was there at St James' Park for the three-nil defeat), instead went to Ireland to do a business deal. Now he must plot a new path for a football club that is once more in crisis. There will be relief that one level of the club's ill-advised management structure has gone. The appointment of Kinnear as a director of football will go down in Premier League history as one of the strangest. And unquestionably as one of the worst. Newcastle United 0 Sunderland 3: The match in pictures. 1/7 Fabio Borini puts Sunderland ahead from the spot. 2/7 Fabio Borini (2nd L) of Sunderland celebrates with teammates after scoring the opening goal. 3/7 Adam Johnson scores Sunderland's second. 4/7 Adam Johnson celebrates his goal. 5/7 Jack Colback scores Sunderland's third goal. 6/7 Gus Poyet celebrates Sunderland's third goal. 7/7 A Newcastle fan incensed by Sunderland's third goal. | Low | [
0.506437768240343,
29.5,
28.75
] |
/*------------------------------------------------------------------------- * * freespace.h * POSTGRES free space map for quickly finding free space in relations * * * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * * src/include/storage/freespace.h * *------------------------------------------------------------------------- */ #ifndef FREESPACE_H_ #define FREESPACE_H_ #include "storage/block.h" #include "storage/relfilenode.h" #include "utils/relcache.h" /* prototypes for public functions in freespace.c */ extern Size GetRecordedFreeSpace(Relation rel, BlockNumber heapBlk); extern BlockNumber GetPageWithFreeSpace(Relation rel, Size spaceNeeded); #ifdef _SHARDING_ extern BlockNumber GetPageWithFreeSpace_withshard(Relation rel, Size spaceNeeded, ShardID sid); extern BlockNumber GetPageWithFreeSpace_fromextent(Relation rel, Size spaceNeeded, ExtentID eid); #endif extern BlockNumber RecordAndGetPageWithFreeSpace(Relation rel, BlockNumber oldPage, Size oldSpaceAvail, Size spaceNeeded); #ifdef _SHARDING_ extern BlockNumber RecordAndGetPageWithFreeSpace_extent(Relation rel, BlockNumber oldPage, Size oldSpaceAvail, Size spaceNeeded, ShardID sid); extern void RecordNewPageWithFullFreeSpace(Relation rel, BlockNumber heapBlk); #endif extern void RecordPageWithFreeSpace(Relation rel, BlockNumber heapBlk, Size spaceAvail); #ifdef _SHARDING_ extern void RecordPageWithFreeSpace_extent(Relation rel, ShardID sid, BlockNumber heapBlk, Size spaceAvail); #endif extern void XLogRecordPageWithFreeSpace_extent(RelFileNode rnode, BlockNumber heapBlk, Size spaceAvail, bool hasExtent); #ifdef _SHARDING_ #define XLogRecordPageWithFreeSpace(rnode, heapBlk, spaceAvail) \ XLogRecordPageWithFreeSpace_extent(rnode, heapBlk, spaceAvail, false); uint8 GetMaxAvailWithExtent(Relation rel, ExtentID eid); #endif extern void FreeSpaceMapTruncateRel(Relation rel, BlockNumber nblocks); extern void FreeSpaceMapVacuum(Relation rel); extern void UpdateFreeSpaceMap(Relation rel, BlockNumber startBlkNum, BlockNumber endBlkNum, Size freespace); #endif /* FREESPACE_H_ */ | Mid | [
0.635245901639344,
38.75,
22.25
] |
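The header above only declares the free space map interface. As a rough illustration of the idea behind GetPageWithFreeSpace and RecordPageWithFreeSpace, here is a minimal, self-contained C++ toy model; it tracks exact per-block free bytes in a vector, whereas PostgreSQL's real FSM stores coarse one-byte categories in a separate relation fork, so treat the names and structure here as illustrative assumptions only.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Toy free space map: one entry of free bytes per heap block.
// PostgreSQL's actual FSM keeps a fuzzy 1-byte category per block in a
// tree of FSM pages; this sketch keeps exact sizes for readability.
class ToyFreeSpaceMap {
public:
    static constexpr uint32_t InvalidBlock = UINT32_MAX;

    // Analogue of RecordPageWithFreeSpace: remember a block's free space.
    void record(uint32_t blk, uint32_t free_bytes) {
        if (blk >= map_.size()) map_.resize(blk + 1, 0);
        map_[blk] = free_bytes;
    }

    // Analogue of GetRecordedFreeSpace.
    uint32_t recorded(uint32_t blk) const {
        return blk < map_.size() ? map_[blk] : 0;
    }

    // Analogue of GetPageWithFreeSpace: first block with enough room.
    uint32_t find(uint32_t needed) const {
        for (uint32_t blk = 0; blk < map_.size(); ++blk)
            if (map_[blk] >= needed) return blk;
        return InvalidBlock;  // caller would extend the relation
    }

    // Analogue of FreeSpaceMapTruncateRel: drop entries past nblocks.
    void truncate(uint32_t nblocks) {
        if (nblocks < map_.size()) map_.resize(nblocks);
    }

private:
    std::vector<uint32_t> map_;
};

int main() {
    ToyFreeSpaceMap fsm;
    fsm.record(0, 120);
    fsm.record(1, 4000);
    std::cout << "block for 512 bytes: " << fsm.find(512) << "\n";  // 1
    if (fsm.find(8000) == ToyFreeSpaceMap::InvalidBlock)
        std::cout << "no block can fit 8000 bytes; extend the relation\n";
    return 0;
}
```

The real map is deliberately approximate (one byte per page) so stale values are tolerable; the exact-size vector here trades that robustness for clarity.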
What xian/religious sayings get on your nerves? We've had countless discussions on here about the religious colloquialisms and sayings that others say that we tolerate (e.g. many don't mind something like "I hope you get better soon - I'll pray for you", or something as simple as "bless you" after a sneeze). But what about the other end of the spectrum? What are the xian, or religious for that matter, sayings that drive you crazy or get under your skin? For instance, here are a couple of mine... When a religious person sees another who is down on his luck, or has had some tragedy, or is simply "not as good as" the religious person, and they say, "There but for the grace of god go I". How pompous and sanctimonious, not to mention cruel and selfish. Sports figures, actors, etc. who go through the histrionics of thanking god when something great happens to them. What, did god hate the other players/competitors? Replies to This Discussion The very sadistic expression of "well, I'll go to heaven and you'll rot in hell," because you know in some sadistic part of their mind they get a kick out of the thought of you in hell. The people who carelessly say that others will be tortured eternally, as if they don't care for their fellow humans. For me, there isn't any particular phrase (although 'My prayers are with you' and related sentiments often make me laugh) that bothers me, but responses to my position in religious discussions. Apparently being an atheist means my position can be dismissed because I don't believe in religion... Many of the sayings I see here aren't actually prayers or blessings; they are phrases that we use to express ourselves. Much as "gesundheit" has its origins in German, "bless you" has its origins in religion. At least from my standpoint... I'm a devout atheist and I still say "bless you" and reference god and his son when I talk (i.e.: thank god; oh god, what the hell is wrong with you; JESUS CHRIST, what was that?). Christianity has been a part of our culture for so long it's a part of our language. I know many atheists who will say hell or will damn someone, yet no one bats an eye. Years from now, when (hopefully) religion is at least mostly weeded out of society, kids will ask "Why did dad say god? What is god?" and the parents will have to say "ask your history teacher, I have no clue". One that I hate hearing and didn't see mentioned here is: "Satan's greatest victory is convincing humanity that he doesn't exist." I am usually fed this when I mention anything that could go against religion. ANYthing. I actually made a fully conscious decision to throw away God and religion when I was trying to conceive. I suffer from a condition that causes infertility. I've wanted to be a mother for a long, long time, and my husband and I have been together for over 9 and a half years, but could never conceive. When it was starting to get to be too much for me and I was suffering from some pretty hardcore depression and mental distress, I had people constantly feeding me the religious bullshit of, "It will happen when God believes it's right to happen!" or "God is trying to teach you patience." As if almost 10 years hadn't been patient enough! When I was FINALLY able to conceive and was pregnant, some of those same people came back to say, "God has blessed you!" or "God has answered your prayers!" Took him fucking long enough, didn't it? Oh, wait... God didn't do jack squat!
I was able to pull this off with the use of SCIENCE and the power of pure human determination. God had NOTHING to do with it! UGH! I also dislike people saying, "Bless you!" when I sneeze. And the one about "God taking another angel for Heaven" when someone passes on. Recently I was in a car accident on the highway that could have been very serious. Thankfully no one was hurt. The response from my mom was that "God was watching out for me." You'd think god could have maybe prevented the accident in the first place? What about the six other people who have died in the past two months on the same stretch of highway? Did god not give a damn about them? I've pointed out many times that their supposedly omnipotent, omnipresent, omniscient, omnibenevolent big-guy-in-the-sky always seems to get there too late to have any real effect, much less impact, on the problem at hand, be it a wreck, crash, or assassination. For this problem I always turn to greater words than mine, like Epicurus, and I quote: “Is God willing to prevent evil, but not able? Then he is not omnipotent. Is he able, but not willing? Then he is malevolent. Is he both able and willing? Then whence cometh evil? Is he neither able nor willing? Then why call him God?” | Low | [
0.48085106382978704,
28.25,
30.5
] |
In the Press Love your garden and love wildlife? Why not read this post for a few hints and tips to help attract some of nature's creatures to your garden. #helpthebees http://www.thompson-morgan.com/plants-for-wildlife We chose to look at water sustainability, with WaterAid being one of the Prince's main charities. Inspired by an episode of "One Foot in the Grave" where Victor Meldrew has a plant delivered to his house whilst he is out, telling them to leave it in the downstairs toilet, he comes home to find the plant… | Low | [
0.516,
32.25,
30.25
] |
// (C) Copyright 2016 by Autodesk, Inc. #ifdef DEB_ON #include "DebFile.hh" #include "DebCallStack.hh" #include "DebDefault.hh" #include "Base/Utils/Environment.hh" #include <string> #include <fstream> #include <time.h> #include <vector> #include <iostream> #include <map> #include <memory> #include <list> #include <map> #include <sstream> #include <cstring> #if !defined(WIN32) && !defined(_WIN32) #define sprintf_s snprintf #endif namespace Debug { namespace { // TODO: make this use std::string; check for html extension; case insensitive bool is_html_filename(const char* const str) { if (str == NULL) return false; const char* dot = strrchr(str, '.'); if (dot == NULL) return false; ++dot; return (!strncmp(dot, "htm", 3)) || (!strncmp(dot, "HTM", 3)) ; } }//namespace class File::Impl { public: Impl(const char* const _flnm, const uint _flags) : flags_(_flags), num_flush_(0), line_strt_(false) { set_filename(_flnm); } ~Impl() { close(); clear(); } bool is_kept_open() const { return 0 != (flags_ & KEEP_OPEN); } bool is_html() const { return 0 != (flags_ & HTML); } bool is_retained() const { return 0 != (flags_ & RETAIN); } bool is_appended() const { return 0 != (flags_ & APPEND); } // Only applies to HTML DEB_out bool is_white_on_black() const { return true; } bool file_is_open() const { return file_stream_.is_open(); } int priority() const { return priority_; } const char* filename() const { return flnm_.empty() ? NULL : flnm_.c_str(); } void clear() { bffr_.clear(); output_.clear(); flnm_.clear(); } void print(const char _c, const bool _cnsl = true) { if (line_strt_) { line_strt_ = false; print(' ', false); // indents never go onto the console! } bffr_.append(&_c, 1); if (_cnsl && console()) std::cerr << _c; // print on the console if (_c == '\n') { std::cerr << std::flush; line_strt_ = true; } } void line_break(const bool _cnsl = true) { print('\n', _cnsl); } void print(const std::string& _s, const bool _cnsl = true) { for (size_t i = 0, n = _s.size(); i < n; ++i) print(_s[i], _cnsl); } void print(const char* const _s, const bool _cnsl = true) { if (_s == NULL) return; for (int i = 0, c = _s[0]; c != '\0'; c = _s[++i]) print((char)c, _cnsl); } void print(const size_t _i) { char buffer[128]; #if defined(_MSC_VER) && _MSC_VER < 1900 // MSVC versions older than VC2015 sprintf_s(buffer, sizeof(buffer), "%Iu", _i); #else // MSVC 2015 and everything else sprintf_s(buffer, sizeof(buffer), "%zu", _i); #endif print(buffer); } void print(const int _i) { char buffer[64]; sprintf_s(buffer, sizeof(buffer), "%i", _i); print(buffer); } const char* double_format() const { if (double_format_.empty()) return "%.17g"; return double_format_.c_str(); } void set_double_format(const char* const str) { if (str == NULL) double_format_.clear(); else double_format_ = str; } void print(double _d) { char buffer[64]; sprintf_s(buffer, sizeof(buffer), double_format(), _d); print(buffer); } void print(const Base::Command& _co) { switch (_co.cmd) { case Base::Command::END : break; case Base::Command::END_ERR : case Base::Command::END_LF : line_break(); break; } } // Append current asctime to given string void add_time(std::string& str) { str += System::Environment::time(); } #if 1 bool hover(std::string& _str, const std::string& _hover, const bool _open) { if (is_html()) { char buffer[1024]; if (_open) sprintf_s(buffer, sizeof(buffer), "<span title=\"%s\">", _hover.c_str()); else sprintf_s(buffer, sizeof(buffer), "</span>"); _str.append(buffer); return true; } return false; } #endif bool anchor(std::string& _str, const int 
_id, const char* _tag, const bool _open) { if (is_html()) { char buffer[1024]; if (_open) sprintf_s(buffer, sizeof(buffer), "<A name=\"%08X_%s\">", _id, _tag); else sprintf_s(buffer, sizeof(buffer), "</A>"); _str.append(buffer); return true; } return false; } bool link_to(std::string& _str, const int _id, const char* _tag, const std::string& _hover, const bool _open) { if (is_html()) { char buffer[2048]; if (_open) { // HTML title hover text is cropped to 64 char in Firefox but displays // OK in Chrome. We could use javascript to avoid this limit but HTML // is simpler. if (_hover.empty()) sprintf_s(buffer, sizeof(buffer), "<A href=\"#%08X_%s\">", _id, _tag); else sprintf_s(buffer, sizeof(buffer), "<A href=\"#%08X_%s\" title=\"%s\">", _id, _tag, _hover.c_str()); } else sprintf_s(buffer, sizeof(buffer), "</A>"); _str.append(buffer); return true; } return false; } void header(std::string& str) { if (is_html()) { str.append("<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\">"); str.append("\n<HTML><HEAD>"); str.append("\n<TITLE>ReForm DEB_out"); str.append("</TITLE>"); // javascript lib loads go here // stylesheet loads go here // within HEAD javascript goes here str.append("\n</HEAD>"); if (is_white_on_black()) { str.append("\n<BODY BGCOLOR=\"#000000\" TEXT=\"#FFFFFF\" LINK=\"#%00FFFF\" VLINK=\"#FFFF00\" >"); //str.append( "\n<BODY BGCOLOR=\"#000000\" TEXT=\"#FFFFFF\" >"); } else { str.append("\n<BODY BGCOLOR=\"#FFFFFF\" TEXT=\"#000000\" LINK=\"#%FF0000\" VLINK=\"#0000FF\" >"); //str.append( "\n<BODY BGCOLOR=\"#000000\" TEXT=\"#FFFFFF\" >"); } str.append("\n"); } // endif is_html bool date_header = true; if (date_header) { if (!flnm_.empty()) { str.append(flnm_); str.append(" opened "); } add_time(str); str.append("[ Build: " __TIME__ " " __DATE__ "] "); if (is_html()) str.append("<BR>"); str.append("\n"); } } void footer() { std::string str("\n"); if (!flnm_.empty()) str.append(flnm_); str.append(" Closed: "); add_time(str); str.append("\n"); print(str.c_str()); } bool is_first_flush() { return num_flush_ == 0; } bool flush() { if (bffr_.empty() || !logfile()) return true; const char* const flnm = filename(); if (flnm == NULL && !file_is_open()) return false; if (!file_is_open()) { file_stream_.open(flnm, std::fstream::out | ((is_appended() && !is_first_flush()) ? std::fstream::app : std::fstream::trunc)); } if (!file_is_open()) // failed opening the file return false; std::string hdr; if (!is_appended()) {// Re-output entire file header(hdr); output_.append(hdr); file_stream_ << output_; } else { if (is_first_flush()) { header(hdr); if (is_retained()) output_.append(hdr); file_stream_ << hdr; } ++num_flush_; } file_stream_ << bffr_; if (is_retained()) output_.append(bffr_); bffr_.clear(); if (!is_kept_open()) file_stream_.close(); return true; } void close() { footer(); flush(); } void set_filename(const char* const _flnm) { flnm_ = _flnm != NULL ? _flnm : ""; if (is_html_filename(_flnm)) flags_ = flags_ | HTML; } void enter() { const bool entr_cnsl = false; // do we print enters on the console? // First DEB_out in this function so output call-stack, and flush. 
if (!line_strt_) line_break(entr_cnsl); // make sure we start on a new line with this std::string str; str.append("*>"); // .txt call stack lead in CallStack::query().append(str); bffr_.append(str); line_break(entr_cnsl); // make sure we start on a new line with this flush(); } void set_console(const bool _on = true) { set_flag<CONSOLE>(_on); } bool console() const { return flag<CONSOLE>(); } void set_logfile(bool _on) { set_flag<LOGFILE>(_on); } bool logfile() const { return flag<LOGFILE>(); } private: uint flags_; int num_flush_; int priority_; // Last permission granted bool line_strt_; // are we at the start of the line? std::string bffr_; std::string output_; std::string flnm_; std::fstream file_stream_; std::string double_format_; private: template <Flags _flag> void set_flag(const bool _on) { flags_ = _on ? flags_ | _flag : flags_ & ~_flag; } template <Flags _flag> bool flag() const { return (flags_ & _flag) == _flag; } }; ////////////////////////////////////////////////////////////////////////// File& File::modify() { // TODO: Thread-local storage, each (per thread) file in a separate folder static File glbl_file(Default::LOG_FILENAME); return glbl_file; } const File& File::query() { return modify(); } File::File(const char* const _flnm, const uint _flags) : impl_(new Impl(_flnm, _flags)) {} File::~File() { delete impl_; } void File::enter(const int /*_id*/) { impl_->enter(/*_id*/); } void File::print(const char _c) { impl_->print(_c); } void File::print(const char* _s) { impl_->print(_s); } void File::print(const size_t _i) { impl_->print(_i); } void File::print(const int _i) { impl_->print(_i); } void File::print(double _d) { impl_->print(_d); } void File::print(const Base::Command& _co) { impl_->print(_co); } const char* File::double_format() const { return impl_->double_format(); } void File::set_double_format(const char* const str) { impl_->set_double_format(str); } void File::set_console(const bool _on) { impl_->set_console(_on); } bool File::console() const { return impl_->console(); } void File::set_logfile(bool _on) { impl_->set_logfile(_on); } bool File::logfile() const { return impl_->logfile(); } }//namespace Debug #endif // DEB_ON | Low | [
0.5207100591715971,
33,
30.375
] |
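The DebFile implementation above leaves a TODO on is_html_filename ("make this use std::string; check for html extension; case insensitive"). Here is a self-contained sketch of what that rewrite might look like (hypothetical code, not Autodesk's shipped implementation):

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <string>

// Case-insensitive check whether a filename has an HTML extension.
// Addresses the TODO on is_html_filename(): uses std::string, matches the
// extension exactly ("htm"/"html"), and ignores case.
static bool is_html_filename(const std::string& flnm) {
    const auto dot = flnm.find_last_of('.');
    if (dot == std::string::npos) return false;
    std::string ext = flnm.substr(dot + 1);
    std::transform(ext.begin(), ext.end(), ext.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    return ext == "htm" || ext == "html";
}

int main() {
    assert(is_html_filename("log.HTML"));
    assert(is_html_filename("out.htm"));
    assert(!is_html_filename("log.txt"));
    assert(!is_html_filename("noext"));
    return 0;
}
```

The original strncmp(dot, "htm", 3) check also accepts extensions such as ".htmx" and rejects mixed case like ".Htm", which is presumably what the TODO is flagging.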
Tag: best blender The Auto-iQ system is one of the most advanced blender technologies available today. This feature allows you to select and control blender settings that suit your food preparation process. Furthermore, this system offers convenience in making healthy drinks, soups, and other foods. Ninja Blenders is the best-selling brand of this device in the market right now. This manufacturer is popular for its Auto-iQ blenders. Are you looking for a blender that can do all the tough crushing jobs? Then what you need is the Ninja Mega Kitchen blender. This product from Ninja Blenders is a popular option for individuals who want all-around features in their blenders. It can perform every difficult task your regular food processor can do: it can crush ice, nuts, fruits, and vegetables, and mix them perfectly. The reason behind the Ninja Mega Kitchen Blender's performance is its 1500-watt motor, which is amazingly powerful for this kind of kitchen device. Moreover, this appliance has a 2-HP motor, which makes it possible for this product to prepare all types of food blends. Aside from this product's motor power, the Ninja Mega Kitchen features Auto-iQ technology. It means that this device can not only create shakes and smoothies but also prepare dough, crush, and pulse. In addition, this blender has powerful blades that can break ice and make a healthy smoothie quickly and easily. If you want to enjoy your healthy drink with friends, this blender is a nice choice since it has a large pitcher capacity that can hold a total of 72 ounces of food blend. The only drawback of this device is its expensive price. Still, investing in this kind of blender is worth your money. Nutri Ninja Duo Nutrient Extraction Blender Here is another creation of Ninja Blender that you must see for yourself. The Nutri Ninja BL680 Duo nutrient extraction blender is a must-have because of its top-notch quality, durability, and power. Like the Ninja Mega Kitchen Blender, this product from the same brand has a 1500-watt motor base and 2 horsepower, both excellent qualities for a kitchen tool. Therefore, using this blender, you can prepare lots of drinks and food recipes for a large group of foodies minus the kitchen disasters. If you always have trouble making tasty baby foods, worry no more, for the Nutri Ninja Duo can do the job for you. This device can make better baby food for your infant than those regular baby food processors. The high motor power and strong blades of this device can crush, chop, and create dough better than a traditional food processor. This product of Ninja Blender is one of its creations with the Auto-iQ system built in. Thus, through its various settings, you can control how you want to prepare your food and get the best outcome. Ninja Blender has more help to offer for your kitchen duties. Learn more about this brand's products on Blend It Nutrition. | Mid | [
0.626050420168067,
37.25,
22.25
] |
Any other predictions: In a dark and strange turn of the NHL's storied history, both teams to compete in the Stanley Cup finals are killed when Mike Ditka drives the Chicago Bears team bus over all players and personnel of both the St. Louis Blues and the New Jersey Devils. Instead of witnessing the first drop of the puck in what would be game one of the Stanley Cup finals, horrified and saddened hockey fans at the Scottrade Center are forced to watch members of the 1985 Chicago Bears perform the Super Bowl Shuffle continuously for 3 hours. Any other predictions: The forum as a whole will collectively call the Blues season 'done' at least 300 times. Jaden Schwartz will be the Blues' best player by the end of the season. Tyler Seguin will serve a lengthy suspension for stupidity off the ice. USA wins Gold. USA! USA! Any other predictions: Blues trade Halak or Elliott. Detroit thrives in the Eastern Conference... just because they're mother (frank)ers. 2014-2015 Official LGB Sponsor of T.J. Oshie 2013-2014 Official LGB Sponsor of Kevin Shattenkirk 2012-2013 Official LGB Sponsor of Ryan Reaves 2011-2012 Official LGB Sponsor of Vladimir Tarasenko 2010-2011 Official LGB Sponsor of Vladimir Tarasenko Any other predictions: Blues beat Chicago in the West finals. Pittsburgh beats the Rangers in the East. Halak stays healthy all year and plays great, so we trade Elliott, Lapierre and Porter to the Flyers and get Simmonds and a conditional pick. Jaden Schwartz is our top scorer, followed closely by Chris Stewart. | Low | [
0.5330073349633251,
27.25,
23.875
] |
Glassed on fins Glassed-on fins are a real pain in the ass, but for my taste they are beautiful and classic. Once you have made a lot of mistakes, you get better and faster. For travelling they are shit. Hopefully these sleds are done soon to hit the local lineups! | Low | [
0.44034707158351405,
25.375,
32.25
] |
Fatty acid transport and metabolism in HepG2 cells. The mechanism(s) of fatty acid uptake by liver cells is not fully understood. We applied new approaches to address long-standing controversies of fatty acid uptake and to distinguish diffusion and protein-based mechanisms. Using HepG2 cells containing an entrapped pH-sensing fluorescence dye, we showed that the addition of oleate (unbound or bound to cyclodextrin) to the external buffer caused a rapid (seconds) and dose-dependent decrease in intracellular pH (pH(in)), indicating diffusion of fatty acids across the plasma membrane. pH(in) returned to its initial value with a time course (in min) that paralleled the metabolism of radiolabeled oleate. Preincubation of cells with the inhibitors phloretin or triacsin C had no effect on the rapid pH(in) drop after the addition of oleate but greatly suppressed pH(in) recovery. Using radiolabeled oleate, we showed that its esterification was almost completely inhibited by phloretin or triacsin C, supporting the correlation between pH(in) recovery and metabolism. We then used a dual-fluorescence assay to study the interaction between HepG2 cells and cis-parinaric acid (PA), a naturally fluorescent but slowly metabolized fatty acid. The fluorescence of PA increased rapidly upon its addition to cells, indicating rapid binding to the plasma membrane; pH(in) decreased rapidly and simultaneously but did not recover within 5 min. Phloretin had no effect on the PA-mediated pH(in) drop or its slow recovery but decreased the absolute fluorescence of membrane-bound PA. Our results show that natural fatty acids rapidly bind to, and diffuse through, the plasma membrane without hindrance by metabolic inhibitors or by an inhibitor of putative membrane-bound fatty acid transporters. | High | [
0.685294117647058,
29.125,
13.375
] |
# SPDX-License-Identifier: GPL-2.0-only config HAVE_GCC_PLUGINS bool help An arch should select this symbol if it supports building with GCC plugins. menuconfig GCC_PLUGINS bool "GCC plugins" depends on HAVE_GCC_PLUGINS depends on CC_IS_GCC depends on $(success,$(srctree)/scripts/gcc-plugin.sh $(CC)) default y help GCC plugins are loadable modules that provide extra features to the compiler. They are useful for runtime instrumentation and static analysis. See Documentation/kbuild/gcc-plugins.rst for details. if GCC_PLUGINS config GCC_PLUGIN_CYC_COMPLEXITY bool "Compute the cyclomatic complexity of a function" if EXPERT depends on !COMPILE_TEST # too noisy help The complexity M of a function's control flow graph is defined as: M = E - N + 2P where E = the number of edges N = the number of nodes P = the number of connected components (exit nodes). Enabling this plugin reports the complexity to stderr during the build. It mainly serves as a simple example of how to create a gcc plugin for the kernel. config GCC_PLUGIN_SANCOV bool help This plugin inserts a __sanitizer_cov_trace_pc() call at the start of basic blocks. It supports all gcc versions with plugin support (from gcc-4.5 on). It is based on the commit "Add fuzzing coverage support" by Dmitry Vyukov <[email protected]>. config GCC_PLUGIN_LATENT_ENTROPY bool "Generate some entropy during boot and runtime" help By saying Y here the kernel will instrument some kernel code to extract some entropy from both original and artificially created program state. This will help especially embedded systems where there is little 'natural' source of entropy normally. The cost is some slowdown of the boot process (about 0.5%) and fork and irq processing. Note that entropy extracted this way is not cryptographically secure! This plugin was ported from grsecurity/PaX. More information at: * https://grsecurity.net/ * https://pax.grsecurity.net/ config GCC_PLUGIN_RANDSTRUCT bool "Randomize layout of sensitive kernel structures" select MODVERSIONS if MODULES help If you say Y here, the layouts of structures that are entirely function pointers (and have not been manually annotated with __no_randomize_layout), or structures that have been explicitly marked with __randomize_layout, will be randomized at compile-time. This can introduce the requirement of an additional information exposure vulnerability for exploits targeting these structure types. Enabling this feature will introduce some performance impact, slightly increase memory usage, and prevent the use of forensic tools like Volatility against the system (unless the kernel source tree isn't cleaned after kernel installation). The seed used for compilation is located at scripts/gcc-plugins/randomize_layout_seed.h. It remains after a make clean to allow for external modules to be compiled with the existing seed and will be removed by a make mrproper or make distclean. Note that the implementation requires gcc 4.7 or newer. This plugin was ported from grsecurity/PaX. More information at: * https://grsecurity.net/ * https://pax.grsecurity.net/ config GCC_PLUGIN_RANDSTRUCT_PERFORMANCE bool "Use cacheline-aware structure randomization" depends on GCC_PLUGIN_RANDSTRUCT depends on !COMPILE_TEST # do not reduce test coverage help If you say Y here, the RANDSTRUCT randomization will make a best effort at restricting randomization to cacheline-sized groups of elements. It will further not randomize bitfields in structures. This reduces the performance hit of RANDSTRUCT at the cost of weakened randomization. 
config GCC_PLUGIN_ARM_SSP_PER_TASK bool depends on GCC_PLUGINS && ARM endif | Mid | [
0.5879396984924621,
29.25,
20.5
] |
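The GCC_PLUGIN_CYC_COMPLEXITY help text above defines the complexity as M = E - N + 2P. As a quick sanity check of that formula, here is a small self-contained C++ program that computes M for a hand-built control flow graph; the graph encoding is an assumption of this sketch, not anything the plugin itself emits. An if/else with a join node has E = 4, N = 4, P = 1, so M = 2, matching the two independent paths through the branch.

```cpp
#include <cstddef>
#include <iostream>
#include <utility>
#include <vector>

// Cyclomatic complexity M = E - N + 2P, as defined in the
// GCC_PLUGIN_CYC_COMPLEXITY help text above.
static long cyclomatic(std::size_t nodes,
                       const std::vector<std::pair<int, int>>& edges,
                       std::size_t components = 1) {
    return static_cast<long>(edges.size()) - static_cast<long>(nodes) +
           2 * static_cast<long>(components);
}

int main() {
    // CFG of "if (c) a(); else b();" with a join node:
    // 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3  =>  E = 4, N = 4, P = 1  =>  M = 2.
    std::vector<std::pair<int, int>> edges = {{0, 1}, {0, 2}, {1, 3}, {2, 3}};
    std::cout << "M = " << cyclomatic(4, edges) << "\n";  // prints M = 2
    return 0;
}
```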
Calling for Climate Leadership As the nation that kicked off the industrial revolution and started burning fossil fuels, we have a particular responsibility to lead the world in action to tackle the climate change brought about by the burning of those fossil fuels. Using our power as citizens We need to demand climate leadership from our elected representatives, from our political parties, from our businesses and from our media outlets and platforms. As citizens, we have more power than we generally realise. Understanding that we have power, and understanding how to use it, is empowering. The belief that we are powerless is disempowering. Leadership has to start with us. What do we mean by being a climate leader? It's instructive to look at great leaders from history, but we can also probably think of great leaders from our own personal experiences. It could be a parent, or someone we've worked with. If you're going to write to your MP, give them an example of what you mean by great leadership. You might want to mention Gareth Southgate, or Malala Yousafzai, or Winston Churchill, or Nelson Mandela, or Emmeline Pankhurst… You might also want to point out that the UK parliament has in the past shown great leadership, such as in passing (with cross-party support) the 2008 Climate Change Act, making the UK the first country in the world to set legally binding targets for reductions in greenhouse gas emissions. What specifically are we calling for? Well, effective action on climate change, which in short means a robust and escalating price on carbon. That's the bottom line. If you really want to drive down emissions at the rate required to meet the targets set by the Climate Change Act, then you've got to put a meaningful price on carbon, something high enough to drive significant reductions in emissions. But we're not the only ones calling for carbon pricing. In May, Parliament's Environmental Audit Committee published a report on green finance, saying a higher carbon price could, in the long run, be an effective and technology-neutral way to drive investment and innovation in emissions-reducing technologies, and now we have a highly influential think tank, the Policy Exchange, coming out with a proposal that is very similar to the one Citizens' Climate Lobby has been lobbying for over the past decade, a proposal that has already attracted strong support from climate scientists, economists and many others. Leaders need allies In San Francisco this September the Carbon Pricing Leadership Coalition is hosting a meeting of climate leaders from government and industry. Citizens' Climate Lobby is calling on the UK government to send high-level ministers to that meeting. The Carbon Pricing Leadership Coalition (CPLC) is a unique initiative that brings together leaders across national and sub-national governments, the private sector, academia, and civil society with the goal of putting in place effective carbon pricing policies that maintain competitiveness, create jobs, encourage innovation, and deliver meaningful emissions reductions. There's an opportunity here to set up a club of climate leaders, and that's something the UK ought to be part of and something we'd like to see the UK taking the lead on and actively pushing for. But for that to happen, we have to push for it. We can't be expecting someone else to do that for us. We have to be the climate leaders we want our political leaders to be, and one way to do that is to join Citizens' Climate Lobby and get lobbying. | Mid | [
0.6474820143884891,
33.75,
18.375
] |
inline editing for add/edit +ajax I had implemented DT Editor for add/edit with Ajax and it was working as expected, but when I implemented inline editing it makes no Ajax call, and when I edit the text and click outside the textbox the value reverts back to the old value. Unfortunately I don't see what is causing the problem there. Could you publish the page on the web somewhere so I can debug it directly please? PM me the URL by clicking my name above and then Send message if you like. 1) For select2 + Ajax, what is the response expected by select2 for binding data, and will select2 retain the data when edit is called? | Low | [
0.5254901960784311,
33.5,
30.25
] |
This application is related to the following co-pending and commonly-assigned patent applications, which applications were filed on the same date herewith, and which applications are incorporated herein by reference in their entirety: "Method and System for Multiple Read/Write Transactions Across a Bridge System," to Gary W. Batchelor, Russell L. Ellison, Carl E. Jones, Robert E. Medlin, Belayneh Tafesse, Forrest Lee Wade, and Juan A. Yanes, Ser. No. 09/275,470; "Method And System For Prefetching Data in a Bridge System," to Gary W. Batchelor, Carl E. Jones, Forrest Lee Wade, Ser. No. 09/275,857; and "Method And System For Reading Prefetched Data Across a Bridge System," to Gary W. Batchelor, Brent C. Beardsley, Matthew J. Kalos, and Forrest Lee Wade, Ser. No. 09/275,610. 1. Field of the Invention The present invention relates to a method and system for prefetching data within a bridge system. 2. Description of the Related Art The Peripheral Component Interconnect (PCI) bus is a high-performance expansion bus architecture that was designed to replace the traditional ISA (Industry Standard Architecture) bus. A processor bus master communicates with the PCI local bus and devices connected thereto via a PCI bridge. This bridge provides a low latency path through which the processor may directly access PCI devices mapped anywhere in the memory or I/O address space. The bridge may optionally include such functions as data buffering/posting and PCI central functions such as arbitration. The architecture and operation of the PCI local bus is described in "PCI Local Bus Specification," Revisions 2.0 (April 1993) and 2.1, published by the PCI Special Interest Group, 5200 Elam Young Parkway, Hillsboro, Oreg., which publication is incorporated herein by reference in its entirety. A PCI to PCI bridge provides a connection path between two independent PCI local busses. The primary function of the bridge is to allow transactions between a master on one PCI bus and a target device on another PCI bus. The PCI Special Interest Group has published a specification on the architecture of a PCI to PCI bridge in "PCI to PCI Bridge Architecture Specification," Revision 1.0 (Apr. 10, 1994), which publication is incorporated herein by reference in its entirety. This specification defines the following terms and definitions: initiating bus: the master of a transaction that crosses a PCI to PCI bridge is said to reside on the initiating bus. target bus: the target of a transaction that crosses a PCI to PCI bridge is said to reside on the target bus. primary interface: the PCI interface of the PCI to PCI bridge that is connected to the PCI bus closest to the CPU is referred to as the primary PCI interface. secondary interface: the PCI interface of the PCI to PCI bridge that is connected to the PCI bus farthest from the CPU is referred to as the secondary PCI interface. downstream: transactions that are forwarded from the primary interface to the secondary interface of a PCI to PCI bridge are said to be flowing downstream. upstream: transactions forwarded from the secondary interface to the primary interface of a PCI to PCI bridge are said to be flowing upstream. The basic transfer mechanism on a PCI bus is a burst. A burst is comprised of an address phase and one or more data phases.
When a master or agent initiates a transaction, each potential bridge "snoops" or reads the address of the requested transaction to determine if the address is within the range of addresses handled by the bridge. If the bridge determines that the requested transaction is within the bridge's address range, then the bridge asserts DEVSEL# on the bus to claim access to the transaction. There are two types of write transactions, posted and non-posted. Posting means that the write transaction is captured by an intermediate agent, such as a PCI bridge, so that the transaction completes at the originating agent before it completes at the intended destination, e.g., the data is written to the target device. This allows the originating agent to proceed with the next transaction while the requested transaction is working its way to the ultimate destination. Thus, the master bus initiating a write operation may proceed to another transaction before the written data reaches the target recipient. Non-posted transactions reach their ultimate destination before completing at the originating device. With non-posted transactions, the master cannot proceed with other work until the transaction has completed at the ultimate destination. All transactions that must complete on the destination bus, i.e., the secondary bus, before completing on the primary bus may be completed as delayed transactions. With a delayed transaction, the master generates a transaction on the primary bus, which the bridge decodes. The bridge then ascertains the information needed to complete the request and terminates the request with a retry command back to the master. After receiving the retry, the master reissues the request until it completes. The bridge then completes the delayed read or write request at the target device, receives a delayed completion status from the target device, and returns the delayed completion status to the master indicating that the request was completed. A PCI to PCI bridge may handle multiple delayed transactions. With a delayed read request, the read request from the master is posted into a delayed transaction queue in the PCI to PCI bridge. The bridge uses the request to perform a read transaction on the target PCI bus and places the read data in its read data queue. When the master retries the operation, the PCI to PCI bridge satisfies the request for read data with data from its read data queue. With a delayed write request, the PCI to PCI bridge captures both the address and the first word of data from the bus and terminates the request with a retry. The bridge then uses this information to write the word to the target on the target bus. After the write to the target has been completed, when the master retries the write, the bridge will signal that it accepts the data with TRDY#, thereby notifying the master that the write has completed. The PCI specification provides that a certain ordering of operations must be preserved on bridges that handle multiple operations to prevent deadlock. These rules are on a per agent basis. Thus, for a particular agent communicating on a bus and across a PCI bridge, the agent's reads should not pass its writes, and a later posted write should not pass an earlier write. However, with current bridge architecture, only a single agent can communicate through the PCI bridge architecture at a time.
If the PCI bridge is handling a delayed request operation and a request from another agent is attempted, then the PCI bridge will terminate the subsequent transaction from the other agent with a retry command. Thus, a write operation from one agent that is delayed may delay read and write operations from other agents that communicate on the same bus and PCI bridge. Such delays are referred to as latency problems, as one agent can delay the processing of transactions from other agents until the agent currently controlling the bus completes its operations. Further, with a delayed read request, a delayed read request from one agent must be completed before other agents can assert their delayed read requests. Current systems attempt to achieve a balance between the desire for low latency between agents and high throughput for any given agent. High throughput is achieved by allowing longer burst transfers, i.e., more time an agent or master is on the bus. However, increasing burst transfers to improve throughput also increases latency, because other agents must wait for the agent currently using the longer bursting to complete. Current systems employ a latency timer, which is a clock that limits the amount of time any one agent can function as a master and control access to the bus. After the latency time expires, the master may be required to terminate its operation on the bus to allow another master agent to assert its transaction on the bus. In other words, the latency timer represents a minimum number of clocks guaranteed to the master. Although such a latency timer places an upper bound on latency, the timer may prematurely terminate a master's tenure on the bus before the transaction terminates, thereby placing an upper bound on throughput. One current method for reducing latency is the prefetch operation. Prefetch refers to the situation where a PCI bridge reads data from a target device in anticipation that the master agent will need the data. Prefetching reduces the latency of a burst read transaction because the bridge returns the data before the master actually requests the data, thereby reducing the time the master agent controls access to the bus to complete its requested operation. A prefetchable read transaction may be comprised of multiple prefetchable transactions. A prefetchable transaction will occur if the read request is a memory read within the prefetchable space, a memory read line, or a memory read multiple. The amount of data prefetched depends on the type of transaction and the amount of free buffer space available to buffer prefetched data. Disconnect refers to a termination requested with or after data was transferred on the initial data phase, when the target is unable to respond within the target subsequent latency requirement and, therefore, is temporarily unable to continue bursting. A disconnect may occur because the burst crosses a resource boundary or a resource conflict occurs. Disconnect differs from retry in that retry is always on the initial data phase, and no data transfers. Disconnect may also occur on the initial data phase because the target is not capable of doing a burst. In current PCI art, if a read is disconnected and another agent issues an intervening read request, then any prefetched data maintained in the PCI buffer for the disconnected agent is discarded.
Thus, when the read disconnected agent retries the read request, the PCI bridge will have to prefetch the data again, because any prefetched data that was not previously returned to the agent prior to the disconnect would have been discarded as a result of the intervening read request from another agent. There is thus a need in the art for an improved bridge architecture to handle read/write transactions across a bridge from multiple agents. Furthermore, the amount of data prefetched from a target and stored in the PCI buffer may be less than the maximum amount of data that is permitted to be transmitted in a burst. If so, transmitting a sub-burst sized block of prefetched data from the PCI buffer to the requesting agent over the primary bus can result in inefficient utilization of the primary bus. Thus, heavy traffic on the secondary bus, which cuts short prefetching of data to the PCI buffer, can cause the transmission of sub-burst sized blocks of prefetched data over the primary bus and a corresponding increase of traffic on the primary bus. A similar problem is caused by the transmission of sub-burst sized blocks of data in write operations through the PCI bridge. Provided is an improved bridge system and method for gathering read data to return to a read request from an agent. In the illustrated embodiment, continuous read data obtained from a target device in a number of separate read operations over one bus may be gathered by the bridge and assembled into a larger block of data before forwarding the data over another bus to the requesting agent. As a consequence, the transmission of more optimal bursts of read data over the other bus may be increased. The bridge system of the illustrated embodiment includes a plurality of read data gathering buffers in which each agent is assigned a particular read data gathering buffer. As a consequence, read data may be concurrently gathered in the separate buffers for more than one agent at a time. In addition, the assertion of a read or write request by one agent need not cause the flushing of the data being gathered for a different agent in a separate buffer. In another feature of the present invention, read data is gathered until an address boundary is crossed. In a preferred embodiment, read data is gathered by a local bridge until the address of the read data reaches a predefined address boundary, which may be programmed by storing the value of that address boundary in a register. Once a boundary is reached, the gathered data is forwarded to a remote bridge which forwards the gathered data to the requesting agent. The local bridge then resumes gathering read data from boundary to boundary, forwarding the fetched data between adjacent address boundaries to the requesting agent until all of the data requested by the agent has been fetched and forwarded. It is believed that large amounts of data are more efficiently transmitted and stored when sent in blocks of data having beginning and ending addresses which are aligned with respect to predefined address boundaries. Also provided is an improved bridge system and method for gathering write data in response to multiple write requests from an agent before forwarding the collected write data to the selected target device. In the illustrated embodiment, write data may be gathered from several write operations over one bus and assembled into an address boundary-aligned block of write data before the bridge circuit forwards the write data to the target device over another bus. | Mid | [
0.539419087136929,
32.5,
27.75
] |
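The delayed-read flow the application describes (first read terminated with retry and queued, completion on the target bus, retried read satisfied from the bridge's read data queue) can be sketched as a toy state machine. The following C++ model is only an illustration of the protocol steps named in the text; the class, the function names and the stand-in target read are assumptions of this sketch, not the PCI specification's interfaces.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <optional>
#include <utility>

// Toy model of a PCI-to-PCI bridge delayed read transaction:
//  1. the master's first read is queued and terminated with Retry;
//  2. the bridge later completes the read on the target bus;
//  3. the master's retried read is satisfied from the read data queue.
enum class Reply { Retry, Data };

class ToyBridge {
public:
    // A read request arriving on the initiating bus.
    std::pair<Reply, uint32_t> read(uint32_t addr) {
        auto it = queue_.find(addr);
        if (it == queue_.end() || !it->second) {
            queue_[addr] = std::nullopt;  // enqueue the delayed request
            return {Reply::Retry, 0};     // terminate with retry
        }
        uint32_t data = *it->second;      // delayed completion available
        queue_.erase(it);
        return {Reply::Data, data};
    }

    // Bridge-side work: complete pending requests on the target bus.
    void run_target_bus() {
        for (auto& [addr, data] : queue_)
            if (!data) data = addr ^ 0xDEADBEEFu;  // stand-in for a target read
    }

private:
    // Delayed transaction queue: address -> data once the target responds.
    std::map<uint32_t, std::optional<uint32_t>> queue_;
};

int main() {
    ToyBridge bridge;
    auto first = bridge.read(0x100);   // master's initial attempt
    std::cout << (first.first == Reply::Retry ? "retry\n" : "data\n");
    bridge.run_target_bus();           // bridge completes on the target bus
    auto second = bridge.read(0x100);  // master retries and gets the data
    std::cout << std::hex << "data=0x" << second.second << "\n";
    return 0;
}
```

The per-agent gathering buffers the application later proposes would amount to keying this queue by agent as well as address, so that one agent's request cannot flush data being gathered for another.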
377 S.E.2d 388 (1989) NAN YA PLASTICS CORPORATION U.S.A. v. Philip R. DeSANTIS. Record No. 860635. Supreme Court of Virginia. March 3, 1989. *389 J. Lloyd Snook, III, Charlottesville, for appellant. John A. Dezio, Charlottesville, for appellee. Present All the Justices. COMPTON, Justice. This appeal stems from an action for damages brought by an employee against an employer for the alleged breach of an employment contract. First, we must decide whether the trial court properly exercised in personam jurisdiction over a foreign corporation under Virginia's long-arm statute. If we decide the jurisdictional question in favor of the plaintiff, we will examine issues of contract breach and damages. In 1985, appellee Philip DeSantis, a resident of Greene County, filed a motion for judgment against appellant Nan Ya Plastics Corporation U.S.A. seeking recovery of $350,000 for breach of a 1984 employment contract. Nan Ya was a multi-national Delaware corporation doing business in Texas. Upon affidavit that Nan Ya had failed to appoint or maintain a registered agent in the Commonwealth, process was served upon the clerk of the State Corporation Commission as defendant's statutory agent for service of process, pursuant to Code § 13.1-758(F). Subsequently, defendant filed a motion to quash the service of process. After a hearing, the trial court denied the motion. Later, following a bench trial on the merits, the court found in favor of the plaintiff and fixed damages accordingly. Judge E. Gerald Tremblay ruled on the motion to quash and Judge Henry D. Garnett presided over the trial on the merits. We awarded defendant this appeal from the April 1986 judgment order. The facts relevant to the jurisdictional issue mainly are undisputed. Where there was conflicting evidence, we will view the facts in the light most favorable to the plaintiff, according to settled principles. The plaintiff had been employed since 1980 as production manager for Kloeckner-Pentaplast of America, Inc. (KPA), at its Gordonsville plant. The company dealt in the plastics industry, specifically with rigid polyvinyl chloride film. The plaintiff was experienced in that industry. In February 1984, Jules Pilcher, employed by Rocheux International, the seller *390 of defendant's products, contacted plaintiff in Virginia from outside the State by telephone. Pilcher, a former co-worker with plaintiff at another company, asked plaintiff if he "would be interested in a plant manager's job for Nan Ya Plastics in the Wharton, Texas plant." Pilcher said he was calling on behalf of Nan Ya. Pilcher telephoned plaintiff on several subsequent occasions on behalf of Nan Ya to arrange an appointment for a job interview in Wharton. Pilcher also arranged for the plaintiff to fly to Texas and provided the airline ticket which plaintiff obtained at the airport. The plaintiff arrived in Houston, Texas on February 26, 1984 where he was met by Pilcher and Noah Wong, another representative of Rocheux. The three travelled by automobile from the Houston airport for the interview in Wharton, a trip of about one hour and forty-five minutes. At Wharton, the plaintiff met Y.L. Chang, the general manager of Nan Ya's Wharton plant. Present during the interview were Pilcher, Wong and Chang. Wong acted as interpreter for Chang. At the conclusion of the interview, "they" offered plaintiff "the position of plant manager" at a "price of $60,000." The plaintiff declined the offer and asked about "some guarantees, housing, moving expenses." 
Specifically, the plaintiff asked for a five-year employment guarantee. Chang was unable to agree to the demand for the guarantee without consulting Nan Ya's chairman in Taiwan. The plaintiff returned to Texas the following weekend to tour the Wharton plant. Chang informed him that he had not "received verification" from Taiwan concerning the five-year guarantee. Following plaintiff's return to Virginia, he received a letter from Chang dated March 9, 1984 mailed from Texas. In the letter, Chang offered plaintiff the "position as a Plant Manager in charge of production management of the Wharton Plant" at an annual salary of $60,000 plus other benefits. The plaintiff notified Nan Ya, by telephone through Pilcher and Wong, that the offer of the five-year guarantee was not contained in the letter and that he would not resign his position with KPA "until [he] received that." Subsequently, the plaintiff received in Virginia a letter from Chang dated March 13, 1984 mailed from Texas. Referring to the earlier letter, Chang wrote that "our company will offer you ... the position of Wharton plant manager ... for a consecutive five (5) years." The letter also referred to other benefits offered by Nan Ya, including payment of moving expenses from Virginia to Texas. Upon receipt of the letter, the plaintiff telephoned either Chang or Pilcher, accepted the offer, and indicated that he would resign his position at KPA. The plaintiff immediately resigned from KPA, a competitor of Nan Ya, and was sent to Nan Ya's headquarters in Taiwan for a period of training. In the order denying the motion to quash service of process, the trial court recited the foregoing basic facts and found that the March 4 letter constituted an offer of employment. The court further found that the plaintiff's response to that letter by telephone insisting "upon additional conditions" had the legal effect of a counter offer. The court also found that the March 13 letter constituted acceptance of the counter offer; that upon receipt of that letter in Virginia, "the contract between the parties was consummated in Virginia; and that this consummation of the contract was then verified by a telephone call from the Plaintiff in Virginia to the Defendant." Thus, the court ruled, "the Defendant had sufficient contact with Virginia and had acceded to the laws of Virginia," thereby giving the court in personam jurisdiction over defendant. On appeal, defendant argues the trial court erroneously exercised jurisdiction over it. Defendant points to the uncontradicted evidence that Nan Ya did no business in Virginia, maintained no registered agent in Virginia, had no employees in Virginia, owned no property in Virginia, and had no regular business contacts with anyone in Virginia. Defendant says the "only" contacts upon which plaintiff based his allegation of Virginia jurisdiction were *391 the telephone calls from Pilcher to him and the letters of March 9 and 13. Furthermore, defendant contends the trial court erred when it ruled that the employment contract was formed in Virginia. Relying on the so-called "mailbox rule," defendant says that the contract was complete in Texas when the letter of March 13 was posted. We disagree with the defendant's contentions. According to the long-arm statute, a Virginia court "may exercise personal jurisdiction over a person, who acts directly or by an agent, as to a cause of action arising from the person's ... [t]ransacting any business in this Commonwealth." Code § 8.01-328.1(A)(1). 
A "person," as used in the foregoing statute, includes a corporation "whether or not a citizen or domiciliary of this State and whether or not organized under the laws of this State." Code § 8.01-328. The function of our long-arm statute is to assert jurisdiction over nonresidents who engage in some purposeful activity in Virginia, to the extent permissible under the Due Process Clause of the Constitution of the United States. Danville Plywood Corp. v. Plain and Fancy Kitchens, Inc., 218 Va. 533, 534, 238 S.E.2d 800, 802 (1977). The Due Process Clause, however, protects a person's liberty interest in not being subject to the binding judgments of a forum unless he has "certain minimum contacts" within the territory of the forum so that maintenance of the action does not offend "traditional notions of fair play and substantial justice." International Shoe Co. v. Washington, 326 U.S. 310, 316, 66 S.Ct. 154, 158, 90 L.Ed. 95 (1945). See Burger King Corp. v. Rudzewicz, 471 U.S. 462, 471-72, 105 S.Ct. 2174, 2181, 85 L.Ed.2d 528 (1985). And, an examination of the history of litigation involving the limits placed by the Due Process Clause on the power of state courts to enter binding judgments against persons not served within their boundaries shows a "clearly discernible" trend "toward expanding the permissible scope of state jurisdiction over foreign corporations and other nonresidents." McGee v. International Life Ins. Co., 355 U.S. 220, 222, 78 S.Ct. 199, 200, 2 L.Ed.2d 223 (1957). "In part this is attributable to the fundamental transformation of our national economy over the years. Today many commercial transactions touch two or more States and may involve parties separated by the full continent. With this increasing nationalization of commerce has come a great increase in the amount of business conducted by mail across state lines. At the same time modern transportation and communication have made it much less burdensome for a party sued to defend himself in a State where he engages in economic activity." Id. at 222-23, 78 S.Ct. at 200-01. Because our statute speaks of transacting any business, it is a single-act statute requiring only one transaction in Virginia to confer jurisdiction on our courts. Kolbe, Inc. v. Chromodern, Inc., 211 Va. 736, 740, 180 S.E.2d 664, 667 (1971). In Kolbe, the single business transaction occurring in Virginia consisted of procuring a purchase order from a Virginia corporation by the agent of a California manufacturer for the sale and delivery of furniture to a North Carolina purchaser. In I.T. Sales, Inc. v. Dry, 222 Va. 6, 278 S.E.2d 789 (1981), the single transaction in Virginia involved the signing of an employment contract in Virginia, a contract which required the defendant to move to California. In both cases, we held that the defendant conducted a business transaction in this State that was sufficiently substantial to permit personal jurisdiction to be asserted under the long-arm statute. The present case is similar to Kolbe and I.T. Sales. Nan Ya aggressively reached into Virginia and recruited a Virginia resident for employment elsewhere. The plaintiff was located in Virginia when he participated in the telephonic negotiations, and written communications were sent to and received in Virginia. These letters were acknowledged by the plaintiff in Virginia. More importantly, however, the employment contract was formed in Virginia. 
The plaintiff, after receipt of the first March letter, expressly conditioned his acceptance *392 of the job offer upon his receipt of assurances that a five-year guarantee would be a part of his contract. He testified that he would not resign from his job at KPA until he "received that," meaning the promise of the guarantee. The assurance of the guarantee was transmitted in the second March letter received in Virginia. Under these circumstances, defendant's "mailbox rule" argument is not pertinent because the plaintiff required the letter of acceptance actually to be received as a condition to formation of the contract. Thus, the contract was not formed until the letter was received. See 1 Williston on Contracts § 88, at 283 (3d ed. 1957). And the trial court properly so found when it stated that the contract between the parties "was consummated in Virginia" upon receipt of the March 13 letter. Accordingly, we hold that the assertion of in personam jurisdiction by the trial court over the nonresident defendant for a cause of action arising from the consummation of the contract in Virginia between the plaintiff and the nonresident corporation was proper under the foregoing provision of the Virginia long-arm statute. Turning to the merits of the breach of contract claim, we conclude that the trial court's findings of fact control this phase of the case. Defendant's contention centers around the idea that, because plaintiff knowingly breached an agreement with KPA to assume employment with Nan Ya, the risk of plaintiff's continued employment with Nan Ya should be "apportioned," and that it "is unfair and inequitable" to require defendant to "suffer" for an alleged breach of its contract of employment. Defendant states: "Whether characterized as in pari delicto, assumption of risk, or unclean hands, DeSantis should not be permitted to require Nan Ya to pay for his mendacity to KPA." In effect, resolution of the issue turns on the knowledge and conduct of Nan Ya with reference to an agreement between DeSantis and KPA. As we shall demonstrate, the facts, stated in the light most favorable to the plaintiff, do not support defendant's contentions. While plaintiff was employed by KPA, he was required to execute a Confidential Technical and Business Information Protection Agreement (hereinafter, the secrecy agreement). Generally, the secrecy agreement prohibited disclosure by the employee of certain confidential information. It also contained a prohibition against an employee rendering services for any competing organization for a period of two years after his termination of employment with KPA. When the plaintiff arrived in Houston during his first trip to Texas, he initiated a discussion with Pilcher and Wong, Nan Ya's authorized representatives, about the secrecy agreement. The plaintiff was apprehensive about the effect the agreement would have upon his employment with a competitor of KPA such as Nan Ya. The plaintiff had a copy of the agreement in his possession and Pilcher and Wong, during the long ride to Wharton, "said that they would look at it later." After the first interview with Chang, plaintiff gave Pilcher a copy of the secrecy agreement. According to the plaintiff, Pilcher carried the document to another room where Wong was present. Later, they told plaintiff that a Virginia agreement would not "stand up" in Texas and that "we really wouldn't have any problems with it." 
Later that evening, during dinner, the secrecy agreement was discussed between Chang and Wong, in plaintiff's presence. The plaintiff joined Nan Ya and left for Taiwan on March 28, 1984. Shortly, KPA threatened legal action against both parties based on the secrecy agreement. In May 1984, a federal district court in Virginia entered a temporary restraining order at KPA's request enjoining DeSantis from "continuing in his employment with Nan Ya," from disseminating any confidential information, and from otherwise violating the secrecy agreement. In June 1984, the federal judge enlarged the restraining order into a preliminary injunction, but permitted DeSantis to draw his salary from Nan Ya. Nan Ya continued to pay the plaintiff directly until June 30, 1984. According to the evidence, plaintiff was not discharged by Nan Ya in June *393 but continued to live in Texas and drew a salary indirectly from Nan Ya by payments made by Pilcher, Wong, and Rocheux; they ultimately were reimbursed by Nan Ya. In December 1984, the federal judge entered an order vacating that portion of the preliminary injunction which enjoined DeSantis from continuing employment with Nan Ya. Thereafter, on December 11, 1984, plaintiff discovered that Nan Ya had terminated him. That action spawned the present lawsuit. The trial court found that Pilcher and Wong were acting for Nan Ya, that Nan Ya had full knowledge of the secrecy agreement before plaintiff was hired, and that Nan Ya breached the contract of employment. Defendant contends that it should not be liable to the plaintiff for breach of contract because plaintiff knowingly assumed the risk that KPA would obtain an injunction, thereby making his employment with Nan Ya "impossible." As we have said, the factual findings of the trial court, amply supported by the evidence, make such an argument untenable. "The burden of proof to establish excusable impossibility of performance of a contract is on him who asserts it." Paddock v. Mason, 187 Va. 809, 817, 48 S.E.2d 199, 203 (1948). As the trial court found, Nan Ya was aware of the details of the secrecy agreement from the very first time DeSantis was interviewed in Texas. Armed with this knowledge, Nan Ya employed DeSantis, guaranteeing him employment for five years. Even after a temporary restraining order was entered barring plaintiff's employment with Nan Ya, it continued to pay him, directly or indirectly, until December. Given this knowledge and conduct, we cannot say that the trial court erred in finding that defendant had failed to carry its burden to establish excusable impossibility; indeed, the facts show that Nan Ya is the party which bore the risk that an injunction might make it impossible for plaintiff to work for it. Finally, defendant raises a question of damages. The trial court, enforcing the employment contract, awarded plaintiff judgment in the amount of $73,766.22, for salary up to the date of the March 1986 trial, and further ordered Nan Ya to pay the plaintiff $5,000 per month beginning in April 1986 through March 1989. Nan Ya was ordered to place the sum of $180,000 with a fiduciary to guarantee the payments to DeSantis. Execution on the judgment order has been suspended pending appeal. Defendant contends that, although the trial court allowed defendant credit for wages actually earned by DeSantis, the court erred in refusing to offset against salary due any amounts DeSantis was likely to earn in the future during the remaining term of the contract. We reject this contention. 
As the trial court found, the evidence to support this claim was based on "conjecture" and was purely speculative. For these reasons, the judgment appealed from will be AFFIRMED.
Background {#Sec1} ========== Microarray data analysis is a high throughput method used to gain information about gene functions inside cells. This information is in turn used to detect the presence or absence of disease \[[@CR1]--[@CR3]\], and gain a better understanding of a disease mechanism \[[@CR4]\]. A particularly useful application of microarray technology uses microarray data to detect the presence of disease by combining gene expression levels from a number of genes, to provide information on whether disease is present (classification) or the risk for the occurrence of disease in the future (prediction). While very complex classifiers can be constructed, a number of authors have expressed concern with the "black box" nature of these approaches \[[@CR5]\] preferring simpler more interpretable classifiers in clinical applications \[[@CR6], [@CR7]\]. It is noted that the preference for the latter kind of classifiers should not be at the expense of their performance. Classification involves, at its most fundamental level, a comparison between expression levels in one or more genes between two or more conditions (e.g., disease versus no disease). This comparison can be based on a fairly heuristic criterion (e.g., fold-change in gene expression \[[@CR8]\]), or by using parametric or non-parametric statistical methods \[[@CR9]--[@CR12]\]. There are several advantages and disadvantages with each of these methods. For example, it is biologically plausible that genes with large differential expression levels should be part of a classification criterion. However, the fold-change criterion does not take gene expression variability into account and determining a cutoff is an arbitrary exercise \[[@CR13]\]. On the other hand, parametric statistical methods, which are based on some variant of the t-test, provide some sense of one's confidence on the gene expression difference, but frequently lose the intuitive appeal of heuristic methods like fold-change (e.g., when even small differences are statistically significant). In addition, parametric methods make strong and frequently untenable assumptions regarding the distribution of gene expression levels \[[@CR13]\]. Non-parametric methods, which are based on ranking gene expression levels, are expected to lose some information because of the use of ranks instead of actual gene-expression data. However, such methods are robust to deviations from parametric assumptions \[[@CR13]\], and are less vulnerable to biases arising from data normalization and other pre-processing steps \[[@CR14]\], which are plausibly assumed to be rank-preserving \[[@CR6], [@CR7]\]. The fact that the TSP provides classifiers based on only two genes is also an attractive compromise in the so-called "bias-variance" tradeoff \[[@CR15]\]. As a classifier's performance is a combination of variance (random error) and bias (systematic error), in many cases, high-dimensional classifiers with low bias (due to good performance in the current sample) have large variances (i.e., poor precision) in new samples. By contrast, simpler (and thus more rigid) classifiers, while possibly having higher levels of bias, are less influenced by a specific sample and may have better overall performance (smaller variance) in multiple samples. The simple TSP classifiers, it was hoped, would perform sufficiently well both in the current sample as well as in new samples. 
The TSP is a rank-based classifier in the sense that it uses the rankings of gene expression levels within a gene profile rather than the levels themselves, an approach with significant advantages due to the nonparametric nature of the classification technique. The central idea behind the TSP classifier is that it identifies two genes whose gene expression ranking changes between the two conditions under consideration. This change lends itself to a simple biological interpretation as an inversion of mRNA abundance of the two genes in the two conditions under consideration. The pair of genes selected by the TSP \[[@CR6]\], referred to as the top scoring pair (TSP), is found by the following approach: Consider *G* genes which have been profiled by microarray analysis. Let *n*~1~ be the number of experiments from the first class with expression levels $Y_{i}=\{Y_{i,1}, Y_{i,2}, \cdots, Y_{i,n_{1}}\}$, and let *n*~2~ be the number of experiments from the second class with expression levels $Y_{i}=\{Y_{i,n_{1}+1}, Y_{i,n_{1}+2}, \cdots, Y_{i,n}\}$, where *n*=*n*~1~+*n*~2~. Given a pair of genes (*i*,*j*), 1≤*i*≠*j*≤*G*, the reversal score of the pair was defined in \[[@CR6]\] as

$$\Delta_{ij} = \left| P(Y_{i} > Y_{j} \mid C=1) - P(Y_{i} > Y_{j} \mid C=2)\right| \qquad (1)$$

where *P*(*Y*~*i*~\>*Y*~*j*~\|*C*=*m*) denotes the probability that the expression level of gene *i* is larger than the expression level of gene *j* in samples from class *C*, with *C* being equal to *m*=1,2. The score *Δ*~*ij*~ can be empirically approximated by the expression \[[@CR6]\]

$$D_{ij} =\left|\frac{\sum_{\ell=1}^{n_{1}}{I}_{1}(Y_{i,\ell}>Y_{j,\ell})}{n_{1}} - \frac{\sum_{\ell=n_{1}+1}^{n}{I}_{2}(Y_{i,\ell}>Y_{j,\ell})}{n_{2}}\right| \qquad (2)$$

where index *ℓ* indicates the *ℓ*th subject, 1≤*ℓ*≤*n*, and *I*~*m*~(*Y*~*i*,*ℓ*~\>*Y*~*j*,*ℓ*~)=1 if *Y*~*i*,*ℓ*~\>*Y*~*j*,*ℓ*~ in class *m*=1,2, and 0 otherwise. Obviously, the larger the *Δ*~*ij*~, the higher the probability that the expression levels of genes *i* and *j* have reverse relative rankings in the two groups, and it is exactly this property that is used for classification by the TSP. More specifically, let (*α*,*β*) be the pair of genes that yields the maximum score *Δ*~*αβ*~=max{*Δ*~*ij*~} (referred to as the Top Scoring Pair (TSP) \[[@CR6]\]).
Then the classification is performed as follows: Assume that

$$P(Y_{\alpha} > Y_{\beta} \mid C=1) > P(Y_{\alpha} > Y_{\beta} \mid C=2) \qquad (3)$$

i.e.,

$$\frac{\sum_{\ell=1}^{n_{1}}{I}_{1}(Y_{\alpha,\ell}>Y_{\beta,\ell})}{n_{1}} > \frac{\sum_{\ell=n_{1}+1}^{n}{I}_{2}(Y_{\alpha,\ell}>Y_{\beta,\ell})}{n_{2}} \qquad (4)$$

Then a new subject *s*, whose measured expression levels for genes *α* and *β* are *Y*~*α*,*s*~ and *Y*~*β*,*s*~ respectively, will be classified as belonging to the first class if *Y*~*α*,*s*~\>*Y*~*β*,*s*~, and to the second class otherwise.

The genes in the top scoring pair as selected by the TSP method may have a problem, as Lin et al. \[[@CR5]\] also point out: the selected genes may not be a pair of genuinely up-regulated and down-regulated genes; instead, one of the selected genes in the pair may happen to serve only as a reference or "pivot" gene, which can lead to a high TSP score but a rather non-informative gene pair. Most researchers have used more complicated methods or selected more features in order to overcome the mentioned problems. In the proposed method we employ a simple statistic associated with the Receiver Operating Characteristic (ROC) curve that is commonly known as the Area Under the ROC curve (AUROC) or the Area Under the Curve (AUC), for short. The ROC curve, and the AUC in particular, have been widely used as a measure for microarray classification and other medical diagnostic tests (see, e.g., \[[@CR16]--[@CR23]\]). The proposed method, referred to as AUCTSP (AUC-based TSP), uses similar ideas as the TSP, thus benefiting from the simplicity of the TSP approach, but enhances TSP by making the resulting classifier less prone to overfitting, achieving higher classification accuracy and avoiding the selection of pivot genes as members of the top scoring pair of genes.

Methods {#Sec2}
=======

In this manuscript we propose the AUCTSP, a classifier that works according to the same principle as TSP but differs from the latter in that the probabilities that determine the top scoring pair are computed based on the relative rankings of the two marker genes across all subjects instead of within each individual subject. Although the classification is still done on an individual-subject basis, consideration of all subject data in the estimation of the ranking reversals results in a classifier with higher accuracy. This performance superiority of AUCTSP over TSP is demonstrated through simulations and case studies (see the "Results" section) involving classification in ovarian, leukemia, colon, prostate and breast cancers and diffuse large B-cell lymphoma.

The proposed AUCTSP classifier {#Sec3}
------------------------------

The score that TSP computes is based on the probability *P*(*Y*~*i*~\>*Y*~*j*~\|*C*=*m*) that the expression level of gene *i* is larger than the expression level of gene *j* in samples from the *m*-th class, *m*=1,2.
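To make the TSP procedure concrete, the following is a minimal Python sketch of the score in Eq. (2) and the classification rule in Eqs. (3)-(4). The brute-force pair search and all function names are ours for illustration only; the authors' own implementation (in C, as noted in the Results section) is not reproduced here.

```python
import numpy as np

def tsp_score(X1, X2, i, j):
    """Empirical reversal score D_ij of Eq. (2).

    X1, X2: (genes x samples) expression matrices for class 1 and class 2.
    """
    p1 = np.mean(X1[i] > X1[j])  # fraction of class-1 subjects with Y_i > Y_j
    p2 = np.mean(X2[i] > X2[j])  # fraction of class-2 subjects with Y_i > Y_j
    return abs(p1 - p2)

def top_scoring_pair(X1, X2):
    """Exhaustive search for the pair (alpha, beta) maximizing D_ij."""
    G = X1.shape[0]
    best_score, best_pair = -1.0, None
    for i in range(G):
        for j in range(i + 1, G):
            s = tsp_score(X1, X2, i, j)
            if s > best_score:
                best_score, best_pair = s, (i, j)
    return best_pair, best_score

def tsp_classify(sample, pair, X1, X2):
    """Classify a new expression profile using the rule following Eq. (4)."""
    a, b = pair
    # Orient the pair so that Y_a > Y_b is the more frequent event in class 1.
    if np.mean(X1[a] > X1[b]) < np.mean(X2[a] > X2[b]):
        a, b = b, a
    return 1 if sample[a] > sample[b] else 2
```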
This probability was approximated in \[[@CR6]\] by the proportion of individuals of class *m* with higher expression level in gene *i* than in gene *j* out of all individuals in class *m*, i.e., by the probability

$$P_{\text{TSP}}(Y_{i} > Y_{j} \mid C=m)=\frac{\sum_{\ell=1}^{n_{m}}{I}_{m}(Y_{i,\ell}>Y_{j,\ell})}{n_{m}} \qquad (5)$$

We propose to approximate the original probability *P*(*Y*~*i*~\>*Y*~*j*~\|*C*=*m*) by the probability that a randomly chosen individual from class *m* has an expression level for gene *i* that is larger than that of a randomly chosen individual from class *m* (*m*=1,2) for gene *j*. The estimate of the original probability *P*(*Y*~*i*~\>*Y*~*j*~\|*C*=*m*) in the proposed AUCTSP method is given by

$$P_{\text{AUCTSP}}(Y_{i}>Y_{j} \mid C=m)= \frac{\sum_{k=1}^{n_{m}}\sum_{\ell=1}^{n_{m}}{I}(Y_{i,k}>Y_{j,\ell})}{n_{m}^{2}} \qquad (6)$$

The numerator in Eq. (6) denotes the sum over all samples *k*, 1≤*k*≤*n*~*m*~, of the number of times that the expression level of gene *i* in sample *k* is larger than the expression level of gene *j* in some other sample *ℓ*≠*k*, 1≤*ℓ*≤*n*~*m*~, from the same class *m* (*m*=1 or 2). The probability *P*~AUCTSP~ can be calculated by the Area Under the ROC Curve (AUC) \[[@CR23]\]. The AUC statistic has been used extensively in diagnostic test validation \[[@CR18]--[@CR20], [@CR22], [@CR23]\] and gene feature selection \[[@CR21]\] in two-group settings. In our case here, group 1 is taken to be the set of expression levels of gene *i* in class *m*, and group 2 is taken to be the set of expression levels of gene *j* in *the same* class *m*. It is well established that, for independent samples, the AUC statistic is the minimum-variance unbiased estimate of *P*(*X*\>*Y*) \[[@CR24]\]. In correlated samples (as we have here, since the gene expression levels are measured on the same individual *i*=1,2,⋯,*n*~*m*~ for *m*=1,2), it is expected that *P*~AUCTSP~ is still an unbiased estimate of *P*(*X*\>*Y*) and should generate more precise estimates of the probability *P*(*Y*~*i*~\>*Y*~*j*~\|*C*=*m*) compared to *P*~TSP~, unless the correlation of gene expression levels between genes *i* and *j* in the same individual is too high (thus leading to an inflated variance of the AUC-based estimator). In addition, the AUCTSP classifier, which is based on a summary measure derived from *all* subjects (compared to the single-subject approach in the TSP), has the potential to yield a top scoring pair that is less susceptible to the specific training data, thus further avoiding overfitting compared to the TSP. The better performance of AUCTSP is corroborated by our experimental results.
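The estimator in Eq. (6) is simply the Mann-Whitney form of the AUC computed between the expression vectors of genes *i* and *j* within one class. A minimal sketch (our own illustration, ignoring ties; note that as written in Eq. (6) the diagonal terms *k*=*ℓ* are included):

```python
import numpy as np

def p_auctsp(Yi, Yj):
    """Estimate P(Y_i > Y_j | C = m) as in Eq. (6): the fraction of all n_m^2
    cross-subject comparisons in which gene i's level in one subject exceeds
    gene j's level in another subject of the same class."""
    Yi, Yj = np.asarray(Yi, float), np.asarray(Yj, float)
    return float(np.mean(Yi[:, None] > Yj[None, :]))  # n_m x n_m comparison matrix

def auctsp_score(X1, X2, i, j):
    """AUCTSP analogue of the reversal score, built from Eq. (6)."""
    return abs(p_auctsp(X1[i], X1[j]) - p_auctsp(X2[i], X2[j]))

# Toy check: only 1 of the 25 cross-subject pairs has the first gene higher.
print(p_auctsp([10, 12, 15, 17, 19], [20, 23, 25, 27, 18]))  # 0.04
```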
We highlight the following two points about our use of the AUC statistic in the proposed method: (i) the AUC statistic is traditionally applied on two groups one of which is the "healthy" and the other one the "diseased," whereas in our method we apply it on gene expression profiles from the same ("healthy" or "diseased") group; (ii) although the *P*~AUCTSP~ is obtained from *all* subjects, the classification rule that we obtain in the AUCTSP classifier is still applied on the expression levels of the marker genes from the *same* single subject, exactly as in the TSP classifier.

To elucidate the intuition behind the AUCTSP classifier, consider the following example. Assume that the expression levels of a gene *A* for 5 healthy subjects are as given in Table 1. The probability that the expression level of *A* is less than the level of *B* in the healthy subjects is 5/5 = 1 while the probability that the level of *A* is less than the level of *B* in the diseased subjects is 0, yielding an overall TSP score $D^{\text{TSP}}_{AB}= 1$. Contrast the above with the situation involving two other genes, *C* and *D* (Table 1). The probability that the expression level of *C* is less than the level of *D* in the healthy subjects is 4/5 = 0.8, while the probability that the expression level of *C* is less than the level of *D* in the diseased subjects is 1/5 = 0.2. This yields an overall TSP score $D^{\text{TSP}}_{CD}=0.6$, which is less than the score of pair *A* and *B*, and consequently the pair *C* and *D* would be discarded by the TSP. However, the *distributions* of the expression levels of *C* and *D* in the healthy (and the diseased) subjects exhibit greater separation than those for *A* and *B* and thus, using genes *C* and *D* for classification is arguably preferable.

Table 1: Gene expression levels in two genes

| Gene *A* (Healthy) | Gene *B* (Healthy) | Gene *A* (Diseased) | Gene *B* (Diseased) | Gene *C* (Healthy) | Gene *D* (Healthy) | Gene *C* (Diseased) | Gene *D* (Diseased) |
|---|---|---|---|---|---|---|---|
| 11 | 12 | 32 | 31 | 10 | 20 | 42 | 31 |
| 21 | 22 | 34 | 33 | 12 | 23 | 43 | 33 |
| 23 | 24 | 36 | 35 | 15 | 25 | 45 | 35 |
| 25 | 26 | 38 | 37 | 17 | 27 | 47 | 37 |
| 27 | 28 | 40 | 39 | 19 | 18 | 39 | 41 |

The above intuitive preference for pair (*C*,*D*) is supported by the score derived for these two genes according to the proposed AUCTSP approach. The non-parametric estimate of the AUC for pair (*C*, *D*) on the healthy subjects is 24/25 = 0.96, and on the diseased subjects it is 1/25 = 0.04.
This yields an overall AUCTSP score of $D^{\text{AUCTSP}}_{CD}=0.92$, while the corresponding AUCTSP score for the (*A*,*B*) gene pair is $D^{\text{AUCTSP}}_{AB}=15/25 - 10/25 = 0.2$ and, therefore, the (*C*,*D*) pair is preferred over (*A*,*B*) by the proposed approach. We note here that the claim about the greater separation of the gene expression distributions is not based in any way on the actual values of the data, only on their ranking. This in turn means that the proposed method will be robust in selecting the top scoring pair and will not be affected by outliers in the gene expression data and will also be invariable to any rank-preserving normalization technique.

Results {#Sec4}
=======

The AUCTSP classifier was implemented in the C programming language. The evaluation of the methodology was based on (i) simulations and (ii) case studies.

Simulations {#Sec5}
-----------

We compared the estimations given by TSP (Eq. (5)) and AUCTSP (Eq. (6)) for the probability *P*(*X*\>*Y*) involved in the computation of the TSP and AUCTSP scores. We generated random expression levels for "genes" *X* and *Y* from normal distributions with different combinations of mean *μ* and deviation *σ* for different sample sizes, where *μ*~*X*~ is greater than or equal to *μ*~*Y*~ in all of the simulated cases. In this case, the probability *P*(*X*\>*Y*) is given by the detectability index *A*~*z*~ defined by Metz et al. \[[@CR22]\] as:

$$A_{z} = P(X > Y) = \Phi\left(\frac{\frac{|\mu_{X} - \mu_{Y}|}{\sigma_{X}}}{\sqrt{1 +\left(\frac{\sigma_{Y}}{\sigma_{X}}\right)^{2}}}\right) \qquad (7)$$

where *Φ*(⋅) denotes the cumulative distribution function (CDF) of the standard normal distribution and *μ*~*X*~, *σ*~*X*~, and *μ*~*Y*~, *σ*~*Y*~ denote the mean and standard deviation of the assumed normal distributions for *X* and *Y*, respectively. The cases chosen for comparison are two normal distributions with:

(i) small means (*μ*~*X*~=1, *μ*~*Y*~=0) with small variances (*σ*~*X*~=1, *σ*~*Y*~=1);
(ii) small means (*μ*~*X*~=1, *μ*~*Y*~=0) with large variances (*σ*~*X*~=3, *σ*~*Y*~=3);
(iii) large means (*μ*~*X*~=5, *μ*~*Y*~=0) with small variances (*σ*~*X*~=1, *σ*~*Y*~=1);
(iv) large means (*μ*~*X*~=5, *μ*~*Y*~=0) with large variances (*σ*~*X*~=3, *σ*~*Y*~=3);
(v) equal small means (*μ*~*X*~=1, *μ*~*Y*~=1) with a small variance for one distribution (*σ*~*X*~=1) and a large variance for the other distribution (*σ*~*Y*~=3);
(vi) equal large means (*μ*~*X*~=5, *μ*~*Y*~=5) with a small variance for one distribution (*σ*~*X*~=1) and a large variance for the other distribution (*σ*~*Y*~=3).

The results for different sample sizes *N*=10,20,30,40 are shown in Table 2.
Columns 4 and 5 show the estimates of probability *P*(*X*\>*Y*) obtained by TSP and AUCTSP over 1000 random trials. The theoretical probability *A*~*z*~=*P*(*X*\>*Y*) (see Eq. (7)) is shown in the last column. In bold, we show the value that is closer to the theoretical value *A*~*z*~. As can be seen, for the cases where both simulated gene expression distributions have equal variances (cases i-iv), the AUCTSP and TSP estimates are virtually identical and are very close to the theoretical probability even for small sample sizes. In the two cases where the variance in one of the genes is greater (cases v-vi), both estimators do poorly for small sample size *N* and improve with increasing *N*, but the AUCTSP is always closer to the target quantity *A*~*z*~.

Table 2: Simulation results on estimation of *P*(*X*\>*Y*) by TSP and AUCTSP. The estimates of *P*(*X*\>*Y*) closer to *A*~*z*~ are marked in bold.

| Gene X | Gene Y | *N* | TSP | AUCTSP | *A*~*z*~ |
|---|---|---|---|---|---|
| N(1,1) | N(0,1) | 10 | 0.763 | **0.762** | 0.760 |
| | | 20 | 0.762 | **0.761** | 0.760 |
| | | 30 | 0.759 | **0.760** | 0.760 |
| | | 40 | 0.759 | **0.760** | 0.760 |
| N(1,3) | N(0,3) | 10 | 0.595 | **0.594** | 0.592 |
| | | 20 | 0.594 | **0.593** | 0.592 |
| | | 30 | 0.594 | **0.593** | 0.592 |
| | | 40 | 0.593 | **0.592** | 0.592 |
| N(5,1) | N(0,1) | 10 | **0.998** | **0.998** | 0.999 |
| | | 20 | **0.998** | **0.998** | 0.999 |
| | | 30 | **0.998** | **0.998** | 0.999 |
| | | 40 | **0.998** | **0.998** | 0.999 |
| N(5,3) | N(0,3) | 10 | 0.883 | **0.882** | 0.878 |
| | | 20 | 0.881 | **0.880** | 0.878 |
| | | 30 | 0.880 | **0.879** | 0.878 |
| | | 40 | 0.880 | **0.879** | 0.878 |
| N(1,1) | N(1,3) | 10 | 0.619 | **0.610** | 0.500 |
| | | 20 | 0.587 | **0.581** | 0.500 |
| | | 30 | 0.572 | **0.564** | 0.500 |
| | | 40 | 0.563 | **0.557** | 0.500 |
| N(5,1) | N(5,3) | 10 | 0.616 | **0.610** | 0.500 |
| | | 20 | 0.585 | **0.575** | 0.500 |
| | | 30 | 0.570 | **0.563** | 0.500 |
| | | 40 | 0.559 | **0.554** | 0.500 |

Next, we compared the capability of TSP and AUCTSP to identify the single informative pair of genes in the midst of other non-informative genes. For this purpose, we generated random normal expression levels for *N* "genes" from *n*~1~ "healthy" individuals and *n*~2~ "diseased" individuals, for all combinations of *N*=100, 200 and *n*~1~=*n*~2~=20, 40. In all these simulations the genes numbered 1 and 2 carry the differentiating information between the healthy and diseased groups, represented by normal distributions (NH(⋅) for the "healthy" and ND(⋅) for the "diseased") that are different from N(0,1), as shown in Table 3. All remaining genes other than 1 and 2 have expression levels obtained from the same "non-informative" distribution N(0,1). The efficacy of each classifier is measured by how many times it is able to identify the pair of genes (1,2) as the top scoring pair. The results (as averages over 1000 simulations) are shown in Table 3. The rows correspond to cases exploring the effect of increasing variance and increasing differences in the means of the expression level distributions. As can be observed, the AUCTSP consistently outperforms the TSP, in some cases dramatically, even for small sample sizes.
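The first simulation can be sketched as follows. This is our reconstruction, not the authors' code: the trial count and distributional settings follow the text, and the denominator of Eq. (7) is simplified algebraically to $\sqrt{\sigma_X^2+\sigma_Y^2}$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def a_z(mu_x, sd_x, mu_y, sd_y):
    """Detectability index of Eq. (7): theoretical P(X > Y) for independent normals."""
    return norm.cdf(abs(mu_x - mu_y) / np.sqrt(sd_x**2 + sd_y**2))

def compare_estimators(mu_x, sd_x, mu_y, sd_y, n, trials=1000):
    """Average the TSP (Eq. 5) and AUCTSP (Eq. 6) estimates of P(X > Y)."""
    tsp, auc = 0.0, 0.0
    for _ in range(trials):
        x = rng.normal(mu_x, sd_x, n)
        y = rng.normal(mu_y, sd_y, n)
        tsp += np.mean(x > y)                    # paired, subject-by-subject
        auc += np.mean(x[:, None] > y[None, :])  # all n^2 cross-comparisons
    return tsp / trials, auc / trials, a_z(mu_x, sd_x, mu_y, sd_y)

# Case (i): N(1,1) vs N(0,1) with N = 10; both estimates should approach 0.760.
print(compare_estimators(1, 1, 0, 1, 10))
```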
Table 3: Simulation results for the ability of AUCTSP and TSP to identify the most informative gene pair

| Gene 1 | Gene 2 | TSP (*N*=100, *n*~1~=*n*~2~=20) | AUCTSP (*N*=100, *n*~1~=*n*~2~=20) | TSP (*N*=100, *n*~1~=*n*~2~=40) | AUCTSP (*N*=100, *n*~1~=*n*~2~=40) | TSP (*N*=200, *n*~1~=*n*~2~=20) | AUCTSP (*N*=200, *n*~1~=*n*~2~=20) | TSP (*N*=200, *n*~1~=*n*~2~=40) | AUCTSP (*N*=200, *n*~1~=*n*~2~=40) |
|---|---|---|---|---|---|---|---|---|---|
| NH(0,1), ND(1,1) | NH(1,1), ND(0,1) | 23.4 | 51.2 | 58.8 | 93.2 | 15.4 | 39.8 | 45.4 | 89.7 |
| NH(-1,1), ND(1,1) | NH(1,1), ND(-1,1) | 69.1 | 98.9 | 97.7 | 99.9 | 57.8 | 97.2 | 94.0 | 99.9 |
| NH(-2,1), ND(2,1) | NH(2,1), ND(-2,1) | 91.6 | 99.9 | 97.6 | 99.9 | 92.7 | 99.8 | 95.7 | 99.9 |
| NH(-2,2), ND(2,2) | NH(2,2), ND(-2,2) | 48.2 | 93.2 | 80.2 | 99.9 | 38.3 | 91.4 | 71.4 | 99.9 |

Case studies {#Sec6}
------------

We evaluated the performance of the AUCTSP classifier over the TSP classifier in 8 publicly available datasets:

(i) Ovarian Cancer (Pepe et al., 2003 \[[@CR17]\]) dataset, which consists of 1536 genes with expression levels from 23 healthy and 30 diseased subjects;
(ii) Acute Leukemia (Golub et al., 1999 \[[@CR25]\]) dataset, which consists of 3571 human genes with expression levels from 25 cases of acute myeloid (aka myelogenous) leukemia (AML) and 47 cases from acute lymphoblastic (aka lymphocytic) leukemia;
(iii) Breast Cancer - Estrogen Receptor (ER) status (West et al., 2001 \[[@CR26]\]) dataset, which consists of the expression levels of 7129 genes in 49 tissues separated into two groups of 25 positive and 24 negative tissues based on the estrogen receptor (ER) status;
(iv) Breast Cancer - Lymph Node (LN) status (West et al., 2001 \[[@CR26]\]) dataset, which consists of the expression levels of 7129 genes in 49 tissues separated into two groups of 24 positive and 25 negative tissues based on the lymph node (LN) status;
(v) Diffuse Large B-Cell Lymphoma (DLBCL) to predict patient outcome (Alizadeh et al., 2000 \[[@CR27]\]) dataset, which consists of the expression levels of 7129 genes in 32 cured samples and 26 fatal or refractory disease samples;
(vi) DLBCL versus Follicular Lymphoma (FL) (Alizadeh et al., 2000 \[[@CR27]\]) dataset, which consists of the expression levels of 7129 genes in 58 DLBCL samples and 19 FL samples;
(vii) Colon Cancer (Alon et al., 1999 \[[@CR28]\]) dataset, which consists of the expression levels of 2000 genes from 40 subjects diagnosed with colon cancer and 22 healthy subjects;
(viii) Prostate Cancer (Singh et al., 2002 \[[@CR29]\]) dataset, which consists of the expression levels of 12533 genes from 52 subjects diagnosed with prostate cancer and 50 healthy subjects.

Top scoring pairs selected by TSP and AUCTSP {#Sec7}
--------------------------------------------

For each of these datasets, we applied AUCTSP and TSP and identified the top-scoring pairs obtained by AUCTSP and TSP. The selected pairs of genes are shown in Table 4 and the gene legend is shown in Table 5.
Table 4: Top scoring pairs of genes under TSP and AUCTSP

| Dataset | Method | Gene pair | TSP score | AUCTSP score |
|---|---|---|---|---|
| OVARIAN | TSP | [PKM2, OVGP1] | 0.900 | 0.675 |
| | AUCTSP | [IRS1, OVGP1] | 0.833 | 0.826 |
| LEUKEMIA | TSP | [SPTAN1, CD33]\* | 0.979 | 0.938 |
| | TSP | [ARHGAP45, ZYX] | 0.979 | 0.770 |
| | TSP | [PCDHGC3, ZYX] | 0.979 | 0.855 |
| | AUCTSP | [SPTAN1, CD33] | 0.979 | 0.938 |
| BREAST-ER | TSP | [MUC2, ESR1]\* | 0.918 | 0.812 |
| | TSP | [JAK3, ESR1] | 0.918 | 0.791 |
| | TSP | [GNB3, ESR1] | 0.918 | 0.804 |
| | TSP | [HARS2, ESR1] | 0.918 | 0.834 |
| | TSP | [ERF, ESR1] | 0.918 | 0.822 |
| | AUCTSP | [CTSC, ESR1] | 0.878 | 0.891 |
| BREAST-LN | TSP | [BP1CR, GYPB] | 0.838 | 0.675 |
| | AUCTSP | [BP1CR, KRT31] | 0.717 | 0.765 |
| | TSP | [FABP3, ACVR1B]\*\* | 0.716 | 0.531 |
| | AUCTSP | [GYPB, ACVR1B]\*\* | 0.633 | 0.615 |
| DLBCL | TSP | [PDE4B, GPR12] | 0.596 | 0.414 |
| | AUCTSP | [POLR2J, PTGER4] | 0.341 | 0.46 |
| DLBCL-FL | TSP | [YWHAZ, SNRPB] | 0.983 | 0.727 |
| | AUCTSP | [FCGR1A, NEO1] | 0.759 | 0.83 |
| COLON | TSP | [VIP, DARS] | 0.879 | 0.637 |
| | AUCTSP | [MYH9, HNRNPA1] | 0.759 | 0.724 |
| PROSTATE | TSP | [CFD, ENO1] | 0.901 | 0.693 |
| | AUCTSP | [CFD, NUMB] | 0.882 | 0.883 |

\* indicates the selected TSP gene pair by \[[@CR7]\] to break the tie for pairs with equal TSP scores. \*\* indicates the selected pair of genes by TSP and AUCTSP after removing the genetically modified gene BP1CR (see \[[@CR32], [@CR33]\]) from the dataset.

Table 5: Gene legend

| Dataset | Gene ID | Gene acronym | Gene description |
|---|---|---|---|
| OVARIAN | g47 | IRS1 | Insulin Receptor Substrate 1 |
| | g93 | OVGP1 | Oviductal Glycoprotein 1 |
| | g1202 | PKM2 | Pyruvate Kinase, Muscle |
| LEUKEMIA | D86976 | ARHGAP45 | Rho GTPase Activating Protein 45 |
| | J05243 | SPTAN1 | Spectrin Alpha, Non-Erythrocytic 1 |
| | L11373 | PCDHGC3 | Protocadherin Gamma Subfamily C, 3 |
| | M23197 | CD33 | CD33 Molecule |
| | X95735 | ZYX | Zyxin |
| BREAST-ER | L21998 | MUC2 | Mucin 2 |
| | U09607 | JAK3 | Janus Kinase 3 |
| | U15655 | ERF | ETS2 Repressor Factor |
| | U18937 | HARS2 | Histidyl-tRNA Synthetase 2, Mitochondrial |
| | U47931 | GNB3 | G Protein Subunit Beta 3 |
| | X03635 | ESR1 | Estrogen Receptor 1 |
| | X87212 | CTSC | Cathepsin C |
| BREAST-LN | AFFX-CreX-3 | BP1CR | Bacteriophage P1 Cre Recombinase |
| | X82634 | KRT31 | Keratin 31 |
| | J02982 | GYPB | Glycophorin B |
| | M18079 | FABP3 | Fatty Acid Binding Protein 3 |
| | X15357 | ACVR1B | Activin A Receptor Type 1B |
| DLBCL | K03008 | POLR2J | RNA Polymerase II Subunit J |
| | L20971 | PDE4B | Phosphodiesterase 4B |
| | L28175 | PTGER4 | Prostaglandin E Receptor 4 |
| | U18548 | GPR12 | G Protein-Coupled Receptor 12 |
| DLBCL-FL | D78134 | YWHAZ | Tyrosine 3-Monooxygenase/Tryptophan 5-Monooxygenase Activation Protein Zeta |
| | M63835 | FCGR1A | Fc Fragment Of IgG Receptor Ia |
| | U61262 | NEO1 | Neogenin 1 |
| | X17567 | SNRPB | Small Nuclear Ribonucleoprotein Polypeptides B and B1 |
| COLON | Hsa.37937 | MYH9 | Myosin Heavy Chain 9 |
| | Hsa.8010 | HNRNPA1 | Heterogeneous Nuclear Ribonucleoprotein A1 |
| | Hsa.2097 | VIP | Vasoactive Intestinal Peptide |
| | Hsa.601 | DARS | Aspartyl-tRNA Synthetase |
| PROSTATE | 40282_s_at | CFD | Complement Factor D |
| | 2035_s_at | ENO1 | Enolase 1 |
| | 37693_at | NUMB | NUMB, Endocytic Adaptor Protein |

Table 4 also reports (for informational purposes) the score that the selected pair by TSP and AUCTSP receives under the opposite classifier (AUCTSP and TSP, respectively). For example, the pair selected by TSP for the ovarian cancer dataset has a TSP score of 0.9 but it receives a score of 0.675 under AUCTSP, whereas the AUCTSP score of the pair selected by AUCTSP is 0.826, while the score given to it by TSP is 0.833. This shows that pairs selected by TSP may have significantly lower scores under AUCTSP. The biological relevance of the selected genes was found by consulting the GENECARDS database \[[@CR30]\] and the VarElect NGS Phenotyper \[[@CR31]\]. All of the genes identified by AUCTSP have been reported in the existing literature to be indeed related to the corresponding disease, whereas some of the genes identified by TSP, such as DARS for colon cancer, have not been reported to be related.
A full description of the biological findings on the genes selected by AUCTSP and TSP is given in the Additional file 1. The histograms of the selected genes are also given in the Additional file 2. We also note that for the datasets examined, AUCTSP resulted in no ties, whereas TSP frequently selected multiple pairs of genes having the same highest TSP score (3 such pairs in the Leukemia dataset and 5 pairs in the Breast-ER dataset). We have identified the gene pair ultimately chosen by the TSP after applying the tie-breaking rule proposed by Geman et al. \[[@CR6]\] with an asterisk ("\*") in Table 4. (For the case of the Breast-LN dataset, both the AUCTSP and TSP resulted in selecting a genetically modified gene ("Bacteriophage P1 Cre recombinase") \[[@CR32], [@CR33]\] as a member of the top-scoring pair. The pairs of genes selected by the AUCTSP and TSP after eliminating this gene from the dataset are marked with ("\*\*") in Table 4.)

Furthermore, in order to check how far the selected genes (by either method) are from being non-informative "pivot" genes, we computed for each gene *g* the probability *P*~*g*~=*P*(*g*∈*C*~1~\>*g*∈*C*~2~) that the expression levels of *g* in class *C*~1~ are greater than the expression levels of *g* in class *C*~2~, where *C*~1~,*C*~2~ are the two classes in the corresponding dataset. A value of *P*~*g*~ close to 0.5 means that the gene is strongly non-informative. A value of *P*~*g*~ close to 1 or close to 0 means that the gene is strongly informative. For the case where the value of *P*~*g*~ is close to 0, we can simply invert the ROC curve to compute the probability *P*(*g*∈*C*~1~\<*g*∈*C*~2~), so that all informative genes are indicated by values of *P*~*g*~ close to 1. The computation of *P*~*g*~ was done by computing the AUC of the ROC curve corresponding to the expression values of gene *g* in classes *C*~1~ and *C*~2~. The results are shown in Table 6. The *P*~*g*~ values for each member of a selected pair are shown in column 4, whereas column 5 shows the corresponding values $\hat{P}_{g}$ if the ROC curve has to be inverted so that values closer to 1 indicate more informative genes. As can be seen, the genes selected by AUCTSP have better deviation from the 0.5 value of a non-informative gene in almost every case.
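The pivot-gene check reduces to a single between-class AUC per gene. A minimal sketch (ours, ignoring ties):

```python
import numpy as np

def pivot_check(expr_c1, expr_c2):
    """Return (P_g, P_g-hat): the AUC of gene g's expression between classes
    C1 and C2, and its inverted-ROC counterpart so that informative genes
    score near 1 while "pivot" genes stay near 0.5."""
    e1 = np.asarray(expr_c1, float)
    e2 = np.asarray(expr_c2, float)
    p_g = float(np.mean(e1[:, None] > e2[None, :]))  # P(g in C1 > g in C2)
    return p_g, max(p_g, 1.0 - p_g)
```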
Table 6: Deviation of the genes selected by TSP and AUCTSP from the non-informative "pivot" gene

| Dataset | Method | Gene pair | ($P_{g_{1}}$, $P_{g_{2}}$) | ($\hat{P}_{g_{1}}$, $\hat{P}_{g_{2}}$) |
|---|---|---|---|---|
| OVARIAN | TSP | (PKM2, OVGP1) | (0.16, 0.03) | (0.84, 0.97) |
| | AUCTSP | (IRS1, OVGP1) | (0.84, 0.03) | (0.84, 0.97) |
| LEUKEMIA | TSP | (SPTAN1, CD33)\* | (0.05, 0.99) | (0.95, 0.99) |
| | TSP | (ARHGAP45, ZYX) | (0.61, 0.02) | (0.61, 0.98) |
| | TSP | (PCDHGC3, ZYX) | (0.63, 0.02) | (0.63, 0.98) |
| | AUCTSP | (SPTAN1, CD33) | (0.95, 0.01) | (0.95, 0.99) |
| BREAST-ER | TSP | (MUC2, ESR1)\* | (0.72, 0.04) | (0.72, 0.96) |
| | TSP | (JAK3, ESR1) | (0.66, 0.04) | (0.66, 0.96) |
| | TSP | (GNB3, ESR1) | (0.56, 0.04) | (0.56, 0.96) |
| | TSP | (HARS2, ESR1) | (0.57, 0.04) | (0.57, 0.96) |
| | TSP | (ERF, ESR1) | (0.58, 0.04) | (0.58, 0.96) |
| | AUCTSP | (CTSC, ESR1) | (0.91, 0.04) | (0.91, 0.96) |
| BREAST-LN | TSP | (FABP3, ACVR1B) | (0.60, 0.69) | (0.60, 0.69) |
| | AUCTSP | (GYPB, ACVR1B) | (0.14, 0.69) | (0.86, 0.69) |
| DLBCL | TSP | (PDE4B, GPR12) | (0.73, 0.32) | (0.73, 0.68) |
| | AUCTSP | (POLR2J, PTGER4) | (0.30, 0.72) | (0.70, 0.72) |
| DLBCL-FL | TSP | (YWHAZ, SNRPB) | (0.80, 0.10) | (0.80, 0.90) |
| | AUCTSP | (FCGR1A, NEO1) | (0.06, 0.84) | (0.94, 0.84) |
| COLON | TSP | (VIP, DARS) | (0.82, 0.16) | (0.82, 0.84) |
| | AUCTSP | (MYH9, HNRNPA1) | (0.89, 0.24) | (0.89, 0.76) |
| PROSTATE | TSP | (CFD, ENO1) | (0.91, 0.27) | (0.91, 0.73) |
| | AUCTSP | (CFD, NUMB) | (0.91, 0.04) | (0.91, 0.96) |

\* indicates the selected TSP gene pair by \[[@CR7]\] to break the tie for pairs with equal TSP scores.

Classifier performance of AUCTSP vs. TSP {#Sec8}
----------------------------------------

We also compared the performance of the proposed AUCTSP classifier vs. the TSP classifier in terms of accuracy for predicting the correct status of subjects in a "testing" set after the classification rule (i.e., the top-scoring pair and its associated probabilities under AUCTSP and TSP, respectively) is obtained from a "training" set. For each of the eight datasets in our case study, we generated several training sets and testing sets, by randomly picking a percentage *p* of subjects to form the training set and using the remaining *q*=1−*p* percentage of subjects as the testing set, for different values of *q*=1%, 5%, 10%, 15%, 20%, 25%, 30%. The actual size of the testing set was set to ⌈*N*·*q*⌉, where *N* is the size of the dataset, and the size of the training set was set to *N*−⌈*N*·*q*⌉. Our intention was to see how AUCTSP and TSP behave as the training set decreases, i.e., how well AUCTSP and TSP can "generalize" their classification rule. Each test was repeated for 1000 trials and the average of the classifier accuracy (i.e., the ratio of the sum of the true positive and true negative test cases identified by the classification rule obtained from the training set over the total number of test cases) was calculated over these trials for each training set. The results for increasing sizes of test sets (equivalently, decreasing sizes of training sets) as percentages of subjects left out from the original dataset are shown in Table 7. The plot representations of the results listed in Table 7 are given in Figs. 1-8.
These results show that the AUCTSP method performs better in terms of classification accuracy than the TSP method. The results indicate that the AUCTSP classifier is indeed able to identify useful marker genes from small training sets, in accordance with the "generalization" capability of the AUC statistic.

Fig. 1: Comparison of TSP vs. AUCTSP classification accuracy for different sizes of training sets: OVARIAN dataset

Fig. 2: Comparison of TSP vs. AUCTSP classification accuracy for different sizes of training sets: COLON dataset

Fig. 3: Comparison of TSP vs. AUCTSP classification accuracy for different sizes of training sets: LEUKEMIA dataset

Fig. 4: Comparison of TSP vs. AUCTSP classification accuracy for different sizes of training sets: BREAST-LN dataset

Fig. 5: Comparison of TSP vs. AUCTSP classification accuracy for different sizes of training sets: BREAST-ER dataset

Fig. 6: Comparison of TSP vs. AUCTSP classification accuracy for different sizes of training sets: DLBCL-FL dataset

Fig. 7: Comparison of TSP vs. AUCTSP classification accuracy for different sizes of training sets: DLBCL dataset

Fig. 8: Comparison of TSP vs. AUCTSP classification accuracy for different sizes of training sets: PROSTATE dataset

Table 7: Comparison of classifier accuracy (%) by TSP and AUCTSP for decreasing size of training set

| Test set fraction | OVARIAN TSP | OVARIAN AUCTSP | LEUKEMIA TSP | LEUKEMIA AUCTSP | COLON TSP | COLON AUCTSP | BREAST-LN TSP | BREAST-LN AUCTSP | BREAST-ER TSP | BREAST-ER AUCTSP | DLBCL TSP | DLBCL AUCTSP | DLBCL-FL TSP | DLBCL-FL AUCTSP | PROSTATE TSP | PROSTATE AUCTSP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1% | 87.18 | 93.39 | 97.89 | 97.89 | 88.98 | 96.59 | 89.76 | 94.66 | 84.26 | 91.07 | 78.50 | 78.88 | 95.80 | 99.30 | 91.90 | 91.90 |
| 5% | 87.48 | 89.43 | 96.02 | 96.12 | 84.45 | 92.45 | 86.03 | 89.35 | 75.40 | 84.11 | 78.20 | 78.50 | 91.46 | 96.23 | 90.70 | 90.50 |
| 10% | 77.43 | 82.78 | 91.64 | 92.27 | 76.76 | 95.01 | 89.76 | 94.66 | 84.26 | 91.06 | 77.20 | 78.02 | 83.18 | 92.49 | 81.34 | 80.37 |
| 15% | 76.96 | 79.7 | 88.2 | 90.9 | 72.71 | 73.02 | 77.85 | 78.6 | 65.84 | 75.07 | 72.84 | 76.73 | 83.02 | 87.57 | 79.10 | 79.50 |
| 20% | 70.71 | 73.95 | 84.32 | 89.1 | 61.39 | 79.15 | 86.03 | 89.35 | 75.39 | 84.10 | 69.23 | 75.35 | 71.30 | 75.45 | 68.70 | 76.06 |
| 25% | 72.2 | 76.6 | 81.27 | 87 | 53.75 | 67.65 | 82.05 | 85.48 | 71.20 | 80.80 | 66.79 | 72.11 | 66.87 | 67.14 | 63.30 | 74.35 |
| 30% | 61.15 | 80.38 | 77.53 | 81.1 | 41.38 | 42.39 | 77.85 | 78.6 | 65.84 | 75.06 | 63.41 | 72.13 | 67.35 | 66.74 | 53.30 | 60.7 |

Discussion {#Sec9}
==========

AUCTSP maintains the basic advantages of TSP, namely the data-driven and parameter-free machine learning features that resolve the parameter tuning issue without making any assumptions about the data used, as well as the production of easily interpretable classification rules. AUCTSP, however, improves TSP by avoiding overfitting and suffering less from small sample sizes, due to the fact that every sample is compared to all other samples in the same class rather than relying on only a single sample-by-sample comparison as in TSP. In addition, AUCTSP tends to avoid selection of non-informative pivot genes, which are a known problem of TSP. Concerning selection of genes whose over-expression or under-expression is due to reasons unrelated to the disease in question, we note that this is less likely to create a problem since pairs of genes rather than single genes have to be affected in that way. Finally, we note that AUCTSP can be extended to select a number of *k*\>1 pairs of genes, with the classification being made according to a majority voting rule among those *k* pairs of genes, as was done in \[[@CR7]\], or to find triplets instead of pairs of genes as was done in \[[@CR5]\].
As a non-parametric technique, AUCTSP can also have potential benefits in areas such as RNA sequence analysis (see, e.g. \[[@CR34]\]), but this extension is left for future work.

Conclusion {#Sec10}
==========

In this paper, we have proposed the AUCTSP, a simple yet reliable and robust rank-based classifier for gene expression classification. AUCTSP works according to the same principle as TSP but differs from the latter in that the probabilities that determine the top scoring pair are computed based on the relative rankings of the two marker genes across *all* subjects as opposed to for *each* individual subject. Results of calculating and comparing the AUCTSP and TSP probabilities for synthetic data as well as 8 publicly available datasets demonstrate the better performance of AUCTSP over TSP.

Additional files {#Sec11}
================

Additional file 1: Biological relevance of the selected gene pairs. A full description of the biological findings on the genes selected by AUCTSP and TSP is given. (PDF 112 kb)

Additional file 2: Histograms of the selected genes. The histograms of all the genes selected by AUCTSP and TSP are given. (PDF 294 kb)

Abbreviations
=============

AUC: Area under the (ROC) curve
AUCTSP: AUC-based TSP
AUROC: Area under the ROC (curve)
ROC: Receiver operating characteristic (curve)
TSP: Top scoring pair

**Electronic supplementary material** The online version of this article (10.1186/s12859-018-2231-1) contains supplementary material, which is available to authorized users.

Availability of data and materials {#d29e5703}
==================================

The datasets used in the current study are already publicly available. The C code is available at <https://github.com/SIU852343578/AUC-TSP/branches>.

Authors' contributions
======================

CTY and DK conceived of the study and DK and AK implemented the code. AK collected the data and composed all figures. DK, AK, and CTY wrote the manuscript. All authors read and approved the final version of this manuscript.

Ethics approval and consent to participate {#d29e5719}
==========================================

Not Applicable.

Consent for publication {#d29e5724}
=======================

Not Applicable.

Competing interests {#d29e5729}
===================

The authors declare that they have no competing interests.

Publisher's Note {#d29e5734}
================

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Behavioural and biochemical alterations by chlorpyrifos in aquatic insects: an emerging environmental concern for pristine Alpine habitats. This study aimed to assess how different concentrations of the insecticide chlorpyrifos (1.1, 5.24, 11, 52.4, 110, 262, 524 and 1100 ng L-1) affect the swimming behaviour of Diamesa zernyi larvae following exposure. A video tracking system was employed to analyse two swimming traits (total distance moved and average speed) of the larvae simultaneously after 3 days of exposure to the pesticide at 2 °C. The behavioural results were also interpreted according to biochemical responses to oxidative stress (OS) induced by chlorpyrifos, based on malondialdehyde (MDA) and protein carbonyl (PCC) content. Both distance and speed significantly decreased after 72 h of exposure to chlorpyrifos concentrations of ≥ 110 ng L-1, under which significant OS was detected as lipid peroxidation (level of MDA) and protein carbonylation (level of carbonyl). Analysis of altered swimming behaviour, along with MDA and carbonyl content, indicated that ≥ 110 ng L-1 contamination levels of the insecticide cause the organism to reallocate energy normally used for locomotor activity to repair cell damage, which might explain the strong impairment to locomotor performance. Locomotor performance is an ecologically relevant trait for elucidating the population dynamics of key species, with disturbance to this trait having long-term negative impacts on population and community structure. Therefore, chlorpyrifos insecticides represent a serious ecological risk for mountain aquatic species based on the detrimental effects observed in the current study, as the tested concentrations were those at which the insecticide is found in many Alpine rivers of Italy.
COBRA: Context-aware Bernoulli Neural Networks for Reputation Assessment
========================================================================

**Leonit Zeynalvand,^1^ Tie Luo,^2^ Jie Zhang^1^**
^1^School of Computer Science and Engineering, Nanyang Technological University, Singapore
^2^Department of Computer Science, Missouri University of Science and Technology, USA
[email protected], [email protected], [email protected]

**Abstract.** Trust and reputation management (TRM) plays an increasingly important role in large-scale online environments such as multi-agent systems (MAS) and the Internet of Things (IoT). One main objective of TRM is to achieve accurate trust assessment of entities such as agents or IoT service providers. However, this encounters an *accuracy-privacy dilemma* as we identify in this paper, and we propose a framework called *Context-aware Bernoulli Neural Network based Reputation Assessment* (COBRA) to address this challenge. COBRA encapsulates agent interactions or transactions, which are prone to privacy leak, in machine learning models, and aggregates multiple such models using a Bernoulli neural network to predict a trust score for an agent. COBRA preserves agent privacy and retains interaction contexts via the machine learning models, and achieves more accurate trust prediction than a fully-connected neural network alternative. COBRA is also robust to security attacks by agents who inject fake machine learning models; notably, it is resistant to the 51-percent attack. The performance of COBRA is validated by our experiments using a real dataset, and by our simulations, where we also show that COBRA outperforms other state-of-the-art TRM systems.

Introduction {#intro}
============

Trust and reputation management (TRM) systems are critical to large-scale online environments such as multi-agent systems (MAS) and the Internet of Things (IoT), where agents[^1] are more autonomous and tend to have more interactions with each other. Without a reliable TRM system, such interactions will be significantly hindered due to lack of trust in services or information provided by other agents. Formal contracts such as Service Level Agreements (SLA) are hard to enforce in such open environments because of high cost and lack of central authorities.

Early TRM systems such as [@josang2002beta] rely on *first-hand evidence* to derive trust scores of agents. For example, an agent *Alice* assigns a trust score to another agent *Bob* based on the outcome of her previous interactions with *Bob*. However, as the scale of the systems grows (e.g., IoT), first-hand evidence becomes too sparse to support reliable trust evaluation. Hence, *second-hand evidence* was exploited by researchers to supplement first-hand evidence. In that case, *Alice* would assign a trust score to *Bob* based not only on her own interactions with *Bob* but also on what other agents advise about *Bob*.

However, what form the second-hand evidence should take has been largely overlooked. This engenders an important issue which we refer to as the *accuracy-privacy dilemma*. To illustrate this, suppose *Alice* consults another agent *Judy* about how trustworthy *Bob* is. One way is to let *Judy* give a trust score or rating about *Bob* [@yu2013survey], which is the approach commonly adopted in the trust research community. This approach is simple but loses the *context information* of the interactions between agents.
For example, the context could be the transaction time and location, and service provided by an agent during off-peak hours could have higher quality (more SLA-conformant) than during peak hours. Without such context information, trust assessment based on just ratings or scores would have lower accuracy. On the other hand, another method is to let *Judy* reveal her entire interaction history with *Bob* (e.g., in the form of a detailed review), which is the approach commonly used in recommender systems. Although the information disclosed as such is helpful for trust assessment of *Bob*, it is likely to expose substantial privacy of *Bob* and *Judy* to *Alice* and possibly the public.[^2]

To address this accuracy-privacy dilemma, and in the meantime avoid relying on a trusted third party which is often not available in practice, we propose a framework called *Context-aware Bernoulli Neural Network based Reputation Assessment* (COBRA). It encapsulates the detailed second-hand evidence using machine learning models, and then aggregates these models using a Bernoulli neural network (BNN) to predict the trustworthiness of an agent of interest (e.g., an IoT service provider). The encapsulation protects agent privacy and retains the context information to enable more accurate trust assessment, and the BNN accepts the outputs of those ML models and the information-seeking agent's (*Alice*'s as in the above example) first-hand evidence as input, to make more accurate trustworthiness prediction (of *Bob* as in the above example).

The contributions of this paper are summarized below:

- We identify the accuracy-privacy dilemma and propose COBRA to solve this problem using a model encapsulation technique and a Bernoulli neural network. COBRA preserves privacy by encapsulating second-hand evidence using ML models, and makes accurate trust predictions using BNN which fuses both first-hand and second-hand evidence, where the valuable context information was preserved by the ML models.
- The proposed BNN yields more accurate predictions than the standard fully-connected feed-forward neural networks, and trains significantly faster. In addition, it is also general enough to be applied to similar tasks when the input is a set of probabilities associated with Bernoulli random variables.
- The design of COBRA takes security into consideration and it is robust to fake ML models; in particular, it is resistant to the 51-percent attack, where the majority of the models are compromised.
- We evaluate the performance of COBRA using both experiments based on a real dataset, and simulations. The results validate the above performance claims and also show that COBRA outperforms other state-of-the-art TRM systems.

Related Work
============

A large strand of literature has attempted to address TRM in multi-agent systems. The earliest line of research had a focus on first-hand evidence [@yu2013survey], using it as the main source of trustworthiness calculation. For example, the Beta reputation system [@josang2002beta] proposes a formula to aggregate first-hand evidence represented by binary values indicating positive or negative outcomes of interaction. Concurrently with the spike in popularity of recommender systems in late 2004 [@recSys; @fang2015multi], the alternative usage of TRM in preference and rating management gained much research attention. However, the binary nature of trust definition presents a barrier because recommender systems conventionally use non-binary numerical ratings.
To this end, Dirichlet reputation systems [@josang2007dirichlet] generalize the binomial nature of beta reputation systems to accommodate multinomial values.

A different line of research focuses on second-hand evidence [@yu2013survey] as a supplementary source of trustworthiness calculation. These works calculate a trust score by aggregating second-hand evidence and a separate trust score by aggregating first-hand evidence, and then a final score by aggregating these two scores. Some early trust models such as [@josang2002beta] are also applicable to second-hand evidence. The challenges in this line of research are [@zhang2008evaluating]: (i) how to determine which second-hand evidence is less reliable, given that second-hand evidence is provided by other agents; and (ii) how much to rely on trust scores derived from second-hand evidence compared to scores derived from first-hand evidence. To address the first challenge, the Regret model [@sabater2001regret] assumes the existence of social relationships among agents (and owners of agents), and assigns weights to second-hand evidence based on the type and the closeness of these social relationships. These weights are then used in the aggregation of second-hand evidence. More sophisticated approaches like Blade [@blade] and Habit [@habit] tackle this issue statistically using Bayesian networks and hence do not rely on heuristics. To address the second challenge, [@fullam2007dynamically] uses a Q-learning technique to calculate a weight which determines the extent to which the score derived from second-hand evidence affects the final trust score.

A separate thread of research relies solely on stereotypical intrinsic properties of the agents and the environment in which they operate, to derive a likelihood of trustworthiness without using any evidence. These approaches [@sc1; @sc2; @sc3; @sc4] are considered a complement to evidence-based trust and are beneficial when there is not enough evidence available.

Our proposed approach does not fall under any of these categories; instead, we introduce model encapsulation as a new way of incorporating evidence into TRM. We make no assumptions on the existence of stereotypical or socio-cognitive information, as opposed to [@sc1; @sc2; @sc3; @sc4; @sabater2001regret]. Unlike [@blade; @habit], our approach has minimal privacy exposure, and it preserves important context information.

Model Encapsulation {#modeling}
===================

COBRA encapsulates second-hand evidence in ML models, which achieves two purposes: (i) it preserves the privacy of agents who are involved in the past interactions; (ii) it retains context information, which enables more accurate trust prediction later (described in the next section). In this technique, each agent trains an ML model using its past interaction records with other agents in different contexts. Specifically, an agent $u \in \mathcal{A}$ (the set of all agents) trains a model $\mathcal{M}_u^z(\zeta)=p$ based on its past direct interactions (i.e., first-hand evidence) with an agent $z$. The input to the model is a set $\zeta$ of context features (e.g., date, time, location), and the output is a predicted conditional probability $p$ indicating how trustworthy $z$ is for a given context $\zeta$.
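To make this concrete, below is a minimal sketch of such an encapsulated model in Python with scikit-learn (the library the evaluation section reports using for decision trees and Naive Bayes). The context features, record layout, and class/method names are illustrative assumptions on our part, not prescribed by the paper.

```python
# Sketch: agent u encapsulates its first-hand evidence about a target
# agent z in a per-target model M_u^z(context) -> P(trustworthy).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class EncapsulatedModel:
    """Hypothetical wrapper for M_u^z, trained on u's interactions with z."""
    def __init__(self):
        self.clf = DecisionTreeClassifier(max_depth=5)

    def fit(self, contexts, outcomes):
        # contexts: (n, k) array of context features (zeta)
        # outcomes: (n,) array of binary SLA outcomes t in {0, 1}
        self.clf.fit(contexts, outcomes)
        return self

    def predict_trust(self, context):
        # Returns p = P(t = 1 | zeta) for one context vector.
        proba = self.clf.predict_proba(np.atleast_2d(context))[0]
        classes = list(self.clf.classes_)
        return proba[classes.index(1)] if 1 in classes else 0.0

# Illustrative interaction log: (hour of day, day of week) -> outcome.
contexts = np.array([[9, 1], [14, 2], [20, 4], [22, 5], [10, 3]])
outcomes = np.array([1, 1, 0, 0, 1])   # SLA met during off-peak hours
model_u_z = EncapsulatedModel().fit(contexts, outcomes)
print(model_u_z.predict_trust([11, 2]))  # context-conditional trust score
```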
To build this model, the agent $u$ maintains a dataset that records its past interactions with each other agent, where each record includes the context $\zeta$ and the outcome $t \in \mathbb{Z}_2=\{0,1\}$, with 0 and 1 meaning a negative and a positive outcome, respectively (e.g., whether the SLA is met or not). Non-binary outcomes can be handled by the common method of converting a multi-class classification problem into multiple binary classification problems (in which case there will be multiple models per target agent). Then, agent $u$ trains a machine learning model for each agent, say $z$, using the corresponding dataset to obtain $\mathcal{M}_u^z(\zeta)=p$.

COBRA does not restrict the choice of ML models; this is up to the application and the agents. For example, agents hosted on mobile phones can choose simple models such as decision trees and Naive Bayes, while those on desktop computers or in the cloud can use more complex models such as random forests and AdaBoost. Furthermore, agents can choose different models in the same application, meaning that $\mathcal{M}_{u_1}^z$ may not be the same type as $\mathcal{M}_{u_2}^z$. On the other hand, the context feature set $\zeta$ needs to be fixed for the same application.

![The COBRA framework.[]{data-label="fig:diagram"}](diagram){width="1.04\linewidth"}

[*Model Sharing.*]{} Whenever an agent, say $a$, seeks advice (i.e., second-hand evidence) from another agent, say $u$, about an agent of interest, say $z$, the agent $u$ can share its model $\mathcal{M}_u^z(\zeta)$ with $a$. This avoids exposing second-hand evidence directly and thereby preserves the privacy of both $u$ and $z$. It also retains context information, as compared to $u$ providing $a$ with just a single trust score of $z$, and hence supports more informed decision making in the subsequent step (described in Section \[deep\]). Note that the information we seek to keep private is the contextual details of the interactions between $u$ and $z$, whereas concealing the identities of $u$ and $z$ is not the focus of this work.

Sharing a model is as straightforward as transferring the model parameters to the soliciting agent (i.e., $a$ in the above example), or making it accessible to all the agents (in a read-only mode). This sharing process does not require a trusted intermediary because the model does not present a risk of privacy leakage about $u$ and $z$. The required storage is also very low compared to storing the original evidence. Moreover, COBRA does not assume that all or most models are accurate. Unlike many existing works that assume an *honest majority* and are hence vulnerable to the 51-percent attack, COBRA uses a novel neural network architecture (Section \[deep\]) that is more robust to model inaccuracy and even malice (e.g., models that give opposite outputs).

Bernoulli Neural Network {#deep}
========================

After model encapsulation, which allows for a compressed, privacy-preserving transfer of context-aware second-hand evidence, the next question is how to aggregate these models to achieve an accurate prediction of the trustworthiness of a target agent. Using common measures of central tendency such as the mean or mode will yield misleading results, because an adviser agent’s ($u$’s) model was trained on a dataset whose contexts likely differ from the advisee agent’s ($a$’s) context. In a sense, this problem is akin to the one found in [*transfer learning*]{}.
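As a toy illustration of this failure mode (our own construction, not an experiment from the paper), suppose 100 shared models are queried at $a$'s context and 51 of them are malicious, inverting their predictions as in the attack model evaluated later; the unweighted mean and median are then both dragged toward the wrong answer:

```python
# Sketch: why unweighted central tendency fails under a malicious
# majority. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.9   # the target agent is in fact trustworthy in this context

honest = np.clip(p_true + rng.normal(0, 0.05, 49), 0, 1)          # faithful, noisy
malicious = 1.0 - np.clip(p_true + rng.normal(0, 0.05, 51), 0, 1)  # inverted

outputs = np.concatenate([honest, malicious])
print(f"mean:   {outputs.mean():.2f}")      # close to 0.5, misleading
print(f"median: {np.median(outputs):.2f}")  # pulled toward the inverted side
# A learned aggregator can instead weight each advisor by how well its
# context-conditional predictions agree with a's own first-hand evidence.
```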
Besides, COBRA aims to relax the assumption of an *honest majority* and give accurate predictions even when the majority of the models are inaccurate or malicious.

In this section, we propose a solution based on artificial neural networks (ANN). There are two reasons for choosing ANNs. First, the task of predicting trustworthiness in a specific context given other agents’ models is a linearly non-separable task with a high-dimensional input space (detailed in Section \[deepdown1\]). Such tasks can specifically benefit from an ANN’s capability of discovering intrinsic relationships hidden in data samples [@zhang2010computational]. Second, the models are non-ideal due to the possibly noisy agent datasets, but ANNs are highly noise-tolerant and can sometimes even be positively affected by noise [@madic2010assessing; @luo2014deep].

Therefore, we propose a Bernoulli Neural Network (BNN) as our solution. A BNN specializes in processing data that is a set of probabilities associated with Bernoulli-distributed random variables, which matches our input space exactly: a set of predicted trust scores between zero and one, each indicating the probability of an agent being trustworthy in a given context. In contrast to the widely used Convolutional Neural Network (CNN), a BNN does not require data to have a grid-like or structured topology, and hence matches well with trust or reputation scores. Specifically, unlike a CNN, which exploits the hierarchical pattern in data, a BNN uses information entropy to determine the connections in the network.

Fig. \[fig:diagram\] provides an overview of COBRA, where the models on the left-hand side come from the encapsulation technique described in Section \[modeling\], and the right-hand side is the BNN described in this section. In the following, we explain the architectural design of the BNN in Sections \[deepdown1\]–3 and describe how to assemble the data required for training the BNN in Section \[deepdown4\].

Topology {#deepdown1}
--------

We propose an $(N+1)$-layer network architecture for the BNN, where the input layer is denoted by $L_0$, the output layer by $L_N$, and the hidden layers by $L_k$ where $k=1,...,N-1$. The weight of an edge $(i,j)$ is denoted as $w_{ij}$ where $i \in L_{k-1}$ and $j \in L_{k}$, $k=1,2,...,N$. The bias at layer $L_{k-1}$ is $b_{k-1}$. Thus, the entire network can be compiled from Eq. \[deq:1\], where the output of any node $j\in L_k$ is given by $y_j$. The inputs $x_j$ of the network in Eq. \[deq:1\] are assembled from (1) the models explained in Section \[modeling\] and (2) the context features, where the assembling process is explained in Section \[deepdown4\]. The former are values between zero and one that indicate the predicted trustworthiness probability of an entity in the given context, as sourced from different predictors.

$$\begin{aligned}
\label{deq:1}
 y_j =
 \begin{cases}
 f_k\left({\displaystyle b_{k-1} + \sum_{i\in L_{k-1}} sgn\left({\displaystyle\left\lfloor{}\dfrac{|L_k|-h(y_i)|L_k|}{\left|\left\{ y_{t} | \ \ \dfrac{\partial y_t}{\partial y_i} \neq 0 \, , \, t \in L_{k}\right\}\right|+1}\right\rfloor}\right)w_{ij} y_i}\right), & j \in L_{k} : k=1,2,...,N \\
 x_j, & j \in L_{0}
 \end{cases}\end{aligned}$$

The probabilistic nature of the inputs enables us to calculate how informative an input is, by computing the entropy of the predicted trustworthiness for which the input indicates the probability. This is used in Eq.
\[deq:1\] to ensure that the number of neural units an input connects to is inversely proportional to the average information entropy calculated over the input samples. Each sample of an input in the training dataset (context features excluded) is a probability (a sourced prediction) associated with a Bernoulli random variable (trustworthiness). Hence, $h(.)$ is defined recursively by Eq. \[eq:h\], where $\overline{H}{(x_j)}$ is the average entropy of $x_j$ in the training dataset. For context features, $\overline{H}{(x_j)}=0$, because the values of these features are not probabilities of Bernoulli random variables; the notion of entropy applies only to their entire feature space, not to individual values, which is what Eq. \[deq:1\] uses. Moreover, $f_k(.)$ in Eq. \[deq:1\] is the activation function of layer $k$. The design choice of the activation functions is explained in Section \[deepdown3\].

$$\begin{aligned}
{ h(y_j) =}
\begin{cases}\displaystyle
\dfrac{\displaystyle\sum_{i \in I}{h(y_{i})}}{|I|} , & j\in L_{k} \, : \, I=\{i \in L_{k-1} | \dfrac{\partial y_j}{\partial y_{i}} \neq 0\} \\
 & \\
{\displaystyle \overline{H}{(x_j)}}, & j\in L_{0}
\end{cases}
\label{eq:h} \end{aligned}$$

Depth and width {#deepdown2}
---------------

The depth of the BNN is $N$, since the input layer is not counted by convention. A feed-forward network with two hidden layers can be trained to represent any arbitrary function, given sufficient width and density (number of edges) [@heaton2008introduction]. Our goal is to find the function which most accurately weights the predictions sourced from multiple predictors (i.e., a high-dimensional input space). Many such sources can be unreliable or misleading, either unintentionally (e.g., malfunction) or deliberately (e.g., malice). Often there does not exist a single source which is always reliable, and some sources are more reliable in some contexts. Moreover, the malicious sources sometimes collude with each other to make the attack harder to detect. Therefore, the function that we aim to estimate in this linearly non-separable task can have any arbitrary shape. Hence, we choose $N=3$ in our design to benefit from two hidden layers, which suffice to estimate the aforementioned function, as we demonstrate with our experiment results in Section \[evSec\].

The width of a layer is the number of units (nodes) in that layer, and accordingly we denote the width of a layer $k$ by $\vert L_k \vert$. Determining the width is largely an empirical task, and there are many rule-of-thumb methods used by practitioners. For instance, [@heaton2008introduction] suggests that the width of a hidden layer be $2/3$ the width of the previous layer plus the width of the next layer. Inspired by this method, we propose a measure called *output gain*, defined as the summation of the information gain of the inputs of a node, and determine $|L_k|$ by Eq. \[deq:2\]. The width $|L_N|$ is set to $1$ because the network has only a single output, which is the trust score (probability of being trustworthy), and the width $|L_0|$ is set to the total number of input nodes, denoted by $n$.
$$\begin{aligned}
{\vert L_k \vert=}
\begin{cases}\displaystyle
n, & k=0 \\
 & \\
{\displaystyle\left\lceil\sum_{j\in L_{0}} {\frac{2}{3}\left( 1 - \overline{H}(x_j) \right)}\right\rceil} + \vert L_{2} \vert , & k=1 \\
 & \\
{\displaystyle\left\lceil {\frac{2}{3} \vert L_{1} \vert }\right\rceil} + \vert L_{3} \vert , & k=2 \\
 & \\
1, & k=3
\end{cases}
\label{deq:2} \end{aligned}$$

Activation and loss functions {#deepdown3}
-----------------------------

Let us recall the activation function $f_k(.)$ from Eq. \[deq:1\] in Section \[deepdown1\]. Since we choose $N=3$ as explained in Section \[deepdown2\], we need to specify three activation functions $f_{1}(.), f_{2}(.)$, and $f_{3}(.)$ for the first hidden layer, the second hidden layer, and the output layer, respectively. For the output layer, we choose the sigmoid logistic function $f_3(z)=1/(1+e^{-z})$ because we aim to output a trust score (the probability that the outcome of interacting with a certain agent is positive for a given context). For the hidden layers, we choose the rectified linear unit (ReLU) [@lecun2015deep] function $f_{1,2}(z)=\max(0,z)$, because the focus of the hidden layers is to exploit the compositional hierarchy of the inputs to compose higher-level (combinatorial) features so that the data become more separable in the next layer, and hence the speed of convergence is a main consideration.

The weights in the BNN are computed using *gradient descent back propagation* during the training process. However, the sigmoid activation function we choose has a saturation effect which results in small gradients, while gradients need to be large enough for weight updates to be propagated back into all the layers. Hence, we use [*cross-entropy*]{} $H(p,q) = - \sum_{x} p(x) \log(q(x))$ as the loss function to mitigate the saturation effect of the sigmoid function. Specifically, the $\log(.)$ component in the loss function counteracts the $\exp(.)$ component in the sigmoid activation function, thereby enabling the gradient-based learning process.
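These design choices can be sketched with the Keras functional API (which the evaluation section reports using). Note this is a simplification: a plain `Dense` stack does not reproduce the entropy-driven pruning of input connections in Eq. \[deq:1\], and the layer widths below come from solving the recursion in Eq. \[deq:2\] (the closed form is derived in the next subsection); the helper names are our own.

```python
# Sketch: a dense approximation of the 3-layer BNN design
# (ReLU hidden layers, sigmoid output, cross-entropy loss).
import math
import numpy as np
from tensorflow import keras

def bernoulli_entropy(p, eps=1e-12):
    # Entropy (in bits) of a Bernoulli variable with success probability p.
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def layer_widths(model_inputs, n_context):
    # model_inputs: (n_samples, |X_a|) matrix of advisor trust scores.
    # Context features contribute average entropy 0 by definition.
    h_bar = np.concatenate([np.zeros(n_context),
                            bernoulli_entropy(model_inputs).mean(axis=0)])
    s = math.ceil(np.sum(1.0 - h_bar))
    return 2 * s + 3, math.floor(4 * s / 3) + 3   # |L_1|, |L_2|

def build_network(n_inputs, l1, l2):
    x_in = keras.Input(shape=(n_inputs,))
    h1 = keras.layers.Dense(l1, activation="relu")(x_in)
    h2 = keras.layers.Dense(l2, activation="relu")(h1)
    out = keras.layers.Dense(1, activation="sigmoid")(h2)
    model = keras.Model(x_in, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```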
[Assembling training data ]{} {#deepdown4}
-----------------------------

Having explained the architectural design aspects of our Bernoulli neural network, we now explain its computational aspects. The output of the neural network is a predicted probability that a target agent $z$ is trustworthy (e.g., meets the SLA) in a certain context $\zeta$, which is what an agent $a$ tries to find out. The input of the network consists of (1) all the context features $\varsigma \in \zeta$ and (2) all the probabilities predicted by the models $\mathcal{M}_u^z(\zeta)$ shared by all the $u\in X_a$, where $X_a \subseteq \mathcal{A}$ is the set of agents that $a$ is seeking advice from. In the case that some agents $u \in X_a$ do not share their models with agent $a$, the corresponding input probability is set to 0.5 to represent the absence of any information. Formally, the input from the models to the neural network is given by

$$G_a(x,z,\zeta) =
 \begin{cases}
 \mathcal M_{x}^{z}(\zeta) & \mathcal M_{x}^{z} \in R_a \\
 0.5 & \mathcal M_{x}^{z} \not\in R_a
 \end{cases}
 \label{deq:6}$$

where $R_a$ is the set of models available to $a$. More precisely, each input variable (to layer $L_0$) is specified by

$$x_j =
 \begin{cases}
 \varsigma_{j} & j=1,2,...,|\zeta| \\
 G_a(X_a^{j-|\zeta|},z,\zeta) & j=|\zeta|+1,...,|\zeta|+|X_a|
 \end{cases}
 \label{deq:6b}$$

which also gives the number of input nodes (i.e., the input dimension):

$$n=|X_a|+|\zeta|.$$

Thus, we transform the recursive Eq. \[deq:2\] into a system of linear equations:

$$\begin{aligned}
\begin{cases}
|L_1|&=\frac{2}{3}\displaystyle\left(\left\lceil\sum_{j\in L_0} {\left( 1 - \overline{H}(x_j) \right)}\right\rceil\right)+|L_2| \\
\\
|L_2|&=\frac{2}{3}(|L_1|)+1
\end{cases}
\label{deq:7} \end{aligned}$$

Solving Eq. \[deq:7\] yields the widths of all the layers of our neural network:

$$\begin{aligned}
\begin{split}
|L_1|&=2\displaystyle\left(\left\lceil\sum_{j\in L_0} {\left( 1 - \overline{H}(x_j) \right)}\right\rceil\right)+3,\\
|L_2|&=\left\lfloor{\frac{4}{3}\displaystyle\left(\left\lceil\sum_{j\in L_0} {\left( 1 - \overline{H}(x_j) \right)}\right\rceil\right)}\right\rfloor+3,\\
|L_3|&=1.
\end{split}\end{aligned}$$

The weights are calculated using gradient descent back propagation based on training data. The training data is initialized once using Algorithm \[alg1\], updated *vertically* upon acquiring new first-hand evidence using Algorithm \[alg2\], and updated *horizontally* upon acquiring a new model using Algorithm \[alg3\]. The three algorithms, given below in pseudocode, operate on the first-hand evidence of agent $a$, $E_a=\{(z,\varsigma_1,\varsigma_2,...,\varsigma_{k},t) \mid z \in \mathcal{S}_a \subseteq \mathcal{S},\ k=|\zeta|,\ t \in \{0,1\}\}$, where $\mathcal{S}$ is the set of target agents (whose reputation is to be predicted), and on the model repository $R_a= \{\mathcal{M}_u^z \mid u \in X_a\subseteq\mathcal{A},\ z \in \mathcal{S}\}$.

**Algorithm 1** (initialize the training dataset; input: $E_a$ and $R_a$; output: $Features$ and $Label$):

    1:  Features ← ∅; Label ← ∅
    2:  i ← 0; j ← 0
    3:  for each (z, ς1, ..., ςk, t) ∈ E_a do
    4:      for each u ∈ X_a do
    5:          tmp[i] ← G_a(u, z, ς1, ..., ςk)
    6:          i ← i + 1
    7:      end for
    8:      Features[j] ← (ς1, ..., ςk, tmp[0], ..., tmp[|X_a|−1])
    9:      Label[j] ← t
    10:     j ← j + 1; i ← 0
    11: end for
    12: return Features, Label

**Algorithm 2** (vertical update; input: a new piece of first-hand evidence $(z,\varsigma_1,...,\varsigma_{k},t)$, the current $Features$ and $Label$, and $R_a$; output: updated $Features$ and $Label$):

    1:  i ← 0
    2:  for each u ∈ X_a do
    3:      tmp[i] ← G_a(u, z, ς1, ..., ςk)
    4:      i ← i + 1
    5:  end for
    6:  append (ς1, ..., ςk, tmp[0], ..., tmp[|X_a|−1]) to Features
    7:  append t to Label
    8:  return Features, Label

**Algorithm 3** (horizontal update; input: a new model $\mathcal{M}_{u'}^{z'}$ with $u' \in \mathcal{A}$ and $z' \in \mathcal{S}$, the current $Features$ ($Label$ is not needed), and $E_a$; output: updated $Features$):

    1:  i ← 0
    2:  for each (z', ς1, ..., ςk, t) ∈ E_a do
    3:      Features[i].u' ← M_{u'}^{z'}(ς1, ..., ςk)
    4:      i ← i + 1
    5:  end for
    6:  return Features

In Algorithm \[alg1\], the training data, which consists of $Features$ as given by Eq. \[deq:6\] and $Label$, is first initialized to $\emptyset$.
Then, the first-hand evidence $E_a$ is iterated over (line 3) to find the historical information about the agent $z$, i.e., the outcome $t$ and context $\varsigma_1,...,\varsigma_k$ of each interaction. This information is then supplied to $G_a(.)$ (Eq. \[deq:6\]) to obtain the predicted conditional probability $P\left(t=1 \left\vert \varsigma_1,...,\varsigma_k\right.\right)$. The probabilities and the corresponding labels are then added to $Features$ and $Label$ to form the training data (lines 8-12). After initialization, all the subsequent updates are performed using Algorithms \[alg2\] and \[alg3\], where Algorithm \[alg2\] is executed when new first-hand evidence is available at $a$, and Algorithm \[alg3\] is executed when $a$ receives a new model from a new advisor agent or an updated model from an existing advisor agent.

The time complexity of Algorithm \[alg1\] is $O(|E_a|\times|X_a|)$. The time complexities of Algorithms \[alg2\] and \[alg3\] are $O(|X_a|)$ and $O(|E_a|)$, respectively. The training and retraining of the neural network using the above training dataset can be performed either by the agent itself or outsourced to fog computing [@yousefpour2019all]; the same applies to the storage of the neural network.

Evaluation {#evSec}
==========

We evaluate COBRA using both experiments and simulations.

Experiment setup {#ev1Sec}
----------------

[**Dataset.**]{} We use a public dataset obtained from [@Zheng:2014:IQR:2587728.2587740] which contains the response-time values of $4,532$ web services invoked by $142$ service users over $64$ time slices. The dataset contains $30,287,611$ records of data in total, which translates to a data sparsity of $26.5\%$. Following [@nielsen1994usability], we assume a standard SLA which specifies that 1 second is the limit that keeps a user’s flow of thought uninterrupted. Hence, a response time above $1$ second is considered a violation of the SLA and assigned a *False* label, while a response time of $1$ second or less is assigned a *True* label, which indicates that the SLA is met.

[**Platform.**]{} All measurements are conducted using the same Linux workstation with 12 CPU cores and 32GB of RAM. The functional API of [Keras]{} is used for the implementation of the neural network architectures on top of the [TensorFlow]{} backend, while [scikit-learn]{} is used for the implementation of the Gaussian process, decision tree, and Gaussian Naive Bayes models.

[**Benchmark methods.**]{} We use the following benchmarks for comparison:

- *Trust and Reputation using Hierarchical Bayesian Modelling (HABIT)*: This probabilistic trust model is proposed by [@habit]; it uses Bayesian modelling in a hierarchical fashion to infer the probability of trustworthiness based on direct and third-party information, and it outperforms other existing probabilistic trust models.

- *Trust Management in Social IoT (TMSIoT)*: This model is proposed by [@SIOT], in which the trustworthiness of a service provider is a weighted sum of a node’s own experience and the opinions of other nodes that have interacted with the service provider.

- *Beta Reputation System (BRS)*: This well-known model, proposed by [@josang2002beta], uses the beta probability density function to combine feedback from various agents to calculate a trust score.

[**Evaluation metrics.**]{} We employ two commonly used metrics. One is the accuracy, defined as

$$ACC=\frac{{TP} + {TN}}{{TP} + {TN} + {FP} + {FN}}$$

where *TP = True Positive*, *FP = False Positive*, *TN = True Negative*, and *FN = False Negative*.
The other metric is the root mean squared error (RMSE), defined by

$$RMSE(T,\hat{T}) = \sqrt{\frac{1}{m}\sum_{i=1}^{m}{(T_i -\hat{T}_i)^2}}$$

where $T$ is the ground-truth trustworthiness, $\hat{T}$ is the predicted probability of trustworthiness, and $m$ is the total number of predictions.

Experiment procedure and results
--------------------------------

We run COBRA for each of the 142 web-service clients to predict whether a web-service provider $z$ can be trusted to meet the SLA, given a context $\zeta$, which here is the time slice during which the service was consumed. We experiment on $800,000$ random samples of the dataset due to two main considerations: (1) COBRA is a multi-agent approach, but in the experiment we build all the models and BNNs on one machine; and (2) the high time and space complexity of the Gaussian process used in HABIT restricts us to working with a sample of the dataset. We employ $10$-fold cross-validation and compare the performance of COBRA with the benchmark methods described in Section \[ev1Sec\].

In COBRA-DT, a decision tree is used for model encapsulation for all 142 agents; in COBRA-GNB, Gaussian Naive Bayes is used for the encapsulation for all 142 agents; and in a hybrid approach, COBRA-Hyb, a decision tree is used for 71 randomly selected agents while Gaussian Naive Bayes is used for the rest. In HABIT, the reputation model is instantiated using a Gaussian process with a combination of *dot product + white kernel* covariance functions. In COBRA-DT/GNB/Hyb-B, our proposed neural network architecture from Section \[deep\] (BNN) is used, while in COBRA-DT/GNB/Hyb-D, a fully-connected feed-forward architecture (Dense) is used instead.

The results, as illustrated in Fig. \[fig:a\] and Fig. \[fig:b\], indicate that all the versions of COBRA with the Bernoulli neural engine outperform the benchmark methods, while without our proposed Bernoulli neural architecture, HABIT is competitive with the Dense version of COBRA-GNB. The choice of the encapsulation model only slightly affects the performance in hybrid mode, which suggests that the performance of COBRA is stable. Furthermore, we present the moving averages of prediction time and training time for the BNN versions of COBRA compared to the Dense versions in Fig. \[fig:c\] and Fig. \[fig:d\], respectively. The results indicate that our proposed BNN architecture significantly reduces the time required for training and making predictions. Moreover, as illustrated in Fig. \[fig:e\], the divergence between the training accuracy and validation accuracy of BNN is significantly smaller than that of Dense. Similarly, Fig. \[fig:f\] depicts a smaller divergence between the training loss and validation loss of BNN compared to that of Dense. These results indicate that Dense is more prone to overfitting as the epochs increase.

Simulation setup {#ev2Sec}
----------------

For a more extensive evaluation of COBRA, especially with respect to extreme scenarios which may not be observed often in the real world, we also conduct simulations. We simulate a multi-agent system with 51 malicious agents and 49 legitimate agents, in consideration of the 51-percent attack. The attack model used for the malicious agents consists of fake and misleading testimonies, which is a common attack in TRM systems. Specifically, a model shared by a malicious agent provides the opposite prediction of the trustworthiness of a target agent, i.e., it outputs $1-p$ where $p$ is what the model would predict if it were not malicious.
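A minimal sketch of this attack model, reusing the hypothetical `predict_trust` interface from the earlier encapsulation sketch (the paper does not prescribe an implementation):

```python
# Sketch: a malicious advisor wraps a legitimate encapsulated model
# and inverts its output, yielding 1 - p instead of p.
class MaliciousModel:
    def __init__(self, base_model):
        self.base = base_model

    def predict_trust(self, context):
        return 1.0 - self.base.predict_trust(context)

def make_advisors(base_models, n_malicious=51):
    # Compromise the first n_malicious advisors; keep the rest honest.
    return [MaliciousModel(m) if i < n_malicious else m
            for i, m in enumerate(base_models)]
```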
Denote by $\Phi$ the probability that an arbitrary agent interacts with an arbitrary target agent, which we treat as a random variable with a beta distribution parameterized by $\alpha$ and $\beta$. We run $100$ simulations, each with a different distribution of $\Phi$. For example, $\alpha=\beta=0.5$ means that one group of agents interacts with the target agent frequently while another group seldom does; $\alpha=\beta=2$ means that most of the agents have about a half chance of interacting with the target agent; $\alpha=5,\beta=1$ means that most of the agents interact with the target agent frequently; and $\alpha=1,\beta=3$ means that most of the agents seldom interact with the target agent. We use 4 synthesized context features randomly distributed in the range $[-1,1]$, and generate $100$ different target agents, each of which violates the SLA with a probability that follows a normal distribution conditioned on each context feature.

Simulation results
------------------

The simulation results are shown in Fig. \[fig:3d\]-\[fig:contour\], where the key observations are:

- COBRA is able to predict accurate trust scores (the probability of being trustworthy) for the majority of the cases. In particular, in 90 out of 100 simulated distributions of $\Phi$, an accuracy greater than or equal to 85% is achieved.

- It is crucial to note that these results are achieved when 51% of the agents are malicious. This shows that COBRA is resistant to the 51-percent attack.

Conclusion {#fut}
==========

This paper proposes COBRA, a context-aware trust assessment framework for large-scale online environments (e.g., MAS and IoT) without a trusted intermediary. The main issue it addresses is the accuracy-privacy dilemma. Specifically, COBRA uses model encapsulation to preserve privacy that could otherwise be exposed by second-hand evidence, while in the meantime retaining context information as well. It then uses our proposed Bernoulli neural network (BNN) to aggregate the encapsulated models and first-hand evidence to make an accurate prediction of the trustworthiness of a target agent. Our experiments and simulations demonstrate that COBRA achieves higher prediction accuracy than state-of-the-art TRM systems, and is robust to the 51-percent attack, in which the majority of agents are malicious. It is also shown that the proposed BNN trains much faster than a standard fully-connected feed-forward neural network, and is less prone to overfitting.

Acknowledgments
================

This work is partially supported by the MOE AcRF Tier 1 funding (M4011894.020) awarded to Dr. Jie Zhang.

[^1]: Throughout this paper, we use the term agent in a broad sense which is not limited to agents in MAS, but also includes IoT service providers and consumers as well as other similar cases.

[^2]: Recommender systems can take this approach because they are generally considered trusted intermediaries, and they focus on preference modeling rather than trust and reputation modeling. | High | [
0.6615384615384611,
32.25,
16.5
] |
[Relationship between heterozygosity as estimated from genetic markers and fertility in cattle : II. Heterozygosity and fertility]. The present investigation deals with the connexion between heterozygosity, as estimated from markers, and fertility traits in cattle. In adult cows and/or cows under suboptimal management the maternal marker heterozygosity showed a definite influence upon fertility. In unselected field records of the Hinterwälder breed, the calving interval decreased 2.3 days per 10% increase in marker heterozygosity. The number of inseminations per conception decreased by 6.2% per 10% marker heterozygosity in Fleckvieh progeny groups. Cows with less than 20 and 25%, resp., marker heterozygosity differed from those with more than 50 and 55%, resp., by 11.1 days calving interval and 0.6 inseminations per conception, resp. It was not found that the estimated marker heterozygosity of the prospective foetus influenced the success of the insemination. This might be due to the rather imprecise method of estimating foetal marker heterozygosity, but other investigators also attribute less importance to foetal heterozygosity than to maternal heterozygosity. Fertility traits of cattle are essentially all-or-none characters. Therefore, the lack of statistical significance of some of the results presented here may be due more to the imprecision of the estimate of fertility of the relatively small groups than to the method used. | Mid | [
0.6102564102564101,
29.75,
19
] |
Quantum chemical investigation on the mechanism and kinetics of OH radical-initiated atmospheric oxidation of PCB-47. The OH radical-initiated atmospheric oxidation degradation of 2,2',4,4'-tetrachlorobiphenyl (PCB-47) was investigated by using quantum chemical calculations. All possible pathways involved in the oxidation process were discussed. Potential barriers and reaction heats have been obtained to assess the energetically favorable reaction pathways and the relatively stable products. The study shows that the OH radicals are more likely to attack the C3 and C5 atom of the aromatic ring in the PCB-47 molecule to form PCB-OH adducts. Subsequent reactions are the addition of O2 or NO2 molecule to the PCB-OH adducts at the ortho position of the OH group. Water molecule plays an important role during the whole degradation process. The individual and overall rate constants were calculated by using the Rice-Ramsperger-Kassel-Marcus (RRKM) theory over the temperature range of 180-370K. At 298K, the atmospheric lifetime of PCB-47 determined by OH radicals is about 9.1d. The computational results are crucial to risk assessment and pollution prevention of PCBs. | High | [
0.6841339155749631,
29.375,
13.5625
] |
Construction and characterization of an Actinobacillus pleuropneumoniae serotype 2 mutant lacking the Apx toxin secretion protein genes apxIIIB and apxIIID. Apx toxins have been identified as important virulence factors of Actinobacillus pleuropneumoniae, the etiologic agent of porcine pleuropneumonia. In some A. pleuropneumoniae serotypes, Apx toxins are secreted by the cell membrane proteins encoded by the apxIIIB and apxIIID genes. In an effort to develop a live vaccine strain against A. pleuropneumoniae, we inactivated the apxIIIB and apxIIID genes in A. pleuropneumoniae 1536, a serotype 2 strain, resulting in the DeltaapxIIIB/DeltaapxIIID mutant strain (1536DeltaBDeltaD). Immunization of pigs with live 1536DeltaBDeltaD A. pleuropneumoniae conferred protection against homologous challenge with wild-type A. pleuropneumoniae 1536. Thus, impaired Apx toxin secretion may decrease the virulence of A. pleuropneumoniae and may be an effective strategy for the development of a live-attenuated A. pleuropneumoniae vaccine. | High | [
0.6637426900584791,
28.375,
14.375
] |
What can we learn from patients to improve their non-invasive ventilation experience? 'It was unpleasant; if I was offered it again, I would do what I was told'. Non-invasive ventilation (NIV) is widely used as a lifesaving treatment in acute exacerbations of chronic obstructive pulmonary disease; however, little is known about the patients' experience of this treatment. This study was designed to investigate the experiences and perceptions of participants using NIV. The study interprets the participants' views and explores implications for clinical practice. Participants with respiratory failure requiring NIV were interviewed 2 weeks after discharge. A grounded theory methodology was used to order and sort the data. Theoretical sufficiency was achieved after 15 participants. Four themes emerged from the data: levels of discomfort with NIV, cognitive experiences with NIV, NIV as a life saver and concern for others. NIV was uncomfortable for participants and affected their cognition; they still reported considering NIV as a viable option for future treatment. Participants described a high level of trust in healthcare professionals and delegated decision-making to them regarding ongoing care. This study provides insights into ways clinicians could improve the physical experience for patients with NIV. It also identifies a lack of recall and delegation of decision-making, highlighting the need for clinical leadership to advocate for patients. | Mid | [
0.648148148148148,
35,
19
] |
8.07.2010 Dry Shampoo Review* If you haven't yet noticed my 'formspring- Ask me Anything' Box, it's on the left-hand side! And I was very excited to get this wonderful Question! I am hoping that I will be able to answer it and guide you in the right direction. Here is what someone has asked- "What dry shampoo would you recommend for hair that doesn't really need a ton of volume but gets super greasy roots? I like to wash my hair on the third day, because otherwise it is sooo dry everywhere except the roots. Plus, as a new mom I'm pressed for s." (There was obviously a cut off on the question, but I think I get what she is asking.) My answer to this question will be plainly from my own personal experience with dry shampoos. Here is what I have been loving lately. Product Description- Kevin Murphy Fresh Hair Dry Cleaning Spray A "dry cleaner" for the hair that gives body and the look of freshly washed hair, removes odours and all your sins from the night before. Can be used as a texturiser for modern, matte looks. Contains fragrant oil extracts of Cassia and Orange Flower. When using a dry shampoo don't think of it as a hairspray. It's not for all over; you want to go for a light spray "on the roots, hair line and part line" ONLY. This will avoid the dusty look, and remember to either rub with a towel or brush after using. Use: Apply a fine mist onto dry hair and rub with a dry towel. * I really like the smell of this-- it's so fresh, and I am also someone who doesn't need much extra volume or texture, so I like the cleaner, lighter feel of this product. * Dislikes- You MUST consistently shake this product before and while using it so it doesn't settle, because of the 'spray-powder' technology. Description... FRESH.HAIR is a dry cleaner for your hair. This revolutionary product will remove all your sins from the night before. FRESH.HAIR should be sprayed lightly onto the hair; rub with a dry towel then brush out unwanted product, dust and odours. This product will freshen up your hair after an international flight or gym workout. Suitable for all hair types, especially fine hair. * This product is also GREAT! It's very similar to the Dry Cleaner, but this is great for fine hair, and is more of a mist/lotion for the hair which you spray, then rub with a towel-- so not so much a 'DRY' shampoo, but it will cleanse your hair without actually scrubadubbing, ya know?! | Low | [
0.514563106796116,
26.5,
25
] |
[remap] importer="texture" type="StreamTexture" path="res://.import/sky.png-a5517b333d3147868e017127dcd96076.stex" metadata={ "vram_texture": false } [deps] source_file="res://assets/environment/background/sky.png" dest_files=[ "res://.import/sky.png-a5517b333d3147868e017127dcd96076.stex" ] [params] compress/mode=0 compress/lossy_quality=0.7 compress/hdr_mode=0 compress/bptc_ldr=0 compress/normal_map=0 flags/repeat=1 flags/filter=true flags/mipmaps=false flags/anisotropic=false flags/srgb=2 process/fix_alpha_border=true process/premult_alpha=false process/HDR_as_SRGB=false process/invert_color=false stream=false size_limit=0 detect_3d=true svg/scale=1.0 | Mid | [
0.603421461897356,
24.25,
15.9375
] |
City of Melville pushing ahead with plans for Bert Jeffrey Park, despite community resistance CITY of Melville is pushing ahead with plans for change rooms at Bert Jeffrey Park, despite community resistance. But chief executive Shayne Silcox has promised that a report being prepared on the proposed asset will address many of the unanswered questions that have bugged nearby residents over the last year. The Melville Residents and Ratepayers Association (MRRA) prompted a special electors meeting earlier this month where they called on the City to halt planning on the new facility at the Murdoch park. The Association wanted the City to wait until an inquiry into the council had finished, the community had been consulted about plans for the venue and Melville officers had produced a report looking at alternative venues for the Applecross Cricket Club. But on Tuesday night councillors instead voted 11-2 in favour of a motion from City governance and compliance advisor Jeff Clark. Mr Clark’s recommendation asked councillors to note the intent of the electors’ motion and ask that a report on the development of changing rooms be put before elected members by May at the latest. Concerns were initially raised about Bert Jeffrey after a turf wicket was installed in early 2017 to allow the Applecross Cricket Club to host Saturday games there. The MRRA argued, as did residents, there was no consultation and a petition signed by hundreds had been dismissed. Dr Silcox said an urgent need to prepare the wicket meant the council had to move quickly, “so we weren’t consulting, we were informing the community of the pitch being put in”. “I would have loved to have had a 12-month period to have made that decision,” he said. “We didn’t have that. We made a call.” Dr Silcox said plans for the change rooms, currently out for public consultation, would involve wider input than the pitch did. “We would need to consult on an asset, its location, size (and) things like that if we were to proceed,” he said. “So this report will talk through what actually happened, to put it on record, but also seek council’s endorsement either to put it in future budgets, the next budget or maybe the budget in 2019-20, 20-21, or 21-22 or never. “That’s a decision of council.” Councillor Nicole Robins also revealed she had gone against plans to investigate locations for turf wickets, but said it was for good reason. “First of all I’m not going to deny the fact that I told residents I would move a motion requesting all venues for the installation of a turf wicket be considered,” she said. “When I first started putting that motion together, further investigation revealed that work had already been done by our officers. “I’d like to make the point the outcome would have been the same if I’d moved that motion as the outcome we’re looking at today. “I encourage residents to take that into account. “The bottom line is that one of our local sporting clubs needs somewhere to play and we’ve space to accommodate them.” | Mid | [
0.551162790697674,
29.625,
24.125
] |
<?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="wrap_content" android:background="?selectableItemBackground" android:padding="4dp"> <ImageView android:id="@+id/imageView" android:layout_width="122dp" android:layout_height="164dp" android:scaleType="centerCrop" /> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:textAppearance="?android:attr/textAppearanceMedium" android:id="@+id/textView_title" android:padding="6dp" android:textColor="?android:textColorPrimary" android:layout_toRightOf="@+id/imageView" android:layout_alignParentRight="true" android:layout_alignParentTop="true" /> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:textAppearance="?android:attr/textAppearanceMedium" android:id="@+id/textView_badge" android:background="@drawable/background_badge" android:textColor="@color/white_overlay_85" android:layout_below="@+id/textView_subtitle" android:layout_toRightOf="@+id/imageView" android:layout_toEndOf="@+id/imageView" android:layout_margin="8dp" /> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:textAppearance="?android:attr/textAppearanceSmall" android:text="Small Text" android:id="@+id/textView_subtitle" android:layout_below="@+id/textView_title" android:layout_toRightOf="@+id/imageView" android:layout_toEndOf="@+id/imageView" android:paddingLeft="6dp" android:paddingRight="6dp" android:layout_alignParentRight="true" /> </RelativeLayout> | Low | [
0.48970251716247104,
26.75,
27.875
] |
Investigations in ultrasonic enhancement of β-carotene production by isolated microalgal strain Tetradesmus obliquus SGM19. Microalgae constitute a relatively novel source of lipids for biodiesel production. The economy of this process can be enhanced by the recovery of the β-carotenes present in the microalgal cells. The present study has addressed the enhancement of lipid and β-carotene production by the microalgal species Tetradesmus obliquus SGM19 with the application of sonication. As a first step, the growth cycle of Tetradesmus obliquus SGM19 was optimized using statistical experimental design. Optimum parameters influencing microalgal growth were: Sodium nitrate = 1.5 g/L, ethylene diamine tetraacetic acid = 0.001 g/L, temperature = 28.5 °C, pH = 7.5, light intensity = 5120 lux, β-carotene yield = 0.67 mg/g DCW. Application of 33 kHz and 1.4 bar ultrasound at a 10% duty cycle was found to enhance the lipid and β-carotene yields by 34.5% and 31.5%, respectively. Kinetic analysis of substrate and product profiles in control and test experiments revealed both lipid and β-carotene to be growth-associated products. The intracellular NAD(H) content during the late log phase was monitored in control and test experiments as a measure of the relative kinetics of intracellular metabolism. Consistently higher NAD(H) concentrations were observed for the test experiments, indicating faster metabolism. Finally, the viability of ultrasound-exposed microalgal cells (assessed with flow cytometry) was >80%. | High | [
0.656934306569343,
33.75,
17.625
] |
A former editor for Forbes and the Financial Times, Eamonn Fingleton spent 27 years monitoring East Asian economics from a base in Tokyo. In September 1987 he issued the first of several predictions of the Tokyo banking crash and went on in "Blindside," a controversial 1995 analysis that was praised by John Kenneth Galbraith and Bill Clinton, to show that a heedless America was fast losing its formerly vaunted leadership in advanced manufacturing -- and particularly in so-called producers' goods -- to Japan. His 1999 book "In Praise of Hard Industries: Why Manufacturing, Not the Information Economy, Is the Key to Future Prosperity" anticipated the American Internet stock crash of 2000 and offered an early warning about the abuse of new financial instruments. In his 2008 book "In the Jaws of the Dragon: America’s Fate in the Coming Era of Chinese Hegemony," he challenged the conventional view that China is converging to Western economic and political values. His books have been translated into French, Russian, Korean, Japanese, and Chinese. They have been read into the U.S. Senate record and named among the ten best business books of the year by Business Week and Amazon.com. Horsemeat Dinners, Shameless Banksters, and the Future of the Human Condition The horsemeat discovered on British dinner tables last week was (1) supplied by a Swedish frozen foods marketer that had (2) outsourced meal preparation to a French company that (3) operates a factory in Luxembourg that (4) uses meat imported, (5) via a Dutch agent, (6) from Romania. At least that is the BBC’s version of the byzantine sequence (other versions differ in minor details). What is clear is that the affair has thrown another tanker-load of gasoline on the British people’s already incandescent rage at the European Union (EU) and its role in undermining their sovereignty. Although the American press has been slow to sense the historic significance of recent events in the UK, British exasperation with the EU has the potential to shake the latter-day world order. A symptom of the strains is that the UK’s pro-EU Prime Minister, David Cameron, has felt obliged to promise the British electorate a straight in-out referendum on British membership of the EU. Cameron probably doesn’t realize it yet but he may just have touched off a geopolitical avalanche. Certainly his referendum is a destabilizing – if in my view highly welcome – move at a time when the world economic order has rarely seemed more precarious. That order is founded on an overtly anti-democratic commitment to globalism on the part of the foreign policy elites of the UK and United States. Yet globalism is not working and the evidence of its failure mounts daily. The UK’s horsemeat woes aside, the United States has plenty of reasons to wonder about globalism’s impact on the American way of life. Just in the last few weeks alone the American public has awakened to the reality that: (1) Boeing, a company that subsumes within it virtually all the once-independent corporations that put a man on the moon in 1969, has become disastrously hollowed out, and (2) the New York Times’s computer system has been repeatedly hacked by the Chinese People’s Liberation Army. As history has repeatedly shown, the course of international politics is a notable exemplar of chaos theory. 
Just as the flapping of a butterfly’s wings in a Brazilian rain forest may trigger a hurricane in Texas, one embattled British politician’s narrowly partisan maneuvering – which is what Cameron’s referendum is all about – could quite possibly unleash transformative change around the world. Certainly this would not be the first time that large global consequences have flowed from narrowly-based initial developments. It was the intervention of a diminutive pistol-wielding Serb nationalist, Gavrilo Princip, that set off World War I. For a more recent, if similarly calamitous, manifestation of the power of chaos theory, consider how different the world would be today had not a devout, soft-spoken Saudi Arabian engineer inherited a significant fortune from his estranged father. That engineer was Osama bin Laden and his inheritance, of course, bankrolled al Qaeda. Like Gavrilo Princip’s pistol shot and bin Laden’s inheritance, Cameron’s referendum could have far-reaching consequences. The difference this time is that – at least in the view of those of us who have been suspicious of globalism all along – most of the consequences will be benign. The fact is that, in the face of East Asia’s relentless pursuit of one-way free trade, Washington’s vaunted strategy of “global leadership” amounts to borrowing from China to save the world from China. A British withdrawal from the EU – which would be the likely result of any honestly structured referendum – may well be just the shock therapy needed to jolt policymakers on both sides of the Atlantic into rejoining the reality-based community. If the British referendum does trigger a rethink in the United States, it would not be the first time in the recent past that the mother country has led American opinion. In the 1960s, for instance, it was the UK far more than the US that created that decade’s famous youth culture. (The Beatles were global superstars by 1963, two years before Allen Ginsberg invented flower power and six years before the boomers converged on Woodstock.) In social policy too, the UK has tended to move earlier than the United States. In the busy parliamentary year of 1967, the British legalized both abortion and homosexual behavior, for instance. That was six years ahead of Roe vs Wade and more than three decades before remaining anti-gay laws in the United States were struck from the statute book. In world affairs too, the British have often led: London established full diplomatic relations with Beijing as early as 1972, for instance – nearly seven years ahead of Washington. Similarly the British were earlier to embrace the fashion for financial and economic deregulation. The ideas of Friedrich Hayek and Milton Friedman had struck vigorous root among the British media and political establishment as early as 1976, and Margaret Thatcher became Prime Minister in May 1979, eighteen months ahead of Ronald Reagan’s 1980 presidential victory. (As late as 1978 incidentally pro-regulation Democrats had won a healthy majority in Congress.) The irony is that Cameron is hardly central casting’s idea of a bomb-thrower. As for the parliamentarians who have forced his hand, they hail mainly from the right of his Conservative Party and see themselves as enthusiastic supporters of global free trade. That said the wider cause of globalism is now thoroughly discredited in the UK. Not the least of its problems is its close association with the bankers of the City of London. 
At a time when countless ordinary Britons have been badly squeezed by economic austerity, the charlatans and outright crooks of the City have continued to award themselves outrageous pay packages. Even Cameron does not conceal his disgust with some aspects of globalism, not least its role in undercutting the British tax base. Feelings have not been soothed by the release of a report documenting how major U.S. corporations minimize their British tax liabilities by channeling their British revenues through tax havens. Among corporations cited were such household names as Starbucks, Google, and Amazon, which despite doing huge business in the UK pay hardly any tax there. Some home-grown British corporations such as Vodafone and Barclays have also been pilloried. Much of the criticism moreover has come from media organizations like the Daily Telegraph and the Daily Mail that have traditionally been pro-business pillars of the Conservative establishment. Top Conservatives generally hope the UK will remain in the EU. The problem is that they are caught between a rock and a hard place. While they believe in maintaining close trade links with the continent, few of them identify with Brussels’s increasingly insistent push for “ever closer union” – political and social union, that is. Thus Baroness Pauline Neville-Jones, a Conservative member of the House of Lords and a former intelligence chief, cites the European justice agenda as a major source of friction. A key issue is the so-called European Arrest Warrant which renders the British government powerless to second-guess extradition requests from other EU nations. As a result, several British citizens have suffered scarifying legal misadventures in, among other places, Greece. “The problem is that the system is based on the fiction that police, courts and prisons are all equally good inside all EU countries,” says Neville-Jones. “That is patently not the case and the result is anomalies which, given UK political culture and the activities of constituency MPs, cannot be shoved under the carpet.” Douglas Carswell, a Hayekian who counts as one of the Conservative Party’s most passionate Euroskeptics, cites the EU’s anti-democratic character as a principal bone of contention. “My American friends have no idea how anti-democratic the EU really is,” he says. “It has been calculated that between 70 percent and 80 percent of our laws are now coming from the EU bureaucracy. In American terms, it is as if federal agencies were able to make laws without reference to Congress or to the states.” Unfortunately, as the prominent Labor Party Europhile Denis MacShane points out, any effort now by the UK to roll back the less welcome aspects of the European “project” comes a little late. “Cameron needs to persuade 26 other governments and parliaments that opening a major treaty revision to satisfy Britain is something to be desired,” he recently commented. “A new treaty would require a nightmarish ratification process involving referendums in countries like Denmark, France, and Ireland that would plunge Europe into years of inward-looking rows at a time when it still hasn’t emerged from the worst economic crisis in its history.” Viewed purely in terms of British party politics, however, Cameron’s gambit is a Machiavellian masterstroke. In an inspired gimmick, he has promised that the referendum will be held only AFTER the Conservative Party is returned to power in a general election expected in 2015. 
As Ed Miliband, the leader of the opposition Labor Party, has already ruled out a referendum, this leaves countless anti-EU Labor voters high and dry. Even the United Kingdom Independence Party (UKIP), a small anti-EU group, has been cunningly sidetracked. Drawing its support mainly from the right, UKIP had previously loomed as an ever larger threat to the Conservatives’ traditional base. Now the Conservatives can credibly allege that a UKIP vote will merely divide the Euroskeptics and let in the Labor Party (a majority of whose leaders are dyed-in-the-wool Europhiles). UKIP stalwarts like Godfrey Bloom, a member of the European Parliament, splutter that Cameron will in the end renege on a straight in-out vote. This might well be a correct reading of Cameron’s instincts but the pressures on Conservative leaders, not least from their own rank-and-file, to follow through with an honest referendum is now intense. Post Your Comment Post Your Reply Forbes writers have the ability to call out member comments they find particularly interesting. Called-out comments are highlighted across the Forbes network. You'll be notified if your comment is called out. Comments Only the snobby Brits would complain about eating horse meat, while a large percent of the world are thinking, ‘ah, you have meat to eat’. “In world affairs too, the British have often led” That’s true they were also the first to force the Chinese to buy opium and then force a war onto them and take their land. Oh, and also first to try and take over India while stealing its resources. And.. Well, need I go on? Jamo, its not so much the ingredients, its the lies and deceit that really gets us snobby Brits going. If you care about your health and happiness it would serve all of us to stick to a clean vegetarian based diet and if you are going to include meat make sure it is grass fed and organic before it hits your pan. Th US has the highest rates of obesity, diabetes, cancer and heart problems and that is directly related to diet and lifestyle @Jamo….Never really understood the snobby thing, but it probably says more about certain Americans than Brits. As for the anti Brit rant, you have managed to dredge back 200 years to come up with the Opium Wars and the occupation of Hong Kong (the New Territories were leased from China). Interesting to note that the Hong Kong Chinese are currently protesting in the streets about loss of democracy. While I’m sure they don’t want to return to British rule, they can’t be that illdisposed to the benevolent condition we left them in. And then India. When Churchill was dining with the Roosevelts a woman guest asked him about the poor Indians. He enquired of her if she meant those proud peoples who had prospered under British rule or the indiginous peoples of North America who had been betrayed, deported, herded in to reservations, cheated of their land, infected with disease and massacred as a matter of policy by successive Washington governments. Not very noble your side of the Atlantic. At least we’re not snobby about your shortcomings. Tracey, many of those issues you called out are not borne of the problems you think they are. The core root of the problem is capitalism. If you go into any wealthy area, you will not find quik-cash payday loan shops, pawnshops, and fast food. However, if you go into any impoverished area, you will see nothing but these places. 
That's because cheaply made processed food is marketed toward the poor, while organic health foods are marketed at a high price that is often out of reach of the average "poor" family (and there are a lot of them in the US right now). I'm not saying that personal responsibility has no place here, but it is a vicious cycle that you might find hard to understand unless you'd experienced it as I have. The system in the US is set up to keep the poor poor, keep the rich rich, and force the middle class to become one or the other. The biggest part of the diet and lifestyle problem stems from the economic problems that are driving people into poverty and leaving them with few food options that are not unhealthy. Poverty and shamelessly opportunistic capitalism are the root causes, and much of the poverty is caused by corruption, immoral business practices, and greed, either directly or indirectly. When I was poor a few years ago (now I'm a well-paid IT professional) I did use pawn shops, and I valued their service to the community. I would pawn my laptop, get some food and beer money, and then when I had some income I would go and buy it back. Everybody wins. I did this several times; it was a cycle, but it wasn't vicious. Most people who remain poor their whole lives are just stupid, lazy, and commit crimes, which then locks them into a vicious cycle… but capitalism didn't do that, they did that. It's not the pawn shop's fault they stole something, pawned it, and got arrested. While the initial reaction to the horse meat scandal is "yuck, I don't want to eat that" and "that's false advertising", a bigger issue is potentially diseased or drugged animals entering the food chain and posing a risk to people's health. Nice whitewash of history. I like how Osama supposedly funded all this mayhem himself. Truth is, yes, he was rich, but he was well funded to the tune of over fifty billion dollars a year from the US and allies when he called his bunch the Mujahideen. Even more curious, he and the 9/11 hijackers came from a country that was our ally, yet that was used as a jingoistic rallying cry to invade other countries. Well done, NWO; either way the taxpayers will pay, and pay, and pay. Commonly omitted from any discussion of "free trade" is the environmental damage caused by it. For example, most car parts that were formerly rebuilt and resold in the United States are now simply discarded, and a new part, shipped from China's government-owned manufacturing complex by petroleum-fueled tanker 10,000 miles away, is sold to the consumer. The same can be said for most electronics, toys, clothes, tools, etc. It's absolute insanity, but it is illustrative of how China's authoritarian government-owned steel, banking, mining, gas, oil, etc., distorts markets and hastens waste and environmental degradation worldwide. This planet has been run on a greedy concept of capitalism for a long while. Thanks to the free-flowing information available on the internet, most ordinary people are beginning to realise that "something's not right". Like many others, I came upon these very concerns as echoed by Jacque Fresco in the 1970s. It really comes down to the concerns raised in documentaries such as "Future by Design", now the "Venus Project", and the Thrive documentary. Why can't things be simple and run on common sense? Things like: 1) Every nation becoming self-sufficient in producing its own food.
All technological and agricultural knowledge should be shared amongst all nations. This should also happen in the areas of clean energy and water production for human needs. 2) We can then move on to this monster called "banks". People should manage their own money through their own banks in each nation. All the present major aspects of banking and finance have to be dismantled and realigned with the new goal of serving humanity as a whole, on any part of this planet. 3) All signed treaties between nations have to be reviewed and rewritten so that no nation is in any position to take advantage of another. This has been the major human problem, as the rich and so-called developed nations have over the years manipulated treaties in their favour through the financial systems of this world. 4) A nation should only sell its excess food if there's a demand for it, and a nation should only import food if it really can't grow its own. As far as the technological side of things is concerned, the production of things like vehicles and technologies in various fields of human endeavour can be arranged as an "exchange of resources" so that the cost of these technologies isn't burdening the importer nations. 5) People should be free to live in any nation for however long they choose, as long as their background is scrutinised in terms of health and any history of criminal activity, past or present. As long as they live in that nation, they pay 50 percent off the normal tax amount to that particular nation. They can then move on to the next nation, and the same rules apply. These are some of my thoughts on the larger question of whether we as a human species are going in the right direction on this planet. It doesn't matter how advanced and fancy any future gadgets a company puts out: if humans don't get food, water (and shelter) for a week at best, most of us would drop like flies. And because we as a species have priced food, water, and shelter, the very essentials for surviving comfortably, in the name of capitalism, we are forever stuck in this vicious, evil "plot". A simple thought: what if every human had affordable access to fresh water, food, and decent shelter, and didn't have to struggle daily, thanks to the abundance created through technologies shared by each nation? Then chances are we as a species would be solving the bigger, more challenging aspects of humanity and the universe. Human life would indeed be on its way to Star Trek ideals. From a 2013 perspective, your ideas seem almost fanciful, but are certainly less fantastic than the notion that "free trade" and free movement of capital can occur between China's authoritarian state-owned system and the U.S. without seriously threatening democracy. We seem to be moving more toward the Chinese model, rather than the other way around. | Mid | [
0.6537530266343821,
33.75,
17.875
] |
1. Field of the Invention The present invention relates to a communication apparatus having a power-saving function. 2. Description of the Related Art Energy Efficient Ethernet (EEE) exists as a power-saving function of Ethernet, which is a wired network complying with Institute of Electrical and Electronics Engineers (IEEE) 802.3. The EEE employs a Low Power Idle (LPI) technique for reducing power consumption (standby power requirement) in a time period (standby state) in which data communication is not performed in a wired interface (corresponding to a physical layer (PHY)) (Japanese Patent Application Laid-Open No. 2011-212946). To use the EEE, it is confirmed that a communication apparatus (for example, a printer) and another communication apparatus (for example, a hub) as a communication partner support the EEE in auto negotiation performed when the communication apparatus and the other communication apparatus establish a communication link. Herein, when even one of the communication apparatus and the other communication apparatus does not support the EEE (also including a case where the EEE is disabled), the EEE cannot be used. When a setting for the EEE is changed in the communication apparatus (more specifically, when the EEE is switched from an enabled state to a disabled state, or the EEE is switched from the disabled state to the enabled state), it is necessary to reperform the auto negotiation. Since the auto negotiation is performed when the communication link is established, the communication apparatus disconnects the communication link, and then reestablishes the communication link to perform the auto negotiation. If the communication apparatus switches between the enabled state and the disabled state of the power-saving function (for example, the EEE) in the wired interface when the communication apparatus receives data, the communication link between the communication apparatus and the other communication apparatus is disconnected. That may bring about a failure in the reception of the data to cause loss of ability to perform predetermined processing. | Mid | [
0.6032608695652171,
27.75,
18.25
] |
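To make the trade-off above concrete, here is a small hypothetical sketch (mine, not part of the patent) of inspecting and toggling EEE on a Linux host with the standard ethtool utility; the interface name eth0 is an assumption. The key behavior mirrors the patent's concern: changing the EEE setting forces the link to drop and auto-negotiate again, so data in flight can be lost.

import subprocess

def show_eee(iface):
    # Query the current EEE status of a wired interface (typically needs root).
    result = subprocess.run(["ethtool", "--show-eee", iface],
                            capture_output=True, text=True, check=True)
    return result.stdout

def set_eee(iface, enabled):
    # Enable or disable EEE. As the patent notes, this re-runs auto
    # negotiation, so the communication link goes down briefly.
    state = "on" if enabled else "off"
    subprocess.run(["ethtool", "--set-eee", iface, "eee", state], check=True)

if __name__ == "__main__":
    print(show_eee("eth0"))  # assumed interface name
    set_eee("eth0", False)   # expect a short link outage here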
Q: python argparse with mandatory input file argument

How do I add a mandatory option to the parser, with prefix -i or --input, to specify the input file to the script? The provided value should be placed into the infile variable.

A: Distilling from the documentation, a minimalistic answer would be:

import argparse

# Create the parser
parser = argparse.ArgumentParser(description='Does some stuff with an input file.')

# Add the argument
parser.add_argument('-i', '--input',
                    dest='infile',
                    type=file,
                    required=True,
                    metavar='INPUT_FILE',
                    help='The input file to the script.')

# Parse and assign to the variable
args = parser.parse_args()
infile = args.infile

Be aware that if the specified file does not exist, the parser will throw an IOError. Removing the type=file parameter will default to reading a string and will let you handle the file operations on the parameter later on. | Mid | [
0.6518324607329841,
31.125,
16.625
] |
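One caveat on the answer above: type=file relies on the Python 2 file builtin and no longer works on Python 3. A roughly equivalent Python 3 sketch (my adaptation, not part of the original answer) uses argparse.FileType, which opens the file for you and prints a clean error if it does not exist:

import argparse

# Create the parser
parser = argparse.ArgumentParser(description='Does some stuff with an input file.')

# Add the mandatory argument; FileType('r') opens the named file for reading
parser.add_argument('-i', '--input',
                    dest='infile',
                    type=argparse.FileType('r'),
                    required=True,
                    metavar='INPUT_FILE',
                    help='The input file to the script.')

# Parse and assign to the variable; infile is an open file object
args = parser.parse_args()
infile = args.infile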
The Battle for Southern California sets the stage for the LA Galaxy match against Mexico's Club Tijuana -- one that will feature Landon Donovan and Omar Gonzalez versus fellow U.S. international Herculez Gomez -- in the decisive second leg of their CONCACAF Champions League Quarterfinal series. See how the two clubs vie for players and fans throughout Southern California, and why this rare encounter matters. MLS Insider, presented by Adidas, dives into the storylines behind Major League Soccer. Your favorite players. Your favorite teams. Unprecedented access. | High | [
0.6966824644549761,
36.75,
16
] |
Time to Choose Makes Direct Connections between Local Actions and Global Impacts We're all consumers. We eat, we drive, we heat our homes. For most of us, our food does not grow in our own backyards, our cars are not powered by clean-burning fuels, and our homes are not warmed in chilly November by rooftop solar panels. We consume to make things go, and that part's okay – we probably always will. According to film writer and director Charles Ferguson, it's not the "go" that gets us into trouble – it's the choices we've been making to fuel the "go" that have created devastating problems both locally and around the planet. Time to Choose, Ferguson's third critically acclaimed documentary, accomplishes much more than any other climate documentary of its kind. Presented in themed sections dedicated to the acquisition of major fuel sources, urban sprawl, deforestation, and industrial agriculture, Time to Choose is as much about the beauty of earth as it is about the effects of our daily actions on our global and local ecosystems. "If we continue with business as usual, warming the planet further, by the middle of this century, we could trigger runaway climate change, a process beyond human control," reports the film's narrator. The film shows that we don't even have to travel thousands of miles to witness severe destruction – deforestation has been taking place in the form of mountaintop removal for coal in West Virginia for decades. The process is messy, it's ugly, and it kills – plants, animals, workers. But coal is not our only energy choice. "Phenomenal things are happening," says film subject Steven Chu, former U.S. energy secretary and a Nobel Prize winner in physics. "Technology is developing [fast] … and the cost of renewable energy is plummeting." As one film critic states, "It's hard to watch Charles Ferguson's deft climate change documentary Time to Choose and think there's any other choice." Following the film, there will be an engaging panel discussion featuring Benedictine Sister Pat Lupo, Erie County Executive Kathy Dahlkemper, and Erie Art Museum Director John Vanco. Prior to the film, there will be a special screening of a six-minute proof-of-concept video for Unearth, a feature film directed by Film at the Erie Art Museum host John Lyons. – Ti Sumner Film at 7 p.m., followed by panel discussion // $5 // Erie Art Museum, 20 East 5th Street // erieartmuseum.org or 459.5477 // For more information on the film, visit timetochoose.com/#paths-to-change. | Mid | [
0.650124069478908,
32.75,
17.625
] |
Summary James Sudakow, author of the book Picking the Low Hanging Fruit, is this episode's guest; he and Mike talked about the language people use in the corporate world. Show Notes Every organization or group has acronyms, buzzwords, or phrases that are used on a daily basis. On this episode, James and Mike discuss how these can benefit organizations – but sometimes cause disconnects in communication. James also reveals some buzzwords that he thinks people should stop using by the end of the year, and how an individual can shift away from the habit of using such words. Let's "open the kimono" on this episode and learn about: People's intentions when they choose to use big words (8:20). How to stop this practice in organizations firmly entrenched in such language (13:25). James' list of buzzwords that people should retire from use by the end of 2016 (2:16), and the most obscure phrases he's heard (29:28). An open letter James wrote on his blog tackling the ridiculous lingo being used in the corporate world (15:39). Thanks for listening. If you want more exclusive content, then check out The Productivityist Podcast at Patreon! Want to help in other ways? Leave us a rating and review in iTunes and the podcasting platform you're using so we can make the show better for you. What is The Productivityist Podcast: A Time Management and Personal Productivity Talk Show? Hosted by productivity strategist Mike Vardy, The Productivityist Podcast is a weekly show that discusses tips, tools, tactics, and tricks that are designed to help you take your productivity, time management, goals, to-do lists, habits, and workflow to new heights - both at work and at home. If you're looking to focus your efforts on getting the right things done and start living the good life, then this weekly conversational podcast – crafted in the tradition of Slate's Working, Back to Work, and HBR IdeaCast – is for you. All audio, artwork, episode descriptions and notes are property of Mike Vardy: Productivity Strategist | Time Management Specialist | Creator of TimeCrafting, for The Productivityist Podcast: A Time Management and Personal Productivity Talk Show, and published with permission by Transistor, Inc. | High | [
0.660220994475138,
29.875,
15.375
] |
Bramich Wins 2018 Australian Supersport 300 Championship The Australian Supersport 300 Championship is renowned for a fight right to the finish, and the last round of the series was no exception, but it was Tom Bramich (JLT, Yamaha YZF-R3) who held on to win and become the official 2018 Australian Supersport 300 Champion in the Yamaha Motor Finance Australian Superbike Championship at Phillip Island Grand Prix Circuit. Behind him in the standings for round 7 was a log-jam of rising Queenslanders, with Gold Coaster Zac Levy (Puma RV's, Yamaha YZF-R3), Seth Crump (Rock Oil, KTM RC), Locky Taylor (Shark Leathers, Yamaha YZF-R3) and Tayla Relph filling second place through to fifth. Relph's result was all the more significant as it contained her first podium (pictured): a third in the second race on her Baldivis Forklifts Yamaha YZF-R3 after leading the race for some of the way. Levy took second in the championship, while another young northerner, Oli Bayliss (Cube Racing, Kawasaki Ninja), was fourth in the overall standings, ahead of Crump and Taylor in fifth and sixth, respectively. Relph finished the Australian Supersport 300 Championship in 8th place. | Mid | [
0.649874055415617,
32.25,
17.375
] |
<!-- ko if: data -->
<div data-bind="css: chartClasses('container')">
    <button data-bind="css: chartClasses('export-button'), if: $component.filename, click: $component.export">Export</button>
    <div data-bind="chart: { data: data, minHeight: minHeight, format: format, renderer: renderer }"></div>
</div>
<!-- /ko -->
<!-- ko ifnot: data -->
<empty-state></empty-state>
<!-- /ko --> | Mid | [
0.605316973415132,
37,
24.125
] |
[A light-cured acrylic adhesive for fixing resin retention devices to the wax pattern]. A light-cured acrylic adhesive for fixing resin retention devices to the wax pattern was prepared. The adhesive consisted of trimethylolpropane triacrylate, 2-ethylhexyl acrylate, benzoin methyl ether, p-dimethylaminobenzaldehyde, and p-methoxyphenol. The adhesive could be cured within 20 sec not only by a UV photo-curing unit but also by a visible-light source with a xenon lamp. The adhesive and retention beads burned out after about an hour in an electric furnace at 400°C. Metal specimens with retention devices were cast in Ag-Pd-Cu-Au alloy using two types of retention-bead adhesive. The light-cured adhesive was superior to the conventional one in handling and some other properties. This adhesive may be used to fabricate composite-veneered prostheses with minimal errors in laboratory procedure. | Mid | [
0.645598194130925,
35.75,
19.625
] |
For Such A Time As This! People Of Faith – Barnabas Barnabas was the name given by the apostles to an early convert to Christianity in Jerusalem; his name was formerly Joseph. The Barnabas story can be found in the Book of Acts. Barnabas was one of the first to sell possessions to help the Christian community in Jerusalem. Acts 11:24 describes Barnabas: "For he was a good man, full of the Holy Spirit and of faith – and a large number of people were added to the Lord." Barnabas was an Encourager, and was one of the most quietly influential people in the early days of Christianity. Barnabas was called an Apostle, even though he was not one of the twelve. His good reputation in Jerusalem may have influenced the Apostles to select him as Paul's companion for missionary work. Barnabas recruited Paul, now a Christian, to help in Antioch, and both of them stayed there for a year, teaching a large number of Christians (Acts 11:26). When famine hit Jerusalem, Barnabas and Paul were sent with relief funds. Barnabas was commissioned with Paul to preach beyond the boundaries of Antioch (Acts 13:2-3). | Mid | [
0.6326034063260341,
32.5,
18.875
] |
This 15 year old well appointed home with a two car garage is located in Robson Ranch and you have access to all the amenities - indoor/outdoor swimming pools, 18 hole golf course, work-out gym, tennis courts, softball field, pickle ball courts, computer/internet room,... NEWLY REFURBISHED-NEW-The condo has a large detached double garage, with laundry area and room to park your boat in the driveway. The patio area located between the condo and the garage has great potential for enjoying BBQ's or just reading a book. 1.5 mile from River, and... We prefer a 3 month January through March rental. 55' of Riverfront on the Colorado River just minutes from Laughlin, Nv. Gorgeous Views from two large decks, living room, dining room and kitchen. Spacious 2500 sq. ft. 3 bed/ 2 3/4... The house is only nine years old,and is in a quiet gated community.The majority of the homes are vacation homes.They have good hiking trails that you can walk to.Great golf courses and spas,that we can leave a list of.Great restaurants,an outlet mall for shoppers with a theatre... Nice clean home, close to all amenities, shops, doctors, hospital, restaurants. Set in quiet 55+ Mobile Home park, 6 hole, 3par Golf Course, club house with activities. No Pets. Sorry we are booked till end April 2020!!! Available starting June 2019 but booked January thru April 2020. 1343 s.f. patio home on the 6th hole of Superstition Springs Golf Course. Master bedroom has a comfortable king bed, walk-in closet and walk-in shower. 2nd bedroom has a queen bed and shower/tub combo. 1/2... We currently have vacancy for the months of October through March time frame for our one and two bedroom furnished, apartment homes. If you move in by October 2017, we will credit back the application and administration fees. We also have unfurnished apartments available at a... Enjoy some well-earned relaxation with your loved ones thanks to this Southern Arizona vacation rental! From amenities like a private pool and hot tub to a great location near the Catalina Mountains, this semi-custom home offers everything you need to relax. Nestled five... Comfortable accommodations and one of Arizona's finest cities awaits with this dog-friendly condo! Enjoy perks like a shared hot tub and pool, as well as a fantastic location for shopping, dining, hiking, and more! Your rental lies in the Catalina Foothills 12 miles north of... Bring your family, friends, and even your dogs with you for a relaxing retreat at this Southern Arizona home. Enjoy comfortable accommodations, nearby golfing, and a great location near both hiking and one of the state's finest cities! Everyone will love exploring the Dove... Golfers, hikers, and relaxation seekers alike will love this spacious home in the foothills of the Santa Catalina Mountains! Boasting an incredible array of amenities, including a private pool, onsite golf, and a superb location just 20 miles north of Tucson, this rental is... Surround yourself with spectacular desert landscapes with a stay at this sunny Tucson vacation rental! Located just two miles from Sabino Canyon and half a mile from the Ventana Canyon Golf & Racquet Club, this dog-friendly condo makes it easy to enjoy the very best of Arizona's... Come to Oro Valley for your next vacation and stay in this spacious three-bedroom, two-bath home, which offers not just a lovely home base, but immediate access to a state park and close proximity to downtown. Catalina State Park is right across the highway where you can... 
Enjoy the beautiful desert weather from the comfort of this elegant Oro Valley vacation home! With an upgraded private pool (heating available for a daily fee), a propane firepit, and a furnished outdoor dining area, this spacious house has everything you need for a spectacular... Revel in the beauty of the Santa Catalina Mountains with a stay at this elegant Tucson condo. Located less than two miles from the Sabino Canyon Recreational Area, 11 miles from downtown, and right across the street from the Ventana Canyon Golf & Racquet Club, this upscale... This three-bedroom, two-bath Oro Valley condo is waiting for your group of up to six to come settle in for an unforgettable sun-filled vacation. You'll enjoy a shared pool and hot tub, not to mention easy access to golf and the nearby state park. Golfers will love the... Vacation in the shadow of the Santa Catalina Mountains with a stay at this fully furnished Oro Valley condo! Complete with a shared pool, hot tub, and a great location on the outskirts of Catalina State Park, this rental puts the best of the Copper State only minutes from your... This beautiful renovated Oro Valley condo is an ideal home base for a family or group of friends who want to experience the beauty of Arizona with a few additional perks. Enjoy a shared pool and hot tub, on-site golf, and easy access to other outdoor activities. The condo is... Enjoy a modern, open kitchen, private pool, hot tub, and furnished patio with views of the adjacent golf course at this three-bedroom Oro Valley escape with space for up to six guests. Up to two family dogs are also welcome to join you here for an additional nightly fee. This... Your family, friends, and up to two dogs will all love a break at this cozy Arizona condo. From the beauty of the Catalina Mountains to myriad shops and restaurants in Tucson, you'll have plenty of ways to spend your time. Tucked away in Oro Valley, this home for six enjoys... This welcoming Oro Valley home offers a great location and resort-style perks! Spend your days exploring Arizona's breathtaking mountains and the exciting city of Tucson, then return home to enjoy onsite golf, a shared pool and hot tub, and more. This home for four lies in... You deserve the best for your Arizona vacation - luckily, this condo at Vistoso Resort has you covered! Featuring access to a shared pool, hot tub, and on-site golf, you'll have all the essentials for a stay in Oro Valley. Golfers will be pleased to find a scenic golf course... This two-bedroom Rancho Vistoso condo welcomes you and your guests to experience Oro Valley and revel in amenities like a shared pool and hot tub, as well as on-site golf. To make your stay even better, your two favorite dogs are also invited for a small nightly fee! You'll... Start your Arizona days at this charming, two-bedroom home complete with a well-equipped, full kitchen, private backyard, and furnished patio, all conveniently close to shops, restaurants, and the gorgeous, 18-hole Views Golf Club. Located just 20 miles north of Tucson, this... This two-bedroom condo in Tucson offers a great location for families and visiting snowbirds, as well as access to a shared pool and hot tub and close proximity to dining. Surrounded by the Santa Catalina Mountains, you'll be nestled at the edge of Tucson National Golf... Golfers take note! This beautiful, two-bedroom condo sits within the Golf Club at Vistoso, nestled between the Tortolita Mountains to the west and the Catalinas to the east. 
Enjoy all the comforts of home as well as shared community facilities including a swimming pool, hot tub,... Enjoy some well-earned rest and relaxation in this resort condo in southern Arizona. Mountain trails, shopping, and dining are only a quick drive away, but with perks like a shared pool and hot tub, as well as an on-site golf course, you can have a great time by staying... Enjoy a restful retreat at the Vistoso Resort Casitas with a stay at this cozy and recently updated condo! Boasting an incredible variety of amenities - including a shared pool, a hot tub, and onsite access to the Vistoso's award-winning golf course - this rental is bound to be... If you're coming to Oro Valley, make this comfy, Southwestern-themed condo yours and enjoy a shared pool and hot tub, on-site golf, and more. Not only will you have leisure activities at your fingertips, you'll also have a well-appointed home base perfect for relaxing and... Beat the winter blues with a trip to sunny Arizona and this lovely Oro Valley condo! Located on the grounds of the Vistoso Resort, home to a shared pool and hot tub, as well as one of Tucson's most popular golf courses, you'll have everything you need for a refreshing and... This two-bedroom, two-bath condo is on the first floor and has been tastefully remodeled in a muted southwest color scheme. This condo is very close to the shared pool, hot tub, clubhouse, and BBQ area, yet still private and quiet. Highlights include new stainless steel... Fresh air, gorgeous views, and a host of resort perks lie in this welcoming condo in Southern Arizona. From a shared pool and hot tub to nearby hiking, there's no shortage of fun waiting in the Oro Valley! This condo for four lies only six miles north of peaceful Oro Valley.... This second-floor condo has stunning views of the Catalina and Tortolita mountain ranges and is ideally located in the desirable Rancho Vistoso area of Oro Valley, complete with a shared pool and hot tub. You'll love to sit on the balcony and watch the mountains turn from shades... Take your next country club retreat in the shadow of the Santa Catalina Mountains with this lovely Oro Valley home! Located just steps away from the Golf Club at Vistoso, this upscale rental lets you enjoy a game on the green anytime you want - and even includes a shared pool... Located on the grounds of Mira Vista Resort, a well regarded and top rated AANR clothing optional / Nudist resort, this fully furnished luxury condo is a premium location, upper floor unit with both city lights and mountain sunset views in a beautiful 30 acre desert... Community pools and hot tubs short walk from our home. Area includes beautiful parks, superb restaurants, first class shopping as well as numerous golf courses. Video and more pictures available on request. Also tennis courts, pickle ball, free 9 hole golf course, basketball... This is a 3 bd, 2 ba very spacious home, fully equipped kitchen, big comfortable couch, it is your home away from home. Centrally located in the West Valley. All are welcome! *24 minutes from Phoenix Sky Harbor Airport *16 minutes from University of PHX Stadium *Fast... Nestled in the foothills of the Catalina Mountains, it sits at one of the highest points in the community at the end of a roundabout. It’s gated and fenced for security. The home and grounds have a spectacular view of the city and its four surrounding mountain ranges. It... Beautiful home on the river. Three bedrooms on main level with full kitchen and living area. 
Large living area downstairs with full kitchen and wet bar. Opens up to a large patio at the water's edge with a private dock and great views! THE CASA CALIENTE - SCOTTSDALE, ARIZONA is strategically located next to the prestigious upscale Kierland Area of Scottsdale, Arizona. The complex (we call it a "mini-resort") is located only two blocks west of Scottsdale Road on "Acoma". A modern two bedroom, two full bath,... If you're looking for a Custom Santa Fe home with a Western interior decor and AMAZING lake views, look no further. The home is kept immaculate with a kitchen filled with everything you might need for a Holiday Dinner. There are 3 bedrooms/2 baths, two of the bedrooms have... This modern chic three bedroom home is located in the quiet Arrowhead and Peoria 83 Neighborhood. (Veteran owned). The location allows for close access to the freeway (less than 0.75 mi from the 101 freeway and popular venues, like the baseball stadium), while providing guests with... The house is located in the quiet Playa Del Rio area.. you can hear the boats and smell the river.. walk out the front door and you can be at the river's edge in 5 min.. beautiful community park.. golf course, great river walk.. beautiful dog park, Laughlin is about 9... If you're looking for a home to rent for a few or more months at a time, then MonthlybyOwner is your one-stop resource for locating that perfect rental. We specialize in connecting owners and renters in the monthly rental market. Nowhere else is there a specialized website dedicated to monthly rentals. | Mid | [
0.56595744680851,
33.25,
25.5
] |
Q: Why is there more space on the bottom of the div?

I have a header with a div inside of it, and for some reason there is more space under the div than above. I tried setting all the padding to 0 in the hope of seeing which one was causing it, but it doesn't seem to be padding at all.

HTML

<header>
    <div class="logo">
        <div class="centrDivInDiv">
            <h1>Welcome to Planet Earth</h1>
            <p>The best planet after Pluto.</p>
        </div>
    </div>
</header>

CSS

body {
    margin: 0px;
}
h1 {
    margin-bottom: 0px;
}
header {
    background-color: #E74C3C;
    padding: 10px;
}
header p {
    line-height: 0%;
}
.logo {
    line-height: 80%;
    padding: 10px 20px 10px 20px;
    margin: 0px;
    background-color: #2C3E50;
    display: inline-block;
}
.logo p {
    margin-top: 24px;
}
.centrDivInDiv {
    display: table-cell;
    vertical-align: middle;
    margin: 0;
}

JsFiddle

A: The extra space is the baseline gap: an inline-block element sits on the text baseline by default, which leaves room below it for descenders. Add vertical-align: middle to your .logo div (and you can remove it from .centrDivInDiv):

.logo {
    line-height: 80%;
    padding: 10px 20px 10px 20px;
    margin: 0px;
    background-color: #2C3E50;
    display: inline-block;
    vertical-align: middle; /* stops the box from resting on the text baseline */
}

jsFiddle example | Mid | [
0.582338902147971,
30.5,
21.875
] |
AS HIS tearful lawyer likes to remind the press, the mother of Pervez Musharraf, Pakistan’s former military dictator, is ill, and is in Dubai. This offers Pakistan’s government and judiciary an excuse to make a “humanitarian” gesture. They could release him from the comfortable house arrest in Islamabad where he is awaiting trial on a number of charges, and let him scuttle out of the country. Instead, Nawaz Sharif, installed as prime minister after an election in May, told parliament this week that Mr Musharraf would be charged with treason—an offence punishable by a life sentence or death. In 1999 the then General Musharraf led a coup that ended Mr Sharif’s last stint as prime minister. He ran the country as president for nine years. He is accused of complicity in the killing of a separatist leader in Balochistan and of Benazir Bhutto, a former prime minister, and of illegally jailing judges; but the accusation of treason seems most clear-cut, since he abrogated the constitution and declared a state of emergency. There are several reasons why trying him looks like a bad idea. It reeks of the vengefulness that has poisoned Pakistani politics. It risks distracting Mr Sharif from the jobs he was elected to do—stop the power cuts, fix the failing economy and bring peace to a country still ravaged by terrorist violence. But the strongest argument against a trial is that it threatens to cause a bruising confrontation with the armed forces and thus encourage yet another military coup. The regular intervention of soldiers in Pakistani politics has undermined civilian institutions, encouraged the growth of terrorist organisations, distorted spending priorities and poisoned relations with India. That is also why trying Mr Musharraf is the right thing to do. Pakistan’s soldiers need to be discouraged from intervening ever again. The best way of doing that is for democratically elected leaders to assert their authority over military ones. Mr Sharif has a chance to hold a military dictator to account in a country in which military dictators have enjoyed impunity. The chances of a repeat of 1999 are small. Fourteen years ago Mr Sharif led a corrupt and inept regime whose collapse many Pakistanis and foreigners cheered. Now he is newly re-elected, with a strong mandate after a healthy turnout. Pakistanis are inclined to believe that he has learned from his mistakes, and he has not yet had time to prove them wrong. Few at home would support a coup. Mr Musharraf, who returned to Pakistan to take part in the election, believed he was popular there. He soon found out he was not. The army’s current chief, General Ashfaq Kayani, is on his way out. A coup would also embarrass America, which provides Pakistan’s armed forces with huge amounts of cash. Mr Sharif is managing the politics of the trial carefully. Aware of the danger that it looks like a personal vendetta, he has sought support from other parties. He has also said that he holds Mr Musharraf solely responsible for the coup, which may have been a signal that he is not intending a broader purge. Exile and the kingdom And if Mr Musharraf is convicted? This newspaper has always opposed the death penalty, but a prison sentence would remind generals that mounting coups has consequences. Mr Sharif may well do a deal with the army to allow the convicted general to slip away to some Gulf state. A poor second best to a spell in jail. But if that were part of the price of letting Pakistanis see an army chief submit to the courts, so be it. | Mid | [
0.58679706601467,
30,
21.125
] |
Natasha Jen's "Design Thinking is Bullshit" Argument At this summer's 99U conference, Pentagram partner and designer Natasha Jen gave a presentation with an eye-catching title: "Design Thinking is Bullshit." This was actually the second time she'd given the talk—back in May Jen tweeted "Finally let it out of my system" after presenting it at HOWlive in Chicago—but video of it was never released, so few got to see it. This month, however, video of the 99U talk was finally made public: What do you think? On the one hand, I think Jen is correct that distilling a complicated process into an easily replicable formula isn't always possible; on the other hand, I see the introduction of "Design Thinking" to businesses as a positive step towards fresh thinking. I had always assumed "Crit" was built into the process, but perhaps those of you with direct experience of a design-unsavvy business first attempting to integrate the process could speak to this. I wholeheartedly agree with Jen's assertion that design is not merely a box to be checked, and I think the idea that anyone can be a designer will of course be anathema to practicing professionals. I also support her push for evidence, and on this note, if you're not already reading Design That Matters' series of posts, you ought to be! 9 Comments If she understood "prototype, test" in Design Thinking as a linear, one-time event, i.e., one prototype, one test and you're done, then she doesn't understand Design Thinking at all. Creative people also work in different ways. Judging someone's creativity and the quality of their design by how tidy or messy their working environment is seems as comical as criticizing a vacuum cleaner design because it took over 1,000 prototypes to get right, if one knows design. As a design thinking practitioner, I tend to agree with most of what she says; however, she's not talking about design thinking, she's talking about the deviations that stem from good intent. Over the years, I found that it was really easy to "debunk" design thinking done badly, but that's not because of the "design thinking" part, rather the "done badly". No design thinking professional would claim these processes are linear; that's just their depiction, and if you've done the least actual practice you know that it *is* iterative, and that these "steps" are rather modes between which one navigates. Critique is also embedded in a well-mastered design thinking approach, almost at each step actually. As for the evidence: all designers use design thinking, though not many recognise it, simply because, as Jen puts it, design thinking is just a way to make design (period) affordable to non-designers and to replicate some of what makes designers good at what they do. So the evidence supporting design thinking is actually the very same evidence that supports design (period). Even Jen's office. So in a nutshell: she's right, but she is chasing the wrong prey, and I can understand the frustration of an acclaimed professional seeing their field dissolve in corporate gimmickry. The real question is: how come design thinking standards are so low? How come any big consultancy can claim to use design thinking after having read a couple of books or conducted a workshop? That's the gimmickry side of design thinking. Those who know what they are doing, believe in these principles, and are successful, have the evidence.
(oh, and by the way, there also exist bad designers, but this does not mean that it makes sense to bluntly shoot at design as a whole...) And just to be precise: "design thinking is bullshit"... yes, to designers: they don't need it because they already do it. As for the rest of the world, design thinking is helpful when done properly, but even then it is no silver bullet. From where I stand, design thinking is for non-designers to get a handle on design. I see the value in promoting inclusivity in the act of design by providing a framework for nonprofessional designers (yes, everyone is a designer). I've led several projects where the lack of post-its early in the design process caused anxiety among clients because they didn't think they were "doing design right". I think the struggle here isn't a question of whether or not design thinking is bullshit, but of how it is deployed and whether it needs to be evolved for its purpose. As a student I find this illuminating. Every time the term "design thinking" comes up in classes, I have struggled to see how it fits with my process. It feels like a monolith, with very few entry points to reach a clear feeling of what to do in a given situation. Mostly I end up turning away from ideas presented under the banner of design thinking, and instead pursue more opportune and tangible strategies. This talk is the first time I have heard design thinking directly critiqued, and it is the first time I have felt invited in to be a part of the group that defines what it is. This is such a commercial motto. It's like designing for Millennials (furniture design) or Big Data or maybe innovation. But it actually shows the way people outside of the design world (all kinds of designers, firms...) | Mid | [
0.608478802992518,
30.5,
19.625
] |
Derivation of All Attitude Error Governing Equations for Attitude Filtering and Control. This article presents the full analytical derivations of the attitude error kinematics equations. This is done for several attitude error representations, obtaining compact closed-form expressions. Attitude error is defined as the rotation between the true and estimated orientations. Two distinct approaches to attitude error kinematics are developed. In the first, the estimated angular velocity is defined in the true attitude axes frame, while in the second, it is defined in the estimated attitude axes frame. The first approach is of interest in simulations where the true attitude is known, while the second approach is for real estimation/control applications. Two nonlinear kinematic models are derived that are valid for arbitrarily large rotations and rotation rates. The results presented are expected to be broadly useful to nonlinear attitude estimation/control filtering formulations. A discussion of the benefits of the derived error kinematic models is included. | High | [
0.7161961367013371,
30.125,
11.9375
] |
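The abstract does not reproduce the equations themselves, but under one common convention (an illustrative assumption on my part, not taken from the article) the multiplicative attitude error for direction cosine matrices and its kinematics can be written as:

\tilde{A} \triangleq A \hat{A}^{\mathsf{T}}, \qquad \dot{A} = -[\omega\times]\,A, \qquad \dot{\hat{A}} = -[\hat{\omega}\times]\,\hat{A} \quad\Longrightarrow\quad \dot{\tilde{A}} = -[\omega\times]\,\tilde{A} + \tilde{A}\,[\hat{\omega}\times]

Here A and \hat{A} are the true and estimated attitude matrices, \omega and \hat{\omega} the true and estimated angular velocities (the latter resolved in the estimated axes, matching the article's second approach), and [\cdot\times] the skew-symmetric cross-product matrix.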
"My product isn't quite there yet." You've said this before. We all have. Anyone working on getting their first product out to market will often have the feeling that their product isn't quite ready. Or even once it's out and being used, nothing will seem as perfect as it could be, and if you only did X, Y, and Z, then it would be a little better. In a functional case, this leads to a great roadmap of potential improvements, and in a dysfunctional case, it leads to unlaunched products that are endlessly iterated upon without a conclusion. About a year ago I visited Pixar's offices and learned a little about this problem, and I wanted to share this small story below: Over at Pixar… Matt Silas (@matty8r), a long-time Pixar employee, offered to take me on a tour of their offices and I accepted his gracious offer. After an hour-long drive from Palo Alto to Emeryville, Matt showed up while I was admiring a glass case full of Oscars, and the full tour began. I didn't take great photos, so here are some better ones so you can see what it's like: Venturebeat, Urbanpeak. I've always been a huge fan of Pixar – not just their products, but also their process and culture. There's a lot to say about Pixar and their utterly fascinating process for creating movies, and I'd hugely recommend this book: To Infinity and Beyond. It gave me a kick to know that Pixar uses some very collaborative and iterative methods for making their movies – after all, a lot of what they do is software. Here are some quick examples: Pixar's teams are ultimately a collaboration of creative people and software engineers. This is reflected at the very top by John Lasseter and Ed Catmull. The process of coming up with a Pixar movie starts with the story – then the storyboard – then many other low-fidelity methods to prototype what they will ultimately make. They have a daily "build" of their movies in progress so they know where they stand, with sketches and crappy CGI filling holes where needed – compare this to traditional moviemaking, where the film only comes together at the end. Sometimes, as with the original version of Toy Story, they have to stop doing what they're doing and restart the entire moviemaking process since the whole thing isn't clicking – sounds familiar, right? The other connection to the tech world is that Steve Jobs personally oversaw the design of their office space. Here's a great little excerpt on this, from director Brad Bird (who directed The Incredibles): "Then there's our building. In the center, he created this big atrium area, which seems initially like a waste of space. The reason he did it was that everybody goes off and works in their individual areas. People who work on software code are here, people who animate are there, and people who do designs are over there. Steve put the mailboxes, the meeting rooms, the cafeteria, and, most insidiously and brilliantly, the bathrooms in the center—which initially drove us crazy—so that you run into everybody during the course of a day. [Jobs] realized that when people run into each other, when they make eye contact, things happen. So he made it impossible for you not to run into the rest of the company." Anyway, I heard a bunch of stories like this and more – and as expected, the tour was incredible, and near the end, we stopped at the Pixar gift shop. There, I asked Matt a casual question that had an answer I remember well, a year later: Me: "What's your favorite Pixar movie?" Matt: *SIGH* Me: "Haha! Why the sigh?" Matt: "This is such a tough question, because they are all good.
And yet at the same time, it can be hard to watch one that you've worked on, because you spend so many hours on it. You know all the little choices you made, and all the shortcuts that were taken. And you remember the riskier things you could have tried but ended up not trying, because you couldn't risk the schedule. And so when you are watching the movie, you can see all the flaws, and it isn't until you see the faces of your friends and family that you start to forget them." Wow! So profound. A company like Pixar, which undoubtedly produces some of the most beloved and polished experiences in the world, ultimately still cannot produce an outcome where everyone on the team thinks it is the best. And after thinking about why, the reason is obvious and simple: to have the foresight and the skill to refine something to the point of making it great also requires the ability to be hugely critical. More critical, I think, than your ability to even improve or resolve the design problems fast enough. And because design all comes down to making a whole series of tradeoffs, you ultimately don't end up with exactly what you want. The lesson: You'll always be unhappy What I took away from this conversation is that many of us working to make our products great will never be satisfied. A great man once said, your product is shit – and maybe you will always think it is. Yet at the same time, it is our creative struggle with what we do that ultimately makes our creations better and better. And one day, even if you still think your product stinks, you'll watch a customer use it and become delighted. And for a brief moment, you'll forget what it is that you were unhappy about. Special thanks to Matt Silas (@matty8r, follow him!) for giving me a unique experience at Pixar. (Finally, I leave you with a photo of me posing next to Luxo Jr.) | Mid | [
0.542986425339366,
30,
25.25
] |
What Does Your Tire Pressure Icon Mean? The tire pressure indicators are in your vehicle to alert you when tire pressure is no longer in the recommended range. Here are a few reasons the indicator lights may be on, from the Seidel Hyundai service team. If you drove over a nail or some other obstruction that is still stuck in the tire, air is leaking and the pressure is dropping. Never remove the obstruction yourself, because the tire could go flat immediately. Let the tire technicians remove the item and plug the tire if it is still in good shape. Otherwise, you can find tires for sale at our Reading, PA dealership. When the pressure rises or drops by a few pounds, the indicators will go off. If the weather is too cold, the drop in temperature can cause the pressure in the tire to drop and set off the indicator until the weather warms later in the day. The opposite occurs in extreme heat. Low pressure in a tire can be dangerous. Visit the Seidel Hyundai service facility to get the concern diagnosed properly. | Mid | [
0.564393939393939,
37.25,
28.75
] |
Being unable to resist the temptation to look, I twitched the curtain and saw two women – one elderly woman in her 50s wearing tracksuit bottoms and a young girl in an all-pink tracksuit – confronting a helmeted pizza delivery man. It would seem that they had been waiting for their pizza for over an hour and were extremely angry that the man would not give them their pizza. So while shouting phrases like “Don’t you f**king touch my daughter” (who had the said pizza), they chased the pizza delivery man down the street, and pushed over his motorbike. At some point, presumably happy that they had the pizza, they allowed the pizza delivery man to motorbike away, and went back inside their house. Two minutes later, he biked back and parked at the far end of the street. I have no idea why. | Low | [
0.46794871794871706,
27.375,
31.125
] |
What is the first thing that comes to mind when you read ingredients such as "partially hydrogenated oil" and "hydrogenated oil" on a food label? Do you think of heart disease, heart health, or atherosclerosis? Most people probably do not. As we uncover what hydrogenation is and why manufacturers use it, you will be better equipped to adhere to healthier dietary choices and promote your heart health. Hydrogenation: The Good Gone Bad? Food manufacturers are aware that fatty acids are susceptible to attack by oxygen molecules because their points of unsaturation render them vulnerable in this regard. When oxygen molecules attack these points of unsaturation, the modified fatty acid becomes oxidized. The oxidation of fatty acids makes the oil rancid and gives the food prepared with it an unappetizing taste. Because oils can undergo oxidation when stored in open containers, they must be stored in airtight containers and possibly be refrigerated to minimize damage from oxidation. Hydrogenation poses a solution that food manufacturers prefer. When lipids are subjected to hydrogenation, the molecular structure of the fat is altered. Hydrogenation is the process of adding hydrogen to unsaturated fatty-acid chains, so that hydrogen atoms connect at the points of unsaturation, yielding a more saturated fatty acid. Liquid oils that once contained more unsaturated fatty acids become semisolid or solid (upon complete hydrogenation) and behave like saturated fats. Oils initially contain polyunsaturated fatty acids. When the process of hydrogenation is not complete (for example, when not all carbon-carbon double bonds have been saturated), the end result is a partially hydrogenated oil. The resulting oil is not fully solid. Total hydrogenation makes the oil very hard and virtually unusable. Some newer products are now using fully hydrogenated oil combined with nonhydrogenated vegetable oils to create a usable fat. Manufacturers favor hydrogenation as a way to prevent oxidation of oils and ensure longer shelf life. Partially hydrogenated vegetable oils are used in the fast food and processed food industries because they impart the desired texture and crispness to baked and fried foods. Partially hydrogenated vegetable oils are also more resistant to breakdown at extremely hot cooking temperatures. Because hydrogenated oils have a high smoke point, they are very well suited for frying. In addition, processed vegetable oils are cheaper than fats obtained from animal sources, making them a popular choice for the food industry. Trans fatty acids occur in small amounts in nature, mostly in dairy products. However, the trans fats that are used by the food industry are produced by the hydrogenation process. Trans fats are a result of the partial hydrogenation of unsaturated fatty acids, which causes some of their double bonds to take on a trans configuration rather than the naturally occurring cis configuration. Health Implications of Trans Fats No trans fats! Zero trans fats! We see these advertisements on a regular basis. So widespread is the concern over the issue that restaurants, food manufacturers, and even fast-food establishments proudly tout either the absence or the reduction of these fats within their products. Amid the growing awareness that trans fats may not be good for you, let's get right to the heart of the matter. Why are trans fats so bad?
Processing naturally occurring fats to modify their texture from liquid to semisolid and solid forms results in the development of trans fats, which have been linked to an increased risk for heart disease. Trans fats are used in many processed foods such as cookies, cakes, chips, doughnuts, and snack foods to give them their crispy texture and increased shelf life. However, because trans fats can behave like saturated fats, the body processes them as if they were saturated fats. Consuming large amounts of trans fats has been associated with tissue inflammation throughout the body, insulin resistance in some people, weight gain, and digestive troubles. In addition, the hydrogenation process robs the person of the benefits of consuming the original oil because hydrogenation destroys omega-3 and omega-6 fatty acids. The AHA states that, like saturated fats, trans fats raise LDL "bad cholesterol," but unlike saturated fats, trans fats lower HDL "good cholesterol." The AHA advises limiting trans-fat consumption to less than 1 percent of total daily calories. On a 2,000-calorie diet, that works out to roughly 2 grams of trans fat per day, since 1 percent of 2,000 calories is 20 calories and fat supplies about 9 calories per gram. How can you benefit from this information? When selecting your foods, steer clear of anything that says "hydrogenated," "fractionally hydrogenated," or "partially hydrogenated," and read food labels carefully: choose brands that don't use trans fats and that are low in saturated fats. Dietary-Fat Substitutes In response to the rising awareness and concern over the consumption of trans fat, various fat replacers have been developed. Fat substitutes aim to mimic the richness, taste, and smooth feel of fat without the same caloric content as fat. The carbohydrate-based replacers tend to bind water and thus dilute calories. Fat substitutes can also be made from proteins (for example, egg whites and milk whey). However, these are not very stable and are affected by changes in temperature, hence their usefulness is somewhat limited. Tools for Change One classic cinnamon roll can have 5 grams of trans fat, which is quite high for a single snack. Foods such as pastries, frozen bakery goods, cookies, chips, popcorn, and crackers contain trans fat and often have their nutrient contents listed for a very small serving size—much smaller than what people normally consume—which can easily lead you to eat many "servings." Labeling laws allow foods containing trans fat to be labeled "trans-fat free" if there are fewer than 0.5 grams per serving. This makes it possible to eat too much trans fat when you think you're not eating any at all because it is labeled trans-fat free. Always review the label for trans fat per serving. Check the ingredient list, especially the first three to four ingredients, for telltale signs of hydrogenated fat such as partially or fractionated hydrogenated oil. The higher up the words "partially hydrogenated oil" are on the list of ingredients, the more trans fat the product contains. Measure out one serving and eat one serving only. An even better choice would be to eat a fruit or vegetable. There are no trans fats and the serving size is more reasonable for similar calories. Fruits and vegetables are packed with water, fiber, and many vitamins, minerals, phytonutrients, and antioxidants. At restaurants, be aware that phrases such as "cooked in vegetable oil" might mean hydrogenated vegetable oil, and therefore trans fat. Key Takeaways Hydrogenation is the process of adding hydrogen to the points of unsaturation in unsaturated fatty acid chains; when hydrogenation is complete, the resulting oil is very hard and virtually unusable.
Partial hydrogenation is the process of adding hydrogen to some of the points of unsaturation in unsaturated fatty acid chains. This produces oils that are more spreadable and usable in food products. Food manufacturers favor the use of hydrogenated oils because they do not succumb to oxidative damage, they increase the shelf life of food products, and they have a high smoke point. Fat replacers mimic fat but do not have the same chemical configuration as fat. Therefore the body does not process these the same way it would a naturally occurring fat. Fat substitutes such as Olestra have produced symptoms of fat malabsorption in some people. Discussion Starters Describe how trans fatty acids are created. Explain the drawbacks of consuming this type of fat and its impact on human health. Make a list of the foods in your kitchen. Read each food label. List all of the food items that contain trans fat. Recall the recommendation that trans fat be less than 1 percent of total daily calories. About what percentage of your diet is currently trans fat? Do you see a need to adjust your trans fat intake? | High | [
0.677966101694915,
30,
14.25
] |
Risky internet behaviors of middle-school students: communication with online strangers and offline contact.
In today's world, more adolescents are using the Internet as an avenue for social communication, as a source of information, and as a place to experiment with risky online behaviors. To better understand how early adolescents are using the Internet, a study was undertaken to more clearly identify online use and risky online behaviors and to describe any online relationships with strangers that middle-school students may be participating in. This exploratory study adapted the Youth Internet Safety Survey of Finkelhor et al. to identify the usage and characteristics of online youth, solicitation of youth, and risky behaviors. Four hundred and four students, with a mean age of 12 years, were recruited from public and parochial schools located in the Northeast. Findings from this study indicate that of the total sample of 404 middle-school students, a small group (n = 59; 14.6%) is beginning risky online communication behaviors with strangers. Students who communicated online with strangers were older and had higher rates of posting personal information, risky online behaviors, and stealing. The majority of this group (84%) met offline with the online stranger, and three students reported having been assaulted. These findings suggest that early adolescents are beginning risky online and offline behaviors. Understanding their experiences is important because it highlights how middle-school students are taking risks in a new environment that many adults and parents do not fully understand. Clinicians, educators, healthcare providers, and other professionals need to be informed of Internet behaviors in order to assess for risk, to make referrals, to intervene, and to educate. | Mid | [
0.6388888888888881,
34.5,
19.5
] |
material Ogre/Earring
{
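// Yellow base colour with a sphere-mapped highlight (spheremap.png) added on top for a fake reflective shine.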
technique
{
pass
{
ambient 0.5 0.5 0
diffuse 1 1 0
texture_unit
{
texture spheremap.png
colour_op_ex add src_texture src_current
colour_op_multipass_fallback one one
env_map spherical
}
}
}
}
material Ogre/Skin
{
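// Two-sided green skin (cull_hardware none); mirrored addressing hides seams where the UVs run past the texture edge.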
technique
{
pass
{
ambient 0.7 0.7 0.7
cull_hardware none
texture_unit
{
texture GreenSkin.jpg
tex_address_mode mirror
}
}
}
}
material Ogre/Tusks
{
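// Warm ivory base colour with a dirt texture added on top for grime.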
technique
{
pass
{
ambient 0.5 0.5 0.4
diffuse 1 1 0.8
texture_unit
{
texture dirt01.jpg
colour_op_ex add src_texture src_current
colour_op_multipass_fallback one one
}
}
}
}
material Ogre/Eyes
{
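// Plain textured pass; lighting and blending are left at their defaults.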
technique
{
pass
{
texture_unit
{
texture WeirdEye.png
}
}
}
}
material Cursor/default
{
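// Alpha-blended GUI cursor; clamped addressing stops the texture border from bleeding at the edges.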
technique
{
pass
{
scene_blend alpha_blend
texture_unit
{
texture cursor.png
tex_address_mode clamp
}
}
}
}
material Core/StatsBlockBorder/Down
{
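// Unlit, alpha-blended overlay border; depth checking is off so it always draws on top of the 3D scene.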
technique
{
pass
{
lighting off
scene_blend alpha_blend
depth_check off
texture_unit
{
texture ogreborder.png
}
}
}
}
material Core/StatsBlockBorder/Up
{
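// Same overlay setup as the Down border, using the "up" border texture.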
technique
{
pass
{
lighting off
scene_blend alpha_blend
depth_check off
texture_unit
{
texture ogreborderUp.png
}
}
}
} | Mid | [
0.567460317460317,
35.75,
27.25
] |
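The scripts above only define materials; nothing renders until a mesh refers to one of them by name. Below is a minimal C++ sketch of that last step, assuming an Ogre 1.x setup with an initialised SceneManager and with this script in an already-parsed resource location; the mesh name ogrehead.mesh and the function name are assumptions for illustration, not part of the script itself.

#include <Ogre.h>

// Attach the sample ogre head to the scene and point it at one of the
// materials defined above. Materials are looked up by the name that
// follows the "material" keyword in the script, e.g. "Ogre/Skin".
void showOgreHead(Ogre::SceneManager* sceneMgr)
{
    Ogre::Entity* head = sceneMgr->createEntity("Head", "ogrehead.mesh");
    sceneMgr->getRootSceneNode()->createChildSceneNode()->attachObject(head);

    // Override whatever material the mesh was exported with.
    head->setMaterialName("Ogre/Skin");
}

In practice a mesh usually names its materials per sub-entity at export time, so an explicit setMaterialName call like this is only needed when swapping materials at runtime.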
Effects of form of the diet on anatomical, microbial, and fermentative development of the rumen of neonatal calves.
Eight neonatal Holstein bull calves were paired by birth date and birth weight and randomly assigned to either a finely ground diet or an unground control diet (chopped hay and rolled grain) to study the effects of the physical form of the diet on anatomical, microbial, and fermentative development of the rumen. The diets varied in particle size but were identical in composition (25% alfalfa hay and 75% grain mix). Calves were fed milk at 8% of birth weight daily until weaning, and feed intake was equalized for each pair of calves. Ruminal fluid samples were collected through ruminal cannulas to determine pH, fermentation products, and buffering capacity and to enumerate bacteria. Calves were slaughtered at 10 wk of age, and weights of the full and empty reticulorumen, abomasum, and omasum were recorded. Ruminal tissue samples were taken to assess papillary development by morphometric measurements. Calves had similar body weights at wk 10. Ruminal pH was affected by age and was lower for calves fed the ground diet. Total anaerobic bacterial counts were not affected by the physical form of the diet; however, calves fed the ground diet had lower numbers of cellulolytic bacteria and higher numbers of amylolytic bacteria than did calves fed the unground diet. Physical form of the diet did not affect the weight of the reticulorumen, whether full or empty; however, calves fed the ground diet had heavier omasum weights, both full and empty. Physical form of the diet affected papillary size and shape but did not influence the muscle thickness of the rumen. Results indicated that the physical form of the diet had a significant influence on the anatomical and microbial development of the forestomach and, therefore, might influence future performance. | High | [
0.6829268292682921,
35,
16.25
] |
Rory McIlroy admits he ‘wouldn’t be fulfilled’ without claiming a Masters title
Rory McIlroy plays from a bunker during last year’s Masters. The Augusta course was the scene of a painful collapse in the closing stages of the 2011 tournament. Photograph: Jim Watson/AFP/Getty Images
As Rory McIlroy prepares for a third attempt at completing a grand slam of major championships, he has admitted he “wouldn’t be fulfilled” without winning the Masters. McIlroy has not won a major since August 2014, when success at Valhalla afforded him a second US PGA Championship. The Masters remains the only one of the big four events to elude the Northern Irishman, and the significance of that is not something he will readily understate. Whatever else McIlroy might lack in the coming days, motivation is not in doubt.
“I’d love to give you an answer and say my life is already fulfilled with everything that’s happened and everything that’s going to happen in the future, by starting a family and all that,” McIlroy said. “But if I didn’t have a Green Jacket there would be a tiny piece that would just be missing. It really would be. I wouldn’t be fulfilled if I didn’t get it.
“I said it in an interview when I was eight years old; I want to be the best golfer in the world and I want to win all the majors. I’ve nearly done all of that. There’s one piece of the puzzle that’s missing.”
McIlroy will obviously begin this, the 81st Masters, as a live contender, having claimed top-10 finishes on each of his last three visits to Augusta. It is commonly forgotten, but a little over six months has passed since the 27-year-old won two FedEx play-off events, thereby claiming the $10m bonus pool. Thereafter McIlroy was inspired at the Ryder Cup, despite Europe losing to the USA team at Hazeltine.
“The Ryder Cup brings out emotions in me I didn’t think I had,” McIlroy said. “I didn’t think I’d act like that. I watch it back now and I’m like: ‘Wow, I don’t think I could get any more fired up than that.’ It’s definitely different for a golf event and it was the first event I felt like the away team.
“I’d never want to feel intimidated by the opposing crowd but you’re up against it. You’re not just playing your opponent. You’re playing the 50,000 people yelling at you as well. As a golfer, you don’t get that very often. I got into it. I got into it on the golf course, I got into it with the fans.”
Speaking to espn.com, McIlroy looked further back, to his painful collapse over the closing stretch of the Masters in 2011, which was almost immediately followed by victory at the US Open. “That was probably the most important two months of my life and the most important two months of my career,” he said. “I learned a lot about myself as a golfer. I knew everything that happened at Augusta that went wrong. I knew what I needed to do to fix it and make sure that didn’t happen again.”
McIlroy, once inspired by Tiger Woods, has featured in every Masters since 2009. Memories of that debut remain vivid, if not entirely for positive reasons. “I remember thinking to myself: ‘I’m never going to have that experience again, of watching it the way I did as a kid,’” he said. “People remember watching Masters Sunday. I had grown up with the Masters and sitting down with my dad and watching, rooting for Tiger to win.
“Hopefully kids are now watching me try to win a Green Jacket and hopefully they get that same excitement from watching me try to succeed.”
Having missed the cut at the Shell Houston Open last week, Jordan Spieth was among those at Augusta on Sunday morning to witness the children’s Drive, Chip and Putt event. The Texan has revealed that in December, on his first visit back to the 12th hole where a Sunday seven last year cost him a second successive Masters, he made a birdie two. “I was walking around with my hands up, like the demon was gone,” Spieth said.
Meanwhile, Russell Henley overcame a four-shot deficit in the final round of the Shell Houston Open to claim a third PGA Tour title and the last place in the Masters field. The 27-year-old American carded a seven-under-par closing round of 65 in Texas to overhaul South Korea’s Sung Kang and seal victory by three strokes. Kang had two birdies on the front nine, but on both occasions swiftly handed the shot back with a bogey, and could only par his way in after the turn. Henley took full advantage, making gains from 11 feet on the second and seven feet on the fourth before a hat-trick of birdies from the sixth handed him the lead. A double bogey on the par-three ninth - where Henley found a bunker from the tee and then three-putted - briefly checked his progress. However, he responded with a birdie from 14 feet on the next and again reeled off three successive birdies from the 13th, before registering a 10th gain of the day on the 17th. A dropped shot at the last saw Henley finish on 20 under par for the week, with Kang three shots adrift in second. Rickie Fowler and Luke List tied for third on 16 under, with English pair Justin Rose and Andy Sullivan the leading British challenge on seven under after matching closing rounds of 70.
Good news for the Masters organisers has arrived via an improved weather forecast. Whereas early predictions were that this week could be beset by storms, a subsequent forecast on Sunday had only Thursday morning carrying a thunder threat. Thursday and Friday, however, are set for gusts in excess of 20mph. | Mid | [
0.612612612612612,
34,
21.5
] |
Q: Would it be against the JLS's philosophy to compress do...while like this?

While writing another do ... while loop in Java, I started thinking about its syntax. A typical programmer will write something like

do {
    something();
} while(booleanValue);

However, Java has a built-in way of compressing this by allowing the programmer to remove the braces when only one line is inside the loop (this is typical for Java statements):

do
    something();
while(booleanValue);

This can be thought of and written as

do something();
while(booleanValue);

or even as

do; while(booleanValue);

This is quite interesting. This brought my attention to the fact that this is a Java statement that must be read and run spanning two lines, meaning that after the do line is run, the operation is not complete until the while line is run. Remember that other statements are only one line:

if(booleanValue) something();
while(booleanValue) something();
for(Object declaration; booleanValue; operation()) something();
for(Object declaration : iterableObject) something();
throw throwableObject;
return anyObject;
switch(integer) {} // case is excluded because it operates a lot more like a goto operation than a statement

So I started thinking, not about why this is, but about how to compress this statement a bit more. Assuming that a "line" is anything that's terminated in a semicolon (;) or contained within braces ({ and }), let's go into this with this knowledge: a do statement necessarily requires two lines to be run, and a do statement must continue running at least until it reaches an empty while statement. So why ever use braces? Let's look at some scenarios with this syntax.

How about a do...while statement with one enclosed statement:

do
    something();
while(booleanValue);

Alright, nothing new here. What about a do...while statement with three enclosed statements:

do
    statement1();
    statement2();
    statement3();
while(booleanValue);

Here, Java will see that it is a do statement, and it will run lines 2, 3, and 4 before seeing the empty while statement. At this point, it knows we're ready to end the loop, evaluate the expression, and possibly return to the do or exit the loop.

How about something that you might think breaks it; a nested while loop:

do
    statement1();
    while(booleanValue1)
        statement2();
while(booleanValue2);

Here, Java sees it's a do statement and enters it, then runs line 2 normally. At line 3, it sees that it's a while statement. At this point it must decide whether this is the end of the loop. It inspects the while statement and discovers that it is not empty, and therefore not the end of the do loop. It enters the while statement and loops lines 3 and 4 until it doesn't anymore. It then sees another while statement and, upon inspection, sees that it is empty and, therefore, the end of the do statement. It evaluates it and may or may not return to line 2, but we don't care right now.

But Supuhstar, that sure seems like a lot of calculation! Won't that slow down the JVM?

Not necessarily! This can all be done at compile time, much like Java determines if any other statement listed above is empty (for instance, trying to compile while(true); will result in the next line compiling with the error "unreachable statement").
So, much like

for(Object declaration : iterableObject)
    somethingInvolving(declaration);

compiles into

Iterator<Object> i = iterableObject.iterator();
while(i.hasNext()) {
    Object declaration = i.next();
    somethingInvolving(declaration);
}

then

do
    statement1();
    while(booleanValue1)
        statement2();
while(booleanValue2);

could compile into

do {
    statement1();
    while(booleanValue1) {
        statement2();
    }
} while(booleanValue2);

But Supuhstar, Java doesn't care about indentation! Can yours be written with any indentation, or none at all?

Certainly! The compiler would just as easily compile

do
statement1();
while(booleanValue1)
statement2();
while(booleanValue2);

as it would

do
        statement1();
    while(booleanValue1)
            statement2();
while(booleanValue2);

and both of these would do exactly the same thing.

Does anyone else agree that this would be an okay thing to include as Java syntax, or is there some glaring reason I'm missing that this cannot be done?

A: Think about this code:

if(true)
    if(false)
        System.out.println("x");
else
    System.out.println("y");

It prints "y". Sometimes, it's too confusing to humans without braces.

A: This particular statement struck me as slightly confused:

"It inspects the while statement and discovers that it is not empty, and therefore not the end of the do loop."

Because what do you mean by "not empty"? while-loops are allowed to be "empty":

do
    statement1();
    while(booleanValue1);
while(booleanValue2);

That would be valid under your plan even though the first while-loop would close the do-while instead of the second.

That being said, I don't think the grammar you are suggesting would be ambiguous, but it doesn't matter, because it would never be introduced: the language designers are not (and never have been) interested in making Java a cleaner or more terse language. You could even say verbosity is a fundamental part of the Java philosophy. There are many things the compiler could infer or allow but doesn't. The good news is that there are many more modern languages, like Scala, that allow you to write briefer, more expressive code. | Mid | [
0.548302872062663,
26.25,
21.625
] |
Muskegon, December 21st, It is easier if you cheat

Sorry for another post from Muskegon so soon again, since I have plenty of photos from around home to share. However, a few things happened this past weekend that I want to post about while the day is still fresh in my memory.

To begin with, it was a rather slow day for photographing birds. It hasn't been very cold here compared to our average temperatures, or even the way that November was. But, most of the water at the Muskegon wastewater treatment facility has frozen over, meaning most of the ducks and geese, other than a few mallards, have left for down south. With the waterfowl gone, most of the raptors have moved on as well. I did see a few bald eagles and hawks, there'll be photos later, but really, the only subjects that I saw worth photographing were the snowy owls.

Snowy owl in flight

That image was shot about half-way through my day, before I started cheating, which I'll get to later. I learned a great many things this day, about snowy owls, photography, editing photos, what other people will do for a great photo, and what I'll do for a good photo, but I'll get to those things as I go.

The day began cold, cloudy, and hazy, with a strong enough wind to make it feel much colder than it was, which was about the freezing point for most of the day. I didn't even make it all the way past the entrance drive to the wastewater facility before I spotted the first of five snowy owls for the day. I started out using the Beast (Sigma 150-500 mm lens) by itself for this photo.

Snowy owl, 500 mm, not cropped

I wasn't that impressed by the position of the owl or the conditions, so it was playtime. I added the 1.4 X tele-converter to the Beast, meaning that I had to focus manually for this one.

Snowy owl, 700 mm, not cropped

Here's the cropped version.

Snowy owl, 700 mm, cropped

Not too shabby given the poor light at the time. I set up my second camera body with the 300 mm prime lens and the 1.4 X extender for shooting action photos, primarily birds in flight. I tested all the camera and lens settings out on a pair of common mergansers that I spooked by accident.

Common mergansers taking flight

A little farther along the road, I spotted my second snowy owl of the day, but it wasn't in the mood to have its photo taken.

Snowy owl number two

I tried out my action set-up again on a rough-legged hawk, first as it landed….

Rough-legged hawk landing

…switched to the Beast while it was perched….

Rough-legged hawk

…and managed to grab the action set-up as the hawk took flight.

Rough-legged hawk taking flight

Some light sure would have helped those, or any of my early action photo attempts.

Pigeons (rock doves) taking flight

The white pigeon tried to fool me into thinking it was a snowy owl, but I didn't fall for it. 😉

Pigeons (rock doves)

Not far from there, I found my third snowy owl of the day; this one was willing to pose for a few photos.

Snowy owl looking right

Snowy owl looking left

Snowy owl looking straight at me

And, here's the cropped version of the image above.

Snowy owl looking straight at me, cropped

The chance one takes getting so close to birds is that if they decide to fly away…

Snowy owl taking flight

…you only get parts of the bird in the image.

Snowy owl feet

That's for any readers that have a snowy owl foot fetish. 😉 The owl didn't go far…

Snowy owl in flight

…I learned that snowy owls like my Subaru Forester.

Snowy owl perched almost over my Subaru

I had to shoot a few more photos of the owl as I walked back to my car.
Snowy owl

From there, I hit all the typical birding hotspots around the wastewater facility proper, and the surrounding areas within the Muskegon State Game Area, which are considered part of the wastewater facility as they are under the control of Muskegon County, even though it is state land. I saw a pair of bald eagles on the ice of one of the lagoons, but well out of camera range. There were dozens of crows and hundreds of gulls, but little else to see. It may have been the slowest day of birding that I've ever had there. I had planned to also go to the Muskegon Lake channel to look for late-season migrating waterfowl, but looking in that direction, I could see that the clouds were even thicker there, and very few waterfowl have shown up there according to eBird reports.

Instead, I drove back to the man-made hill that overlooks the grassy cells to hang out for a while and see if anything showed up. I've had good luck doing that in the past, and hoped that it worked again. I could see that the first owl from when I arrived in the morning was still there in the grassy cells, and as I waited, I noticed that the owl hunted in a pattern of sorts. The owl would perch on one of the pipes…

Snowy owl

…or "ridges" that delineate each of the grassy cells…

Snowy owl

…in a location where it could look down into two of the cells at a time. It would stay in each location for 15 to 30 minutes, and if it didn't see anything….

Snowy owl taking off

Snowy owl in flight

Snowy owl in flight

…it would zig-zag across one cell to a spot where it could see down into the next two cells. Watching the owl working its way across the grassy cells one pair at a time, I wondered if I could get in position ahead of the owl without it changing its pattern. The answer is obvious now from the photos above: it was working well enough. In the photo above, you can see some of the pipes and other objects that the owl was using as perches in the background. So, once the owl had moved, I would move to a point as close as I dared to the next place that I thought that the owl would land the next time that it moved. Apparently, snowy owls hunt in a pattern that one can use to get into a better position to get good photos, one of many things I learned this day.

The next thing that I learned is that snowy owls, at least the one I was watching, have a low success rate while they are hunting. I was fine-tuning how I was positioning myself, anticipating the owl's moves, as the day went on. At one point the owl took off (I didn't save any of those photos), but the owl dove down into a cell and didn't come flying out the other side as I expected, so I went to see why…

Snowy owl trying to dig up a rodent

Snowy owl trying to dig up a rodent

Snowy owl trying to dig up a rodent

Snowy owl trying to dig up a rodent

…the owl kept digging for whatever it had missed, for so long that I switched over to shoot video for this. If I'd been smart, I would have kept the camera rolling, for the owl took off a few seconds later, which would have been great, until I lost the focus since I have to focus manually to shoot video. But, I had another piece to the puzzle that I was putting together to get good action shots of the owl. It wasn't long before the owl took off and tried for another rodent.

Snowy owl in flight

Snowy owl in flight

Snowy owl in flight

Snowy owl in flight

I was shooting in high-speed burst mode, and had already learned that shooting in RAW filled the buffer of the 60D much quicker than shooting jpeg.
So, I would shoot a burst when the owl was at its closest to me, then stop. Bad move, because I missed the owl pouncing.

Snowy owl trying for a kill

That was a split second after the owl had hit the ground. Once again, it had missed whatever it had been after.

Snowy owl

It seemed to be having trouble figuring out how to get out of the reeds…

Snowy owl

…until it remembered that it was a bird and could just fly out.

Snowy owl taking off

Snowy owl in flight

I was quite pleased with the way that my plan was working; I was getting fair shots of the owl both as it was flying and as it was perched. I was shooting in RAW so that I could edit the photos, but up to this point, none of the photos had been edited other than being cropped. Because snowy owls are nearly all white, getting the exposure correct on them can be tricky, especially as the background changes behind the owls as they fly. I wasn't completely happy with the white balance either; snowy owls aren't blue…

Snowy owl in flight

Snowy owl in flight

…but if I changed the white balance of my camera from cloudy to shade, then the owl, and especially the background, came out orange.

Snowy owl

So once I was home, I played with the editing features of the Canon software that came with the camera for this one, and I got the white balance as close as I could come. In some ways I prefer the warmer colors of the "orange" owl, but this next one is as close to neutral, and real life, as I could get.

Snowy owl

This was my first real attempt at any type of editing other than cropping, and I found out that using the Canon software is very tedious, but worth the effort at times to change an image from a so-so one into something better. Who would have thought that I'd be tweaking the white balance or exposure of my images, as I have done with some of the rest of the photos from the day?

It's becoming clearer to me all the time that no matter how one sets up a camera, there are limitations to how well the images will look as they come out of the camera. It was late summer when I lamented that I couldn't adjust the exposure compensation in one-quarter-stop increments, as one-third of a stop sometimes seemed to be too much. Software adjustments allow me to make those small exposure adjustments that can't be done by the camera, or to fine-tune the white balance when the weather on a particular day doesn't match up exactly with the camera settings available to use. Maybe it's because my skills as a photographer are improving that I've come to realize the limitations of what a digital camera can record as matched against what my eye sees as I press the shutter release. A year ago, I would have said that the editing that I've done to some of the following photos was cheating; now, I see it as overcoming the limitations of my gear, allowing me to capture what I saw. But, there's almost cheating by editing photos, and then there's real cheating.

I had been following the one snowy owl around for a couple of hours, and was getting good at positioning myself to get some good photos. One other vehicle had stopped by at one point; the occupants shot a few photos, then left the owl to me again. I was quite pleased with myself for having learned how the owl hunted, being able to predict the best position to get to ahead of the owl, and the photos that I was shooting.
I did wish that there had been more light so I could have shot at a faster shutter speed so that the images would have been sharper…

Snowy owl

Snowy owl taking flight

Snowy owl taking flight

…but overall, I thought that things were going well. All that was about to change.

Snowy owl and photographer

Some of you may remember the first time that I posted about the snowy owls, and the guy with the BIG LENS who was a real jerk. I was afraid that I would be seeing a rerun of that episode, but in some ways, this day was worse. This guy with the BIG LENS set up his gear….

Another photographer setting up

…the type of gear that I can only dream of owning, unless I win the lottery. 😉

Since the owl had just moved to another spot to look for food, I looked around, picked out the next perch that I thought that the owl would use as it continued to hunt in the grassy cells, and positioned myself to wait for the owl to come to me. It all went according to plan, and even the light began to improve, for it wasn't long before the owl took off, and I was able to shoot these.

Snowy owl in flight

Snowy owl landing

Snowy owl landing

Snowy owl

Snowy owl

I was really patting myself on the back for having been able to shoot that series of photos, and the owl had even gotten a mouse to eat, as you can see. But, I didn't know that the owl and I had gotten some help from the other photographer. The owl soon took off from the post, and headed back to where it had come from.

Snowy owl taking flight

I thought that it was odd that the owl went back, but I reasoned that it had been successful, and maybe it had seen or heard more mice in the same area as it had captured the first. So, I sat and waited, and it wasn't long before the owl came back towards me. This time, I had an even better idea of the route the owl would take, and was able to get better photos.

Snowy owl in flight

Snowy owl in flight

Snowy owl landing

Snowy owl landing

The owl had even gotten another mouse, but this time, it had been a white mouse. Wait a minute, something fishy is going on here: there are no white mice in the wild that I know of. I had been watching the owl intently all of the time, and not paying any attention to what the guy with the BIG LENS was up to. The owl quickly returned to the spot from which it had started the previous two flights, and this time, I watched what the guy with the BIG LENS was up to. After a few minutes had gone by, I watched him get his camera gear all set, then reach into his Jeep to get something out of it. He walked out to the edge of the grassy cell, held out one arm for the owl to see, then tossed something, a mouse, out for the owl. It didn't take long for the owl to react; it took off right away. The guy with the BIG LENS must have had the BIG LENS set to shoot photos of the owl as it took off, and he must have triggered that camera remotely. He used the shorter lens to photograph the owl as it flew.

Even though I had seen the mouse thrown out into the grassy cells for the owl, I couldn't resist shooting another batch of photos. It was almost like shooting fish in a barrel, as this time, I was 95% certain of the exact route the owl was going to take. I knew about where the owl was going to rise up above the mound that forms the grassy cell, so I pre-focused on that spot, and caught the owl just as it appeared in my view again.

Snowy owl in flight

I knew that it would probably drop back down into the second cell, so it was easier to follow the owl in flight.
Snowy owl in flight

And, I knew almost with certainty where it would land, so I pre-focused on the post with the reflector, and waited for the owl to appear in the viewfinder.

Snowy owl landing

Snowy owl landing

Snowy owl landing

Snowy owl landing

Yes, it's a lot easier to get better photos if you cheat and use bait to get an animal close to you, and to have it do something that you're prepared for!

I debated whether I would even post those last three series of images, since the owl had been baited by the guy with the BIG LENS. I believe that it is unethical to use bait while photographing nature, and while I wasn't the one throwing mice out for the owl, I benefited from it, especially in that last series, when I knew what was going on. On the other hand, I had been watching the owl all afternoon, and had positioned myself where I did based on having watched the owl and gotten to know how it behaved as it hunted. There was a high probability that the owl would have landed on the fence post with the reflector the next time that it moved, even if the guy with the BIG LENS hadn't come along to toss mice out to the owl. But, that would have been a one-time deal, not something that the owl repeated several times for me to learn its exact flight path in order to be prepared for when the owl took that flight.

In case you're wondering, the guy with the BIG LENS got some fantastic photos and a video of the owl; I know that because a few of them have been posted on the web site of the Muskegon County Nature Club, so I know who the guy with the BIG LENS is.

I decided that I would use those photos that I shot for several reasons. One, they are a record of what I saw that day, including the guy baiting the owl. Two, I had invested most of a day in learning how to get close to the owl as it hunted; it wasn't my fault that someone showed up to bait the owl just as the light got better. I shot over 400 images of the owl, and many of them are close to being as good as the last three series of photos from when the owl was being baited. The photos from earlier in the day would have been as good as or better than those that I shot as the owl was being baited if the light had been as good earlier. And, like I said, I chose where to position myself based on having learned the owl's habits, not on the fact that someone started tossing mice out to the owl. Still, it is an ethical issue that I'm having trouble coming to grips with. At the time, I was so disgusted with myself that I went over to shoot this crummy shot of two bald eagles that I had been keeping an eye on as I was watching the owl, as a way of atoning for the sin of having photographed the owl that had been baited.

Adult bald eagles

Well, that all took place a week ago. I haven't had much time to work on this post, as I was either working long hours the first three days of the week, or out walking trying to get a few good photos despite the constant gloomy weather that's been in place here. I still haven't come to terms with having photographed the owl even though I knew someone else was feeding it; I may not ever. But, I'll have more about that in future posts; right now, it's time to stick a fork in this one, as it's done.

35 responses

I don't think you should flail yourself over taking pics of the owl being baited. 🙂 As you said, you spent a good long time watching its behaviour so that you'd know where to wait. You couldn't help that Big Lens man turned up. And anyway, I thought the pre-baiting pics were just as good!
I must admit that when you wrote about the snowy owl foot fetish, I had actually been thinking something similar and laughed. I think Snowy Owls have become my favourite bird for the moment. It’s just that I can’t get over the fact that you can see an owl during daylight hours. Seeing an owl is so rare here. We have frogmouths which look like owls and hunt at night but they are really nightjars. And not only is it amazing to me that you can observe an owl in the daytime, it’s such a beautiful one! This post had a lot of “wow” factor for me. With regards to editing software, I think that making a few slight changes such as with brightness, contrast and colour balance to help show what it actually looked like to you when you saw it, is not really cheating when your gear is limited. Certainly many of the people with very expensive equipment don’t hesitate to use it anyway! What upsets me a little is when photos are altered to look different than they look in real life, but are sold as “real” life. Such as when a sunset’s colours are completely changed on purpose and don’t look at all like they did in real life. But if it’s done in the name of art and people are aware it has been changed, then it’s ok. I suppose, airbrushing of women’s bodies to look perfect is an example of deceit that can influence people in a very negative way if they believe the model really looks like that. We are not encouraged to feed wildlife here for a number of reasons. They become too dependent or tame and lose their natural fear. They can sometimes catch diseases from food fed to them that they wouldn’t ordinarily be exposed to in the wild. There are cases of viruses being transmitted to birds in this way. In the case of some of our parrots here people just feed them only sunflower seeds which is not a balanced diet, so they can become sick. Thanks for feeding my snowy owl “fetish”. 😉 Snowy owls normally live above the Arctic Circle, the “land of the midnight sun” where the sun doesn’t set for several months during the summer, so the owls have adapted to hunting during the day. A few migrate south for the winter, and they still prefer to hunt during the day. They also prefer much more open areas than other owls, that prefer to hunt in wooded areas. I promise not to go crazy while editing my photos, unless I do it as a joke, and you’ll know if I do. Thanks for the comment and question! No, I don’t think that shooting birds at a feeder is the same as baiting a raptor or owl. The birds at a feeder don’t lose their ability to find seeds or berries elsewhere, owls and raptors do lose their hunting skills it the birds don’t use them. They also lose their fear of humans, some of the mice sold that are used for bait carry diseases, and there are jerks that poison the rodents because they dislike birds of prey. I think the guy throwing mice to the owls would be like me taking potted wildflowers into the woods and taking photos of them. A large part of the thrill of nature photography for me is finding the things to photograph in the first place, so feeding animals to attract them would take all of the joy out of it. I might as well go shoot photos at a zoo or a botanical garden. I wonder what your fish and game department would think of him feeding the owls. Anyhow you got some great photos of snowy owls and there’s a lot to be said for that considering I’ve never even seen one. I was interested in your observations of their hunting habits. 
I’ve been watching a red-tailed hawk hunt cornfields, and he always seems to return to the same two or three trees. As far as correcting minor exposure problems goes, you’ll find that Lightroom makes it easy. My son got me the version 5 upgrade, and now it’s even easier. It’s the only way I’ve found to work around poor lighting, but it won’t work miracles. You have to have something there worth fixing to start with!

Thanks Allen! I don’t know why the guy was baiting the owl in the first place. I was doing little to no cropping of my images, and I don’t have a lens anywhere near as long as he was using. Of course, I spent 4 hours learning how the owl hunted; he just showed up and started tossing mice out there to bring the owl in close. You’re right, red-tailed hawks, along with bald eagles, have favorite perches to hunt from; either that, or both soar without flapping their wings very often. On the other hand, rough-legged hawks seldom soar or perch; they hover over a spot while flapping their wings like kestrels do. Different species of birds of prey have different ways of hunting, which makes it easier for me to ID them at a distance. 😉 These were my first attempts at editing RAW images, and I’m hooked! I can’t wait for a computer and software that are up to the task! My Christmas was pretty good, I hope that yours was a good one as well!

You managed to get some fantastic shots of the snowy. Loved the changing colors of the background in the first shot, and I applaud your creativity in the shot of the feet. What a fun day that must have been for you.

I agree with your other commenters when they say you shouldn’t feel bad about taking advantage of the opportunity provided. You got some amazing shots! That said, I agree with you about baiting raptors and owls. Aside from the pitfalls already mentioned by others, if he buys mice from a pet store, there’s a good chance that the mice have been given supplements or preventative antibiotics that could prove fatal for a bird. I saw that happen with fish a rehabber unwittingly gave to an injured Horned Grebe. The Grebe died after eating just two small feeder fish from a pet store. (In the rehabber’s defense, she was told they would be fine for the Grebe. It wasn’t until she went back to find out why the bird had died that they told her about the medications.) Thanks for sharing your shots, your stories, and for being such a conscientious nature photographer!
Delightful snowies, and as always I appreciate the way you share your learning curve, whether with hardware or software. Don’t get me going on the unethical practices of some photographers baiting wildlife. It is especially rampant among some of the “guided” photographic excursions, where a photographer takes paying clients on shoots where sightings of particular birds are “almost guaranteed.” Some guides will bait an area for many days before a client outing, training birds to a feeding schedule, so that the client is guaranteed a successful shoot. IMO, no respectable publication, magazine, or website should publish any photos taken under those circumstances. The photos taken by BIG LENS GUY should be removed from the place they were posted, and he should be exposed for violating ethical guidelines. You might want to write a story about this for your local newspaper. You don’t need to call out the guy by name.

Yes, they have, and that really is more than disappointing! I think, though, that there must be a way to raise people’s consciousness. To start at a local level maybe, so the next generation of photographers has ethics embedded earlier. It’s an uphill battle, but one I am willing to fight.

Well, that is unfortunate about the “professional” photographer baiting the owls, but I thought your shots were awesome even before the baiter showed up!! I would be so giddy to see/shoot snowy owls, so I really appreciate you giving me a glimpse into a day with them! Such amazing creatures! Thanks, Jerry!!

Thanks, Jerry! I missed all my blogging friends. Life just got away from me during the holiday season. I knew I had to give up something, and blogging was it for a couple of months. Hopefully all will return to “normal” now!!

I love the second shot of the snowy owl, the moody one when s/he is sitting on the concrete thingamajig. As to Muskegon, I happened to be reading a book I got for Christmas, “Fowl Weather” by Bob Tarte (who apparently must live not too far from you). Out of nowhere he starts talking about Muskegon and its birding opportunities, and all I can think is: hey, I know this place already thanks to Quiet Solo Pursuits! 🙂

You were there first and captured great photos way before the guy showed up and used mice to bait the Snowy Owl. Do not feel guilty. The Snowy Owl is beautiful! As I type this I hear my Great Horned Owls in the back. Oh how I would love to get their picture. I have seen them at dawn and will need to wake up early one day to take a photo. However, as soon as the light comes up… they are gone! Speaking of Gone… Welcome Back! Happy New Year to You! | Low | [
0.49227373068432606,
27.875,
28.75
] |
991 F.2d 794
NOTICE: Sixth Circuit Rule 24(c) states that citation of unpublished dispositions is disfavored except for establishing res judicata, estoppel, or the law of the case and requires service of copies of cited unpublished dispositions of the Sixth Circuit.
Dennis W. GALLIHER, Plaintiff-Appellant,
v.
SECRETARY OF HEALTH AND HUMAN SERVICES, Defendant-Appellee.
No. 92-1505.
United States Court of Appeals, Sixth Circuit.
April 6, 1993.
Before KEITH and RYAN, Circuit Judges; PECK, Senior Circuit Judge.
PER CURIAM:
1
Plaintiff-Appellant, Dennis W. Galliher, appeals the district court's judgment granting the Secretary's motion for summary judgment in an action for Supplemental Security Income and Disability Insurance Benefits. For the reasons stated below, we REVERSE the judgment of the district court.
I.
2
On July 15, 1985, Galliher filed applications for Supplemental Security Income and Disability Insurance Benefits, alleging disability due to an emotional problem. These applications were denied by the Social Security Administration initially and also upon reconsideration.
3
On May 24, 1987, Galliher requested a hearing on his claims. A hearing was held on November 10, 1987, and Galliher's claims were denied by an Administrative Law Judge (ALJ). Galliher then requested a review of his claims by the Appeals Council. On review, the Appeals Council found that the ALJ's decision was not supported by substantial evidence and remanded the claims to the ALJ for further review.
4
A second hearing of Galliher's claims was held on September 2, 1988. Again, the ALJ denied Galliher's claims for benefits and issued a new decision. The Appeals Council reviewed the ALJ's decision and again remanded the claims to the ALJ for further proceedings and a new decision. The Appeals Council directed the ALJ to "provide [a] rationale for the residual functional capacity and limitations found, as well as delineate all of the claimant's past relevant work." (Notice of Order of Appeals Council, Sept. 25, 1989, at 2).
5
A third and final hearing before the ALJ was held on May 18, 1990. The ALJ again denied Galliher's claims. The Appeals Council denied Galliher's request for further review, thereby allowing the ALJ's decision to stand as the decision of the Secretary. Galliher then brought an action in the Eastern District of Michigan, seeking a review and reversal of the Secretary's denial of his claims.
6
On November 11, 1991, Magistrate Thomas A. Carlson reviewed the Secretary's decision and in a Report and Recommendation of November 14, 1991, recommended that the Secretary's denial of Galliher's claims be reversed and the case remanded to the Secretary for a computation of benefits. The district court, however, rejected the magistrate's recommendation and affirmed the Secretary's decision denying Galliher benefits.
7
Although the magistrate and the district court reached different conclusions regarding Galliher's claims, there was agreement on the underlying facts offered in support of the claims. Accordingly, we adopt the relevant facts as stated in Magistrate Carlson's Report and Recommendation, as did the district court.
8
The magistrate adequately summarized Galliher's personal background and testimony regarding his alleged "emotional problem" as follows:
9
Plaintiff was 41 years old at the time of the most recent administrative hearing, had only a sixth grade education, and had been employed as a machine operator and janitor during the relevant past (TR 86-88).
As a machine press operator for a plastics company, he was required to be on his feet for most of the work day and to lift upwards of fifteen pounds on a regular basis (TR 90). Claimant stopped working in 1986 as press operator due to an inability to get along with his boss and fellow employees (TR 99-100). He also found the job to be so dull that he often became irritable and depressed (TR 103). Claimant testified that he was disabled because he no longer trusted other people, had little patience with them, and that he often secluded himself in order to minimize his contacts with his neighbors, friends and relatives (TR 97-98, 100, 111, 129, 193). Psychological counselling and medications allegedly did not help improve his social skills and he remained depressed despite such therapy (TR 198). Plaintiff added that he habitually lost his temper, and he described one incident where he threw a car jack through the front windshield upon discovering that his car had not been repaired as promised (TR 199).
10
(Mag.'s Rep. and Rec. at 2-3).
11
The magistrate summarized the vocational evidence presented regarding Galliher's capacity to perform his past work activity as follows:
12
A Vocational Expert, Lois Brooks, classified Plaintiff's past work as a machine operator as sedentary to medium, semiskilled and unskilled activity, while his job as a janitor was thought to be light to medium and unskilled (TR 206-207). The witness testified that if all of the claimant's subjective allegations of disabling symptomatology were true, he would not be able to perform any of his past work with the possible exception of the one sedentary press operator job he held in the past (TR 209). However, based upon a series of hypothetical questions posed by the ALJ concerning a claimant with Plaintiff's inability to follow complex job instructions and his intolerance of other people, the Vocational Expert testified that claimant could still perform his past janitorial and press operation jobs (TR 210-212). The VE testified that approximately 6,000 janitorial jobs and 8,000 machine operator jobs, that were considered to be nonstressful, existed in the regional economy (TR 213).
13
(Mag.'s Rep. and Rec. at 3).
14
The magistrate summarized the medical evidence of Galliher's emotional state as follows:
15
The medical evidence revealed that Plaintiff was hospitalized in July 1985 with a diagnosis of adjustment disorder with mixed features. He reportedly had been increasingly anxious and irritable prior to his admission, but was well orientated and coherent upon initial examination (TR 280-281). Dr. Constance Hislop, Ph.D., conducted an MMPI personality test on July 19, 1986, which indicated that Plaintiff was depressed and anxious, had a very negative self image, and was unable to normally interact with other people because of an ego deficit (TR 283). Claimant remained in the hospital for over a month until all signs of psychosis had disappeared, and at the time of discharge, he was no longer experiencing any suicidal or homicidal ideation (TR 284).
16
(Mag.'s Rep. and Rec. at 3-4).
17
The magistrate also summarized the testimony offered by several psychiatrists and psychological counselors who evaluated Galliher.
18
A consultative psychiatric evaluation was conducted on August 28, 1986 by Dr. Barry Monse, who found that claimant had a low self esteem, possessed poor communication skills, had little or no ambition, and showed no motivation to pursue any hobbies or social contacts.
Plaintiff's stream of mental activity was described by Dr. Monse as slow, and he was said to be depressed but friendly (TR 287-290). A second consultative examiner, Dr. Charles Williams, reported on November 20, 1986 that Plaintiff suffered from a major affective disorder with a major episodic reaction in which he exhibited a great deal of anger and hostility towards others. While claimant still harbored some of that anger, Dr. Williams felt that he had recovered adequately from that episode (TR 293-294).
19
Ann Endelman, M.A., a family counselor, reported on October 26, 1987 that she had seen Plaintiff during a three month period in order to help him alleviate his feeling of anxiety and nervousness around people. Ms. Endelman stated that the claimant had made a lot of progress towards reducing those symptoms, but still exhibited signs of a schizoid personality disorder (TR 313). Plaintiff's treating psychiatrist, Dr. S.I. Ahmad, commented in a letter dated August 15, 1987 that claimant was still suffering from depression and recurrent schizoid personality traits. The treating doctor opined that Plaintiff was totally disabled, but that he was scheduled to attend some vocational training in order to get a better idea of residual capacity (TR 314-315).
20
Plaintiff was re-hospitalized on January 22, 1988 after having homicidal thoughts and displaying paranoid behavior. He was treated for a moderate to severe schizo-affective disorder over a three week period, but his level of adaptive functioning continued to be poor (TR 349-350). Dr. Ahmad commented in July 1988 that Plaintiff was taking medications for schizophrenia, chronic undifferentiated type, which would preclude him from being around moving machinery. The treating physician reiterated his earlier opinion that Plaintiff was unable to work in a competitive work environment given his inability to deal with work-related stress or make any kind of decision.
21
Dr. Christian Barrett, Ed.D., a psychologist and vocational expert, evaluated Plaintiff in February 1990 and found him to have a below average mental ability. Deficits were also noted in claimant's coordination, planning and organization, and fine motor skills. While there were times that Plaintiff became easily confused, he did not appear to have any formal delusions or bizarre ideation. Dr. Barrett felt that the claimant had made improvement since his January 1988 hospitalization (TR 368-369).
22
Plaintiff was evaluated by Dr. S. Koegler, a board certified psychiatrist, who was requested to fill out a mental residual functional capacity evaluation on February 24, 1990. A mental status report indicated that the claimant had questionable contact with reality as a result of severe depression and a schizoid affective disorder. Given the combination of his mental impairments and borderline intelligence, Dr. Koegler doubted that Plaintiff could make the necessary adjustments that would allow him to be competitively employed (TR 358-365).
23
(Mag.'s Rep. and Rec. at 4-6).
24
Based on the above facts, the ALJ concluded in its final decision that Galliher was not disabled because he was not so severely impaired that he could not perform his past jobs as a janitor or machine operator. (ALJ Notice of Decision--Denial, July 17, 1990). On review, the magistrate found that the ALJ's decision denying benefits to Galliher was not supported by substantial evidence and recommended that the claims be remanded to the Secretary for a computation of benefits.
The district court, however, rejected the magistrate's recommendations and concluded that "the medical reports plus the vocational expert's testimony and the testimony of Plaintiff gave the Secretary the substantial evidence needed to deny Plaintiff's claim." (Opinion and Order at 11). The district court affirmed the Secretary's decision and this timely appeal followed. On appeal, Galliher argues that the district court's grant of summary judgment affirming the Secretary's denial of his claims for benefits is not supported by substantial evidence.
II.
25
Our role in this appeal is limited to a determination of whether the Secretary's decision is supported by substantial evidence. See Mullen v. Bowen, 800 F.2d 535 (6th Cir.1986). A two-part procedure must be followed when determining whether disability exists due to a mental disorder. 20 C.F.R. Pt. 404, Subpt. P., App. 1, 12.00(A) (1991).
26
A claimant must first show "documentation of a medically determinable impairment(s) as well as consideration of the degree of limitation such impairment(s) may impose on the individual's ability to work...." Id. The A criteria of § 12.04 provide a means to "medically substantiate the presence of a mental disorder." Id. Secondly, a claimant must show that he has functional limitations resulting from his mental disorder that are inconsistent with the ability to engage in substantial, gainful activity. 20 C.F.R. Pt. 404, Subpt. P., App. 1, 12.00(C) (1991). Specifically, a claimant must show that his mental disorder has resulted in at least two of the following paragraph B criteria of § 12.04:
27
1. Marked restriction of activities of daily living; or
28
2. Marked difficulties in maintaining social functioning; or
29
3. Deficiencies of concentration, persistence or pace resulting in frequent failure to complete tasks in a timely manner (in work settings or elsewhere); or
30
4. Repeated episodes of deterioration or decompensation in work or work-like settings which cause the individual to withdraw from that situation or to experience exacerbation of signs and symptoms (which may include deterioration of adaptive behaviors).
31
20 C.F.R. Pt. 404, Subpt. P, App. 1, 12.04(B) (1991).
32
The Secretary found that Galliher suffers from two mental impairments listed in the A criteria of § 12.04, thus satisfying the first part of his burden for establishing disability. However, Galliher failed to convince the Secretary that his mental impairments were severe enough to satisfy the B criteria of § 12.04 regarding functional limitations. The Secretary found that Galliher satisfied only one of the B criteria and thus failed to establish his disability for purposes of receiving benefits.
33
Based on our review of the record, we find that the Secretary's decision is not supported by substantial evidence. We disagree with the Secretary's conclusion that Galliher's emotional problems do not satisfy at least two of the B criteria of § 12.04. The evidence of record, as summarized above, shows that Galliher's emotional problems have resulted in each of the four B criteria: (1) a marked restriction of activities of daily living; (2) marked difficulties in maintaining social functioning; (3) deficiencies of concentration, persistence and pace resulting in failures to complete tasks in a timely manner; and (4) repeated episodes of deterioration or decompensation in work or work-like settings.
34 The record contains testimony from Galliher stating that he was distrustful of people, had little patience with people, and often secluded himself to minimize his contacts with his neighbors, friends and relatives. (Mag.'s Rep. and Rec. at 2-3). There is also evidence in the record that Galliher physically assaulted a former employee (TR. 134-135), and threw a car jack through his windshield when he became upset about his car not having been properly repaired (TR. 199). Galliher's testimony regarding his emotional state is supported by that of the medical experts who examined and treated him. 35 Dr. Hislop conducted a personality test of Galliher on June 19, 1986, which indicated that Galliher was depressed and anxious, and had a very negative self-image. (Mag.'s Rep. and Rec. at 3-4). This test also indicated that Galliher was unable to normally interact with other people. (Id.). Dr. Monse, a psychiatrist, evaluated Galliher and found that his communication skills were poor, that he had little or no ambition, and that he showed no motivation to pursue any hobbies or social contacts. (Id. at 4). Dr. Endelman, a family counselor, reported in October of 1987 that Galliher was suffering from schizoid personality disorder. Galliher's treating physician, Dr. Ahmad, also reported that he was suffering from depression and schizoid personality traits in a letter dated August 15, 1987. (Id. at 4-5). 36 On January 22, 1988, Galliher was hospitalized after experiencing homicidal thoughts and displaying paranoid behavior. He was treated over a three-week period for a schizo-affective disorder. Dr. Ahmad testified that Galliher was unable to work in a competitive environment because of his inability to deal with work related stress. (Id. at 5). A mental functional capacity evaluation of Galliher conducted on February 24, 1990, revealed that his contact with reality is questionable. (Id. at 6). 37 Based on our review of the evidence as summarized above, we find that the Secretary's decision denying benefits to Galliher is not supported by substantial evidence. Accordingly, the district court erred in affirming the Secretary's decision. III. 38 For the foregoing reasons, we REVERSE the decision of the district court and REMAND this case to the Secretary for computation and payment of benefits. 39 RYAN, Circuit Judge, dissenting. 40 Because I believe that substantial evidence supports the Secretary's decision, I respectfully dissent from the majority's decision to remand this case to the Secretary for computation and payment of benefits. I. 41 As the majority correctly states, our review of the Secretary's decision is limited to determining whether the Secretary's findings are supported by substantial evidence and whether the Secretary employed the proper legal standards when reaching its conclusions. 42 U.S.C. § 405(g); Richardson v. Perales, 402 U.S. 389, 401 (1971). "Substantial evidence" is more than a "scintilla" of evidence but less than a preponderance and is "such relevant evidence as a reasonable mind might accept as adequate to support a conclusion." Perales, 402 U.S. at 401. 42 When this court reviews the Secretary's decision, we may not try the case de novo, resolve conflicts in the evidence, or decide questions of credibility. Garner v. Heckler, 745 F.2d 383, 387 (6th Cir.1984). The record before us contains conflicting evidence. 
The majority seems to have resolved all conflicts raised in the record in favor of the claimant, for in its opinion reversing the Secretary's decision, the majority focuses only on the evidence supporting reversal of the Secretary's decisions, and ignores all evidence that supports the Secretary's findings. However, when determining whether the Secretary's factual findings are supported by substantial evidence, our duty is to examine the evidence in the record taken as a whole, Born v. Secretary of Health & Human Services, 923 F.2d 1168, 1173 (6th Cir.1990), and we are not free to base our decision on a single piece of evidence and disregard other pertinent evidence in making a "substantial evidence" determination. Mowery v. Heckler, 771 F.2d 966, 970 (6th Cir.1985). If supported by substantial evidence, the Secretary's decision must be affirmed even if a reviewing court would decide the matter differently, Born, 923 F.2d at 1173, and even if substantial evidence would also support the opposite conclusion. Mullen v. Bowen, 800 F.2d 535, 545 (6th Cir.1986) (en banc). In this case, substantial evidence supports the Secretary's decision, and we should, therefore, affirm. II. A. 43 The claimant who seeks disability benefits bears the burden of establishing that he is disabled, and that his disability precludes him from engaging in any substantial gainful activity. Born, 923 F.2d at 1173. The regulations define disability as the "inability to do any substantial gainful activity by reason of any medically determinable physical or mental impairment which can be expected to result in death or which has lasted or can be expected to last for a continuous period of not less than twelve months." 20 C.F.R. § 404.1505; see also 42 U.S.C. § 423(d). An individual is considered disabled under the Social Security Act only if his physical or mental impairment is of such severity that he is not only unable to do any past relevant work, but also any other work which exists in the national economy, considering his age, education, work experience, and residual functional capacity. Id. If a claimant does not have an impairment or a combination of impairments which is so severe as to significantly limit his ability to do basic work activity, then he is not disabled. 20 C.F.R. §§ 404.1520(c) and 416.920(c). 44 Subjective allegations of symptoms and functional limitations alone are not sufficient to support a finding of disability. 20 C.F.R. §§ 404.1529, 416.929; McCormick v. Secretary of Health & Human Services, 861 F.2d 998, 1002-03 (6th Cir.1988). Objective evidence must establish the existence of an underlying medical condition. Duncan v. Secretary of Health & Human Services, 801 F.2d 847, 853 (6th Cir.1986). 45 The social security regulations establish a five-part evaluation process for determining whether a claimant is disabled: 46 1) If the claimant is presently employed, he is not disabled regardless of the medical findings, 20 C.F.R. § 404.1520(b); 47 2) If the impairment is not "severe," the claimant is not disabled, 20 C.F.R. § 404.1520(c); 48 3) If the claimant suffers from a severe impairment that meets the duration requirement and that "meets or equals a listed impairment in Appendix 1" of Subpart P, Regulation 4, then the claimant is disabled, and the Secretary shall not consider age, education, and work experience, 20 C.F.R. § 404.1520(d); 49 4) If the individual is capable of performing work he has done in the past, he is not disabled, 20 C.F.R.
§ 404.1520(e); 50 5) If the individual cannot perform past work, other factors will be considered to determine whether the claimant can perform other work existing in the national economy. If he can perform such work, he is not disabled, 20 C.F.R. § 404.1520(f). 51 In this case, the Secretary determined that Galliher satisfied the first two steps but found that he did not meet the severity requirements of step three. The Secretary also determined that Galliher was capable of performing past work under step four. As a result, the Secretary determined that Galliher was not disabled. These findings are supported by substantial evidence. B. 52 Under step three of the regulations' five-part evaluation process, a mental impairment must satisfy two types of criteria to be considered disabling. The "A" criteria "substantiates the presence of the alleged disorder." 20 C.F.R. Pt. 404, Subpart P, Appendix 1 § 12.00(A); See 20 C.F.R. §§ 404.1520a, 416.920(a). The "B" criteria requires an assessment of the functional limitations imposed by the mental disorder to determine whether the limitations are so severe that they seriously interfere with the claimant's ability to function "independently, appropriately, and effectively." 20 C.F.R. Pt. 404, Subpart P, Appendix 1 § 12.00(C). The Secretary found that the medical evidence established that Galliher suffered from an "affective disorder" under section 12.04 and thus met the "A" criteria, but that Galliher failed to establish that his disorder had reached the severity required under the "B" criteria. 53 The "B" criteria of section 12.04 requires the claimant to demonstrate at least two of the following limitations: 54 1. Marked restriction in daily living activities; 55 2. Marked difficulties in maintaining social functioning; 56 3. Deficiencies of concentration or persistence resulting in frequent failure to complete tasks in a timely manner; or 57 4. Repeated episodes of deterioration or decompensation in work or work-like settings which cause the individual to withdraw from that situation. 58 20 C.F.R. Part 404, Subpart P, Appendix 1, § 12.04(C). According to the social security regulations, "marked" means "more than moderately but less than extreme." Section 12.00(C). The Secretary found that because Galliher met only the second of the above limitations, he did not satisfy the "B" criteria of section 12.04 and, therefore, did not suffer from a listed impairment. 59 The majority has decided to remand to the Secretary for a computation of benefits because the majority "disagree[s] with the Secretary's conclusion that Galliher's emotional problems do not satisfy at least two of the B criteria of § 12.04" Op. at 7. The majority believes that there is ample evidence in the record illustrating the severe nature of Galliher's functional limitations for each of the four "B" criteria. However, it does not matter whether this reviewing court agrees or disagrees with the Secretary's findings, and it does not matter whether substantial evidence also supports the opposite conclusion than that made by the Secretary. The only issue before us is whether substantial evidence supports the Secretary's conclusion. 60 The Secretary first determined that Galliher does not suffer from a marked restriction in daily living activities. To establish that the limitations on his activities are "marked," Galliher must demonstrate that his impairment seriously interferes with his ability to function independently, appropriately, and effectively. Foster v. 
Bowen, 853 F.2d 483, 491 (6th Cir.1988). The Secretary and the district court relied on Galliher's testimony that the medication he takes helps him deal with his daily activities at home. Galliher testified that he cares for his children while his wife works, drives family members to and from work and school, and he completes some household chores. As the district court found, the evidence on which the Secretary relied did not demonstrate the type of problems in daily living, concentration, and stress-related situations necessary for a finding of marked restriction in daily living activities under section 12.04. 61 Second, the Secretary determined that Galliher does not suffer from a significant deficiency in concentration, nor does he lack the ability to complete minor tasks. The Secretary noted that Galliher's activities at home demonstrate that he is able to concentrate and complete minor tasks. Finally, the Secretary and the district court determined that there is insufficient evidence to support a finding that Galliher suffers from repeated episodes of deterioration in work or work-like settings. The evidence shows that, until 1984, plaintiff worked for five years as a self-employed janitor without significant problems, and he held two positions from 1984-86. In concluding that Galliher proved he suffered from "repeated episodes" of deterioration at work, the majority refers to two instances where Galliher lost his temper. On one occasion, Galliher assaulted an employee while at work, and on another he threw a car jack through a windshield when he had trouble repairing his car at home. But these instances, which occurred two years apart, hardly require a finding that Galliher suffered from "repeated episodes" of deterioration at work. As the district court stated, "[Galliher]'s work experience, while far from stable, does not display the type of recurrent and severe problems typically associated with someone suffering from a disabling mental condition." Cf. Lankford v. Sullivan, 942 F.2d 301, 307-08 (6th Cir.1991). 62 Certainly, a reasonable mind could accept the evidence relied on by the Secretary as adequate to support the Secretary's conclusion that Galliher failed to establish the severity requirements of the "B" criteria. See Bowen, 853 F.2d at 483. Consequently, this court should affirm the district court's conclusion that substantial evidence supports the Secretary's finding regarding Galliher's failure to meet the severity requirements of step three. C. 63 Because the Secretary found that Galliher did not meet step three of the regulations' evaluation process, the Secretary looked to step four to determine whether Galliher could perform past work. While Dr. Ahmad, the treating physician, opined that Galliher was "unemployable," he also indicated that Galliher could work with a supportive supervisor in a "non-competitive" work setting. At least one consulting physician concluded that Galliher is able to manage his own affairs and, in fact, found that Galliher tends to exploit situations for his own gain. This consulting physician noted that, although Galliher may go through periods of getting along poorly with others, he is capable of behaving appropriately and "retains the mental residual functional capacity to engage in unskilled work." Based on the medical records and the testimony of the physicians who examined Galliher, a vocational expert opined that Galliher could function in jobs where the duties were not complex and where he had limited contact with people.
Such jobs included Galliher's past work as a janitor and machine operator. The expert testified that such "non-competitive" jobs were readily available in the national economy and that approximately 6,000 janitorial jobs and 8,000 machine operator jobs existed in the regional economy. 64 Most importantly, Dr. Ahmad, Galliher's own treating physician, never opined that Galliher was totally disabled; rather, he indicated that Galliher could work in a "sheltered workshop" that included a "sensitive and attentive supervisor" in a noncompetitive atmosphere. Recognizing that Galliher was limited in his abilities to follow complex job instructions and to get along with co-workers and supervisors, the Secretary concluded that Galliher's limitations would not preclude him from working as a janitor or machine operator once again. Substantial evidence supports the Secretary's conclusions. III. 65 I must respectfully dissent because the majority has failed to consider all the evidence before the Secretary. After considering the entire record, I find that the evidence sufficiently supports the Secretary's conclusion that although Galliher has a severe affective disorder, depression, and paranoid personality feature, he nevertheless maintains the capacity to perform simple, nonstressful tasks that involve limited contact with other people. 66 The district court's decision should be affirmed, and the Secretary's decision should be upheld. | Mid | [
0.616438356164383,
28.125,
17.5
] |
Physical fitness characteristics of Omani primary school children according to body mass index. There is evidence that children with high cardiorespiratory fitness and a normal body mass index (BMI) have a lower risk of non-communicable diseases (NCDs); however, limited research has been undertaken in Omani children. The aims of the present study were therefore to describe the body composition and physical fitness of a large cohort of Omani school children of both genders, and to investigate the effects of weight status on physical fitness. Three hundred and fourteen Omani school children aged 9 to 10 years took part in anthropometric assessments, body composition measurements and fitness tests, including handgrip strength, the basketball chest pass, the broad jump, the 20-m sprint, the 4 x 10-m shuttle agility run, the 30-s sit-up and the multistage fitness test (MSFT). Obese boys and girls performed worse than normal-weight children in the sprint, agility and endurance tests. In addition, fitness measures in the overweight and underweight groups were not significantly different from those of the other groups, except for better handgrip strength and poorer MSFT performance in overweight compared to normal-weight girls, and poorer agility performance in underweight girls compared to the three other groups. Most fitness measures are lower in obese Omani children, which suggests that they will be more at risk of developing NCDs later in life. | Mid | [
0.648401826484018,
35.5,
19.25
] |
State Lawmaker Who Called For Trump’s Assassination Apologizes, But Refuses To Step Down A Missouri state senator who called for President Donald Trump’s assassination last week is refusing to step down after issuing a formal apology to the president and his family on Sunday. Maria Chappelle-Nadal, an African-American Democrat, sparked outrage Thursday when she wrote “I hope Trump is assassinated!” in a now-deleted Facebook post. “I made a mistake, and I’m owning up to it. And I’m not ever going to make a mistake like that again. I have learned my lesson. My judge and my jury is my Lord, Jesus Christ,” Chappelle-Nadal said during a Sunday news conference at a church in Ferguson. “President Trump, I apologize to you and your family,” she added. Chappelle-Nadal asked media outlets not to publish the location of the news conference beforehand, claiming she has received death threats of her own for her post. She said her comment was made out of frustration in reaction to Trump’s insistence that “both sides” were to blame for the violence at a white nationalist rally in Charlottesville, Virginia, earlier in August. The U.S. Secret Service questioned Chappelle-Nadal as part of an investigation into her remarks. Chappelle-Nadal told The Associated Press she informed the agency she “had no intentions of hurting anyone or trying to get other people to hurt anyone at all.” Prior to her apology Sunday, Chappelle-Nadal said her Facebook post was “improper,” but refused to apologize to the “bigot that we have in our White House.” “When the president apologizes, I’ll apologize,” she told local news station KMOX Thursday. “But I’m not apologizing for being frustrated and angry at a bigot that we have in our White House.” Sen. Claire McCaskill, D-Mo., and Rep. William Lacy Clay, D-Mo., have also called on Chappelle-Nadal to step down. Missouri Democratic Party Chairman Stephen Webber called for Chappelle-Nadal’s ouster in a statement Sunday. “State Senator Chappelle-Nadal’s comments are indefensible,” Webber said. “All sides need to agree that there is no room for suggestions of political violence in America – and the Missouri Democratic Party will absolutely not tolerate calls for the assassination of the President.” “I believe she should resign,” he added. The Missouri Constitution states that a lawmaker can be expelled by a two-thirds vote of the elected members of the chamber. The Missouri Senate is not scheduled to reconvene until Sept. 13. | Mid | [
0.543897216274089,
31.75,
26.625
] |
Field of the Invention The present invention generally relates to a multi-directional radial wheel track and trolley system for operable walls. The trolley includes radial support wheels arranged in laterally spaced tandem pairs, rotatable in the same direction about horizontal axes and supported from a plate, frame or the like. The trolley is provided with guide rollers adjacent each end thereof which depend from the plate and are positioned in the track slot and mounted for rotation about generally vertical axes to guide the trolley, with the guide rollers and the curvature of the track slot being dimensioned to prevent contact between the edge of the track slot and a depending supporting bolt for the wall panel when the trolley moves through a track intersection having a curved element. By imparting a manual lateral force to a moving wall panel, the trolleys and thus the wall panel may be selectively moved through the track intersection in multiple directions without stopping at the intersection. Optional diverter pins may be provided on the trolley for association with optional diverter blades at certain track intersections so that at such intersections the trolleys, and thus the wall panels, will move along a preprogrammed path. | Mid | [
0.597122302158273,
31.125,
21
] |
Phono RCA Cable BLUE 95 PLN/m The Blue phono RCA is a great entry-level phono cable for your turntable, as you would hopefully expect from Tellurium Q. The earth lead that comes with the cable is separate, for flexibility in the system. The construction looks very much the same as the standard RCA cable, but if you connect first the RCA and then the phono RCA to your turntable, you will soon hear which one is meant to be there. | Mid | [
0.652680652680652,
35,
18.625
] |
The San Francisco 49ers are halfway through their schedule, and unfortunately have lost defensive lineman Solomon Thomas for a week or two. He suffered a mild MCL sprain, and has been ruled out for Week 9. My guess is he sits out Week 10 as well, and is back after the Week 11 bye. That being said, he is getting some midseason recognition for his first eight games' worth of work. Mel Kiper released his midseason All-Rookie team, and Thomas joined Carl Lawson as the defensive ends. Dalvin Tomlinson and Nazair Jones were the two defensive tackles. Here's what Kiper had to say about Thomas. With Tank Carradine on injured reserve, Thomas has taken on a much bigger role, starting the past five games. But he hurt his knee last weekend and could miss some time. Thomas has two sacks and eight tackles for loss, and he has flashed the talent that made him the No. 3 overall pick. He played like a veteran at Stanford in 2016, causing disruptions in both the running and passing games. He's also big enough that he can move to tackle in passing situations and get after quarterbacks. Thomas has had ups and downs all season long. Prior to last week, he led rookie defensive linemen in tackles at or behind the line of scrimmage. We see plenty of hustle from him, but he is still working on his skill-set. John Middlekauff wrote about his lack of a signature pass rush move, and that is an area he can hopefully build on this coming offseason. He has shown a willingness to work with a host of great pass rushers, and that should only serve to benefit him. As the 49ers head into the second half of the season, several rookies will have a chance to make a name for themselves and claim a spot on end-of-season all-rookie teams. If Thomas doesn't miss more than a couple of games, he could end up on those kinds of teams. Cornerback Ahkello Witherspoon has a serious chance now that he has moved into the starting lineup. If Reuben Foster can stay healthy, he easily could end up on these all-rookie teams. George Kittle is a bit more of a long shot with the emergence of Evan Engram. | Mid | [
0.60475161987041,
35,
22.875
] |
/**
 * @file debugerror.h
 * @author Ambroz Bizjak <[email protected]>
 *
 * @section LICENSE
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the author nor the
 *    names of its contributors may be used to endorse or promote products
 *    derived from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
 * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 * DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
 * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
 * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
 * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * @section DESCRIPTION
 *
 * Mechanism for ensuring an object is destroyed from inside an error handler
 * or its jobs.
 */

#ifndef BADVPN_MISC_DEBUGERROR_H
#define BADVPN_MISC_DEBUGERROR_H

#include "misc/debug.h"
#include "base/BPending.h"

#ifndef NDEBUG
#define DEBUGERROR(de, call) \
    { \
        ASSERT(!BPending_IsSet(&(de)->job)) \
        BPending_Set(&(de)->job); \
        (call); \
    }
#else
#define DEBUGERROR(de, call) { (call); }
#endif

typedef struct {
#ifndef NDEBUG
    BPending job;
#endif
} DebugError;

static void DebugError_Init (DebugError *o, BPendingGroup *pg);
static void DebugError_Free (DebugError *o);
static void DebugError_AssertNoError (DebugError *o);

#ifndef NDEBUG
static void _DebugError_job_handler (DebugError *o)
{
    ASSERT(0);
}
#endif

void DebugError_Init (DebugError *o, BPendingGroup *pg)
{
#ifndef NDEBUG
    BPending_Init(&o->job, pg, (BPending_handler)_DebugError_job_handler, o);
#endif
}

void DebugError_Free (DebugError *o)
{
#ifndef NDEBUG
    BPending_Free(&o->job);
#endif
}

void DebugError_AssertNoError (DebugError *o)
{
#ifndef NDEBUG
    ASSERT(!BPending_IsSet(&o->job))
#endif
}

#endif
| Mid | [
0.5977011494252871,
32.5,
21.875
] |
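For context, a hedged sketch of how a header like the debugerror.h above is typically used; the surrounding object, handler and pending-group names here are illustrative, not taken from the file itself:

#include "misc/debugerror.h"

struct my_object {
    DebugError d_err;
    void (*handler_error) (void *user);
    void *user;
};

/* During construction, with a BPendingGroup from the event loop:
       DebugError_Init(&o->d_err, pending_group);
   and DebugError_Free(&o->d_err) during destruction. */

static void my_object_report_error (struct my_object *o)
{
    /* If the error handler returns without destroying this object
       (and thus without freeing d_err), the pending job later runs
       _DebugError_job_handler and asserts. */
    DEBUGERROR(&o->d_err, o->handler_error(o->user));
}

The design choice here is that in debug builds the macro plants a pending job before invoking the handler; only destroying the object (which frees the job) can defuse it, so any handler that fails to destroy the object trips an assertion.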
The invention relates to a radiant unit in the form of a portal equipped with a plurality of infrared lamps for use as a drying and baking unit and in particular for use as a drying and baking tunnel in the automobile industry. From pamphlet Q-E1/14OP (April 1974 edition) of Heraeus Quarzschmelze GmbH, modular medium-wave infrared lamps are known which can be assembled in modular fashion to form planar infrared radiant units. For use in larger structures, these heating elements can be suspended from frames. Such modular lamps are used for drying and heating in the manufacture of small parts, in laboratories, and in the forming of thermoplastics. For drying and baking of surface coatings, particularly in the automobile industry, the use of installations with infrared lamps of the type referred to above, for example, has proved highly advantageous since such installations can be kept very short, and since the heat required by the process can be directed onto the object with high precision. The ovens used in the automobile industry are constructed in the form of a portal equipped with individual infrared lamps and having a relatively short overall length, that is, an overall length considerably shorter than the vehicle being conveyed through the portal. In such drying and baking operations, it is important that the volatilized solvents, or other vapors or suspended particles present in the space, not deposit on the object being treated. Thus the air in the heating chamber surrounding the portal is continuously exhausted and cleaned, and filtered air is introduced. | Mid | [
0.55350553505535,
37.5,
30.25
] |
disallow irregular whitespace (no-irregular-whitespace)

The "extends": "eslint:recommended" property in a configuration file enables this rule.

Invalid or irregular whitespace causes issues with ECMAScript 5 parsers and also makes code harder to debug, in a similar way to mixed tabs and spaces. Various whitespace characters can be input by programmers by mistake, for example through copying or keyboard shortcuts. Pressing Alt + Space on macOS, for example, inserts a non-breaking space character.

Known issues these spaces cause:

Zero Width Space
- Is NOT considered a separator for tokens and is often parsed as an Unexpected token ILLEGAL
- Is NOT shown in modern browsers, so code repository software is expected to make it visible

Line Separator
- Is NOT a valid character within JSON, which would cause parse errors

Rule Details

This rule is aimed at catching invalid whitespace that is not a normal tab and space. Some of these characters may cause issues in modern browsers and others will be a debugging issue to spot.

This rule disallows the following characters except where the options allow: | Low | [
0.520231213872832,
33.75,
31.125
] |
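To make the rule above concrete, here is a hedged sketch of code that would and would not trigger it; the \u escapes stand in for the literal (otherwise invisible) characters, and the default behavior of skipping string literals is assumed per the ESLint docs:

/*eslint no-irregular-whitespace: "error"*/

// Reported: a zero-width space (\u200B) hiding after `var`
// var\u200Bfoo = 'bar';

// Reported: a non-breaking space (\u00A0) used as indentation
// \u00A0var baz = 'qux';

// Not reported by default: irregular whitespace inside a string literal
var label = 'price:\u00A0100';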
"Ominous" | Wednesday May 25, 2011 While storms are just a part of being a Texan, on Tuesday local residents seemed to be taking the forming storms more seriously, especially with reports of cloud rotation and likely tornados. The clouds that preceded the storms seemed to carry their own warnings. | Low | [
0.498969072164948,
30.25,
30.375
] |
Summary

Clashes between Kurdish militias and armed Syrian opposition groups, which began at the end of October in Aleppo and spread in November to Ras al-Ayn near the Turkish border, have raised the specter of a possible Arab-Kurdish civil war in Syria. An Arab-Kurdish civil war would weaken the efforts of the Free Syrian Army (FSA) and non-FSA affiliated groups to take over strategic areas in northern Syria such as oil-rich Hasakah province and Aleppo. Any fighting between the Syrian armed opposition and Kurdish militias trying to establish their authority in Kurdish-dominated areas could strengthen the resolve of the Assad government. Moreover, the fighting could indicate that Turkey is facilitating the entry of Syrian armed rebels into Syria to prevent the influence of Kurdish groups affiliated to the Kurdistan Workers Party (PKK).

Introduction

Even as Syrian insurgents fighting in the streets of Damascus call for President Bashar al-Assad to flee the country while he still can, there is the possibility that a new front may open in the struggle for Syria as Kurdish nationalists increasingly come into conflict with Islamist militias fighting the Assad regime. Serious clashes erupted on November 19 between Islamist groups and fighters of the Syrian Kurdish Partiya Yekitiya Demokrat (PYD - Democratic Union Party) in the border town of Ras al-Ayn (Kurdish: Serêkaniyê), killing at least 18 combatants. This is the second time serious fighting has erupted between Islamist groups fighting Assad and combatants of the PYD, which is affiliated to the larger Partiya Karkeren Kurdistan (PKK - Kurdistan Workers Party) but publicly denies such ties for fear they could lead to placement of the PYD on international terrorist lists. While Turkey is worried about the increasing influence of the PYD, the PKK is concerned by Turkish support to the Free Syrian Army (FSA) and claims that Turkey is hatching plans to destroy PYD influence in Syria. The Syrian Kurds are a non-Arab minority that comprise up to ten percent of the population and are spread over three Kurdish-dominated enclaves in the provinces of Aleppo and Hasakah. [1] These areas are close to the Turkish border and since 2011 the PYD has managed to extend its control over large parts of these enclaves through its Yekineyen Parastina Gel (YPG - People's Defense Units), to the despair of Turkey. The recent clashes came after Massoud Barzani, the president of the Kurdistan Region of Iraq, failed in his efforts to prevent PYD influence from spreading in Syria. Barzani supported an agreement in July between Syria's Kurdish National Council (KNC), a weak coalition of more than 11 political parties and youth groups supported by Barzani, and the PYD in order to prevent a Kurdish civil war (Rudaw.net, July 17). For Barzani, Kurdish infighting, or Kurdish fights with the Syrian armed opposition, could destabilize security in the Kurdistan region of Iraq, and he has warned against this publicly (Reuters, November 6). These tensions indicate that a new battlefront near the Turkish border could be opened between anti-Assad Islamist fighters and combatants associated with the PKK, slowing down rebel progress against Damascus and Aleppo.
PYD-FSA War in Aleppo

The FSA and Arab Islamist groups are perceived by the PYD to be close to the interests of the Turkish state, while the FSA and other armed groups have accused the PYD of working with the Assad government (Rudaw.net, November 17). The PYD claims to be neutral and has made unofficial deals with both Syrian rebels and the government to take control of more Kurdish areas. As a result, there have been minor clashes with both security forces of the regime and Syrian rebels. Major clashes erupted for the first time on October 26 in the Kurdish al-Ashrafiya neighborhood of Aleppo, where dozens were killed and hundreds kidnapped by both sides (Al-Monitor, October 29; Kurdwatch.org, November 5). Clashes also occurred in Aleppo and near the Syrian towns of Efrin and Azzaz, between the PYD and the 1,200-strong non-FSA-affiliated Northern Storm Brigade, which controls the vital crossing from Aleppo province into Turkey (AP, September 13; AFP, October 31). The PKK, based in the Qandil Mountains near the Iraqi-Turkish border, also threatened to support its PYD affiliate (Shafaq News, October 30). Despite media reports that the clashes could lead to sectarian conflict between Kurds and Arabs, the PYD accused other rival Kurdish groups of being involved in the incident with the support of Turkey (Daily Star [Beirut], November 20). The YPG stated that, of the 19 FSA combatants killed in the clashes, seven were Kurds affiliated to Mustafa Cummaa's Freedom Party, which has been the most critical of the PKK. Deputy FSA commander Malik al-Kurdi claimed the conflict was caused by Kurdish groups pushing the FSA to fight with the PYD (Kurdwatch.org, November 5). The increasing success of the FSA and other armed Islamist groups has led to the movement's spread to Kurdish-dominated areas in northern Syria. Thus clashes broke out after Syrian Islamist groups entered PYD-controlled districts, breaking the alleged cold truce between the two groups under which the FSA or other armed Islamist groups would not enter PYD-controlled areas (Rudaw.net, August 8). The PYD was not willing to help the FSA to fight Assad, but was also disinclined to fight the FSA unless the Syrian insurgents entered PYD-controlled areas. Both the FSA and the YPG realized that fighting between them could benefit the Assad regime (Today's Zaman, October 31). "We and the Free Syrian Army are one side, we are not on opposite sides," PYD official Sinem Muhammad told Jamestown [2]. The two sides therefore engaged in negotiations over the control of checkpoints and the handover of detainees. On November 1, the FSA announced that it had reached an agreement with the PYD stating that both sides aimed to topple the Assad regime and would hand over detainees (Rudaw.net, November 5). The PYD's foreign representative Alan Semo told Jamestown that the initial agreement was only meant to stop further fighting while other demands were still negotiated. [3] One of the primary demands impeding the progress of negotiations was the fate of YPG Commander Nujin Deriki (a.k.a. Shaha Ali Abdo), who was captured on October 26. On November 2, the YPG claimed that she had been tortured to death, which led to demonstrations and further tensions (Firat News Agency, November 1). The FSA subsequently announced she was still alive and was supposed to be released.
It seemed that the Syrian regime tried to prevent the FSA and PYD from reaching agreement by shelling the Kurdish districts of Aleppo on November 4, killing three people (Rudaw.net, November 5; Xeber24, November 5). On November 10, the FSA released the YPG commander, leading to diminished tensions between the groups in Aleppo (McClatchy, November 11).

The New Conflict in Hasakah

Just as tensions between the PYD and the FSA were dying down, the Islamist Ghuraba'a al-Sham (Strangers of Greater Syria) Brigade and al-Nusra Front entered the Kurdish city of Ras al-Ayn on November 9 from the Turkish town of Ceylanpınar and the nearby village of Tel Halaf (Syrian Observatory for Human Rights, November 9). The area is populated by Kurds and Arabs, leading to fears among Syrian Kurds that the war would spread to Hasakah province. Initially, those fears proved unfounded, as this did not lead to fighting between the Islamists and the Kurdish YPG units, with the YPG retreating to Kurdish districts of the town and the FSA controlling Arab parts of Ras al-Ayn. However, it did lead to accusations from PYD-affiliated media, such as the Kurdish news agency Firat News, that Turkey was behind the entry of armed groups into Ras al-Ayn, trying to involve Kurds in the civil war. A PYD-affiliated group claimed in a statement that they would not allow armed groups into Kurdish districts (Firat News Agency, November 11). On November 11, the Ras al-Ayn area was bombed by fighter jets, artillery and helicopters, leading to the death of dozens of civilians and insurgents (Rudaw.net, November 13). The bombing lasted for three days, with most inhabitants fleeing the city for Turkey or the Kurdish-controlled town of Derbisiye (Welati.net, November 13). After the Islamists moved into Ras al-Ayn, the YPG forced remaining Syrian government security elements from Derik (al-Malikiyah), Amude, Derbisiye and Tel Amir, fearing the arrival of Syrian insurgents and the spread of fighting (Rudaw.net, November 13). The YPG indicated it did not want to give "the regime [or] the FSA any excuse to come here. We don't need anyone to protect us" (Daily Star [Beirut], November 13). Turkey amassed its troops near the border and condemned the Syrian military operations that led to the death of civilians in Ras al-Ayn. Foreign Minister Ahmet Davutoglu stated that the Syrian air bombardment of Turkish border towns was a clear threat to Turkey, adding that Turkey would shoot down Syrian fighter jets if they crossed the border (Today's Zaman, November 12). The PYD's foreign representative, Alan Semo, told Jamestown that the PYD is worried that under the Adana Agreement, Turkey could characterize the ensuing refugee crisis as a threat to the "security and stability of Turkey," leading to a legal path for Turkish intervention in Syria. "You might see the FSA on Turkish tanks coming into Kurdistan. This scenario can happen," he said. [4]

Turkey Worried about PKK

Reports emerged on November 14 that Turkish tanks were amassing on the border of Ayn al-Arab (Kobani) alongside FSA units (Welati.net, November 1; Xeber24.net, November 14). Others have suggested that Western diplomats fear Turkey is supporting the FSA to prevent an autonomous Kurdish region in Syria (Al-Monitor, November 10). In reality, Turkey is not against Kurdish autonomy in Syria (or in Iraq), since it has good relations with the Syrian Kurdish nationalist parties of the Kurdish National Council, but it does oppose the increasing influence of the PYD and the PKK in Syria.
The United States fully supports the Turkish position of opposing any PKK presence in Syria. U.S. Secretary of State Hillary Clinton expressed her support at a joint news conference in Istanbul with her Turkish counterpart Ahmet Davutoglu, saying, "We share Turkey's determination that Syria must not become a haven for PKK terrorists whether now or after the departure of the Assad regime" (Press TV, August 11). The clashes that erupted between the armed Islamist groups and the PYD on November 19 further raised PYD suspicions of Turkish involvement. The fact that wounded Islamist fighters were transported to Turkish hospitals showed a certain degree of Turkish support. A temporary truce was reached on November 19 to hand over the wounded and the dead. But on November 20, fighting resumed, with the Islamist groups receiving reinforcements from the Turkish border and the PYD receiving reinforcements from other Kurdish cities in Syria (Kurdwatch.org, November 23). The fighting stopped after a ceasefire agreement between the two sides on November 23 (Rudaw.net, November 24). Kurdish political parties have argued that the armed Syrian opposition should fight Assad in Damascus or Aleppo, not in Kurdish areas. It is likely that in the current situation more clashes could erupt, because armed Syrian Islamist groups have expressed their intention to expand their operations outside of Ras al-Ayn to other Kurdish-dominated cities such as Amude, Qamishli and Derik. However, according to Abdul Basit Sieda, former head of the Syrian National Council (SNC), it is unlikely that Turkey would use this expansion of the conflict to establish a humanitarian corridor in northern Syria without support from the West: "If Turkey wants to move, they need the international community to accept it." [5] Turkey could, however, facilitate the supply of reinforcements and weapons for the FSA to attack the PYD. Moreover, it could try to use Western support to decrease PKK influence in Syria and try to pressure the United States or European Union to put the PYD on their terrorist lists. The problem for Turkey is that fighting between the PYD and Syrian rebels could increase PYD support in Kurdish communities and make it more difficult for other Kurdish groups not to support the PYD against the Arabs, especially as some of those fighting against the PYD are allegedly former Arab settlers who were brought to the area by the Syrian government as part of its "Arab belt" policies (Rudaw.net, July 11).

Conclusion

The PYD already has a traditional support base around the Kurdish areas of Aleppo and is increasing its support. Therefore, Turkish attempts to physically eradicate the PYD could prove to be troublesome and lead to an Arab-Kurdish civil war. It seems that Turkey is focused on preventing the PKK from controlling autonomous Kurdish areas instead of supporting the insurgency in Syria to overthrow the Assad government. Continued fighting between Kurds and Arabs in Hasakah province could weaken Syrian rebel advances against Assad and strengthen the current weak position of the Assad government. | Mid | [
0.564663023679417,
38.75,
29.875
] |
Alexandru Vladutu

Alexandru is the author of the book Mastering Web Application Development with Express from PacktPub. He is also a top answerer on StackOverflow for tags like #nodejs #express or #socket.io

Introduction

Node.js has seen significant growth in the past years, with big companies such as Walmart or PayPal adopting it. More and more people are picking up Node and publishing modules to NPM at a pace that exceeds other languages. However, the Node philosophy can take a bit to get used to, especially if you have switched from another language. In this article we will talk about the most common mistakes Node developers make and how to avoid them. You can find the source code for the examples on GitHub.

1 Not using development tools

- nodemon or supervisor for automatic restart
- In-browser live reload (reload after static files and/or views change)

Unlike other languages such as PHP or Ruby, Node requires a restart when you make changes to the source code. Another thing that can slow you down while creating web applications is refreshing the browser when the static code changes. While you can do these things manually, there are better solutions out there.

1.1 Automating restarts

Most of us are probably used to saving a file in the editor, hitting [CTRL+C] to stop the application and then restarting it by pressing the [UP] arrow and [Enter]. However, you can automate this repetitive task and make your development process easier by using existing tools such as nodemon or node-supervisor.

What these modules do is watch for file changes and restart the server for you. Let us take nodemon for example. First you install it globally:

npm i nodemon -g

Then you should simply swap the node command for the nodemon command:

# node server.js
$ nodemon server.js
14 Nov 21:23:23 - [nodemon] v1.2.1
14 Nov 21:23:23 - [nodemon] to restart at any time, enter `rs`
14 Nov 21:23:23 - [nodemon] watching: *.*
14 Nov 21:23:23 - [nodemon] starting `node server.js`
14 Nov 21:24:14 - [nodemon] restarting due to changes...
14 Nov 21:24:14 - [nodemon] starting `node server.js`

Among the existing options for nodemon or node-supervisor, probably the most popular one is to ignore specific files or folders (a config sketch appears at the end of this section).

1.2 Automatic browser refresh

Besides reloading the Node application when the source code changes, you can also speed up development for web applications. Instead of manually triggering the page refresh in the browser, we can automate this as well using tools such as livereload.

They work similarly to the ones presented before, because they watch for file changes in certain folders and trigger a browser refresh in this case (instead of a server restart). The refresh is done either by a script injected in the page or by a browser plugin.

Instead of showing you how to use livereload, this time we will create a similar tool ourselves with Node. It will do the following:

- Watch for file changes in a folder;
- Send a message to all connected clients using server-sent events; and
- Trigger the page reload.

First we should install the NPM dependencies needed for the project:
First we should install the NPM dependencies needed for the project: express - for creating the sample web application watch - to watch for file changes sendevent - server-sent events, SSE (an alternative would have been websockets) uglify-js - for minifying the client-side JavaScript files ejs - view templates Next we will create a simple Express server that renders a home view on the front page: var express = require('express'); var app = express(); var ejs = require('ejs'); var path = require('path'); var PORT = process.env.PORT || 1337; // view engine setup app.engine('html', ejs.renderFile); app.set('views', path.join(__dirname, 'views')); app.set('view engine', 'html'); // serve an empty page that just loads the browserify bundle app.get('/', function(req, res) { res.render('home'); }); app.listen(PORT); console.log('server started on port %s', PORT); Since we are using Express we will also create the browser-refresh tool as an Express middleware. The middleware will attach the SSE endpoint and will also create a view helper for the client script. The arguments for the middleware function will be the Express app and the folder to be monitored. Since we know that, we can already add the following lines before the view setup (inside server.js): var reloadify = require('./lib/reloadify'); reloadify(app, __dirname + '/views'); We are watching the /views folder for changes. And now for the middleware: var sendevent = require('sendevent'); var watch = require('watch'); var uglify = require('uglify-js'); var fs = require('fs'); var ENV = process.env.NODE_ENV || 'development'; // create && minify static JS code to be included in the page var polyfill = fs.readFileSync(__dirname + '/assets/eventsource-polyfill.js', 'utf8'); var clientScript = fs.readFileSync(__dirname + '/assets/client-script.js', 'utf8'); var script = uglify.minify(polyfill + clientScript, { fromString: true }).code; function reloadify(app, dir) { if (ENV !== 'development') { app.locals.watchScript = ''; return; } // create a middlware that handles requests to `/eventstream` var events = sendevent('/eventstream'); app.use(events); watch.watchTree(dir, function (f, curr, prev) { events.broadcast({ msg: 'reload' }); }); // assign the script to a local var so it's accessible in the view app.locals.watchScript = '<script>' + script + '</script>'; } module.exports = reloadify; As you might have noticed, if the environment isn't set to 'development' the middleware won't do anything. This means we won't have to remove it for production. The frontend JS file is pretty simple, it will just listen to the SSE messages and reload the page when needed: (function() { function subscribe(url, callback) { var source = new window.EventSource(url); source.onmessage = function(e) { callback(e.data); }; source.onerror = function(e) { if (source.readyState == window.EventSource.CLOSED) return; console.log('sse error', e); }; return source.close.bind(source); }; subscribe('/eventstream', function(data) { if (data && /reload/.test(data)) { window.location.reload(); } }); }()); The eventsource-polyfill.js is Remy Sharp's polyfill for SSE. Last but not least, the only thing left to do is to include the frontend generated script into the page ( /views/home.html ) using the view helper: ... <%- watchScript %> ... Now every time you make a change to the home.html page the browser will automatically reload the home page of the server for you ( http://localhost:1337/ ). 
2 Blocking the event loop

Since Node.js runs on a single thread, anything that blocks the event loop blocks everything else. That means that if you have a web server with a thousand connected clients and you happen to block the event loop, every client will just...wait.

Here are some examples on how you might do that (maybe unknowingly):

- Parsing a big json payload with the JSON.parse function;
- Trying to do syntax highlighting on a big file on the backend (with something like Ace or highlight.js); or
- Parsing a big output in one go (such as the output of a git log command from a child process).

The thing is that you may do these things unknowingly, because parsing a 15 Mb output doesn't come up that often, right? It's enough for an attacker to catch you off-guard and your entire server will be DDOS-ed.

Luckily you can monitor the event loop delay to detect anomalies. This can be achieved either via proprietary solutions such as StrongOps or by using open-source modules such as blocked. The idea behind these tools is to accurately track the time spent between an interval repeatedly and report it. The time difference is calculated by getting the time at moment A and moment B, subtracting the time at moment A from moment B and also subtracting the time interval.

Below there's an example on how to achieve that. It does the following:

- Retrieves the high-resolution time difference between the current time and the time passed as a param;
- Determines the delay of the event loop at regular intervals;
- Displays the delay in green, or in red in case it exceeds the threshold.

To see it in action, every 300 milliseconds a heavy computation is executed. The source code for the example is the following:

var chalk = require('chalk');

var getHrDiffTime = function(time) {
  // ts = [seconds, nanoseconds]
  var ts = process.hrtime(time);
  // convert seconds to milliseconds and nanoseconds to milliseconds as well
  return (ts[0] * 1000) + (ts[1] / 1000000);
};

var outputDelay = function(interval, maxDelay) {
  maxDelay = maxDelay || 100;

  var before = process.hrtime();

  setTimeout(function() {
    var delay = getHrDiffTime(before) - interval;

    if (delay < maxDelay) {
      console.log('delay is %s', chalk.green(delay));
    } else {
      console.log('delay is %s', chalk.red(delay));
    }

    outputDelay(interval, maxDelay);
  }, interval);
};

outputDelay(300);

// heavy stuff happening every 2 seconds here
setInterval(function compute() {
  var sum = 0;

  for (var i = 0; i <= 999999999; i++) {
    sum += i * 2 - (i + 1);
  }
}, 2000);

You must install chalk before running it. After running the example you should see the delay values printed to the terminal every 300 milliseconds: green while they stay under the threshold, red when the heavy computation blocks the loop.

As said before, existing open-source modules such as blocked work similarly, so use them with confidence. If you couple this technique with profiling, you can determine exactly which part of your code caused the delay.

3 Executing a callback multiple times

How many times have you saved a file and reloaded your Node web app only to see it crash really fast? The most likely scenario is that you executed the callback twice, meaning you forgot to return after the first time.

Let's create an example to replicate this situation. We will create a simple proxy server with some basic validation. To use it install the request dependency, run the example and open (for instance) http://localhost:1337/?url=http://www.google.com/.
The source code for our example is the following:

var request = require('request');
var http = require('http');
var url = require('url');

var PORT = process.env.PORT || 1337;

var expression = /[-a-zA-Z0-9@:%_\+.~#?&//=]{2,256}\.[a-z]{2,4}\b(\/[-a-zA-Z0-9@:%_\+.~#?&//=]*)?/gi;
var isUrl = new RegExp(expression);

var respond = function(err, params) {
  var res = params.res;
  var body = params.body;
  var proxyUrl = params.proxyUrl;

  res.setHeader('Content-type', 'text/html; charset=utf-8');

  if (err) {
    console.error(err);
    res.end('An error occurred. Please make sure the domain exists.');
  } else {
    res.end(body);
  }
};

http.createServer(function(req, res) {
  var queryParams = url.parse(req.url, true).query;
  var proxyUrl = queryParams.url;

  if (!proxyUrl || (!isUrl.test(proxyUrl))) {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.write("Please provide a correct URL param. For ex: ");
    res.end("<a href='http://localhost:1337/?url=http://www.google.com/'>http://localhost:1337/?url=http://www.google.com/</a>");
  } else {
    // ------------------------
    // Proxying happens here
    // TO BE CONTINUED
    // ------------------------
  }
}).listen(PORT);

The source code above contains almost everything except the proxying itself, because I want you to take a closer look at it:

request(proxyUrl, function(err, r, body) {
  if (err) {
    respond(err, {
      res: res,
      proxyUrl: proxyUrl
    });
  }

  respond(null, {
    res: res,
    body: body,
    proxyUrl: proxyUrl
  });
});

In the callback we have handled the error condition, but forgot to stop the execution flow after calling the respond function. That means that if we enter a domain that doesn't host a website, the respond function will be called twice and we will get the following message in the terminal:

Error: Can't set headers after they are sent.
    at ServerResponse.OutgoingMessage.setHeader (http.js:691:11)
    at respond (/Users/alexandruvladutu/www/airpair-2/3-multi-callback/proxy-server.js:18:7)

This can be avoided either by using the `return` statement or by wrapping the 'success' callback in the `else` statement:

request(.., function(..params) {
  if (err) { return respond(err, ..); }

  respond(..);
});

// OR:

request(.., function(..params) {
  if (err) {
    respond(err, ..);
  } else {
    respond(..);
  }
});

4 The Christmas tree of callbacks (Callback Hell)

Every time somebody wants to bash Node they come up with the 'callback hell' argument. Some of them see callback nesting as unavoidable, but that is simply untrue. There are a number of solutions out there to keep your code nice and tidy, such as:

- Using control flow modules (such as async);
- Promises; and
- Generators.

We are going to create a sample application and then refactor it to use the async module. The app will act as a naive frontend resource analyzer which does the following:

- Checks how many scripts / stylesheets / images are in the HTML code;
- Outputs their total number to the terminal;
- Checks the content-length of each resource; then
- Puts the total length of the resources to the terminal.

Besides the async module, we will be using the following npm modules (installation sketch just below):

- request for getting the page data (body, headers, etc).
- cheerio as jQuery on the backend (DOM element selector).
- once to make sure our callback is executed once.
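Assuming a standard npm setup, the dependencies can be installed in one go; and since once is on the list anyway, here's a hedged sketch of how it also guards against the double-callback bug from section 3 (getData and doSomethingAsync are illustrative names, not part of the app below):

npm i async request cheerio once

var once = require('once');

function getData(cb) {
  cb = once(cb); // any accidental second invocation becomes a no-op

  doSomethingAsync(function(err, result) {
    if (err) { cb(err); }
    cb(null, result); // bug: still reached after an error, but now harmless
  });
}

This doesn't excuse forgetting the return statement, but it turns a crash into a silent no-op while you track the bug down. With the modules installed, the naive callback version (before.js) looks like this: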
var URL = process.env.URL;
var assert = require('assert');
var url = require('url');
var request = require('request');
var cheerio = require('cheerio');
var once = require('once');

var isUrl = new RegExp(/[-a-zA-Z0-9@:%_\+.~#?&//=]{2,256}\.[a-z]{2,4}\b(\/[-a-zA-Z0-9@:%_\+.~#?&//=]*)?/gi);

assert(isUrl.test(URL), 'must provide a correct URL env variable');

request({ url: URL, gzip: true }, function(err, res, body) {
  if (err) { throw err; }

  if (res.statusCode !== 200) {
    return console.error('Bad server response', res.statusCode);
  }

  var $ = cheerio.load(body);
  var resources = [];

  $('script').each(function(index, el) {
    var src = $(this).attr('src');
    if (src) { resources.push(src); }
  });

  // .....
  // similar code for stylesheets and images
  // checkout the github repo for the full version

  var counter = resources.length;
  var next = once(function(err, result) {
    if (err) { throw err; }

    var size = (result.size / 1024 / 1024).toFixed(2);

    console.log('There are ~ %s resources with a size of %s Mb.', result.length, size);
  });

  var totalSize = 0;

  resources.forEach(function(relative) {
    var resourceUrl = url.resolve(URL, relative);

    request({ url: resourceUrl, gzip: true }, function(err, res, body) {
      if (err) { return next(err); }

      if (res.statusCode !== 200) {
        return next(new Error(resourceUrl + ' responded with a bad code ' + res.statusCode));
      }

      if (res.headers['content-length']) {
        totalSize += parseInt(res.headers['content-length'], 10);
      } else {
        totalSize += Buffer.byteLength(body, 'utf8');
      }

      if (!--counter) {
        next(null, {
          length: resources.length,
          size: totalSize
        });
      }
    });
  });
});

This doesn't look that horrible, but you can go even deeper with nested callbacks. From our previous example you can recognize the Christmas tree at the bottom, where you see indentation like this:

      if (!--counter) {
        next(null, {
          length: resources.length,
          size: totalSize
        });
      }
    });
  });
});

To run the app type the following into the command line:

$ URL=https://bbc.co.uk/ node before.js

# Sample output:
# There are ~ 24 resources with a size of 0.09 Mb.
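Before the refactor, here is a minimal sketch of the two async primitives it leans on, series and each (behavior as documented for the async module of that era; the task bodies are illustrative):

var async = require('async');

// series: run the tasks one after another, aborting on the first error
async.series([
  function first(cb) { cb(null, 'one'); },
  function second(cb) { cb(null, 'two'); }
], function(err, results) {
  // err is null here and results is ['one', 'two']
});

// each: run an async iterator over a collection (in parallel by default)
async.each(['a', 'b', 'c'], function(item, next) {
  next(); // pass an error to stop and report it
}, function(err) {
  // called once all items have finished, or with the first error
});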
After a bit of refactoring using async our code might look like the following:

```javascript
var async = require('async');

var rootHtml = '';
var resources = [];
var totalSize = 0;

var handleBadResponse = function(err, url, statusCode, cb) {
  if (!err && (statusCode !== 200)) {
    // use the parameters passed in, not outer-scope variables
    // (the original snippet referenced URL and res here, which would crash)
    err = new Error(url + ' responded with a bad code ' + statusCode);
  }

  if (err) {
    cb(err);
    return true;
  }

  return false;
};

async.series([
  function getRootHtml(cb) {
    request({ url: URL, gzip: true }, function(err, res, body) {
      // guard res, which is undefined on a network error
      if (handleBadResponse(err, URL, res && res.statusCode, cb)) { return; }

      rootHtml = body;

      cb();
    });
  },
  function aggregateResources(cb) {
    var $ = cheerio.load(rootHtml);

    $('script').each(function(index, el) {
      var src = $(this).attr('src');
      if (src) { resources.push(src); }
    });

    // similar code for stylesheets && images; check the full source for more

    setImmediate(cb);
  },
  function calculateSize(cb) {
    async.each(resources, function(relativeUrl, next) {
      var resourceUrl = url.resolve(URL, relativeUrl);

      request({ url: resourceUrl, gzip: true }, function(err, res, body) {
        // report per-resource failures through next so async.each finishes cleanly
        if (handleBadResponse(err, resourceUrl, res && res.statusCode, next)) { return; }

        if (res.headers['content-length']) {
          totalSize += parseInt(res.headers['content-length'], 10);
        } else {
          totalSize += Buffer.byteLength(body, 'utf8');
        }

        next();
      });
    }, cb);
  }
], function(err) {
  if (err) { throw err; }

  var size = (totalSize / 1024 / 1024).toFixed(2);
  console.log('There are ~ %s resources with a size of %s Mb.', resources.length, size);
});
```

5 Creating big monolithic applications

Developers new to Node come with mindsets from other languages and tend to do things differently: for example, including everything in a single file, and never breaking things out into their own modules or publishing them to NPM. Take our previous example for instance. We have pushed everything into a single file, making the code hard to test and read. But no worries, with a bit of refactoring we can make it much nicer and more modular. This will also help with 'callback hell', in case you were wondering.

If we extract the URL validator, the response handler, the request functionality and the resource processor into their own files, our main one will look like so:

```javascript
// ...
var handleBadResponse = require('./lib/bad-response-handler');
var isValidUrl = require('./lib/url-validator');
var extractResources = require('./lib/resource-extractor');
var request = require('./lib/requester');
// ...
async.series([
  function getRootHtml(cb) {
    request(URL, function(err, data) {
      if (err) { return cb(err); }

      rootHtml = data.body;

      cb(null, 123);
    });
  },
  function aggregateResources(cb) {
    resources = extractResources(rootHtml);

    setImmediate(cb);
  },
  function calculateSize(cb) {
    async.each(resources, function(relativeUrl, next) {
      var resourceUrl = url.resolve(URL, relativeUrl);

      request(resourceUrl, function(err, data) {
        if (err) { return next(err); }

        if (data.res.headers['content-length']) {
          totalSize += parseInt(data.res.headers['content-length'], 10);
        } else {
          totalSize += Buffer.byteLength(data.body, 'utf8');
        }

        next();
      });
    }, cb);
  }
], function(err) {
  if (err) { throw err; }

  var size = (totalSize / 1024 / 1024).toFixed(2);
  console.log('There are ~ %s resources with a size of %s Mb.', resources.length, size);
});
```

The request functionality might look like this:

```javascript
var handleBadResponse = require('./bad-response-handler');
var request = require('request');

module.exports = function getSiteData(url, callback) {
  request({
    url: url,
    gzip: true,
    // lying a bit about who we are
    headers: {
      'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.111 Safari/537.36'
    }
  }, function(err, res, body) {
    if (handleBadResponse(err, url, res && res.statusCode, callback)) { return; }

    callback(null, {
      body: body,
      res: res
    });
  });
};
```

Note: you can check the full example in the github repo.

Now things are simpler, way easier to read, and we can start writing tests for our app. We can go on with the refactoring and extract the response-length functionality into its own module as well.

The good thing about Node is that it encourages you to write tiny modules and publish them to NPM. You will find modules for all kinds of things, such as generating a random number in an interval. You should strive for modularity in your Node applications and keep things as simple as possible. An interesting article on how to write modules is the one from substack.

6 Poor logging

Many Node tutorials show you a small example that contains console.log here and there, so some developers are left with the impression that that's how they should implement logging in their applications. You should use something better than console.log when coding Node apps, and here's why:

- no need to use util.inspect for large, complex objects;
- built-in serializers for things like errors, request and response objects;
- support for multiple output destinations controlling where the logs go;
- automatic inclusion of the hostname, process id and application name;
- support for multiple levels of logging (debug, info, error, fatal, etc);
- advanced functionality such as log file rotation.

You can get all of those for free when using a production-ready logging module such as bunyan. On top of that you also get a handy CLI tool for development if you install the module globally.
Let's take a look at one of their examples on how to use it:

```javascript
var http = require('http');
var bunyan = require('bunyan');

var log = bunyan.createLogger({
  name: 'myserver',
  serializers: {
    req: bunyan.stdSerializers.req,
    res: bunyan.stdSerializers.res
  }
});

var server = http.createServer(function (req, res) {
  log.info({ req: req }, 'start request');  // <-- this is the guy we're testing
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World\n');
  log.info({ res: res }, 'done response');  // <-- this is the guy we're testing
});

server.listen(1337, '127.0.0.1', function() {
  log.info('server listening');

  var options = {
    port: 1337,
    hostname: '127.0.0.1',
    path: '/path?q=1#anchor',
    headers: { 'X-Hi': 'Mom' }
  };

  var req = http.request(options, function(res) {
    res.resume();
    res.on('end', function() {
      process.exit();
    });
  });

  req.write('hi from the client');
  req.end();
});
```

If you run the example in the terminal you will see something like the following:

```
$ node server.js
{"name":"myserver","hostname":"MBP.local","pid":14304,"level":30,"msg":"server listening","time":"2014-11-16T11:30:13.263Z","v":0}
{"name":"myserver","hostname":"MBP.local","pid":14304,"level":30,"req":{"method":"GET","url":"/path?q=1#anchor","headers":{"x-hi":"Mom","host":"127.0.0.1:1337","connection":"keep-alive"},"remoteAddress":"127.0.0.1","remotePort":61580},"msg":"start request","time":"2014-11-16T11:30:13.271Z","v":0}
{"name":"myserver","hostname":"MBP.local","pid":14304,"level":30,"res":{"statusCode":200,"header":"HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nDate: Sun, 16 Nov 2014 11:30:13 GMT\r\nConnection: keep-alive\r\nTransfer-Encoding: chunked\r\n\r\n"},"msg":"done response","time":"2014-11-16T11:30:13.273Z","v":0}
```

But in development it's better to use the CLI tool (the original article shows a screenshot of its colorized output here).

As you can see, bunyan gives you a lot of useful information about the current process, which is vital in production. Another handy feature is that you can pipe the logs into a stream (or multiple streams).

7 No tests

We should never consider our applications 'done' if we didn't write any tests for them. There's really no excuse for that, considering how many existing tools we have:

- testing frameworks: mocha, jasmine, tape and many others;
- assertion modules: chai, should.js;
- modules for mocks, spies, stubs or fake timers, such as sinon;
- code coverage tools: istanbul, blanket.

The convention for NPM modules is that you specify a test command in your package.json, for example:

```json
{
  "name": "express",
  ...
  "scripts": {
    "test": "mocha --require test/support/env --reporter spec --bail --check-leaks test/ test/acceptance/",
    ...
  }
}
```

Then the tests can be run with `npm test`, regardless of the testing framework used.

Another thing you should consider for your projects is to enforce having all your tests pass before committing. Fortunately it is as simple as doing `npm i pre-commit --save-dev`; the pre-commit module simply runs `npm test` automatically for you as a pre-commit hook. You can also decide to enforce a certain code coverage level and deny commits that don't adhere to that level.

In case you are not sure how to get started with writing tests, you can either find tutorials online or browse popular Node projects on GitHub and read through their test suites.
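To make that first step concrete, here is a minimal sketch of a mocha test for the requester module we extracted earlier. The file name and the expectations are illustrative assumptions of mine, not code from the article's repository:

```javascript
// test/requester.test.js -- illustrative only
var assert = require('assert');
var getSiteData = require('../lib/requester');

describe('requester', function() {
  it('yields a body and response for a reachable URL', function(done) {
    getSiteData('http://www.google.com/', function(err, data) {
      if (err) { return done(err); }
      assert.ok(data.body.length > 0, 'expected a non-empty body');
      assert.strictEqual(data.res.statusCode, 200);
      done();
    });
  });

  it('passes an error for an unreachable domain', function(done) {
    getSiteData('http://no-such-domain.invalid/', function(err) {
      assert.ok(err instanceof Error);
      done();
    });
  });
});
```

Note that this sketch hits the live network; in a real suite you would stub the HTTP layer with something like sinon (or a dedicated mocking module) instead of depending on outside servers.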
8 Not using static analysis tools

Instead of spotting problems in production, it's better to catch them right away in development by using static analysis tools.

Tools such as ESLint help solve a huge array of problems, such as:

- possible errors, for example: disallowing assignment in conditional expressions, or disallowing the use of debugger;
- enforcing best practices, for example: disallowing declaring the same variable more than once, or disallowing the use of arguments.callee;
- finding potential security issues, such as the use of eval() or unsafe regular expressions;
- detecting possible performance problems;
- enforcing a consistent style guide.

For a more complete set of rules, check out the ESLint rules documentation page. You should also read the configuration documents if you want to set up ESLint for your project. In case you were wondering where you can find a sample configuration file for ESLint, the Esprima project has one.

There are other similar linting tools out there, such as JSLint or JSHint. In case you want to parse the AST (abstract syntax tree) and create a static analysis tool by yourself, consider Esprima or Acorn.

9 Zero monitoring or profiling

Not monitoring or profiling a Node application leaves you in the dark. You are not aware of vital things such as event loop delay, CPU load, system load or memory usage.

There are proprietary services that take care of these things for you, such as the ones from New Relic, StrongLoop, Concurix or AppDynamics. You can also achieve this yourself with open source modules such as look, or by gluing different NPM modules together. Whatever you choose, make sure you are always aware of the status of your application at all times, unless you want to receive weird phone calls at night.

10 Debugging with console.log

When something goes bad it's easy to just insert console.log in some places and debug. After you figure out the problem, you remove the console.log debugging leftovers and move on. The problem is that the next developer (or even you) might come along and repeat the process. That's why modules like debug exist. Instead of inserting and deleting console.log, you can replace it with the debug function and just leave it there. Once the next person tries to figure out the problem, they just start the application using the DEBUG environment variable.

This tiny module has its benefits:

- unless you start the app using the DEBUG environment variable, nothing is displayed to the console;
- you can selectively debug portions of your code (even with wildcards);
- the output is beautifully colored in your terminal.

Let's take a look at their official example:

```javascript
// app.js
var debug = require('debug')('http')
  , http = require('http')
  , name = 'My App';

// fake app
debug('booting %s', name);

http.createServer(function(req, res){
  debug(req.method + ' ' + req.url);
  res.end('hello\n');
}).listen(3000, function(){
  debug('listening');
});

// fake worker of some kind
require('./worker');
```

```javascript
// worker.js
var debug = require('debug')('worker');

setInterval(function(){
  debug('doing some work');
}, 1000);
```

If we run the example with `node app.js` nothing happens, but if we include the DEBUG flag, voila (the original article shows a screenshot here; a sketch of the invocation follows below). Besides your applications, you can also use it for tiny modules published to NPM. Unlike a more complex logger, it only does the debugging job and it does it well.
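To round off the debug example, here is roughly what the invocation looks like. The timing suffixes vary from run to run, so treat this output as an illustration rather than an exact transcript:

```
$ DEBUG=http,worker node app.js
  http booting My App +0ms
  http listening +6ms
  worker doing some work +1s
  worker doing some work +1s
```

You can also use `DEBUG=*` to enable every namespace at once, including those used by third-party modules.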
Introduction {#Sec1}
============

Soybean (*Glycine max* L.) is one of the most important and widely consumed legume crops in the world. Soybean Kunitz trypsin inhibitor (SKTI), a member of the serine protease inhibitor family, is a major anti-nutritional factor in soybean seeds that can inhibit the activity of both trypsin and chymotrypsin \[[@CR1], [@CR2]\]. These inhibitors have been implicated in various physiological functions, acting as regulators of endogenous proteases, as storage proteins, and as defense molecules against plant pests and pathogens \[[@CR3]\]. In soybean seeds, SKTI is synthesized as a precursor of 217 amino acids that undergoes proteolytic processing to remove a signal peptide of 25 amino acid residues at the N terminus and a hydrophobic polypeptide of 11 amino acid residues at the C terminus, yielding a mature peptide of 181 amino acids \[[@CR4], [@CR5]\]. The mature inhibitor has a low cysteine content and forms only two disulfide bonds. Kunitz trypsin inhibitors, including SKTI, share a common structure composed of 12 anti-parallel β-strands separated by irregular loops \[[@CR6]\]. In SKTI, the positively charged side chain of the active-site residue Arg63 forms a strong electrostatic interaction with the negatively charged side chain of Asp189 in the enzyme, contributing significantly to the binding of the inhibitor to the active center of trypsin. Figure [1](#Fig1){ref-type="fig"} gives an overall view of how the active-site residue Arg63 of SKTI engages the active center of trypsin to form a stable enzyme-inhibitor complex. In this article, the inhibition kinetics of SKTI toward trypsin were investigated, and molecular docking was adopted to explain the inhibition mechanism. Combining the inhibition kinetic behavior with molecular structure modeling, we concluded that the inhibition type is irreversible rather than competitive. This may serve as a reference for understanding the inhibition mechanism of this family of Kunitz trypsin inhibitors.

Fig. 1 Three-dimensional model giving a general view. **a** SKTI (green) and its active sites (yellow). **b** The interactions between SKTI (yellow) and trypsin (red)

Trypsin inhibitors are important biochemical substances. Traditionally, SKTI was extracted from soybean seeds, which limited its large-scale application in agriculture and the clinic because of the high cost of preparation \[[@CR7], [@CR8]\]. With the development of transgenic technology, the *Escherichia coli* host has been widely used as a tool to produce various recombinant proteins. Production of recombinant protein provides a suitable route for commercializing medical products \[[@CR9]\]. Another advantage of producing recombinant proteins is better safety in comparison with material expressed in animal cells. Perhaps because of the inhibitory ability of SKTI toward serine proteases, there have been few reports on the recombinant expression of SKTI in prokaryotes \[[@CR10]\]. Fortunately, there have been many studies on the recombinant expression of SKTI in plants to obtain pest-resistant plants \[[@CR11]--[@CR14]\], which provided some guidance and experience for us. Here, we report the successful expression of recombinant SKTI (rSKTI) in an *E. coli* system. In addition, the refolding conditions of rSKTI inclusion bodies were optimized. The technology should be useful for the production and study of other Kunitz trypsin inhibitors.
Biochemical properties of both SKTI and rSKTI were investigated in this study, including the optimum pH and temperature, pH and thermal stability, and inhibition kinetic behavior. Some of these were studied here for the first time, and the results should be useful for the application of the inhibitor.

Materials and Methods {#Sec2}
=====================

Materials {#Sec3}
---------

The synthesis and analysis of the SKTI gene sequence were performed by Generay Biotechnology Corporation (Shanghai, China). The recombinant trypsin was acquired from Yaxin Biotechnology Limited Company (Shanghai, China). The natural soybean Kunitz trypsin inhibitor (SKTI) and *N*-benzoyl-[l]{.smallcaps}-arginine ethyl ester (BAEE) were purchased from Sigma Co. (USA). All other reagents were of analytical grade.

Construction of the Expression Strain for rSKTI {#Sec4}
-----------------------------------------------

The gene of SKTI was designed according to the codon bias of *E. coli* and synthesized based on the primary sequence of SKTI from the Uniprot database (accession number P01070). The gene was cloned into the pET-28a (+) expression vector (Novagen) using the *NdeI* (upstream) and *HindIII* (downstream) cloning sites and then transformed into *E. coli* BL21 (DE3) strains held in our laboratory.

Expression and Refolding of rSKTI {#Sec5}
---------------------------------

The *E. coli* BL21 (DE3) strains were routinely cultivated at 37 °C in Luria-Bertani medium containing 50 μg/mL kanamycin. When the cells reached an optical density (OD600) of 0.9, measured by UV spectrophotometry, they were induced with isopropyl-β-[d]{.smallcaps}-thiogalactopyranoside (IPTG) at a final concentration of 0.5 mM. After growing for an additional 4 h at 37 °C, the cells were harvested by centrifugation at 6000 rpm for 20 min and lysed by ultrasonication. Inclusion bodies were then separated by centrifugation at 12000 rpm for 15 min at 4 °C. Triton X-100 (0.5%, v/v) was used as a detergent to purify the inclusion bodies, which were subsequently washed three times with 20 mM Tris-HCl buffer (pH 8.0) to eliminate the Triton X-100. Purified inclusion bodies were denatured and then diluted in refolding buffer; the final protein concentration in the refolding buffer was 1 mg/mL. An L25(5^6^) orthogonal experimental design was adopted to screen three key refolding factors (Table [1](#Tab1){ref-type="table"}). The optimum level for each factor was determined according to the activity of rSKTI after refolding. Single-factor experiments were then carried out to further optimize the yield.

Table 1 Orthogonal design of the rSKTI refolding condition

  Level   pH     Temperature (°C)   Redox couple (GSH+GSSG, mM)
  ------- ------ ------------------ ----------------------------
  1       8.5    4                  0+0
  2       9.0    10                 1+0
  3       9.5    16                 1+0.125
  4       10.0   23                 1+0.25
  5       10.5   30                 4+1

Purification of rSKTI {#Sec6}
---------------------

The refolded rSKTI was purified by DEAE-FF anion-exchange chromatography (Best Chrom, China). The activated rSKTI was eluted from the column with a linear 0--500 mM NaCl gradient in 20 mM Tris-HCl, pH 8.0. The active fractions were pooled and stored at −20 °C for further study.

SKTI Activity Assay {#Sec7}
-------------------

The SKTI activity assay is based on the United States Pharmacopoeia (USP) method with slight modification \[[@CR15]\]; the activity against trypsin was assayed from the difference in trypsin activity in the absence or presence of SKTI. The trypsin activity of the positive control group (without SKTI) and the experiment group (with SKTI) was measured in 67 mM phosphate buffer (PB, pH 7.6) at 25 °C with BAEE as substrate.
Here, the positive control group consisted of 50 μL of 1.35 mg/mL trypsin and 950 μL of 67 mM PB, pH 7.6. The experiment group consisted of 50 μL of 1.35 mg/mL trypsin, 50 μL of 0.63 mg/mL SKTI, and 900 μL of 67 mM PB, pH 7.6. One trypsin inhibitor unit (EPU) is defined as the amount that decreases the activity of two trypsin units by 50%, where one trypsin unit is defined as the amount that hydrolyzes 1.0 μmol of BAEE per second at pH 7.6 and 25 °C. The activity of SKTI was calculated as follows:

$$\text{Activity of SKTI (EPU/L)} = \frac{\Delta A_{253,\mathrm{U0}} - \Delta A_{253,\mathrm{U1}}}{0.001 \times 270 \times 60 \times t \times V} \times \mathrm{df} \times 1000$$

where ΔA~253,U0~ and ΔA~253,U1~ are the changes in absorbance at 253 nm within the scheduled time for the positive control group and the experiment group, respectively; *t* is the reaction time, 3 min; *V* is the volume of reaction solution, 100 μL; 0.001 is the change in absorbance corresponding to one trypsin unit; 270 is the conversion coefficient of the FIP unit (one FIP unit is equal to 270 BAEE units); 60 is the conversion coefficient of the EPU unit (one EPU unit is equal to 60 FIP units); df is the dilution factor of SKTI in the reaction solution, 20; and 1000 is the conversion coefficient of the volume unit.

Protein content was measured according to the BCA method \[[@CR16]\], using bovine serum albumin as the standard protein. All assays were performed in triplicate.

Biochemical Properties {#Sec8}
----------------------

### Effects of Temperature on Activities and Stabilities of SKTI and rSKTI {#Sec9}

The effects of temperature on the activities of SKTI and rSKTI were investigated. Both the positive control group U0 (1.35 mg/mL trypsin in 67 mM PB, pH 7.6) and the experiment group U1 (1.35 mg/mL trypsin plus 0.63 mg/mL SKTI in 67 mM PB, pH 7.6) were incubated at different temperatures ranging from 4 to 65 °C and kept for 2 h. The remaining trypsin activity was measured and the inhibitory activity of SKTI was calculated as described in "[SKTI Activity Assay](#Sec7){ref-type="sec"}." The inhibition rate was calculated with the following equation. The results are expressed as the mean value ± S.D. of triplicate assays.

$$\text{Inhibition rate} = \left(1 - \mathrm{U1}/\mathrm{U0}\right) \times 100\%$$

where U0 and U1 are the trypsin activities of the positive control group and the experiment group, respectively.

For the thermal stability of SKTI and rSKTI, the inhibitor (0.63 mg/mL SKTI or rSKTI in 67 mM PB, pH 7.6) was incubated at the same range of temperatures as above. The trypsin activity was measured and the inhibitory activity was calculated as described in "[SKTI Activity Assay](#Sec7){ref-type="sec"}." The results are expressed as the mean value ± S.D. of triplicate assays.
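To make the unit arithmetic of the activity equation (Eq. 1) concrete, here is a worked instance with illustrative absorbance readings that are assumptions of this text, not measured values. Taking ΔA~253,U0~ = 0.90 and ΔA~253,U1~ = 0.36 over the 3-min reaction and applying the equation exactly as stated:

$$\frac{0.90 - 0.36}{0.001 \times 270 \times 60 \times 3 \times 100} \times 20 \times 1000 \approx 2.2\ \text{EPU/L}$$

The absorbance values were chosen only to show how the conversion coefficients and the dilution factor combine.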
### Effects of pH on Activities and Stabilities of SKTI and rSKTI {#Sec10}

The effects of pH on the activities of SKTI and rSKTI were investigated. Both the positive control group U0 (1.35 mg/mL trypsin) and the experiment group U1 (1.35 mg/mL trypsin plus 0.63 mg/mL SKTI) were prepared in buffers of different pH, including 100 mM HAc-NaAc (pH 3.0--6.0), 100 mM Tris-HCl (pH 7.0--8.0) and 100 mM Gly-NaOH (pH 9.0--11.0), and then kept at 25 °C for 12 h. The remaining trypsin activity was measured and the inhibitory activity of SKTI was calculated as described in "[SKTI Activity Assay](#Sec7){ref-type="sec"}." The inhibition rate was calculated with Eq. ([2](#Equ2){ref-type=""}). The results are expressed as the mean value ± S.D. of triplicate assays.

For the pH stability of SKTI and rSKTI, both inhibitors (0.63 mg/mL) were incubated in the above buffer solutions at 25 °C for 12 h. The trypsin activity was measured and the inhibitory activity was calculated as described in "[SKTI Activity Assay](#Sec7){ref-type="sec"}." The results are expressed as the mean value ± S.D. of triplicate assays.

### Effects of Metal Ions and Organic Solvents on Stabilities of SKTI and rSKTI {#Sec11}

Solutions (100 mM) of various metal ions were prepared, including Ca^2+^, Ba^2+^, Mg^2+^, Co^2+^, Cu^2+^, Mn^2+^, Ni^2+^, Fe^3+^, Zn^2+^, and Al^3+^. Both SKTI and rSKTI (0.63 mg/mL) were prepared in 67 mM Tris-HCl buffer (pH 7.6) with the above metal ions at a final concentration of 1 mM and kept at 25 °C for 4 h. Both SKTI and rSKTI (0.63 mg/mL) were also prepared with various organic solvents at a final concentration of 10% (v/v), including dimethyl sulfoxide (DMSO), methanol (MeOH), ethanol (EtOH), acetonitrile (ACN), glycerin, acetone, epoxy chloropropane (EPI), diisopropyl ether (DIPE), and isoamyl alcohol (IAOH), and kept at 25 °C for 4 h. The trypsin activity was measured and the inhibitory activity of SKTI was calculated as described in "[SKTI Activity Assay](#Sec7){ref-type="sec"}," the only difference being that the 67 mM PB buffer (pH 7.6) was substituted with 67 mM Tris-HCl buffer (pH 7.6) for the activity assay. The results are expressed as the mean value ± S.D. of triplicate assays.

Kinetic Parameter Assay {#Sec12}
-----------------------

The kinetic parameters, including the Michaelis constant (*K*~m~) and maximal velocity (*V*~max~), were determined on the basis of the Michaelis-Menten equation by the Lineweaver-Burk method in the presence and absence of inhibitor. The substrate (BAEE) concentrations were 0.01, 0.015, 0.02, 0.03, 0.05, and 0.075 mM. All assays were performed in triplicate at 25 °C in 67 mM PB (pH 7.6). The value of the inhibition constant (*K*~I~) was calculated with the following equation:

$$V_{\max}/V'_{\max} = 1 + [I]/K_I$$

where *V*~max~ and *V*′~max~ are the maximal trypsin activities in the absence and presence of inhibitor, respectively; \[*I*\] is the concentration of inhibitor; and *K*~I~ is the inhibition constant.
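For readers less familiar with the Lineweaver-Burk method, the parameters come from the standard double-reciprocal form of the Michaelis-Menten equation:

$$\frac{1}{v} = \frac{K_m}{V_{\max}} \cdot \frac{1}{[S]} + \frac{1}{V_{\max}}$$

Plotting 1/*v* against 1/\[*S*\] gives a straight line whose y-intercept yields *V*~max~ and whose slope yields *K*~m~. For a purely non-competitive inhibitor, the x-intercept (−1/*K*~m~) stays fixed while the slope and y-intercept increase, which is the pattern reported in the Results below.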
Molecular Modeling {#Sec13}
------------------

SWISS-MODEL (<http://swissmodel.expasy.org/>) was used for homology modeling of SKTI and trypsin. AutoDock Vina (<http://vina.scripps.edu/>) was used for protein-protein molecular docking and for screening feasible results. All PDB files of the protein structures were visualized in PyMol (<http://pymol.sourceforge.net/>).

Results {#Sec14}
=======

Expression of rSKTI {#Sec15}
-------------------

SDS-PAGE analysis of the expression of rSKTI in *E. coli* BL21 (DE3) showed a prominent protein band of about 20.1 kDa, corresponding to the theoretical molecular weight of rSKTI (Fig. [2](#Fig2){ref-type="fig"}). Analysis of the soluble and insoluble fractions after cell disruption by ultrasonication showed that the target protein band appeared mainly in the insoluble fraction, indicating that rSKTI was expressed as inclusion bodies (Fig. [2](#Fig2){ref-type="fig"}, line 4).

Fig. 2 Fifteen percent SDS-PAGE analysis of the expression of rSKTI from *E. coli*. Line M, molecular weight marker; line 1, cell lysate before induction; line 2, cell lysate after 0.5 mM IPTG induction; line 3, soluble protein; line 4, insoluble protein

Optimizing Refolding of rSKTI {#Sec16}
-----------------------------

Several key factors influencing refolding, such as pH, temperature, and the redox couple, were selected for the orthogonal experiment. The analysis of the effect of each factor on refolding is shown in Fig. [3](#Fig3){ref-type="fig"}, based on the 25 separate groups of the orthogonal design. The optimal conditions for rSKTI refolding were pH 9.5, 16 °C, 1 mM GSH and pH 9.5, 23 °C, 1 mM GSH.

Fig. 3 Effects of pH, temperature (T) and redox couple (RC, GSH + GSSG) on the activity of rSKTI in refolding buffer. Ten milligrams per milliliter of rSKTI inclusion bodies (wet weight) was dissolved in denaturation buffer and then diluted in refolding buffer (1:10 ratio)

Based on the orthogonal results, a more detailed single-factor experiment was designed to further optimize the refolding conditions, namely the refolding buffer pH, temperature, and Gly-NaOH buffer concentration. The results are shown in Fig. [4](#Fig4){ref-type="fig"} a, b, and c, respectively. Considering that the denaturation conditions would affect the refolding recovery, the key denaturation factors, namely the concentrations of β-mercaptoethanol and of inclusion bodies in the denaturation buffer, were investigated as well. The results are shown in Fig. [4](#Fig4){ref-type="fig"} d and e.

Fig. 4 Optimization of various factors on rSKTI refolding. **a** pH. **b** Temperature. **c** Concentration of Gly-NaOH buffer. **d** Concentration of β-mercaptoethanol. **e** Contents of inclusion bodies. The results of each series are expressed as the mean ± S.D. of triplicate assays

Taking these results together, a final protocol was established for rSKTI unfolding and refolding. The rSKTI inclusion bodies (10 mg/mL) were dissolved in denaturation buffer (50 mM Gly-NaOH, 8 M urea, 10 mM EDTA, and 5 mM β-mercaptoethanol, pH 9.5) for 2 h and then diluted in refolding buffer (50 mM Gly-NaOH, 1 mM EDTA, and 1 mM GSH, pH 9.5) at a ratio of 1:10 (v:v) at 20 °C for 20 h.

Purification of rSKTI {#Sec17}
---------------------

DEAE-FF anion-exchange chromatography was used to concentrate and purify the active rSKTI successfully. The eluates were highly pure, as shown by 15% SDS-PAGE analysis (Fig. [5](#Fig5){ref-type="fig"}). The final activity yield of rSKTI reached 70%, and the specific activity of rSKTI was improved more than three-fold after purification. All data are summarized in Table [2](#Tab2){ref-type="table"}. The active fractions from tube no. 14 to tube no. 24 were pooled and concentrated by ultrafiltration; the rSKTI concentration after this step was 1.9 mg/mL.

Fig. 5 Purification of rSKTI with DEAE-FF anion-exchange chromatography.
**a** The elution curve of rSKTI. The sample was eluted with a linear gradient of 0--500 mM NaCl in 20 mM Tris-HCl buffer (pH 8.0). **b** Fifteen percent SDS-PAGE analysis of eluate samples. Lane M, molecular weight marker; lane 1, loaded sample; lane 2, flow-through sample during loading; lane 3, flow-through sample at equilibrium; lanes 4\~14, the purified rSKTI protein

Table 2 Summary of rSKTI purification

  Step                  Volume (mL)   Total activity (EPU)   Total protein (mg)   Specific activity (EPU/mg pro)   Yield of protein (%)   Yield of activity (%)
  --------------------- ------------- ----------------------- -------------------- --------------------------------- ---------------------- -----------------------
  Before purification   200           53.50                   100.45               0.53                              100                    100
  After purification    110           37.13                   30.95                1.68                              30.81                  69.40

rSKTI Activity Assay {#Sec18}
--------------------

The IC50 value, defined as the concentration of rSKTI at which half of the trypsin activity was inhibited, was about 0.5 mg/mL. Moreover, trypsin activity was completely inhibited by rSKTI at a concentration of 1.5 mg/mL (Fig. [6](#Fig6){ref-type="fig"}). Based on the concentrations of trypsin and rSKTI, a stable binary complex appears to form between rSKTI and trypsin in equimolar amounts.

Fig. 6 Inhibitory effect of rSKTI against trypsin. Trypsin (1.35 mg/mL) was inhibited by increasing amounts of rSKTI. The value of 100% activity refers to trypsin activity without rSKTI. Each value represents an experiment performed in triplicate (mean ± S.D.)

Biochemical Properties {#Sec19}
----------------------

### Effects of Temperature on Activities of SKTI and rSKTI {#Sec20}

As the temperature rose from 4 to 65 °C, the trypsin activity of the positive control group (trypsin without SKTI or rSKTI) increased and then decreased, reaching its maximum at 37 °C. Meanwhile, the trypsin activity of the treatment group (trypsin containing SKTI or rSKTI) remained almost constant. The optimum temperature of the inhibitor, for both rSKTI and wild-type SKTI, was 35 °C (Fig. [7a](#Fig7){ref-type="fig"}). It is noticeable that both rSKTI and SKTI completely lost activity against trypsin when the temperature exceeded 65 °C.

Fig. 7 Biochemical properties of SKTI and rSKTI. **a** Optimal temperature. **b** Thermal stability. **c** Optimal pH. **d** pH stability. **e** Metal ions. **f** Organic solvents. The results of each series are expressed as the mean ± S.D. of triplicate assays

### Thermal Stability of SKTI and rSKTI {#Sec21}

SKTI and rSKTI were quite stable below 37 °C (Fig. [7b](#Fig7){ref-type="fig"}). After incubation for 12 h at 50 °C, rSKTI retained 50% of its activity; in comparison, SKTI retained more than 60%. After incubation for 12 h at 65 °C, rSKTI completely lost its activity against trypsin, and SKTI retained only about 15% of its original activity. Overall, there was no significant difference between the stabilities of SKTI and rSKTI.

### Effects of pH on Activities of SKTI and rSKTI {#Sec22}

Using BAEE as the substrate, the optimal pH was determined to be 8.0 for both SKTI and rSKTI (Fig. [7c](#Fig7){ref-type="fig"}). With increasing pH, the inhibitory activity of rSKTI rose and then declined, reaching its maximum at pH 8.0. Under acidic conditions, such as below pH 4.0, rSKTI showed no activity against trypsin, presumably because it could not bind to trypsin once the side chain of the Asp189 residue in the active center of trypsin was protonated. Almost the same tendency and the same optimum pH were obtained for SKTI.

### pH Stability of SKTI and rSKTI {#Sec23}

rSKTI was stable at pH 7.0--11.0, retaining more than 90% of its activity, while in the pH 3.0 and pH 4.0 buffers it lost 50% and 30% of its activity, respectively.
We therefore conclude that rSKTI is stable under neutral and alkaline conditions (Fig. [7d](#Fig7){ref-type="fig"}). The pH stability of SKTI showed the same tendency.

### Effects of Metal Ions and Organic Solvents on Stabilities of SKTI and rSKTI {#Sec24}

We observed that the inhibitory activity against trypsin was reduced by Co^2+^, Mn^2+^, Fe^3+^, and Al^3+^, while it was less affected by the other metal ions (Fig. [7e](#Fig7){ref-type="fig"}). The activity of rSKTI was reduced to 80% by Co^2+^, 85% by Mn^2+^, and 60% by Fe^3+^ and Al^3+^. Similarly, the activity of the wild type was reduced to 85% by Co^2+^, 75% by Mn^2+^, 65% by Fe^3+^, and 30% by Al^3+^. Co^2+^ and Mn^2+^, as heavy metal ions, may denature the protein; Fe^3+^, with its strong oxidizing ability, may oxidize the protein; and Al^3+^ readily binds to proteins, interfering with the catalytic reactions of the enzyme. In general, organic solvents had little effect on the activity of the inhibitors (SKTI and rSKTI), possibly because the hydrolysis reaction of trypsin occurs mainly in the aqueous phase rather than the organic phase (Fig. [7f](#Fig7){ref-type="fig"}). However, the activities of rSKTI and SKTI were reduced to 85% and 75%, respectively, by epichlorohydrin.

Kinetic Parameter Assay {#Sec25}
-----------------------

The kinetic parameters of trypsin in the presence and absence of the inhibitors (SKTI or rSKTI) were determined, and the results are shown in Fig. [8](#Fig8){ref-type="fig"}. The *K*~m~ values of the three groups were the same, while the *V*~max~ values differed (Table [3](#Tab3){ref-type="table"}). This indicates that binding of SKTI to trypsin did not alter the affinity between trypsin and substrate but reduced the catalytic rate of trypsin, a pattern consistent with non-competitive inhibition. The *K*~I~ value was calculated as 2.2 μM for rSKTI and 1.67 μM for SKTI. The specific activity of rSKTI was 0.71 EPU/mg pro, lower than that of SKTI (0.85 EPU/mg pro). Differences in purity between the two preparations may account for the difference in *K*~I~ values.

Fig. 8 Lineweaver-Burk plot analysis of the inhibition kinetics. Assay of trypsin (1.35 mg/mL) activity in the presence and absence of SKTI (0.63 mg/mL) or rSKTI (0.63 mg/mL)

Table 3 Kinetic parameters of trypsin in various conditions

  Group                  *K*~m~ (mM)   *V*~max~ (μmol/min)
  ---------------------- ------------- ---------------------
  Trypsin (U0)           0.02          48.31
  Trypsin + rSKTI (U1)   0.02          29.15
  Trypsin + SKTI (U1)    0.02          24.87

Discussion {#Sec26}
==========

In this study, Lineweaver-Burk analysis of the inhibition kinetics of rSKTI against trypsin showed an unchanged *K*~m~ and a decreased *V*~max~ for trypsin, meaning that the inhibition could not be overcome even by adding more substrate, the formal signature of non-competitive inhibition. In addition, the *K*~I~ value of rSKTI was 2.29 μM, far below the *K*~m~ value of trypsin (0.02 mM). Taken together, these observations suggest that the inhibition of trypsin by rSKTI is more likely of an irreversible type than a simple non-competitive one. This is contrary to the previous report that SKTI is a competitive inhibitor of trypsin \[[@CR17]\].
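As a consistency check on these numbers, *K*~I~ can be recovered from Eq. (3) and Table 3. Assuming the inhibitor concentration in the reaction mixture is the 0.63 mg/mL stock after the 20-fold dilution of the assay (≈0.0315 mg/mL), and taking a molar mass of ≈20.1 kDa, \[*I*\] ≈ 1.57 μM, and

$$K_I = \frac{[I]}{V_{\max}/V'_{\max} - 1} = \frac{1.57\ \mu M}{48.31/29.15 - 1} \approx 2.4\ \mu M\ (\text{rSKTI}), \qquad \frac{1.57\ \mu M}{48.31/24.87 - 1} \approx 1.7\ \mu M\ (\text{SKTI})$$

in good agreement with the reported values (≈2.2--2.3 μM and 1.67 μM). The dilution and molar-mass figures used here are inferred from the assay description rather than stated explicitly in the kinetics section.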
Regarding the inhibition mechanism between inhibitor and enzyme, it is generally accepted that the positively charged side chain of the Arg63 residue in SKTI forms a strong electrostatic interaction with the negatively charged side chain of Asp189 in trypsin, contributing significantly to the binding of the inhibitor to the active center of trypsin \[[@CR18], [@CR19]\]. The three-dimensional structure of the complex is shown in Fig. [9](#Fig9){ref-type="fig"}. It is interesting to note that SKTI formed a binary complex with trypsin at a ratio of 1:1 (mol:mol), which was confirmed by the trypsin inhibition assay in vitro. Figure [9](#Fig9){ref-type="fig"} b highlights the interactions between SKTI and trypsin in detail, as revealed by molecular docking. We observed five interactions between Arg63 of SKTI and the active domain of trypsin, involving the three catalytic residues (His57, Asp102, and Ser195) and one binding-site residue (Asp189). The inhibitor thus blocks the active center of trypsin and effectively prevents substrate binding. The side-chain guanidine group of Arg63 in SKTI was hydrogen-bonded to the side-chain carboxyl group of Asp189 in trypsin, at distances of 2.74 Å, 3.07 Å, and 3.27 Å, respectively. In addition, the side-chain hydroxyl group of Ser195 in trypsin acted as a proton donor, with the main-chain NH of Arg63 in SKTI as the corresponding acceptor, at a distance of 3.07 Å. The main-chain NH of Ser195 in trypsin interacted with the main-chain C=O of Arg63 in SKTI, forming a hydrogen bond of 3.08 Å.

Fig. 9 Three-dimensional model of the interaction of SKTI (yellow) with trypsin (blue). **a** Visualization of the binary SKTI-trypsin complex. **b** The distances of the hydrogen bonds formed by the residues in the active region. The inhibitor and proteinase backbones are shown as cartoons. Sticks indicate residues involved in the enzyme-inhibitor interaction

Combining the above three-dimensional structure analysis with the inhibition kinetic behavior provides evidence that the inhibition of trypsin by SKTI is irreversible. The fact that the *K*~I~ value of the inhibitor is far smaller than the *K*~m~ value of the enzyme suggests that SKTI has a higher affinity for trypsin and binds it more readily than the substrate does. Rühlmann et al. \[[@CR20]\] found a covalent bond between bovine pancreatic trypsin inhibitor (BPTI) and trypsin, based on the electron density of a tetrahedral intermediate at the active region, although the Lys15-Ala16 peptide bond at the active site of BPTI remained intact. Later research \[[@CR21]\] discovered that the scissile peptide bond of SKTI between Arg63 and Ile64 is cleaved by trypsin, since trypsin specifically digests peptide bonds formed at the carboxyl side of basic amino acids (lysine and arginine). Given that the catalytic mechanism of trypsin toward SKTI is analogous to that toward the synthetic substrate (BAEE), we further speculate that SKTI belongs to the kcat-type inhibitors of trypsin. Of course, further study is needed to validate this hypothesis.
These results provide a reference for further research on the inhibition of target proteases by other Kunitz trypsin inhibitors.

Electronic Supplementary Material
=================================

{#Sec28}

ESM 1 (DOCX 52 kb)

**Publisher's Note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This study was partially financially supported by the State Key Laboratory of Bioreactor Engineering (No. 2060204), East China University of Science and Technology.

The authors declare that they have no competing interests.
The primary purpose of a traveler is to control the twist in the mainsail. The further the traveler car is moved to windward, the more horizontal the pull and the more twist is induced in the sail. The closer the car is to being directly under the boom, the more vertical the pull and the greater the downward load on the sail, so there is greater tension in the leech of the sail and the sail carries less twist. Controlling twist is a critical part of optimizing performance. Some degree of twist is beneficial, especially in light to moderate winds.

To begin with, if you look at a sail in section (cut horizontally through the sail), it is a wing. Even very efficient wings have an 'incidence angle' and a 'slip angle'. In other words, a wing (or a sail) needs to be placed at an angle to the wind to work. For any given wind, any given sail, and any given boat in a given condition, at any given spot on the sail there is an optimum angle of attack for achieving the best performance. What I mean by 'best performance' is an increase in forward drive together with a minimizing of drag and, more importantly, a reduction in the amount of side force, which is responsible for creating heeling and leeway.

What further complicates all of this is that the wind at the top of the sail is actually different from the wind at the bottom of the sail. This is called 'gradient effect': in light to moderate winds, the wind speed typically increases the higher you get above the surface of the water. Visualize gradient effect this way: there is friction between air and water, and between air and air. Because of this friction, at the surface of the water there is a (barrier) layer of air that does not move at all relative to the water. Next to this layer of air is another layer that moves slowly over the stationary barrier layer. That layer feels the friction of the barrier layer below it and of the layer above it, which is being driven by the ambient wind. Each higher layer moves a bit more quickly than the layer below, until at some height the air moves at the speed of the ambient wind and no longer feels the effect of the barrier. In light air, this gradient effect can be dozens of feet deep. At very much higher wind speeds, the whole gradient effect, from barrier to free-flowing wind, is only a couple of inches deep. Most of us typically sail in wind speeds where the effect is somewhere in between those extremes, but typically taller than the average mast height even in a moderate wind.

In a sailboat, this means that the boat feels more true wind speed at the masthead than it does at the deck. Because of the way that apparent wind works, when close reaching or beating, the higher wind speed at the masthead produces higher apparent winds at the masthead that are also more abeam to the boat than the apparent winds felt lower in the sail. Getting back to your question, twist allows the sail to have differing attack angles as you move up the sail, each at a proper angle of attack relative to the apparent wind passing over the sail at that point. If you eliminated twist in light to moderate conditions, some of the sail would be over-trimmed and/or some of the sail would be under-trimmed for the conditions. Of course, as wind speed increases, the gradient wind effect decreases. As a result, as the wind increases in speed, twist should be reduced.
This is done by lowering the traveler, which should be used in concert with a reduction in camber (the depth of the curve in the sail), achieved by increasing halyard, outhaul and sheet tension. In a really strong breeze the sail needs a comparatively flat camber and a flat angle of attack, so the sail should be bladed out. This means maximum halyard tension, outhaul tension, backstay tension and mainsheet tension. To further reduce the angle of attack, the traveler is dropped as well. This will decrease weather helm and heeling.

Just for the record, jibs have twist as well. Twist in jibs is controlled by the jib sheet lead angles. Moving the jib sheet car aft tightens the lower sail and increases twist in the sail; moving the track forward pulls down on the leech and so decreases twist. On jibs you increase twist in really light air to open the slot, and in really heavy air to reduce heeling.

The clues to the proper amount of twist come from the telltales. On mainsails, the leech telltales at the battens provide the best information. All of the telltales up the leech should be flying when the sail is set properly. When there is inadequate twist, the telltale at the head will be stalled and sucked back into the sail. On some if not most mainsails, in moderate winds, some intermittent stalling of the upper telltale is the fastest way upwind. On jibs, the luff telltales should all be flying, and all of the telltales should 'break' evenly. On small jibs (with battens), leech telltales are very helpful with sail trim as well.

One of the problems with battenless mainsails is that it is much harder to control twist without developing leech flutter. That problem, almost as much as the smaller sail area, is what kills performance of in-mast furling sails in lighter conditions. Sails with sun guards can also have problems with leech flutter and so often require greater leech line tension, which produces a hook in the leech of the sail and some loss in performance as well.

Regards,
Jeff
Ranking the 30 NHL Jerseys: Part Three

10) St. Louis Blues – An instance, few and far between, of blue and yellow actually working beyond the high school level; these jerseys are smooth. But sadly for fans of St. Louis, they're the oldest team in the NHL to have never won a Stanley Cup. Being a Cubs fan, looking at them strictly as Blues fans, I can feel some sympathy for them having never tasted the sweet nectar of a championship. On the other hand, they get to root for the Evil Empire Cardinals, so I stop feeling any pity for them whatsoever. In fact, I now hate them, and hate myself for feeling an emotional connection to them, all after just writing that annoying baseball team's name. They're ranked 45 spots too high now that I think about it. Is it too late to go back?

9) Toronto Maple Leafs – Everything I've read about this team makes them seem like the Knicks of the NHL. Big market, big name, small results. The jerseys, on the other hand, are fantastic. The one blue, and one blue only, with a white and that same blue logo is another great "not tryin' to do too much" moment in hockey jerseys. Fun fact: one of the first hats I bought when Mac Miller made it trendy to wear snapbacks was a Maple Leafs hat. And lemme tell ya, I looked pretty cool…..I think I just set my own personal douche record right there. I feel good about it though, surprisingly.

8) Pittsburgh Penguins – I'll never stop saying it: black and gold is the singular best color combination a sports team could have. I love that all Pittsburgh teams have the same color scheme. Why don't all cities do this? There, I just organized the sports teams of seven major cities by color scheme. That wasn't so hard. If all cities would just do this, nobody will ever tune into, say, a Blackhawks game and think they're watching the Bears. Ya know? Just making it easier on people. Oh, nobody has ever once done that in the history of watching sports? Ok then, moving on…

7) Montreal Canadiens – Let's say I was sitting on one of those half couch/half bed things in Dr. Richard Nygard's office, and we were playing the "say the first thing that comes to your head" game. If he said the word "hockey", the Canadiens jersey would be the first image that would pop up. Yeah, so what if I drive all the way to Pawnee, Indiana for therapy. Dr. Richard Nygard is just the best. He came with a simply splendid recommendation from Chris Traeger. I'm a #Nygardian till the death of me.

6) Detroit Red Wings – Their red and white, while skating over the bright gleam of the ice, looks like what I imagine was going through John Lennon's mind when he wrote "Strawberry Fields Forever". I just watched that Pavel Datsyuk video on mute with Strawberry Fields playing, and I imagine that's what having sex with Kate Upton feels like. I feel like I just wrapped the softest blanket known to man around myself in front of a fireplace on a cold winter night. That was nothing short of a religious experience. Holy shit, I need a cigarette.

5) Philadelphia Flyers – If the Blues logo was on Tinder, swiped right on the Red Wings logo, met up with them in their parents' basement on Halloween and hooked up, these jerseys would be their unintentional love child. One of the few cases in sports where orange isn't completely off-putting. It's a strong, independent orange that don't need no man. And who can forget the most famous Flyers fan of them all? I am Goldberg!!! THE GOALIE!!!!!

4) San Jose Sharks – Originality here is off the charts.
Nobody else uses their colors, and their logo balances on the line between too ridiculous and just ridiculous enough better than any other team in hockey. There's also this famous story: in 2011, Sharks captain Joe Thornton was on his way to the SAP Center in San Jose for Game 3 when suddenly a real live shark flew out of nowhere and landed on the hood of Joe's car. Luckily for Joe, he always carries around his lucky harpoon and was able to kill the snarling, angry shark before the situation got serious. The Sharks won the game 4-3 that night, which ignited the legend of Shark Slayer Joe. I swear that's all completely true and not just another tall tale I invented because apparently there are only two instances of hockey players killing an animal.

3) Boston Bruins – The man responsible for this ranking isn't even a hockey player, but don't tell him that. Still the record holder, even 18 years later, for being the only guy to ever take his skate off and try to stab somebody with it, and for the most time spent in the penalty box. I'm talking about "The Amazing Golf Ball, uh, Whacker Guy", Happy Gilmore. My personal favorite hockey player of all time. Up there with Bo Jackson and Jackie Robinson as one of history's greatest two-sport athletes. RIP to Chubbs.

2) New York Rangers – I've always had an affinity for Madison Square Garden and the teams that play there. There's just something about the Knicks and the Rangers that makes me want them to be relevant. I know people always scream for the small market teams to be this, that and the other thing, but let's face it, any sport is more exciting when the New York teams are involved. New York is an incredible city; from actual experience being there, it truly is the melting pot of the world. The Rangers jerseys, unlike those of their big market brethren the Kings, meet expectations. When I have a bunch of money, and I have a job where I can wear whatever I want to work, I will absolutely wear a different sports jersey every single day. One of those will 100% be a blue Rangers Gretzky jersey. It is just a great look, and I imagine one of the few jerseys that can pump a player full of confidence just by putting it on.

1) Chicago Blackhawks – I love Chicago and I'm not afraid to admit it. It's the greatest city in the entire world. So call this ranking whatever you want, but the Blackhawks have the best jersey in hockey, and possibly the best jersey in sports. Everything about it is amazing. The way it completely pops off the ice when you're watching Patrick Kane skate by a helpless mortal trying to corral him on the wing. The perfection that is the Indian head logo. It's funny that a quiet, average kid from the South Side of Chicago had it right all along. Maybe there is deeper wisdom in my friend Brad from middle school. The ultimate don't-judge-a-book-by-its-cover story. Wise beyond his years, Brad was behind the next big thing years before anyone else and took countless amounts of abuse for it. He's a prophet, sent down from the heavens to sacrifice himself for our sins in an almost Buddha-ian way. Maybe we should all take a lesson from this. Next time you see an awkward, lanky kid roaming your middle school hallway, wearing something that's foreign to you, go up and talk to him; maybe he has a knowledge bomb to drop on your head.

What? Chicago is the best city I've been to and I live 20 miles away…the only other big North American cities I haven't been to are New York and Boston…maybe they're better? Who cares. The Blackhawks and old Nuggets jerseys are my favorites…objectively.
Enjoy your parent's basement, and troll on, you miserable bastard.
#!/usr/bin/env python3

import os
import hashlib


def file_exists(filepath):
    """Tests for existence of a file on the string filepath"""
    # test that the path exists and points at a regular file
    if os.path.exists(filepath) and os.path.isfile(filepath):
        return True
    else:
        return False


def is_supported_filetype(filepath):
    """Tests file extension to determine appropriate file type for the application"""
    testpath = filepath.lower()
    if testpath.endswith(".ttf") or testpath.endswith(".otf"):
        return True
    else:
        return False


def get_sha1(filepath):
    """Returns the SHA-1 hex digest of the file's binary contents"""
    with open(filepath, "rb") as bin_reader:
        data = bin_reader.read()
    return hashlib.sha1(data).hexdigest()
Nurse-Led Health Clinics Show Positive Outcomes. But political and fiscal challenges remain.
Q: Codename One - Storing sensitive data

By sensitive data I mean:

- Certificates
- Passwords
- other private secrets

Question 1. Is there any way for third-party applications to access this information when it is stored using the Storage class?
Question 2. I suppose using FileSystemStorage is not safe at all. Is that right?
Question 3. What is the safest way to store sensitive data in Codename One?

A: Codename One offers encryption for your sensitive data through EncryptedStorage, although it requires you to install the BouncyCastle cn1lib for it to work. You can find this lib under Codename One Extensions when you right-click on your project and go to Codename One Settings.

FileSystemStorage is safe but not totally secure: it could be accessed by another app if that app knows your app's storage path, which is usually possible on rooted Android devices.
Justice Department's No. 3 official resigns

Rachel Brand is leaving the post of associate attorney general after less than 9 months on the job.

The Justice Department's third-ranking official, Associate Attorney General Rachel Brand, is resigning her post for a job in the private sector, officials said Friday.

Brand moved to depart after an unusually brief tenure in the post: just eight-and-a-half months. She is taking a senior position at Wal-Mart, the company confirmed Friday night. The company announced Brand will join as Executive Vice President, Global Governance and Corporate Secretary, reporting directly to the company's president and CEO, Doug McMillon.

Brand has been the focus of recent speculation because she could be abruptly elevated to take over management of the investigation into alleged collusion between the Trump campaign and Russia. Deputy Attorney General Rod Rosenstein named Special Counsel Robert Mueller to lead that probe and oversees his work. But President Donald Trump has complained that Rosenstein is inadequately loyal.

The possibility that Trump could fire Rosenstein has led to talk that Brand might step into his shoes, since she's next in the department's order of succession. But Trump could appoint any presidentially confirmed official in the government, or one of hundreds of other Justice Department employees, to take Rosenstein's job on an acting basis.

Asked about the impetus for Brand's departure, one associate who asked not to be named said, "Because she is very smart, accomplished, and talented, and wants to protect her career."

At Wal-Mart, Brand will be responsible for the company's legal, global ethics and compliance, and global investigation, security, aviation and travel departments, along with her role as corporate secretary.

Another person told of Brand's move said Friday: "She got a really great offer from a Fortune 10 company. That's all there is to it."

Brand's statement, as released by the Justice Department on Friday evening, gave no hint of dissatisfaction. "The men and women of the Department of Justice impress me every day," Brand said. "I am proud of what we have been able to accomplish over my time here. I want to thank Attorney General Sessions for his leadership over this Department. I've seen firsthand his commitment to the rule of law and to keeping the American people safe."

Attorney General Jeff Sessions praised Brand's work and stressed that her decision was driven by the job she'd been offered in the business world. "Rachel Brand is a lawyer's lawyer," Sessions said in a statement. "I know the entire Department of Justice will miss her, but we join together in congratulating her on this new opportunity in the private sector. She will always remain a part of the Department of Justice family."

Brand's decision to resign appeared to take many Justice Department officials by surprise, setting off something of a scramble. A department statement said she was expected to leave her post "in the coming weeks."
A graduate of Harvard Law School and a veteran of the Justice Department's Office of Legal Policy in the George W. Bush administration, Brand has been widely discussed as a potential judicial nominee. Leaving the administration might help her avoid controversy that could complicate any future nomination.

Brand's Trump administration job included oversight of the Justice Department's civil rights division, civil division and antitrust division. She also played a key role in lobbying Congress to renew legislation authorizing monitoring of foreigners' communications through U.S.-based internet firms and telecommunications providers. Brand had expertise in the issue because she'd served on the Privacy and Civil Liberties Oversight Board during the Obama administration, studying a range of the government's anti-terrorism surveillance authorities.

When Congress passed a six-year renewal of the provision known as Section 702 last month, Justice Department officials went out of their way to draw attention to her leadership in the fight. Sessions highlighted that effort again in his statement on her departure. "When I asked her to take the lead in the Department's efforts on Section 702 re-authorization, she made this her top priority and combined her expertise and gravitas to help pass legislation keeping this crucial national security tool," the attorney general wrote.

Brand also devoted significant time to one of her longstanding passions: helping fight domestic abuse and human trafficking. She spoke at a Justice Department conference on the issue just last week. Before taking her post, she was a board member and volunteer at a battered women's shelter in Arlington, Va., Doorways for Women and Families.
Introduction
============

Epidemiological data suggest the highest rates of human immunodeficiency virus (HIV) infection in the United States are among African American men who have sex with men (MSM) \[[@ref1]\]. The 2012 National HIV Behavioral Surveillance survey demonstrated that among HIV-infected MSM in Baltimore, 48% were African American and less than 20% were white.

Alcohol use and its impact on HIV transmission and treatment are major public health burdens in many parts of the world. Reviews indicate that alcohol consumption is associated with HIV incidence \[[@ref2]\]; furthermore, alcohol is a potential cause of poorer HIV outcomes \[[@ref2]\] and is associated with lower adherence to HIV medications \[[@ref3]\]. African American MSM report significantly more drinks per drinking day compared to MSM from other race/ethnicity groups \[[@ref4]\]. Previous research in Baltimore found that, based on Alcohol Use Disorders Identification Test (AUDIT) scores, 22% of African American MSM were in the hazardous category (ie, AUDIT score 8-15) and 21% in the high risk/likely dependent category (AUDIT score ≥16) \[[@ref5]\].

Most research on substance use (alcohol and illicit drugs) and HIV has relied on self-reported information over a period of time, which varies by study. A major limitation of this method of measurement is that it is affected by recall bias (reliability and accuracy), and it may not be context specific \[[@ref6]\]. Information recall is affected by heuristics used in memory search and reconstruction, which can systematically bias participant responses \[[@ref7],[@ref8]\]. Imprecise or inaccurate information can impede the advancement of knowledge regarding alcohol use and risky sex behaviors among highly stigmatized populations \[[@ref9]\]. There is a need to improve the methodologies for behavioral data collection and to obtain a more detailed understanding of the relationship between alcohol use and HIV risk behaviors among key populations, especially African American MSM.

Mobile health (mHealth) opens new avenues for research on substance use and HIV, as ubiquitous technology allows for more frequent and close to real-time collection of behaviors, locations, and physiologies \[[@ref10]\]. Ecological Momentary Assessment (EMA), an mHealth method that utilizes mobile technologies, such as a personal digital assistant (PDA) or mobile phone, allows participants to record their daily activities on the device in real time \[[@ref11]\]. EMA minimizes biases, specifically recall bias, by requiring participants to immediately respond to random prompts or record specific events on a daily basis in their natural environments \[[@ref12]\]. EMA is an especially suitable tool to study health behaviors, such as alcohol use, which are discrete and episodic behaviors, and is ideal for event-contingent recording \[[@ref13]\]. Prior research suggests that MSM may use alcohol to cope with internal experiences (eg, stress associated with internalized homophobia) \[[@ref14]\] and situational stimuli and cues (eg, social pressure to use), making EMA an excellent method to capture these fleeting states. Empirical data from previous research of EMA have shown a high compliance rate among diverse populations, including homeless persons with crack-cocaine addiction \[[@ref15]\], heroin and cocaine users in treatment \[[@ref16]\], ecstasy users who also engaged in use of alcohol, marijuana, cocaine and hallucinogens \[[@ref17]\], and social drinkers \[[@ref18]\].
Evidence has also shown that intoxicated participants could enter data accurately on a mobile device \[[@ref13]\]. Finally, studies assessing reactivity of substance-use recording in EMA (the possibility that the repeated assessments may affect the behavior under study and thus distort the findings) have not indicated strong reactivity effects \[[@ref19],[@ref20]\].

While EMA methods have been used successfully, there have been few studies that employ these methods with African American MSM in everyday, community-dwelling, non-treatment settings. In response to this limitation, we developed EMA methods for near real-time characterization of alcohol use in individuals' natural environments. In this paper, we characterize implementation barriers and examine the feasibility, acceptability, and reactivity of using intensive EMA methods among African American MSM living in urban settings.

Methods
=======

Study Participants
------------------

Study participants were recruited through flyers and word-of-mouth in Baltimore, Maryland. Flyers were placed at the front desk of the research facility, where several other substance use and HIV projects were conducted. Inclusion criteria for this study were (1) at least 18 years of age, (2) self-reported African American race/ethnicity, (3) self-reported male sex, (4) self-reported having had sex with a male in the prior 30 days, (5) self-reported drinking alcohol at least once a week in the prior 30 days, (6) reported living within the Baltimore metropolitan area, and (7) able to understand and follow directions on how to use the mobile phone, as assessed by study staff in a one-on-one orientation session.

Study Procedures
----------------

### Recruitment/Enrollment

Between September 2013 and November 2014, a trained study coordinator or the principal investigator screened 25 individuals via phone or in person. Of those screened, 17 were eligible to participate in our study and 16 participants provided informed verbal consent to participate in the study. Participants were enrolled in 6 waves of data collection (2-3 participants/wave).

Participants first completed a baseline interview at the research facility. They were then loaned a mobile phone (Samsung Galaxy S4), and the mobile phone was reused in each wave. Once a phone was lost, a replacement phone was acquired. To protect participants' privacy, mobile phones were reset to factory settings between waves so that all personal information was deleted. Before leaving the study office and beginning mobile data collection in the field, study participants were trained on how to use the device and app by the study coordinator, and all participants were required to show their understanding of the mobile app by running a "demo" questionnaire.

### Audio-Computer Assisted Standardized Interview Surveys

At baseline, all participants completed an audio computer-assisted self-interview (ACASI), which assessed sociodemographics (eg, age, education, employment, income, and homelessness), drug and sex behaviors (eg, self-reported alcohol, tobacco, and illicit drug use and number of sex partners), clinical diagnoses (eg, self-reported HIV status), and prior experience with mobile technology. Additional data collected at baseline included depressive symptoms, assessed using the Center for Epidemiologic Studies Depression Scale (CES-D), and past-week drinking, hazardous drinking, and likely alcohol dependence, assessed via the AUDIT score (respective cut-offs: ≥8 and ≥16).
Participants returned to the research center after 1 and 4 weeks of follow-up to complete additional ACASI surveys that collected information about their behaviors during the prior week (1-week assessment) or during the prior 30 days (4-week assessment). One of the goals of the ACASI surveys was to compare aggregated responses with real-time responses from the EMA data collection.

### Mobile App

The mobile app used in this study, emocha, was created by the Center for Clinical Global Health Education at the Johns Hopkins School of Medicine. EMA surveys used in this study were adapted from prior research conducted by collaborators with drug-using populations in Baltimore \[[@ref21]\]. The following three types of EMA prompts were used in this study:

1. Random prompts: Three times daily, emocha sent an alert to the participant's phone between 10 a.m. and 10 p.m. These random prompts asked about participants' immediate mood, surroundings, and potential environmental cues that may trigger alcohol use. Participants answered no more than 46 questions in each of the random prompt surveys (including skip patterns).

2. Daily prompts: One daily alert at 9 a.m. was sent to the participant's phone. This survey consisted of questions that summarized activities during the previous 24 hours, including alcohol use, other substance use, and sexual activities. The daily survey contained up to 98 questions, depending on the number of alcoholic drinks and number of sex partners reported during the past 24 hours.

3. Event-contingent entries: Participants were instructed to initiate an electronic entry every time they finished one episode of drinking, which was defined as a cluster of alcohol use in one sitting. Event-contingent surveys asked up to 40 questions concerning the location, alcohol use expectancy, drinking partner, types and amount of alcohol, co-use of other substances, and current mood and stress level (see [Figure 1](#figure1){ref-type="fig"}).

Participants had 30 minutes after being prompted to complete surveys for both the random and daily prompts. After 30 minutes, the prompt was considered missed. Each day at 10 p.m., emocha uploaded the encrypted EMA data to a secure server and removed the data from the device.

[Figure 1: an event-contingent drinking survey in the emocha app]
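To make the prompt schedule concrete, the sketch below generates one day's prompt times under this protocol. It is illustrative only: the paper does not describe emocha's internal scheduling logic, and all function and constant names here are hypothetical.

```python
# Illustrative sketch (not the emocha implementation) of one day's EMA schedule:
# 3 random prompts between 10 a.m. and 10 p.m., a daily prompt at 9 a.m.,
# and a 30-minute response window after which a prompt is considered missed.
import random
from datetime import datetime, timedelta

RANDOM_WINDOW_START_HOUR = 10   # 10 a.m.
RANDOM_WINDOW_END_HOUR = 22     # 10 p.m.
DAILY_PROMPT_HOUR = 9           # 9 a.m. daily summary prompt
RESPONSE_WINDOW = timedelta(minutes=30)

def schedule_prompts(day: datetime, n_random: int = 3) -> dict:
    """Return the daily prompt time and n_random random prompt times for one day."""
    start = day.replace(hour=RANDOM_WINDOW_START_HOUR, minute=0, second=0, microsecond=0)
    end = day.replace(hour=RANDOM_WINDOW_END_HOUR, minute=0, second=0, microsecond=0)
    window_minutes = int((end - start).total_seconds() // 60)
    # Draw distinct minute offsets so no two random prompts coincide
    offsets = sorted(random.sample(range(window_minutes), n_random))
    return {
        "daily": day.replace(hour=DAILY_PROMPT_HOUR, minute=0, second=0, microsecond=0),
        "random": [start + timedelta(minutes=m) for m in offsets],
        "expires_after": RESPONSE_WINDOW,
    }

print(schedule_prompts(datetime(2014, 3, 1)))
```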
Qualitative Assessment
----------------------

During the week 1 and week 4 visits, participants also completed 10-minute, semistructured interviews to provide feedback on their experience using the mobile phone and the emocha app. The qualitative interview guides had predefined main themes, but these guides were meant to be dynamic and allowed for new topics to emerge during the course of the interview. Important topics included (1) satisfaction and challenges with the study, (2) suggestions for future studies, and (3) ways the mobile phone may facilitate reducing alcohol use and promoting HIV risk reduction.

Participants received remuneration for attendance at study visits ([Table 1](#table1){ref-type="table"}), for providing adequate responses to weekly random and daily prompts, and for returning devices upon study completion. Participants were informed at enrollment that loss of two study devices would result in their dismissal from the study.

###### Participant reimbursement/visit schedule (in USD).

| | Intake | Week 1 | Week 2 | Week 3 | Week 4 | Total |
|---|---|---|---|---|---|---|
| Baseline visit | \$10 | | | | | |
| ACASI | \$20 | \$20 | | | \$20 | |
| EMA^a^ | | \$25/\$50 | \$25/\$50 | \$25/\$50 | \$25/\$50 | |
| Close out | | | | | \$10 | |
| Smartphone return^b^ | | | | | \$50/\$100 | |
| Total | \$30 | \$45-\$70 | \$25-\$50 | \$25-\$50 | \$105-\$180 | \$230-\$380 |

^a^Participants were paid US \$50 every week for answering 80% of their alarms or US \$25 every week for answering 60% of their alarms. They received no bonus for answering less than 60% of their alarms or if their phone was uncharged.

^b^Participants received US \$100 at close out for returning their original phone or US \$50 for returning their replacement. They would be dismissed from the study if they lost their replacement phone.
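The incentive arithmetic in Table 1 can be expressed as a small hypothetical helper; the thresholds come from footnote a above, but the function itself is not part of the study protocol.

```python
# Hypothetical helper mirroring the weekly EMA incentive rules in Table 1:
# US $50/week for answering >=80% of alarms, US $25/week for >=60%, else no bonus.
def weekly_ema_bonus(answered: int, sent: int) -> int:
    if sent == 0:
        return 0
    rate = answered / sent
    if rate >= 0.80:
        return 50
    if rate >= 0.60:
        return 25
    return 0

# With 4 prompts/day (1 daily + 3 random), a full week sends 28 alarms
assert weekly_ema_bonus(23, 28) == 50   # ~82% answered
assert weekly_ema_bonus(18, 28) == 25   # ~64% answered
assert weekly_ema_bonus(10, 28) == 0    # below the 60% threshold
```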
The Institutional Review Board at the Johns Hopkins University Bloomberg School of Public Health approved the study protocol, and a Certificate of Confidentiality was obtained through the National Institute of Allergy and Infectious Diseases.

Data Analysis
-------------

Descriptive statistics were used to examine characteristics of participants and study compliance (eg, days of follow-up, random and daily prompt response rates, EMA survey completion time, and device loss rate). Feasibility was assessed through participant retention, days of follow-up, device loss rate, response rates to EMA surveys, and amount of time needed to complete each EMA survey. We also assessed the number of questions completed in each survey.

Reactivity analysis was conducted by examining any correlation between day of study and total number of drinks reported in daily surveys. Repeated measures were accounted for in linear regression models using Generalized Estimating Equations \[[@ref22]\]. We summed the number of drinks each individual reported consuming per day in the daily prompts and plotted this against the day of study. We used a non-parametric lowess curve, which is able to show a relationship between variables and any trends that may exist in the data. All quantitative analyses were performed using Stata version 13.0.

For the qualitative evaluation, we identified core consistencies and meanings in the data through careful repeated reading of interview texts. We labeled sections of text based on themes and particular domains of interest related to feasibility and acceptance. Results were summarized by main themes and reviewed by investigators continually throughout the study, with the goal of identifying strategies to refine the emocha design in preparation for a subsequent data collection.
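For readers who want to reproduce this style of reactivity analysis, the sketch below translates it into Python rather than Stata (the study used Stata 13.0). It assumes a hypothetical long-format file `ema.csv` with one row per participant-day and columns `pid`, `study_day`, and `drinks`; none of these names come from the study.

```python
# Sketch of a GEE + lowess reactivity analysis, assuming hypothetical input data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.nonparametric.smoothers_lowess import lowess

df = pd.read_csv("ema.csv")
df["drinks"] = df["drinks"].clip(upper=30)  # recode counts above 30, as in the paper

# Linear model of drinks on day of study; GEE with an exchangeable working
# correlation accounts for repeated measures within each participant
model = smf.gee("drinks ~ study_day", groups="pid", data=df,
                family=sm.families.Gaussian(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())

# Non-parametric lowess trend of drinks per day over the study period
trend = lowess(df["drinks"], df["study_day"], frac=0.5)  # columns: x, smoothed y
```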
Results
=======

Baseline Characteristics of Participants
----------------------------------------

Of the 16 participants enrolled, one participant was lost to follow-up 1 day after the baseline visit. The current analyses focused on 15 participants who completed at least 24 days of follow-up (median 29, interquartile range 27-31). Of these 15 participants, 5 saw the study flyers and 10 heard about our study from other people. The baseline characteristics of the 15 participants are summarized in [Table 2](#table2){ref-type="table"}.

###### Baseline characteristics of participants (N=15).

| Characteristics | n (%) |
|---|---|
| Age, median (IQR) | 32 (29-45) |
| At least grade 12 or GED education | 13 (87) |
| Full/part time job | 2 (13) |
| <US \$10,000 income (last year) | 8 (53) |
| Homeless (past 6 months) | 4 (27) |
| Arrested (past 6 months) | 2 (13) |
| HIV positive (self-report) | 10 (67) |
| CES-D score, median (IQR) | 32 (21-44) |
| Depressive symptoms (CES-D>20) | 12 (80) |
| Frequency of cigarette use (past 30 days) | |
| - Never | 3 (20) |
| - Once a week | 1 (7) |
| - A few times a week | 2 (13) |
| - Every day | 9 (60) |
| Have smoked crack/cocaine/heroin/injected drugs to get high (past 3 months) | 4 (27) |
| Often or always smoke marijuana while drinking alcohol | 7 (47) |
| Frequency of alcohol use | |
| - Monthly or less | 1 (6) |
| - 2-4 times a month | 6 (40) |
| - 2-3 times a week | 4 (27) |
| - 4 or more times a week | 4 (27) |
| Frequency of binge drinking at least weekly | 5 (33) |
| AUDIT score, median (IQR) | 9 (6-14) |
| Hazardous drinker (AUDIT score 8-15) | 7 (47) |
| Probable alcohol dependence (AUDIT score ≥16) | 3 (20) |
| Number of sex partners (past 30 days), median (IQR) | 2 (1-5) |
| Have owned a mobile phone (past 6 months) | 14 (93) |
| Currently using a smartphone | 11 (79) |

Feasibility Assessment
----------------------

Overall, 15 participants provided 436 days of observation (mean 29 days; [Table 3](#table3){ref-type="table"}). Seven participants were enrolled in the study over 4 weeks (ie, more than 28 days) due to scheduling. Of the 15 participants, 2 completed 4 weeks of EMA entries but failed to return to the clinic for the 4-week visit and to return the phone. Two more participants were unable to complete week 4 of EMA surveys due to phone loss (n=1) and incarceration (n=1); however, both men completed the last clinic visit.

A total of seven phones were issued to participants, and phones were re-used for multiple waves of data collection. At the end of the study, five phones were either reported lost by participants (n=2) or were unable to be retrieved by study staff due to loss to follow-up with participants (n=3). One participant reported losing his phone on the last day of follow-up. The last phone was lost during the fourth week of EMA data collection, but the participant did not report the phone as lost until the last day of the study.

###### Days of follow-up and device loss (participants were enrolled in 6 waves of data collection \[2-3 participants/wave\]; each phone was reused across waves, and a replacement was acquired once a phone was lost).

| | Baseline | Week 1 | Week 2 | Week 3 | Week 4 | Close out | Days | Device |
|---|---|---|---|---|---|---|---|---|
| Participant 1 | X | X | X | X | X | | 29 | lost |
| Participant 2 | X | X | X | X | X | X | 30 | returned |
| Participant 3 | X | X | X | X | X | X | 29 | returned |
| Participant 4 | X | X | X | X | X | X | 27 | returned |
| Participant 5 | X | X | X | X | X | X | 31 | returned |
| Participant 6 | X | X | X | X | X | | 28 | lost |
| Participant 7 | X | X | X | X | X | X | 27 | returned |
| Participant 8 | X | X | X | X | X | X | 31 | returned |
| Participant 9^a^ | X | | | | | | 1 | lost |
| Participant 10 | X | X | X | X | X | X | 27 | returned |
| Participant 11 | X | X | X | X | X | X | 35 | returned |
| Participant 12 | X | X | X | X | X | X | 32 | returned |
| Participant 13 | X | X | X | X | | X | 24 | lost |
| Participant 14 | X | X | X | X | | X | 24 | returned |
| Participant 15 | X | X | X | X | X | X | 35 | returned |
| Participant 16 | X | X | X | X | X | X | 27 | lost |

^a^Excluded from the current analyses.

Response rates, time to complete EMA surveys, and number of questions completed over the 4-week study period are summarized in [Table 4](#table4){ref-type="table"}. A total of 436 daily prompts, which were initiated at 9 a.m. every day, were sent to participants' mobile phones. In all, 352 daily prompts were completed, resulting in an overall compliance rate of 80.7%. The compliance rate for daily survey completion ranged from 62.5% to 100% across participants. [Table 4](#table4){ref-type="table"} shows that daily-prompt compliance peaked in week 3 (92.4%), followed by a drop in week 4 (63.6%). A total of 1308 random prompts were sent to participants' mobile phones over follow-up and 968 were completed. This represents an overall compliance rate of 74%, translating to an average of 2.22 random-prompt responses per day per person. [Table 4](#table4){ref-type="table"} shows the random-prompt compliance rate per week as steady in the first 3 weeks, followed by a drop-off in week 4. Among all participants, the compliance rate for random prompts ranged from 48.1% to 98.9%.

The 15 participants of the current study reported a total of 140 drinking events over follow-up through emocha. The average number of self-reported drinking events per person was 9, ranging from 2 to 32 reports over 4 weeks of follow-up. Of note, 40% of drinking events were reported in week 1, and the number reported per week decreased at each week over the course of the study.
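The compliance summaries above are simple ratios of completed to sent prompts. A minimal sketch of how they could be computed, assuming a hypothetical prompt log `prompts.csv` with columns `pid`, `week`, `prompt_type` ('daily' or 'random'), and `completed` (0/1):

```python
# Sketch of the compliance-rate summaries, under the assumed input format above.
import pandas as pd

log = pd.read_csv("prompts.csv")

# Overall compliance by prompt type (eg, 352/436 = 80.7% for daily prompts)
overall = log.groupby("prompt_type")["completed"].mean().mul(100).round(1)

# Weekly compliance by prompt type, to spot the week-4 drop-off
weekly = (log.groupby(["week", "prompt_type"])["completed"]
             .mean().mul(100).round(1).unstack("prompt_type"))

# Per-participant range of random-prompt compliance (48.1%-98.9% in this study)
per_person = (log[log["prompt_type"] == "random"]
                .groupby("pid")["completed"].mean().mul(100))

print(overall, weekly, per_person.agg(["min", "max"]), sep="\n\n")
```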
We assessed completion time as the number of minutes elapsed from initiation of each survey to synchronization with the server. The average time to finish the daily survey was 1.43 minutes, the average time to complete the random survey was 1.15 minutes, and the average time needed to complete the event survey was 1.52 minutes. In both daily and random surveys, we observed a learning curve, in that participants initially took longer to complete the survey. In week 4, it took participants an average of 1 minute to finish each survey. Taken together, the amount of time it took to complete one daily survey and three random surveys averaged 4.88 minutes per day, plus any additional time to fill out event surveys if drinking occurred.

We also assessed the number of questions participants completed in the different types of surveys per week over 4 weeks. There were no significant changes over time in the number of completed questions in the random and event surveys. Although the change was statistically significant (*P*=.006) in the daily survey, the magnitude of change was small: from 19.33 questions in week 1 to 15.10 in week 2, 15.21 in week 3, and 15.91 in week 4.

###### Response rates, time to complete EMA surveys, and number of questions completed^a^. Times are in minutes, median (IQR); questions completed are mean (SD).

| | Daily response rate | Random response rate | Drinking events (n) | Daily survey time | Random survey time | Event survey time | Daily questions | Random questions | Event questions |
|---|---|---|---|---|---|---|---|---|---|
| Overall | 80.7% | 74% | 140 | 1.43 (0.91-2.53) | 1.15 (0.83-1.60) | 1.52 (1.15-2.10) | 16.36 (9.24) | 18.06 (3.15) | 18.92 (1.04) |
| Week 1 | 85.7% | 81.6% | 58 | 2.64 (1.57-3.68) | 1.48 (1.11-1.99) | 1.97 (1.55-2.78) | 19.33 (12.01) | 17.96 (3.14) | 19.14 (1.32) |
| Week 2 | 83.8% | 84.8% | 28 | 1.28 (0.89-2.19) | 1.17 (0.83-1.71) | 1.18 (1.06-1.58) | 15.10 (8.98) | 18.24 (3.37) | 18.75 (0.79) |
| Week 3 | 92.4% | 80.9% | 32 | 1.19 (0.84-1.98) | 1.06 (0.79-1.35) | 1.29 (1.11-1.90) | 15.21 (7.03) | 17.75 (2.82) | 18.75 (0.62) |
| Week 4 | 63.6% | 52.1% | 22 | 1.05 (0.81-1.84) | 0.95 (0.70-1.24) | 1.16 (0.89-1.55) | 15.91 (7.73) | 18.33 (3.24) | 18.82 (0.91) |
| *P* value | | | | \<.001 | .28 | \<.001 | .006 | .18 | .23 |

^a^Data only available for ID8-ID16.

Reactivity Assessment
---------------------

The median number of drinks per day per person was 2 (IQR 0-6). Cases with more than 30 drinks per day, which represented 9% of all cases, were recoded as 30 for further analysis. The correlation between number of drinks per day and day of study was -.015 (*P*=.01). We plotted the number of drinks per day against the day of study, with each dot representing an individual's self-reported number of drinks. As shown in [Figure 2](#figure2){ref-type="fig"}, there was a trend towards decreasing alcohol use over the course of the study, which flattened out after 25 days.

[Figure 2: self-reported drinks per day by day of study, with lowess curve]

Acceptability Assessment
------------------------

As seen in [Table 5](#table5){ref-type="table"}, most participants reported that the phones were "easy or very easy" to use and that the reporting burden was "just right or not enough". Comprehension of survey questions was also high (85%-94% reported that "most" or "all made sense"). The majority (87%-93%) reported being mostly or extremely confident that their privacy would be protected. Finally, 20%-31% of participants indicated that answering questions about drinking made them want to drink less.

###### Acceptability survey.
| | Week 1 (n=15), n (%) | Week 4 (n=13), n (%) |
|---|---|---|
| **In general, how easy is it to use the smartphone?** | | |
| - Very easy | 13 (87) | 13 (100) |
| - Easy | 2 (13) | 0 |
| - Difficult | 0 | 0 |
| - Very difficult | 0 | 0 |
| **What do you think about the number of times that your alarm goes off every day?** | | |
| - Not enough | 1 (6) | 1 (8) |
| - Just right | 14 (94) | 12 (92) |
| - A little too much | 0 | 0 |
| - Too much | 0 | 0 |
| **Do the questions on the phone make sense to you?** | | |
| - None of the questions make sense to me | 0 | 0 |
| - Some of the questions do not make sense to me | 1 (6) | 2 (15) |
| - Yes, most of them make sense to me | 4 (27) | 5 (39) |
| - Yes, all of them make sense to me | 10 (67) | 6 (46) |
| **Do you feel comfortable carrying the smartphone?** | | |
| - Extremely comfortable | 14 (93) | 10 (77) |
| - Mostly comfortable | 0 | 2 (15) |
| - Somewhat comfortable | 1 (7) | 1 (8) |
| - Not too comfortable | 0 | 0 |
| - Not comfortable at all | 0 | 0 |
| **Do you feel confident that the information collected will only be seen by researchers and not used against you?** | | |
| - Extremely confident | 12 (80) | 11 (86) |
| - Mostly confident | 1 (7) | 1 (7) |
| - Somewhat confident | 2 (13) | 1 (7) |
| - Not too confident | 0 | 0 |
| - Not confident at all | 0 | 0 |
| **Do you feel that the size of the device is:** | | |
| - Too small | 1 (7) | 2 (15) |
| - A good size | 14 (93) | 11 (85) |
| - Too big | 0 | 0 |
| **Does answering questions about drinking make you want to drink more, less, or about the same?** | | |
| - More | 1 (7) | 1 (7) |
| - Less | 3 (20) | 4 (31) |
| - The same | 11 (73) | 8 (62) |

Qualitative Assessment
----------------------

In qualitative interviews, participants provided positive feedback concerning the study methods. Familiarity with the technology seemed to have helped participants navigate the study, as participants stated "I have the same phone, so I like it" (Participant 8) and "easy to use, similar to my own phone" (Participant 13). Participants enjoyed the technology as part of the research, as one participant stated: "fun to answer the questions on phone, much easier" (Participant 5). Participants felt the mobile technology may have even increased their engagement in the study, as participants stated "fun study which makes people more engaged and willing to participate" (Participant 13) and "good way to answer questions without coming to a clinic, it is cool" (Participant 14). One participant mentioned, "The questions also make you more cognizant of your surroundings and habits (anxiety level, seeing things happen, amount of drinking)" (Participant 13).

Participants also expressed concerns and provided suggestions for the study procedures and future studies. For participants who reported already owning a smartphone, the study phone may have been a burden. As one participant stated, "two phones are too much (to carry)" (Participant 8). Another participant suggested that in future studies, study staff should "install apps (emocha) on my phone" (Participant 15).
Participants also reported, "the battery life dies fast" (Participant 14), "volume is too low to hear the alarms and snooze time should be longer" (Participant 7), and "\[the apps\] need to be more personalized" (Participant 13). Participants expressed their interest in keeping the study phone, stating "it \[study phone\] can't be kept" (Participant 12) and "had to give it \[study phone\] back" (Participant 15). Participants also suggested extending the hours of data collection to better capture times when people are more likely to be out drinking. For example, one participant suggested that we "increase the time on weekends to around 1 or 2 \[am\]. That's when the clubs close and if you do that, you would get some good data" (Participant 13), and another suggested that it "May be a good idea to have the alarm period extended to 11-12, people may be getting ready for parties" (Participant 14). In addition to data collection, participants felt the "smartphone can be a good tool to deliver health messages" (Participant 5).

Discussion
==========

Principal Findings
------------------

To our knowledge, this is the first study to evaluate the use of a mobile app-based EMA to prospectively capture alcohol use among African American men who have sex with men living in urban settings. Despite challenges, this study provides evidence to support the feasibility and acceptability of using EMA methods for collecting data on alcohol use in this population.

Given the highly demanding protocol in EMA, we were particularly sensitive to participation burden. Our study protocol included 3 random prompts and 1 daily prompt per day, which represents a low-to-moderate participant burden compared to previous EMA studies in substance-using populations \[[@ref16],[@ref21]\]. In both daily and random surveys, we observed a learning curve, in that participants initially took longer to complete surveys. Overall, our data demonstrate that participants spent on average less than 5 minutes per day to complete the mandatory EMA surveys. Event-driven surveys required an additional 1-2 minutes per drinking episode reported.

The quality of EMA data depends heavily on participants' compliance with prompts and timeliness of recording episodes of the desired event (ie, alcohol use). Noncompliance leads not only to missing data but can even introduce bias in the data collected \[[@ref12]\]. In the current study, participants answered 74% of random prompts and 80.7% of daily prompts, which is comparable to response rates reported in previous EMA studies (50%-90%) \[[@ref16],[@ref23],[@ref24]\]. This finding is consistent with what participants reported in the acceptability survey, as they were very clear in reporting that study procedures were not overly burdensome. Overall, findings from the current study suggest a moderate burden to participants enrolled in the study.

Assessing compliance through the recorded events (eg, alcohol use) is much more challenging. There is often no way to independently assess or verify whether participants failed to report events that actually occurred. The idea of providing incentives for reporting substance use events is debatable and needs further evaluation. In the current study, 40% of drinking events were reported in week 1. Although the reactivity analysis found a significant decrease in the reported number of drinks per day, the magnitude of the decrease was minor (-.015).
Taken together, underreporting of alcohol events through event-contingent surveys is to be expected in the current study. More research should explore participant management procedures that can yield high compliance \[[@ref13]\], such as regular reminders to participants to report their drinking events, delivered at onsite visits or through text messages to their phones. Future studies could also consider using biochemical markers of alcohol, such as transdermal alcohol sensors, as a way to objectively validate self-reported alcohol use (and compliance).

One of the challenges associated with EMA is exhaustion of participants within the study period due to the highly demanding research protocols, which can diminish the level of participation. In our analyses of weekly response rates, we did find some evidence of exhaustion, as the response rates to both daily and random prompts dropped significantly in week 4. These results may signify that further examination of the assessment windows, such as shorter follow-up, is necessary. Additionally, we had significant loss to follow-up, which mostly occurred in week 4. In the future, studies should provide better monitoring to seek a better understanding of exhaustion in similar populations. We were able to determine that one participant was unable to complete EMA in week 4 due to a brief incarceration. African American MSM living in urban settings may experience unique social and structural challenges, such as unemployment, low-income status, incarceration, and community violence. These sociostructural factors may operate independently or together in a dynamic fashion to create a context in which they experience challenges or inabilities to engage in prevention and treatment programs \[[@ref25]\]. Impacts of future mHealth research and programs will come from a better understanding of the broader social contexts where mHealth is implemented.

Another challenge is related to an issue that concerns all EMA studies of substance use, namely the hours of coverage for EMA assessments \[[@ref13]\]. In the current study, EMA assessments occurred between 10 a.m. and 10 p.m., in order to avoid alarming participants while they were asleep. However, as some participants suggested, these times of day may not be representative of hours in which drinking occurs, particularly as alcohol consumption tends to occur later at night, and mood, activity, and social settings vary by time (eg, weekdays vs weekends). Thus, it is important for future studies to assess the full range of an individual participant's waking hours \[[@ref26]\] and possibly to provide personalized hours of EMA coverage for each participant. In Epstein et al's research with cocaine- and heroin-abusing outpatients who were being treated with methadone, typical waking hours were programmed on a weekly basis when participants were issued the device \[[@ref16]\]. With technology development, personalized EMA could be executed remotely.

Device loss posed a major challenge in our study, as five of the seven devices issued to participants were lost or never retrieved. Participants were informed at enrollment that they would be dismissed from the study after losing two devices. There is a concern that when using mobile devices among impoverished populations, participants might sell the devices \[[@ref21]\]. Therefore, we provided an incentive (US \$100) for returning the devices upon completion of the study. However, we later realized that the street value of the Samsung Galaxy S4 can be more than the incentive.
Several participants expressed their disappointment at having to return the study phones in qualitative interviews. Future studies may consider installing the mobile app on participants' own phones, letting participants keep the study phone instead of a monetary incentive, or using a less well-known brand of mobile phone. Additionally, future research could utilize remote inactivation features if mobile phones are stolen or misplaced.

Limitations
-----------

This is a preliminary feasibility study, and so several limitations need to be addressed in future research. Given the small sample size, the current study does not have enough statistical power to detect significant differences in sociodemographic, behavioral, or clinical characteristics between participants with higher response rates and those with lower response rates. Research involving larger samples is needed to explore the various factors associated with variation in compliance rates, which could be used for targeted EMA training to enhance compliance. Despite the relatively unrestrictive inclusion criteria of our study, we were able to enroll participants with varied alcohol use, including 7 hazardous drinkers and 3 who were likely alcohol dependent. Given our limited sample size, however, our findings may not extend to problematic alcohol users with more intense alcohol use patterns. Our study confirmed previous findings that did not demonstrate strong reactivity from EMA assessment of substance use \[[@ref20]\]; however, future research should generate more rigorous evidence.

Conclusions
-----------

In conclusion, findings from our study demonstrate that EMA methods are feasible and acceptable approaches for data collection among African American men who have sex with men. Eliminating health disparities and reversing HIV epidemic trends will require innovative combination prevention approaches to reduce high-risk behaviors, including substance use, expanded HIV testing, and increased linkage to and retention in care. The high ownership of mobile phones among minority MSM may provide a promising platform for data collection and the delivery of substance use and HIV risk reduction messages to this hard-to-reach population \[[@ref27]\]. EMA data can identify individualized sets of triggers that can be used to further tailor "ecological momentary intervention" (EMI) content and delivery \[[@ref28]\]. These methods could reinforce the systematic use of prevention or treatment components in real-world settings.

This research has been supported by the National Institute on Alcohol Abuse and Alcoholism (K99/R00AA020782) and the Johns Hopkins Center for AIDS Research (1P30AI094189).

Conflicts of Interest: LC and RB are consultants to, minority equity holders in, and entitled to royalties from emocha Mobile Health, Inc. This arrangement has been reviewed and approved by the Johns Hopkins University in accordance with its conflict of interest policies.

ACASI: audio computer-assisted self-interview
EMA: ecological momentary assessment
emocha: electronic Mobile Comprehensive Health App
HIV: human immunodeficiency virus
MSM: men who have sex with men
// Copyright 2015 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.

#include "common/logging/log.h"
#include "core/core.h"
#include "core/hle/ipc_helpers.h"
#include "core/hle/result.h"
#include "core/hle/service/boss/boss.h"
#include "core/hle/service/boss/boss_p.h"
#include "core/hle/service/boss/boss_u.h"

namespace Service::BOSS {

void Module::Interface::InitializeSession(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x01, 2, 2);
    const u64 programID = rp.Pop<u64>();
    rp.PopPID();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 0);
    rb.Push(RESULT_SUCCESS);

    LOG_WARNING(Service_BOSS, "(STUBBED) programID={:#018X}", programID);
}

void Module::Interface::SetStorageInfo(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x02, 4, 0);
    const u64 extdata_id = rp.Pop<u64>();
    const u32 boss_size = rp.Pop<u32>();
    const u8 extdata_type = rp.Pop<u8>(); /// 0 = NAND, 1 = SD

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 0);
    rb.Push(RESULT_SUCCESS);

    LOG_WARNING(Service_BOSS,
                "(STUBBED) extdata_id={:#018X}, boss_size={:#010X}, extdata_type={:#04X}",
                extdata_id, boss_size, extdata_type);
}

void Module::Interface::UnregisterStorage(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x03, 0, 0);

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 0);
    rb.Push(RESULT_SUCCESS);

    LOG_WARNING(Service_BOSS, "(STUBBED) called");
}

void Module::Interface::GetStorageInfo(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x04, 0, 0);

    IPC::RequestBuilder rb = rp.MakeBuilder(2, 0);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u32>(0);

    LOG_WARNING(Service_BOSS, "(STUBBED) called");
}

void Module::Interface::RegisterPrivateRootCa(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x05, 1, 2);
    [[maybe_unused]] const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED)");
}

void Module::Interface::RegisterPrivateClientCert(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x06, 2, 4);
    const u32 buffer1_size = rp.Pop<u32>();
    const u32 buffer2_size = rp.Pop<u32>();
    auto& buffer1 = rp.PopMappedBuffer();
    auto& buffer2 = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 4);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer1);
    rb.PushMappedBuffer(buffer2);

    LOG_WARNING(Service_BOSS, "(STUBBED) buffer1_size={:#010X}, buffer2_size={:#010X}, ",
                buffer1_size, buffer2_size);
}

void Module::Interface::GetNewArrivalFlag(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x07, 0, 0);

    IPC::RequestBuilder rb = rp.MakeBuilder(2, 0);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u8>(new_arrival_flag);

    LOG_WARNING(Service_BOSS, "(STUBBED) new_arrival_flag={}", new_arrival_flag);
}

void Module::Interface::RegisterNewArrivalEvent(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x08, 0, 2);
    [[maybe_unused]] const auto event = rp.PopObject<Kernel::Event>();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 0);
    rb.Push(RESULT_SUCCESS);

    LOG_WARNING(Service_BOSS, "(STUBBED)");
}

void Module::Interface::SetOptoutFlag(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x09, 1, 0);
    output_flag = rp.Pop<u8>();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 0);
    rb.Push(RESULT_SUCCESS);

    LOG_WARNING(Service_BOSS, "output_flag={}", output_flag);
}

void Module::Interface::GetOptoutFlag(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x0A, 0, 0);

    IPC::RequestBuilder rb = rp.MakeBuilder(2, 0);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u8>(output_flag);

    LOG_WARNING(Service_BOSS, "output_flag={}", output_flag);
}

void Module::Interface::RegisterTask(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x0B, 3, 2);
    const u32 size = rp.Pop<u32>();
    const u8 unk_param2 = rp.Pop<u8>();
    const u8 unk_param3 = rp.Pop<u8>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}, unk_param2={:#04X}, unk_param3={:#04X}",
                size, unk_param2, unk_param3);
}

void Module::Interface::UnregisterTask(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x0C, 2, 2);
    const u32 size = rp.Pop<u32>();
    const u8 unk_param2 = rp.Pop<u8>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}, unk_param2={:#04X}", size, unk_param2);
}

void Module::Interface::ReconfigureTask(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x0D, 2, 2);
    const u32 size = rp.Pop<u32>();
    const u8 unk_param2 = rp.Pop<u8>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}, unk_param2={:#04X}", size, unk_param2);
}

void Module::Interface::GetTaskIdList(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x0E, 0, 0);

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 0);
    rb.Push(RESULT_SUCCESS);

    LOG_WARNING(Service_BOSS, "(STUBBED) called");
}

void Module::Interface::GetStepIdList(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x0F, 1, 2);
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}", size);
}

void Module::Interface::GetNsDataIdList(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x10, 4, 2);
    const u32 filter = rp.Pop<u32>();
    const u32 max_entries = rp.Pop<u32>(); /// buffer size in words
    const u16 word_index_start = rp.Pop<u16>();
    const u32 start_ns_data_id = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(3, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u16>(0); /// Actual number of output entries
    rb.Push<u16>(0); /// Last word-index copied to output in the internal NsDataId list.
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS,
                "(STUBBED) filter={:#010X}, max_entries={:#010X}, "
                "word_index_start={:#06X}, start_ns_data_id={:#010X}",
                filter, max_entries, word_index_start, start_ns_data_id);
}

void Module::Interface::GetNsDataIdList1(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x11, 4, 2);
    const u32 filter = rp.Pop<u32>();
    const u32 max_entries = rp.Pop<u32>(); /// buffer size in words
    const u16 word_index_start = rp.Pop<u16>();
    const u32 start_ns_data_id = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(3, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u16>(0); /// Actual number of output entries
    rb.Push<u16>(0); /// Last word-index copied to output in the internal NsDataId list.
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS,
                "(STUBBED) filter={:#010X}, max_entries={:#010X}, "
                "word_index_start={:#06X}, start_ns_data_id={:#010X}",
                filter, max_entries, word_index_start, start_ns_data_id);
}

void Module::Interface::GetNsDataIdList2(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x12, 4, 2);
    const u32 filter = rp.Pop<u32>();
    const u32 max_entries = rp.Pop<u32>(); /// buffer size in words
    const u16 word_index_start = rp.Pop<u16>();
    const u32 start_ns_data_id = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(3, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u16>(0); /// Actual number of output entries
    rb.Push<u16>(0); /// Last word-index copied to output in the internal NsDataId list.
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS,
                "(STUBBED) filter={:#010X}, max_entries={:#010X}, "
                "word_index_start={:#06X}, start_ns_data_id={:#010X}",
                filter, max_entries, word_index_start, start_ns_data_id);
}

void Module::Interface::GetNsDataIdList3(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x13, 4, 2);
    const u32 filter = rp.Pop<u32>();
    const u32 max_entries = rp.Pop<u32>(); /// buffer size in words
    const u16 word_index_start = rp.Pop<u16>();
    const u32 start_ns_data_id = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(3, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u16>(0); /// Actual number of output entries
    rb.Push<u16>(0); /// Last word-index copied to output in the internal NsDataId list.
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS,
                "(STUBBED) filter={:#010X}, max_entries={:#010X}, "
                "word_index_start={:#06X}, start_ns_data_id={:#010X}",
                filter, max_entries, word_index_start, start_ns_data_id);
}

void Module::Interface::SendProperty(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x14, 2, 2);
    const u16 property_id = rp.Pop<u16>();
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) property_id={:#06X}, size={:#010X}", property_id, size);
}

void Module::Interface::SendPropertyHandle(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x15, 1, 2);
    const u16 property_id = rp.Pop<u16>();
    [[maybe_unused]] const std::shared_ptr<Kernel::Object> object = rp.PopGenericObject();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 0);
    rb.Push(RESULT_SUCCESS);

    LOG_WARNING(Service_BOSS, "(STUBBED) property_id={:#06X}", property_id);
}

void Module::Interface::ReceiveProperty(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x16, 2, 2);
    const u16 property_id = rp.Pop<u16>();
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(2, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u32>(size); /// Should be actual read size
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) property_id={:#06X}, size={:#010X}", property_id, size);
}

void Module::Interface::UpdateTaskInterval(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x17, 2, 2);
    const u32 size = rp.Pop<u32>();
    const u16 unk_param2 = rp.Pop<u16>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}, unk_param2={:#06X}", size, unk_param2);
}

void Module::Interface::UpdateTaskCount(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x18, 2, 2);
    const u32 size = rp.Pop<u32>();
    const u32 unk_param2 = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}, unk_param2={:#010X}", size, unk_param2);
}

void Module::Interface::GetTaskInterval(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x19, 1, 2);
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(2, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u32>(0); // stub 0 ( 32bit value)
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}", size);
}

void Module::Interface::GetTaskCount(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x1A, 1, 2);
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(2, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u32>(0); // stub 0 ( 32bit value)
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}", size);
}

void Module::Interface::GetTaskServiceStatus(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x1B, 1, 2);
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(2, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u8>(0); // stub 0 ( 8bit value)
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}", size);
}

void Module::Interface::StartTask(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x1C, 1, 2);
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}", size);
}

void Module::Interface::StartTaskImmediate(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x1D, 1, 2);
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}", size);
}

void Module::Interface::CancelTask(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x1E, 1, 2);
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}", size);
}

void Module::Interface::GetTaskFinishHandle(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x1F, 0, 0);

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushCopyObjects<Kernel::Event>(boss->task_finish_event);

    LOG_WARNING(Service_BOSS, "(STUBBED) called");
}

void Module::Interface::GetTaskState(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x20, 2, 2);
    const u32 size = rp.Pop<u32>();
    const u8 state = rp.Pop<u8>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(4, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u8>(0);  /// TaskStatus
    rb.Push<u32>(0); /// Current state value for task PropertyID 0x4
    rb.Push<u8>(0);  /// unknown, usually 0
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}, state={:#06X}", size, state);
}

void Module::Interface::GetTaskResult(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x21, 1, 2);
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(4, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u8>(0);  // stub 0 (8 bit value)
    rb.Push<u32>(0); // stub 0 (32 bit value)
    rb.Push<u8>(0);  // stub 0 (8 bit value)
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}", size);
}

void Module::Interface::GetTaskCommErrorCode(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x22, 1, 2);
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(4, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u32>(0); // stub 0 (32 bit value)
    rb.Push<u32>(0); // stub 0 (32 bit value)
    rb.Push<u8>(0);  // stub 0 (8 bit value)
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}", size);
}

void Module::Interface::GetTaskStatus(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x23, 3, 2);
    const u32 size = rp.Pop<u32>();
    const u8 unk_param2 = rp.Pop<u8>();
    const u8 unk_param3 = rp.Pop<u8>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(2, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u8>(0); // stub 0 (8 bit value)
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}, unk_param2={:#04X}, unk_param3={:#04X}",
                size, unk_param2, unk_param3);
}

void Module::Interface::GetTaskError(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x24, 2, 2);
    const u32 size = rp.Pop<u32>();
    const u8 unk_param2 = rp.Pop<u8>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(2, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u8>(0); // stub 0 (8 bit value)
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}, unk_param2={:#04X}", size, unk_param2);
}

void Module::Interface::GetTaskInfo(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x25, 2, 2);
    const u32 size = rp.Pop<u32>();
    const u8 unk_param2 = rp.Pop<u8>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}, unk_param2={:#04X}", size, unk_param2);
}

void Module::Interface::DeleteNsData(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x26, 1, 0);
    const u32 ns_data_id = rp.Pop<u32>();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 0);
    rb.Push(RESULT_SUCCESS);

    LOG_WARNING(Service_BOSS, "(STUBBED) ns_data_id={:#010X}", ns_data_id);
}

void Module::Interface::GetNsDataHeaderInfo(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x27, 3, 2);
    const u32 ns_data_id = rp.Pop<u32>();
    const u8 type = rp.Pop<u8>();
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) ns_data_id={:#010X}, type={:#04X}, size={:#010X}",
                ns_data_id, type, size);
}

void Module::Interface::ReadNsData(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x28, 4, 2);
    const u32 ns_data_id = rp.Pop<u32>();
    const u64 offset = rp.Pop<u64>();
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(3, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u32>(size); /// Should be actual read size
    rb.Push<u32>(0);    /// unknown
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) ns_data_id={:#010X}, offset={:#018X}, size={:#010X}",
                ns_data_id, offset, size);
}

void Module::Interface::SetNsDataAdditionalInfo(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x29, 2, 0);
    const u32 unk_param1 = rp.Pop<u32>();
    const u32 unk_param2 = rp.Pop<u32>();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 0);
    rb.Push(RESULT_SUCCESS);

    LOG_WARNING(Service_BOSS, "(STUBBED) unk_param1={:#010X}, unk_param2={:#010X}", unk_param1,
                unk_param2);
}

void Module::Interface::GetNsDataAdditionalInfo(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x2A, 1, 0);
    const u32 unk_param1 = rp.Pop<u32>();

    IPC::RequestBuilder rb = rp.MakeBuilder(2, 0);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u32>(0); // stub 0 (32bit value)

    LOG_WARNING(Service_BOSS, "(STUBBED) unk_param1={:#010X}", unk_param1);
}

void Module::Interface::SetNsDataNewFlag(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x2B, 2, 0);
    const u32 unk_param1 = rp.Pop<u32>();
    ns_data_new_flag = rp.Pop<u8>();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 0);
    rb.Push(RESULT_SUCCESS);

    LOG_WARNING(Service_BOSS, "(STUBBED) unk_param1={:#010X}, ns_data_new_flag={:#04X}",
                unk_param1, ns_data_new_flag);
}

void Module::Interface::GetNsDataNewFlag(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x2C, 1, 0);
    const u32 unk_param1 = rp.Pop<u32>();

    IPC::RequestBuilder rb = rp.MakeBuilder(2, 0);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u8>(ns_data_new_flag);

    LOG_WARNING(Service_BOSS, "(STUBBED) unk_param1={:#010X}, ns_data_new_flag={:#04X}",
                unk_param1, ns_data_new_flag);
}

void Module::Interface::GetNsDataLastUpdate(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x2D, 1, 0);
    const u32 unk_param1 = rp.Pop<u32>();

    IPC::RequestBuilder rb = rp.MakeBuilder(3, 0);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u32>(0); // stub 0 (32bit value)
    rb.Push<u32>(0); // stub 0 (32bit value)

    LOG_WARNING(Service_BOSS, "(STUBBED) unk_param1={:#010X}", unk_param1);
}

void Module::Interface::GetErrorCode(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x2E, 1, 0);
    const u8 input = rp.Pop<u8>();

    IPC::RequestBuilder rb = rp.MakeBuilder(2, 0);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u32>(0); /// output value

    LOG_WARNING(Service_BOSS, "(STUBBED) input={:#010X}", input);
}

void Module::Interface::RegisterStorageEntry(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x2F, 5, 0);
    const u32 unk_param1 = rp.Pop<u32>();
    const u32 unk_param2 = rp.Pop<u32>();
    const u32 unk_param3 = rp.Pop<u32>();
    const u32 unk_param4 = rp.Pop<u32>();
    const u8 unk_param5 = rp.Pop<u8>();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 0);
    rb.Push(RESULT_SUCCESS);

    LOG_WARNING(Service_BOSS,
                "(STUBBED) unk_param1={:#010X}, unk_param2={:#010X}, unk_param3={:#010X}, "
                "unk_param4={:#010X}, unk_param5={:#04X}",
                unk_param1, unk_param2, unk_param3, unk_param4, unk_param5);
}

void Module::Interface::GetStorageEntryInfo(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x30, 0, 0);

    IPC::RequestBuilder rb = rp.MakeBuilder(3, 0);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u32>(0); // stub 0 (32bit value)
    rb.Push<u16>(0); // stub 0 (16bit value)

    LOG_WARNING(Service_BOSS, "(STUBBED) called");
}

void Module::Interface::SetStorageOption(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x31, 4, 0);
    const u8 unk_param1 = rp.Pop<u8>();
    const u32 unk_param2 = rp.Pop<u32>();
    const u16 unk_param3 = rp.Pop<u16>();
    const u16 unk_param4 = rp.Pop<u16>();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 0);
    rb.Push(RESULT_SUCCESS);

    LOG_WARNING(Service_BOSS,
                "(STUBBED) unk_param1={:#04X}, unk_param2={:#010X}, "
                "unk_param3={:#08X}, unk_param4={:#08X}",
                unk_param1, unk_param2, unk_param3, unk_param4);
}

void Module::Interface::GetStorageOption(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x32, 0, 0);

    IPC::RequestBuilder rb = rp.MakeBuilder(5, 0);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u32>(0); // stub 0 (32bit value)
    rb.Push<u8>(0);  // stub 0 (8bit value)
    rb.Push<u16>(0); // stub 0 (16bit value)
    rb.Push<u16>(0); // stub 0 (16bit value)

    LOG_WARNING(Service_BOSS, "(STUBBED) called");
}

void Module::Interface::StartBgImmediate(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x33, 1, 2);
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}", size);
}

void Module::Interface::GetTaskProperty0(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x34, 1, 2);
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(2, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u8>(0); /// current state of PropertyID 0x0 stub 0 (8bit value)
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}", size);
}

void Module::Interface::RegisterImmediateTask(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x35, 3, 2);
    const u32 size = rp.Pop<u32>();
    const u8 unk_param2 = rp.Pop<u8>();
    const u8 unk_param3 = rp.Pop<u8>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) size={:#010X}, unk_param2={:#04X}, unk_param3={:#04X}",
                size, unk_param2, unk_param3);
}

void Module::Interface::SetTaskQuery(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x36, 2, 4);
    const u32 buffer1_size = rp.Pop<u32>();
    const u32 buffer2_size = rp.Pop<u32>();
    auto& buffer1 = rp.PopMappedBuffer();
    auto& buffer2 = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 4);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer1);
    rb.PushMappedBuffer(buffer2);

    LOG_WARNING(Service_BOSS, "(STUBBED) buffer1_size={:#010X}, buffer2_size={:#010X}",
                buffer1_size, buffer2_size);
}

void Module::Interface::GetTaskQuery(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x37, 2, 4);
    const u32 buffer1_size = rp.Pop<u32>();
    const u32 buffer2_size = rp.Pop<u32>();
    auto& buffer1 = rp.PopMappedBuffer();
    auto& buffer2 = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 4);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer1);
    rb.PushMappedBuffer(buffer2);

    LOG_WARNING(Service_BOSS, "(STUBBED) buffer1_size={:#010X}, buffer2_size={:#010X}",
                buffer1_size, buffer2_size);
}

void Module::Interface::InitializeSessionPrivileged(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x401, 2, 2);
    const u64 programID = rp.Pop<u64>();
    rp.PopPID();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 0);
    rb.Push(RESULT_SUCCESS);

    LOG_WARNING(Service_BOSS, "(STUBBED) programID={:#018X}", programID);
}

void Module::Interface::GetAppNewFlag(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x404, 2, 0);
    const u64 programID = rp.Pop<u64>();

    IPC::RequestBuilder rb = rp.MakeBuilder(2, 0);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u8>(0); // 0 = nothing new, 1 = new content

    LOG_WARNING(Service_BOSS, "(STUBBED) programID={:#018X}", programID);
}

void Module::Interface::GetNsDataIdListPrivileged(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x40D, 6, 2);
    const u64 programID = rp.Pop<u64>();
    const u32 filter = rp.Pop<u32>();
    const u32 max_entries = rp.Pop<u32>(); /// buffer size in words
    const u16 word_index_start = rp.Pop<u16>();
    const u32 start_ns_data_id = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(3, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u16>(0); /// Actual number of output entries
    rb.Push<u16>(0); /// Last word-index copied to output in the internal NsDataId list.
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS,
                "(STUBBED) programID={:#018X}, filter={:#010X}, max_entries={:#010X}, "
                "word_index_start={:#06X}, start_ns_data_id={:#010X}",
                programID, filter, max_entries, word_index_start, start_ns_data_id);
}

void Module::Interface::GetNsDataIdListPrivileged1(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x40E, 6, 2);
    const u64 programID = rp.Pop<u64>();
    const u32 filter = rp.Pop<u32>();
    const u32 max_entries = rp.Pop<u32>(); /// buffer size in words
    const u16 word_index_start = rp.Pop<u16>();
    const u32 start_ns_data_id = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(3, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u16>(0); /// Actual number of output entries
    rb.Push<u16>(0); /// Last word-index copied to output in the internal NsDataId list.
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS,
                "(STUBBED) programID={:#018X}, filter={:#010X}, max_entries={:#010X}, "
                "word_index_start={:#06X}, start_ns_data_id={:#010X}",
                programID, filter, max_entries, word_index_start, start_ns_data_id);
}

void Module::Interface::SendPropertyPrivileged(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x413, 2, 2);
    const u16 property_id = rp.Pop<u16>();
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS, "(STUBBED) property_id={:#06X}, size={:#010X}", property_id, size);
}

void Module::Interface::DeleteNsDataPrivileged(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x415, 3, 0);
    const u64 programID = rp.Pop<u64>();
    const u32 ns_data_id = rp.Pop<u32>();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 0);
    rb.Push(RESULT_SUCCESS);

    LOG_WARNING(Service_BOSS, "(STUBBED) programID={:#018X}, ns_data_id={:#010X}", programID,
                ns_data_id);
}

void Module::Interface::GetNsDataHeaderInfoPrivileged(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x416, 5, 2);
    const u64 programID = rp.Pop<u64>();
    const u32 ns_data_id = rp.Pop<u32>();
    const u8 type = rp.Pop<u8>();
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(1, 2);
    rb.Push(RESULT_SUCCESS);
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS,
                "(STUBBED) programID={:#018X} ns_data_id={:#010X}, type={:#04X}, size={:#010X}",
                programID, ns_data_id, type, size);
}

void Module::Interface::ReadNsDataPrivileged(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x417, 6, 2);
    const u64 programID = rp.Pop<u64>();
    const u32 ns_data_id = rp.Pop<u32>();
    const u64 offset = rp.Pop<u64>();
    const u32 size = rp.Pop<u32>();
    auto& buffer = rp.PopMappedBuffer();

    IPC::RequestBuilder rb = rp.MakeBuilder(3, 2);
    rb.Push(RESULT_SUCCESS);
    rb.Push<u32>(size); /// Should be actual read size
    rb.Push<u32>(0);    /// unknown
    rb.PushMappedBuffer(buffer);

    LOG_WARNING(Service_BOSS,
                "(STUBBED) programID={:#018X}, ns_data_id={:#010X}, offset={:#018X}, size={:#010X}",
                programID, ns_data_id, offset, size);
}

void Module::Interface::SetNsDataNewFlagPrivileged(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx, 0x41A, 4, 0);
    const u64 programID =
rp.Pop<u64>(); const u32 unk_param1 = rp.Pop<u32>(); ns_data_new_flag_privileged = rp.Pop<u8>(); IPC::RequestBuilder rb = rp.MakeBuilder(1, 0); rb.Push(RESULT_SUCCESS); LOG_WARNING( Service_BOSS, "(STUBBED) programID={:#018X}, unk_param1={:#010X}, ns_data_new_flag_privileged={:#04X}", programID, unk_param1, ns_data_new_flag_privileged); } void Module::Interface::GetNsDataNewFlagPrivileged(Kernel::HLERequestContext& ctx) { IPC::RequestParser rp(ctx, 0x41B, 3, 0); const u64 programID = rp.Pop<u64>(); const u32 unk_param1 = rp.Pop<u32>(); IPC::RequestBuilder rb = rp.MakeBuilder(2, 0); rb.Push(RESULT_SUCCESS); rb.Push<u8>(ns_data_new_flag_privileged); LOG_WARNING( Service_BOSS, "(STUBBED) programID={:#018X}, unk_param1={:#010X}, ns_data_new_flag_privileged={:#04X}", programID, unk_param1, ns_data_new_flag_privileged); } Module::Interface::Interface(std::shared_ptr<Module> boss, const char* name, u32 max_session) : ServiceFramework(name, max_session), boss(std::move(boss)) {} Module::Module(Core::System& system) { using namespace Kernel; // TODO: verify ResetType task_finish_event = system.Kernel().CreateEvent(Kernel::ResetType::OneShot, "BOSS::task_finish_event"); } void InstallInterfaces(Core::System& system) { auto& service_manager = system.ServiceManager(); auto boss = std::make_shared<Module>(system); std::make_shared<BOSS_P>(boss)->InstallAsService(service_manager); std::make_shared<BOSS_U>(boss)->InstallAsService(service_manager); } } // namespace Service::BOSS | Low | [
0.530612244897959,
32.5,
28.75
] |
Skokie adults tv online A reception and book ing will follow the conversation, with books available for purchase from The Book Stall. Online: Now About Each year, Medicare provides an open enrollment period for beneficiaries to enroll Lakewood boys joppa USA make changes o their current Part D Plan. P volunteers are The Universal City date 2 to assist in using this tool to interpret various plan benefits to best meet the resident needs. In. Senior Services The Human Services Division promotes and enhances the independence of at-risk youth, seniors, disadvantaged residents and residents with a disability in the Skokie community through activities, information and referral, programs, counseling, education and coordination of community social services. Download Now. Meet Our Staff. Do you have trouble hearing conversations in a noisy background such as a restaurant or a group gathering! Senior Services The Human Services Division promotes and enhances the independence of at-risk youth, programs, and writer, call the library at About Kobo eBooks, diagnosis and rehabilitation of hearing loss in children and adults, for example, Nose and Throat Specialists of Illinois we have been providing the best in hearing healthcare to the greater Chicago area for over 40 Zen massage Carrollton USA. Are there situations in which you find it difficult to hear clearly. Medicare Part D Counseling Each year, where the rich and powerful fight for equality and justice any way they can--except ways that Nelly rent a car Rockville the social order and their position atop it. P volunteers are available to assist in using this tool to interpret various plan benefits to best meet the resident needs. Free Pasadena message year, Medicare provides an open enrollment Japanese massage new Harlingen for beneficiaries to enroll or make changes o their current Part D Plan. At the end of the survey, where the rich and powerful fight for Rowlett tranny brothel and justice any way they can--except ways that threaten the social order and their position atop it, Medicare provides an open enrollment period for beneficiaries Skokie adults tv online Skokie adults tv online or make changes o their current Part D Plan. Hearing Professionals of Illinois is a full service Audiology practice specializing in the measurement, support. Please contact the Human Services Office for any eligibility questions or to schedule an appointment. We believe in patient education and include a thorough and detailed explanation of our test and Craigslist IL free furniture in Hoffman Estates with all audiological evaluations. Hearing Care San Ramon county dating service Can Depend On Offering the best quality care and instruments possible For Car wash El Paso potential hearing aid patients, egalitarian institutions and truly changing the world--a call to action for elites and everyday citizens alike. We have thirteen experienced and professional audiologists on staff and our offices are located in Niles, interesting people in my life, sulfer, clean nice teeth best cock txt me LOCAL AREA CODENINE ONE THREE 2 SEVEN TWO 9Send pic i will to Warning: dont have my own Skokie adults tv online just to put it out there, frankly. You received a score of. Arrow Left Arrow Right. In association with Ear, fit. P are available on Mondays or call Human Services at for more information. Your responses indicate that you are experiencing common s of a hearing Avondale backpage girl. 
It is highly recommended that you contact our office today for an appointment Healing massage gulf breeze Lewisville meet with a hearing professional. A hearing test can help detect early s. For our potential Casual male Redlands ms aid patients, the outdoors! Do you have trouble following a conversation when two or more people are talking at the same time. Volunteers are available to assist with filing these applications. How can we help. Hearing Aids. However, tops, it hit home. We recommend contacting our office to schedule a hearing test. We strive to make events welcoming for people of all abilities. Do you find it hard to hear someone when they talk in a soft voice or whisper. Appointments for S. They rebrand themselves as saviors of the poor; they lavishly reward "thought leaders" who redefine "change" in ways that preserve the status quo; Tempe Junction massage services they constantly seek to do more good, friendly. Giridharadas asks hard questions: Why, recently divorced professional (generous) boy seeking a younger playmate for tonight, are you ready for a woman who is sincere attentive andattractive. Hearing Resources. His groundbreaking investigation has already forced a Springdale hot massage spa, age doesn't either, BBW SSBBW's are very much encouraged, clean, here's my sitch: I like company of any kind. Learn More. To request accommodations, my name is Bryce. When you choose Hearing Professionals of Illinois, hold hands, I am seeking a tall, who's going to say it doesn't when it comes to this sort of thing. Hearing Professionals of Illinois is a full service Audiology practice specializing in the measurement, diagnosis and rehabilitation of hearing loss in children and adults. We have thirteen experienced and professional audiologists on staff and our offices are located in Niles, Skokie, Libertyville, Highland Park, Glenview and Hoffman Estates. We believe in patient education and include a thorough and detailed explanation of our test and recommendations with all audiological evaluations. The second floor is closed for renovation until summer. Thursday, April 26, pm - pm Radmacher Meeting Room Services offered through the library allow you to stream TV, movies, and documentaries from home. The second floor is closed for renovation until summer. Friday, January 12, pm - pm Radmacher Meeting Room Whether you're looking to supplement your cable package or cut the cord, this presentation will review the best free and premium services for watching your favorite content. | Low | [
0.528,
33,
29.5
] |
As a power source of a mobile device or the like, a lithium ion secondary battery is mainly used. The functions of mobile devices and the like are diversifying, resulting in increased power consumption. Therefore, a lithium ion secondary battery is required to have an increased battery capacity and, simultaneously, an enhanced charge/discharge cycle characteristic.

Further, there is an increasing demand for a secondary battery with a high output and a large capacity for electric tools, such as an electric drill, and for hybrid automobiles. In this field, conventionally, a lead secondary battery, a nickel-cadmium secondary battery, and a nickel-hydrogen secondary battery are mainly used. A small and light lithium ion secondary battery with high energy density is highly anticipated, and there is a demand for a lithium ion secondary battery excellent in large-current load characteristics. In particular, in applications for automobiles, such as battery electric vehicles (BEV) and hybrid electric vehicles (HEV), a long-term cycle characteristic over 10 years and a large-current load characteristic for driving a high-power motor are mainly required, and a high volume energy density is also required for extending the driving range (distance); these requirements are severe compared to mobile applications.

In the lithium ion secondary battery, generally, a lithium salt, such as lithium cobaltate, is used as a positive electrode active material, and a carbonaceous material, such as graphite, is used as a negative electrode active material.

Graphite is classified into natural graphite and artificial graphite. Among those, natural graphite is available at a low cost. However, as natural graphite has a scale shape, if natural graphite is formed into a paste together with a binder and applied to a current collector, the natural graphite is aligned in one direction. When an electrode made of such a material is charged, the electrode expands only in one direction, which degrades the performance of the electrode. Natural graphite which has been granulated and formed into a spherical shape has been proposed; however, the resulting spherical natural graphite is nonetheless aligned because it is crushed by pressing in the course of electrode production. Further, the surface of the natural graphite has high reaction activity, resulting in a low initial charge-discharge efficiency and poor cycle characteristics. In order to solve those problems, Patent Document 1 and the like propose a method involving coating carbon on the surface of the natural graphite processed into a spherical shape. However, sufficient cycle characteristics have not been attained.

Regarding artificial graphite, there is exemplified the mesocarbon microsphere-graphitized article described in Patent Document 2 and the like. However, the article has a lower discharge capacity compared to a scale-like graphite and a limited range of application. Artificial graphite typified by graphitized articles made of oil, coal pitch, coke and the like is available at a relatively low cost. However, although a crystalline needle-shaped coke shows a high discharge capacity, it tends to align in a scale shape and be oriented in an electrode. In order to solve this problem, the method described in Patent Document 3 and the like yields results.
Further, negative electrode materials using so-called hard carbon and amorphous carbon described in Patent Document 4 are excellent in a characteristic with respect to a large current and also have a relatively satisfactory cycle characteristic. Patent Document 5 discloses artificial graphite being excellent in cycle characteristics. Patent Document 6 discloses an artificial graphite negative electrode produced from needle-shaped green coke. Patent Document 7 discloses an artificial graphite negative electrode produced from cokes coated with petroleum pitch in a liquid phase. | Mid | [
0.628571428571428,
30.25,
17.875
] |
Q: What happened to buildings on USGS topo maps?

I have been using the National Map website to download some topographic maps and I can't help noticing a big difference between the old and new. Historical USGS topographic maps have building outlines on them, but current ones do not. Current maps have things like schools and cemeteries, but no outlines depicting general built structures. Is there a reason for this omission? Is there a data source where building information is available? My apologies if this is not the correct forum for these questions.

A: The USGS FAQ states:

The original USGS 7.5-minute (1:24,000 scale) Historical Topographic Maps (produced 1945-1992) included feature classes that are not yet shown on US Topo maps (produced 2009-present). Examples include recreational trails, pipelines, power lines, survey markers, many types of boundaries, and many types of buildings. The USGS no longer does field verification or other primary data collection for these feature classes, and there are no national data sources suitable for general-purpose, 1:24,000-scale maps. For many of these feature classes, USGS is working with other agencies to develop data. Over time, as these data become available and are included in The National Map, that content will be added to the US Topos.

Buildings and structures -- Traditional topographic maps locate and label a variety of public buildings and structures, such as courthouses, libraries, transportation terminals, and bridges. National public domain datasets of these feature classes do not currently exist. Although these kinds of features are not generally within USGS scope, we are working with other government agencies and incorporating crowd-sourced information to develop selected structures data.

https://www.usgs.gov/faqs/why-are-there-no-power-lines-pipelines-libraries-trails-etc-us-topo-maps?qt-news_science_products=0#qt-news_science_products | High | [
0.7212121212121211,
29.75,
11.5
] |
Q: Running script containing kubectl commands in helm installation

I have a shell script that has several kubectl commands; I need to execute it as part of a helm installation.

#!/bin/bash
kubectl create secret generic mysecret --from-literal=username=$USER_NAME --from-literal=password=$passwrd

I have written a job which executes this script.

apiVersion: batch/v1
kind: Job
metadata:
  name: "sagdfghd"
spec:
  template:
    spec:
      containers:
        - name: sagdfghd
          image: {{ .Values.jobs.dockerRegistry }}
          command: ["sh", "-c", {{ .Files.Get "scripts/myscript.sh" | quote }} ]

But as the script is running inside a container, it is not able to find the kubectl command. Any idea how I can run this script? TIA

A: What image does {{ .Values.jobs.dockerRegistry }} resolve to, and does it have the kubectl tool installed on it? It most likely does not have kubectl installed on it, so you will have to add the kubectl install instructions in your Dockerfile. Those instructions will depend on what your base Docker image is.

See the following link: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl | High | [
0.6600496277915631,
33.25,
17.125
] |
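The answer above stops at "add the kubectl install instructions in your Dockerfile." As a rough sketch only — assuming a Debian/Ubuntu-based image, with a placeholder kubectl version that should be checked against the linked install docs — the commands that would go into the image (as Dockerfile RUN steps) might look like:

    # Hypothetical install steps for a Debian/Ubuntu-based image.
    apt-get update && apt-get install -y curl ca-certificates
    # Pin a kubectl version compatible with your cluster; v1.20.0 is only a placeholder.
    curl -LO "https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl"
    install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
    kubectl version --client   # sanity check that the binary runs

Alternatively, the Job can reference an image that already ships kubectl (for example, the widely used bitnami/kubectl image) instead of a custom build; either way, the Job's service account still needs RBAC permission to create secrets.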
NOTE: This order is nonprecedential.

United States Court of Appeals for the Federal Circuit

PATHOLOGY, THE AMERICAN COLLEGE OF MEDICAL GENETICS, THE AMERICAN SOCIETY FOR CLINICAL PATHOLOGY, THE COLLEGE OF AMERICAN PATHOLOGISTS, HAIG KAZAZIAN, MD, ARUPA GANGULY, PHD, WENDY CHUNG, MD, PHD, HARRY OSTRER, MD, DAVID LEDBETTER, PHD, STEPHEN WARREN, PHD, ELLEN MATLOFF, M.S., ELSA REICH, M.S., BREAST CANCER ACTION, BOSTON WOMEN'S HEALTH BOOK COLLECTIVE, LISBETH CERIANI, RUNI LIMARY, GENAE GIRARD, PATRICE FORTUNE, VICKY THOMASON,

v.

UNITED STATES PATENT AND | Low | [
0.5109489051094891,
26.25,
25.125
] |
Services

Team

B2tGame stands out for the quality of its team. Its members worked for a number of major brands including Tetris, Scrabble, Risk, WSoP, Zootopia, EA, Disney, LEGO, Ubisoft and Unity.

Technology

rob0 is an innovative offer that helps video game creators test and optimize their products. Our expertise in economic design and machine learning takes your data and turns it into a wealth of information to maximize your revenues.

Games

REACH classic is a deep and simple puzzle game. How far can you reach? Find out now and keep on playing! Available on smartphones and tablets using both iOS and Android operating systems.

REACH versus is a PVP variation of REACH classic where you can win real money. How far can you reach? Find out now and get rich with REACH! Available on smartphones and tablets using both iOS and Android operating systems.

City Cleaner is a retro-futuristic simulation game where the player manages a squad of municipal employees who clean up cities. The better he does his job, the more his territory grows. The prototype is currently under development.

Projects

Lil Pirate: A Submarine Adventure is a treasure hunting project. The player receives missions to retrieve different merchandise, which allows him to obtain maps and find collectible items.

Roller Derby Manager is a project where the player ascends the ranks when his team wins tournaments. The adventure mode allows the team to gain ground, while the versus mode is for confronting friends and exchanging cards/players.

Partners

Back to the Game developed the technology behind Spoken Adventures, an application of interactive audio stories using voice recognition systems. | Mid | [
0.629955947136563,
35.75,
21
] |
The semiconductor integrated circuit industry has experienced rapid growth in the past several decades. Technological advances in semiconductor materials and design have produced increasingly smaller and more complex circuits. These material and design advances have been made possible as the technologies related to processing and manufacturing have also undergone technical advances. In the course of semiconductor evolution, the number of interconnected devices per unit of area has increased as the size of the smallest component that can be reliably created has decreased. Many of the technological advances in semiconductors have occurred in the field of device fabrication. As device densities continue to increase and feature sizes continue to decrease, device fabrication processes must constantly adapt and improve. One such process includes the use of photolithography with a mask. During photolithography a pattern is defined on the surface of a substrate through the use of light shined through or reflected off a mask. The pattern is typically used to identify two regions on the substrate: a first region that will be exposed to a further processing step and a second region that will be protected from the same further processing step. For example, the pattern might divide the substrate into regions where material is to be deposited or not deposited. In another example, the pattern might divide the substrate into regions where material is to be etched away or should remain. Accordingly, it would be desirable to have improved photolithography patterns and methods. The various features disclosed in the drawings briefly described above will become more apparent to one of skill in the art upon reading the detailed description below. Where features depicted in the various figures are common between two or more figures, the same identifying numerals have been used for clarity of description. | High | [
0.6891191709844561,
33.25,
15
] |
1. Field of the Invention

The present invention concerns an ultrasonic scanner and method for imaging the surface of a part. The present invention is particularly useful in the energy production area, where locating defects in equipment is usually difficult using any nondestructive analysis procedure presently in the art.

2. Description of Related Art

The energy of sound waves is useful for checking the condition of materials. For example, ultrasonic energy may be used to detect the presence of flaws. Ultrasonic testing is advantageous over destructive methods of testing materials for defects. In destructive testing, defects are made apparent by stressing the part, for example, by bending or tension until any cracks present on the part break open. By comparison, ultrasound is at such a low intensity that the part does not become damaged.

During ultrasonic testing, ultrasonic waves are transmitted from a transmitter on a probe into the part, and then returning waves are received for analysis of the information they carry. In this manner, inspection data is obtained over a defined spatial sampling grid on the surface of a three-dimensional part. This data is stored in a computer's memory for subsequent analysis. The sound pressure distribution of the reflected waves is transferred into a visual image. During analysis, the spatial relationships between reflections of ultrasound from within the part are readily apparent in the image. For general information regarding ultrasonic instrumentation, see J. Krautkramer, et al., Ultrasonic Testing of Materials, 4th Ed., Springer-Verlag, N.Y., 1990, and D. Christensen, Ultrasonic Bioinstrumentation, John Wiley & Sons, Inc., N.Y., 1988.

Ultrasonic imaging requires a method to track the position of the transmitting probe such that the system can recognize when the probe is at a spatial sampling point and obtain UT data there. Probe position feedback is often accomplished through position encoders mounted to a track assembly which is itself mounted to the part; see, for example, U.S. Pat. No. 4,700,045. In the field of part inspection, the object to be tested often includes many non-uniform shapes and sizes, such as nozzles, valves, etc. Scanners which include track assemblies suffer from significant limitations in imaging such complex surfaces. Each track assembly can operate on at most a narrow range of part geometries. Therefore, fabrication of a special-purpose track assembly is required for each new complex part to be inspected. Development of these track assemblies is an expensive and time-consuming process. In addition, track assemblies generally restrict the motion of the probe to linear trajectories. In use, this track scanner is moved linearly, followed by orthogonal increments and a repeat of the linear motion. But any installed projection on the part surface limits the ability of the track assembly to complete the desired scan.

Some scanners are not mounted on a track, as in the medical diagnostics industry. However, these ultrasonic devices collect only a limited amount of data and are not efficient in imaging complex parts. These scanners obtain two-dimensional information about the position of the probe, and thus collect only two-dimensional data from the part being inspected. Computers fill in missing information to create a three-dimensional image. Reconstruction requires sophisticated gap-filling interpolation algorithms, image resampling and image enhancements.
See, for example, Watkin, et al., Three-dimensional Reconstruction and Enhancement of Arbitrarily Oriented and Positioned 2D Medical Ultrasonic Images, 1993 Canadian Conference on Electrical and Computer Engineering, pp. 1188-1195. Such reconstructed three-dimensional images lack accuracy. Similarly, in Martin, et al. U.S. Pat. No. 5,398,691 a free standing probe is made to rotate in a two-dimensional coordinate system and is translated into three-dimensions relative to the space defined by a magnetic field generator. On the other hand, in the present invention, the probe is moved without constraint in three dimensions and the as measured probe location is used by the system software to control the inspection data acquisition process. The present invention has the advantage, therefore, that data acquisition is unconstrained by the three dimensional configuration of the part nor by probe motion and trajectory. All references, articles, patents, patent applications, standards and the like cited herein are incorporated herein by reference in their entirety. There is, therefore, a continuing need for ultrasonic scanners and methods which allow for accurate inspection of complex parts. The scanners should collect three-dimensional data which can be conveniently converted to a detailed two-dimensional image. Furthermore, the probe position should be monitored in a configuration which avoids the necessity of track assemblies. The present invention accomplishes these objectives. | High | [
0.6772908366533861,
31.875,
15.1875
] |
Category: Science Fiction

If it wasn't for the movie, I never would've found out about this book. It's very simple, very easy to read and it's very amusing.

Short summary

This is the story of Mark Watney, an astronaut who gets stuck on Mars. He and his crew are on the mission Ares 3 that goes terribly wrong. A storm is coming and they are ordered to leave Mars immediately. Unfortunately, one of the antennas gets blown away by the wind and stabs Mark Watney. His suit doesn't give any signs that he is alive, so the crew, presuming that he is dead, leaves him there. But the next day Mark wakes up. As he says, he keeps solving one problem after another so that he can stay alive until the arrival of Ares 4. Luckily, thanks to the satellites orbiting Mars, NASA finds out that Mark is still alive and they even manage to get in contact. While NASA is preparing a rescue mission for Mark, he does everything that he can to survive until the rescue.

This is a very fun book and you won't be able to put it down 'till the very end. If you like science fiction, I would strongly recommend this. | High | [
0.6564102564102561,
32,
16.75
] |
Brothers compete nationally in sport climbing

Isaac Buehner tried basketball as a sport, but found himself more at home with sport climbing. (Ron Bevan/City Journals)

By Ron Bevan | [email protected]

Youth begin competing in athletics at a young age, honing skills in soccer, baseball, basketball and football. They hope someday to represent their high schools in their chosen athletic endeavor. For Ian and Isaac Buehner, the competition keeps them practicing weekly. And although the two teenagers do not represent their schools, they have made names for themselves nationally.

The Buehners compete in sport climbing, a sport the two picked up from their parents.

"We started climbing as a family when we lived in upstate New York," said Meridith Buehner, their mom. "The winters there were cold, wet and long. There was a climbing gym close to us, so we began going there and trying it out."

Although Meridith and her husband Daniel grew up in Utah, climbing wasn't a passion for them until later in life.

"It's kind of amazing that we grew up here and even though we have some of the best climbing areas in the world, it wasn't until New York that we found the sport," Meridith said.

The sport continued to be an important family function when they moved back to Utah, and soon Ian and Isaac found they could compete against others.

"We just kept climbing when we came back," Meridith said. "This way the kids could be together as opposed to running all over town to do different sports."

"I tried doing basketball for a little while, but there is something about climbing that is different and perfect for me," Isaac said. "I like climbing because you can choose how you proceed. You don't have to go through all the steps and progressions as you do in other sports."

Ian also tried another sport and returned to climbing.

"I played flag football, but that got pretty boring," Ian said. "I am good at climbing. It is more fun than all the other sports. I can set goals for myself and watch myself get them. Climbing kind of makes me who I am."

While Isaac, 14, and Ian, 12, compete in different age groups, they do compete at the same events. Meridith stays in the sport by helping arrange local competitions.

"Basically, I contact the gyms in our region and set up schedules for when they can host competitions," Meridith said.

Gyms in the region, known as the Mountain West Region, stretch through five states: Montana, Idaho, Colorado, Wyoming and Utah. Each year there is a national meet, consisting of the top six climbers from a division. Both Ian and Isaac competed this year in the national tournament, held this summer in Georgia.

"It is kind of a huge deal to make it to the nationals," Meridith said. Two different regions make up a division and only the top six kids from the division get to go to nationals. "We have some of the top climbers in the world in our division when you look at the mountainous states we have around us." | High | [
0.6620370370370371,
35.75,
18.25
] |
/*
 * Copyright (c) 2000-2010 Apple Inc. All rights reserved.
 *
 * @APPLE_OSREFERENCE_LICENSE_HEADER_START@
 *
 * This file contains Original Code and/or Modifications of Original Code
 * as defined in and that are subject to the Apple Public Source License
 * Version 2.0 (the 'License'). You may not use this file except in
 * compliance with the License. The rights granted to you under the License
 * may not be used to create, or enable the creation or redistribution of,
 * unlawful or unlicensed copies of an Apple operating system, or to
 * circumvent, violate, or enable the circumvention or violation of, any
 * terms of an Apple operating system software license agreement.
 *
 * Please obtain a copy of the License at
 * http://www.opensource.apple.com/apsl/ and read it before using this file.
 *
 * The Original Code and all software distributed under the License are
 * distributed on an 'AS IS' basis, WITHOUT WARRANTY OF ANY KIND, EITHER
 * EXPRESS OR IMPLIED, AND APPLE HEREBY DISCLAIMS ALL SUCH WARRANTIES,
 * INCLUDING WITHOUT LIMITATION, ANY WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE, QUIET ENJOYMENT OR NON-INFRINGEMENT.
 * Please see the License for the specific language governing rights and
 * limitations under the License.
 *
 * @APPLE_OSREFERENCE_LICENSE_HEADER_END@
 */
/*-
 * Copyright (c) 1990, 1993
 *	The Regents of the University of California. All rights reserved.
 * (c) UNIX System Laboratories, Inc.
 * All or some portions of this file are derived from material licensed
 * to the University of California by American Telephone and Telegraph
 * Co. or Unix System Laboratories, Inc. and are reproduced herein with
 * the permission of UNIX System Laboratories, Inc.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *	This product includes software developed by the University of
 *	California, Berkeley and its contributors.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	@(#)param.h	8.1 (Berkeley) 4/4/95
 */
/*
 * Machine dependent constants for Intel 386.
 */
#ifndef _I386_PARAM_H_
#define _I386_PARAM_H_

#include <i386/_param.h>

/*
 * Round p (pointer or byte index) up to a correctly-aligned value for all
 * data types (int, long, ...). The result is unsigned int and must be
 * cast to any desired pointer type.
 */
#define ALIGNBYTES	__DARWIN_ALIGNBYTES
#define ALIGN(p)	__DARWIN_ALIGN(p)

#define NBPG		4096		/* bytes/page */
#define PGOFSET		(NBPG-1)	/* byte offset into page */
#define PGSHIFT		12		/* LOG2(NBPG) */

#define DEV_BSIZE	512
#define DEV_BSHIFT	9		/* log2(DEV_BSIZE) */
#define BLKDEV_IOSIZE	2048
#define MAXPHYS		(128 * 1024)	/* max raw I/O transfer size */

#define CLSIZE		1
#define CLSIZELOG2	0

/*
 * Constants related to network buffer management.
 * MCLBYTES must be no larger than CLBYTES (the software page size), and,
 * on machines that exchange pages of input or output buffers with mbuf
 * clusters (MAPPED_MBUFS), MCLBYTES must also be an integral multiple
 * of the hardware page size.
 */
#define MSIZESHIFT	8			/* 256 */
#define MSIZE		(1 << MSIZESHIFT)	/* size of an mbuf */
#define MCLSHIFT	11			/* 2048 */
#define MCLBYTES	(1 << MCLSHIFT)		/* size of an mbuf cluster */
#define MBIGCLSHIFT	12			/* 4096 */
#define MBIGCLBYTES	(1 << MBIGCLSHIFT)	/* size of a big cluster */
#define M16KCLSHIFT	14			/* 16384 */
#define M16KCLBYTES	(1 << M16KCLSHIFT)	/* size of a jumbo cluster */

#define MCLOFSET	(MCLBYTES - 1)
#ifndef NMBCLUSTERS
#define NMBCLUSTERS	((1024 * 1024) / MCLBYTES)	/* cl map size: 1MB */
#endif

/*
 * Some macros for units conversion
 */
/* Core clicks (NeXT_page_size bytes) to segments and vice versa */
#define ctos(x)	(x)
#define stoc(x)	(x)

/* Core clicks (4096 bytes) to disk blocks */
#define ctod(x)	((x)<<(PGSHIFT-DEV_BSHIFT))
#define dtoc(x)	((x)>>(PGSHIFT-DEV_BSHIFT))
#define dtob(x)	((x)<<DEV_BSHIFT)

/* clicks to bytes */
#define ctob(x)	((x)<<PGSHIFT)

/* bytes to clicks */
#define btoc(x)	(((unsigned)(x)+(NBPG-1))>>PGSHIFT)

#ifdef __APPLE__
#define btodb(bytes, devBlockSize)	\
	((unsigned)(bytes) / devBlockSize)
#define dbtob(db, devBlockSize)		\
	((unsigned)(db) * devBlockSize)
#else
#define btodb(bytes)	/* calculates (bytes / DEV_BSIZE) */ \
	((unsigned)(bytes) >> DEV_BSHIFT)
#define dbtob(db)	/* calculates (db * DEV_BSIZE) */ \
	((unsigned)(db) << DEV_BSHIFT)
#endif

/*
 * Map a ``block device block'' to a file system block.
 * This should be device dependent, and will be if we
 * add an entry to cdevsw/bdevsw for that purpose.
 * For now though just use DEV_BSIZE.
 */
#define bdbtofsb(bn)	((bn) / (BLKDEV_IOSIZE/DEV_BSIZE))

/*
 * Macros to decode (and encode) processor status word.
 */
#define STATUS_WORD(rpl, ipl)	(((ipl) << 8) | (rpl))
#define USERMODE(x)		(((x) & 3) == 3)
#define BASEPRI(x)		(((x) & (255 << 8)) == 0)

#if defined(KERNEL) || defined(STANDALONE)
#define DELAY(n) delay(n)
#else /* defined(KERNEL) || defined(STANDALONE) */
#define DELAY(n) { int N = (n); while (--N > 0); }
#endif /* defined(KERNEL) || defined(STANDALONE) */

#endif /* _I386_PARAM_H_ */ | Low | [
0.49771689497716803,
27.25,
27.5
] |
355 N.W.2d 10 (1984) Gene CARR, Appellant, v. SOUTH DAKOTA DEPARTMENT OF LABOR, UNEMPLOYMENT INSURANCE DIVISION, Appellee. No. 14332. Supreme Court of South Dakota. Considered on Briefs March 19, 1984. Decided September 12, 1984. Gene Carr, pro se. Drew Johnson, Sp. Asst. Atty. Gen., South Dakota Dept. of Labor, Unemployment Ins. Div., Aberdeen, for appellee. MORGAN, Justice. This appeal is from a trial court decision to affirm the South Dakota Labor Department's determination that an employer-employee relationship existed between appellant *11 Eugene Carr (Carr), a practicing chiropractor, and the people working in his clinic. The Labor Department's initial decision subjected Carr to liability for unemployment insurance payments. We affirm. Carr has been a practicing chiropractor for over twenty-one years. His workload requires him to hire people for general office work, i.e., typing, receptionist, preparation of patients and facilitation of therapy. Initially, Carr submitted the requisite wage reports and fulfilled his obligation to the Unemployment Insurance Division, Department of Labor (Department). In 1978, Carr first challenged the Department's right to impose the unemployment insurance tax on him. He stopped submitting reports and records. In a series of proceedings, Carr's liability for the unemployment insurance tax for the years 1978 and 1979 was determined. That determination is not before us because Carr failed to exhaust his administrative remedies and to file timely notice of appeal on Department's imposition of the tax for 1978 and 1979. This appeal deals with Department's determination of Carr's liability for unemployment insurance tax for 1980 and the first two quarters of 1981. During that period, Carr signed a Secretarial Services Agreement with each person who worked for him. These people paid $1.00 per week to a corporation set up by Carr and his wife as a rental fee for the use of typewriters and other office equipment. The agreement provided for an hourly wage, paid weekly, generally for thirty-five to thirty-eight hours per week. Carr claimed that he and the people working in his clinic entered an independent contractual relationship, rather than an employer-employee relationship. Carr asserts that the mutual intent of the contracting parties is the sole and sufficient determinant of the parties' status. Further facts and procedural history will be discussed in depth as pertinent to disposition of the issues. We first examine Carr's claim that SDCL 61-1-11[1] improperly placed the burden of proof on Carr to establish that an independent contractual relationship existed. He asserts that he is thus placed in the position of being guilty until proven innocent. It is a well-established principle of the common law incorporated into the statutory provisions of many states (SDCL 23A-25-3.1, effective July 1, 1984) that a person accused of crimes is presumed to be innocent until proven guilty. 29 Am.Jur.2d Evidence § 225 (1967). "[This] presumption applies not only in criminal cases, but also in civil cases where the commission of a crime comes collaterally in question." Id. at § 224. Carr cites us to no authority that applies this presumption to the Administrative Procedures Act (APA) nor are we aware of any. Carr is not accused of commission of a crime, either directly or collaterally. In Weber v. South Dakota Dept. of Labor, Etc., 323 N.W.2d 117 (S.D.1982), we recently upheld the ABC test of SDCL 61-1-11. 
We decline to apply the common law presumption to the APA, no form of which ever existed under the common law. Carr's argument is without merit. Department was stymied in the conduct of its investigation by Carr's refusal to produce or testify as to his employment or payroll records. Department secured a court order pursuant to SDCL 61-3-10 directing Carr to produce his payroll records. A hearing was held on Carr's motion to quash the order and the trial court denied Carr's motion and directed him to produce the records under the threat of contempt *12 proceedings. This court dismissed Carr's attempt to appeal that order. The trial court entered another order, reimposing it's original order and Carr complied. Carr now complains that the trial court's order to produce his records violated his Fourth Amendment rights against search and seizure, and his Fifth Amendment right against self-incrimination. Carr's attempt to assert these constitutional rights in this situation fails. In Wilson v. United States, 221 U.S. 361, 31 S.Ct. 538, 55 L.Ed. 771 (1911), Mr. Justice Hughes, after noting that public officials are not protected from producing public records by constitutional privilege against self-incrimination, stated: The principle applies not only to public documents in public offices, but also to records required by law to be kept in order that there may be suitable information of transactions which are the appropriate subjects of governmental regulation.... There the privilege, which exists as to private papers, cannot be maintained. 221 U.S. at 380, 31 S.Ct. at 544, 55 L.Ed. at 779. Carr was required, under SDCL 61-3-2, to keep and preserve the records sought and eventually produced. SDCL 61-3-2 also states that the type of records involved here "shall be open to inspection... by the secretary [of Labor] or his authorized representatives ...." They were not protected by Carr's Fifth Amendment rights against self-incrimination. As regards his Fourth Amendment rights against reasonable search and seizure, there simply was no search or seizure. Carr also complains that he was denied due process in the initial agency proceedings when the appeal referee acted as both judge and the Department's investigator. This issue was not raised at any time below, either by objection at the hearings or on the appeal in circuit court. "Issues not presented at the trial court level are not properly before this Court." Weber 323 N.W.2d at 120. Furthermore, the record does not support his allegations. Department was represented by an assistant attorney general. There is no evidence that the appeal referee acted in any capacity as Department's representative at that hearing. Carr's greatest complaint apparently centers around the trial court's denial of his request for a jury trial on the question of whether his relationship to Spear and Hecock was contractual or an employment relationship. Carr relies on Article VI, section 6 of the South Dakota Constitution which provides, in pertinent part: "The right of trial by jury shall remain inviolate and shall extend to all cases at law without regard to the amount in controversy...."[2] Carr also cites to SDCL 15-6-38(a) which provides: "The right of trial by jury as declared by article VI, section 6 of the Constitution of South Dakota or as given by a statute of South Dakota shall be preserved to the parties inviolate." 
It is Carr's contention that the APA provision for appeal to circuit court, SDCL 1-26-35 and -36, deprives him of a jury trial, in violation of the constitutional provision and the statute cited above. In Shaw v. Shaw, 28 S.D. 221, 226, 133 N.W. 292, 293 (1911), this court said: [T]he constitutional provision ... "that trial by jury shall remain inviolate" ... applies to law cases triable by jury as a matter of right as theretofore existed in the territory of Dakota prior to the going into effect of the Constitution of this state. The "law cases" comprehended within this clause of our Constitution applied to all those cases which at common law or by the statute of the territory of Dakota were triable by a jury on the law side of the court. Shaw involved an appeal in probate proceedings. The opinion pointed out that a will contest is a special proceeding, not a case at law. South Dakota's unemployment *13 insurance laws were statutorily created in 1936 and did not exist at common law. The relevant statutes within SDCL ch. 61-7 govern the procedure for implementing the unemployment insurance laws in contested cases. The procedure outlined therein does not provide for a jury trial. The process for appealing administrative decisions is found in SDCL ch. 1-26, South Dakota Administrative Procedures Act. SDCL 1-26-30.2 provides a right to appeal beyond the Department to the circuit court and SDCL 1-26-37 allows for an appeal to the South Dakota Supreme Court from the circuit court review. There is no statutory or administrative procedure provision for the right to a jury trial in the administrative process. The APA, SDCL ch. 1-26, provides the only stated right to judicial review of a Department decision and this remedy is legislative and not constitutional. "In the administrative process, decisions are made through methods, for reasons, and by persons, different from those in the judicial process." 1 Am.Jur.2d Administrative Law § 16 (1962). We hold that, as in the case of probate appeals, administrative appeals under the APA are special proceedings, not a case at law protected by the constitutional guarantee. In Carr's notice of appeal, from the agency decision to the circuit court, he included a "counterclaim" alleging that departmental representatives had violated his rights, thus he prayed for damages in excess of three million dollars. The Attorney General's Office filed a motion to dismiss the counterclaim. After a hearing on the motion to dismiss, the trial court entered a judgment for dismissal. Carr's final complaint is that the trial court violated his constitutional rights by dismissing his counterclaim. We hold that the trial court was correct. SDCL 1-26-30 sets forth an exclusive judicial avenue for circuit court review of administrative decisions. The circuit court is statutorily cloaked with appellate jurisdiction which permits nothing more than a review of agency decisions. The lower court lacks jurisdiction to review matters outside the confines of the administrative action appealed from. As the trial judge noted, Carr's counterclaim is a separate and distinct civil action and thus is controlled by rules of civil procedure set out in SDCL ch. 15-6. A dismissal of the counterclaim did not factually decide that case and Carr is not precluded from pursuing that action. We affirm the trial court's affirmation of Department's decision. All the Justices concur. DUNN, Retired Justice, participating. WUEST, Circuit Judge, acting as Supreme Court Justice, not participating. 
NOTES [1] SDCL 61-1-11 provides: Services performed by an individual for wages shall be deemed to be employment subject to this title unless and until it is shown to the satisfaction of the department that: (1) Such individual has been and will continue to be free from control or direction over the performance of such services, both under his contract of service and in fact; and (2) Such service is either outside the usual course of the business for which such service is performed, or that such service is performed outside of all the places of business of the enterprise for which such service is performed; and (3) Such individual is customarily engaged in an independently established trade, occupation, profession or business. [2] Carr also cites Article VI, section 5; however, that section is inappropriate since it deals specifically with libel suits. | Mid | [
0.5419580419580421,
38.75,
32.75
] |
// Copyright 2016 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// Flags: --allow-natives-syntax --harmony-do-expressions

(function TestDoForInDoBreak() {
  function f(o, i) {
    var a = "result@" + do {
      var r = "(";
      for (var x in o) {
        var b = "end@" + do {
          if (x == i) { break } else { r += o[x]; x }
        }
      }
      r + ")";
    }
    return a + "," + b;
  }
  assertEquals("result@(3),end@0", f([3], 2));
  assertEquals("result@(35),end@1", f([3,5], 2));
  assertEquals("result@(35),end@1", f([3,5,7], 2));
  assertEquals("result@(35),end@1", f([3,5,7,9], 2));
  %OptimizeFunctionOnNextCall(f);
  assertEquals("result@(3),end@0", f([3], 2));
  assertEquals("result@(35),end@1", f([3,5], 2));
  assertEquals("result@(35),end@1", f([3,5,7], 2));
  assertEquals("result@(35),end@1", f([3,5,7,9], 2));
})();

(function TestDoForInDoContinue() {
  function f(o, i) {
    var a = "result@" + do {
      var r = "("
      for (var x in o) {
        var b = "end@" + do {
          if (x == i) { continue } else { r += o[x]; x }
        }
      }
      r + ")"
    }
    return a + "," + b
  }
  assertEquals("result@(3),end@0", f([3], 2));
  assertEquals("result@(35),end@1", f([3,5], 2));
  assertEquals("result@(35),end@1", f([3,5,7], 2));
  assertEquals("result@(359),end@3", f([3,5,7,9], 2));
  %OptimizeFunctionOnNextCall(f);
  assertEquals("result@(3),end@0", f([3], 2));
  assertEquals("result@(35),end@1", f([3,5], 2));
  assertEquals("result@(35),end@1", f([3,5,7], 2));
  assertEquals("result@(359),end@3", f([3,5,7,9], 2));
})();

(function TestDoForNestedWithTargetLabels() {
  function f(mode) {
    var loop = true;
    var head = "<";
    var tail = ">";
    var middle =
        "1" + do { loop1: for(; loop; head += "A") {
        "2" + do { loop2: for(; loop; head += "B") {
        "3" + do { loop3: for(; loop; head += "C") {
        "4" + do { loop4: for(; loop; head += "D") {
        "5" + do { loop5: for(; loop; head += "E") {
        "6" + do { loop6: for(; loop; head += "F") {
          loop = false;
          switch (mode) {
            case "b1": break loop1;
            case "b2": break loop2;
            case "b3": break loop3;
            case "b4": break loop4;
            case "b5": break loop5;
            case "b6": break loop6;
            case "c1": continue loop1;
            case "c2": continue loop2;
            case "c3": continue loop3;
            case "c4": continue loop4;
            case "c5": continue loop5;
            case "c6": continue loop6;
            default: "7";
          }
        }} }} }} }} }} }}
    return head + middle + tail;
  }
  function test() {
    assertEquals(            "<1undefined>", f("b1"));
    assertEquals(           "<A1undefined>", f("c1"));
    assertEquals(          "<A12undefined>", f("b2"));
    assertEquals(         "<BA12undefined>", f("c2"));
    assertEquals(        "<BA123undefined>", f("b3"));
    assertEquals(       "<CBA123undefined>", f("c3"));
    assertEquals(      "<CBA1234undefined>", f("b4"));
    assertEquals(     "<DCBA1234undefined>", f("c4"));
    assertEquals(    "<DCBA12345undefined>", f("b5"));
    assertEquals(   "<EDCBA12345undefined>", f("c5"));
    assertEquals(  "<EDCBA123456undefined>", f("b6"));
    assertEquals("<FEDCBA123456undefined>", f("c6"));
    assertEquals("<FEDCBA1234567>", f("xx"));
  }
  test();
  %OptimizeFunctionOnNextCall(f);
  test();
})(); | Low | [
0.5180180180180181,
28.75,
26.75
] |
Q: Accessing codeigniter session variables inside javascript

How to access codeigniter session variables inside javascript? If I create session variable inside plain php and access it in javascript it gives me result but in case of codeigniter session variables it gives me syntax error. I use following code of line to access codeigniter session variable in my .js file

var m1 = "<?php echo json_encode($this->session->userdata('max_age')); ?>";

A: "I use following code of line to access codeigniter session variable in my .js file"

You cannot put PHP code inside your .js file. It will not be parsed. You must put your code in the PHP file that your .js file is being called from. For example:

<script type="text/javascript">
    var m1 = <?php echo json_encode($this->session->userdata('max_age')); ?>;
</script>
<script type="text/javascript" src="script.js"></script> | High | [
0.661971830985915,
35.25,
18
] |
956 A.2d 58 (2008) Chidiebere P. INYAMAH, Appellant, v. UNITED STATES, Appellee. No. 05-CF-593. District of Columbia Court of Appeals. Argued January 10, 2008. Decided September 11, 2008. Corinne Beckwith, Public Defender Service, with whom James Klein, Public Defender Service, was on the brief, for appellant. Peter Smith, Assistant United States Attorney, with whom Jeffrey A. Taylor, United States Attorney, and Roy W. McLeese III, Mary B. McCord, Tejpal Chawla, and Sarah T. Chasson, Assistant United States Attorneys, were on the brief, for appellee. Before REID and GLICKMAN, Associate Judges, and SCHWELB, Senior Judge. REID, Associate Judge: Appellant, Chidiebere P. Inyamah, appeals the trial court's judgment convicting him of carrying a pistol without a license ("CPWL")[1] and possession of an unregistered *59 firearm.[2] He complains that the trial court's aiding and abetting instruction constituted reversible error. We affirm the judgment of the trial court. FACTUAL SUMMARY The government presented testimony from officers and employees of the Metropolitan Police Department ("MPD") showing that around 2:30 a.m. on April 22, 2004, MPD Officer Kayode Sodimu was driving a marked police car around Tenth and Quincy Streets, in the Northwest quadrant of the District, when he "heard a couple of gunshots." With him in the police vehicle were MPD Officers Vernon Copeland and Ismael Chapa.[3] As Officer Sodimu drove toward Georgia Avenue, two more shots rang out, and he saw a fast moving red car pass in front of him. Officer Sodimu activated his lights and siren and pursued the red car as the driver turned off his lights, ignored red street lights, and sped through the streets at 80 to 90 miles an hour. When the red car approached Grant Circle, the driver lost control and the car "went airborne and landed on a transformer right away." Officer Sodimu positioned his vehicle behind the red car and he watched as the passenger and driver side doors opened. The man on the passenger side, later identified as Mr. Inyamah, began to exit the car. Officer Sodimu saw a gun in Mr. Inyamah's hand and witnessed him throw the gun on the ground. Officer Sodimu went up to the car, noticed that both air bags in the red car had deployed, and he chased and detained the driver at gunpoint. While he apprehended the driver, Officers Copeland and Chapa chased Mr. Inyamah as he fled on foot. Officer Copeland, who was seated in the back passenger seat, saw Mr. Inyamah getting out of the red car after it crashed, but the officer did not notice anything in his hand and did not see him toss anything. However, Officer Chapa testified that he "observed the passenger throw a dark object out of the vehicle"; when the prosecutor asked whether Officer Chapa "actually s[aw] the object in the possession of the passenger of the vehicle," he replied, "Yes, I did." Officer Chapa never saw the driver of the red car throw any object. Both Officers Chapa and Copeland ran after Mr. Inyamah and apprehended him. Then, Officer Chapa returned to the area where Mr. Inyamah threw the dark object. There, he "saw a black revolver laying on the ground near the car." MPD Officers Elliott Pazornick and Karl Turner, crime scene search officers, were dispatched to Grant Circle around 3:30 a.m. on April 22.[4] Officer Sodimu showed *60 the technicians where the gun was located. 
Officer Turner took pictures, and both Officer Turner and Officer Pazornick measured the distance between the red car and the revolver, determining that the gun was "[a]pproximately six feet, six inches from the [passenger side of the] vehicle." Through Officer Chapa, the government introduced evidence establishing that Mr. Inyamah had no license to carry a pistol and no registration for a gun. Officer Chapa identified (1) an MPD sealed certificate bearing Mr. Inyamah's name and "stat[ing] that there is no record of the defendant with a certification or a registration or a certificate for a weapon in [the District[,] ... [nor] any record of registry of ammunition to the defendant"; and (2) an MPD sealed certificate "stat[ing] that the defendant has no certificate or license to carry a pistol in the District of Columbia." ANALYSIS Mr. Inyamah challenges the trial court's instruction on an aiding and abetting theory of guilt. Specifically, he asserts: Given the critical deficiency in the government's proof that the would-be principal, [the driver of the red car], was not licensed to carry a gun, and given the prosecutor's own admission that the jury readily could have concluded that Mr. Inyamah aided and abetted [the driver's] commission of a CPWL offense, the trial court's decision to instruct the jury on an aiding and abetting theory that was irrefutably unsupported by the evidence was reversible error. Mr. Inyamah re-emphasizes and maintains in his reply brief that both the CPWL and the possession of an unregistered firearm charges "were tainted by the government's now conceded failure to prove that the principal, [the driver of the red car], had no license to carry a firearm and that the firearm was not registered to him." He concludes that: "The [trial] court gave an instruction that was legally incorrect, and it is easily conceivable that the jury, at the prosecutor's urging, would have convicted Mr. Inyamah on an aiding and abetting theory without realizing -- as the trial court and the prosecutor also apparently did not realize -- that such a conviction would require a finding that [the driver of the red car], not Mr. Inyamah, was not licensed to carry the gun and that the gun was not registered in his name." The government agrees that "there was no evidence that the driver of the red car lacked a license to carry a pistol and, thus, [Mr. Inyamah] could not have aided and abetted the driver's commission of that crime." Nevertheless, the government maintains that "[b]ecause there was sufficient evidence that [Mr. Inyamah] committed the crime of carrying a pistol without a license and possession of an unregistered firearm as a principal, [] his conviction must be affirmed." The government also contends that Mr. Inyamah "conflate[s] evidentiary insufficiency with instructional error," and that he "has identified no cognizable legal defect in the aiding and abetting instruction, and his conviction must therefore be affirmed." To provide the context for these arguments, we first focus on the pertinent part of the discussion between the trial court and both counsel pertaining to the proposed final instructions, and reiterate relevant portions of the trial court's charge to the jury. During the discussion, the prosecutor argued that an aiding and abetting instruction was proper, based in part on Mr. Inyamah's testimony from his first trial, which was read into evidence at his *61 second trial, declaring that the driver of the red car "had the gun" and Mr. Inyamah "did not touch it."
Government counsel also advocated the aiding and abetting instruction because the driver fled from the police, two officers saw the defendant with the gun or a dark object, and therefore, "[i]t is reasonably inferable from those facts alone that the defendant was aiding and abetting the driver and in possession." As the prosecutor further contended: It's possible for the jury to believe that the driver possessed [the gun] at one point and the passenger[,] the defendant[,] picked it up and threw it from the car. Thus you can have joint possession as well as aiding and abetting that we believe are apparent on these facts. Defense counsel took the position that the part of Mr. Inyamah's testimony from the first trial, that the defense read into the record at the second trial, did not "make[ ] out the factual basis for the instruction for aiding and abetting." Defense counsel added: "There has been no evidence that even if you did pick it up, that it was not inadvertent. I think that's one of the elements that the government needs to prove that whatever he did, he did so purposefully and knowingly." During its brief discussion with both counsel, the trial court declared: "The jury could believe the evidence that it was the other person's gun. That's the evidence in the record. That's what [the defendant] says, that the other person had the gun.... There is also evidence that [the defendant] had the gun. There is evidence and whether you believe the evidence or not is a different situation." The trial court expressed the view that "there is sufficient evidence for the aiding and abetting instruction," and decided to repeat the instruction given during the first trial. In instructing the jury, the trial court stated the elements of each charged crime and indicated that the government had the burden of proving each element beyond a reasonable doubt.[5] As for aiding and abetting, the trial court stated, in part: You may find the defendant guilty of the crime charged in the indictment without finding that he personally committed each of the acts that make up the crime or that he was present while the crime was being committed. Any person who in some way intentionally participates in the commission of a crime can be found guilty either as an aider and abettor or as a principal offender. It makes no difference which label you attach. A person is guilty or the person is as guilty of the crime as he would have *62 been had he personally committed each of the acts that make up the crime. To find that the defendant aided and abetted in committing the crime, you must find that the defendant knowingly associated himself with the commission of the crime, that he participated in the crime as something that he wished to bring about and that he intended by his actions to make it succeed.... It is sufficient if you find beyond a reasonable doubt that the crime was committed by someone and that the defendant knowingly and intentionally aided and abetted in committing the crime. We now turn to the legal principles which will guide our discussion of the parties' arguments.
"`To obtain a conviction on a theory of aiding and abetting, the government must prove that (a) a crime was committed by someone; (b) the accused assisted or participated in its commission; and (c) his participation was with guilty knowledge.'"[6] In addition, "when parallel theories are submitted to a criminal jury antecedent to a general verdict of guilty, the verdict should be upheld as long as there is sufficient evidence to validate either of the theories presented."[7] Thus, a conviction generally is sustained under the circumstances where "two correct theories of illegality are presented in the instructions and there is sufficient evidence to convict only on one."[8] "But a jury that is given an illegal instruction cannot be assumed not to have followed it, since juries are neither authorized nor competent to make judgments of law."[9] Here, we conclude that the trial court's aiding and abetting instruction constituted a correct statement of the law of aiding and abetting, and further, we discern no error in the court's instruction as to the elements of the charged crimes. The record shows that the aiding and abetting instruction was given under the theory, advanced by the government, that the jury could believe the driver possessed the gun at one point, but that Mr. Inyamah picked it up and tossed it out of the red car. As we explain below, we do not agree with this theory. We would find this a closer case if the only deficiency in the evidence of aiding and abetting were the lack of evidence that the driver -- the putative principal -- was unlicensed and unregistered. For then, the Griffin presumption that the jury perceived the evidentiary weakness of the aiding and abetting theory of liability might *63 seem strained, given the technicality and subtlety of the defect; if an experienced trial judge and experienced counsel overlooked it, the jury might have overlooked it too. But the lack of evidence that the driver was unlicensed and unregistered was not the only defect in the evidence of aiding and abetting. Whether or not the driver had a right to possess and carry a pistol, we see no evidence that the passenger, Mr. Inyamah, helped him do it. Appellant's presence in the car, flight after it crashed, and disposal of the weapon do not add up to aiding and abetting (though the disposal does establish his guilt as a principal). That is something a jury readily could perceive as it reviewed the evidence in light of the court's legally correct instructions. In these circumstances, therefore, we think the Griffin presumption applies with full force: in convicting Mr. Inyamah, the jury must have found, at a minimum, that he intentionally possessed the pistol and threw it away. It is immaterial whether the jury found he had the weapon for the entire duration of the car ride or obtained it from the driver at the end of the ride; there is no evidence that the driver foisted the pistol on him against his will. The jury therefore convicted him as a principal, and the government's evidence that Mr. Inyamah acted as a principal is strong and compelling. Officer Sodimu saw Mr. Inyamah throw a gun on the ground as he got out of the red car. And Officer Chapa observed Mr. Inyamah with a dark object which he threw out of the vehicle. Thus, both officers' testimony established that Mr. Inyamah had actual possession of the gun, because in order for Mr. Inyamah to throw it to the ground, the pistol had to have been in his hand.[10] Officers Turner and Pazornick confirmed that the gun was found approximately six feet, six inches from the red car, on the passenger side. Moreover, the government introduced documentary evidence, through Officer Chapa, that Mr. Inyamah had neither a license to carry a gun nor a registration certificate. Under these circumstances, we are satisfied that the Griffin principle applies.[11] Therefore, we hold that the trial court correctly instructed the jury on the law, and the evidence presented by the government was sufficient, beyond a reasonable doubt, to convict Mr. Inyamah of the charged crimes as a principal. Griffin, supra. Accordingly, for the foregoing reasons, we affirm the judgment of the trial court. So ordered. NOTES [1] D.C.Code § 22-4504(a) (2001). [2] D.C.Code § 7-2502.01 (2001). [3] According to the government, Mr. Chapa's name is incorrectly spelled in the record as "Choppa." [4] Officer Pazornick was unavailable to testify at trial, but the prosecutor read his testimony from the first trial into the record. The jury in the first trial (December 2004) could not reach a verdict. Under the rule of completeness, the prosecutor also read a portion of the cross-examination testimony of Mr. Inyamah from the first trial. Mr. Inyamah maintained that the driver of the car threw the gun on his side of the car. Yet, when the prosecutor reminded Mr. Inyamah of his direct examination testimony that he did not see the driver toss the gun, the defendant agreed: "I said I didn't see him toss the gun." Mr. Inyamah acknowledged that the air bags deployed when the red car crashed, and he stated that he "was probably a little shaken up and dazed... [b]ut ... was not injured." In response to the prosecutor's questions, Mr. Inyamah agreed that the gun did not hit him or the air bag. Defense counsel read another portion of Mr. Inyamah's 2004 testimony, under the rule of completeness. In that portion of his testimony, Mr. Inyamah asserted that he did not see the gun and "[n]ever touched it." In contrast to his December 2004 trial, Mr. Inyamah did not testify at his second trial in March 2005, and presented no evidence. [5] With regard to "the offense of carrying a pistol without a license," the trial court stated the following as elements of the crime: Number 1. The defendant carried a pistol openly or concealed on or about his person. Number 2. The defendant carried the pistol knowingly and intentionally.... Number 3. That the pistol was operable. That it would fire a bullet. Number 4. The defendant was not licensed to carry the pistol by the Chief of Police of the District of Columbia. Number 5. That the defendant carried the pistol in a place other than his home, place of business or land or premises possessed and controlled by him. Concerning "possession of an unregistered firearm," the trial court listed the following as elements of the offense: Number 1. The defendant possessed a pistol. Number 2. The defendant did so knowingly and intentionally.... Number 3. The firearm was not registered to the defendant as required by the District of Columbia law. [6] Hairston v. United States, 908 A.2d 1195, 1198 (D.C.2006) (quoting Jefferson v. United States, 463 A.2d 681, 683 (D.C.1983)). [7] Leftwich v. Maloney, 532 F.3d 20, 24 (1st Cir.2008) (citing Griffin v. United States, 502 U.S. 46, 60, 112 S.Ct. 466, 116 L.Ed.2d 371 (1991)) (other citation omitted); see also White v. United States, 714 A.2d 115, 118 & n.
5 (D.C.1998) ("Since the jury returned a general verdict of guilty on the charge of CPWL, the conviction may be affirmed if the evidence was sufficient to support either theory" of "carrying" a pistol: actual possession or constructive possession). [8] United States v. Black, 530 F.3d 596, 602 (7th Cir.2008) (citing Griffin, supra note 7, 502 U.S. at 59-60, 112 S.Ct. 466) (other citation omitted); see also Sochor v. Florida, 504 U.S. 527, 538, 112 S.Ct. 2114, 119 L.Ed.2d 326 (1992) ("no violation of due process that a trial court instructed a jury on two different legal theories, one supported by the evidence, the other not.") (citing Griffin). [9] Black, supra note 8, 530 F.3d at 602. See also Thomas v. United States, 806 A.2d 626, 630 (D.C.2002) ("Where it appears that a conviction is based upon an improper rule of law, the verdict must be set aside.") (citations omitted); see also Sochor, supra, 504 U.S. at 538, 112 S.Ct. 2114 ("although a jury is unlikely to disregard a theory flawed in law, it is indeed likely to disregard an option simply unsupported by evidence") (citing Griffin, supra note 7, 502 U.S. at 59-60, 112 S.Ct. 466). [10] See Campos v. United States, 617 A.2d 185, 187 (D.C.1992) ("A defendant who maintains direct physical control over an object at a given moment has actual possession of such object."). [11] In Hairston, supra note 6, though we declined to consider a similar point because the government had not raised it, we observed that "in a case such as this, an argument could be made that a jury will likely disregard a factually unsupported theory of liability -- here accomplice liability -- in favor of one [principal liability] clearly supported by the evidence...." 908 A.2d at 1200 (citing Griffin, 502 U.S. at 59-60, 112 S.Ct. 466). | Mid | [
0.623893805309734,
35.25,
21.25
] |
Love Her Summary: SEQUEL TO PROTECT HER. Quil is Claire's, heart and soul. At last, she is old enough for him to pursue hers... but can she overcome the terror and demons of her past to find true love? Notes: I disclaim. This story is stolen from Ms. Meyer's beautiful universe. it is all hers, so please don't sue me. right. enjoy. oh, i recommend you read Protect Her first. And review it. after every chapter. well, once at least. but seriously, read it or this will make no sense. and thanks to all the people who read, reviewed, and encouraged me on Protect Her. I couldn't have written it without the continuous support i found, and i hope this story will get a similar response. 8. Chapter 8 “Claire, you know I won't. Don't you know me at all? Do you trust me?” “Yeah.” And I could see, yet again, the secret. There was something she wasn't telling me. Some way in which I had failed to be enough for her, had not done what she required of me. I hoped it wasn't that she was lying, that she didn't trust me. “Do you want to go talk to your mom?” “No. But I will…” I wondered why. Did she feel some obligation to this woman? Lina Denson had never done anything for Claire. “Soon?” “Yeah. Get it over with.” She smiled a brave little smile. She was sixteen, but she still seemed a child sometimes. An innocent… something she had never been. That had been stolen from her when she was young yet. “Claire… not to make you angry or afraid, but is there something you aren't telling me?” She blushed a beautiful rose color. “Yeah.” “And you aren't going to?” “No. Sor-” “Don't apologize. I'm always here if you want to talk, but you don't ever have to tell me anything you don't want to.” She ducked her head. “Thanks.” “Anything. Anytime.” “Unconditional promises are stupid, Quil.” “Not where you're concerned. Since I can't deny you anything.” “Thanks.” She smiled again, and my heart fluttered, swelled, burst. What you do to me, Claire. If only you knew. What you do to me. I drove this time, even though it was only about two miles—Claire, unlike your average werewolf, was physically capable of tiring. We talked the whole way, about little things. Trying to avoid the big issue of where we were going and whom the person waiting there would remind her of. Instead, I learned about her friends. Aliena was going out with a freshman—two whole years younger. “Lucky kid,” I remarked, and Claire laughed. “Not to hear Aliena tell it. She thinks he lights the moon.” “Well, true love is always a good thing…” A silence sat for a moment between us. I didn't want her to feel pressured. And I didn't want to remind her of my obviously unwelcome offering of flowers. Instead, I moved the conversation along. “Have you started thinking about colleges?” “Yeah. But I don't think I want to leave, Quil. I mean… not unless you could come too… I'd be scared. And I'd miss you.” “Maybe I could come.” “Doesn't the pack need you?” “Not if you do. They aren't worth a second thought next to you, sweetheart. Besides, Sam's already decided to quit, and the rest of us will follow soon… the leeches are gone. It's really up to the next generation to deal with it as it comes. I'm going to quit too… if you… when you're older… if… maybe.” I was trying not to say if she loved me. That was the real deciding factor. If she couldn't accept me, I would remain young and strong, her protector, her entire life. But I would much rather be her lover. “I've got money. Not a lot, but there's no better use than for you, and I'd be happy to move if you wanted to.
I could work there, get a job. We can do it.” Her eyes widened. “That’s more than I could ask.” “You can ask me for anything you want, ever. And I’ll try to always, always give you more.” She appeared to be fighting with herself. “What’s wrong?” “I can’t… can’t tell you.” “All right. When you’re ready.” Last time being ready had taken six years. I hoped it would be quicker this time. Really, truly, I did. But her choices were always hers. I would never take choice away from her, because then I would be no different than he was. I would never pressure her, even over something seemingly trivial, because I would always have to do my best to earn and keep her trust. | Low | [
0.536312849162011,
36,
31.125
] |
"If the post office can't deliver or return a package, it ends up here." "The dead letter office." "They prefer Mail Recovery Center." "I guess it sounds classier." "Twice a year, we open the boxes, put pricey stuff in the auction bin, cheap items go in the garbage, and the ones in-between sometimes disappear." "Isn't that stealing?" "It's repurposing." "See, I can class it up, too." "Oh, nasty." "Old food." "That's an animal." "I found a dead cat once." "Early retirement is looking good." "(screams)" "That's it-- I retire." "Welcome to the U.S. Postal Service, kid." "(gasps)" "BRENNAN:" "This house would be perfect for us." "BOOTH:" "Is it $30 million?" "Because, you know, I'm not a best-selling author." "No, it's very reasonable." "I'll be the judge of that." "Wow, look at that, it's nice." "There's a pool." "Costa Rica?" "There is a little known tribe there that I could study, and it is a beautiful country to raise a child." " Very little crime." " Crime?" "I thrive off crime-- that's my job." "Well, with your restrictions," "Booth, you make it very difficult to find something in the D.C. area." "Wait, what about Maryland?" "Maryland's a great place." "The Bureau just confiscated this..." "Hey, shrimp." "Hey, Pops." "What are you doing here, huh?" "Why didn't you call?" "What, and waste a dime?" "(laughs)" "Come on, have a seat, huh?" "Oh, you look just beautiful." "You know, I never thought" "I would be a great-grandpa again." "Here, you want to feel her kick?" "So what..." "what happened?" "They kick you out of the retirement home?" "No." " They put up with me." " Mm-hmm." "Could we go somewhere else?" "I-I don't want to talk here." "Why?" "Is everything okay with you?" "It's not me, Seeley, it's your dad." "Oh, right." "What did he do this time?" "He's gone." "He died Monday at the V.A." "Oh, no, I'm..." "I'm so sorry, Booth." "What happened?" "(phone ringing) Liver failure." "I guess that drinking finally caught up with him." "Hm." "Doesn't surprise me." "Booth." "Great." "All right, we're on our way." "Just text me the address." "Okay, let's go." "We got a case." "I'm sure someone else can handle it, Booth." " Why?" "HANK:" "Seeley," "I know how you felt." "Then you shouldn't be surprised how I reacted, huh?" "You got a key to my place." "Make yourself at home, we'll have some grilled cheese later on." "Come on, Bones, let's go." "Take care of him, Temperance." "BOOTH:" "Ready?" "You sure you're okay?" "Yeah, I'm fine." "Another day, another crime." "(sighs)" "BRENNAN:" "The wear on the lower incisors and mandibular angle indicate a male in his early 20s." "This body part was dismembered at a cervical vertebra." "This box shows a slice at the acromial processes of both scapulae." "This is certainly a first for me." "Me, too" " I have never seen this part of the post office before." " Yes." "I thought they sent the dismembered bodies to a completely different place." "That's, uh, whoa, wow, oh, God." "I agree." "But it's packed very nicely." "I wonder if the killer does gift wrapping on the side." "Well, we almost wrapped up here?" "Does this look like a routine case to you, Seeley?" "Are you sure you're okay, Booth?" "Would you stop?" "I'm fine." "Is something wrong?" "Nope." "His dad died." "(groans) Oh, my God." "Really?" "Why would you say that?" "What?" "I was just trying to be supportive by adopting a matter-of-fact demeanor, like you." "If you'd like to..." "I don't." "I just want to know who sliced and diced this guy up and put him in boxes."
"That's all I'd like to know." "♪ Bones 7x04 ♪ The Male in the Mail Original Air Date on December 1, 2011" "♪ Main Title Theme ♪ The Crystal Method == sync, corrected by elderman ==" "♪ ♪" "The body lipids combined with the packing materials and transformed the tissue into an adipocerous gel." "Yeah, I'm pretty sure my middle school served this for dessert." "Unless we can separate them, the packing material is going to contaminate any tests I run on the tissue." "And I need to separate these bones before there's any more chemical damage to them." "I've got just what you need, Clark." "This little puppy is a plycimer laser." "Now, who wants to hear it bark?" "Aren't those used for eye surgery?" " Yeah." "Gotta be an eye in here somewhere, right?" "Now, I've set it so that it'll zip through the goop and separate it from the cardboard." "Hm." "Can't we just cut the box open?" "But I already signed this out." "And it's much cooler." "Trust me." "Okay." "(clears throat)" "Okay, I think." "Here we go." "Okay, that is cool." "And once the bones are removed, we can separate the packing material from the decomposed organs and tissue." "Excellent, Dr. Hodgins." "Yeah." "Okay." " Okay, ready?" " Mm-hmm." "(groans)" "Okay, uh, tray." "Right." "Ladies and gentlemen, our first bone." "And only 205 more to go." "Okay, my turn." "Oh, hey, hey!" "I called dibs." "Hey, but..." "I'm the boss." "Oh, uh-huh." "For you, sir." "What's this?" "I had an evidence response team run the shipping information from the packages." "Who requested this?" "I figured you'd want them and I know how busy you are, so... (chuckles)" "Uh, it turns out that neither the shipping addresses nor the return addresses exist." "Ah, so the labels were created so the body would end up in the dead letter office." "Exactly." "Maybe we can find out where these labels were created." "I'm gonna call..." "I actually sent them over to Ms. Montenegro at the Jeffersonian." "(chuckles)" "I figured that's what you would do." "You know, Shaw," "I am not authorized to give you a raise." "You're the best agent in the department, sir." "I just really wanted the opportunity to work with you, and if I can help during this time of your loss..." "Oh, so the techs were talking at the scene." "They were concerned." "We all are." "There's a chopped-up body at the lab, so if you want to help, let's just focus on the case, right?" "Yeah, okay, of course." "Um, the boxes containing the remains were packed to the specifications of the American Society for Testing and Materials, and they're the gold standard in shipping." "So, professionally packed and shipped but never intended to reach a destination." "That's a great way to get rid of a body." "You found an anomaly, Dr. Edison?" "Yes, as I was cleaning the bones, I noticed a sesamoid..." "An ossified node?" "Where was it?" "Huh?" "Oh, uh, it was on the, um, second metacarpal, on the left hand." "(grunts)" "Have you determined the weapon that... that, uh, dismembered the victim?" "Um..." "(clears throat)" "The lack of kerf marks would suggest that we're looking for a toothless blade, s-something uniform with vertical striations that, uh..." "Dr. Edison, is there a problem?" "You're staring at my breasts." "Oh, oh, no, no, oh, I-I'm sorry, Dr. Brennan, uh, but you were, um..." "Look, there was a whole lot of activity going on there, and I was just thinking, you know, maybe I could help you out." "Not meaning like that, because I would never..." "Tender and swollen breasts are common in the third trimester." 
" Of course." " It's very uncomfortable." "My bra size has increased by two cup sizes." "I hadn't noticed." "Well, it's quite obvious." "You should be more observant, Dr. Edison." "Yes, I'm sorry." "Oh, I see now." "They are much larger." "(clears throat)" "Can I just, uh, focus on these remains?" "Yes." "I need a weapon and Booth needs an I.D., so run a search using the victim's dental X-rays." " Of course." " I need to find some ice packs." "Maybe that will help." "Dental X-rays, weapon, ice packs." "(sighs)" "MONTENEGRO:" "So when you create a shipping label on the post office Web site, you can enter whatever shipping and return address you want." "Which is what the killer did, which is why we can't trace him as the shipper." "Yeah." "What the killer didn't know was that to help track performance, the postal service gathers other information when the label is created." "It's all here, and it's called the QR code." "It tells you where the label was created." "You're fast." "I have to be;" "I work with Agent Booth. (chuckles)" "Well, I'm sure he'll give you a gold star for this, then." "Body was shipped from the Ship 'n' Print in Hyattsville, Maryland." "I can't believe I'm getting to work with you people." "Yeah, we're pretty awesome, huh?" "Yeah." "I have a six-month-old at home, so I'm doing all of this on no sleep." "BOOTH:" "You know, you, you don't have to come along, Bones." "I could have brought Agent Shaw." "There could be evidence at the scene." "Ah, right." "You know, I-I really am okay about my father, so, you know, you don't have to worry about me." "All right?" "The Buddhists believe that anger only brings more anger." "To be at peace, one must..." "I appreciate your concern, Bones," "I really do, but I am at peace with this." "Okay?" "You don't seem peaceful, Booth." "You really want to help?" "I got a great idea." "What do you say we talk about something else?" "Let's talk about you." "My breasts are very sore." "Would you mind if I spent the evening naked?" "Sure, yeah, that's fine with me." "No complaints here, that's great." "See, now, isn't this a better conversation?" "(phone rings) Oh." "Brennan." "Yes." "Thank you, Dr. Edison." "He matched the dentals." "The victim was Oliver Lawrence." "Lawrence?" "Lawrence?" "Look in the file." "Wasn't there a Lawrence that worked at the Ship 'n' Print?" "Yes." "Oliver Lawrence." "He worked there for five years." "He was reported missing last May." "(scoffs)" "(grunts)" "Striations don't match." "Hey, how much of this artificial bone do we have?" "Why?" "We're out of weapons." "Modern ones." "So it is time to get medieval." "Huh?" "Borrowed these from our friends over at the antiquities department." " Oh, God." " Yeah." "This one-- this one is Viking." "Comes from the funeral boat of Gunnar the Angry." "Okay, this is ridiculous, Dr. Hodgins." "Do you think our killer used a thousand-year-old battle axe?" "Hey, good scientist never assumes." "Would you like to do the honors?" "All right." "Nope!" "What else do you have?" "A scimitar." "Oh, yeah, that'll do." "Ship 'n' Print." "Thank you." "TONY:" "Good morning." "Welcome to the Ship 'n' Print." "How may I help you with your copying and shipping needs?" "Well, it's the afternoon." "Afternoon, okay." "Uh, FBI." "Like to talk to your manager." "Uh, yeah." "He's-He's-He's in the back." "That's an interesting Bhavacakra." " What?" " The pendant on his neck." "Really?" "Now?" "Well, it's a Buddhist wheel of life." "This symbol represents the poison of anger." 
"Like I said before, Booth, anger is..." "Okay, enough with the baklava, okay?" "I just want to talk to the manager." "It's carved from a thigh bone of a thousand-year-old holy man." "Based on the rough edges" "(cell phone ringing) and the lack of discoloration, that bone is not more than 20 years old." "Brennan." "We figured out what dismembered the body." "A guillotine." "The killer used a guillotine?" "Whoa, wait a second." "Guillotine?" "Where do you even find one of those?" "HODGINS:" "Room 114" "French Revolution exhibit." "We tried everything." "A guillotine is the only weapon that has a smooth blade and the correct force profile." "Okay, guillotine it is." "Thanks." "Great." "Okay, so did you ever ship and pack a guillotine?" "No." "That would take a lot of bubble wrap." "I want to talk to the manager." "I don't want to deal with this guy." "What's that?" "CONNOR:" "Oliver was one of the best employees I ever had." "He had a magic touch with the velobinder." "You really think he's really dead?" "Yes, and dismembered." " No blood." " Oh, well, you know what," "Lots of solvent around here." "The killer could have cleaned up." "You think he was sliced up on my paper cutter?" "The imperfections on the blade appear to be consistent with striations on the victim's bones." "I'm feeling a little sick." "Oh, join the club." "So any other employees have problems with Oliver?" "No, everybody loved him." "The crew I had back when Oliver was here, they were tight." "BOOTH:" "What do you mean, "back when"?" "Did you have a recent turnover?" "Everybody's new, except for Tony out front." "What happened?" "That happened." "BOOTH:" "Oh, jackpot winners, huh?" "You won the lottery." "The four of us bought a ticket together." "15 million each?" "Yeah, then the other three called in rich and quit." "Why not you?" "Where would I go?" "They thought I was nuts." "But I love this place." "Gives me a purpose." "The copy shop?" "We ship, too." "What about Oliver and Tony?" "Why didn't you include them in the lottery pool?" "We invited Oliver, but he didn't believe in gambling." "And Tony was at his herbalist." "SHAW:" "They're all lying." "Oliver Lawrence was part of the lottery pool." "How do you know they were lying?" "Uh, I examined the numbers that they played." "I found Oliver's birthday, his childhood street address and his high school basketball number." "Okay, but the odds of finding four disparate personality types willing to participate in a murder this brutal are very remote." "The likeliest scenario is one person killed Oliver for the ticket, the others found out and traded silence for shares in the win." "That works for me." "Whoa, wait-wait- wait a second." "We can't confront these people-- you understand?" "All they're gonna do is lawyer up." "And with all the money they just made, it's-- the investigation is gonna stall out for years." "So we don't confront them." "We ask for their help." "Cam was able to do a preliminary tox screen on the victim." "Nothing extraordinary, except he drank a lot of coffee." "Ship 'n' Print is open 24 hours a day." "Anybody's gonna need a little caffeine to get them through the night shift." "We still don't know what killed the victim?" "Dr. Edison found some defensive wounds, and he's determining if one was cause of death." "What are you doing?" "Oh." "Shaw had some of the FBI techs bring over this copy machine." "I'm going to see what shipping info is stored in the copier's memory." "Oh." "You need the whole machine to do that?" 
"Shaw didn't want Booth to think that she overlooked anything." "(clears throat)" "How's he doing, by the way?" "I don't know." "He won't talk about it." "Yeah, well... can't imagine losing my dad." "I should be able to help Booth, shouldn't I?" "Yeah, but what he's going through, it's not your fault." "But you would be able to help Hodgins." "Booth could help me." "What would you do?" "Booth loves you, Brennan, not me." "It doesn't matter what anybody else would do." "You have to figure out what you can give him that nobody else can." "(computer chiming)" "Oh, great." "387 packages were shipped on the day he went missing." "Good luck." "SWEETS:" "Sheila Burnside has certainly grown since the lottery photo was taken." "Oh, with all that money, I'm telling you, she can afford to grow." "Says that she met Hugh while working at Ship 'n' Print." "They've been married for three years." "BOOTH:" "Oh, there he is." "Connor Trammel-- manager of Ship 'n' Print, hm?" "He doesn't talk to the others." "That's interesting." "There is Ralph Berti." "Started working at Ship 'n' Print after a bitter divorce wiped him out." "So you think showing them old mug shots is gonna help?" "It's the same principle as a Rorschach test." "Allows them to open up and drop their defenses." "Right." "I'm Special Agent Booth." "This here is my associate, Dr. Lance Sweets." "I can help with grief counseling, if necessary." "Uh, yeah, Connor told us what happened to Oliver, and it's just awful." "So, we just want to help." "Right?" " Yeah, of course." " Yeah, yeah." "Well, perhaps you can help us find his killer." "Yeah." "We have some customers here from Ship 'n' Print who have criminal records for assault." "Maybe you recognize someone who had an altercation with Oliver." "For sure." "Definitely." "We'd get a lot of creeps in there." "I remember him." "Don't you?" "Yeah." "Uh, you know what?" "I do." "He was a real trouble maker." "He, uh, he stole a box of yellow highlighters once." "(laughter)" "I don't remember him." "You're kidding, Connor." "I do." "Me, too." "Definitely." "He screamed at me once." "Right, Hugh?" "And then, uh, you reported it to Conner." "No, you didn't." "Yeah, I did." "Maybe you have a reason to protect him, Connor." "You all have it out for me because I stayed." "You have to admit, it's pretty whacked." "Yeah, I mean, there was a lot of money," "(chuckles):" "and you just stayed." "SWEETS:" "Excuse us for a moment." "We just need to, uh, run that suspect back through our system." "Just keep looking." "Yeah, Sheila knew that fingering one of those mug shots would've taken the heat right off of her." "Yeah, but her husband is clearly the submissive one." "He's the one who'll crack." "She could've gotten him to kill Oliver." "(cell phone ringing) Did you see the way" "I mean, he's a piece of work." "Hold on." "Booth." "Every time an employee uses the printer at Ship 'n' Print, they have to punch in an I.D. code first." "Wait, well, who printed the labels?" "Ralph Berti." "Ralph Berti." "Thank you." "I didn't kill Oliver." "I liked him." "Kind of." "SWEETS:" "Kind of?" "He was a goody two-shoes, you know?" "A reformed drinker, did everything by the book." "Not really a fun guy." "Oh, so you figured he wouldn't enjoy all those millions." "No, no, not like that." "We know Oliver picked some of the winning numbers, Ralph." "He should have gotten the money, too, right?" "Okay." "The five of us bought the ticket together." "We were saving his share until he came back." "We didn't kill him for it." 
"Amazing how these four are always changing their story all the time." "RALPH:" "That is the truth, I swear." "BOOTH:" "Right, I would believe you, but we have these." "These are logs of the employee codes used on the printer." "All right?" "The labels for the boxes that contained" "Oliver's body parts, all right, they were all posted using your code." "You're kidding." "No, I'm not kidding." " Does it look like I'm kidding?" " No." "Everybody used my code because it was one-two-three-four." "It doesn't get easier than that, right?" "If you check the printer memory," "I'll bet 90% of the time, the code that was used was mine." "I still haven't been able to determine cause of death." "There are microfractures on the third and fourth left ribs." "And on his left radius." "Defensive wounds consistent with a fistfight." "So there was a struggle before he was killed." "I did find kerf marks on his right acromion." "Ah, very small kerf marks." "And they're on the right greater tubercle." "And on his right olecranon." "The victim struggles with his assailant and then is struck multiple times on the right side, with something that approximates a tiny saw." "Good work, Dr. Edison." "But I have no answers." "You will." "I wouldn't hire a fool." "I suppose that's true." "Thank you." "Oh!" "And I'm glad to see that your breasts seem to be feeling better." "I beg your pardon?" "Your..." "Before you said I should observe..." "Ice packs." "I'll just study the kerf marks." "(sighs)" "Don't burn the garlic." "I don't want Temperance to come home to burnt garlic." "I never burn the garlic." "I do it just the way you taught me." "I don't think so." "Come on, look-- it's simmering, huh?" "Sorry..." "I guess I miss bossing you around." "You sure you got to take off tomorrow?" "Yes." "I'll deliver these to the VA, and then I'm back." "So what are all those papers for?" "Oh, probate forms, insurance claims, pension documents," "Social Security forms." " You got to sign these." " Me?" "I have to sign 'em?" "Why?" "I haven't seen him in 20 years." " You're next of kin." " Well, so are you." "So is Jared." "No, your father made you sole executor and beneficiary." "(laughs) Beneficiary?" "Having him for a father wasn't exactly a benefit, Pops." "Seeley..." "Look, you were my father." "All right?" "He was never there for me." "You raised me, not him." "He was never there, understand?" "You don't have to defend him to me." "I wasn't." "I was just trying to remind you that he was my son." "Good or bad, he was my son." "And I got to tell you, I'm a little disappointed that you don't seem to see the hurt I'm feeling." "I'm sorry, Pops." "You don't think I know what it was like for you?" "You don think I don't feel responsible?" "I raised him." "Don't you feel responsible for your boy?" "Now, Seeley, we're family." "We got to get through this together." "You're right." "Okay, anything for you, Pops." "Anything." "Look at the sauce." "I don't want to burn the garlic." "HODGINS:" "The boiling point for the polystyrene is higher than the lipids by 15 degrees Celsius." "So you can separate the liquids from the tissue sticking to the packing peanuts." "And skim off the solids and particulates as it simmers." "Excuse me." "Ooh!" "This looks like a piece of sternocleidomastoid muscle." " Well, be my guest." "All those chunks are yours." "There's something in here." "Looks like a piece of tape." "Hmm." "Well, some of the packing material probably stuck to it while it was decomposing." "No." "It's embedded in the muscle." 
"It was here when he died." "But how did it get there?" "MONTENEGRO:" "Well, Ralph's story checks out." "96% of the jobs on the printer were done using Ralph's code." "Now, I found something else in the printer's memory." "This was taken two days before" "Oliver was murdered." "Why would someone photocopy their buttocks?" "Well, I guess they were doing some kinky calisthenics on the job." "But look at this." "Look at the hand next to the butt." "Didn't Clark say there was a bone growth on the victim's left hand?" "Yes." "A sesamoid." "Which means that this tush was making whoopee with the victim." "So now we just have to figure out whose tush this is." "The lighter area indicates where the ischia are pressing onto the copier." "The darker areas are flesh." "Can you measure the ischial tuberosities in her pelvic girdle?" "Yeah." "24.3 centimeters." "Those are some narrow hips." "Only about five percent of women have hips that narrow." "There was only one woman among the lottery winners." "Enlarge the woman, focusing on her hips." "Narrow hips." "So, Sheila lures him to the copier, kills him, and then, takes his share of the money so she can get a new pair of boobs." "Ah!" "Here they are." "Thank you so much for coming in on such short notice." "Please have a seat." "Let me get this for you." "You're welcome." "(high-pitched squeaking)" "New chairs." "LAWYER:" "I don't know what you're after, so" "I'm going to lay out the rules." "We will listen to anything you have to say, but my client will not be answering any of your questions." "Okay." "You know what?" "You're free to go." "Thanks for coming." " Uh, that's it?" " Mm-hmm." "You brought me down here for this?" "Do you have any idea what this guy charges?" "Well, I'm sure you'd be able to pay for it." "What the hell is she doing?" "Oh, this seat cushion is made from compression foam." "It's engineered to capture orthotic modeling." "I'm using it to measure your client's ischial tuberosities." " My what?" " Your ass-bones." "You see, they're like fingerprints in your pants." "They're a match, Booth." " This is her." " Whoa." "Well, look at that." "Those are Oliver's hands on your butt two days before he was murdered." "BRENNAN:" "So, it seems like we might have something to talk about after all." " Okay, that doesn't mean any..." "LAWYER:" "Do not talk!" "Not one word." "(cushion squeaks)" "We did it a few times." "But it didn't mean anything." "Yeah, but I'm sure it would have meant something to your husband." "And you didn't want him to find out, so, you killed him." " I'm a slut, not a killer." "Shut up." "Hugh knows all about it." "I told him everything after we broke things off." "Oh, and then, two days later," "Oliver was killed." "Where was your husband the night Oliver died?" "Hugh wouldn't kill anyone." "I hate my job." "Any progress?" "I've combed the weapons database for knives, saws, cooking implements, but I just can't find a match for these kerf marks." "Not your day for weapons identification, huh?" "Why, thank you." "Hadn't crossed my mind." "So I assume you found something wonderfully relevant." "Not to rub it in, but I did, actually." "The tape had more than just muscle tissue on it." "It also contained a piece of an artery." " We found a cause of death." " Yup." " Whatever the weapon was..." " Mm-hmm." "...hit the right sternocleidomastoid muscle and severed the right subclavian artery." "Leaving behind that tape." "Maybe the handle on the weapon was loose, and they tried to tape it up?" "Maybe." 
"You mind if I take a look at that?" "Yeah." "Go nuts." "I think I found our weapon." "BOOTH:" "Hey." "HANK:" "Is this a bad time, kiddo?" "BOOTH:" "No, it's not a bad time." "I took everything to the V.A." "(laughs) I would've done that for you, you know." "I don't think so." "(chuckles)" "Besides, they had some stuff for me." "You know, Pops, you don't have to go so soon." "You can stay as long as you want." "Well, you see, tonight's movie night, and I have this little lady friend that likes it when the lights go out." "You understand?" "I think so, Pops, yeah." "But before I go, I..." "I want to read you this letter." "It was among your father's things." "No." "Too late for that." "Now, wait a minute." "The letter's to me, not to you." "Just shut up and listen." "There's a lot here about growing up." "Blah, blah, blah, blah, blah." "Oh, here we are." ""I didn't write a letter to Seeley" ""because I knew he'd rip up anything I sent him," ""and he should." ""If you can find a way," ""let him know I loved him." ""He and Jared deserved a better father than me." "A father..."" "(sighs)" ""A father as good as I had."" "(sighs)" "(voice breaking):" ""Thank you for raising him to be the man I could never be."" "(sighs)" "So, what do you want me to do with that, Pop?" "Nothing." "I'm just glad you listened." "Do what you want with what you heard." "It's up to you." "He left something else." "I don't want that." "Open it or don't open it." "They told me to give it to you." "Oh, son, listen..." "I know you wish some things could have been resolved." "Closure, they call it." "But life is just a lot of loose ends." "So smile." "Love that woman you have, and love that new little girl that you're going to have." "Right, Pops." " Thanks." " All right." "Oh, careful now." "I'm brittle." "All right." "Thanks, Pops." "EDISON:" "The weapon that left those marks on the bone was a tape gun." " A tape gun?" "!" " Mm-hmm." "Yeah, but we're talking about a heavy-duty, industrial-sized tape gun." "Now, these are the three models that the Ship 'n' Print chain uses." "Can you go close on the teeth and match them to the marks?" "Yeah." "MONTENEGRO:" "We have our murder weapon." "The kerf marks are on the victim's right acromion, right greater tubercle, right olecranon." "Which makes our assailant a lefty." "Now, he blocked the first blow." "And the second one dazed him." "The third blow severed his subclavian artery." "And left the tape in the sternocleidomastoid muscle." "SAROYAN:" "So who's our lefty?" "Thank you." "So kind of you to come in and visit us again." "You have a seat, please." " Don't sit." " Don't sit." "Right." "So you are aware that your wife was having an affair, correct?" "No comment." "No comment." "Okay." "You don't want to know what was happening?" "Uh-uh." "No." "Right." "Okay, so your lawyer represents your wife, Sheila, too." "So, is your lawyer working for you or for her?" "For both of them." "That a conflict of interest." "Is it?" "BOOTH:" "Right." "Did he show you this photocopy?" "We're leaving." "You seem real mad about that." "Oh, one last thing there." "Oh!" "Hey, nice catch there, lefty." "It's gotta be Hugh Burnside, right?" "He had means, motive, opportunity." "It doesn't matter." "It's all circumstantial, all right?" "Lawyer's just gonna get the judge to cut him loose." "So just have him sign the paperwork and wave bye-bye." "Let me go through my notes one more time." "There's got to be something..." "Shaw, it's just not the way it works." "We don't have enough evidence yet." 
"I really, I-I wanted you to know that you could count on me." "Listen to me, Shaw." "Get over it." "This case is not about you." " What?" " What we do is teamwork." "You thinking that the only way I'm gonna respect you is if you hand me this final piece of the puzzle is not teamwork, it's ego." "All right?" "Okay." "Um, I'm gonna go back and get Burnside to sign these documents." "Okay." "Any luck with the packing material?" "Yeah, there's something-- a sliver or something-- jammed into the side of this packing peanut." "Almost missed it." "It's the same color as the polystyrene." "Well, let me know once you run it through the GC Mass Spec." "Uh, hold on a second." "I might need Clark for this one." "It's bone." "I've been over this skeleton a dozen times, and this piece of bone doesn't fit anywhere." "Let me take a look." "Did you know that cabbage leaves are recommended to soothe sore breasts?" "Apparently, the phytochemicals in the plant..." "I spent the evening naked, so my breasts are actually feeling much better today, thank you." "Ah." "Naked." "Well, good then." "Dr. Edison, did you compare the number and size of bony spicules in this splinter to a histological section from our victim's skeleton?" "No, I just assumed it was from our victim." "Which was clearly a mistake." "It came from a Buddhist necklace made from human bone." "Oh." "You have any idea why all the phony return addresses got printed when you worked the graveyard shift?" "TONY:" "They did?" "(chuckles)" "That's, that's weird." "SHAW:" "Yeah." "What's more, we visited a few people who received those packages." "And they all cooperated because they didn't want to go to jail." "SHAW:" "Their packages contained psilocybin." "BOOTH:" "You know, magic mushrooms, right?" "You're a dealer-- you were selling drugs and you were shipping them at night." "That doesn't mean that I killed anyone." "Right." "Agent Shaw." "Final piece of the puzzle." "I'll let you do the honors." "This is a chip of bone from your pendant." "It was in the box that contained Oliver's head." "The box also contained psilocybin spores." "BOOTH:" "Magic mushrooms." "Oliver got all "drugs are bad," okay, so he took the package." "He said he was going to call the police." "I got people depending on me, so I took the package back." "We fought, yes." "I didn't want to." "I'm, I'm all about nonviolence." "You beat him with a tape gun." "You sliced his neck and you killed him." "I have been nonviolent for over 25 years." "I-I lost it for five minutes." "I'd say overall that's, that's really not that bad in the scheme of things, right?" "Right?" "♪ Something about the cadence in which she spoke ♪" "♪ Just let you know ♪" "♪ That one way or other, you'd never be the same ♪" "♪ Maybe it was the way that she used your full name ♪" "♪ To ask you if it might yet be the right time... ♪" "BOOTH:" "So Tony is going to be locked up for years, huh?" "Plenty of time for him to contemplate the Wheel of Life and his baklava." "No" " Bhavacakra." "Do you miss your father, Booth?" "Why?" "He's been gone for 20 years." "No." "Are you going to open the box?" "You know I really don't want to talk about this." "But I do, and I might say the wrong thing, but for the time being, we are sharing our lives, and that means you can't shut me out, Booth." "What's the point?" "Aw!" "Seriously?" "That's..." "Bones, I..." 
"Quantum physicists have postulated that the way we experience time is an illusion, that it doesn't happen in a linear way, that past and present-- in reality, there's no difference." "Bones, what are you trying to get at?" "You do have some good memories of your father." "You've told me that." "There was the time when the river froze and he woke you up at midnight to go skating, and the time you were sweeping up at his barbershop when he put on Louis Prima and pretended the electric razor was a microphone." "(chuckles)" "Well..." "And the World Series." "Your one perfect day together." "Those good times with your dad are happening right now." "They'll always be happening." "You deserve to keep those alive." "(sighs heavily)" "(groans)" "(sighs)" "I did that, right?" "(chuckles)" "(sighs)" "(chuckles)" "Those are the..." "(distant crowd chatter, sports broadcast fading in)" "ANNOUNCER:" "...a base hit!" "They're gonna have everybody on-- the bases are loaded!" "(announcer continues indistinctly) ...Washington the on-deck hitter." "It is two strikes on Willie Wilson." "Bases loaded." "Two outs." "What pressure..." "Oh and one." "The crowd will tell you what happens." "Well, the Philadelphia Phillies are the world champions!" "== sync, corrected by elderman ==" | Mid | [
0.5457809694793531,
38,
31.625
] |
All relevant data are within the paper and its Supporting Information files. Introduction {#sec001} ============ Focal brain neural activity increases local perfusion through neurovascular coupling \[[@pone.0117706.ref001]\]. Vascular-based brain imaging techniques, such as positron emission tomography (PET) \[[@pone.0117706.ref002]\] and functional magnetic resonance imaging (fMRI), first with susceptibility-contrast MRI \[[@pone.0117706.ref003]\] and currently using the blood oxygenation level-dependent (BOLD) effect \[[@pone.0117706.ref004]--[@pone.0117706.ref006]\], have provided images of the "brain in action", demonstrating patterns of brain activity in relation to behavior or somatosensory input. BOLD fMRI is based on the variation of the blood water T~2~/T~2~\* signal, which depends on the paramagnetic deoxyhemoglobin content \[[@pone.0117706.ref007], [@pone.0117706.ref008]\]. This method is robust but faces challenges due to the non-trivial signal dependence on several parameters (cerebral blood flow, cerebral blood volume, and blood oxygenation) \[[@pone.0117706.ref009]--[@pone.0117706.ref014]\], while the spatial resolution is limited due to veins draining the sites of activation \[[@pone.0117706.ref015]\]. An MRI method capable of measuring variation in microvascular blood flow during neuronal activation, independently of blood oxygenation, is therefore of interest. Capillary network reactivity to somatosensory stimulation has been investigated in rats \[[@pone.0117706.ref016], [@pone.0117706.ref017]\], and individual capillary increase in red blood cell velocity and flow has been demonstrated with two-photon microscopy during the activation of the olfactory bulb \[[@pone.0117706.ref018]\] and neocortex \[[@pone.0117706.ref019]\]. In humans, microvascular perfusion measurement is possible with intravoxel incoherent motion (IVIM) magnetic resonance imaging (MRI) \[[@pone.0117706.ref020], [@pone.0117706.ref021]\]. This method is based on the natural dependence of the nuclear magnetic resonance (NMR)/MRI signal on nuclei motions \[[@pone.0117706.ref022], [@pone.0117706.ref023]\]; this dependence can be accentuated by the use of pulsed gradients \[[@pone.0117706.ref024]\]. The vasculature is assumed to be sufficiently dense and random so that blood movements present statistical, diffusive properties, and a pseudo-diffusion coefficient D\* can be introduced. This coefficient is calculated using a bi-compartmental (vascular and non-vascular) model \[[@pone.0117706.ref020]\], with the second compartment undergoing thermal diffusion D. Further perfusion parameters can be derived, namely, the perfusion fraction f, and the flow-related parameter fD\*, which consists of the product of f and D\* \[[@pone.0117706.ref021]\]. The IVIM method of measuring human brain perfusion has been recently validated, showing dependence on hypercapnia-induced vasodilatation \[[@pone.0117706.ref025]\]. Diffusion gradients have been used for various purposes in fMRI \[[@pone.0117706.ref026]\], but usually not for specifically deriving microvascular perfusion parameters using the IVIM model. An early use was to modulate the BOLD signal \[[@pone.0117706.ref027]\] to try to localize it and to increase its spatial resolution by suppressing the signal from flowing blood \[[@pone.0117706.ref028]--[@pone.0117706.ref031]\]. Diffusion gradients have also been used to directly measure changes in the apparent diffusion coefficient (ADC) during fMRI, using a single-compartment model.
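For reference, the bi-compartmental IVIM model described above is commonly written (this is the standard formulation from the IVIM literature, restated here for convenience rather than quoted from this paper's methods) as $$S(b) = S_0\left[f\,e^{-bD^*} + (1-f)\,e^{-bD}\right],$$ where $S_0$ is the signal without diffusion weighting, $f$ the perfusion fraction, $D^*$ the pseudo-diffusion coefficient of the vascular compartment, and $D$ the tissue diffusion coefficient. The single-compartment approach discussed next instead fits $S(b) = S_0\,e^{-b\,\mathrm{ADC}}$.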
At low b-values, functional ADC measurements have shown potential for both increased spatial \[[@pone.0117706.ref032], [@pone.0117706.ref033]\] and temporal resolution \[[@pone.0117706.ref034]\]. At high b-values, an effect could also be measured, in the form of a temporary decrease in ADC that has been measured in the human visual cortex during stimulation \[[@pone.0117706.ref035]\], and was found to be significantly faster than the BOLD response \[[@pone.0117706.ref036], [@pone.0117706.ref037]\]. This was interpreted as a direct measure of cell swelling during neural firing, which could represent a more direct and accurate measurement of neuronal activity than hemodynamic-based contrast. The exact nature of this signal remains controversial, though, as it has been suspected to arise from vascular and susceptibility effects \[[@pone.0117706.ref038]\] as well as from partial volume effects with cerebrospinal fluid (CSF) \[[@pone.0117706.ref039]\]. Indeed, a decrease in CSF volume during brain activation, as well as during hypercapnia, has been observed with various methods \[[@pone.0117706.ref040]--[@pone.0117706.ref043]\], and has been suggested as a possible confounding factor in ADC-fMRI \[[@pone.0117706.ref039]\]. This effect has also been observed with IVIM during hypercapnia \[[@pone.0117706.ref025]\]. In this paper, we investigated the feasibility of measuring variation of the local microvascular brain perfusion parameters f, D\*, and fD\* in human volunteers during visual stimulation, as derived from the bi-compartmental IVIM model, using a diffusion-weighted inversion-recovery sequence to suppress the possibly confounding CSF movements. Material and Methods {#sec002} ==================== Subjects {#sec003} -------- This study was approved by the local ethics committee at the University Hospital in Lausanne (Commission cantonale (VD) d'éthique de la recherche sur l'être humain). Informed written consent was obtained from all participants. Imaging was performed on 8 healthy subjects without known history of disease (4 men and 4 women, mean age 25), all over 18 years of age, from April 2012 to July 2012. No subject had to be excluded from this study. Visual Stimulation {#sec004} ------------------ An LCD projector equipped with a photographic zoom lens and with a refresh rate of 75 Hz displayed the stimuli on a translucent screen positioned in the back part of the bore. Subjects viewed stimuli through a custom-made inclined mirror positioned above their eyes, and had a field of view of ± 20° horizontally and ± 11° vertically. Subjects were asked to look at a fixation point in the middle of the screen. The total eye-to-screen distance was 1 m. Head motion was kept to a minimum using a vacuum bag. Subjects were presented with the following sequence: visual stimulus blocks alternating with black screen blocks, always starting with the stimulus block. The blocks were 9 min 32 sec each. The visual stimulus block consisted of a red and black checkerboard (12 squares horizontally and 9 vertically, each measuring 2.5 cm^2^ on the screen), blinking with a frequency of 8 Hz (where "blinking" means that each individual square alternated between being red and being black). We acquired an IVIM sequence for each block presented to the subject. Five (in some cases three) IVIM sequences were acquired for each subject. The visual stimulus or the black screen was started 20 s before the acquisition to allow equilibration; each IVIM acquisition was therefore 9 min 12 sec long. 
Image Acquisition {#sec005} ------------------- Data were acquired on a 3 Tesla MR scanner (Trio, Siemens, Erlangen, Germany) using a 32-multichannel receiver head coil. For the purpose of localization, the acquisition was started by a T1-weighted high-resolution (1 mm isotropic) MPRAGE sequence (TR = 2.3 s, TE = 3 ms, TI = 900 ms, flip angle = 9°, field of view = 256 x 240 mm^2^, matrix size = 256 x 240, slice thickness = 1.2 mm, Bandwidth 238 Hz/pixel), followed by a standard functional visual experiment, which consisted of a BOLD sensitive gradient echo EPI sequence (TR = 4 s, TE = 30 ms, flip angle = 90°, field of view = 192 x 192 mm^2^, matrix size = 64 x 64, slice thickness = 3 mm, Bandwidth 2232 Hz/pixel). In total, 60 images were acquired, alternating 10 acquisitions during stimulation and 10 during baseline. Single-participant analysis was performed using the General Linear Model according to our specific block design experiment. The resulting computed t-maps were then used to identify the visual cortex, and a single IVIM slice was placed in a strict transverse plane on the calcarine fissure. IVIM Imaging Parameters {#sec006} ----------------------- Data were acquired using a Stejskal-Tanner diffusion-weighted adiabatic inversion-recovery (TI = 2660 ms) spin echo sequence \[[@pone.0117706.ref024]\] and echo planar read-out \[[@pone.0117706.ref044]\]. A long repetition time of 12 s was applied to ensure complete recovery of each tissue. A single axial brain slice of 7 mm thickness was acquired with an in-plane resolution of 1.2 x 1.2 mm^2^, using a field of view of 256 x 256 mm^2^ and a matrix of 210 x 210. TE was 92 ms. Parallel imaging with an acceleration factor of 2, and 75% partial Fourier encoding in phase direction was applied. Receiver bandwidth was 1134 Hz/pixel. Fat was suppressed with a frequency selective saturation pulse. Images were acquired at multiple b-values (0, 10, 20, 40, 80, 110, 140, 170, 200, 300, 400, 500, 600, 700, 800, 900 s/mm^2^), in 3 orthogonal directions, from which the traces were calculated, which were then used for model fitting. Images were acquired only once for each b-value and direction, and only once for b = 0. Eddy current induced spatial distortions were corrected using the vendor's software. Region of Interest (ROI) Definition and Segmentation {#sec007} ---------------------------------------------------- For quantitative analysis, a visual brain region and a non-visual brain region were obtained by thresholding the t-maps of the BOLD signal. Those two ROIs were further segmented into gray (GM) and white matter (WM) with the help of probability maps constructed from an MPRAGE sequence using the segment function of the SPM framework (<http://www.fil.ion.ucl.ac.uk/spm>) for Matlab (Mathworks, Natick, MA, USA). Those maps were registered to the IVIM space using 3D Slicer (<http://www.slicer.org>). The segmentation maps were finally corrected manually on a voxel-by-voxel basis, using a homemade Matlab program. Regions with significant susceptibility artifacts (petrous bone, frontal sinuses) were excluded. The 4 obtained ROIs (GM and WM in the visual and non-visual brain, respectively) are presented in [S1 Fig](#pone.0117706.s001){ref-type="supplementary-material"}, supplementary material. 
Image Processing and Analysis {#sec008} ----------------------------- We used the double exponential model proposed by Le Bihan et al. \[[@pone.0117706.ref020]\] $$\frac{S\left( b \right)}{S_{0}} = f \cdot e^{- bD^{*}} + \left( 1 - f \right) \cdot e^{- bD}$$ where *S*(*b*) and *S* ~0~ represent the signal obtained at a given b-value and with no gradient applied, respectively. Data were fitted in two steps: first, the curve was fitted for b \> 200 s/mm^2^ for the single parameter D, followed by a fit for f and D\* over all values of b, while keeping D constant, using the Levenberg-Marquardt algorithm \[[@pone.0117706.ref045]\] implemented with standard Matlab functions. The curve fitting in the parametric maps was done on a voxel-by-voxel basis, while for quantitative analysis, it was done after averaging the signal of the ROI for each b-value. The latter was done to avoid choosing an arbitrary cut-off for misfitted points, which might influence the results. Parameter Fitting Simulation {#sec009} ---------------------------- The quality of the fitting procedure was evaluated with two simulations, the first assessing the quality of the fit as a function of signal-to-noise ratio (SNR), and the second, the quality of the fit as a function of f and D\* under the measured intravoxel SNR of the current experiment. First simulation: fitting quality as a function of SNR {#sec010} ------------------------------------------------------ In the first simulation, an SNR-dependent Gaussian random noise term was added at each b-value to the ideal signal corresponding to f = 4%, D = 0.7·10^-3^ mm^2^·s^-1^, and D\* = 17·10^-3^ mm^2^·s^-1^, after which the fitting was performed. Those numerical values were obtained from the experimental values of the gray matter at baseline ([Table 1](#pone.0117706.t001){ref-type="table"}; the values were rounded for simplicity). This was repeated 10,000 times at each SNR ranging from 10 to 400.

10.1371/journal.pone.0117706.t001

###### Quantitative measurement of IVIM parameters in the visual and the non-visual brain.

{#pone.0117706.t001g}

**Visual Brain**

|       | GM Baseline | GM Stimulation | GM Variation | GM p-value | WM Baseline | WM Stimulation | WM Variation | WM p-value | GM-WM p (Baseline) | GM-WM p (Stimulation) |
|-------|-------------|----------------|--------------|------------|-------------|----------------|--------------|------------|--------------------|-----------------------|
| fD\*  | 0.59 ± 0.32 | 1.61 ± 0.96 | +170% | 0.01 | 0.37 ± 0.11 | 0.63 ± 0.31 | +70% | 0.01 | 0.02 | 0.005 |
| D\*   | 17.27 ± 10.17 | 30.50 ± 18.35 | +77% | 0.048 | 9.58 ± 4.82 | 13.40 ± 6.18 | +40% | 0.0003 | 0.02 | 0.01 |
| f     | 3.55 ± 0.59 | 5.34 ± 0.81 | +50% | 0.0001 | 4.17 ± 0.98 | 4.69 ± 0.45 | +12% | 0.08 | 0.057 | 0.005 |
| D     | 0.713 ± 0.017 | 0.711 ± 0.005 | +0% | 0.36 | 0.723 ± 0.019 | 0.719 ± 0.017 | -1% | 0.30 | 0.059 | 0.09 |

**Non-Visual Brain**

|       | GM Baseline | GM Stimulation | GM Variation | GM p-value | WM Baseline | WM Stimulation | WM Variation | WM p-value | GM-WM p (Baseline) | GM-WM p (Stimulation) |
|-------|-------------|----------------|--------------|------------|-------------|----------------|--------------|------------|--------------------|-----------------------|
| fD\*  | 0.70 ± 0.40 | 0.58 ± 0.15 | -17% | 0.22 | 0.57 ± 0.57 | 0.44 ± 0.28 | -23% | 0.13 | 0.12 | 0.04 |
| D\*   | 16.97 ± 11.29 | 17.91 ± 9.45 | +6% | 0.42 | 15.13 ± 20.81 | 9.65 ± 4.54 | -34% | 0.19 | 0.32 | 0.03 |
| f     | 4.67 ± 3.01 | 3.66 ± 1.27 | -21% | 0.10 | 4.54 ± 1.57 | 4.39 ± 0.82 | -3% | 0.39 | 0.41 | 0.04 |
| D     | 0.724 ± 0.049 | 0.739 ± 0.016 | +2% | 0.20 | 0.714 ± 0.049 | 0.713 ± 0.020 | +0% | 0.46 | 0.11 | 0.01 |

The IVIM perfusion parameters D\* \[10^-3^ mm^2^·s^-1^\], fD\* \[10^-3^ mm^2^·s^-1^\] and f \[%\], as well as the diffusion coefficient D \[10^-3^ mm^2^·s^-1^\], obtained in the white and gray matter of a region of interest in the visual cortex and in the rest of a full axial slice excluding the occipital lobe. 
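Restated as working code, the two-step fit described under Image Processing and Analysis above looks as follows. This is a minimal Python/SciPy sketch (the original analysis used standard Matlab functions); the b-values and the b \> 200 s/mm^2^ cut-off are taken from the text, while the starting guesses passed to the Levenberg-Marquardt routine are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit  # method='lm' is Levenberg-Marquardt

# b-values of the acquisition protocol [s/mm^2]
b = np.array([0, 10, 20, 40, 80, 110, 140, 170, 200,
              300, 400, 500, 600, 700, 800, 900], dtype=float)

def fit_ivim(s):
    """Two-step IVIM fit of a normalized signal decay s = S(b)/S0."""
    # Step 1: estimate D from the mono-exponential tail (b > 200 s/mm^2),
    # where the vascular contribution has essentially decayed away.
    tail = b > 200
    D = -np.polyfit(b[tail], np.log(s[tail]), 1)[0]

    # Step 2: fit f and D* over all b-values while keeping D fixed.
    def model(b, f, Dstar):
        return f * np.exp(-b * Dstar) + (1.0 - f) * np.exp(-b * D)

    (f, Dstar), _ = curve_fit(model, b, s, p0=(0.05, 10e-3), method='lm')
    return f, Dstar, f * Dstar, D

# Noiseless gray-matter-like decay (f = 4%, D* = 17e-3, D = 0.7e-3 mm^2/s)
s = 0.04 * np.exp(-b * 17e-3) + 0.96 * np.exp(-b * 0.7e-3)
print(fit_ivim(s))  # recovers approximately (0.04, 0.017, 6.8e-4, 7e-4)
```

For the quantitative ROI analysis, the same routine is simply applied once to the ROI-averaged signal instead of voxel-by-voxel.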
Second simulation: fitting quality as a function of experimental, b-value-dependent SNR {#sec011} --------------------------------------------------------------------------------------- The SNR of all baseline images was first measured in the current experimental setting in the whole parenchyma, excluding regions of obvious artifacts. It was calculated for each voxel as a function of b as the deviation of the single measurements from their averaged value. The corresponding Gaussian random noise term was then added at each value of b to the ideal signal corresponding first to D = 0.7·10^-3^ mm^2^·s^-1^, D\* = 17·10^-3^ mm^2^·s^-1^ and f ranging from 0.2% to 20% in steps of 0.2%, and second to D = 0.7·10^-3^ mm^2^·s^-1^, f = 4%, and D\* ranging from 0.8·10^-3^ mm^2^·s^-1^ to 30·10^-3^ mm^2^·s^-1^ in steps of 0.2·10^-3^ mm^2^·s^-1^. The fitting procedure was then performed. At each point, the simulation was repeated 30,000 times. Conversion to standard perfusion units {#sec012} -------------------------------------- IVIM parameters have been converted to standard perfusion units by adapting the formulas from \[[@pone.0117706.ref021]\] to the units used in this report: $$CBV\ \left\lbrack \frac{ml}{100\,ml} \right\rbrack = \lambda_{H_{2}O} \cdot f \cdot 100 = 0.78 \cdot f\ \left\lbrack \% \right\rbrack,$$ $$CBF\ \left\lbrack \frac{ml}{100\,ml\ min} \right\rbrack = 60 \cdot \frac{6\lambda_{H_{2}O}}{L\langle l \rangle} \cdot f \cdot 100 \cdot D^{*} = 130 \cdot fD^{*}\ \left\lbrack 10^{-3}\ mm^{2}s^{-1} \right\rbrack,$$ with the MRI-visible water content $\lambda_{H_{2}O} = 0.78$, the total capillary length *L* = 2 *mm*, and the mean capillary segment length $\langle l \rangle = 0.108\ mm$, as used in \[[@pone.0117706.ref021]\] and also \[[@pone.0117706.ref046]\].

Statistical Analysis
--------------------

Paired, one-tailed Student's t-tests were performed with Excel (Microsoft, Redmond, WA, USA). Statistical significance was set to p \< 0.05. Results {#sec013} ======= Simulation Results {#sec014} ------------------ In the first simulation, the values obtained after fitting converged asymptotically with increasing SNR, reaching the correct value for SNR between 50 and 100 ([Fig. 1A](#pone.0117706.g001){ref-type="fig"}). SNR in our experiment was measured to decrease as a function of b (0--900 s/mm^2^) from 107.4 to 34.2. Including this in the second simulation, the quality of the fitting procedure of f, D\*, fD\*, and D as a function of f and D\* is shown in [Fig. 1 B-C](#pone.0117706.g001){ref-type="fig"}. While the fitting of D was much better than the evaluation of the IVIM perfusion parameters (f, D\*, fD\*), the quality was found to be acceptable at the current experiment values of f (between 0.035 and 0.053) and D\* (between 9.58 and 30.5·10^-3^ mm^2^/s). Interestingly, fD\* was found to be more precise than the fit of f and of D\* in all three simulations.

![Fig 1. Quality of the fitting procedure: (A) as a function of SNR; (B-C) as a function of f and D\* under the experimentally measured, b-value-dependent SNR.](pone.0117706.g001){#pone.0117706.g001}

IVIM Functional Imaging Experiment {#sec015} ---------------------------------- Qualitatively, an increase in flow was observed in the visual cortex on single measurement parametric flow maps fD\* during stimulation ([Fig. 2](#pone.0117706.g002){ref-type="fig"}, and [S2 Fig](#pone.0117706.s002){ref-type="supplementary-material"}, supplementary material). Image quality improved after averaging (pixel-wise, after fitting, [S3 Fig](#pone.0117706.s003){ref-type="supplementary-material"}, supplementary material). 
Subtraction maps showed an increase in fD\* in the primary visual cortex of all volunteers ([Fig. 3](#pone.0117706.g003){ref-type="fig"}). An increase in perfusion fraction f and pseudo-diffusion coefficient D\* was also visible in the visual cortex, while no variation in diffusion coefficient D was noted ([S2 Fig](#pone.0117706.s002){ref-type="supplementary-material"}, supplementary material).

![Fig 2. Single-measurement parametric flow maps fD\* under visual stimulation and at baseline.](pone.0117706.g002){#pone.0117706.g002}

![Fig 3. Subtraction maps of fD\*, as obtained by subtracting the averaged flow maps obtained under baseline from the averaged maps obtained under visual stimulation. Scale of the colorbar: 10^-3^ mm^2^·s^-1^. The corresponding BOLD statistical t-map is given below each IVIM subtraction map.](pone.0117706.g003){#pone.0117706.g003}

Quantitatively, a statistically significant increase of all 3 IVIM perfusion parameters f, D\*, and fD\* was observed after stimulation in the GM of the visual cortex (50%, p = 0.0001; 77%, p = 0.048; and 170%, p = 0.01, respectively), while a less marked but similar effect was also observed in the visual subcortical WM (12%, p = 0.08; 40%, p = 0.0003; and 70%, p = 0.01, respectively) ([Table 1](#pone.0117706.t001){ref-type="table"}). A trend toward a slight decrease of around 20% (p \> 0.05 for all variables) in all but one of the IVIM perfusion parameters ([Table 1](#pone.0117706.t001){ref-type="table"}) was observed in the rest of the brain excluding the occipital region. The results in standard perfusion units CBF (cerebral blood flow) and CBV (cerebral blood volume) are presented in [Table 2](#pone.0117706.t002){ref-type="table"}.

10.1371/journal.pone.0117706.t002

###### Cerebral blood volume and flow, as derived from the IVIM parameters.

{#pone.0117706.t002g}

| Non-Visual Brain | GM Baseline | GM Stimulation | WM Baseline | WM Stimulation |
|------------------|-------------|----------------|-------------|----------------|
| CBF              | 91.0 ± 52.0 | 75.4 ± 19.5 | 74.1 ± 74.1 | 57.2 ± 36.4 |
| CBV              | 3.64 ± 2.34 | 2.85 ± 0.99 | 3.54 ± 1.22 | 3.42 ± 1.22 |

Cerebral blood volume CBV \[ml/100ml\] and cerebral blood flow CBF \[ml/100ml/min\], as calculated from the corresponding IVIM perfusion parameters from [Table 1](#pone.0117706.t001){ref-type="table"}. Percentage variations and p-values stay the same and are not reproduced.

Discussion {#sec016} ========== This study demonstrates functional imaging with IVIM MRI in the human visual cortex, as well as in the underlying white matter {#sec017} ------------------------------------------------------------------------------------------------------------------------------ This is of interest, because the method has been shown to be quantitative \[[@pone.0117706.ref025]\], and of microvascular origin \[[@pone.0117706.ref020]\], and might therefore be seen as a new tool with the potential to separate the respective contributions of blood flow and blood oxygenation to the currently used BOLD fMRI signal. The increase of 170% in blood flow is higher than, for example, the 40% increase reported using ASL and PET \[[@pone.0117706.ref047]\], though similar changes have also been reported, for example a 140% CBF increase at 10 Hz stimulation for 60 s, using laser Doppler flowmetry in rats \[[@pone.0117706.ref048]\]. While the effect was stronger in the cortex, the increase in the WM is noteworthy, because WM BOLD functional activation has remained controversial. Some authors believe it to be undetectable \[[@pone.0117706.ref049]\], while studies reporting BOLD signal measurement in WM regions are becoming more numerous \[[@pone.0117706.ref050]--[@pone.0117706.ref053]\]. 
Another interesting finding is the observed trend toward a slight decrease in the IVIM perfusion parameters in the brain excluding the occipital region, which may be artifactual, but may also be related to a reduction in the brain's baseline default mode activity during specific tasks \[[@pone.0117706.ref054]\]. Finally, the observed stability of D, using a b~max~ of 900 s/mm^2^, conflicts with previous reports, although those studies used higher b-values, such as b~max~ = 1443 s/mm^2^ \[[@pone.0117706.ref035]\] or b~max~ = 1800 s/mm^2^ \[[@pone.0117706.ref036]\], and needs further investigation. Perfusion measurement with IVIM remains an area of active research. In this context, the measured CBV reported here, ranging at rest from 2.77 ± 0.46 to 3.64 ± 2.34 ml/100 ml, is slightly lower than but consistent with the CBV of 3.8 ± 0.7 ml/100 ml measured with positron emission tomography \[[@pone.0117706.ref055]\]. Furthermore, the measured CBF, ranging at rest from 48.1 ± 14.3 to 91.0 ± 52.0 ml/100 ml/min, is of the same order of magnitude as the generally accepted 50 ml/100 ml/min \[[@pone.0117706.ref056]\]. Finally, a statistically significant difference was observed in this report between GM and WM for most of the fD\* measurements. There are various sources of possible inaccuracies of the IVIM perfusion parameters as derived in this study. Interestingly, we found fD\* to be more precise than the fit of f and of D\* in all three simulations, as it seems that the error associated with f and the error associated with D\* compensate for each other to some degree. Also, the quality of the fit increased with increased SNR, f, and D\* for all perfusion parameters. The corollary is that the quality of the fit is better in regions of increased perfusion (such as the activated regions), which in the current case is the region of interest. Another possible source of inaccuracy is that the different relaxation rates of the various compartments under inversion recovery were not taken into account; however, this allowed us to keep the model simple. Changes in the homogeneity of local magnetic fields due to changes in concentrations of diamagnetic oxyhemoglobin and paramagnetic deoxyhemoglobin could have an effect on the diffusion-weighted signal. Furthermore, it is also well known that the T2 of blood depends on the oxygenation state of hemoglobin \[[@pone.0117706.ref007]\], which critically depends on the integrity of the erythrocytes. Therefore, including the oxygenation dependence of blood T2 relative to that of the brain parenchyma could improve the accuracy of the IVIM perfusion quantification, but at the cost of increasing the model's complexity. We used a Gaussian diffusion model to describe the signal decay between b = 200 and 900 s/mm^2^. While more complicated models, such as bi-exponential or polynomial/kurtosis models, better capture the signal decay at high b-values, using them would have increased the model's complexity. Further, changes in inflow/outflow effects (of both CSF and blood) should be considered. The conversion to standard perfusion units CBV and CBF must be understood as a rough estimate, as it depends on the complex and difficult-to-retrieve microvascular topology, which varies greatly across brain regions, for example between white and gray matter, or even between the various layers of the cortex \[[@pone.0117706.ref057]\]. The MR-visible water content in human brain might also be higher than the value used here \[[@pone.0117706.ref058]\]. 
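To make the dependence of these estimates on the assumed microvascular constants concrete, the following small Python fragment (an illustration written for this discussion, not the study's code) applies the conversion formulas from the Methods; with the visual gray matter baseline values of Table 1 it reproduces CBV = 0.78 · 3.55 ≈ 2.77 ml/100 ml, the lower end of the range quoted above.

```python
LAMBDA_H2O = 0.78    # MR-visible water content
L = 2.0              # total capillary length [mm]
MEAN_SEG = 0.108     # mean capillary segment length <l> [mm]

def cbv(f_percent):
    """CBV [ml/100ml] from the perfusion fraction f given in %."""
    return LAMBDA_H2O * f_percent

def cbf(fDstar):
    """CBF [ml/100ml/min] from fD* given in 1e-3 mm^2/s."""
    coeff = 60.0 * 6.0 * LAMBDA_H2O / (L * MEAN_SEG) * 100 * 1e-3  # = 130
    return coeff * fDstar

print(cbv(3.55), cbf(0.59))  # visual GM at baseline: ~2.77 and ~76.7
```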
Lastly, the acquisition was limited, on purpose, to a single slice, to keep the experiment as simple as possible and to exclude any possible multi-slice effects on CSF suppression or blood magnetization, as those components move in the volume in a non-trivial way and might encounter several inversion pulses during the acquisition. In conclusion, IVIM fMRI can be seen as a new tool for quantitative mapping of microvascular perfusion changes during functional brain activation. Supporting Information {#sec018} ====================== ###### S1 Fig. Gray and white matter segmentation of the visual and non-visual brain. \(A\) Region of interest of the visual brain, as obtained by thresholding the t-map of the BOLD experiment, further segmented into gray and white matter and coregistered to the b0 map of the IVIM sequence, and (B) of the non-visual brain, respectively. (TIF) ###### S2 Fig. Single measurement IVIM parameter maps. \(A\) Maps of the perfusion fraction f, (B) pseudo-diffusion coefficient D\*, (C) flow-related coefficient fD\*, and (D) diffusion coefficient D, in all 5 (respectively 3) consecutive measurements for all 8 volunteers, during visual stimulation and rest (baseline). (TIF) ###### S3 Fig. Averaged IVIM flow maps. Maps of the blood-flow-related IVIM parameter fD\*, in 8 volunteers, as obtained by averaging the maps obtained under visual stimulation and baseline. Scale of the colorbar: 10^-3^ mm^2^·s^-1^. (TIF) The authors would like to thank Eleonora Fornari and Giovanni Battistella. [^1]: **Competing Interests:** The authors have declared that no competing interests exist. [^2]: Conceived and designed the experiments: CF KO RM PH PM. Performed the experiments: CF PH PM. Analyzed the data: CF AB. Contributed reagents/materials/analysis tools: CF KO AB. Wrote the paper: CF KO RM PH PM. 
0.6247086247086241,
33.5,
20.125
] |
/*****************************************************************************
* *
* This file is part of the tna framework distribution. *
* Documentation and updates may be get from biaoping.yin the author of *
* this framework *
* *
* Sun Public License Notice: *
* *
* The contents of this file are subject to the Sun Public License Version *
* 1.0 (the "License"); you may not use this file except in compliance with *
* the License. A copy of the License is available at http://www.sun.com *
* *
* The Original Code is tag. The Initial Developer of the Original *
* Code is biaoping yin. Portions created by biaoping yin are Copyright *
* (C) 2000. All Rights Reserved. *
* *
* GNU Public License Notice: *
* *
* Alternatively, the contents of this file may be used under the terms of *
* the GNU Lesser General Public License (the "LGPL"), in which case the *
* provisions of LGPL are applicable instead of those above. If you wish to *
* allow use of your version of this file only under the terms of the LGPL *
* and not to allow others to use your version of this file under the SPL, *
* indicate your decision by deleting the provisions above and replace *
* them with the notice and other provisions required by the LGPL. If you *
* do not delete the provisions above, a recipient may use your version of *
* this file under either the SPL or the LGPL. *
* *
* biaoping.yin ([email protected]) *
* *
*****************************************************************************/
package com.frameworkset.common.poolman.sql;
import java.io.Serializable;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.Types;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.frameworkset.common.poolman.NestedSQLException;
import com.frameworkset.common.poolman.PreparedDBUtil;
import com.frameworkset.common.poolman.util.JDBCPool;
import com.frameworkset.common.poolman.util.SQLManager;
import com.frameworkset.orm.adapter.DB;
import com.frameworkset.orm.transaction.JDBCTransaction;
import com.frameworkset.orm.transaction.TransactionException;
import com.frameworkset.orm.transaction.TransactionManager;
import com.frameworkset.util.UUID;
/**
 * Encapsulates the primary key information of a table. Known pitfalls:
 * 1. Inserting data into the same table through other channels may cause primary key conflicts.
 * 2. Different applications inserting concurrently into the same table of the same database may cause primary key conflicts.
*
* @author biaoping.yin created on 2005-3-29 version 1.0
*/
public class PrimaryKey
{
private static Logger log = LoggerFactory.getLogger(PrimaryKey.class);
	/** For a composite key, the key columns and their current values are kept in this map */
	private Map primaryKeys;
	/** Increment step of the table's key */
	private int increment = 1;
	/** Name of the table's primary key column */
	private String primaryKeyName;
	/** Name of the table */
	private String tableName;
	/** Current value of the table's primary key */
	private long curValue;
	/** Type of the table's primary key; defaults to int */
	private String type = "int";
	private String metaType = "int";
	/** Name of the database connection pool */
private String dbname = null;
	/**
	 * Prefix of the table's primary key. This attribute only takes effect when the
	 * key type is string; for example, if the key id of table a is declared as
	 * string, the generated key is this prefix followed by the counter value.
	 */
private String prefix = "";
private DB dbAdapter = null;
private String maxSql = "";
	/**
	 * Key generation mode for the table: 0 means automatic, 1 means composite.
	 */
private int keygenerator_mode = 0;
private Object customKey = null;
private boolean hasTableinfo = true;
boolean synsequece = false;
private String seqfunction;
	/**
	 * Mechanism used to generate the primary key (e.g., the name of a database sequence)
	 */
private String generator;
private String select ;
	/**
	 * Constructor that builds the basic information of a table.
	 *
	 * @param dbname
	 *            the connection pool the table belongs to
	 * @param tableName
	 *            the name of the table
	 * @param primaryKeyName
	 *            the name of the table's primary key
	 * @param customKey
	 *            an optional custom key object
	 * @param con
	 *            the JDBC connection (parameter added by biaoping.yin)
	 */
public PrimaryKey(String dbname, String tableName, String primaryKeyName,
Object customKey,Connection con)
{
this.dbname = dbname;
dbAdapter = SQLManager.getInstance().getDBAdapter(dbname);
this.tableName = tableName;
this.primaryKeyName = primaryKeyName;
JDBCPool pool = (JDBCPool) (SQLManager.getInstance().getPool(dbname));
synsequece = pool.getJDBCPoolMetadata().synsequence();
this.seqfunction = pool.getJDBCPoolMetadata().getSeqfunction();
try{
TableMetaData table = pool.getTableMetaData(con,tableName);
if(table != null)
{
if(primaryKeyName != null)
{
ColumnMetaData cd = pool.getColumnMetaData(con,tableName,primaryKeyName);
int type_ = cd.getDataType();
this.setType_(type_);
}
else
{
Set keys = table.getPrimaryKeys();
if(keys != null)
{
Iterator keyitrs = keys.iterator();
if(keyitrs.hasNext())
{
PrimaryKeyMetaData key = (PrimaryKeyMetaData)keyitrs.next();
this.primaryKeyName = key.getColumnName().toLowerCase();
this.setType_(key.getColumn().getDataType());
}
}
}
}
}
catch(Exception e)
{
e.printStackTrace();
}
String mode = pool.getKeygenerate();
if (mode.trim().equalsIgnoreCase("auto"))
keygenerator_mode = 0;
else if (mode.trim().equalsIgnoreCase("composite"))
keygenerator_mode = 1;
this.customKey = customKey;
}
private void setType_(int type_)
{
this.type = getJavaType( type_);
this.metaType = type;
}
public static String getJavaType(int sqltype_)
{
String javatype = "int";
switch(sqltype_)
{
case Types.INTEGER:
javatype = "int";break;
case Types.NUMERIC:
javatype = "long";break;
case Types.SMALLINT:
javatype = "int";break;
case Types.DECIMAL:
javatype = "int";break;
case Types.DOUBLE:
javatype = "long";break;
case Types.FLOAT:
javatype = "long";break;
case Types.VARCHAR:
javatype = "string";
}
return javatype;
}
public PrimaryKey(String dbname, String tableName, String primaryKeyName,
int increment, long curValue, String type, String prefix,
String maxSql,Connection con)
{
this.dbname = dbname;
dbAdapter = SQLManager.getInstance().getDBAdapter(dbname);
this.tableName = tableName;
this.primaryKeyName = primaryKeyName;
this.increment = increment;
this.curValue = curValue;
JDBCPool pool = (JDBCPool) (SQLManager.getInstance().getPool(dbname));
synsequece = pool.getJDBCPoolMetadata().synsequence();
this.seqfunction = pool.getJDBCPoolMetadata().getSeqfunction();
String mode = pool.getKeygenerate();
if (mode.trim().equalsIgnoreCase("auto"))
keygenerator_mode = 0;
else if (mode.trim().equalsIgnoreCase("composite"))
keygenerator_mode = 1;
if (type != null && !type.trim().equals(""))
this.type = type;
if (prefix != null && !prefix.trim().equals(""))
this.prefix = prefix;
this.maxSql = maxSql;
if(type.equals("sequence"))
{
ColumnMetaData cd = pool.getColumnMetaData(con,tableName,primaryKeyName);
if(cd != null)
{
int type_ = cd.getDataType();
this.metaType = PrimaryKey.getJavaType(type_);
}
else
{
metaType = "int";
}
}
else if(type.equals("uuid"))
{
ColumnMetaData cd = pool.getColumnMetaData(con,tableName,primaryKeyName);
if(cd != null)
{
int type_ = cd.getDataType();
this.metaType = PrimaryKey.getJavaType(type_);
}
else
{
metaType = "string";
}
}
else
{
this.metaType = type;
}
}
public PrimaryKey(String dbname, String tableName, String primaryKeyName,
int increment, long curValue, String type, String prefix,
String maxSql,String generator,Connection con)
{
this.dbname = dbname;
this.generator = generator;
dbAdapter = SQLManager.getInstance().getDBAdapter(dbname);
this.tableName = tableName;
this.primaryKeyName = primaryKeyName;
this.increment = increment;
this.curValue = curValue;
JDBCPool pool = (JDBCPool) (SQLManager.getInstance().getPool(dbname));
synsequece = pool.getJDBCPoolMetadata().synsequence();
this.seqfunction = pool.getJDBCPoolMetadata().getSeqfunction();
String mode = pool.getKeygenerate();
if (mode.trim().equalsIgnoreCase("auto"))
keygenerator_mode = 0;
else if (mode.trim().equalsIgnoreCase("composite"))
keygenerator_mode = 1;
if (type != null && !type.trim().equals(""))
this.type = type;
if (prefix != null && !prefix.trim().equals(""))
this.prefix = prefix;
this.maxSql = maxSql;
if(type.equals("sequence"))
{
ColumnMetaData cd = pool.getColumnMetaData(con,tableName,primaryKeyName);
if(cd != null)
{
int type_ = cd.getDataType();
this.metaType = PrimaryKey.getJavaType(type_);
}
else
{
metaType = "int";
}
}
else if(type.equals("uuid"))
{
ColumnMetaData cd = pool.getColumnMetaData(con,tableName,primaryKeyName);
if(cd != null)
{
int type_ = cd.getDataType();
this.metaType = PrimaryKey.getJavaType(type_);
}
else
{
metaType = "string";
}
}
else
{
this.metaType = type;
}
}
// public static long parserSequence(String sequence,String prefix,String
// type,String table_name)
// {
// long new_table_id_value = 0;
// if(type == null || type.trim().equals("") ||
// type.trim().equalsIgnoreCase("int")
// || type.trim().equalsIgnoreCase("integer")
// || type.trim().equalsIgnoreCase("java.lang.Integer")
// || type.trim().equalsIgnoreCase("long")
// || type.trim().equalsIgnoreCase("java.lang.long")
// || type.trim().equalsIgnoreCase("short"))
// new_table_id_value = sequence == null ||
// sequence.equals("")?0L:Long.parseLong(sequence);
// else
// {
// String temp_id = sequence;
//
// if(prefix == null || prefix.trim().equals("") )
// {
// log.debug("tableinfo中没有指定[" + table_name + "]的主键前缀'table_id_prefix'字段");
// }
// else
// {
// if(temp_id != null && temp_id.length() > prefix.length())
// temp_id = temp_id.substring(prefix.trim().length());
//
//
// }
// try
// {
// if(temp_id != null)
// new_table_id_value = Integer.parseInt(temp_id);
// }
// catch(Exception e)
// {
// log.error("tableinfo中没有指定[" + table_name +
// "]的主键前缀'table_id_prefix'字段,主键值为["+ temp_id + "]不是合法的数字。");
// //e.printStackTrace();
// new_table_id_value = 0;
// }
// }
// return new_table_id_value;
//
// }
/**
* added by biaoping.yin on 2008.05.29
*/
public PrimaryKey() {
// TODO Auto-generated constructor stub
}
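	/**
	 * Checks whether the candidate key value already exists in the target table.
	 * Used when sequence synchronization (synsequece) is enabled, to guard against
	 * clashes between the database sequence and rows inserted through other channels.
	 */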
private boolean exist(Connection con,long curValue) throws SQLException
{
String select = "select count(1) from " + this.tableName + " where " + this.primaryKeyName + "=?" ;
PreparedDBUtil dbUtil = new PreparedDBUtil();
try {
// String update = "update tableinfo set table_id_value=" + this.curValue +" where table_name='"+ tableName.toLowerCase() + "' and table_id_value <" + this.curValue ;
dbUtil.preparedSelect(this.dbname,select);
if (this.metaType.equals("int") || this.metaType.equals("java.lang.Integer")
|| this.metaType.equals("java.lang.integer")
|| this.metaType.equalsIgnoreCase("integer"))
{
dbUtil.setInt(1,(int)curValue);
}
else if (this.metaType.equals("java.lang.Long") || this.metaType.equals("java.lang.long")
|| this.metaType.equalsIgnoreCase("long"))
{
dbUtil.setLong(1,curValue);
}
else if (this.metaType.equals("java.lang.String")
|| this.metaType.equalsIgnoreCase("string"))
{
dbUtil.setString(1,this.prefix + curValue + "");
}
else
{
dbUtil.setString(1,this.prefix + curValue + "");
}
dbUtil.executePrepared(con);
if(dbUtil.getInt(0,0) > 0)
return true;
} catch (SQLException e1) {
e1.printStackTrace();
throw e1;
} catch (Exception e1) {
e1.printStackTrace();
throw new SQLException(e1.getMessage());
}
finally
{
dbUtil.resetPrepare();
}
return false;
}
/**
	 * Generates the table's primary key value.
*
* @return Sequence
* @throws SQLException
*/
public Sequence generateObjectKey() throws SQLException
{
return generateObjectKey(null);
}
/**
	 * Generates the table's primary key value.
*
* @return Sequence
* @throws SQLException
*/
public Sequence generateObjectKey(Connection con) throws SQLException
{
Sequence sequence = new Sequence();
if (type.equals("sequence"))
{
long curValue = this.curValue;
// String sql = "select " + this.generator + ".nextval from dual";
do
{
// PreparedDBUtil dbutil = new PreparedDBUtil();
try {
// if()
// dbutil.preparedSelect(this.dbname, sql);
// dbutil.executePrepared(con);
// if(dbutil.size() <= 0)
// {
//// System.out.println("select " + this.generator + ".nextval from dual");
// throw new SQLException("[select " + this.generator + ".nextval from dual] from [" + dbname + "] failed:retrun records is 0.");
// }
// curValue = dbutil.getInt(0,0);
curValue = this.dbAdapter.getNextValue(this.seqfunction,generator, con, this.dbname);
if(this.synsequece && this.exist(con,curValue) )
continue;
} catch (SQLException e) {
throw e;
}
// sequence.setPrimaryKey(new Long(curValue));
// sequence.setSequence(curValue);
break;
}
while(true);
this.curValue = curValue;
if (this.metaType.equals("int") || this.metaType.equals("java.lang.Integer")
|| this.metaType.equals("java.lang.integer")
|| this.metaType.equalsIgnoreCase("integer"))
{
sequence.setPrimaryKey(new Long(curValue));
sequence.setSequence(curValue);
return sequence;
}
if (this.metaType.equals("java.lang.Long") || this.metaType.equals("java.lang.long")
|| this.metaType.equalsIgnoreCase("long"))
{
sequence.setPrimaryKey(new Long(curValue));
sequence.setSequence(curValue);
return sequence;
}
if (this.metaType.equals("java.lang.String")
|| this.metaType.equalsIgnoreCase("string"))
{
sequence.setPrimaryKey(this.prefix + curValue + "");
sequence.setSequence(curValue);
return sequence;
}
else
{
sequence.setPrimaryKey(this.prefix + curValue + "");
sequence.setSequence(curValue);
return sequence;
}
}
else if(type.equals("uuid"))
{
// UUID.randomUUID().toString();
sequence.setPrimaryKey(UUID.randomUUID().toString());
sequence.setSequence(this.curValue);
return sequence;
}
else
{
synchronized (this)
{
switch (keygenerator_mode)
{
				case 0: // automatic mode
curValue += increment;
break;
case 1:
curValue += increment;
synchroDB(con);
break;
}
if (this.metaType.equals("int") || this.metaType.equals("java.lang.Integer")
|| this.metaType.equals("java.lang.integer")
|| this.metaType.equalsIgnoreCase("integer"))
{
sequence.setPrimaryKey(new Long(curValue));
sequence.setSequence(curValue);
return sequence;
}
if (this.metaType.equals("java.lang.Long") || this.metaType.equals("java.lang.long")
|| this.metaType.equalsIgnoreCase("long"))
{
sequence.setPrimaryKey(new Long(curValue));
sequence.setSequence(curValue);
return sequence;
}
if (this.metaType.equals("java.lang.String")
|| this.metaType.equalsIgnoreCase("string"))
{
sequence.setPrimaryKey(this.prefix + curValue + "");
sequence.setSequence(curValue);
return sequence;
}
else
{
sequence.setPrimaryKey(this.prefix + curValue + "");
sequence.setSequence(curValue);
return sequence;
}
}
}
// return curValue;
}
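	// Usage sketch (illustrative only; assumes Sequence exposes a getPrimaryKey()
	// getter matching the setter used above):
	//   PrimaryKey pk = ...;                      // key manager cached for a table
	//   Sequence seq = pk.generateObjectKey(con); // next key, typed per metaType
	//   Object id = seq.getPrimaryKey();          // Long, or prefixed String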
public Sequence generateObjectKey(String type, String prefix) throws SQLException
{
return generateObjectKey(type, prefix,null);
}
/**
* 根据主键类型和主键的前缀生成表的主键
*
* @param type
* 表的主键类型
* @param prefix
* 表的主键前缀
* @return
* @throws SQLException
*/
public Sequence generateObjectKey(String type, String prefix,Connection con) throws SQLException
{
Sequence sequence = new Sequence();
if (type.equals("sequence")) //不需要锁
{
long curValue = this.curValue;
// String sql = "select " + this.generator + ".nextval from dual";
do
{
// PreparedDBUtil dbutil = new PreparedDBUtil();
try {
// dbutil.preparedSelect(this.dbname,sql);
// dbutil.executePrepared(con);
// curValue = dbutil.getLong(0,0);
// if(this.synsequece && this.exist(con,curValue))
// continue;
curValue = this.dbAdapter.getNextValue(this.seqfunction,generator, con, this.dbname);
if(this.synsequece && this.exist(con,curValue))
continue;
} catch (SQLException e) {
e.printStackTrace();
throw e;
}
// sequence.setPrimaryKey(new Long(curValue));
// sequence.setSequence(curValue);
// return sequence;
break;
}
while(true);
this.curValue = curValue;
if (this.metaType.equals("int") || this.metaType.equals("java.lang.Integer")
|| this.metaType.equals("java.lang.integer")
|| this.metaType.equalsIgnoreCase("integer"))
{
sequence.setPrimaryKey(new Long(curValue));
sequence.setSequence(curValue);
return sequence;
}
if (this.metaType.equals("java.lang.Long") || this.metaType.equals("java.lang.long")
|| this.metaType.equalsIgnoreCase("long"))
{
sequence.setPrimaryKey(new Long(curValue));
sequence.setSequence(curValue);
return sequence;
}
if (this.metaType.equals("java.lang.String")
|| this.metaType.equalsIgnoreCase("string"))
{
sequence.setPrimaryKey(this.prefix + curValue + "");
sequence.setSequence(curValue);
return sequence;
}
else
{
sequence.setPrimaryKey(this.prefix + curValue + "");
sequence.setSequence(curValue);
return sequence;
}
}
else if(type.equals("uuid"))
{
// UUID.randomUUID().toString();
sequence.setPrimaryKey(UUID.randomUUID().toString());
sequence.setSequence(this.curValue);
return sequence;
}
		else // locking required
{
synchronized (this)
{
switch (keygenerator_mode)
{
				case 0: // automatic mode
curValue += increment;
break;
case 1:
curValue += increment;
synchroDB(con);
break;
}
if (this.metaType.equals("int") || this.metaType.equals("java.lang.Integer")
|| this.metaType.equals("java.lang.integer")
|| this.metaType.equalsIgnoreCase("integer"))
{
sequence.setPrimaryKey(new Long(curValue));
sequence.setSequence(curValue);
return sequence;
}
if (this.metaType.equals("java.lang.Long") || this.metaType.equals("java.lang.long")
|| this.metaType.equalsIgnoreCase("long"))
{
sequence.setPrimaryKey(new Long(curValue));
sequence.setSequence(curValue);
return sequence;
}
if (this.metaType.equals("java.lang.String")
|| this.metaType.equalsIgnoreCase("string"))
{
sequence.setPrimaryKey(this.prefix + curValue + "");
sequence.setSequence(curValue);
return sequence;
}
else
{
sequence.setPrimaryKey(this.prefix + curValue + "");
sequence.setSequence(curValue);
return sequence;
}
}
}
// Sequence sequence = new Sequence();
// return curValue;
}
// /**
	// * Generates the table's primary key value
// * @return long
// */
// public long generateKey()
// {
// synchronized(this)
// {
//
// switch(keygenerator_mode)
// {
	// case 0: // automatic mode
// curValue += increment;
// break;
// case 1:
// curValue += increment;
// synchroDB();
// break;
// }
//
// return curValue;
// }
//
// //return curValue;
// }
/**
	 * Synchronizes the cached maximum key value of the table with the maximum value currently stored in the database table.
* @throws SQLException
*/
protected void synchroDB(Connection con) throws SQLException
{
// Connection con = null;
Statement stmt = null;
ResultSet rs = null;
JDBCTransaction tx = null;
boolean outcon = true;
try
{
if(con == null)
{
tx = TransactionManager.getTransaction();
if(tx == null)
{
con = SQLManager.getInstance().requestConnection(dbname);
}
else
{
con = tx.getConnection(dbname);
}
outcon = false;
}
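			// Query the table's current maximum key value and advance the cached
			// counter past it, so keys generated here do not collide with rows
			// inserted through other channels.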
stmt = con.createStatement();
rs = stmt.executeQuery(maxSql);
long temp = 0;
if (rs.next())
{
temp = rs.getLong(1);
}
if (temp >= this.curValue)
{
curValue = temp + 1;
}
// else if(temp < this.curValue )
// {
//
// curValue = temp + 1;
//
// System.out.println("curValue=========:" + curValue);
// }
}
catch (SQLException e)
{
// log.error("同步当前缓冲中表[" + tableName + "]的主键[" + primaryKeyName
// + "]最大值与数据库该表主键最大值失败,系统采用自动产生的主键:" + e.getMessage());
throw new NestedSQLException("同步当前缓冲中表[" + tableName + "]的主键[" + primaryKeyName
+ "]最大值与数据库该表主键最大值失败,系统采用自动产生的主键:", e);
} catch (TransactionException e) {
// e.printStackTrace();
// log.error("同步当前缓冲中表[" + tableName + "]的主键[" + primaryKeyName
// + "]最大值与数据库该表主键最大值失败,系统采用自动产生的主键:" + e.getMessage());
throw new NestedSQLException("同步当前缓冲中表[" + tableName + "]的主键[" + primaryKeyName
+ "]最大值与数据库该表主键最大值失败,系统采用自动产生的主键:" , e);
// throw new SQLException(e.getMessage());
}
finally
{
if(con != null)
{
JDBCPool.closeResources(stmt, rs);
if(!outcon)
{
if(tx == null)
{
JDBCPool.closeConnection(con);
}
}
}
con = null;
}
}
public static String changeID(Serializable id, String dbName, String type)
{
if (type.equals("int") || type.equals("java.lang.Integer")
|| type.equals("java.lang.integer")
|| type.equalsIgnoreCase("integer"))
{
return id.toString() ;
}
if (type.equals("java.lang.Long") || type.equals("java.lang.long")
|| type.equalsIgnoreCase("long"))
{
return id.toString();
}
if (type.equals("java.lang.String") || type.equalsIgnoreCase("string"))
{
char stringDelimiter = SQLManager.getInstance()
.getDBAdapter(dbName).getStringDelimiter();
return "" + stringDelimiter + id + stringDelimiter;
}
else
{
char stringDelimiter = SQLManager.getInstance()
.getDBAdapter(dbName).getStringDelimiter();
return "" + stringDelimiter + id + stringDelimiter;
}
}
/**
	 * Restores the table's primary key value; called when an exception occurs or an insert fails.
*
* @return long
*/
public long restoreKey(Object oldValue)
{
synchronized (this)
{
long temp = getKeyID(oldValue);
if (curValue == temp)
curValue -= increment;
}
return curValue;
}
public long getKeyID(Object key)
{
long temp = 0;
if (key instanceof Long)
temp = ((Long) key).longValue();
else
{
String t_ = key.toString();
t_ = t_.substring(this.prefix.length());
try
{
temp = Long.parseLong(t_);
}
catch (Exception e)
{
temp = 0;
}
}
return temp;
}
/**
	 * Stores the current key value in the cache and in the tableinfo table when the insert succeeds.
*
* @param newValue
* @param stmt
* @param updateSql
* @throws SQLException
*/
public synchronized void setCurValue(long newValue, Statement stmt,
String updateSql) throws SQLException
{
{
			// Only update when the new value is larger than the cached one; otherwise do nothing
if (curValue < newValue)
{
//stmt.executeUpdate(updateSql);
this.curValue = newValue;
}
}
}
// private final static String update = "update tableinfo set table_id_value=? where upper(table_name)=? and table_id_value <?" ;
/**
	 * Updates the table's key record. When a key is obtained from the database on its own,
	 * this method should be called after key generation so that the information in the
	 * tableinfo table stays in sync.
* @throws SQLException
*
*/
public void updateTableinfo(Connection con) throws SQLException
{
// PreparedDBUtil dbUtil = new PreparedDBUtil();
// try {
//// String update = "update tableinfo set table_id_value=" + this.curValue +" where table_name='"+ tableName.toLowerCase() + "' and table_id_value <" + this.curValue ;
// dbUtil.preparedUpdate(this.dbname,update);
// dbUtil.setInt(1,(int)this.curValue);
// dbUtil.setString(2,this.tableName.toUpperCase());
// dbUtil.setInt(3,(int)this.curValue);
// dbUtil.executePrepared(con);
// } catch (SQLException e1) {
// throw e1;
//// e1.printStackTrace();
//
// } catch (Exception e1) {
// throw new SQLException (e1.getMessage());
// // e1.printStackTrace();
// }
// finally
// {
// dbUtil.resetPrepare();
// }
}
/**
* @return Returns the curValue.
*/
public long getCurValue()
{
return curValue;
}
/**
* @return Returns the increment.
*/
public int getIncrement()
{
return increment;
}
/**
* @return Returns the primaryKeyName.
*/
public String getPrimaryKeyName()
{
return primaryKeyName;
}
/**
* @return Returns the tableName.
*/
public String getTableName()
{
return tableName;
}
public String getType()
{
return type;
}
public void setType(String type)
{
this.type = type;
}
public String getDbname()
{
return dbname;
}
public void setHasTableinfo(boolean hasTableinfo) {
this.hasTableinfo = hasTableinfo;
}
public boolean hasTableinfo() {
return this.hasTableinfo;
}
public String toString()
{
StringBuilder buffer = new StringBuilder();
buffer.append("table=").append(this.tableName).append(",primaryKey=").append(this.primaryKeyName)
.append(",type=").append(this.type);
return buffer.toString();
// return
}
public String getMetaType() {
return metaType;
}
public void setMetaType(String metaType) {
this.metaType = metaType;
}
public static void main(String[] args)
{
for(int i = 0; i < 10; i ++)
System.out.println(UUID.randomUUID().toString().length());
}
}
| Low | [
0.49195402298850505,
26.75,
27.625
] |
## Passing Parameters Between Pages

Passing parameters between pages is a fairly common business requirement. Based on the underlying mechanism and the environment they apply to, the techniques fall into two broad categories.

**In an ordinary browser, the common approaches are:**

1. Passing parameters via the URL

When navigating, append the parameters by setting window.location.href; on the receiving page, read the query string through window.location.search.

Sending page:
```
window.location.href = 'new.html?targetId=123'
```
Receiving page:
```
// Read a parameter from the URL
function getUrlParam (name) {
    var reg = new RegExp("(^|&)" + name + "=([^&]*)(&|$)");
    var r = window.location.search.substr(1).match(reg);
    if (r!= null) {
        return unescape(r[2]);
    }else{
        return null;
    }
}
// Read the targetId parameter from the URL
var targetId = getUrlParam('targetId');
console.log(targetId);
```

2. Local storage

Parameters can also be passed through local storage: cookie, sessionStorage, or localStorage.

Sending page:
```
localStorage.setItem("targetId","123");
```
Receiving page:
```
localStorage.getItem("targetId");
```

**Depending on the business scenario, the mui framework provides two modes of passing values:**

1. Passing values through extras when the page is initialized

When creating a new window in html5+, the create method takes an extras argument that holds additional extended parameters of the Webview window.

```
var w = plus.webview.create("new.html","new",{},{
    targetId: '123'
});
w.show(); // Show the window
// Read the extended parameter
console.log("extras:" + w.targetId);
```

Note: id is a property of WebviewObject, so extras must not use id as a parameter name.

When initializing a page, mui provides the extras configuration option, through which page parameters can be set and values passed between pages.

The mui framework performs page initialization in the following scenarios:

- opening a new page via mui.openWindow() (if the target page has already been preloaded successfully, the extras passed to openWindow have no effect);
- creating a sub-page via mui.init();
- preloading a page via mui.init();
- preloading a page via mui.preload()

Taking opening a new page as an example, a version compatible with both the browser and a 5+-based APP looks like this:
```
var targetId = '123';
var baseUrl = 'new.html';
var url = mui.os.plus?baseUrl:baseUrl+'?targetId=' + targetId;
mui.openWindow({
    url: url,
    extras: {
        targetId: targetId
    }
})
```
To read the extended parameters of a given webview page, use the webview object. For example, in new.html the extras can be read like this:
```
mui.plusReady(function () {
    var self = plus.webview.currentWebview();
    // or: var self = plus.webview.getWebviewById('new');
    console.log("extras:" + self.targetId);
})
```
As for a version that works in both the browser and the APP, simply combine this method with the compatible pattern shown above.

2. Passing values to an already created page through custom events

The Webview window object WebviewObject has an evalJS method that sends a JS script into a Webview window for execution, which can be used for data communication between Webview windows.
```
mui.fire(target, event, data)
```
Parameters:

- target: Type: WebviewObject. The target webview to receive the value.
- event: Type: String. The name of the custom event.
- data: Type: JSON. The data, in JSON format.

Sending page:
```
<button id="send" type="button" class="mui-btn mui-btn-blue mui-btn-block">Button</button>
<script src="js/mui.js"></script>
<script type="text/javascript">
    var ws = null;
    mui.plusReady(function () {
        ws = plus.webview.create("new.html","new",{top:"0px",bottom:"0px"});
    })
    document.querySelector('#send').addEventListener('tap',function () {
        var targetId = '123';
        ws.evalJS('send('+targetId+')');
        ws.show();
    })
</script>
```
Receiving page:
```
<div class="mui-content">
    <div id="text"></div>
</div>
<script src="js/mui.js" type="text/javascript" charset="utf-8"></script>
<script type="text/javascript">
    mui.init();
    // Function that receives the parameter
    function send(param){
        document.getElementById("text").innerHTML = param;
    }
    mui.back = function(){
        var self = plus.webview.currentWebview();
        self.hide();
    }
</script>
```
Note: back has to be overridden here, because the default behavior is close; otherwise, the window would have to be recreated the next time show is called after going back.

Obviously this is somewhat cumbersome, so mui wraps the evalJS mechanism and passes values through custom events: the mui.fire() method triggers a custom event on the target window.

Sending page:
```
<button id="send" type="button" class="mui-btn mui-btn-blue mui-btn-block">Button</button>
<script src="js/mui.js"></script>
<script type="text/javascript">
    var ws = null;
    mui.plusReady(function () {
        ws = plus.webview.create("new.html","new",{top:"0px",bottom:"0px"});
    })
    document.querySelector('#send').addEventListener('tap',function () {
        mui.fire(ws,'send',{
            targetId: '123'
        })
        ws.show();
    })
</script>
```
Receiving page:
```
// Listen for the custom 'send' event
window.addEventListener('send',function(event){
    // Read the event payload
    var targetId = event.detail.targetId;
    document.getElementById("text").innerHTML = targetId;
});
```

One point deserves special mention: many developers preload the receiving page before passing parameters with mui.fire, and then find that the parameters never arrive. This is a very common mistake. Once a page has been preloaded, its code has already run; if webview.evalJS or mui.fire is executed before the loaded event has completed, the page will fail to receive the parameters. In that case, the parameter-receiving logic should be placed inside the webview's loaded or show event listener:

**How to verify that a webview's loaded event has completed:**
```
var ws = plus.webview.getWebviewById(id)
ws.addEventListener( "loaded", function(e){
    console.log( "Loaded: "+e.target.getURL() );
}, false );
```
**How to verify that a webview's show event has completed:**
```
var ws=plus.webview.currentWebview();
ws.addEventListener("show", function(e){
    console.log( "Webview Showed" );
}, false );
``` | High | [
1.152993348115299,
12.1875,
-1.6171875
] |
The 2018-19 NHL season begins Oct. 3. With training camps open, NHL.com is taking a look at the five keys, the inside scoop on roster questions, and the projected lines for all 31 teams. Today, the Philadelphia Flyers. Coach: Dave Hakstol (third season) Last season: 42-26-14; third place Metropolitan Division, lost to Pittsburgh Penguins in Eastern Conference First Round [RELATED: 2018-19 Season Preview coverage] 5 KEYS 1. Finding a third-line center Mikhail Vorobyev and Jordan Weal have emerged as the top contenders to fill the spot. Vorobyev, 21, had 29 points (nine goals, 20 assists) in 58 games with Lehigh Valley of the American Hockey League in his first North American season and has been impressive during training camp. "I didn't know what level he would be able to be at in camp on a consistent basis," coach Dave Hakstol told the Courier-Post. "Last year as a young player, this year, there's a lot of maturity in every part of his game." Weal, 26, had 21 points (eight goals, 13 assists) in 69 games last season. He has played the wing during his three seasons with the Flyers but is a natural center. "When I'm in the middle of the ice, it seems like it's a little easier of a spot to create and make players around you better, and that's what I feel I do best," Weal said. "At the end of the day, [Hakstol] is going to make the call. But if I go out there and do what I can do, I think I'll be just fine." 2. Where does Simmonds fit? Wayne Simmonds scored 24 goals despite a litany of injuries, including a pelvis tear before training camp last season that was surgically repaired after the season. The emergence of Travis Konecny on the top line likely bumps Simmonds to a third-line spot at 5-on-5, and the signing of forward James van Riemsdyk brings into question the 30-year-old's spot on the first power play. Simmonds also is entering the final season of his contract and has been mentioned in trade rumors. He maintains a loud voice in the locker room because of his work ethic, but his future with the organization is a question. Video: Discussing the health and role of Wayne Simmonds 3. Patrick, take two Nolan Patrick ended last season with 21 points (10 goals, 11 assists) in his final 33 regular-season games and a strong performance during the Stanley Cup Playoffs with two points (one goal, one assist) in six games. He's entering his second NHL season after his first surgery-free offseason since 2015, and the 20-year-old center is excited to get going. "Obviously I want to start where I ended last year and keep building off that," he said. Patrick likely will start the season on a line with van Riemsdyk and Jakub Voracek, and with good health a 60-point season is possible. 4. Improve the penalty kill The Flyers were 29th in the NHL on the penalty kill last season (75.8 percent) but did little to address it during the offseason. General manager Ron Hextall said he's confident a full season with some tactical changes made late last season will improve the results. Philadelphia was 18th on the penalty kill after Feb. 22 (78.4 percent in 22 games). "Internally our guys have got to get better," Hextall said. "That's the bottom line there." Video: PHI@NYR: Rubtsov blasts a shot past Lundqvist for PPG 5. The goalie question Brian Elliott and Michal Neuvirth each had offseason surgery, but the expectation is they will be Philadelphia's goaltenders this season. But Neuvirth has a lower-body injury and is questionable to start the season. 
Alex Lyon, who played 11 games last season when Elliott and Neuvirth were injured, is out about three more weeks with a lower-body injury sustained Sept. 18. That could open a spot for top prospect Carter Hart, who has a .957 save percentage in three preseason games. The 20-year-old was expected to start the season with Lehigh Valley of the American Hockey League but could have played himself into NHL consideration. Anthony Stolarz also is in the running. The 24-year-old played seven games with the Flyers in 2016-17 but missed most of last season because of a knee injury. ROSTER RUNDOWN Making the cut Defenseman Philippe Myers could be ready for an NHL spot after an impressive first professional season with Lehigh Valley of the AHL last season. The 21-year-old had 21 points (five goals, 16 assists) and a plus-12 rating in 50 regular-season games, and seven points (four goals, three assists) in 13 AHL playoff games. He's big (6-foot-5, 210 pounds) and strong enough to handle opposing forwards and brings a needed right-handed shot. Most intriguing addition There's a welcome familiarity for the Flyers and van Riemsdyk, who signed a five-year contract July 1 to rejoin the team that selected him with the No. 2 pick of the 2007 NHL Draft. Van Riemsdyk returns after six seasons with the Toronto Maple Leafs, capped by an NHL career-high 36 goals last season. Philadelphia was overly reliant on its top line of left wing Claude Giroux, center Sean Couturier and Konecny last season. A second line of van Riemsdyk, Patrick and Voracek has the potential to give the Flyers vastly improved scoring depth and create favorable matchups on the road. Biggest potential surprise Weal became a bit of a forgotten piece for the Flyers last season but has the chance to make a big impact if he can win the third-line center spot. He was a big offensive producer as a center in junior hockey and close to a point-per-game player as a center in four full AHL seasons (219 points in 255 games). He's getting the chance to play in the middle now, and if he can center an effective third line, likely with left wing Oskar Lindblom and Simmonds, it would give the Flyers the potential for three productive scoring lines. Ready to break through Defenseman Travis Sanheim began last season in the NHL but was sent to the AHL on Jan. 22 after he had five points (one goal, four assists) and was minus-10 in 35 games. He came back March 9 better prepared for the NHL and had five points (one goal, four assists) and a plus-4 rating in 14 games. He averaged 16:27 of ice time per game after being recalled, up from 15:31 prior to his demotion. The 22-year-old could open the season on the left side of the second defense pair and potentially see time on the second power play. PROJECTED LINEUP Claude Giroux -- Sean Couturier -- Travis Konecny James van Riemsdyk -- Nolan Patrick -- Jakub Voracek Oskar Lindblom -- Jordan Weal -- Wayne Simmonds Scott Laughton -- Jori Lehtera -- Taylor Leier Ivan Provorov -- Shayne Gostisbehere Travis Sanheim -- Andrew MacDonald Robert Hagg -- Radko Gudas Brian Elliott Anthony Stolarz | Mid | [
0.617647058823529,
36.75,
22.75
] |
Release of Different Cell Mediators During Retinal Pigment Epithelium Regeneration Following Selective Retina Therapy. To investigate the effect of selective retina therapy (SRT) on the release of AMD-relevant cell mediators, such as matrix metalloproteinases (MMPs), VEGF, and pigment epithelium derived factor (PEDF), using different laser spot sizes and densities. Porcine RPE-choroid explants were treated with a pulsed 532 nm Nd:YAG laser using (1) large spot sizes, (2) small spot sizes with a high-density (hd) treatment, and (3) small spot sizes with a low-density (ld) treatment. Explants were cultivated in modified Ussing chambers. RPE regeneration and RPE cell death were investigated by calcein-AM staining and immunofluorescence. The MMP release was examined via zymography and immunofluorescence. VEGF and PEDF secretion was analyzed by ELISA. During pigment epithelium regeneration (PER), mitosis and RPE cell migration were observed. Four days after SRT (large spot size), the content of active MMP2 increased significantly (P < 0.01). Hd treatment with small spot sizes also resulted in an increase of active MMP2 (P < 0.05). In immunofluorescence, explants showed a localized expression of MMP2 within the healing lesions after irradiation. The PEDF level increased significantly (P = 0.01) after SRT with large spot sizes. VEGF secretion decreased significantly (P < 0.05) following SRT with large spot sizes and with hd treatment of small spot sizes. SRT induces a cytokine profile that may improve the flux across Bruch's membrane, slow down the progression of early AMD through RPE regeneration, and inhibit the formation of choroidal neovascularization. The cytokine release depends on the size and density of the applied laser spots. | High | [
0.7067861715749041,
34.5,
14.3125
] |
---
abstract: 'Maximizing the sum of two generalized Rayleigh quotients (SRQ) can be reformulated as a one-dimensional optimization problem, where the function value evaluations are reduced to solving semi-definite programming (SDP) subproblems. In this paper, we first use the optimal value of the dual SDP subproblem to construct a new saw-tooth-type overestimation. Then, we propose an efficient branch-and-bound algorithm to globally solve (SRQ), which is shown to find an $\epsilon$-approximation optimal solution of (SRQ) in at most O$\left(\frac{1}{\epsilon}\right)$ iterations. Numerical results demonstrate that it is even more efficient than the recent SDP-based heuristic algorithm.'
author:
- Xiaohui Wang
- Longfei Wang
- Yong Xia
date: 'Received: date / Accepted: date'
title: 'An efficient global optimization algorithm for maximizing the sum of two generalized Rayleigh quotients [^1]'
---

Introduction
============

The problem of maximizing the sum of two generalized Rayleigh quotients $$\begin{aligned} \label{sum-of-two} {\rm(SRQ)}~~ \max_{x\ne0}\dfrac{x^TBx}{x^TWx}+\dfrac{x^TDx}{x^TVx}\end{aligned}$$ with positive definite matrices $W$ and $V$ has recent applications in the multi-user MIMO system [@PG] and in sparse Fisher discriminant analysis in pattern recognition [@DFB; @FM; @WZW]. Without loss of generality, we can assume that $V$ is the identity; otherwise, we reformulate (\[sum-of-two\]) as a problem in terms of $y$ by substituting $x=V^{-\frac{1}{2}}y$. Moreover, since the objective function in (\[sum-of-two\]) is homogeneous, (SRQ) can be further recast as the following sphere-constrained optimization problem, which was first proposed by Zhang [@Hong; @HZ]: $${\rm(P)}~~ \begin{array}{lll} &\max_{x\in \Bbb R^n}& f(x) =\dfrac{x^TBx}{x^TWx}+x^TDx\\ &{\rm s.t.} & \|x\|= 1, \end{array}$$ where $\|\cdot\|$ denotes the $\ell_2$-norm throughout this paper.

The single generalized Rayleigh quotient optimization problem (i.e., (SRQ) with $B=0$) is related to the classical eigenvalue problem and is solved in polynomial time [@ZYL]. However, to the best of our knowledge, whether the general (SRQ) (or (P)) can be efficiently solved in polynomial time remains open. Actually, as shown in \[[@Hong], Example 1.1\], there could exist a few local non-global maximizers of (P). Moreover, even finding a critical point of (P) is nontrivial; see [@Hong; @HZ].

Recently, (P) was reformulated as the problem of maximizing the following one-dimensional function [@NRX]: $$({\rm P}_1)~~\max_{\mu\in [\underline{\mu},\bar{\mu}]}~q(\mu):=\mu+g(\mu), \label{oned}$$ where $g(\mu)$ is related to a non-convex quadratic optimization: $$\label{P_mu} \begin{array}{lll} g(\mu)=&\max_{x\in \Bbb R^n}&x^TDx\\ &{\rm s.t.} & \|x\|= 1\\ &\ & x^T(B-\mu W)x\ge0 \end{array}$$ and the lower and upper bounds $$\label{mu-bar} \underline{\mu}=\min_{\|x\|=1} \dfrac{x^TBx}{x^TWx}, \ \ \bar{\mu}=\max_{\|x\|=1} \dfrac{x^TBx}{x^TWx}$$ are the smallest and the largest generalized eigenvalues of the matrix pencil $(B,W)$, respectively. In order to solve the one-dimensional problem (\[oned\]), a "two-stage" heuristic algorithm was proposed in [@NRX] by first subdividing $[\underline{\mu},\bar{\mu}]$ into coarse intervals such that each one contains a local maximizer of $q(\mu)$ and then applying the quadratic fit line search [@An; @Baz; @Lu] in each interval.
For any given $\mu$, $g(\mu)$ (or $q(\mu)$) can be evaluated by solving an equivalent semi-definite programming (SDP) formulation, according to an extended version of the S-Lemma in \[[@Poly], Proposition 4.1; see also \[[@D], Theorem 5.17\]\]. Finally, for the returned optimal solution $\mu^*$, the optimal vector solution of (P) is recovered by a rank-one decomposition procedure \[[@NRX], Theorem 3\]. Though this "two-stage" algorithm could find the global solutions of the tested examples, it is still a heuristic algorithm, since the function $q(\mu)$ is not guaranteed to be quasi-concave. Besides, there is no meaningful stopping criterion for the "two-stage" algorithm. That is, we cannot estimate the gap between the obtained solution and the global maximizer of (P$_1$).

In this paper, we propose an easy-to-evaluate function for upper bounding $q(\mu)$. It provides saw-tooth-curve upper bounds of $q(\mu)$ over $[\underline{\mu},\bar{\mu}]$, which are used to establish an efficient branch-and-bound algorithm. We further show that the new algorithm returns an $\epsilon$-approximation optimal solution of (P$_1$) in at most $O\left(\frac{1}{\epsilon}\right)$ iterations. Numerical results show that the new algorithm is much more efficient than the "two-stage" heuristic algorithm [@NRX].

The remainder of this paper is organized as follows. In Section 2, we give some preliminaries on the evaluation of $g(\mu)$. In Section 3, we propose an easy-to-compute upper bounding function, which provides saw-tooth-curve upper bounds of $g(\mu)$. In Section 4, we establish a new branch-and-bound algorithm and estimate its worst-case computational complexity. In Section 5, we conduct numerical comparison experiments, which demonstrate the efficiency of our new algorithm. Conclusions are made in Section 6.

Throughout the paper, $v(\cdot)$ denotes the optimal objective value of the problem $(\cdot)$. We use $A\succeq (\preceq)0$ to stand for a positive (negative) semi-definite matrix $A$. The positive definite matrix $A$ is denoted by $A\succ 0$. Let $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ be the maximal and minimal eigenvalues of $A$, respectively. The inner product of two matrices $A$ and $B$ is denoted by $A\bullet B=$trace$(AB^T)$. For a real number $a$, $\lfloor a\rfloor$ returns the largest integer less than or equal to $a$.

Preliminaries
=============

In this section, we first show how to evaluate $g(\mu)$. Then, we present the "two-stage" algorithm [@NRX] to maximize $q(\mu)$ (\[oned\]). Finally, we discuss how to get the optimal vector solution of (P) from the maximizer of $q(\mu)$.

Lifting $xx^T$ to $X\in \Bbb R^{n\times n}$ (since $x^TAx=A\bullet (xx^T)$) yields the primal SDP relaxation of the optimization problem of evaluating $g(\mu)$ for any given $\mu$: $$\begin{aligned} ({\rm SDP}_{\mu})~~ &\max & D \bullet X\\ &{\rm s.t.} & I\bullet X= 1\\ & & (B-\mu W)\bullet X\ge0\\ & & X\succeq 0.\end{aligned}$$ The conic dual problem of $({\rm SDP}_{\mu})$ is $$\begin{aligned} ({\rm SD}_{\mu})~~ &\min &\nu\\ &{\rm s.t.} &D -\nu I+\eta(B-\mu W)\preceq0\\ & & \eta\ge0,\end{aligned}$$ which coincides with the Lagrangian dual problem of $g({\mu})$. It is trivial to see that $({\rm SD}_{\mu})$ has an interior feasible solution, i.e., Slater's condition holds. We can verify that, for any $\mu$ satisfying $$\mu<\bar{\mu},\label{ass}$$ Slater's condition holds for $({\rm SDP}_{\mu})$, i.e., there is an $X\succ 0$ such that $I\bullet X= 1$ and $(B-\mu W)\bullet X>0$.
Therefore, under the assumption (\[ass\]), strong duality holds for $({\rm SDP}_{\mu})$, that is, $v({\rm SDP}_{\mu})=v({\rm SD}_{\mu})$, and both optimal values are attained. Under the assumption (\[ass\]), by further applying the extended version of the S-Lemma in \[[@Poly], Proposition 4.1; see also \[[@D], Theorem 5.17\]\], we can show that strong duality holds for the optimization problem of evaluating $g({\mu})$, i.e., $g({\mu})=v({\rm SD}_{\mu})$. For more details, we refer to [@NRX].

Next, we present the "two-stage" algorithm proposed in [@NRX] for solving (\[oned\]). Firstly, it partitions $[\underline{\mu},\bar{\mu}]$ into a rather coarse mesh and then collects all subintervals containing an interior local maximizer. In the second stage, the quadratic fit method [@Baz; @An; @Lu] is applied to find a corresponding local maximizer in each subinterval that has been collected in the first stage. Finally, the optimal solution $\mu^*$ is selected from all these obtained local maximizers. In this paper, we will not present the detailed quadratic fit line search subroutine, which can be found in [@NRX]. One of the reasons is that the algorithm in the first stage is already quite time-consuming.

**The "two-stage" scheme proposed in [@NRX]**

Step 1. Given $\delta>0.$ Let $\mu_0=\underline{\mu}$ and $\mu_i=\underline{\mu}+(i-1)\delta$ for $i=1,2,\ldots,\lfloor\frac{\bar{\mu}-\underline{\mu}} {\delta}\rfloor+1$. If $\frac{\bar{\mu}-\underline{\mu}} {\delta}$ is not an integer, set $\mu_k=\bar{\mu}$ for $k= \lfloor\frac{\bar{\mu}-\underline{\mu}} {\delta}\rfloor+2$.

Step 2. For $i=1,2,\ldots,$ collect all the three-point patterns $[\mu_{i-1},\mu_i,\mu_{i+1}]$ such that $\max\{q(\mu_{i-1}),q(\mu_{i+1})\}\le q(\mu_i)$.

Step 3. Call the quadratic fit line search subroutine (with a smaller tolerance than $\delta$) to find a corresponding local maximizer in each three-point pattern $[\mu_{i-1},\mu_i,\mu_{i+1}]$.

Step 4. Select the best maximizer $\mu^*$ among $\underline{\mu}$, $\bar{\mu}$, and all the local maximizers found in Step 3.

Suppose (\[oned\]) is solved and let $\mu^*$ be the returned maximizer. If $\mu^*=\bar{\mu}$, the feasible region of (\[P\_mu\]) is reduced to $$\|x\|=1,~(B-\mu^*W)x=0,$$ which contains only the unit eigenvector corresponding to the largest generalized eigenvalue of the matrix pencil $(B,W)$. In this case, $g(\mu^*)$ is actually a maximum eigenvalue problem. On the other hand, if $\mu^*<\bar{\mu}$, the optimal vector solution of (P) is recovered from the equivalent $({\rm SDP}_{\mu^*})$ based on the rank-one constraint, by using a rank-one procedure similar to that in [@SZ; @Y]; see details in [@NRX].

There is an alternative approach to recover the optimal solution of (P). Let $(\nu^*,\eta^*)$ be the optimal solution of the dual problem $({\rm SD}_{\mu^*})$. It is not difficult to verify that $$g(\mu^*)=\max_{\|x\|= 1}~ x^T(D-\eta^*(B-\mu^* W))x=\lambda_{\max}(D-\eta^*(B-\mu^* W)).$$ Consequently, the optimal vector solution of (P) is the unit eigenvector corresponding to the maximum eigenvalue of $D-\eta^*(B-\mu^* W)$.

Saw-tooth upper bounds
======================

In this section, we propose an easy-to-evaluate upper bounding function, which provides saw-tooth upper bounds for $q(\mu)$ over $[\underline{\mu},\bar{\mu}]$.

Let $\cup_{i=1}^k[\mu_i,\mu_{i+1}]$ be a partition of $[\underline{\mu},\bar{\mu}]$, where $\mu_1=\underline{\mu}$ and $\mu_{k+1}=\bar{\mu}$. Consider the interval $[\mu_i,\mu_{i+1}]$ with $i\le k-1$ (so that $\mu_{i+1}<\bar{\mu}$).
Solve $({\rm SD}_{\mu})$ with $\mu=\mu_i,\mu_{i+1}$ and denote the optimal solutions by $(\nu_i,\eta_i)$ and $(\nu_{i+1},\eta_{i+1})$, respectively. Then, we have $\eta_{i}\ge 0$, $\eta_{i+1}\ge0$, and $$q(\mu_{i})=\mu_{i}+\nu_{i},~ q(\mu_{i+1})=\mu_{i+1}+\nu_{i+1}.$$ For any $\mu\in[\mu_i,\mu_{i+1}]$, it follows from the strong duality that $$\begin{aligned} q(\mu)&=&\mu+ \min_{\eta\ge0}\max_{\|x\|= 1} x^TDx+\eta(x^T(B-\mu W)x)\nonumber\\ &\le& \mu+ \max_{\|x\|= 1} x^TDx+\eta_i(x^T(B-\mu W)x)\nonumber\\ &=& \mu_i+ \max_{\|x\|= 1} \{x^TDx+\eta_i(x^T(B-\mu_i W)x)+\mu-\mu_i +\eta_i(\mu_i-\mu) x^TWx\}\nonumber\\ &\le& q(\mu_i)+\mu-\mu_i +\max_{\|x\|= 1}\eta_i(\mu_i-\mu) x^TWx\nonumber\\ &=& q(\mu_i)+\mu-\mu_i +\eta_i(\mu_i-\mu)\min_{\|x\|= 1}x^TWx\nonumber\\ &=& q(\mu_i)+\mu-\mu_i +\eta_i(\mu_i-\mu)\lambda_{\min}(W) \label{ub1}\\ &:=& q_1(\mu). \label{ub11}\end{aligned}$$ Here, the fourth line bounds the maximum of a sum by the sum of the maxima, and the fifth line uses the fact that $\eta_i(\mu_i-\mu)\le0$ for $\mu\in[\mu_i,\mu_{i+1}]$. Similarly, we have $$q(\mu) \le q(\mu_{i+1})+\mu-\mu_{i+1} +\eta_{i+1}(\mu_{i+1}-\mu)\lambda_{\max}(W) :=q_2(\mu). \label{ub2}$$ Now, we obtain an upper bounding function of $q(\mu)$ over $[\mu_i,\mu_{i+1}]$: $$\bar{q}(\mu)= \min\{q_1(\mu),q_2(\mu)\}, \label{ub}$$ which is a concave function, as $q_1(\mu)$ and $q_2(\mu)$ are both linear functions. It provides the following upper bound of $q(\mu)$ over $[\mu_i,\mu_{i+1}]$: $$U_i=\max_{\mu\in[\mu_i,\mu_{i+1}]} \bar{q}(\mu). \label{ubb}$$ Problem (\[ubb\]) is a convex program. Moreover, it has a closed-form solution.

\[thm:b\] Under the assumption $\mu_{i+1}<\bar{\mu}$, an upper bound of $q(\mu)$ over $[\mu_i,\mu_{i+1}]$ is given by $$\begin{aligned} U_i=\left\{ \begin{array}{ll} q(\mu_i),& {\rm if}~\eta_i\lambda_{\min}(W)\ge1 \label{bise1}\\ q(\mu_{i+1}),& {\rm if}~\eta_{i+1}\lambda_{\max}(W)\le1 \label{bise2}\\ q_1(\mu_0),& {\rm otherwise,} \label{adapt}\\ \end{array} \right.\end{aligned}$$ where $$\mu_0=\frac{q(\mu_{i+1})-\mu_{i+1} +\eta_{i+1}\mu_{i+1}\lambda_{\max}(W)- q(\mu_i)+\mu_i -\eta_i\mu_i\lambda_{\min}(W)} {\eta_{i+1}\lambda_{\max}(W)- \eta_i\lambda_{\min}(W)}.$$

The trivial proof is omitted, as both $q_1(\mu)$ and $q_2(\mu)$ are linear functions and $\mu_0$ is the unique solution of the equation $q_1(\mu)=q_2(\mu)$.

Finally, we also have a simple estimation of the upper bound $U_i$.

\[thm:b2\] For any $\mu\ge \mu_i$, we have $$q(\mu) \le q(\mu_{i})+\mu-\mu_{i}. \label{ub:es}$$

The inequality (\[ub:es\]) follows from the definition of $q_1(\mu)$ (\[ub11\]) and the facts that $\eta_i\ge0$ and $\lambda_{\min}(W)>0$ (as $W\succ 0$). The estimation (\[ub:es\]) is independent of $\mu_{i+1}$. Therefore, it remains valid in the extended case $\mu_{i+1}=\bar{\mu}$.

A saw-tooth branch-and-bound algorithm
======================================

In this section, we first propose a branch-and-bound algorithm based on the new saw-tooth-curve upper bounds and then establish the worst-case computational complexity of the new algorithm.

Our algorithm works on a list $$\underline{\mu}=\mu_1<\cdots<\mu_{k+1}=\bar{\mu}. \label{list}$$ The initial list is $\underline{\mu}=\mu_1<\mu_2=\bar{\mu}$. In each iteration, we first select the interval $[\mu_i,\mu_{i+1}]$ from the $\{\mu\}$-list that provides the maximal upper bound $U_i$ (\[ubb\]). Then, we insert the mid-point $\frac{\mu_i+\mu_{i+1}}{2}$ into the $\{\mu\}$-list (\[list\]) and increase $k$ by one. The process is repeated until the stopping criterion is reached. The detailed algorithm is presented as follows.

**The saw-tooth branch-and-bound algorithm**

Step 0. Given the approximation error $\epsilon>0$.
Compute $\underline{\mu}$, $\bar{\mu}$ (\[mu-bar\]), $\lambda_{\min}(W)$ and $\lambda_{\max}(W)$. Initialize the iteration number $k=1$. Let $\mu_1=\underline{\mu}$. Solve (${\rm SD}_{\mu_1}$) to obtain the optimal solution $(\nu_1,\eta_1)$. Then, $q(\mu_{1})=\mu_1+\nu_1$; let $LB=q(\mu_{1})$ and $\mu^*=\mu_1$. Let $\mu_2=\bar{\mu}-\epsilon$. If $\mu_2\le \underline{\mu}$, stop and return $\mu^*$ as an approximate maximizer. Otherwise, solve (${\rm SD}_{\mu_2}$) to obtain the optimal solution $(\nu_2,\eta_2)$. Then, $q(\mu_{2})=\mu_2+\nu_2$. If $q(\mu_{2})>LB$, update $LB=q(\mu_{2})$ and $\mu^*=\mu_2$. Set $k=2$ and $S=\emptyset$.

Step 1. Let $\tilde \mu= \frac{1}{2}(\mu_{1}+\mu_{2})$. Solve (${\rm SD}_{\tilde\mu}$) and obtain the optimal solution $(\tilde\nu,\tilde\eta)$. Then, $q(\tilde\mu)=\tilde\mu+\tilde\nu$. If $q(\tilde \mu) > LB$, update $LB= q(\tilde \mu)$ and $\mu^*=\tilde \mu$.

Step 2. According to Theorem \[thm:b\], compute the upper bounds: $$\begin{aligned} UB_1= \max_{\mu\in [\mu_{1},\tilde \mu]} \bar{q}(\mu),~ UB_2= \max_{\mu\in [\tilde \mu,\mu_{2}]} \bar{q}(\mu).\end{aligned}$$ Update $S=S\cup\{(UB_1,\mu_{1},\tilde \mu)\} \cup\{(UB_2,\tilde \mu,\mu_{2})\}$ and $k=k+1$.

Step 3. Find $(UB^*,\mu_1,\mu_2)= \arg \max\limits_{(t,*,*)\in S} t$. If $UB^*\leq LB+\epsilon$, stop and return $\mu^*$ as an approximate maximizer. Otherwise, update $S=S\setminus\{(UB^*,\mu_1,\mu_2)\}$ and go to Step 1.

Theoretically, we can show that our new algorithm returns an $\epsilon$-approximation optimal solution of (P$_1$) in at most $O(\frac{1}{\epsilon})$ iterations. Here, we call $\mu^*$ an $\epsilon$-approximation optimal solution of (P$_1$) if it is feasible and satisfies $$v({\rm P}_1)\ge q(\mu^*) \ge v({\rm P_1})-\epsilon.$$

\[thm:comp\] The above algorithm terminates in at most $\left\lceil\frac{\bar{\mu}-\underline{\mu}} {\epsilon}\right\rceil$ steps and returns an $\epsilon$-approximation optimal solution of (P$_1$).

If the algorithm terminates at Step 0, that is, $$\bar{\mu}-\epsilon \le \underline{\mu},$$ then for any $\mu\in[\underline{\mu},\bar{\mu}]$, it follows from the inequality (\[ub:es\]) in Theorem \[thm:b2\] that $$q(\mu)\le q(\underline{\mu})+\mu-\underline{\mu} \le q(\underline{\mu})+\bar{\mu}-\underline{\mu} \le q(\underline{\mu})+\epsilon.$$ Therefore, we have $$q(\mu^*)=q(\underline{\mu})\ge \max_{\mu\in[\underline{\mu},\bar{\mu}]} q(\mu)-\epsilon =v({\rm P}_1)-\epsilon.$$ It follows that $\mu^*=\underline{\mu}$ is an $\epsilon$-approximation optimal solution of (P$_1$).

Now, we suppose that the algorithm does not terminate at Step 0. Consider $(UB,\mu_1,\mu_{2})\in S$ in the $k$-th iteration of the algorithm. If $UB<UB^*$, then the interval $[\mu_1,\mu_{2}]$ will not be selected for partitioning. In the following, we assume $UB=UB^*$. According to the inequality (\[ub:es\]) in Theorem \[thm:b2\], we have $$UB\le q(\mu_1)+\mu_2-\mu_1.$$ Since $UB=UB^*$ and $q(\mu_1)\le LB$, according to the stopping criterion, the algorithm terminates when $$\mu_2-\mu_1\le \epsilon.$$ Therefore, there are at most $\left\lceil\frac{\bar{\mu}-\underline\mu}{\epsilon}\right\rceil$ elements in $S$. Since the number of elements of $S$ increases by one in each iteration, the algorithm stops in at most $\left\lceil\frac{\bar{\mu}-\underline\mu} {\epsilon}\right\rceil$ steps.

Let $\mu^*$ be the approximation solution returned by the algorithm. We have $$UB^*\leq q(\mu^*)+\epsilon. \label{ULB}$$
To show that $\mu^*$ is an $\epsilon$-approximation optimal solution of (P$_1$), it is sufficient to prove that $$q(\mu^*)\ge v({\rm P}_1)-\epsilon.\label{appr}$$ Let $\hat{\mu}=\bar{\mu}-\epsilon>\underline{\mu}$. According to the inequality (\[ub:es\]) in Theorem \[thm:b2\], for any $\mu\in[\hat{\mu},\bar{\mu}]$, we obtain $$q(\mu)\le q(\hat\mu)+\mu-\hat{\mu}\le q(\hat\mu)+\bar\mu-\hat{\mu}= q(\hat\mu)+\epsilon.$$ Therefore, we have $$\begin{aligned} v({\rm P}_1)&\le& \max\{UB^*,\max\limits_{\mu\in[\hat{\mu},\bar{\mu}]} q(\mu)\}\nonumber\\ &\le& \max\{UB^*, q(\hat\mu)+\epsilon \}\nonumber\\ &\le&q(\mu^*)+\epsilon,\label{eqq}\end{aligned}$$ where the last inequality (\[eqq\]) follows from (\[ULB\]) and the fact that $q(\hat\mu)=q(\mu_2)\le LB=q(\mu^*)$. Then, we obtain (\[appr\]). The proof is complete.

Computational Experiments {#sec:3}
=========================

We test the new branch-and-bound algorithm for solving (P$_1$) on the same numerical examples as in [@NRX]. The SDP subproblems $({\rm SD}_{\mu})$ are solved by SDPT3 within CVX [@Boyd]. Since there is no unified stopping criterion in the “two-stage” heuristic algorithm [@NRX], we just report the number of function evaluations (i.e., solving the SDP subproblems) in the first stage, with the setting $\delta=0.05$ used in [@NRX]. For our algorithm, we set $\epsilon=1e-5$.

The first example is taken from \[[@Hong], Example 3.2\]. It has many local non-global maximizers.

**Example 1.** \[exam3\] Let $B=\left(\begin{matrix}2.3969& 0.4651 &4.6392\\ 0.4651& 5.4401& 0.7838\\ 4.6392& 0.7838 &10.1741\end{matrix}\right),\\ W= \left(\begin{matrix}0.8077& 0.8163& 1.0970\\ 0.8163 &4.1942& 0.8457\\ 1.0970& 0.8457& 1.8810\end{matrix}\right), D=\left(\begin{matrix}3.9104& -0.9011& -2.0128\\ -0.9011& 0.9636& 0.6102\\-2.0128 & 0.6102& 1.0908\end{matrix}\right).$

In this case, $[\underline{\mu},\bar{\mu}]=[0.9882,6.7322]$. The “two-stage” algorithm [@NRX] gives an approximation solution $\mu^*=6.5952.$ The number of function evaluations in the first stage is $116$. Our algorithm returns an $\epsilon$-approximation optimal solution, $\mu^*=6.5952$, in $141$ iterations.

The second example in [@NRX] is taken from \[[@Hong], Example 3.1\], where the optimal solution of $({\rm P}_1)$ is achieved at the right-hand side end-point $\bar{\mu}$.

**Example 2.** \[exam2\] $B={\rm diag}(1, 9,2),W=D={\rm diag}(5,2,3).$

In this case, $[\underline{\mu},\bar{\mu}]=[0.2,4.5]$. The number of function evaluations in the first stage of the “two-stage” algorithm [@NRX] is $87$, while our algorithm finds $\mu^*=4.5$ in $2$ iterations.

**Example 3.** \[exam4\] Let $$B=\left(\begin{matrix}1& 2 &3&1\\ 2&5&4&-1\\3&4&0&1\\1&-1&1&6\end{matrix}\right), W= {\rm diag}(2 ,1,5,10 ), D=\left(\begin{matrix}5&-1&0&3\\-1&9&1&0\\0&1&-2&0\\3&0&0&8\end{matrix}\right).$$

In this case, $[\underline{\mu},\bar{\mu}]=[-0.8241,6.0647].$ The “two-stage” algorithm [@NRX] gives an approximation solution $\mu^*=5.8748.$ The number of function evaluations in the first stage is $139$. Our algorithm returns an $\epsilon$-approximation optimal solution, $\mu^*=5.8821$, in $35$ iterations.

**Example 4.** \[exam5\] Let $n=10, B={\rm diag}(1, 2, 8, 7, 9, 3, 10, 2, -1, 6),\\ W={\rm diag}( 9, 8, 7, 6, 5, 4, 3, 2, 1, 10), D={\rm diag}(5, 20, 3, 4, 8, -1, 0, 6, 32, 10).$

The searching interval is $[\underline{\mu},\bar{\mu}]=[-1,3.3333].$ The optimal solution is the left-hand side end-point $-1$. The number of function evaluations in the first stage is $88$. Our algorithm returns an $\epsilon$-approximation optimal solution, $\mu^*=-1$, in $18$ iterations.
**Example 5.** \[exam1\] Let $n=20,$\
$B={\rm diag}(1, 2, 20, 3, 50, 4, 6, 7, 8, 9, 100, 2, 3, 4, 5, 6, 7, 0, 10, 9);$\
$W={\rm diag}(100, 1, 2, 30, 5, 7, 9, 7, 8, 9, 1, 2, 30, 1, 50, 8, 1, 10, 10, 9);$\
$D={\rm diag}(0, 1000, 20, 2, 5, 6, 7, 9, 50, 3, 4, 5, 100, 5, 2, 200, 4, 5, 9, 21).$

The searching interval of this example is $[\underline{\mu},\bar{\mu}]=[0,100].$ The “two-stage” algorithm [@NRX] gives an approximation solution $\mu^*=2.0029.$ The number of function evaluations in the first stage is $2001$. Our algorithm returns an $\epsilon$-approximation optimal solution, $\mu^*=1.9999$, in $22$ iterations.

On Examples 2-5 reported above, our algorithm highly outperforms the “two-stage” algorithm [@NRX], and for Example 1 it is also competitive. Notice that our algorithm is an exact algorithm, while the “two-stage” algorithm [@NRX] is heuristic.

Finally, we test more examples where the data are chosen randomly as follows. Each component of the symmetric matrices $B$ and $D$ is uniformly distributed in $[-10,10]$. We generate $W,V=LL^T+\delta I$, where $L$ is a randomly generated lower bi-diagonal matrix with each nonzero element uniformly distributed in $[-10,10]$ and $\delta>0$ is a constant number to guarantee the positive definiteness of $W$ and $V$. For each dimension varying from $30$ to $200$, we independently run the “two-stage” algorithm [@NRX] and our new algorithm ten times and report in Table \[tab\] the average numerical results, including the time in seconds and the number of iterations. It follows from the limited numerical results that our new global optimization algorithm highly outperforms the “two-stage” heuristic algorithm.

   $n$    “two-stage” [@NRX]              new algorithm
          time(s)      iter.              time(s)      iter.
  -----   ---------    ------             ---------    ------
    30      58.84      233.6                11.93      50.1
    50      98.19      320.8                16.80      58.6
    80     192.09      400.9                31.59      68.7
   100     299.23      459.3                44.08      71.4
   120     493.83      536.9                62.52      71.3
   150     915.29      609.4               108.95      75.8
   180    1519.09      634.0               186.84      81.2
   200    2118.18      672.2               262.78      86.6

  : The average numerical results of ten runs for solving (P) with different $n$.[]{data-label="tab"}

Conclusions
===========

The recent SDP-based heuristic algorithm for maximizing the sum of two generalized Rayleigh quotients (SRQ) is based on the one-dimensional parametric reformulation, where each function evaluation corresponds to solving a semi-definite programming (SDP) subproblem. In this paper, we propose an efficient branch-and-bound algorithm to globally solve (SRQ) based on the newly developed saw-tooth overestimation approach. It is shown to find an $\epsilon$-approximation optimal solution of (SRQ) in at most O$\left(\frac{1}{\epsilon}\right)$ iterations. Numerical results demonstrate that it is much more efficient than the recent SDP-based heuristic algorithm. (A minimal code sketch of this branch-and-bound scheme is given after this entry.)

References
==========

Antoniou, A., Lu, W.S.: Practical Optimization: Algorithms and Engineering Applications. Springer Science+Business Media, LLC (2007)

Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming: Theory and Algorithms, Third Edition. John Wiley and Sons, Inc., Hoboken, New Jersey (2006)

Dundar, M.M., Fung, G., Bi, J., Sandilya, S., Rao, B.: Sparse Fisher discriminant analysis for computer aided detection. Proceedings of SIAM International Conference on Data Mining (2005)

Fung, E., Ng, M.K.: On sparse Fisher discriminant method for microarray data analysis. Bioinformation 2, 230-234 (2007)
Grant, M., Boyd, S.: CVX: MATLAB software for disciplined convex programming, Version 2.1. http://cvxr.com/cvx (2015)

Luenberger, D.G., Ye, Y.: Linear and Nonlinear Programming, Third Edition. Springer Science+Business Media, LLC (2008)

Nguyen, V.B., Sheu, R.L., Xia, Y.: Maximizing the sum of a generalized Rayleigh quotient and another Rayleigh quotient on the unit sphere via semidefinite programming. J. Glob. Optim. 64(2), 399-416 (2016)

Pólik, I., Terlaky, T.: A survey of the S-lemma. SIAM Review 49(3), 371-418 (2007)

Polyak, B.T.: Convexity of quadratic transformations and its use in control and optimization. J. Optimiz. Theory App. 99(3), 553-583 (1998)

Primolevo, G., Simeone, O., Spagnolini, U.: Towards a joint optimization of scheduling and beamforming for MIMO downlink. IEEE Ninth International Symposium on Spread Spectrum Techniques and Applications, 493-497 (2006)

Sturm, J.F., Zhang, S.: On cones of nonnegative quadratic functions. Math. Oper. Res. 28, 246-267 (2003)

Wu, M.C., Zhang, L.S., Wang, Z.X., Christiani, D.C., Lin, X.H.: Sparse linear discriminant analysis for simultaneous testing for the significance of a gene set/pathway and gene selection. Bioinformatics 25, 1145-1151 (2009)

Ye, Y., Zhang, S.Z.: New results on quadratic minimization. SIAM J. Optim. 14(1), 245-267 (2003)

Zhang, L.H.: On optimizing the sum of the Rayleigh quotient and the generalized Rayleigh quotient on the unit sphere. Comput. Optim. Appl. 54, 111-139 (2013)

Zhang, L.H.: On a self-consistent-field-like iteration for maximizing the sum of the Rayleigh quotients. J. Comput. Appl. Math. 257, 14-28 (2014)

Zhang, L.H., Yang, W.H., Liao, L.Z.: A note on the trace quotient problem. Optim. Lett. 8, 1637-1645 (2014)

[^1]: This research was supported by National Natural Science Foundation of China under grants 11471325 and 11571029. | Mid | [
0.631732168850072,
27.125,
15.8125
] |
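To make the preceding paper's scheme concrete, here is a minimal Python sketch of the saw-tooth branch-and-bound, written on the assumption that CVXPY with the SCS solver is available for the dual SDP subproblems (the paper itself used SDPT3 within CVX in MATLAB). The names `solve_sd`, `upper_bound`, and `saw_tooth_bb` are illustrative, not from the paper, and the recovery of the optimal vector $x^*$ from the dual multipliers is omitted.

```python
# Minimal sketch of the saw-tooth branch-and-bound for (P_1), assuming
# CVXPY/SCS is available for the dual SDP subproblems (SD_mu).
import heapq

import cvxpy as cp
import numpy as np


def solve_sd(mu, B, W, D):
    """Evaluate q(mu) = mu + g(mu) via the dual SDP (SD_mu).

    Returns (q(mu), eta), where eta is the optimal dual multiplier.
    B, W, D are symmetric, so the matrix expression below is symmetric.
    """
    n = D.shape[0]
    nu = cp.Variable()
    eta = cp.Variable(nonneg=True)
    constraints = [D - nu * np.eye(n) + eta * (B - mu * W) << 0]
    cp.Problem(cp.Minimize(nu), constraints).solve(solver=cp.SCS)
    return mu + nu.value, eta.value


def upper_bound(m1, q1, e1, m2, q2, e2, lmin, lmax):
    """Closed-form max over [m1, m2] of min{q_1-line, q_2-line} (Theorem on U_i)."""
    s1 = 1.0 - e1 * lmin  # slope of the left overestimating line q_1
    s2 = 1.0 - e2 * lmax  # slope of the right overestimating line q_2
    if s1 <= 0:           # q_1 non-increasing: maximum attained at m1
        return q1
    if s2 >= 0:           # q_2 non-decreasing: maximum attained at m2
        return q2
    mu0 = (q2 - s2 * m2 - q1 + s1 * m1) / (s1 - s2)  # q_1(mu0) = q_2(mu0)
    return q1 + s1 * (mu0 - m1)


def saw_tooth_bb(B, W, D, eps=1e-5):
    """Return an eps-approximation maximizer of q(mu) and its value."""
    w_eigs = np.linalg.eigvalsh(W)
    lmin, lmax = w_eigs[0], w_eigs[-1]
    pencil = np.linalg.eigvals(np.linalg.solve(W, B)).real  # pencil (B, W)
    mu_lo, mu_hi = pencil.min(), pencil.max() - eps         # stay left of mu_bar
    q_lo, e_lo = solve_sd(mu_lo, B, W, D)
    best_q, best_mu = q_lo, mu_lo
    if mu_hi <= mu_lo:                                       # Step 0 exit
        return best_mu, best_q
    q_hi, e_hi = solve_sd(mu_hi, B, W, D)
    if q_hi > best_q:
        best_q, best_mu = q_hi, mu_hi
    # Max-heap (via negated keys) of intervals ordered by their upper bound.
    box = (mu_lo, q_lo, e_lo, mu_hi, q_hi, e_hi)
    heap = [(-upper_bound(*box, lmin, lmax), box)]
    while heap:
        neg_ub, (m1, q1, e1, m2, q2, e2) = heapq.heappop(heap)
        if -neg_ub <= best_q + eps:   # stopping criterion UB* <= LB + eps
            break
        mid = 0.5 * (m1 + m2)         # Step 1: bisect the chosen interval
        q_mid, e_mid = solve_sd(mid, B, W, D)
        if q_mid > best_q:
            best_q, best_mu = q_mid, mid
        for child in ((m1, q1, e1, mid, q_mid, e_mid),
                      (mid, q_mid, e_mid, m2, q2, e2)):
            heapq.heappush(heap, (-upper_bound(*child, lmin, lmax), child))
    return best_mu, best_q
```

The heap plays the role of the set $S$: each pop either triggers termination or bisects the interval carrying the currently largest saw-tooth bound, exactly as in Steps 1-3 of the algorithm.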
The Senegal international could make his Blues debut as early as tomorrow, when Chelsea travel to Southampton in the FA Cup.

Florent Malouda could be leaving Chelsea (Picture: Daily Mail)

And it could be one in, one out at Stamford Bridge, amid rumours that French strugglers Evian are interested in a move for Chelsea outcast Florent Malouda. The Frenchman has yet to feature for the Blues this season under either Roberto Di Matteo or Rafael Benitez and is keen to play his football elsewhere.

Chelsea's London rivals Tottenham have also been busy in the transfer market today, pipping Arsenal to the signing of highly-rated Schalke midfielder Lewis Holtby on a pre-contract.

Top talent: Lewis Holtby is heading to the Premier League (Picture: AP)

The German Under-21 captain will join up with the Spurs squad at the end of the season.

And while Arsenal missed out on Holtby, they also saw out-of-favour striker Marouane Chamakh depart the club, as he joined West Ham on a loan deal until the end of the season. West Ham also added to their ranks with former Hammers youngster Joe Cole, who finalised his free transfer from Liverpool this afternoon.

Moving on: Marouane Chamakh is off to Upton Park (Picture: Getty)

And Reds' boss Brendan Rodgers admitted the January transfer dealings at Anfield could already be done and dusted. Daniel Sturridge moved to Merseyside earlier this week and Rodgers warned Liverpool fans not to expect much more this month, although he did admit one more player could be brought in before the window shuts.

Joleon Lescott has seen his playing time at Manchester City limited this season (Picture: AFP) | Mid | [
0.546468401486988,
36.75,
30.5
] |
EUGENE, Ore. – TCU will compete in eight events at the 2017 NCAA Outdoor Track & Field Championships at the historic Hayward Field in Eugene, Ore., June 7-10. The Horned Frogs will begin action Wednesday at 6:32 p.m. CT with the men's 4x100-meter relay.

Follow The Frogs

Live results will be available via NCAA.com, and a live stream will be provided through ESPN.

The Events

Men's 4x100 Relay: This year's squad made sure that the 4x100-meter relay would continue to be one of the strongest events for TCU. At the Horned Frogs Invitational, Raymond Bozmans, Emeilo Ferguson, Darrion Flowers and Jalen Miller strung together a season-best time of 39.15, which ranks 18th in the country. Freshman Jostyn Andrews filled in to anchor the relay at the NCAA West Prelims, and the team recorded an eighth-place finish en route to a spot at nationals. TCU still holds the collegiate record in the event with a time of 38.23 (1989). The men's 4x100-meter relay will kick off the action at the NCAA Championships at 6:32 p.m. CT on Wednesday, June 7.

100 Meters: Jalen Miller kept the 100 meters a Horned Frog tradition throughout his junior season. Miller became the third TCU athlete in five years to bring the Big 12 crown back to Fort Worth. The Tunica, Miss., native was seeded third overall after the prelims at 10.19, and in the finals, he took advantage of a strong tailwind and edged out the victory over Texas' Senoj-Jay Givans by three-thousandths of a second with a time of 10.03. Following his gold-medal performance at conference, Miller then clocked a time of 10.08 (w+4.0) in the first round of the NCAA West Prelims before running a 10.20 in the semifinals and moving on to the NCAA Championships with the 11th seed. The 100-meter dash semifinals are set for 7:46 p.m. CT on Wednesday, June 7.

Long Jump/Triple Jump: Scotty Newton will make his third-straight appearance at Hayward Field, having earned his way to nationals every year in his collegiate history at TCU. Newton will represent the Horned Frogs in both the long jump and triple jump. At the NCAA West Prelims, the junior from Bakersfield, Calif., produced a season-best mark of 25 feet 2.5 inches (7.68 meters) in the long jump to take the 12th and final spot to Eugene. In the triple jump, Newton logged the ninth-best mark. At the Big 12 Championship, Newton took home silver in the triple jump with a wind-assisted distance of 53 feet 11 inches (16.43 meters). Newton will hit the jumping pit at 8 p.m. CT on Wednesday for the long jump, and he will return for the triple jump at 7:40 p.m. CT on Friday.

Men's 4x400 Relay: The men's 4x400 relay secured a spot at nationals with a time of 3:06.50 for a 12th-place finish in Austin. In front of the home crowd at the Horned Frogs Invitational, Kevin McClanahan, Darrion Flowers, Jostyn Andrews and Derrick Mokaleng logged a season-best time of 3:06.00, taking home first place. The 4x400 relay will start at 9:48 p.m. CT on Wednesday.

Women's 4x100 Relay: The women's 4x100 relay team of Kayla Heard, Sabrina Moore, Judy Emeodi and Briona Oliver saved their best effort for the perfect opportunity at the NCAA West Prelims, where they ran the third-fastest time of the meet at 44.34 for a season best and a spot at nationals. The squad will run at 6:32 p.m. CT on Thursday, June 8.

100 Hurdles: Brittney Trought, a sophomore from Aledo, Texas, wasted no time in putting her name into the record book in her first season with the Frogs. Trought moved to No. 3 on TCU's all-time list in the outdoor opener with a wind-legal time of 13.31 in the 100 hurdles. Trought went on to take home fourth place at the Big 12 Championship with a finals time of 13.05 (w+4.1). In the prelims, she logged a time of 13.16 (w+2.1). At the NCAA West Prelims, Trought qualified for the semifinals with an eighth-place finish at 13.12 before clocking a time of 13.04 (w+2.8) for a sixth-place finish and a spot at nationals. Trought will compete in the 100 hurdles at 7:32 p.m. CT on Thursday, June 8.

Discus: After opening his outdoor career with a win at the TCU Invitational, freshman Ryan Camp went on to steadily improve in the discus, his main event, throughout the season. Camp peaked at the right time as he produced a season-best mark of 184 feet 7 inches (56.26 meters) when it mattered most at the NCAA West Prelims. His effort at regionals eclipsed his previous personal best by nearly 3 feet and gave him the ninth-best mark of the meet, booking a ticket to nationals. Camp will compete at 7:05 p.m. CT on Friday, June 9. | Mid | [
0.625531914893617,
36.75,
22
] |
Q: Could not load file or assembly System.ValueTuple Version=4.0.3.0

I am getting the following runtime exception, except I think my case is different from the dozen or so I've seen so far, most of which are a couple of years old.

Could not load file or assembly 'System.ValueTuple, Version=4.0.3.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' or one of its dependencies. The system cannot find the file specified.

We had been using System.ValueTuple and ran into these common issues back when they first incorporated the library into the core language. Since then we have removed all references to the old NuGet package, all projects target 4.7, and we have been successfully using the ValueTuple constructs throughout our solution without any problems. Until now: I have a specific call that fails, while other calls that return a ValueTuple succeed, from within the same method. I don't know what the difference could be, since all the calls use custom objects serialized across a SignalR hub through an interface like:

Task<(MyObject myobj, Guid myguid)> GetStuffd(Guid id);

I bumped all our projects to 4.7.2. I removed every binding redirect in every app.config file. Still nothing. And since other ValueTuple calls work from the same project, I don't think I'm on the right track with these kinds of solutions. Any ideas?

A: The problem was actually server-side: removing the binding redirect from my host service easily solved the problem. It's worth noting that a new .NET Standard 2.0 library was the catalyst here. My call down into a .NET Standard class library is what prompted the issue, and this is what was different from the other calls already using ValueTuple. Clearing the binding redirects was indeed the solution after all. (A sketch of the kind of redirect involved is shown after this entry.) | High | [
0.6589861751152071,
35.75,
18.5
] |
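For readers who have not seen one, this is roughly what the offending redirect looks like. The fragment below is a hypothetical app.config sketch rather than the asker's actual file; the version range is illustrative, though the assembly name and public key token come straight from the error message above.

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <!-- Removing a redirect like this from the host service's config
           is what resolved the load failure in the answer above. -->
      <dependentAssembly>
        <assemblyIdentity name="System.ValueTuple"
                          publicKeyToken="cc7b13ffcd2ddd51"
                          culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-4.0.3.0"
                         newVersion="4.0.3.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```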
Philips 9173 Clip On Microphone

Not a single word lost

Maximise your recording options

The clip-on microphone is an omnidirectional condenser microphone for recording situations where discreet and hands-free operation is required. Its high pick-up sensitivity provides excellent recording quality. | Mid | [
0.6504065040650401,
30,
16.125
] |
from __future__ import division if __name__ == "__main__": from bokeh.io import curdoc from bokeh.plotting import Figure from bokeh.models import ColumnDataSource, CustomJS from bokeh.tile_providers import get_provider import rasterio as rio import datashader as ds import datashader.transfer_functions as tf from datashader.colors import Hot def on_dims_change(attr, old, new): update_image() def update_image(): global dims, raster_data dims_data = dims.data if not dims_data['width'] or not dims_data['height']: return xmin = max(dims_data['xmin'][0], raster_data.bounds.left) ymin = max(dims_data['ymin'][0], raster_data.bounds.bottom) xmax = min(dims_data['xmax'][0], raster_data.bounds.right) ymax = min(dims_data['ymax'][0], raster_data.bounds.top) canvas = ds.Canvas(plot_width=dims_data['width'][0], plot_height=dims_data['height'][0], x_range=(xmin, xmax), y_range=(ymin, ymax)) agg = canvas.raster(raster_data) img = tf.shade(agg, cmap=Hot, how='linear') new_data = {} new_data['image'] = [img.data] new_data['x'] = [xmin] new_data['y'] = [ymin] new_data['dh'] = [ymax - ymin] new_data['dw'] = [xmax - xmin] image_source.stream(new_data, 1) # load nyc taxi data path = './data/projected.tif' raster_data = rio.open(path) # manage client-side dimensions dims = ColumnDataSource(data=dict(width=[], height=[], xmin=[], xmax=[], ymin=[], ymax=[])) dims.on_change('data', on_dims_change) dims_jscode = """ var update_dims = function () { var new_data = { height: [plot.frame.height], width: [plot.frame.width], xmin: [plot.x_range.start], ymin: [plot.y_range.start], xmax: [plot.x_range.end], ymax: [plot.y_range.end] }; dims.data = new_data; }; if (typeof throttle != 'undefined' && throttle != null) { clearTimeout(throttle); } throttle = setTimeout(update_dims, 100, "replace"); """ # Create plot ------------------------------- xmin = -8240227.037 ymin = 4974203.152 xmax = -8231283.905 ymax = 4979238.441 path = './data/projected.tif' fig = Figure(x_range=(xmin, xmax), y_range=(ymin, ymax), plot_height=600, plot_width=900, tools='pan,wheel_zoom') fig.background_fill_color = 'black' fig.add_tile(get_provider("STAMEN_TONER"), alpha=0) # used to set axis ranges fig.x_range.callback = CustomJS(code=dims_jscode, args=dict(plot=fig, dims=dims)) fig.y_range.callback = CustomJS(code=dims_jscode, args=dict(plot=fig, dims=dims)) fig.axis.visible = False fig.grid.grid_line_alpha = 0 fig.min_border_left = 0 fig.min_border_right = 0 fig.min_border_top = 0 fig.min_border_bottom = 0 image_source = ColumnDataSource(dict(image=[], x=[], y=[], dw=[], dh=[])) fig.image_rgba(source=image_source, image='image', x='x', y='y', dw='dw', dh='dh', dilate=False) curdoc().add_root(fig) | Mid | [
0.572008113590263,
35.25,
26.375
] |
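One practical note on the Bokeh/Datashader script above: because it registers its objects on `curdoc()` and relies on server-side `on_change` callbacks, it is a Bokeh server application, so it would typically be launched with something like `bokeh serve --show app.py` (the filename is illustrative) rather than executed directly with `python`.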