Is there a listing of cases in which X1 Social Discovery was used, either successfully or unsuccessfully? There are over 300 organizations (many with multiple users) now using X1 Social Discovery. The list includes several dozen law enforcement agencies, about 80 law firms (including 20 of the AMLAW 100) and over 100 eDiscovery and computer forensics consulting firms. We are aware of our software being successfully used in many criminal and civil cases but do not have an official list or any written opinions, although we can provide you with more specific information offline. We do have some very favorable recent peer review in the New York State Bar Journal: MADE FOR EACH OTHER SOCIAL MEDIA AND LITIGATION; New York State Bar Journal; 85-FEB N.Y. St. B.J. 10, February, 2013. We are unaware of our software being unsuccessfully used in any cases, and we have a detailed whitepaper on evidentiary authentication that would serve as a foundation for a legal brief should any challenge arise. Another key point is that there are cases where mere screen shots have been disallowed, but other cases where screen shots have been admitted into evidence. With X1 Social Discovery representing best practices over any other technical process for collection and preservation of social media, and far and away over screen shots, we believe that it is unlikely that the software will even be challenged, let alone successfully.
- [Narrator] Congratulations on completing this learning path. Let's summarize the areas we've covered. Cloud computing is a remote virtual pool of on-demand shared resources offering compute, storage, and network services that can be rapidly deployed at scale. Cloud computing is based on virtualized technology. There are three different cloud deployment models: public, private, and hybrid. There are three main cloud service models: infrastructure as a service, platform as a service, and software as a service. There are many different use cases for cloud computing. Among them are migration of production services, traffic bursting, backup and disaster recovery, web hosting, testing and development, proofs of concept, big data, and data manipulation. Next, we looked at the business impacts of cloud computing. We identified that there will be a number of changes to the general dynamics of the business, in the way in which cloud is approached, and how this brings other changes to your organization. This may include different sales techniques, along with the ability to access and run in-depth business analytics, which may alter business decisions. We also looked at how responsibilities within the organization may change as a result of your migration, and how new processes and procedures will need to be defined to help you manage your new deployments and implementation methods. Following this, we touched on how this directly affects your employees, such as the potential for new roles and opportunities being made available, as well as the possibility of roles becoming redundant. Cloud training is essential to ensure your teams are in a position to implement your chosen cloud strategies. We examined how changes can affect employees, how a cloud migration affects specific teams and individuals, the issues that may arise, and how to offer reassurance to your workforce that this change brings positivity and the opportunity for everyone to be involved.
Next, we looked at how cloud impacts financial changes. Cloud adoption allows for less spend on your capital expenditure and a move towards operational expenditure. Also, billing of your internal departments and customers can be optimized with specific billing reports tied to defined usage of cloud services, which your finance department can centrally manage from the cloud console. There will also be a shift in spend within certain budgets as resources are sourced, used, and deployed. We then looked at contractual obligations as a business when migrating to the cloud, for example your service level agreements. Lastly, we looked at some of the risks involved, starting with legislation, flexibility of contracts, and strategy choices. You will come across a number of your own risks dependent on your industry and business function, but it's key to ensure you perform the necessary risk mitigation against these known risks. We then looked briefly at the agile process, and how agile can help us move forward with cloud projects. Okay, so that completes this learning path, congratulations. Please send any feedback you have to [email protected]. We welcome your comments and questions.
On 31 May 1970, Mexico and the Soviet Union kicked off that year's World Cup before a crowd of 107,000 at Mexico City's Estadio Azteca. The match did not go as the hosts had hoped, ending as a scoreless draw. Controversy erupted even before the players took the pitch, as FIFA scheduled several matches, including the opener, to start at noon. They claimed the early start time was intended to facilitate live television coverage for Europe, but many of the players complained that the noon starts would be too hot and would favor sides from warmer climates. Despite El Tri's home advantage, the Soviets were heavily favored, having advanced to the semi-finals of the previous World Cup, then finished fourth in the 1968 European Championship. Mexico, meanwhile, had never gotten out of the first round in any prior World Cup. On the day, however, neither side managed to threaten the other, making it an anticlimactic start to what would eventually become one of the most celebrated tournaments, thanks to a dynamic Brazil side that won the trophy, their third, after beating Italy 4-1 in the Final. The match did see the World Cup's first-ever tactical substitution, as well as the first-ever yellow card in a World Cup match. Before 1970, substitutions were allowed only in case of injury and, while cautions and expulsions existed earlier, this was the first tournament to use the card system. The first substitute was the USSR's Anatoliy Puzach, who came on for Viktor Serebryanikov in the 46th minute, while the first yellow card - one of five issued in the match - went to Evgeni Lovchev in the 40th minute. On 30 May 1979, Nottingham Forest won their first European trophy, beating Malmö FF in the European Cup Final before a crowd of 57,000 at the Olympiastadion in Munich. It was an incredible accomplishment for Forest, who became only the third English side - after Manchester United and Liverpool - to claim Europe's biggest prize.
Forest were riding a wave of success under manager Brian Clough, who had taken Derby County to the European Cup semifinals six years earlier. After leaving Derby and suffering through a brief spell at Leeds, Clough moved to Forest in January 1975. Though they were in the Second Division at the time, Clough steered them to promotion in 1977, then to the League title in 1978. In the European Cup, Forest advanced with wins over Liverpool, AEK Athens, Grasshopper, and Köln. Malmö, who were also playing in their first European Cup Final, reached it by beating Monaco, Dynamo Kiev, Wisła Kraków, and Austria Vienna. Despite the participation of two relative Cinderella teams, the match itself was anticlimactic. Malmö, dealing with the loss of key players to injury, played a defensive game to slow the English attack. But Forest secured the match's only goal near the end of the first half, as their £1 million signing Trevor Francis - making his first European appearance for Forest - scored a 45th-minute header (pictured) that turned out to be the match-winner. Forest successfully defended their title the next season, beating Hamburg 1-0. On 29 May 1985, as Liverpool and Juventus prepared to play the European Cup Final in Brussels' Heysel Stadium, pre-match rioting resulted in 39 deaths and over 600 injuries. UEFA Chief Executive Lars-Christer Olsson later called it "the darkest hour in the history of the UEFA competitions." The trouble started approximately one hour before the scheduled kick-off time. Behind one of the goals, the opposing fans, who were divided into two sections separated by a narrow strip of unoccupied territory and bordered by chain-link fencing, began to throw bottles and stones over the fences at each other. As the situation grew increasingly hostile, a group of Liverpool supporters charged through and over the fencing into the Juventus enclosure. The Juventus fans retreated against a wall, which collapsed under the pressure. 
Of the 39 people killed, 32 were Juventus fans, while the other seven were neutrals. Upon seeing their fellow supporters attacked, the Juventus fans at the other end of the stadium began rioting and fought with the police for over two hours, even after the match started (officials chose to play the match for fear that cancellation would result in increased violence - Juventus won 1-0 with a 56th-minute penalty scored by their French midfielder Michel Platini). In response, UEFA banned British clubs from European competition for five years, with Liverpool receiving an additional three-year ban (later reduced to one extra year). British police investigated the incident and eventually arrested 27 people for manslaughter. In a trial held in Belgium, 14 people - all Liverpool fans - were given three-year sentences. On 28 May 1888, newly-formed Celtic FC played their first official match, a 5-2 win against Rangers. Newspaper reports from the time state that the match was friendly both in name and spirit, in contrast to what the meeting would become. Celtic were founded the previous November in the Calton district of Glasgow by Brother Walfrid, who chose the name "Celtic" to emphasize the area's Irish heritage. It was a decision that linked the new club with Edinburgh's Hibernian, founded in 1875. Indeed, Celtic borrowed several Hibernian players for the match against Rangers (and would later sign several of those players the following August, to Hibernian's detriment). Accounts of that first Celtic-Rangers match are sparse, but show that Neil McCallum scored Celtic's first goal of the day, and thus their first-ever goal in an official match. The Glasgow derby has since become one of the most hotly-contested rivalries in football, with Celtic and Rangers usually fighting one another for the Scottish league's trophies.
To date, they have played a total of 387 matches in the league, the Scottish Cup, and the Scottish League Cup, with 155 Rangers wins, 139 Celtic wins, and 93 draws. Between them, they have 100 league titles—Rangers have 54, while Celtic have 46, including the last four. Labels: Celtic F.C., Hibernian F.C., Neil McCallum, Rangers F.C. On 27 May 1965, Inter defeated Benfica 1-0 in the European Cup Final before a crowd of 85,000 at the San Siro. It was the second consecutive European Cup for the Italian side, who would not win it again until 2010. At the time, Benfica had themselves recently won two straight European Cups (in 1961 and 1962) and boasted the tournament's joint top scorers in Eusébio and José Augusto Torres (pictured at far right), both with nine goals. But Inter were playing at their home stadium, where, playing the catenaccio system masterminded by their manager Helenio Herrera, they had conceded only one goal throughout that season's competition. The Final thus shaped up as a classic battle of offense against defense. On the day, Inter were helped by the poor weather, with rain slowing both the pitch and the speed of the Portuguese attack. Inter received another lucky break when their Brazilian midfielder Jair da Costa took what appeared to be an easily-handled shot on goal in the 42nd minute. The wet ball slipped through the hands of Benfica keeper Costa Pereira to give Inter a 1-0 lead. That, as it turned out, was all they needed, as Benfica proved unable to crack the Italians' defense. Inter returned to the Final twice more, losing to Celtic in 1967 and Ajax in 1972 before winning their third European Cup/Champions League trophy in 2010 over Bayern Munich. 26 May 1989 - "It's Up For Grabs Now!" On 26 May 1989, Arsenal won the League in dramatic fashion, beating runners-up Liverpool 0-2 at Anfield in the last match of the season.
Despite leading the race for the title earlier that spring, a recent loss to Derby County and a draw with Wimbledon left Arsenal in second place, three points behind Liverpool. Liverpool also had a better goal differential, which meant that Arsenal needed not only to win, but to win by two goals. It seemed an impossible task, as Liverpool had not lost by two or more goals at Anfield all season (and had, in fact, lost there only twice all year). It was an emotional day, with Liverpool still feeling the effects of the Hillsborough disaster from the previous month in which 96 of their supporters died due to overcrowding and police mismanagement. Recognizing that, Arsenal manager George Graham planned to keep the game close for the first half, try to get a goal early in the second half, then push for a second. Which is almost exactly how it played out. After a scoreless first half, Arsenal striker Alan Smith scored a 52nd-minute goal after a Nigel Winterburn free kick. With about 14 minutes left in the match, and Arsenal still leading 1-0, Graham switched the Arsenal formation from a defensive 4-5-1 to a more attacking 4-4-2. Liverpool took advantage of the extra space in midfield to launch several counter-attacks, but could not produce an equalizer. In the second of three minutes of injury time, Arsenal keeper John Lukic rolled the ball out to right back Lee Dixon, who sent a long pass to Smith. Smith lobbed it into the path of a charging Michael Thomas just outside the Liverpool box. Thomas (pictured at right) evaded Liverpool defender Steve Nicol, then chipped the ball over the diving Liverpool keeper, Bruce Grobbelaar. It went into the net with 25 seconds to spare - deciding the title with the final goal in the final minute of the season's final match. Commentator Brian Moore reported the action saying "Thomas, charging through the midfield ... it's up for grabs now ... Thomas, right at the end!"
The match has since been recognized as one of the most dramatic title wins in English history and featured in the 1997 film "Fever Pitch." On 25 May 2005, Liverpool won their fifth European Cup/Champions League trophy, coming back from a 3-0 deficit to beat AC Milan on penalties 3-3 (3-2) before a crowd of 70,000 at Istanbul's Atatürk Olympic Stadium. The win salvaged an otherwise disappointing season for the Reds, who had finished the Premier League season in fifth place after an early FA Cup elimination and a loss to Chelsea in the League Cup Final. The win also allowed Liverpool to compete in the next season's Champions League - their fifth-place League finish was outside the four qualification spots, but UEFA granted them a special exemption to compete in 2005-06 as title holders. Milan were favored to win and, true to form, took an early lead with a volley from captain Paolo Maldini after only 51 seconds (it was the fastest-ever goal in a European Cup/Champions League Final and made the 36-year-old Maldini the competition's oldest-ever goalscorer). Liverpool attacked the Milan area, but were unable to break through the Italians' defense. Liverpool's efforts exposed them to a counter-attack, resulting in two more Milan goals before the break, both from Argentinian striker Hernán Crespo (38', 42'), on loan from Chelsea. Milan's 3-0 lead looked insurmountable, but Liverpool renewed their pressure after the break. They played only three defenders in order to bolster their attack, which paid dividends when they scored three goals in a six-minute period (Gerrard 54', Šmicer 56', Alonso 60') to draw level. Despite Liverpool's weakened back line, Milan were unable to score and the match went to extra time, then to penalty kicks. Milan went first in the shootout and missed their first two kicks - the first went over the bar, while the second was easily saved. Liverpool made their first two, but their third was saved, so that after four kicks, Liverpool were ahead 3-2.
Milan's Ukrainian striker Andriy Shevchenko, who had scored the winning penalty in the 2003 Final, stepped up to take Milan's last kick, knowing that he needed to convert it in order to prevent Liverpool from winning. Unfortunately for Milan, he sent it right down the middle where it was saved by keeper Jerzy Dudek. On 24 May 1972, Scotland's Rangers won their first (and to date only) European trophy, beating Dynamo Moscow 3-2 in the UEFA Cup Winners' Cup Final before 24,000 at Barcelona's Camp Nou. Despite the tournament's name, Rangers (pictured, post-match) are one of five teams to win the competition without actually entering as a cup winner. The tournament, played from the 1960-61 season to the 1998-99 season, was open to the winners of the domestic cup competitions in UEFA's member states. But the defending Scottish Cup champion that year was Celtic, who also qualified for the European Cup by winning the league, so Rangers, as the Scottish Cup runners-up, took the spot. Dynamo, on the other hand, qualified in the traditional manner by winning the 1970 Soviet Union Cup. All eyes in the USSR were on them, as they were the first Soviet team to make it to a European final. Still, only about 400 supporters traveled from Moscow, compared to over 16,000 for Rangers. The Scots dominated the first fifty minutes, going up 3-0, but Dynamo pulled one back at the hour mark, then heightened the tension by scoring a second in the 87th minute. With one minute remaining, thousands of Rangers supporters invaded the pitch, thinking the match was over. The match stopped while the pitch was cleared, then when the final whistle sounded, the Rangers supporters rushed the pitch again, clashing with the police in several altercations. As a result of the supporters' actions, UEFA banned Rangers from the next season's competition, preventing them from defending their title. Labels: FC Dynamo Moscow, Rangers F.C. 
On 23 May 2007, AC Milan won their seventh European Cup/Champions League trophy, beating Liverpool 2-1 at the Olympic Stadium in Athens in a rematch of the 2005 Final. It was a dramatic finish for Milan, who had earlier been barred from competing in the tournament as a result of their involvement in the Serie A match-fixing scandal of 2005-06. But on appeal, the Italian football association allowed Milan to enter the competition in the third qualifying round, rather than directly into the group stage. Both Liverpool and Milan won their groups, but faced difficult roads to the Final. Milan beat Celtic (1-0 agg.), Bayern Munich (4-2 agg.), and Manchester United (5-3 agg.) on their way to the Olympic Stadium, while Liverpool advanced over Barcelona (2-2 agg. - Liverpool won on away goals), PSV (4-0 agg.), and Chelsea (on penalties, 1-1 (4-1)). Despite having two of the tournament's top scorers - Milan's Kaká had a tournament-high 10 goals going into the Final, while Liverpool's Peter Crouch was tied for third with 6 goals - the defenses held strong through most of the first half (Crouch didn't come on until the second half). Milan striker Filippo "Pippo" Inzaghi broke the deadlock with a controversial 45th-minute goal that appeared to deflect off of his arm past keeper Pepe Reina. Liverpool pressed for an equalizer in the second half, but were unable to beat Milan's goalkeeper, Dida. Inzaghi then scored a second goal in the 82nd minute. The match appeared to be won, but Liverpool's Dirk Kuyt made sure the last few minutes were exciting when he found the net in the 89th minute. Liverpool could not muster a second, however, and the match ended as a 2-1 Milan win. On 22 May 1999, forward Mia Hamm scored her 108th goal for the US women's team, making her the all-time leading scorer in international history. The record-setting goal came at the end of the first half in a friendly against Brazil, played at the Citrus Bowl in Orlando, Florida.
The score was tied at 0-0 when teammate Cindy Parlow sent the ball into the path of Hamm in the Brazilians' penalty area. Hamm cut to the right, fought off a defender, then shot the ball through the legs of Brazilian keeper Dida to put the US ahead 1-0. Brazil applied intense pressure in the second half, forcing a handful of acrobatic saves from US keeper Brianna Scurry, but were unable to find the back of the net. The hosts then extended their lead to 2-0 when forward Kristine Lilly received a 72nd-minute corner kick and kneed it home. The US scored the final goal in the 87th minute as Brandi Chastain took a quick throw that caught the Brazilians off guard. The throw went in the box to Lilly, who headed it down to Tiffany Milbrett for a strong volley into the goal. The match was a warm-up for the 1999 World Cup, which opened the next month. The US went on to win their second World Cup trophy, beating China in the Final. Brazil finished in third. It was Hamm's 172nd match for the US. Before her retirement in 2004, she made a total of 275 US appearances and extended her scoring record to 158. That remained the world record until 2013, when Abby Wambach scored her 159th goal (Wambach's current total is 182). On 21 May 2003, Porto won the UEFA Cup, beating Celtic 3-2 in extra time at Seville's Estadio Olímpico. It was the first European honor for Porto manager José Mourinho, who built on the success by winning the Champions League the next season. Under normal circumstances, Celtic would have been heavy favorites. But, at the time of the match, Porto had already secured the Portuguese Liga title with two matches to spare and were completely focused on the Final. Celtic, meanwhile, were tied on points with SPL leader Rangers, but behind on goal differential with one match left. In addition, the day's hot weather forced the teams to play at a slower pace, which also favored Porto.
Porto's midfield general Deco orchestrated a first-half attack that put his side ahead in the 45th minute as midfielder Dmitri Alenichev's shot was parried by Celtic keeper Robert Douglas into the path of Porto's Brazilian forward, Derlei, who drove it home. The lead did not last long, however, as Henrik Larsson - that year's top SPL scorer - equalized with a 47th-minute header. It was his tenth goal of the tournament and his 200th goal for Celtic. Two more quick goals followed, with Alenichev putting Porto ahead once more in the 54th minute, then Larsson finding another equalizer in the 57th minute. The teams were stalemated at 2-2 through the end of regulation, forcing the match into extra time and triggering the silver goal rule. The Final was the first match played under the silver goal rule, which meant that a lead for either side after the first half of extra time would end the match. As it turned out, though, neither team scored in the first period, so they played the full allotment of time. In the 115th minute, Derlei again pounced on a Douglas block to score the goal and seal the win. It was Porto's first European trophy in 16 years, but they would not have to wait as long for the next one, as they beat AS Monaco in the next season's Champions League Final. Celtic, meanwhile, went on to lose the SPL title race to Rangers despite winning their last match 4-0, as Rangers won theirs 6-1. On 20 May 1993, Marseille beat Valenciennes 1-0 to secure their fifth consecutive Ligue 1 title with one match left to play. Later, however, French authorities learned that Marseille had bribed three Valenciennes players and stripped the title from the club. The press labeled the ensuing scandal "L'affaire VA-OM." Marseille were motivated by their upcoming Champions League Final against AC Milan, scheduled for 26 May.
While they were heavily favored to beat Valenciennes anyway, they wanted to guard against injuries and still clinch the win so that they could rest their players in their final league match against title-chasers Paris Saint-Germain. It apparently worked, as Marseille beat Milan 1-0. The investigation revealed that, the night before the match, Marseille player Jean-Jacques Eydelie had offered money to three Valenciennes players in exchange for their agreement that they would not try too hard against Marseille. Eydelie claimed that he was acting under the instruction of the club's general secretary, who in turn claimed that he had been instructed by club president Bernard Tapie (pictured). In turn, Tapie claimed that it was not a bribe, but that instead he had loaned 250,000 francs to one of the Valenciennes players in order to help him start a restaurant. The FFF stripped Marseille of the 1992-93 title and it remains unassigned, as second-place finishers PSG refused to accept it. Tapie served five months in jail, while Eydelie served seventeen days. The Valenciennes players received six-month suspended sentences and a two-year league ban. Both Marseille and Valenciennes were relegated to Ligue 2. UEFA allowed Marseille to keep their Champions League trophy, but barred them from appearing in the next season's competition. On 19 May 1957, Scotland defeated Switzerland 1-2 in a World Cup qualifier in Basel, but they had to do it in shirts borrowed from the Swiss. Ordinarily, the blue shirts of the Scottish national team would have been fine, as the Swiss shirts were red. But, according to Tommy Docherty, who started in the midfield for the Scots that day, the match was televised across Europe in black and white. Without color, officials were concerned that viewers would have difficulty distinguishing between the sides. The Scots, however, had not brought a change kit, so they had to borrow Switzerland's, which used white shirts trimmed in red.
That matter settled, the Swiss took an early lead, going up 1-0 in the 13th minute with a goal from forward Roger Vonlanthen. Scotland, though, battled back against the Swiss and the progressively deteriorating weather to level the match with a 33rd-minute goal from forward Jackie Mudie (pictured). Level at the break, the Scots continued to press in the second half and were rewarded by a 71st-minute match-winner from midfielder Bobby Collins - his first international goal. The win put Scotland at the top of their qualification group. After two more matches (a loss to Spain and another win over the Swiss) they advanced to the World Cup, where they were eliminated in the group stage. On 18 May 1994, AC Milan dismantled Barcelona 4-0 in the UEFA Champions League Final at the Olympic Stadium in Athens. It was Milan's fifth European Cup/Champions League title. The teams looked evenly matched on paper, as both had won their domestic leagues that season and both had advanced from the earlier rounds with ease, winning their groups before cruising through the semifinals. Both also had recent experience in the Finals; Milan finished as runners-up the previous season, while Barcelona won the Final the season before that. If either side had an edge, most considered it to be Barcelona, as Milan were missing key players to injury (Marco van Basten and Gianluigi Lentini) or suspension (captain Franco Baresi). The Italians, under manager Fabio Capello, rose above the circumstances to dominate the match from the beginning. They were led by forward Daniele Massaro, who recorded a brace before half-time (22', 45'). Shortly after the break, forward Dejan Savićević—who had provided the assist for Milan's first goal—chipped the Barça keeper to extend the lead to 3-0 in the 47th minute. 
Barcelona, managed by Johan Cruyff, failed to mount any serious challenge and Milan defender Marcel Desailly—who had played for Marseille in the previous Final and beaten Milan—added a fourth goal in the 59th minute to conclude the day's scoring. On 17 May 2006, Barcelona defeated Arsenal 2-1 in the Champions League Final, played at the Stade de France in Paris. It was the second European Cup/Champions League trophy for the Catalans, who added a third in 2009, a fourth in 2011, and hope to make it five next month. The match was hyped as featuring two of the sport's greatest players at the time - Barcelona's Ronaldinho and Arsenal's Thierry Henry. But the match's first goal was scored by Arsenal defender Sol Campbell, who headed in a 35th-minute free-kick to give the Gunners a surprising lead - surprising because the English side were down to ten men after keeper Jens Lehmann had been sent off in the 7th minute for fouling Barça's Samuel Eto'o outside the box. Despite being a man down, the Gunners held on to their advantage through the remainder of the first half and deep into the second, while still attacking the Barcelona goal. The next goal, however, was Barcelona's, as midfielder Andrés Iniesta played a long pass to Eto'o, who scored a 76th-minute equalizer. Four minutes later, a Barcelona cross found second-half substitute Juliano Belletti, who fired the ball through substitute keeper Manuel Almunia's legs for the lead and the win. Leading up to the match, several rumors circulated about Barcelona's interest in signing Henry. He eventually signed with them in 2007 and went on to win the Champions League with them in 2009. On 16 May 2009, German side Duisburg (pictured) took a giant step toward winning their first UEFA Women's Cup, beating Russia's Zvezda Perm 0-6 in the first leg of the Final at the Central Stadium in Kazan, Russia.
The scoreline was no anomaly; Duisburg had rolled through the earlier rounds in similar fashion, including a 5-1 win over Ukrainian side Naftokhimik and a 5-0 win over Levante, both during the group stage, and an aggregate 1-5 victory over their fellow German side (and defending UEFA Women's Cup champions) Frankfurt in the quarterfinals. The Russians, however, had advanced to the Final with a series of closer matches, never winning by more than two goals. The first leg of the Final stayed close for most of the first half, with Zvezda keeper Nadezhda Baranova denying a 12th-minute Duisburg penalty to keep the match scoreless until the 42nd minute, when Duisburg's Famke Maes scored to put her side up 0-1. Zvezda attacked the Duisburg goal in the second half, but paid the price when Duisburg counterattacked and won another penalty for a second handball. Duisburg captain Inka Grings converted the 64th-minute kick, doubling the visitors' lead. After that, the floodgates opened. Grings scored twice more to complete her hat-trick and also notched an assist on another goal. Maes also scored again for a brace. The 6-goal advantage proved too difficult for Zvezda to overcome in the second leg, which ended as a 1-1 draw six days later in Duisburg, giving the German side their first Cup. Grings finished as the tournament's top scorer with seven goals. On 15 May 1963, Tottenham Hotspur became the first British club to win a European trophy by beating defending champions Atlético Madrid 5-1 in the European Cup Winners' Cup Final. And Spurs were truly a British club - all eleven starters and manager Bill Nicholson were from the Home Nations of England, Scotland, Wales, and Northern Ireland. As its name implies, the Cup Winners' Cup, which was first played in the 1960-61 season, pitted the various winners of the European domestic cup competitions against each other.
Spurs reached the Final with wins over previous finalists Glasgow Rangers in the first round, followed by wins over Czech side Slovan Bratislava and Yugoslavia's OFK Beograd. Atlético's road to the Final was paved with victories over Maltese side Hibernians, Bulgaria's Botev Plovdiv, and Germany's Nuremberg. Played at Rotterdam's Feyenoord Stadium before a crowd of 49,000, the Final was close for about a half. Tottenham forward Jimmy Greaves scored first in the 16th minute, then his fellow forward John White extended the lead to 2-0 in the 35th minute. The Spanish side pulled one back shortly after the break, when forward Enrique Collar converted a 47th-minute penalty kick, but it was all Spurs after that. Forward Terry Dyson restored the two-goal advantage with a 67th-minute strike, then both he and Greaves completed braces (Greaves 80', Dyson 85') to finish the match 5-1. Before UEFA discontinued the Cup Winners' Cup after the 1998-99 season, a handful of British clubs followed Spurs, including West Ham (1965), Manchester City (1970), Chelsea (1971 and 1998), Rangers (1972), Aberdeen (1983), Everton (1985), Manchester United (1991), and Arsenal (1994). Labels: Atlético Madrid, Enrique Collar, Jimmy Greaves, John White, Terry Dyson, Tottenham Hotspur F.C. On 14 May 1938, England opened their European tour by defeating Germany 6-3 at Berlin's Olympic Stadium before a crowd of 110,000, including several high-ranking Nazi officials. But the match is best remembered for the political statement made by the English players, all of whom gave the infamous Nazi salute at the start of the match. The salute was an effect of Britain's appeasement policy at the time, intended to show the Germans that England respected their sovereignty. To the players' credit, when first asked to give the salute, they refused.
The request came from a Football Association official who entered the dressing room as the players were preparing for the match and asked them to give the salute during the playing of the German national anthem. According to outside-right Stanley Matthews, "The dressing room erupted. There was bedlam. All the England players were livid and totally opposed to this, myself included. Everyone was shouting at once. Eddie Hapgood, normally a respected and devoted captain, wagged his finger at the official and told him what he could do with his Nazi salute, which involved putting it where the sun doesn't shine." The official left the dressing room, but returned with a direct order from the British Ambassador to Germany, Sir Nevile Henderson, who instructed the players to give the salute. Henderson informed them that the political relationship between England and Germany at the time was so sensitive that failure to show deference could be the "spark to set Europe alight." Faced with the apparent choice between giving the salute and starting World War II, the English players (pictured above, in the white shirts) raised their arms. Less than sixteen months later, England and Germany were at war. The act drew fierce criticism from the British press and still does. The BBC recently called it "one of England's darkest moments in the sport." On 13 May 2006, FC Basel hosted rivals FC Zürich in the last match for both clubs in the Swiss Super League season. The match - and the league title - were decided by a last-second goal in stoppage time to give Zürich their first league title in 25 years. Basel, who had won the two previous league titles, started the day in first place, three points ahead of Zürich. The visitors, however, had a better goal differential, which meant that a win would push them over Basel into first place in the final league table. It was an ugly match, with both sides committing several hard fouls, and it was close.
By the end of regulation, the score was 1-1, which had the Basel supporters celebrating their imminent third consecutive title. But in the third minute of injury time, the referee awarded Zürich a throw-in. The throw went down the right side of the pitch to Zürich midfielder Florian Stahel, who crossed it into the Basel penalty area where it found his teammate, defender Iulian Filipescu. With only seconds left in the match, Filipescu fired the ball into the net past Basel keeper Pascal Zuberbühler to put Zürich ahead 2-1. The referee ended the match right after the goal. As the Zürich players and team officials celebrated the win, they were attacked by dozens of Basel supporters who poured onto the pitch. For his heroics, Filipescu was singled out by several of the Basel fans, one of whom threw a flare at the Romanian defender. Even after the teams left the stadium, fighting continued between hooligans and local police well into the night. On 12 May 1976, Bayern Munich won their third consecutive European Cup, beating Saint-Étienne 1-0 at Glasgow's Hampden Park. It is the last time that any club has won three straight competitions and only the third time that a club has won more than two consecutive Finals. Saint-Étienne, that season's Ligue 1 champions, had already visited Hampden Park in that year's competition - they beat Rangers there by the score of 1-2 in the second leg of their Second Round meeting with the Scottish club to advance 4-1 on aggregate. As a result, thousands of Scottish supporters turned out to cheer them on in the Final. Combined with the French club's own visiting fans, approximately 45,000 of the 55,000-strong crowd were supporting Saint-Étienne. The Final was a close contest. Bayern thought they had taken an early lead, but Gerd Müller's goal was flagged (incorrectly) as being offside. The French side then had a number of first half opportunities, but could not take advantage. 
A 34th-minute shot from midfielder Dominique Bathenay beat Bayern keeper Sepp Maier, but hit the crossbar. Five minutes later, midfielder Jacques Santini's shot just missed the net, slipping inches wide of the goalpost. The missed chances shook the confidence of the Saint-Étienne players and Bayern took control of the match early in the second half, as midfielder Franz Roth turned a 57th-minute Franz Beckenbauer free kick into the net. The French side attacked with renewed vigor, but were unable to get past the Germans' defense. Bayern's victory matched the accomplishment of Ajax, who won the European Cup three straight times from 1971 to 1973. The only team with a better streak is Real Madrid, who won the first five European Cups from 1956 to 1960. On 11 May 1985, Bradford City's Valley Parade stadium caught fire during their last match of the season, killing 56 spectators and injuring 265 others. The day started triumphantly for Bradford City. They had secured the Division Three title five days earlier with a 2-0 win over Bolton Wanderers, so the League presented the trophy to the club before their match, played against Lincoln City. The presentation brought 11,076 people out to see the match, almost double the season average of 6,610. People in the crowd noticed the first signs of fire under the main stand about five minutes before the break. Later reports described it as a "glowing light," possibly from a dropped match or cigarette that landed on trash and debris that had collected under the stand. The fire spread quickly across the wooden stand and roof, so that, within four minutes, the entire stand was engulfed in flame. The roof dropped burning timbers and other material onto the crowd, some of whom tried to escape through the back of the stand, while others rushed onto the pitch, helped by police officers and Bradford City striker John Hawley, who climbed into the burning stand to help rescue a stranded supporter.
Of the 56 people who died, several succumbed to smoke inhalation, while others were crushed in the panic or were burned. One of the fatalities was 86-year-old Sam Firth, a former chairman of the club. Sadly, prior to the fire, several people had warned the club about the need to replace the wooden stand and roof and to clear the debris from under the stand, but Bradford City had been slow to implement renovations. But having secured promotion, they had just ordered a new steel roof and concrete terracing that would have minimized any damage. The stadium re-opened in December 1986 and remains home to Bradford City. On 10 May 1980, Celtic won their 26th Scottish Cup, beating Rangers 1-0 before a crowd of over 70,000 at Glasgow's Hampden Park. The match itself was overshadowed by the pitch invasion afterward, followed by a riot among the rival supporters. The Final was the last chance for either team to claim a major trophy that season, as both had been eliminated from the League Cup and Aberdeen had won the Premier Division a week earlier, one point ahead of second-place Celtic. Rangers finished back in fifth place. Nevertheless, the Final itself was fairly tame. Neither team created many chances and they were scoreless after 90 minutes. Celtic found the advantage in extra time as forward George McCluskey diverted a Danny McGrain volley past Rangers keeper Peter McCloy. It was the only goal of the match, enough to give Celtic the 1-0 win and the Scottish Cup. As the referee blew the final whistle, a multitude of Celtic supporters rushed onto the pitch. But what started as a victory celebration soon turned into a full-scale riot, as Rangers supporters joined their opposite number on the pitch and several fights broke out. The police tried to separate the two groups, but were hopelessly outnumbered. Afterward, the authorities determined that alcohol had been a major contributor to the violence and banned its sale at league matches. Despite recent pleas to lift the ban, it remains in place.
Labels: 1980 Scottish Cup Final, Celtic F.C., Danny McGrain, George McCluskey, Peter McCloy, Rangers F.C. On 9 May 2001, 126 football fans died at the Accra Sports Stadium, making it Africa's worst ever sports-related disaster. It was one of several such incidents in Africa over a period of less than a month, including similar disasters in South Africa (43 killed on 11 April), the Congo (14 killed on 29 April), and the Ivory Coast (39 killed on 6 May). The fans were in Accra that night to watch a derby match between hosts Hearts of Oak and fellow Accra club Asante Kotoko. The visitors were up 1-0 near the end of the match, but Hearts of Oak scored two late goals to take the lead. With about five minutes left, frustrated Asante Kotoko supporters began ripping seats out of the stands and throwing them onto the pitch. The police responded by firing tear gas into the crowd, causing a panic. Most of the victims were crushed in the ensuing stampede, while a few died from suffocation. Although six policemen were charged with manslaughter, authorities later determined that the tragedy was compounded by the fact that several stadium gates were locked at the time, preventing the crowd from escaping. The Ghanaian government established a special scholarship for children of the victims and also erected a memorial statue at the stadium, which has since been renamed Ohene Djan Stadium.
Has the family spent everything, yet still not been cured? A doctor's advice: six kinds of disease cannot be cured, so don't waste your money. When we fall ill, the first thing we think about is getting rid of the illness as quickly as possible, and with the development of medicine, many diseases that could not be cured in the past can now be cured, thanks for example to penicillin and anti-tuberculosis drugs. Even so, some diseases still cannot be cured, such as AIDS and mid- to late-stage cancer; the drugs developed so far can only delay their progress. Yet when some people suffer from such diseases, they panic and grasp at any treatment on offer, and doing so is very likely to make the illness more serious rather than better. Faced with diseases like these, whose specific causes we genuinely do not yet know and for which no medicine can provide a cure, the most important thing is to keep a good state of mind and cooperate with your doctor's treatment, remembering not to rush desperately from one remedy to another. The first is diabetes, from which more and more people suffer. According to the data, there are some 100 million people with diabetes, and at our current medical level the disease cannot be cured; what we can do in daily life is keep blood sugar under control.
The second is hypertension. Most people with high blood pressure do not know its cause, so those who suffer from it should control their blood pressure in time; in daily life we can manage it through diet, exercise and so on. It is an incurable disease, but with ordinary care it can be kept in check. The third is bone spurs. Many people with a spur know that it can press on the nerves and cause great pain, but the condition itself cannot be cured; the pain is caused by ischemia and lack of oxygen in the surrounding muscle. The fourth is psoriasis. Because of its complexity, the disease is very difficult to treat, and under current medical conditions no drug has been developed to cure it; instead, long-term standardized treatment is used to keep it in remission. The fifth is a chronic allergic inflammation. Current medicine has drugs to control this disease, but it cannot be cured; in fact, as long as we keep it under control in daily life, it need not hold us back. The sixth covers conditions such as autism and Alzheimer's disease, which cannot be cured but can be controlled; in ordinary life we simply need to pay more attention to rest and keep good living habits.
Teach your child to ride a bicycle with a different approach! Learning how to ride a bike is one of the biggest accomplishments in a young child's life. As a parent, you have the responsibility to teach your child to ride correctly and safely. It is also important to keep the learning process fun and free from pressure. The old way of doing it (running behind or alongside your child and pushing him) is an outdated and undue punishment for both you and the frustrated child. A punishment for you, the parent, because you quickly run out of breath (you're not the sprinter you used to be in high school), and a punishment for your child because he doesn't understand why on earth he crashes every time you're getting tired and release your grip of his saddle. He learns that balancing is a scary and stressful ordeal. With a different approach, kids aged 4 to 6 can easily and quickly learn to ride independently, without training wheels and without an adult gripping their saddle from behind. The technique described here is much safer than the old one and has a much higher rate of success, based on the experience of many happy children who learned to ride this way. A balance bicycle can be a great tool to help your child learn how to ride a bicycle correctly. The purpose of a balance bike is to help the child learn proper balance and steering. A balance bicycle does not have pedals, gears, or a crankset and chain. There are also no training wheels on a balance bicycle. Balance bicycles are great beginner bicycles, and they can be used by very young children. Select the right bicycle: it is important to buy a balance bicycle in the correct size. Your child should be able to walk with both feet flat on the ground while sitting in the seat of his balance bicycle. After walking with the balance bicycle for a while, your child will learn how to cruise and steer his bicycle.
This is how it works: remove the training wheels and remove the pedals from your child's bicycle. Lower the saddle so your child can easily place both his feet on the ground while seated. From this point on, your young child takes full control. He is in charge of moving the bicycle, with no help from you, with no pushing or balancing on your part (unless he asks for it). You can compare the bicycle with no pedals to a two-wheel scooter, which kids love and enjoy riding and balancing with no fear. A word on children's capabilities: it may come as a surprise, but little children do have good instincts and common sense, which translate into self-esteem and confidence if they get a chance to use them. In the old way, the child is taught to rely on someone else to balance the bicycle, and not on himself. The adult was the one who controlled the situation, the movement, the speed. The adult was the one preventing crashes and providing a false sense of safety, which collapsed when he let go. This erodes trust and makes the experience scary. With the suggested technique, your young child is the one making the calls. He determines how fast he goes; he is the one moving and balancing, and controlling the bicycle from the get-go; he is the one preventing a crash by simply placing his feet on the ground. This builds trust. Trust in his capabilities. This builds self-esteem and confidence. You explain to your child that all he has to do is create a little movement ahead while balancing. You tell him that he can't fall, because the minute something doesn't feel right, he can always put his feet down on the ground (which is the starting position) and brake. He has to try to hold his feet in the air a little while and keep pushing this way. You choose flat ground (no slopes), no vehicles, and no obstacles. You can also take him to the park and find a grassy field, so if he does crash, it's a softer landing. Long pants are advisable, to prevent scratches when falling.
Use this opportunity to teach your child to wear a helmet, so he can get used to it. After just a couple of lessons your child will get the hang of it. Now he feels the balance, and he has mastered the first and very important skill of balancing. You can now replace the pedals and move the saddle up an inch, so his feet comfortably reach the ground, but not as easily as before. Next, choose a grassy field with a gentle slope, and let your child go down the slope, balancing with his feet on the pedals. After a few runs, he will feel comfortable and add pedaling. It's important you explain to him in advance how to apply the brakes. At this point all you have to do is behold the magic: your child enthusiastically pedals and just can't get enough! Many parents say that after years of painful failures, their child finally got the hang of riding a bicycle with this technique.
Are you the parent of a child with autism or another disability who is frustrated by the special education system? Before you can become a special education teacher, you will need the right academic training. Although revised in 1990 as the Individuals with Disabilities Education Act (IDEA), the law's most comprehensive changes came in 1997. Overidentified students place an unnecessary burden on already limited school resources and divert existing resources from those students who are genuinely in need of them.

Most special education classes focus on the major groups of disabilities, such as autism and Asperger's syndrome, dyslexia, and many other learning disorders. A program may include children with severe disabilities, and those with moderate or mild language difficulties, emotional or cognitive disabilities, or other impairments that hinder learning.

On the first day, it is important that you take your child to school to meet his special education teacher. An advocate should also be willing to bring up the law at IEP meetings if this will benefit the child. It would be another twenty years before this concept was applied to children with handicaps, particularly learning disabilities, trying to receive an education.

10) Give these students opportunities to take initiative in classroom activities. A student may have learning disabilities and need extra attention to help him understand the lessons and keep up with other students.

This will allow you to get a real, first-hand feel for the school so you can decide whether the atmosphere, activities, programs, opportunities, and provisions will meet your child's needs. Special education classes are taught by teachers who are trained and certified in this area of expertise.

There are children with emotional and behavioral issues who need special curricula, and there are more than enough ideas, products, and expertise to solve these challenges, but it all costs money. Whenever possible, children should be with their typical peers and attend their neighborhood schools.

In my experience, many children who have a disability develop negative behavior out of frustration with their teachers. Children with disabilities should, "to the maximum extent appropriate, be educated with children who are not disabled" (20 U.S.C. § 1412).
What date will the parties be signing this document? Indicate the date the parties will be signing this document. THIS SHARE TRANSFER FORM is made on ________. The Transferor hereby transfers to ________ ________ ordinary shares at ₦________ (________) each for a valuable consideration in the undertaking known as ________ (the "Company"), to hold unto the said Transferee, its successor(s) in trade and assignor(s), subject to the same terms and conditions under which the Transferor held the same immediately before due execution hereof, and the Transferee does hereby agree to accept and take the said shares in the Company.
It seems like my cat’s litter box is always wet. Is that abnormal? As more and more cats live exclusively indoors (where they’re safest) more and more are also using the litter box. One of the best things about a litter box is that it allows you to be more aware of your cat’s urine habits. You may notice, for example, that you’re cleaning the box more often. If there is more urine in the litter box, it can sometimes be a bit difficult to tell if a cat is peeing larger volumes or just more often; however, it’s important to differentiate increased frequency from increased volume, since they indicate completely different potential problems and point to different locations in the urinary system. There are conditions that cause increased frequency of small amounts of urine, conditions that result in large volumes of urine and conditions that cause increased attempts to urinate. You may need to keep a close eye on your cat to know for sure. It will also be important to know what is normal for your cat so that changes will be noticeable. One study, reported by DVM 360, indicated that cats produced an average of 28 ml/kg of urine every 24 hours. That equals about one half cup of urine a day for the average 10 pound cat. In general, what goes in must come out. Although minute water losses include tear flow, saliva flow and fluids contained in stools, the majority of water leaves the body of animals as urine. Is your cat peeing a lot or just often? Increased urine frequency (pollakiuria): If your cat is urinating small volumes frequently, he is not peeing too much; in fact, he may not be peeing enough. This condition is called oliguria and refers to decreased urine formation by kidneys or decreased elimination of urine from the body. Voiding frequent, small volumes is most often a sign of bladder irritation associated with sterile, idiopathic inflammation, infection, bladder stones or obstruction. 
Alternatively, increased frequency of a normal volume or increased attempts to urinate are serious signs of urinary problems, indicating possible inflammation or a sense of urgency associated with an inability to empty the bladder because of some form of blockage. Oliguria associated with decreased urine formation by the kidneys may be in compensation for bodily fluid losses elsewhere or it may be pathologic, but is always significant. These conditions should be treated as an emergency and must be evaluated by a veterinarian as soon as possible. Increased volume (polyuria): If your cat is urinating larger than normal volumes, it’s called polyuria. Most of the time excessive urinating is a result of the body’s inability to regulate urine formation. Excess blood glucose, for instance, leaves the body through urine and carries a lot of water with it. Kidney disease often results in an inability to retain fluids and so urine forms more rapidly than normal. Some hormonal diseases such as diabetes or hyperthyroidism result in poor urine concentrating ability, causing too much water to be excreted as urine. Whatever the cause, the result is increased water intake in an effort to balance this excess urine output. How will my veterinarian decide why my cat is peeing excessively? Your veterinarian will always start with a thorough physical examination, but blood and urine tests are needed to evaluate organ function. What should I do if my cat seems to be peeing more than usual? These conditions are always serious and can be life threatening. The sooner your cat is diagnosed, the greater the chances of success. See your veterinarian at once!
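The 28 ml/kg/day figure above can be turned into a quick back-of-the-envelope check for your own cat. A minimal sketch in Python (the rate comes from the study cited above; the pound-to-kilogram and cup conversion constants are standard):

```python
# Estimate normal daily urine output for a cat from the 28 ml/kg/day figure.
LB_TO_KG = 0.453592      # kilograms per pound
ML_PER_US_CUP = 236.588  # milliliters in one US cup
URINE_ML_PER_KG = 28     # average daily output reported in the study

def daily_urine_cups(weight_lb):
    """Expected daily urine volume, in US cups, for a cat weighing weight_lb pounds."""
    weight_kg = weight_lb * LB_TO_KG
    return weight_kg * URINE_ML_PER_KG / ML_PER_US_CUP

# A 10-pound cat works out to roughly half a cup of urine per day,
# matching the article's estimate.
print(round(daily_urine_cups(10), 2))  # → 0.54
```

If your cat's litter clumps suggest a volume well above this for several days, that is the "polyuria" pattern described above and worth mentioning to your veterinarian.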
The tower in sacred geography takes on the symbolism of the Tower. A Tower is the symbolic equivalent of a Database or a Data store. It is thus an accumulation of knowledge. The symbolism in sacred geography is often used in conjunction with a castle or palace. A castle is representative of a person, a palace of a spiritually important person. Thus in geographical terms, if one attaches a tower to one's house it can represent the place where one accumulates knowledge or an indicator that one is a person dedicated to acquiring knowledge - or wisdom - a store of information. There is sometimes the implication that it is spiritual knowledge, but it doesn't have to be. Occasionally the owner chooses the shape to indicate the type of knowledge being acquired. Many poets and writers physically placed towers in their gardens and attached them to their houses, and here these creative people worked and here they kept their libraries of books. They thus combined the symbolic with the actual. Poets like W. B. Yeats have exploited the concept of the tower extensively in their poems, but Yeats went one step further: not only did he produce a book of poems called The Tower in 1928, but he also owned a tower. The Tower was Yeats's first major collection as Nobel Laureate after receiving the Nobel Prize in 1923. It is considered to be one of the poet's most influential volumes and was well received by the public. The title, which the book shares with the second poem, refers to Ballylee Castle, a Norman tower which Yeats purchased and restored in 1917. Yeats Gaelicized the name to Thoor Ballylee, and it has retained the title to this day. Yeats often summered at Thoor Ballylee with his family until 1928. The book includes several of Yeats' most famous poems, including "Sailing to Byzantium," "Leda and the Swan," and "Among School Children." But there is a negative side to tower symbolism.
It can mean a person who accumulates vast amounts of knowledge but not wisdom - a person intent on collecting so-called 'facts' as a means of impressing or towering over others, but with no real wisdom attached to what has been accumulated. A burden of beliefs. Thus if one is going to place a tower next to one's house or in one's garden, it is better to keep it of modest proportions and modest in design; otherwise one is simply saying to the world: my ego is large, and I am accumulating knowledge with the sole intent of obtaining power and money. The same applies to the buildings owned by large companies. Vast towers housing bureaucrats or office workers, or those whose purpose is to make money without any regard for the environment or the morality of their actions, simply proclaim the negativity of tower symbolism when applied to architecture. They are modern-day Towers of Babel. It is interesting that the desire to build ever higher office blocks and housing has only come about in this modern age of materialism: money-making without moral intent and power-seeking without any regard for the environment or for fellow creatures on the planet. Sometimes the symbolism of what has happened does not always come home to people, but there are some events that have a very deep symbolic meaning that bear thinking about.
Real data scientists. Unreal results. There is no blueprint for solving problems that have never been solved before. That's exactly where our data science team excels — and what makes us stand out. Work with data scientists who are passionate about machine learning and know how to apply their knowledge to real-world problems. Our data science team consists of machine learning and statistics practitioners who are also seasoned software engineers, ensuring you get clean, tested, and maintainable code. We take the latest research and apply it to real-world problems, bridging the divide between theory and reality. Turn data into your competitive advantage. We build predictive models that break through the noise and deliver accurate predictions. Using current and historical data, you’ll be able to predict future trends and stay ahead of the curve. Extract meaning from unstructured text with custom language processing and analysis. Using natural language processing (NLP), we can turn your data into insights — whether you want to understand customer feedback better or analyze doctor’s notes on medical records, we can make it happen. Many businesses have a frustrating problem — their most valuable data is siloed in different systems, making it impossible to gather actionable insights. Data warehousing solves this problem by storing data from multiple systems in a centralized environment. With the data stored together in a single location, you can see a complete picture of your business’s health and make data-driven decisions. Calculating model accuracy is a critical part of any machine learning project, yet many models are put into production without proper validation. Our data science team can work with your team to validate pre-existing models, design new models, and establish a training program that will produce viable, accurate results using best-in-class tools and the best technologies for the problem at hand. 
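The model-validation idea described above can be sketched in a few lines. This is a generic illustration of holdout validation, not Very's actual tooling; the function and variable names are hypothetical:

```python
import random

def holdout_accuracy(examples, labels, predict, test_frac=0.25, seed=0):
    """Estimate a model's accuracy on a randomly held-out test split."""
    rng = random.Random(seed)             # fixed seed so the split is reproducible
    indices = list(range(len(examples)))
    rng.shuffle(indices)
    n_test = max(1, int(len(indices) * test_frac))
    test_idx = indices[:n_test]           # held-out examples, never used for fitting
    correct = sum(1 for i in test_idx if predict(examples[i]) == labels[i])
    return correct / n_test

# Toy check: a rule that predicts even/odd parity scores perfectly on parity labels.
numbers = list(range(100))
parity = [n % 2 for n in numbers]
print(holdout_accuracy(numbers, parity, lambda n: n % 2))  # → 1.0
```

The point of the held-out split is exactly the gap the paragraph above describes: a model that looks accurate on the data it was fitted to can still fail on data it has never seen.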
Clean and complete data is the foundation for any machine learning project. During a data quality audit, our data science team dives into your dataset to analyze whether you have the foundations for a successful data science project. If your business processes require employees to do the same task over and over, you can gain massive efficiencies with machine learning. We can automate most decisions that a human can make in one second. Our team can build systems to automate these tasks, freeing your team up to do more impactful work. We build software that can automatically classify image files based on the actual content of the image, such as recognizing products, faces, or other objects in a scene. Using AWS Rekognition and Bayes' Rule, we’ve built robust applications that use facial recognition to identify repeat customers. We use data analysis to uncover structure in your data. By grouping similar entities together, you can profile the attributes of different groups. Get insights into underlying patterns of different groups and how they fit into the larger system. We worked with Hop to develop the world’s first self-serve beer kiosk to be powered by facial recognition technology. "Very’s data scientists came up with suggestions that were in the back of my head but I thought, ‘that’s probably not possible.’ And they made it happen."
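The Bayes' Rule step mentioned in the facial-recognition work above can be illustrated with a small posterior update. The numbers here are made up for illustration; in practice the likelihoods would be calibrated from the recognition service's match scores:

```python
def repeat_customer_posterior(p_match_given_repeat, p_match_given_new, prior_repeat):
    """Bayes' rule: probability a face match really is a repeat customer."""
    numerator = p_match_given_repeat * prior_repeat
    evidence = numerator + p_match_given_new * (1.0 - prior_repeat)
    return numerator / evidence

# Hypothetical numbers: the recognizer fires on 95% of true repeat customers
# and 5% of new faces, and 30% of walk-ins are repeat customers.
posterior = repeat_customer_posterior(0.95, 0.05, 0.30)
print(round(posterior, 3))  # → 0.891
```

Combining the match likelihood with a prior this way keeps a strong recognizer honest when repeat customers are rare: the same 95% match rate yields a much lower posterior if the prior is small.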
Non-commercial modifications of Football Games? I want to make software that manipulates the machine code of the FIFA and FM series to enhance the gameplay of those games. The software would be free and I'd only take donations. One similar program is FMRTE, a run-time editor which allows modifications of FM's memory. As far as I know FMRTE has no troubles even though it is commercial software. So would my software be illegal? I'm not sure if I can provide a really complete answer. Programs written in high-level languages are subject to copyright and are treated as literary works. Under UK law they are not patentable as programs per se, although the end product, e.g. the process achieved by a program, can sometimes be subject to patent protection, although I suspect that does not apply here. And if a program creates as part of its output a quantity of data which could be described as a database, then that database itself can be protected separately as "a collection of independent works, data or other materials which are arranged in a systematic or methodical way, and are individually accessible by electronic or other means" (section 3A of the Copyright, Designs and Patents Act 1988). Therefore anything which copied the source code or database of a program would potentially infringe copyright. But you say you want to create something which operates on the machine code and modifies it. From this I assume that the user would need to have their own copy of the game in order to run your app (for want of a better description). On that basis I think it might be acceptable because you would not need to copy the original code. However if you wanted to create a free-standing game which used the FM code together with your own code, then that would almost certainly not be legal unless you obtained a licence from the makers Sports Interactive.
However I notice from their website that you can download an editor from them and so I assume that they anticipate that users of the game will wish to modify aspects of the play, and that this is condoned. I have not had time to find the terms and conditions associated with the end user licence for the game to see what they have to say about you publishing your app, but I suspect that this would not be allowed, even if you are offering it for free. You should read this licence yourself to find out what limitations there are on distributing any derivative version you create, or failing that I suggest you contact the company to ask. My program will not copy anything from the company. It can be considered a 'third-party' tool which requires an original copy of the game so that when the program is launched it will manipulate the copy. Every modification will be on memory so it won't be permanent. I'd definitely consider contacting them. I really appreciate the suggestions.
How do you ever tell the difference? I hear noises that I think are gunshots at least once a week, but my neighbor insists they are always fire crackers. As far as I know, what I have heard has yet to be a gunshot because I never read about shootings in the local news. However, I used to live in places where sounds that I thought were gunshots turned out to actually be gunshots, so I don't want to get complacent just because I live in a "safe" place now. Basically, how do you ever tell the difference and how do you know when to call the cops?
What's the weather like in Anakao, Madagascar in September 2019? The climate in Anakao during September can be summarized as warm and dry. September is in the spring in Anakao and is typically the 4th coldest month of the year. Daytime maximum temperatures average around a muggy 28°C (82°F), whilst at night 16°C (61°F) is normal. On average September is the 2nd driest month of the year in Anakao, with around 15 mm (0.6 inches) of rain, making it a dry time to visit. This rainfall is typically spread over 3 days, although this may vary considerably. On the flip side, this corresponds to an average of 10.4 hours of sunshine per day.
Amazing Ingredients for building muscles. For building muscle you should be focusing on exercises such as deadlifts, squats, bench press and pull-ups. These exercises recruit multiple muscle groups, which stimulates a release of muscle-building hormones such as IGF-1 (Insulin-like Growth Factor 1), Human Growth Hormone and Testosterone. Preferably you should train 3 times a week for around 30-45 minutes each session. You should also do around 4 sets of 10 reps per exercise for maximum muscular hypertrophy. That's enough time to stimulate your muscle-building hormones while avoiding atrophy from too long a session.
Not to be confused with Cover band. A tribute act, tribute band or tribute group is a music group, singer, or musician who specifically plays the music of a well-known music act. Tribute acts include individual performers who mimic the songs and style of an artist, such as Elvis impersonators covering the works of Elvis Presley or groups like The Iron Maidens, an all-female band that pays tribute to Iron Maiden. Many tribute bands, in addition to playing the music of an artist or group, also try to emulate the vocal styles and overall appearance of that group, to make as close an approximation as possible. Others introduce a twist on the original act; for example, Only One Direction have created a theatre show in London's West End around their act. Dread Zeppelin plays Led Zeppelin songs in a reggae style with a lead singer dressed up as Elvis Presley, while Gabba perform the songs of ABBA in the style of the Ramones. Tribute bands usually name themselves based on the original band's name (sometimes with a pun), or on one of their songs or albums. The first tribute acts to emerge may have been Beatles tribute bands, such as The Buggs, who attempted to look and sound like The Beatles while playing their songs. However, one might argue that Elvis impersonators qualify as well. Neil Innes's band "The Rutles", a humorous take on the Beatles, achieved tremendous success with a film, All You Need Is Cash backed by George Harrison. Although initially created to honor the original bands, many tribute bands have grown to have their own fan base. Only One Direction have performed to hundreds of thousands of fans, have completed four UK theatre tours, and debuted in their own show on London's West End in October 2015. 
Those bands and artists that have inspired a cult following in their fans tend to have a significant tribute band presence as well, such as Lynyrd Skynyrd, Black Sabbath, Journey, Genesis, Led Zeppelin, Deep Purple, Styx, Pink Floyd, AC/DC, Iron Maiden, Kiss, Madonna, The Misfits, Queen, Alice in Chains, Grateful Dead, Van Halen, ABBA, The Rolling Stones, The Who, The Cars, R.E.M., Rammstein, Neil Diamond, and Steely Dan. More recently, tribute acts have looked to capitalize on the success of the pop genre, with a heavy focus on newer acts such as One Direction, Adele, Take That, The Wanted, Taylor Swift, Britney Spears and Beyoncé. In 1997, the British journalist Tony Barrell wrote a feature for The Sunday Times about the UK tribute-band scene, which mentioned bands including Pink Fraud, the Pretend Pretenders and Clouded House. In the piece, Barrell asserted that "the main cradle of the tribute band...is Australia. Starved of big names, owing to their reluctance to put Oz on their tour itineraries, Australians were quite unembarrassed about creating home-grown versions. Then, like an airborne seed, one of these bands just happened to drift to Britain." The band in question was the ABBA tribute Björn Again, who staged a successful publicity stunt in the early 1990s, arriving at Heathrow Airport in white one-piece outfits similar to the ones worn by ABBA on the cover of their 1976 album, Arrival. Other tribute acts such as The Beatnix (Beatles), Zeppelin Live, and The Australian Pink Floyd Show have experienced continued popularity for over a decade. In 1998, two men who were in a Blues Brothers tribute band changed their names officially by deed poll to Joliet Jake Blues and Elwood Jake Blues. They also are the only men in the UK to have their sunglasses on in their passport and driving licence photos. In 2000, filmmakers Jeff Economy and Darren Hacker produced the documentary film ...An Incredible Simulation, which examined the tribute band phenomenon. 
Produced separately and independently in 2001 was the documentary Tribute by directors Kris Curry and Rich Fox, which also covered the movement. In 2007, producers Allison Grace and Michelle Metivier produced a four-part documentary series called "Tribute Bands" for Global TV which features tributes to The Police, Queen, Rush and The Tragically Hip. In 2002, the first biography of a tribute band was published by SAF in London. Titled Being John Lennon, the book is a humorous account of life on the road in The Beatles' tribute "Sgt. Pepper's Only Dart Board Band", written by the group's founder, Martin Dimery. In 2003, Mandonna, an all-male tribute to Madonna, was formed in response to the rise of all-female tribute acts such as The Iron Maidens, Lez Zeppelin and AC/DShe. In 2005, original Lynyrd Skynyrd members Ed King (co-author of "Sweet Home Alabama"), drummers Artimus Pyle and Bob Burns, and "Honkettes" Leslie Hawkins and JoJo Billingsley all played with The Saturday Night Special Band, a Lynyrd Skynyrd tribute from New York. This was the first tribute band to be composed of more original members than the current touring lineup of Lynyrd Skynyrd. In 2005, tribute band Beatallica received attention when they were threatened with a lawsuit by Sony Music Entertainment over their unique interpretation of Beatles songs done in a Metallica style. With the help of Metallica drummer/co-founder Lars Ulrich, Beatallica won their legal battle, and still record and tour today. Original Deep Purple drummer Ian Paice has played with members of the Deep Purple tribute band Purpendicular in 2002, 2004 and 2007, and the whole band in December 2008 and March 2012 (which included a surprise appearance of original Deep Purple bassist Roger Glover in Switzerland) on European tours. 
David Brighton (whose act "Space Oddity - David Brighton's Tribute to David Bowie" tours each year) featured in a short 2004 promo film with Bowie himself, together promoting the new Bowie album "Reality". The late soul singer Charles Bradley had considerable success in his own right after starting his career as a James Brown tribute act. Not all tribute acts use the impersonation style. An example is The Muffin Men, who play the music of Frank Zappa in their own style, do not look like, or attempt to look like, the original members, and often tour with former band members. Jimmy Carl Black was a regular in the band, and they have in the past played, recorded, and toured with Ike Willis and Don Preston. "From the Jam" regularly play compositions by Paul Weller and the Jam, featuring bassist Bruce Foxton and previously Rick Buckler. Despite being seen as a tribute act even with an original member, they have recorded original material at Weller's studios. In May 2016, comedy impressionist and musician Stevie Riks' vocals on his take of David Bowie singing "My Way" – Bowie's attempt to write the song for Frank Sinatra, which he re-created on "Life on Mars?" – were featured around the world – on the air, online and in print – by newspapers and trade magazines including Rolling Stone, NME and Billboard. The confusion in the music world began when Riks' vocals were set to pictures of Bowie in a YouTube video from an unknown source, credited as Bowie's "newly discovered, unreleased music", and the story had to be subsequently retracted by the media outlets. Tribute acts are not always welcomed by the original acts they are patterned after. In April 2009, Bon Jovi sued the Los Angeles-based all-female tribute Blonde Jovi for copyright infringement. After temporarily using the name Blonde Jersey, the band reverted to Blonde Jovi before disbanding in February 2010.
In 2012, the first television show dedicated to tribute bands, The Tribute Show, made its debut on Australian cable channel Aurora Community Channel (channel 183) on Foxtel. The show is still on air. From 2013 through 2017, a television series titled The World's Greatest Tribute Bands appeared on the American cable television network AXS TV. Some groups have played and recorded music that parodies a specific artist or band, either by performing the original songs with modified lyrics or doing more general stylistic parodies. Examples include The Rutles and Zombeatles (for The Beatles), Beatallica (for The Beatles and Metallica), Take Fat (for Take That), 2 Live Jews (for 2 Live Crew) and The Pizza Underground (for The Velvet Underground). They Might Be Giants has occasionally played their own tribute band, opening for themselves as Sapphire Bullets and performing the album Flood from start to finish.
Why did Sony kill off its Aibo robot dog? Because Aibo isn't a band or a film, and it can't play or record music or films. Essentially, Sir Howard Stringer, the new boss at the Japanese multinational, is trying to focus the company on areas that will generate cash and, more importantly, profits, and the robotics unit that created Aibo, launched in 1999, was put to sleep in the process. Also killed off as Sony announced shining results was Qrio, a humanoid robot that can walk on two legs but had never been sold commercially; and, separately, the cathode-ray tube and plasma TV operations, which aren't thriving either. Though the Aibo was very popular - 150,000 were sold - it never went on sale in the UK, and the robotics division only produced about $40m to $80m revenue. The cull, announced last week, shocked Aibo fans, who were drooling only last September over the third-generation version of the robot dog with a camera in its eye that could be used to recognise its owner and record images, while its personality could be reprogrammed from lovable to mischievous.
How do I install RealTimes on my computer and mobile devices? Go to http://uk.real.com/getapps.html, scroll down below the photo, and click the Windows icon. If you are using Internet Explorer or Edge: This should prompt you with Run or Save. If you run the file, the setup begins automatically. If you save it, it will be saved to your hard drive, and then you can run it later. If you are using Firefox: This should prompt you with Save File or Cancel. Click Save File, then click the icon on your desktop to run the installer. If you are using Google Chrome: A "RealTimes-RealPlayer.exe" button should appear in the bottom left corner of your browser window. Click it to start the installation process. After installing the program, you will be prompted to create an account or sign in. If you have a paid subscription, signing in will unlock the additional features. Signing in is also necessary for viewing your photos and videos across all devices on which you have downloaded the app. Open the Google Play or iTunes store on your device and search for "RealTimes", then install as you would any other app. You will then be prompted to create an account or sign in. If you have a paid subscription, signing in will unlock the additional features. Signing in is also necessary for viewing your photos and videos across all devices on which you have downloaded the app. Note: You will also be prompted to turn on Auto-backup; turning this on will automatically backup all the photos and videos on your camera roll and store them in RealCloud (you also get an extra 5GB of free RealCloud storage space if you enable this option). RealTimes is available on Roku. Amazon's Kindle Fire and Fire TV, Xbox One, and Windows 8 tablets are not yet compatible with RealTimes, but you can use the previous version, RealPlayer Cloud, on these devices and share videos between these devices and other devices that are running RealTimes.
Are Grapes An Acidic Fruit? I'm supposed to avoid acidic fruits like oranges, lemons, grapefruit, pineapple, and tomatoes because I have acid reflux. I see no mention of grapes. Are they acidic? There is a great overview here that has information about acid reflux diets, prevention and what to avoid. Grapes are on the list of foods to watch because apparently the level of acidity is dependent on where the grapes are grown. For instance, grapes grown in warmer climates tend to have lower acidity than grapes grown in cooler climates. Similarly, the sugar content is the opposite: warmer climates make for sweeter grapes. Now this information is only useful if you are planning to make wine, but it is an interesting factoid. One of the best ways to determine how your acid reflux responds to a particular food is to do an elimination diet. If you start by removing all of the acidic fruits and vegetables, you'll then be able to add grapes back in to see if they cause you pain. In the meantime, bananas have a great reputation for helping to ease reflux pain, so you're more than welcome to add those into your diet. Avoid: citrus fruits, cranberry juice, tomatoes and tomato-based foods, peppers, onions, spicy foods, most dairy products, caffeine, high-fat foods, deli meat, cheese, fatty cuts of meat, alcohol, and chocolate. Be Careful: berries, grapes, garlic, lean meats, yogurt, non-alcoholic beers and wines, sodas, reduced-fat cookies. Enjoy: apples, bananas, broccoli, carrots, peas, green beans, chicken breast, fish, fat-free or low-fat cheeses, whole-grain bread, brown rice.
Authors: Gonzalez-Espinosa, M., Rey-Benayas, J.M., Ramírez-Marcial, N., Huston, M.A. and Golicher, D. Physical factors that may account for regional patterns of plant species diversity remain controversial. We aim to determine the relationship of tree species diversity to environmental factors identifiable at regional scale in the northern Neotropics. We use a high-resolution data set based on herbarium collections of all native tree species known to occur in the highly diverse and physiographically heterogeneous Mexican state of Chiapas. We analyzed 114 grid cells (5 min latitude×5 min longitude each) with 40 or more vouchers. We obtained from maps (scale 1:250 000) data on temperature, rainfall, elevation, and soils, and calculated for each grid cell mean actual evapotranspiration (AET), its ratio during the rainy and dry seasons (RET), average fertility/quality of soils (SFQ), and elevation (coarse-scale topography) variance (SDE). These variables were largely independent of each other, and were entered in multiple regression models to predict species diversity assessed with Simpson's index of concentration. A model that accounted for 41.4% of the total variance in tree diversity showed positive effects of AET and seasonality (RET), whereas SFQ had a negative effect. A curvilinear model described well the relationship between tree diversity and AET (R2=0.45), and an intermediate maximum was detected. The data pattern suggested an asymptotic relationship as well, which was confirmed with a two-part regression. Regression quantiles provided better estimates of the effect of SFQ with the upper envelope of the data (0.85–0.90 quantiles). Minimum diversity at intermediate rainfall values hints at a bimodal model of tree diversity along a rainfall gradient, in opposition to the frequent contention of a positive linear relationship. 
We suggest that broad-scale climatic gradients interact with intraregional landscape-level influences, thus leading to the observed nonlinear responses of tree diversity to environmental predictors.
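For readers unfamiliar with the diversity metric named in the abstract, Simpson's index of concentration for a grid cell can be computed directly from per-species counts. A minimal sketch follows; the counts are invented for illustration and are not the Chiapas voucher data:

```python
# Minimal sketch of Simpson's index of concentration for one grid cell,
# computed from per-species voucher counts. The counts are made up.

def simpson_concentration(counts):
    """Simpson's concentration D = sum(p_i^2), where p_i is the relative
    abundance of species i; diversity is often reported as 1 - D or 1/D."""
    n = sum(counts)
    return sum((c / n) ** 2 for c in counts)

counts = [12, 8, 5, 5, 3, 2]  # vouchers per tree species in one grid cell
d = simpson_concentration(counts)
print(round(d, 3), round(1 - d, 3))  # → 0.221 0.779
```

Low concentration (and hence high complement 1 - D) indicates that vouchers are spread evenly over many species, which is the quantity the regression models in the abstract are predicting.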
St Michael le Belfrey occupies a broad plot on the southern side of Minster Yard, immediately alongside the Minster itself. Saxon burials in Petergate suggest that a church existed on this site as early as the eighth century. One certainly existed by 1294 and was controlled by the Minster's Dean and Chapter for several hundred years. The present church dates from 1525-1536, and 'le Belfrey' refers either to the Minster belfry or to the older church, which probably had a bell tower. The church was a rebuild, although how much of the old fabric remained is uncertain, resulting in mixed Gothic and Renaissance styles. It remains the largest parish church in the city, originally serving a wealthy community of merchants and craftsmen. Guy Fawkes was baptised here in 1570. The 1848 bell tower is a replica of the earliest known tower, first shown in 1705. The west front was fully restored in 1867 after houses attached to the church were demolished. St Wilfrid's Catholic Church is on the northern side of Duncombe Place, overlooking Blake Street within sight of the Minster. Opened in 1864, it was dedicated with the name of the former Anglican church in York which was closed in 1548. St Wilfrid's parish was revived by Catholics in 1742 with the founding of the Little Blake Street Mission. A public chapel opened in 1760 and continued until 1802, when another chapel was built on the present site and used until 1864. St Mary's Abbey stood on the land between Museum Street and Marygate, to the north-west of the Minster. The abbey was founded in 1055 and initially dedicated to St Olave. William II re-founded it in 1088 and began the construction of a Norman church. Eventually the wealth of the abbey prompted the building of a much larger abbey church in 1270, which was completed in 1294. The old church was undermined by the foundation work and was demolished.
Following a dispute and a riot in 1132, a party of reform-minded monks left to establish the Cistercian monastery of Fountains Abbey. The surviving ruins of St Mary's date to the rebuilding programme begun in 1271 and finished by 1294. In November 1538 the Gilbertine Priory in Fishergate surrendered to Henry VIII during the Dissolution. The Benedictine Priory of Holy Trinity followed a month later. St Mary's was closed in 1539, and was quickly destroyed to a great extent. St Martin-le-Grand is on the western side of Coney Street, opposite New Street. Often known as St Martin Coney Street and most famous for the clock overhanging York's main shopping street, the church existed by the eleventh century. It was largely destroyed by bombing on 29 April 1942, but in the 1960s the surviving south aisle and tower were incorporated in a rebuilding on a smaller scale under the direction of the leading church architect, George Pace.
How many people do you know who say they sure wish they could win the lottery, but never buy a lottery ticket? I'm not telling you to run out and buy a lottery ticket, but I'm making the point that if you don't put yourself in a position to get lucky, you never will. Sometimes I have an extremely difficult time writing articles for this site. I will go for hours while I rewrite everything over and over again, and then other times I can't stop myself. This usually happens after I have spent considerable time trying to get the first article done and out, when all of a sudden the ideas just pour out of my mind. Don't think that I'm as disciplined as I should be; I'm far from it, but I do know that the only way I'm going to make it big is taking the time to do the research. When I do, I know that the rewards will come. The payday will be there, and I just keep telling myself how it has worked every time before. That's when something will happen that I declare to be a lucky moment, and everything comes together. Every time I sit down to research and write an article, I'm taking myself one step closer to making yet another opportunity produce huge results. I'm using my writing to reinforce my research and to continually build a stronger foundation. You have an incredible opportunity right now, by creating your own blog. Put your own research into the blog and then you'll open the door to getting lucky, just like I am. You can do it online, or create a document on your computer that you keep private. Remember, you have to put yourself in the right place to leverage luck and find your niche.
another episode of twists & turns. So did LH & CWB plan the car fire to scare MYR into submission? But can I just highlight some really funny bits. A couple of episodes back the real will was read out to LH, DE and the princess, and in that one DE was the one who got the picture. So the one that the lawyer read in today's episode is a fake one. Then this may be the worst plot hole ever because WS, since he is working with Sunni, should have told her! But the comedy does not end there.. look at LH discovering the joy of instant coffee. I think so... but not fully according to LH's wish. CWB managed to play LH and MYR against each other. His facial expression in that moment is just too hilarious.. he was so serious about catching her... and then he tried to hide it by pretending to exercise. Sometimes I wonder if he suffers from multiple personality disorder. @foreverempress WB wasn't there when the real will was being revealed. So does that mean our evil emperor is falling for Sunny..?! Then what about Yu Ra..?! This is worth watching!! Shin Sung Rok has become my fav comedy actor!! OMG what were they doing to YR?? Treating her burns without painkiller? Or throwing acid on it??? I think they were scratching it raw, removing damaged skin. And yes, they were doing it without any anesthetic. (004) Sunny: Are you telling me LH ordered it? LH: But why weren't you even once curious about it? LH: Is it because you know who is the real culprit? Ari: Ya YDH, you did it deliberately?! LH: Are you trying to bargain with me now? WB: Where is my mom's body? WB is going to have all the ladies in this drama fall for him if he keeps saving them like this. I laughed so hard coz Sunny turned her room into a den.. lol the room is packed with instant ramen, rice and coffee. She even has a portable stove to cook..! I rewatched all the funny bits & I don't think I can ever see SSR in the same light again. I am actually quite glad MYR is not dead. In my opinion, dying is too easy.
She’s going to be used to bring down all the villains , and this is the comeuppance she deserves . Article: Naver 'The Last Empress' Jang Na Ra, Lee Elijah counterattack Shin Eun Kyung "Expose the truth" Today's episode is a little cruel.. I think it would be better if they went easy on the torture scenes today. Lee Hyuk is being funny once every episode. He tried to catch Jang Na Ra. It was a culture shock watching him drink mixed coffee too. What's with Lee Hyuk's eyes and smile at the end? I'm curious. How do I wait till next week? Seriously, they directed the torture scenes of skinning off her burn wounds just as it is.. The preview for today's episode had romcom vibes, but I feel nauseated watching this episode.. Seriously.. Didn't this PD receive a warning from the Ethics Community/Korea Communications Standards before?? I don't get why he's doing this... They tortured her by skinning off her burn wounds. They tortured the court lady.. The torture of making Lee Hyuk suffocate from that pillow... This crazy PD... I'm a fool for believing in the preview. The drama has high ratings and is doing well, but what is the reason for them to continue airing these scenes that are cruel and have crossed the line....? The constant exposure of these provocative scenes is the problem. It's not just today, but this has been a problem from time to time.. No matter how interesting the drama is, this isn't right. They are adding more and more stimulating scenes in between the interesting elements. If we keep praising dramas like these, I'm afraid that there will be more stimulating scenes in the future. It's good to praise the drama for having romcom scenes in the development, but those dangerous and overboard scenes have to be criticized to let them know of it. Min Yoo Ra's torture scenes are going to be reported to the Korea Communications Standards. The Kakao Dog seems to know something about dragging down the Empress Dowager. 
Article: Naver 'The Last Empress' Lee Elijah counterattacks, "Shin Eun Kyung added poison in it" Today's episode was interesting. But Joo Dong Min should reduce the stimulating scenes. This isn't Sun Ok Kim's style. The torture scenes are Joo Dong Min's style. If they remove the stimulating scenes, today's episode is interesting. Do they really need that scene? That was so cruel that I felt nauseated. She fainted because of the pain when they skinned her wounds. And they splashed water on her to make her feel pain again. Why did the King become a gag character suddenly? Ah, seriously. If not for Choi Jin Hyuk, I wouldn't want to watch this drama. Today's episode, they have crossed the makjang line and it became gross.... What were the scriptwriter and director thinking? I really don't get them. I'm really angry that our actor Choi is filming a drama like this. He injured and tired himself out. As a fan, I feel bad for him. Just give Lee Elijah an award, SBS. She's so passionate in her acting. She wore patient's clothes in such cold weather and did makeup for her burns. LH is a gag character now because this is just how Kim Soon Ok likes to write some of her villains. It's nothing new for me to see. I have no issue with the torture scenes but they should still be careful because of the Korea Communications Standards. I don't want them to get in trouble. My favorite scene. Could it be foreshadowing his relationship with SN? He called SN cheap but is falling in love with her. I haven't watched last night's episode with subs.. just feel a little bit frustrated.. seems the loveline for CWB and Sunny is not moving.. am I right?
Critics of the current practice of experimenting on animals tend to fall into two groups: abolitionists and reformers. Abolitionists usually rely on the principle that the end does not justify the means. To inflict pain and death on an innocent being is, they maintain, always wrong. They point out that people do not think that the possibility of advancing scientific knowledge justifies taking healthy human beings and inflicting painful deaths on them; similarly, they say, the infliction of suffering on animals cannot be justified by reference to future benefits either for humans or for other animals (Ryder; Regan). A weakness of the abolitionist position is that when the end is sufficiently important, most people think that otherwise unacceptable means are justifiable if there is no other way of achieving the end. People do not approve of telling lies, but most people accept the idea that politicians should tell lies to mislead the enemy when their country is fighting a war that they believe is right. Similarly, if the prospects of finding a cure for cancer depended on a single experiment, most people probably would think that the experiment should be carried out. In response to objections along these lines, some abolitionists argue that although a single experiment, taken in isolation, may appear justifiable, the benefits of such experiments do not outweigh the suffering inflicted by the institution of animal experimentation as a whole. One also must take into account, these abolitionists would say, two other factors: First, a large (if uncertain) proportion of experiments are worthless; second, even if no pain or distress is caused by the experiments, experimental animals typically have been raised in conditions that constitute severe deprivation for beings of their species. The common laboratory rat, for instance, is a highly intelligent animal with a strong urge to explore new surroundings. 
Rats also like to get into small, dark spaces, yet in most laboratories they are kept in bare plastic buckets with a bit of sawdust at the bottom. Such treatment indicates the lack of consideration for the interests of animals that prevails in the world of animal experimentation, and abolitionists doubt that this will ever change as long as people continue to regard laboratory animals primarily as tools for research. Reformers believe that a changed practice of experimenting on animals could be defensible. They demand that any benefits that are believed to be likely to arise from the experimentation should be sufficiently probable and sufficiently great to offset the costs to the animal subjects; they urge that every experiment should come under close and impartial scrutiny to determine whether this is the case. Reformers point out that although during the 1980s and 1990s several countries (for example, Australia, Sweden, Switzerland, and the United Kingdom) developed legally obligatory systems of review based on an institutional ethics committee's review of proposals to carry out experiments on animals, experimenters usually are well represented on such committees, whereas animal welfare advocates either are not represented or are heavily outnumbered by experimenters. An impartial committee that weighed the cost to the animal in the same way that people would weigh a comparable cost to a human would, the reformers maintain, approve at most a small fraction of the experiments now performed. In other countries, such as the United States, institutional ethics committees exist but are not legally required for corporations or other institutions that do not receive federal funds, and their coverage of animal experimentation is incomplete. 
Moreover, in the United States these committees do not always have the authority to prevent experimenters from going ahead with painful experiments if the experimenters assert that alleviating the animals' pain would interfere with the purpose of the experiment (U.S. Congress, Office of Technology Assessment; Dresser; Smith and Boyd; Gavaghan; Orlans). Among opponents of current practices of animal experimentation the line between reformers and abolitionists is not clear-cut, because questions of long-term goals and short-term strategy intervene. A threefold division might be more appropriate: In the first category one could place those whose long-term goals do not extend beyond better regulation and control of animal experiments to eliminate the most painful and trivial experiments. In the next category would be those who have the long-term goal of abolishing all or virtually all animal experiments but who consider this an ideal rather than a realistic objective for the immediate future. This group therefore seeks reforms in the interim period, and its short-term goals do not differ significantly from those of members of the first category. The third category consists of those who aim at abolition and are not interested in advocating anything less. A related issue is the development of alternatives to animal tests such as the Draize eye test. Opponents of animal experimentation suggest that alternative methods would be developed more rapidly if they received more substantial government support (Ryder; Rowan; Balls). The ethical stance of those in the first category, who seek only limited reforms, is often of a relatively conventional type: They can be thought of as following an "animal welfare" line rather than accepting an ethic of "animal rights" or "animal liberation." They accept the idea that animals may be used for human purposes but want safeguards to ensure that the purposes are serious ones and that no more suffering occurs than is necessary for the purpose to be realized.
Those who take an animal rights or animal liberation stance want to narrow the ethical gulf that separates humans from other animals in regard to conventional morality. They thus raise a philosophically deep question with implications that go beyond experimentation, extending to the treatment of animals in general.
Hurricanes Harvey and Irma have been taking down countless trees in Texas, Florida, and all the states between them along the Gulf of Mexico. In the aftermath of severe storms like these, trees can get lots of attention and are pointed to as the cause of loss of power and damage to property. However, while some trees do come down in high wind and extreme weather events, the majority of healthy trees survive severe storms, buffer the high winds as the storms come ashore, absorb excess rainfall, and reduce localized flooding. In the wake of these major storms, it is extremely important to remember that moving storm debris, limbs, and downed trees over long distances can inadvertently spread tree-killing insects to new places. Many areas affected by Hurricanes Harvey and Irma are under quarantines that specifically prohibit the long-distance movement of tree-based storm debris (including debris that has been cut into pieces of firewood). These quarantines will depend on exact location, and may include restrictions in place for emerald ash borer, imported fire ants, giant African land snail, and citrus greening (Huanglongbing). The southeastern USA also has widespread infestations of laurel wilt, which is not under federal quarantine but can be transported on storm debris as well. Storm debris from downed trees and branches should be disposed of using one of the following safer ways: brought to a local solid waste facility (i.e. landfill), brought to a licensed city composting facility, brought to a registered storm debris disposal yard (sometimes called a marshalling yard or area), or used on site for personal firewood. Consult local newspapers and storm information to find out which of the disposal options is best in your area as you get ready to clean up your property. For future storm safety, it is especially important to remember that trees planted near homes and roads need to be properly pruned to minimize potential damage and failure, especially near power lines.
When planting new trees, it is helpful to select a species that will not grow too tall and interfere with power lines, to minimize future damage. As cities look to replant, choosing the right tree and putting it in the right place will create a more sustainable—and storm resistant—landscape for years to come. Can I bring firewood from Albuquerque, New Mexico, to the Red River area in New Mexico? It is legal to bring firewood from Albuquerque to Red River in New Mexico; however, that is well over the suggested distance limit of 50 miles for moving firewood. If it is possible for you to buy local firewood in Red River, or collect firewood near your destination, that would be better. We love getting questions from you, our readers, on your firewood issues! My husband sells firewood to the people that camp at (US Army Corps administered campground in Tennessee). We are a mile from the camp grounds and get all our wood locally. We season it for 2 years in the sun. Will the people who purchase through us no longer be able to take our firewood into the camp grounds with the new Army Corps firewood rules? Thank you for your help. Yes, the new policy as set by the Army Corps of Engineers in Tennessee is that firewood that is not packaged and stamped as formally certified as heat-treated by USDA APHIS is not permitted within their campgrounds, so your seasoned firewood would not be allowed. I do realize this may be a frustrating policy for a firewood vendor as close as you are to the park. If you are interested in learning how to become a business that sells heat-treated firewood, I suggest you contact the USDA APHIS offices in Tennessee to speak with them. Thank you! Thank you so much for your prompt attention to our question. My husband’s log splitter broke and we don’t want to purchase another one if we can’t sell next year. Now we know. Thanks.
Time for a new installment in our occasional advice column series, Dear Don’t Move Firewood, this time from a homeowner in California! We live in (city removed) near San Diego and discovered today that we have drywood termites living in our firewood, both inside and outside our home. The wood has been there, unused, for 5 or 6 years, and our termite inspector suggested either burning it, or, since the summer is hot enough already, bagging it in strong garbage bags and throwing it out with the usual trash pickup. Is this okay to do? Thank you! I am sorry to hear about the termites! I agree with your idea that summer is hot enough without a bonfire, and wildfire risks are also so high this time of year. Yes, you definitely could bag it and throw it out with regular trash pickup if you wanted. As an alternative that is a bit more ecologically friendly, you might be able to get a green waste bin from a local municipal compost or trash service. They would take your firewood and turn it into harmless, termite-free compost, which is probably better than it just taking up space in the landfill. Try searching online for any sort of local business that accepts green waste for mulch, compost, or soil amendments. Good luck, and thank you for asking! The Dear Don’t Move Firewood advice column is back, with real questions from real people (often slightly edited to ensure they are anonymous). I’m going camping in another state which is Maine. I’m from Ohio. Can I take wood for camp fires from my own wood pile? It is illegal to take any out-of-state firewood into Maine, as per Maine state law. It is also a violation of the emerald ash borer (EAB) federal quarantine to take it from Ohio (inside the EAB quarantine area) into Maine (which is outside the EAB quarantine area). Last but not least, in general, the rule of thumb is not to move firewood more than 50 miles, and it is a lot more than 50 miles from Ohio to Maine.
Instead of bringing firewood from your own wood pile in Ohio, please plan to buy wood after your arrival in Maine, ideally near your camping destination. Thank you!
Move toward a cashless society: it is no secret that we are moving toward a cashless society as electronic payment services grow. Are they instant, or does it take some time? About Ponzi schemes (scams): schemes which guarantee quick returns on your investments have existed since the 1500s. Which is added to the blockchain. Gas pricing. China currently has currency restrictions and monies are being shipped out of the country through bitcoin. Efficient market theory says that the price contains all information available to the market. I have a site for you which will not multiply within 100 hours lol, but it's a crowdfunding and development website and it's 100% legit. I think the most likely price direction from this point is down. 1 BTC and receive 200% back in 90 days. Spartacus 3 - invest 0.3 BTC (11%). JC4 - 0. On the other hand, the company claims you can earn gains of 5 to 15%. You will not see your money again. At the time the note was published, the person with the most bitcoins is obviously Satoshi Nakamoto. It is done using specialized hardware. You may have noticed a lot of talk going on about 'forks'. Well, the members are hoping they will start paying this Friday. If non-upgraded nodes continue to mine blocks. 600 to $2. Interested parties include Microsoft. Fifteen days later. The binary bonus will be processed and paid daily. Potential stop-orders should be long enough.
Designing a top-notch interface for a touch-screen panel PC involves some careful planning. In addition to designing an HMI that is easy to use and intuitive, developers should also bear the medium in mind. Touch-screen technology presents some unique challenges to interface design, but by paying attention to details such as the physical environment of the workspace and by adding in a margin of error, developers can design truly effective HMIs for touch-screens. When users use a touch screen, they generally expect a quick response from the equipment. This response time is registered in tenths of a second, and any response time longer than a quarter to half a second may cause the user to think their input has not registered. Pressing the button again may cause unfortunate results, particularly for important functions like starting or stopping a machine. A good way to prevent this is to ensure that the user is aware their input has been recorded. Having the button change color or shape, or otherwise indicate the response, will help prevent users from growing frustrated or confused by the HMI. Depending on the application of your interface, it may be used in situations where a variety of operators will need to access it. In addition to planning for variations in human finger size, shape, and dexterity, developers should also plan for the possibility that users will be wearing protective gear, such as gloves. Gloves may considerably increase the size of the hand, and developers can anticipate this by spacing objects more widely, or by introducing confirmation requests into the HMI. Another variation in input may come from users with impaired motor skills or sight. This can happen if the environment is dangerous or prone to elements like steam or heat that may obscure the screen. A parallax effect may also occur when users view the screen at angles other than a straight-on view, which can cause objects to appear to shift.
It’s a common problem on touch-screen HMIs for an operator to inadvertently choose an object adjacent to the intended button. A good way to prevent this ‘fat-finger’ effect is to place all actionable buttons (such as those that start or stop the machine) as far apart from one another as is plausible. In addition, an HMI designed for touch screens should also include an easy way to undo or deselect accidental selections without restarting the screen. Design the HMI with safe operating procedures for the machine in mind. Color is often used as an indicator of various statuses on the machine. Bear in mind, however, that this color may be compromised by things like fingerprints, condensation, glare, sunlight, oil, or other things that may decrease visibility. Always use highly contrasting colors when possible, and avoid using similar colors in close proximity to one another on the HMI. Color is a great shorthand for information, and similar colors may confuse the user.
Do not configure actions on the right-click event of commands using touch buttons.
Check the option ‘release’ from the command animation for actions triggered with the ‘on up’ event.
Enable the InduSoft Web Studio native virtual keyboard/keypad, and scale it according to the resolution of the screen.
Enable the option ‘Push-like’ for check boxes and radio buttons.
Avoid associating actions with the ‘Double-click’ event.
This entry was posted in InduSoft CEView and InduSoft Web Studio and tagged HMI SCADA software, HMI software, SCADA HMI, SCADA HMI software by mcorley.
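The response-time and repeat-press concerns above can be sketched in code. The following is a minimal, framework-free illustration in Python; the `TouchButton` class, the 0.5-second lockout, and the color change are illustrative choices of mine, not values or APIs taken from any HMI product:

```python
import time


class TouchButton:
    """Models a touch-screen button that (1) gives immediate visual
    feedback on every press and (2) ignores repeat presses within a
    lockout window, so a slow machine response does not trigger the
    same action twice."""

    def __init__(self, action, lockout_s=0.5, clock=time.monotonic):
        self.action = action        # callback to run on an accepted press
        self.lockout_s = lockout_s  # minimum time between accepted presses
        self.clock = clock          # injectable clock, eases testing
        self.color = "gray"         # idle color shown to the operator
        self._last_press = None

    def press(self):
        """Handle one touch. Returns True if the action ran."""
        now = self.clock()
        self.color = "green"  # immediate feedback: the press registered
        if self._last_press is not None and now - self._last_press < self.lockout_s:
            return False      # repeat press inside the lockout: ignored
        self._last_press = now
        self.action()
        return True
```

Injecting the clock as a parameter is a deliberate design choice: it lets the debounce logic be unit-tested with a fake clock instead of real delays.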
The seventh-generation BMW 5 Series arrived in March 2017 with designs on remaining the 'ultimate driving machine' in the prestige large sedan segment. Weight loss was also a priority for the G30, with savings of 95kg thanks to the extensive use of magnesium, as well as all-aluminium doors, bootlid and bonnet. Meanwhile, the initial four-tier line-up was simpler than its predecessor's, comprising the 520d, 530i, 530d and 540i – all rear-wheel-drive and fitted with an eight-speed automatic transmission as standard. Significant price rises of between $9145 and $19,245 were balanced with added value from higher equipment levels, including a bevy of range-wide sensors that support various safety and semi-autonomous technologies. Is it all high fives for the new 5 Series, or has BMW stretched its sedan too far?
There are many people who wonder what exactly constitutes a healthy life style. You will find that there are many different defining characteristics of a healthy life style, both physical and mental. In order for your life style to be truly healthy, you will need to make sure that you take care of your body. With all of the different ways to do this, you really should not have any problems. It will be important for you to get enough exercise each day, because it is something we all need to stay healthy. If you wish to lead a truly healthy life style, you will need to make sure that you stay active physically, whether this means joining a gym or just taking your dog out for a long walk every day. There are a lot of different things you can do to stay physically healthy and fit, so start exploring your options today. You will of course also want to be mentally and emotionally healthy as well, which involves having the best possible outlook on life and expressing your emotions in a meaningful way.
In today’s On the News segment: As Obamacare goes into effect, the Koch brothers have orchestrated a multi-million dollar misinformation campaign; The Federal Reserve Bank of Chicago says an increase in the federal minimum wage would be a huge boost to our economy; Texas Governor Rick Perry announced he will not run for reelection; and more. You need to know this. As Obamacare goes into effect, some states are getting creative with ads telling residents about the healthcare law’s benefits. However, those ads will be running alongside a multi-million dollar misinformation campaign orchestrated by the Koch brothers. According to the Think Progress Blog, the conservative group Americans for Prosperity is spending a fortune on television ads aimed at turning Americans against the new healthcare law. The group’s first ad is called “Questions,” and it features a mother of two who tells viewers that she “has some questions about Obamacare.” Her so-called questions imply that under the Affordable Care Act, she will be paying higher premiums and won’t be able to select her family’s doctor. The ad perpetuates myths that have been clearly debunked. As Think Progress points out, there is nothing in Obamacare that stops patients from choosing their own doctors, and people who get their insurance through their employer likely won’t see any changes at all. And, while the Americans for Prosperity ad refers to higher healthcare premiums, we’ve seen insurance rates go down in states implementing the law – like California – and millions of Americans will qualify for subsidies to purchase healthcare through insurance exchanges. In fact, it’s Republican policies – like refusing to implement the law’s Medicaid expansion – not Obamacare – that will leave some people uninsured. The constant misinformation from the Right is exactly why states like Oregon and New Jersey are working so hard to inform the public about the law’s new benefits with their new television ads.
However, thanks to the Koch brothers, Americans will have a difficult time sorting through the lies, and figuring out the real ways Obamacare will improve their lives. In screwed news… The Federal Reserve Bank of Chicago says an increase in the federal minimum wage would be a huge boost to our economy, but that won’t stop Republicans from keeping wages low. According to economists at the Chicago Fed, raising the minimum wage to $9 per hour would increase total household spending by nearly $50 billion dollars the following year. Nine dollars is the minimum wage President Obama proposed in his 2013 State of the Union. In response, Speaker of the House John Boehner rejected the idea immediately. He said that raising the wages of the lowest-paid Americans would cost jobs and hurt our economy. However, multiple studies have shown that an increase in wages would not lead to job losses, and the authors of this recent study wrote, “a minimum wage hike can stimulate economic activity by putting money into the hands of people who are especially likely to spend it.” Republicans would rather be able to blame the President for the economy, than give the lowest-paid workers a modest raise. On Monday, Texas Governor Rick Perry announced he will not run for reelection. While leaving the door open for another presidential run, Governor Perry said he will retire from office in January of 2015. The news presents an opening for a Democratic candidate for governor, and it could finally bring an end to Rick Perry’s extreme right-wing policies. During his announcement, Perry said, “Our responsibility remains to the next generation of Texans, who will inherit a state of our making. We alone are responsible for the kind of Texas that will greet them.” And, that’s exactly why Texas Democrats are hopeful that they can win the Governor’s office, and implement policies that will actually benefit that next generation.
Congress may have failed to act to protect students, but at least two states are stepping in to prevent young people from being burdened with outrageous college debt. Lawmakers in Oregon and New Jersey are drafting plans to make college affordable for students in their states. Newark Mayor Cory Booker is calling on the federal government to contribute to a college fund for low-income families, and Oregon lawmakers are working on legislation to allow students to go to public universities tuition free. Both plans could provide much-needed relief to students and families considering the looming cost of higher education, and could be implemented as early as 2015. New Jersey and Oregon recognize the need to invest in our future leaders, and are working to give everyone access to a college education. And finally… After Florida Lt. Governor Jennifer Carroll resigned because of suspected ties to an internet gambling ring, lawmakers in that state went into over-drive to address the problem. Last April, Florida Governor Rick Scott signed into law a ban on all internet cafes. But, it turns out, the poorly worded bill may have effectively outlawed all computers and smartphones in the Sunshine State. According to a recent lawsuit by one of the internet cafes, the legislation bans any “system or network of devices” that may be used in a game of chance. The lawsuit alleges that the law was passed “in a frenzy of distorted judgment.” Some say it’s no surprise that Republican lawmakers didn’t consider the unintended consequences of the law. But, others suspect the ban is simply another sneaky GOP attempt to send our country back to the 1950s. And that’s the way it is today – Tuesday, July 9, 2013. I’m Thom Hartmann – on the news.
Stressed parents might medicate their children to help them sleep, calm down or endure long flights. (CNN) -- If the kids become too much to handle, slip 'em a little cold medicine. It's an often-repeated joke -- or advice -- that parents share on the playground or on Twitter and Facebook pages. One mom, Jill Smokler, said she doesn't vilify parents who medicate their kids: "It's not the end of the world." "It's certainly better than being pushed to the edge, spanking a child or slamming doors or really losing it," she said. But drugging children with over-the-counter or prescription medications can have unintended consequences, said the author of a study published Thursday, who likened the practice to child abuse. The study, published in the Journal of Pediatrics, found an average of 160 annual cases in which pharmaceutical drugs were maliciously used on children. "We believe the malicious use of pharmaceuticals may be an under-recognized form and/or component of child maltreatment," wrote the author, Dr. Shan Yin, a pediatrician. Using information from the National Poison Data System, Yin found that children were most commonly receiving analgesics, stimulants/street drugs, sedatives, hypnotics, antipsychotics and cough or cold medications. He found 1,439 cases from 2000 to 2008. Of those, 14 percent resulted in injuries, and 18 children died. More than half of the cases involved at least one sedating drug; 17 of the 18 deaths included sedatives. Yin said the poison data most likely underestimates the actual number of cases. The circumstances around the 18 deaths were not clear, Yin said. He did not have access to case notes and legal findings. Four of them were ruled as homicides, three resulted in legal action against the mother, two were noted as highly suspicious and one included cocaine. Why young children were given drugs such as antidepressants, stimulants and antipsychotics was also unclear.
The motives, he said, could widely vary, such as overwhelmed parents looking for a break, amusement or punishment. "Anytime you're giving a medication for any other purpose other than for what it's explicitly prescribed for, you run the risk of harming your child," Yin said. This year, a Massachusetts woman was sentenced to life in prison after she was found guilty in the death of her 4-year-old daughter, whose blood had a lethal level of a hypertension drug used to sedate children with ADHD. Her husband, who was tried separately, was convicted of first-degree murder, according to CNN affiliate NECN. The prosecutors had argued the father had either given the pills or ordered his wife to do so to silence the child. In a 2005 case, a Montana day care owner was convicted of killing a 1-year-old after giving a fatal dose of cough medicine to put the child to sleep. In extreme cases such as these, the law determines whether the parent's or caregiver's actions are criminal, said Dr. Lawrence Diller, who practices behavioral-developmental pediatrics in Walnut Creek, California. But in more ordinary, everyday circumstances, the ethical boundaries are hazy. "There are really ambiguous situations where the line is between helping the child legitimately -- those are the vast majority -- and situations that border on sedating the child that would be a form of abuse," Diller said. Parents may slip their children some medication to relax and think they're not harming them. This happens to families when "they feel overwhelmed or desperate," he said. Each case has different elements and motives, so it's hard to generalize whether deliberately medicating a child is abuse, said James Hmurovich, the president and CEO of Prevent Child Abuse America. "If it's for a medical reason, that's one thing," Hmurovich said. "If moms are at wit's end and the stress is building up and they're tired, that's not a good use of over-the-counter medications."
Some parents use drugs to calm their children down in airplanes. Smokler gave her daughter, who was then 1 1/2 years old, some Benadryl, expecting her to sleep through the two-hour flight. Benadryl, an antihistamine used to relieve irritated eyes, sneezing and a runny nose, had an energizing effect on her daughter. The toddler ran through the aisles, talked as loudly as she could, and jumped up and down on her chair. "It was worst-case scenario," said Smokler, of Baltimore, Maryland. "This is what I get for trying to dope up my kid." She never tried it again. Smokler has discussed using Benadryl on kids with her friends and said it could be seen as a way to have "me time" to relax, read a book or have a quiet dinner. "It's a selfish act doing that," said Smokler, who blogs at Scary Mommy, where she takes a frank look at motherhood. "Sometimes you just need it. It's better than screaming at a kid when all your buttons are being pushed. You need a break; it's a survival mechanism." Cynthia Dermody, a health editor for a mom blog, The Stir, said in the typical, real-mom world, parents joke about giving children Benadryl but don't usually go through with it. "I'll admit I've felt that inclination, too, when my kids were younger and weren't sleeping well. ... I would never give my own children a medication for a nonmedical use, but as a harried, stressed-out parent, I do understand the temptation." Sometimes, parents need a break or want their kids to sleep a little longer or sleep at all, she added.
I've been too busy to talk. We wept and shouted in delight. Causing a shock and a breathless pause. and blame it on the dream. Eidolon - Chapter 3That day would have been a good one if Olivia had not slipped on the stairs. Christopher had finished work early for once, instead of talking himself into finishing one last stack of paperwork before calling it a day. The bus had arrived on time, the noisy school children had actually appeared docile and normal for once, the weather was pleasant. When Christopher arrived at the house his keys appeared to jump into his hand instead of eluding him and the key had turned perfectly in the lock, instead of sticking slightly until he thought it would snap. It had been one of the few occasions that he had not brought back an armful of extra paperwork to finish, so he was able to open the door easily, instead of having to shunt at it until it edged open. Eidolon - Chapter 2'Christopher!' Jack exclaimed. 'I'm in purgatory out here! Let me in, would you?' Christopher obediently stepped back to allow this distant friend inside. Jack was removing his soggy cagoule before Christopher had even managed to close the door behind him. He was leaving fat drops of rainwater on the tiled floor of the entryway, as if he were his own storm cloud, come to make Christopher's day worse than it would naturally be. He was mentally imploring himself to ignore all callers from then on when Jack thrust the cagoule at him. Christopher took it clumsily and draped it over one arm mindlessly. 'Is that the kettle boiling?' Jack asked. Christopher nodded dumbly, also listening to the faint tumult coming from the kitchen. 'Mind if I grab a brew?' Jack did not wait for an answer and simply marched down the corridor. He did not falter at the foot of the stairs. When Christopher stepped into the kitchen Jack was at the sink, washing a mug. Every inch of his body ached at this sight. Eidolon - Chapter 1A house in mourning is a lonely one. 
A black flag is flown above the chimney pot: do not enter, death was here. He took away all that was dear, all that held meaning. His own hands pushed her down that flight of stairs and broke that tender neck. He still lurks in the corridors, seeping his black aura into everything. Death is cruel, death is anguish. VisitorYou can appear in the unlikeliest of places. It should be a familiar sight to me now, but I am always taken off guard. I often think I see you in the shadow at the back of a room, slipping out of view. I tell myself not to look. There is nothing but dust and darkness where I think you have been. If only it were just there that I saw you then I could ignore it. I used to see you as I drew the curtain in the shower. Only a glimpse, but enough to make me wrench it back in hope and fear that you really were there. But no, nothing, still alone. I nearly always pretend I don’t see you there anymore, but it calls for short, sharp showers when I cannot help but believe that you are behind the curtain. I don’t always do it though. Sometimes when I hear the floorboards creak outside the bedroom door, I creep out of bed and call to you. I am on the landing saying your name, expecting something; expecting nothing.
This article is about the video game. For the series with the same name, see Donkey Konga (series). Donkey Konga is a Donkey Kong video game for the Nintendo GameCube. It was developed by Namco and published by Nintendo in Japan in 2003 and overseas in 2004. It is the first installment of the Donkey Konga series. Donkey Konga is notable for being the first game to be compatible with the DK Bongos. Donkey Konga eventually received two sequels: Donkey Konga 2 and the Japan-exclusive Donkey Konga 3: Tabehōdai! Haru Mogitate 50 Kyoku. Donkey Kong realizes the potential to become famous from playing bongos. Donkey Kong and Diddy Kong are strolling across a beach and suddenly find a mysterious pair of barrels. DK attempts to open them but is stopped by Diddy, who believes they are a trap from King K. Rool. Following Diddy's advice, the duo take the barrels to Cranky Kong. Cranky chuckles and explains that they are bongos. DK decides to call them the "DK Bongos", and he plays on them. Diddy comments that DK is bad at the bongos, and he tries the bongos himself. DK, in turn, laughs and claims that Diddy plays the bongos poorly. He claps, which causes the bongos to glow. Cranky explains that the instrument glows and makes noises when it detects clapping. In response, Donkey Kong and Diddy perform and clap with the bongos more. After they make a lot of loud noise, DK becomes discouraged and admits that he and Diddy are not good at playing the bongos. Cranky explains that nobody starts out as a professional and that their performance gradually improves from practicing. DK initially mentions his dislike of practicing but suddenly has the idea to become good at the bongos and become famous, which Cranky believes to be a possibility. DK and Diddy daydream and focus on becoming rich and owning lots of bananas. Cranky sighs and reminds them again to practice, which the two head out to do.
The main gameplay is largely identical to the Taiko no Tatsujin games, which were designed by the same developers. The player has the option to use the DK Bongos or a standard GameCube controller. During gameplay, the player controls Donkey Kong, whose goal is to hit scrolling notes, known as beats. The player must hit each beat with accurate timing as it moves under a cursor on the far left. There are four types of beats (red, light blue, yellow, and purple), each associated with a different button. A word appears on screen for every passing note, based on the accuracy with which the player hit the beat. A combo is displayed if the player hits two or more consecutive beats, but it vanishes if the player misses a beat. All gameplay modes except Challenge have three difficulty levels, from lowest to highest: Monkey, Chimp, and Gorilla. The second player plays as Diddy Kong in multiplayer modes. Every song has a varying number of beats, indicated by the number of barrels next to its title on the selection menu.

Street Performance

Based on the concept of street performance, Donkey Kong can perform songs and earn Coins, which he can use to purchase unlockables at DK Town. During gameplay, Donkey Kong earns two coins for every beat that he hits with perfect timing, or one coin for regularly-timed beats. A coin counter appears next to Ellie at the bottom-left, keeping count of the number of collected coins. Additionally, a bar at the top-right corner tracks how many notes the player has hit. A "CLEAR" label appears in the center and divides the bar into two color-coded segments, red and yellow, which respectively represent poor and good performance. The bar gradually fills up for every note the player hits, and decreases for every missed note.
The results are calculated after the song ends; Donkey Kong wins if the bar fills past the "CLEAR" label, and he keeps the Coins that he obtained along the way. If Donkey Kong loses a challenge, he does not keep the coins. The player can purchase individual songs to perform on Gorilla (expert) difficulty, alternate sounds for the bongos to make during gameplay, and three mini-games to play in the ape arcade, two of which have a 2-player competitive (Vs.) mode:

100M Vine Climb (4,800 coins) - Single player: "Climb vines and collect fruit to set records!" Multiplayer (Vs.): "Climb vines and collect fruit to be the king of the Jungle!"
Banana Juggle (5,800 coins) - Single player: "Juggle bananas and set records!" Multiplayer (Vs.): "Compete at juggling! Only one ape can win!"
Bash K. Rool (5,800 coins) - "Slam King K. Rool back into the ground. Go for high scores!"

Donkey Konga features around thirty songs, most of which differ between regional releases. Every region has songs that originate from other Nintendo titles along with traditional music, including kids' medleys, pop and classical. Almost every traditional song was made into a shortened cover for the North American release. Aside from a different set of songs, Donkey Konga's North American logo is different from the European and Japanese logo. This change is reflected both in-game and on each region's box cover. The Japanese logo has a subtitle, which western versions do not have. Every title screen depicts a scene of the beach, but the North American one displays a different scene from the European and Japanese versions. The latter two depict a straight view of the beach, partially obscured by the game's logo. The North American title screen shows Donkey Kong and Diddy Kong partying at the shore, complete with a pair of bongos and a boombox. The logo on the GameCube menu banner is also different between regions.
The Japanese version has a start-up warning advising players to be wary of vibrations, the sound and the amount of time they play. This warning is absent from the North American and European releases, but a health and safety warning is featured in every regional release of the sequel, Donkey Konga 2.

Tom Bramwell, Eurogamer (Nintendo GameCube), 6/10: "In the end, Donkey Konga is just too short-lived, even in multiplayer, to be worth the sort of outlay it represents. Nintendo has been surprisingly generous in its pricing here - most people will sell you the game and a set of bongos for £30 as far as we can see, and extra sets run to just £20 - but with the songs already shortened (and covered by a fairly decent bunch of impersonators, rather than licensed, curiously) Donkey Konga just doesn't have the legs. We appreciate the simplicity of the idea, but in the absence of the hidden depths we normally expect from this sort of game - or the ritual humiliation we now demand - it ultimately wears thin far too quickly. And for that reason we can't see it becoming the eBay legend that Samba was, although we've little doubt that you'll be able to find it on there all too quickly."

Juan Castro, IGN (Nintendo GameCube), 8.5/10: "Donkey Konga packs hours of fun. It's a good single-player experience and a great multiplayer one. If you can round up four buddies and four bongo controllers, you're set for the evening. All that's missing in a room with this game (and four bongos) is booze and a bowl of Tostitos. A somewhat limited song selection is the only thing keeping the multiplayer aspect from being the greatest thing EVAR, so to speak. The graphics, while bland and lacking several layers of polish, get the job done without causing too much of an eye-sore. The mini-games offer a little fun, but your best bet still sits in Konga's primary game modes."

For this subject's image gallery, see Gallery:Donkey Konga.
Donkey Kong - When choosing whether to display the screen in 50Hz or 60Hz, Mario (as he appears in Donkey Kong) acts as a cursor and Donkey Kong stands to the left (also as he appears in Donkey Kong).
Donkey Kong 64 - Donkey Kong, during the "K. Rool Bash" mini-game, can be heard saying "Hey!", "Cool!", and "Yeah!" throughout. Also, the Melee version of the DK Rap appears in the game.
Super Smash Bros. Melee - The tracks "Rainbow Cruise," "Super Smash Bros. Melee Opening," and "DK Rap" are taken from this game.
Super Mario Bros. - The track "Mario Bros. Theme" is a remix of a track from this game.
Donkey Kong Country - The track "Donkey Kong Country Theme" is a remix of a track from this game.
The Legend of Zelda series - The Legend of Zelda Theme is featured on the North American, European and Australian versions of the game.
Kirby: Right Back at Ya! - The Japanese and North American releases both include the anime's theme song.
Pokémon (anime) - The North American release includes the anime's theme song.
This page was last edited on April 6, 2019, at 14:55.
The extraordinary mob life of Julius Bernstein, the last of New York's great Jewish gangsters, has been revealed for the first time in the release of a once-secret FBI file. The gangster known as Spike managed to stay out of jail for more than 40 years as he devoted his life to organised crime with one of the Mafia's 'Five Families': the Genovese family. Pages of the once-confidential file have been obtained by the New York Daily News through the Freedom of Information Law. The papers detail Bernstein's life shaking down the Sbarro restaurant chain for cash payoffs, seizing control of a bus drivers’ union and working alongside the legendary Gambino family capo Matthew Ianniello, reports the Daily News. 'I’ve been a thief all my life,' Bernstein once bragged. But before his death at the age of 85 in 2007, Bernstein became an FBI informant. 'Wiseguys trust me. That’s why sitting here is killing me,' he said on his first day as an informant, reports the Daily News. In 1944 Bernstein began his mob career after forming a friendship with Matthew Ianniello, pictured here leaving court in 2006, who was known as an up-and-coming mobster. Born in 1922 in Brooklyn, Bernstein grew up in an Italian/Jewish neighbourhood. Jewish gangsters like Meyer Lansky and Bugsy Siegel paired up with Italian mobsters like Charles (Lucky) Luciano while Louis (Lepke) Buchalter ran Murder Inc., a franchise of Jewish and Italian assassins. Returning as a hero after fighting with U.S. forces on D-Day, Bernstein formed a friendship with Matthew Ianniello, who was known as an up-and-coming mobster, according to the Daily News. Despite their close alliance Matty, who was best man at Bernstein's wedding, could not bring his friend into the Mafia because of his Jewish heritage.
Bernstein grew up in the shadows of other Jewish gangsters, including Benny (Bugsy) Siegel, pictured here in 1940, who reputedly worked as a hitman for the Mafia. Jewish gangsters like Meyer Lansky and Bugsy Siegel, pictured in a mugshot, paired up with Italian mobsters, much as Bernstein later did. Bernstein instead served as a trusted 'associate' of the Genovese family. His big moment came in 1971 when the Genovese were seizing control of labor unions and he was planted at Local 1181 - a school bus drivers’ union that became a source of steady illegal income, reports the Daily News. Over the next 35 years, Bernstein's salary soared to $216,000 a year, and he drove a union-owned Lincoln Continental as he squeezed every illegal penny he could out of the union by shaking down bus company owners, uniform makers and a medical clinic. Bernstein earned enough trust to manage the bookmaking operation of the then-Genovese boss Frank (Funzi) Tieri. Bernstein says it was Tieri who told him about the family’s decades-long shakedown of the now-global Sbarro restaurant chain, reports the Daily News. Jewish gangster Meyer Lansky, pictured here in 1951, was known for running one of the most violent Prohibition gangs in New York. Unlike Bernstein, who managed to stay under the radar for over 40 years, Lansky, pictured here in a mugshot, had many run-ins with the police. The empire began in 1959 as a single Italian grocery in Bensonhurst run by the Sbarro brothers, Mario, Joseph and Anthony. Bernstein told the FBI that the 'protection' payments began in the 1960s. By 2004, they were paying $20,000 a year - Bernstein said he was ordered to take over collecting the two annual payments of $10,000. The gangster’s luck finally ran out at the age of 82 in July 2005 after his arrest for union corruption, which saw him facing up to 20 years in prison. He admitted several extortion charges in 2006 - including the Sbarro shakedown - and made an extraordinary decision to become an FBI informant.
Even then, the gangster had not quite reformed and eight months later he collected a $20,000 payment from a bus company owner inside a hotel bathroom, reports the Daily News. On October 21, 2007, Bernstein died aged 85 at Montefiore Medical Center in the Bronx.
History of Naturalization Requirements in the U.S. The History of Naturalization Requirements in the U.S. Naturalization is the process of gaining United States citizenship. Becoming an American citizen is the ultimate goal for many immigrants, but very few people are aware that the requirements for naturalization have been over 200 years in the making. Before applying for naturalization, most immigrants must have spent 5 years as a permanent resident in the United States. How did we come up with the "5-year rule"? The answer is found in the legislative history of immigration to the U.S. Naturalization requirements are set out in the Immigration and Nationality Act (INA), the basic body of immigration law. Before the INA was created in 1952, a variety of statutes governed immigration law. Let's take a look at the major changes to naturalization requirements. Before the Act of March 26, 1790, naturalization was under the control of the individual states. This first federal activity established a uniform rule for naturalization by setting the residence requirement at 2 years. The Act of January 29, 1795, repealed the 1790 act and raised the residency requirement to 5 years. It also required, for the first time, a declaration of intention to seek citizenship at least 3 years before naturalization. Along came the Naturalization Act of June 18, 1798 - a time when political tensions were running high and there was an increased desire to guard the nation. The residence requirement for naturalization was raised from 5 years to 14 years. Four years later, Congress passed the Naturalization Act of April 14, 1802, which reduced the residence period for naturalization from 14 years back to 5 years. The Act of May 26, 1824, made it easier for the naturalization of certain aliens who had entered the U.S. as minors, by setting a 2-year instead of a 3-year interval between the declaration of intention and admission to citizenship. 
The Act of May 11, 1922, was an extension of a 1921 Act and included an amendment that changed the residency requirement in a Western Hemisphere country from 1 year to the current requirement of 5 years. Noncitizens who had served honorably in the U.S. armed forces during the Vietnam conflict or in other periods of military hostilities were recognized in the Act of October 24, 1968. This act amended the Immigration and Nationality Act of 1952, providing an expedited naturalization process for these military members. The 2-year continuous U.S. residence requirement was done away with in the Act of October 5, 1978. A major overhaul of immigration law occurred with the Immigration Act of November 29, 1990. In it, state residency requirements were reduced to the current requirement of 3 months. Today's general naturalization requirements state that you must have 5 years as a lawful permanent resident in the U.S. prior to filing, with no single absence from the U.S. of more than 1 year. In addition, you must have been physically present in the U.S. for at least 30 months out of the previous 5 years and resided within a state or district for at least 3 months. It is important to note that there are exceptions to the 5-year rule for certain people. These include: spouses of U.S. citizens; employees of the U.S. Government (including the U.S. Armed Forces); American research institutes recognized by the Attorney General; recognized U.S. religious organizations; U.S. research institutions; an American firm engaged in the development of foreign trade and commerce of the U.S.; and certain public international organizations involving the U.S. USCIS has special help available for naturalization candidates with disabilities and the government makes some exceptions on requirements for elderly people.
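The general requirements summarized above (5 years as a lawful permanent resident, no single absence of a year or more, 30 months of physical presence, and 3 months of state residency) can be sketched as a simple rule check. This is an illustrative sketch only, not legal advice: the `Applicant` fields and the function name are hypothetical, and the statutory exceptions listed above (spouses of citizens, government employees, and so on) are not modeled.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    years_as_permanent_resident: float
    months_physically_present_last_5_years: int
    months_resident_in_state: int
    longest_single_absence_months: int

def meets_general_requirements(a: Applicant) -> bool:
    """Apply the general 5-year rule (exceptions not modeled)."""
    return (
        a.years_as_permanent_resident >= 5                   # 5 years as a permanent resident
        and a.months_physically_present_last_5_years >= 30   # 30 months physical presence
        and a.months_resident_in_state >= 3                  # 3 months state/district residency
        and a.longest_single_absence_months < 12             # no single absence of 1 year or more
    )

print(meets_general_requirements(Applicant(6, 40, 12, 2)))   # → True
print(meets_general_requirements(Applicant(6, 40, 12, 14)))  # → False
```

The exceptions mentioned above, such as the shorter residency period for spouses of U.S. citizens, would need separate branches in a fuller model.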
What FIVE key services does cafecollege provide?
- Helping students set and keep short- and long-term academic and career goals by building a college-going culture.
- Increasing awareness of career opportunities and assisting with planning of career paths, to include increasing student knowledge and awareness of STEM fields for career opportunities in San Antonio.
- Increasing awareness of higher education opportunities and assisting with college entry and enrollment.
- Providing guidance and coaching as students transition from high school to college with confidence and success, as well as opportunities for character development, including the essential skills needed to become an effective leader.
SEO is a multifaceted endeavor composed of many different parts of varying levels of importance. I'm not saying internal linking is the most important part, but it is up there. When designing a website, it's not enough to simply have all of the information and pages you want. Equally important is how visitors will access that information, and how you'll move them along towards a specific and measurable conversion. During our internal linking analysis, we evaluate the different pages on your website, how they interact with each other, and how accessible your content is, and then we design a strategy to maximize visitor flow. This ensures that your website is not simply a place for information, but rather is designed to funnel visitors to the most valuable pieces of information, facilitating conversion. Internal linking also plays a role in how search engines crawl, categorize and rank different pages on your website. A properly optimized site will maximize authority to your most valuable pages and can significantly improve the search rankings of these pages. We know how to optimize internal linking for maximum authority and visibility - ultimately increasing the chances that searchers will find your website, helping you reach new customers and bring in more business.
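The authority point above can be made concrete with a toy calculation. The pages and link structure below are made up for illustration, and the scoring is a bare-bones PageRank-style iteration, not what any search engine actually runs; it only shows how internal links concentrate authority on heavily linked pages.

```python
# Hypothetical site: each page maps to the pages it links to internally.
links = {
    "home":     ["services", "blog", "contact"],
    "services": ["contact"],
    "blog":     ["services", "contact"],
    "contact":  ["home"],
}

def link_authority(links, damping=0.85, iterations=50):
    """Simplified PageRank: repeatedly share each page's score
    across its outbound links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += share
        rank = new
    return rank

for page, score in sorted(link_authority(links).items(), key=lambda kv: -kv[1]):
    print(f"{page:10s} {score:.3f}")
```

In this toy structure "contact" ends up with the highest score because every other page links to it, which is exactly the effect to exploit when funneling authority toward conversion pages.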
Announce I'm a Korean. and can speak english a little. I'm using Cmotion Gallery script. and this work at IE and Opera. Anybody know how to make work at Firefox??? Too many scripts on one page. What are you mean?? Can you tell me what is crash with script?? What are you mean?? There are too many scripts on your page. Il y a trop de scripts à votre page. Hay demasiado de escrituras(?) en su pagina. There are too many scripts on your page. I already understand that... just I want know where has problem. Just some of the scripts. It doesn't matter which ones, although some will have a different effect on performance than others. hm, that means if I want use at ff than I have to change almost...?? isn't it?? This won't just apply on FX; for slower computers, it will have unpredictable results on your page under any browser. This won't just apply on FX (sic FF); for slower computers, it will have unpredictable results on your page under any browser. Quite true, and the way to fix things is to decide which scripts are really essential and make these work together in at least all major browsers. If you write off FF, you lose at least 20% (and growing) of your users. Making sure that your page validates is a good step as well. I know that, few do. Just I'll not use that script at Firefox, another function can work at FF. Sorry, my english is not good and that makes misunderstandings.
How do I restore an archived survey? 2. Click on the Archived tab. 3. Click the Restore Survey button for the survey you want to restore. 4. Your survey will now be restored within the All tab.
Hi, I need a banner or two for my website - I could do it, but don't have the time. How much would it cost? My website is [login to view URL] - one spring-looking banner with my logo and a key phrase or two. My site is a gift and personalized product site. I could supply images of products.
The Constitution of Colorado is the foremost source of state law. Legislation is enacted by the Colorado General Assembly, published in the Session Laws of Colorado, and codified in the Colorado Revised Statutes. State agencies promulgate regulations in the Colorado Register, which are in turn codified in the Code of Colorado Regulations. Colorado's legal system is based on common law, which is interpreted by case law through the decisions of the Supreme Court and the Court of Appeals, which are published in the Colorado Reporter and Pacific Reporter. Counties and municipalities may also promulgate local ordinances. In addition, there are also several sources of persuasive authority, which are not binding authority but are useful to lawyers and judges insofar as they help to clarify the current state of the law. The foremost source of state law is the Constitution of Colorado, which like other state constitutions derives its power and legitimacy from the sovereignty of the people. The Colorado Constitution in turn is subordinate only to the Constitution of the United States, which is the supreme law of the land. Pursuant to the state constitution, the Colorado General Assembly has enacted various laws. The bills and concurrent resolutions passed by a particular General Assembly session, together with those resolutions and memorials designated for printing by the House of Representatives and the Senate, are contained in the Session Laws of Colorado. These in turn have been codified in the Colorado Revised Statutes (C.R.S.). Pursuant to certain broadly worded statutes, state agencies have promulgated an enormous body of regulations, published in the Colorado Register and codified in the Code of Colorado Regulations (CCR), which carry the force of law to the extent they do not conflict with any statutes or the state or federal Constitutions. Colorado's legal system is based on common law. Like all U.S. 
states except Louisiana, Colorado has a reception statute providing for the "reception" of English law. All statutes, regulations, and ordinances are subject to judicial review. Pursuant to common law tradition, the courts of Colorado have developed a large body of case law through the decisions of the Colorado Supreme Court and the Colorado Court of Appeals. There is no longer an official reporter. The Colorado Reporter (a Colorado-specific version of the Pacific Reporter) is an unofficial reporter for appellate decisions from 1883. Decisions of the Colorado Supreme Court were published in the official Colorado Reports from 1864 to 1980, and decisions of the Court of Appeals were published in the official Colorado Court of Appeals Reports from 1891 to 1980. Colorado is divided into 64 counties, as well as some 271 active incorporated municipalities, including 196 towns, 73 cities, and two consolidated city and county governments. Colorado counties have the authority to adopt and enforce ordinances and resolutions regarding health, safety, and welfare issues "as otherwise prescribed by law" which are not in conflict with any state statute, as well as the power to adopt ordinances for the control or licensing of matters of purely local concern in a number of policy areas. All such ordinances of a general or permanent nature and those imposing any fine, penalty, or forfeiture must be published. Colorado municipalities have the power to adopt ordinances which are necessary and proper to provide for the safety, preserve the health, promote the prosperity, and improve the morals, order, comfort, and convenience of the municipality and its inhabitants and which are not in conflict with any laws, and have the power to enforce them with fines of up to $2,650 and/or imprisonment for up to one year. All such ordinances of a general or permanent nature and those imposing any fine, penalty, or forfeiture must be published in a local newspaper, or in three local public places otherwise.
Semantic textual similarity deals with determining how similar two pieces of text are. This can take the form of assigning a score from 1 to 5. Related tasks are paraphrase or duplicate identification. In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model. In this paper, we analyze several neural network designs (and their variations) for sentence pair modeling and compare their performance extensively across eight datasets, including paraphrase identification, semantic textual similarity, natural language inference, and question answering tasks. Sentence pair modeling is critical for many NLP tasks, such as paraphrase identification, semantic textual similarity, and natural language inference. Word embeddings have been found to provide meaningful representations for words in an efficient way; therefore, they have become common in Natural Language Processing systems.
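As a minimal, self-contained illustration of scoring similarity on the 1-to-5 scale mentioned above, the sketch below uses a crude bag-of-words cosine similarity in place of the learned sentence representations these papers describe; the whitespace tokenization and the linear mapping onto the 1-5 scale are assumptions for demonstration only.

```python
import math
from collections import Counter

def cosine_similarity(s1: str, s2: str) -> float:
    """Cosine similarity over whitespace-token counts (a crude
    stand-in for learned sentence embeddings)."""
    a, b = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def sts_score(s1: str, s2: str) -> float:
    """Map the [0, 1] cosine onto the 1-5 scale used in STS tasks."""
    return 1.0 + 4.0 * cosine_similarity(s1, s2)

print(round(sts_score("a cat sat on the mat", "a cat sat on the rug"), 2))  # → 4.33
```

A real STS system would replace `cosine_similarity` with a comparison of dense sentence vectors; the 1-to-5 mapping stays the same.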
was thinking about mage or marksmen, but i dont really know who of all characters can do max area damage, so if you know who can do max area damage (endgame also) please tell me, also if 2 characters do same area damage, what the difference is, thanks! Mage is best by far. Lightning or Fire trees. Marksman class is broken and damage doesn't come close to a mage at end game. ok thanks, and what has better area damage, fire or lightning? Fire - because you have DoT and simply have quicker cast times. Lightning mages only catch up 75+ if they go crit damage build. I grab all sorts of things too, so my area is definitely bigger than a mage's. depends on what you actually want. do you usually pair up with a healer ? do you want to pvp at some point ? do you want it just for botting orders where first hit aggro is important ? if you don't plan to pvp, mm is a good choice. else go mage. spike damage is higher with a mage imho. With our burns mm have a lower but more steady aoe damage. so which is more effective in pve, mm or mage? or equally? I've managed to grab aggro from mages even as a Burst / Prec hybrid unless they were super op. But at that point, they can far more easily survive the aggro they generate which works for me since I can AoE with impunity. As to the pve side of things, it can be pretty even depending on level in terms of overall damage but when it comes to spike damage mages will eventually win that contest at end game since the right combo of MM skills can give 20% bonus to both base and bonus damage but it's for a limited number of attacks and burns through a large amount of the soul clip unlike a mage who just needs the right debuffs on the target. The difference comes in the amount of status effects a MM can generate from a skill or skill combo that either up our damage or the pt damage as well as being able to do hybrid builds due to the way the various skill trees are laid out. TL;DR version of the above paragraph: Mage smash keyboard!
MM need to understand timing for buff / debuff usage as well juggling range and soul bullets in combat leading to more of a challenge. You do realize burns don't generate aggro though, right? A Dyos fire mage for instance can get +14%/+16% perma crit chance in PvE (Fervor + Fire Shield, latter is with Fire Shield rune but is very rare) and +40% crit damage on AoEs. And for PvE purposes (with the talent making it more than 180) +225 mastery for like 15sec and way more burn. You can't compare yourself to mages who are completely noobs/trash gearwise. If a mage isn't crit build, he's doing it wrong. Don't even compare to those. This is of course talking PvE wise since that's what we're talking about right? so if you want to use gems on gear, only using crit + gems? Ideal DPS of course would be, 2 Goldsparks, 4 Shattershards and 8 Ragefires. Possibly 8 Azures as well, but then you might be too squishy depending on your build. If you have 12/12 gear it's fine. You'll hit like a truck. Of course, not so good in PvP cause you'll be too squishy. Also you can get +50% crit damage on AoEs, if you have Infuriated Mind rune. 1) Yes yes burns don't generate aggro, but I do still steal aggro from fire mages that don't out wing me on a pretty average basis. I don't think they're crit build, but I guess that's your point right? If they aren't crit build they are trash? That can get pretty pricey for the casual player. For fairly average casual-ish (as in very little to no cs) it's even imho. Now for leaf exchange/gold farming yadda-yadda yadda to get the cs items (esp now there are gems in there) it is possible, but for the average person, it can take a while. 3. Me and Trogatha aren't saying Mages won't kick our dps **** with spike damage, that's a given. We can however, continuously put out a large aoe (range wise) with decent damage. I can literally only hit fire aoes (and one earth for fire resistance lowering), without auto-attacking. 
While I haven't paid too much attention and my personal fire mage is lvl 48, I don't really think fire mages can claim the same. They can still have high aoe dps, but it's not continuous with the high burns. RoF goes off every other skill, inflicts normal damage and burn. That is where our power comes from. Will one RoF wipe a mob group out? No. Will we clear it faster than a fire mage? From my estimations, runs and yadda yadda it's actually about the same. We might take more skills, we might not. 4. Lightning mages are actually the ones that will take my mobs and wipe them out before me. This might be attributed to the low number of high level fire mages vs lightning or ice. But that's just my experience. As for gems, goldspark > shattershard > bloodstone > ragefire > twilight / crystalline > azurecloud. Losing 600 health for 75 attack is not worth it, and lv4 azurecloud will only add 36 bonus damage to skills. For gear, crit chance > crit dmg > mastery > health > attack. Purple gear is 33% easier to get useful ID due to free re-id items from weekly quests and an extra identified attribute. For a necklace, 12% crit dmg is almost as good as 2% crit chance, but costs much less. GoS set is not so bad for a mage because you can get those missing set bonuses from a good ID, and 95 mastery is a big add to damage, but it takes far more time + money to get. As a maxed 80 Burst MM, I can tell you now that the best aoe class is a mage. Pure DPS leans toward fire. If I was to reroll a class, it would be a Frost Mage. What they give up in pure DPS they more than make up for in CC. When it comes to mobs, CC is better than pure DPS. That being said they are the ones I fear most in PVP and welcome most in my parties.
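The crit chance versus crit damage trade-off debated above comes down to expected damage. The sketch below assumes a made-up baseline (30% crit chance, +200% crit damage) and a simple multiplier model, not the game's actual damage formula, but it shows why "+12% crit dmg" and "+2% crit chance" can land close together.

```python
def expected_multiplier(crit_chance: float, crit_damage_bonus: float) -> float:
    """Average damage multiplier: non-crits deal 1x,
    crits deal (1 + crit_damage_bonus)x."""
    return 1.0 + crit_chance * crit_damage_bonus

# Hypothetical baseline stats (not taken from the game):
base    = expected_multiplier(0.30, 2.00)  # 1.60
plus_cc = expected_multiplier(0.32, 2.00)  # +2% crit chance  -> 1.64
plus_cd = expected_multiplier(0.30, 2.12)  # +12% crit damage -> 1.636

print(f"+2% crit chance:  {plus_cc / base - 1:.2%} more damage")
print(f"+12% crit damage: {plus_cd / base - 1:.2%} more damage")
```

Under these assumed numbers the two upgrades differ by only a quarter of a percentage point, consistent with the "almost as good" claim; with a lower crit damage baseline, the +12% crit damage would actually pull ahead.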
Complete the landing gear checklist for that airplane so that gear position indicators can be rechecked prior to landing. Common landing gear errors include:
• Neglected to extend landing gear.
• Inadvertently retracted landing gear.
• Activated gear, but failed to check gear position.
• Misused emergency gear system.
• Retracted gear prematurely on takeoff.
• Extended gear too late.
Practice builds familiarity with the feel of a properly operating landing gear system, appropriate to the certificate the pilot holds or is working towards.
It is generally agreed that September 11, 2001, changed the course of history. But we must ask ourselves why that should be so. How could a single event, even one involving 3,000 civilian casualties, have such a far-reaching effect? The answer lies not so much in the event itself as in the way the United States, under the leadership of President George W. Bush, responded to it. Admittedly, the terrorist attack was historic in its own right. Hijacking fully fueled airliners and using them as suicide bombs was an audacious idea, and its execution could not have been more spectacular. The destruction of the Twin Towers of the World Trade Center made a symbolic statement that reverberated around the world, and the fact that people could watch the event on their television sets endowed it with an emotional impact that no terrorist act had ever achieved before. The aim of terrorism is to terrorize, and the attack of September 11 fully accomplished this objective. Even so, September 11 could not have changed the course of history to the extent that it has if President Bush had not responded to it the way he did. He declared war on terrorism, and under that guise implemented a radical foreign-policy agenda whose underlying principles predated the tragedy. Those principles can be summed up as follows: International relations are relations of power, not law; power prevails and law legitimizes what prevails. The United States is unquestionably the dominant power in the post-Cold War world; it is therefore in a position to impose its views, interests, and values. The world would benefit from adopting those values, because the American model has demonstrated its superiority. The Clinton and first Bush Administrations failed to use the full potential of American power. This must be corrected; the United States must find a way to assert its supremacy in the world. 
Not all the members of the Bush Administration subscribe to this ideology, but neoconservatives form an influential group within it. They publicly called for the invasion of Iraq as early as 1998. Their ideas originated in the Cold War and were further elaborated in the post-Cold War era. Before September 11 the ideologues were hindered in implementing their strategy by two considerations: George W. Bush did not have a clear mandate (he became President by virtue of a single vote in the Supreme Court), and America did not have a clearly defined enemy that would have justified a dramatic increase in military spending. September 11 removed both obstacles. President Bush declared war on terrorism, and the nation lined up behind its President. Then the Bush Administration proceeded to exploit the terrorist attack for its own purposes. It fostered the fear that has gripped the country in order to keep the nation united behind the President, and it used the war on terrorism to execute an agenda of American supremacy. That is how September 11 changed the course of history. Exploiting an event to further an agenda is not in itself reprehensible. It is the task of the President to provide leadership, and it is only natural for politicians to exploit or manipulate events so as to promote their policies. The cause for concern lies in the policies that Bush is promoting, and in the way he is going about imposing them on the United States and the world. He is leading us in a very dangerous direction. The supremacist ideology of the Bush Administration stands in opposition to the principles of an open society, which recognize that people have different views and that nobody is in possession of the ultimate truth. The supremacist ideology postulates that just because we are stronger than others, we know better and have right on our side. 
The very first sentence of the September 2002 National Security Strategy (the President's annual laying out to Congress of the country's security objectives) reads, "The great struggles of the twentieth century between liberty and totalitarianism ended with a decisive victory for the forces of freedom—and a single sustainable model for national success: freedom, democracy, and free enterprise." The assumptions behind this statement are false on two counts. First, there is no single sustainable model for national success. Second, the American model, which has indeed been successful, is not available to others, because our success depends greatly on our dominant position at the center of the global capitalist system, and we are not willing to yield it. To be sure, the Bush doctrine is not stated so starkly; it is shrouded in doublespeak. The doublespeak is needed because of the contradiction between the Bush Administration's concept of freedom and democracy and the actual principles and requirements of freedom and democracy. Talk of spreading democracy looms large in the National Security Strategy. But when President Bush says, as he does frequently, that freedom will prevail, he means that America will prevail. In a free and open society, people are supposed to decide for themselves what they mean by freedom and democracy, and not simply follow America's lead. The contradiction is especially apparent in the case of Iraq, and the occupation of Iraq has brought the issue home. We came as liberators, bringing freedom and democracy, but that is not how we are perceived by a large part of the population. It is ironic that the government of the most successful open society in the world should have fallen into the hands of people who ignore the first principles of open society. At home Attorney General John Ashcroft has used the war on terrorism to curtail civil liberties. 
Abroad the United States is trying to impose its views and interests through the use of military force. The invasion of Iraq was the first practical application of the Bush doctrine, and it has turned out to be counterproductive. A chasm has opened between America and the rest of the world. The size of the chasm is impressive. On September 12, 2001, a special meeting of the North Atlantic Council invoked Article 5 of the NATO Treaty for the first time in the alliance's history, calling on all member states to treat the terrorist attack on the United States as an attack upon their own soil. The United Nations promptly endorsed punitive U.S. action against al-Qaeda in Afghanistan. A little more than a year later the United States could not secure a UN resolution to endorse the invasion of Iraq. Gerhard Schröder won re-election in Germany by refusing to cooperate with the United States. In South Korea an underdog candidate was elected to the presidency because he was considered the least friendly to the United States; many South Koreans regard the United States as a greater danger to their security than North Korea. A large majority throughout the world opposed the war on Iraq. Where are we in this boom-bust process? The deteriorating situation in Iraq is either the moment of truth or a test that, if it is successfully overcome, will only reinforce the trend. Whatever the justification for removing Saddam Hussein, there can be no doubt that we invaded Iraq on false pretenses. Wittingly or unwittingly, President Bush deceived the American public and Congress and rode roughshod over the opinions of our allies. The gap between the Administration's expectations and the actual state of affairs could not be wider. It is difficult to think of a recent military operation that has gone so wrong. Our soldiers have been forced to do police duty in combat gear, and they continue to be killed. 
We have put at risk not only our soldiers' lives but the combat effectiveness of our armed forces. Their morale is impaired, and we are no longer in a position to properly project our power. Yet there are more places than ever before where we might have legitimate need to project that power. North Korea is openly building nuclear weapons, and Iran is clandestinely doing so. The Taliban is regrouping in Afghanistan. The costs of occupation and the prospect of permanent war are weighing heavily on our economy, and we are failing to address many festering problems—domestic and global. If we ever needed proof that the dream of American supremacy is misconceived, the occupation of Iraq has provided it. If we fail to heed the evidence, we will have to pay a heavier price in the future. Meanwhile, largely as a result of our preoccupation with supremacy, something has gone fundamentally wrong with the war on terrorism. Indeed, war is a false metaphor in this context. Terrorists do pose a threat to our national and personal security, and we must protect ourselves. Many of the measures we have taken are necessary and proper. It can even be argued that not enough has been done to prevent future attacks. But the war being waged has little to do with ending terrorism or enhancing homeland security; on the contrary, it endangers our security by engendering a vicious circle of escalating violence. The terrorist attack on the United States could have been treated as a crime against humanity rather than an act of war. Treating it as a crime would have been more appropriate. Crimes require police work, not military action. Protection against terrorism requires precautionary measures, awareness, and intelligence gathering—all of which ultimately depend on the support of the populations among which the terrorists operate. Imagine for a moment that September 11 had been treated as a crime. 
We would not have invaded Iraq, and we would not have our military struggling to perform police work and getting shot at. Declaring war on terrorism better suited the purposes of the Bush Administration, because it invoked military might; but this is the wrong way to deal with the problem. Military action requires an identifiable target, preferably a state. As a result the war on terrorism has been directed primarily against states harboring terrorists. Yet terrorists are by definition non-state actors, even if they are often sponsored by states. The war on terrorism as pursued by the Bush Administration cannot be won. On the contrary, it may bring about a permanent state of war. Terrorists will never disappear. They will continue to provide a pretext for the pursuit of American supremacy. That pursuit, in turn, will continue to generate resistance. Further, by turning the hunt for terrorists into a war, we are bound to create innocent victims. The more innocent victims there are, the greater the resentment and the better the chances that some victims will turn into perpetrators. The terrorist threat must be seen in proper perspective. Terrorism is not new. It was an important factor in nineteenth-century Russia, and it had a great influence on the character of the czarist regime, enhancing the importance of secret police and justifying authoritarianism. More recently several European countries—Italy, Germany, Great Britain—had to contend with terrorist gangs, and it took those countries a decade or more to root them out. But those countries did not live under the spell of terrorism during all that time. Granted, using hijacked planes for suicide attacks is something new, and so is the prospect of terrorists with weapons of mass destruction. To come to terms with these threats will take some adjustment; but the threats cannot be allowed to dominate our existence. Exaggerating them will only make them worse. 
The most powerful country on earth cannot afford to be consumed by fear. To make the war on terrorism the centerpiece of our national strategy is an abdication of our responsibility as the leading nation in the world. Moreover, by allowing terrorism to become our principal preoccupation, we are playing into the terrorists' hands. They are setting our priorities. A recent Council on Foreign Relations publication sketches out three alternative national-security strategies. The first calls for the pursuit of American supremacy through the Bush doctrine of pre-emptive military action. It is advocated by neoconservatives. The second seeks the continuation of our earlier policy of deterrence and containment. It is advocated by Colin Powell and other moderates, who may be associated with either political party. The third would have the United States lead a cooperative effort to improve the world by engaging in preventive actions of a constructive character. It is not advocated by any group of significance, although President Bush pays lip service to it. That is the policy I stand for. The evidence shows the first option to be extremely dangerous, and I believe that the second is no longer practical. The Bush Administration has done too much damage to our standing in the world to permit a return to the status quo. Moreover, the policies pursued before September 11 were clearly inadequate for dealing with the problems of globalization. Those problems require collective action. The United States is uniquely positioned to lead the effort. We cannot just do anything we want, as the Iraqi situation demonstrates, but nothing much can be done in the way of international cooperation without the leadership—or at least the participation—of the United States. Globalization has rendered the world increasingly interdependent, but international politics is still based on the sovereignty of states. 
What goes on within individual states can be of vital interest to the rest of the world, but the principle of sovereignty militates against interfering in their internal affairs. How to deal with failed states and oppressive, corrupt, and inept regimes? How to get rid of the likes of Saddam? There are too many such regimes to wage war against every one. This is the great unresolved problem confronting us today. I propose replacing the Bush doctrine of pre-emptive military action with preventive action of a constructive and affirmative nature. Increased foreign aid or better and fairer trade rules, for example, would not violate the sovereignty of the recipients. Military action should remain a last resort. The United States is currently preoccupied with issues of security, and rightly so. But the framework within which to think about security is collective security. Neither nuclear proliferation nor international terrorism can be successfully addressed without international cooperation. The world is looking to us for leadership. We have provided it in the past; the main reason why anti-American feelings are so strong in the world today is that we are not providing it in the present.
Horses have long been one of the most economically important domesticated animals, and have played an important role in the transport of people and cargo for thousands of years. Most notably, horses can be ridden by a person perched on a saddle attached to the animal, and are also widely harnessed to pull objects like wheeled vehicles or plows. In some human cultures, horses are also widely used as a source of food. Though isolated domestication may have occurred as early as 4500 BC, clear evidence of widespread use by humans dates to no earlier than 2000 BC, as evidenced by the Sintashta chariot burials (see Domestication of the horse). Until the middle of the 20th century, armies used horses extensively in warfare; soldiers still refer to the groups of machines that have replaced horses on the battlefield as "cavalry" units, and sometimes preserve traditional horse-oriented names for military units (Lord Strathcona's Horse). The earliest evidence for the domestication of the horse comes from Central Asia and dates to approximately 4,000 BCE. Competing theories exist as to the time and place of initial domestication. Wild species continued to survive into historic times. For example, the Forest Horse (Equus ferus silvaticus, also called the Diluvial Horse) is thought to have evolved into Equus ferus germanicus, and may have contributed to the development of the heavy horses of northern Europe, such as the Ardennais. The tarpan, Equus ferus ferus, became extinct in 1880. Its genetic line is lost, but its phenotype has been recreated by a "breeding back" process, in which living domesticated horses with primitive features were repeatedly interbred. Thanks to the efforts of the brothers Lutz Heck (director of the Berlin zoo) and Heinz Heck (director of Tierpark Munich Hellabrunn), the resulting Wild Polish Horse or Konik more closely resembles the tarpan than any other living horse. 
Przewalski's Horse (Equus ferus przewalskii), a rare Asian species, is the only true wild horse alive today. Mongolians know it as the taki, while the Kirghiz people call it a kirtag. Small wild breeding populations of this animal exist in Mongolia. Wild animals, whose ancestors have never undergone domestication, are distinct from feral animals, who had domesticated ancestors but now live in the wild. Several populations of feral horses exist, including those in the West of the United States and Canada (often called "mustangs") and in parts of Australia ("brumbies") and New Zealand ("Kaimanawa horses"). Isolated feral populations are often named for their geographic location; in Namibia feral animals known as Namib Desert Horses live in the desert, while the Sable Island Horses are resident on Sable Island, Canada. Feral horses may provide useful insights into the behavior of ancestral wild horses.
Find Chinese medicine training in Canada and the United States. Chinese medicine training is now widely available in North America. Students drawn to the healing arts will find that several acupuncture and Oriental medicine schools offer many different Chinese medicine training programs. While a number of the academic courses include practical Chinese medicine training in Qi gong, Tai Chi and Tuina, a great many alternative and traditional medicine schools offer, or have begun offering, extensive Chinese medicine education in acupuncture and TCM (Traditional Chinese Medicine). As both a complementary and an alternative healing treatment, Chinese medicine training is crucial for prospective healers seeking to become certified and/or licensed practitioners of the art. In modern academic institutions, Chinese medicine training programs encompass a variety of health classes, including but not limited to studies in shiatsu, acupressure, acupuncture, Chinese medicine doctrines and theories, herbal medicine, moxibustion, cupping, Asian bodywork therapies, meridian therapy and other relevant education. Students who want to enrol in degree courses (including acupuncture and Oriental medicine degrees) will find that many Chinese medicine colleges and schools require conventional prerequisites prior to registration. Requirements may include training and formal education at a technical school, university or traditional college. It is wise to carefully examine all academic requirements prior to applying for any Chinese medicine training program, as schools can vary in this respect, along with tuition, program lengths, certification, etc.
I've been pondering the imponderable - when is something so important that it can't be addressed? Sometimes I think many firms consider innovation to be almost mystical in nature, something that has to be fully considered to be approachable. Here's a challenge for you. Find me a firm, any firm, that isn't telling its people, its customers and its investors that innovation is important. Can you imagine that? Telling these constituents that innovation isn't important is like telling people that oxygen isn't important. So, let's take as a given that most firms advocate a bias toward innovation. Then, carefully consider what most firms are actually DOING. In many firms, innovation is the big pink elephant in the room, something that everyone knows they should be doing, something that anyone could be doing, but no one is really responsible, and what's more, there are so many other pressing responsibilities - costs to cut, organizations to restructure - that it just never seems that innovation is urgent enough. I like to say that most firms innovate based on two drivers - vision or fear. A few firms have very visionary leadership that drives innovation from the top down and makes it a corporate mantra. We all know who those firms are. Many other firms turn to innovation when it appears that all the other things that were so pressing - cutting costs, outsourcing, restructuring - turn out not to create interesting new products or services that drive revenue growth or market share. These firms turn to innovation as a last desperate resort. Even then, many of them are like the old Adam Ant song - "desperate, but not serious". I think we've been asking the wrong question. Innovation is not urgent or important because it is simply too nebulous. I think the question should be - is organic revenue growth important? Is gaining market share important? Is taking market leadership through distinctive products and services important? If so, how important?
Is it important enough to pull a few interested individuals away from the things they are doing that aren't contributing to a differentiated product or service? There are two problems embedded here. Everyone realizes that innovation is important, but it is simply too abstract, so it's impossible to implement a program or scheme to generate it. If we can more adequately define and communicate the potential outcomes of a good innovation program, then we can probably get more buy-in and commitment to those results and what it will take to obtain them. Then, innovation (or its outcomes) will become important, urgent and something firms do more than pay lip service to. One of my favorite movie lines comes from the movie Hope and Glory, a movie about the Second World War and the blitz of England. In the movie, a bunch of young kids run around in bombed-out buildings looking for souvenirs. When they induct a new kid into their group, they then proceed to break everything in the house that's left standing. Their motto is "let's smash things". The juxtaposition is kids in bombed-out houses who are destroying what's left of many people's belongings. But they don't know any better and really don't understand what's happening; they just explore the surroundings and break things with glee. Wouldn't it be great from an innovation perspective if you had permission to smash things in your organization? Oftentimes we work so carefully not to upset the apple cart that any innovation we can create is so safely packaged that it can't hurt anything or bother anyone. Safety is often the arch nemesis of innovation. The more you try to shape, package and control it, the less innovation you'll get. People understand this as well. When they are told to be more innovative, they first consider all the spoken and unspoken issues and challenges to innovation. "I can't do this because that would anger Mr. X in that team, and we've clearly been instructed to leave that product alone.
We certainly don't want to disrupt our own products or markets, and entering new products seems too risky." This mindset leads you to a watered-down product or service that everyone recognizes as immediately obvious, so what's so cool about innovation? What if - those are clearly innovation words - what if we could smash anything and make it better or eliminate it altogether? What if we broke things down and eliminated them, or worked as if the innovation were more important than any existing expectation or investment? That would be innovative. Could you get your teams to think like that, rather than work with the blinders they often put on themselves? Kids in a bombed-out house have two choices. They can try to stack the bricks back the way they were and create order, or they can gleefully smash the rest. One approach has an expectation of care and order and control. The other is interested to see what might happen next. Both may lead to positive outcomes, but anything with the word gleeful attached to it has got to be more attractive and more interesting. For you innovation leaders - when's the last time you asked someone to "smash things"? I'm proud to announce that my new book Make us more Innovative is being launched today. Based on several years of innovation consulting, Make us more Innovative was written with the innovation manager or chief innovation officer in mind. Frequently this individual has been given the requirement to "make their firm more innovative" and they need a guidebook or roadmap to help accomplish that task. This book has relevance for a firm just starting a consistent innovation program or a firm that has enjoyed some success but wants to take its focus further. "Make Us More Innovative is a valuable tool for any business hoping to understand and create a culture of innovation. Phillips does a good job of laying out all the steps that will help you make your innovation initiative a reality."
Make us more innovative is a process road map for successful innovation. Each step offers the innovation traveler comfort along the way. Your Mission: Use It. Jeffrey Phillips is right on the mark when he advises that companies must weave innovation throughout their organization, and that innovation must be an integral part of day-to-day operations. This is a practical book for working managers, with twelve concrete steps that managers can take to transform their organizations into innovative firms. If you are interested in purchasing Make us more Innovative, you can find it at Amazon, Barnes&Noble and iUniverse online. We throw these words around quite frequently, assuming that other people know what we mean when we say "incremental" or "disruptive" innovation. What do we mean when we say these things, and where do these ideas come from? If you were to think of innovation as a spectrum, on the most conservative end of the spectrum is incremental innovation. Really, on the MOST conservative end of that spectrum are concepts like product roadmaps and next product releases. Incremental innovation is simply the next version of a product or service - the most likely next release or product. As you proceed along the spectrum you'll come to concepts like breakthroughs - these can represent a distinct change in a technology or service or business model from the existing solution. Finally, as you edge ever closer to the most radical end of the spectrum, you'll come to the disruptive ideas - ideas that significantly change an industry or a marketplace, and force competitors to adjust their view of the world. Incremental ideas are fairly easy to acquire. Folks in your business are walking around with them right now, all the time. They are natural extensions of existing products and services. Breakthroughs are the ideas that Michael Keaton made famous in the movie Night Shift.
Do you remember his tape recorder where he would record great ideas like "feed the tuna fish mayonnaise"? Breakthroughs often combine two concepts that were not thought of as practical or relevant to combine. You hear these ideas every day as well, and quickly reject them because they often don't appear realistic on the surface. Disruptive ideas are those that would cannibalize your existing products or markets, or radically change your market or industry. These are the ideas that people in your teams toss out occasionally and everyone chuckles nervously, because anyone within the industry would be crazy to do them, due to the destruction of the infrastructure or investment. However, anyone outside the industry would be very likely to do them to enter and disrupt the market and force the existing providers to adjust. This is also why truly disruptive ideas almost always come from outside an industry. It's simply too hard to disrupt your own industry, due to the investments and existing product portfolio. But, with the right combination of products and services and business models, you might disrupt another market or industry. Use the automobile industry as an example. No matter how innovative the market gets around transportation, the industry is going to provide a solution that has an engine and four wheels. They can't think any other way. So an idea like a more efficient gas engine or hybrid is a natural for these guys, and a big stretch is a fuel cell engine. Someone like Dean Kamen, or a light airplane designer or some other team outside of the automotive industry, will be the ones who ultimately disrupt this market. While you might think this post is going to be a recap of some of the news about innovation, it isn't. This post is about innovation in the way news is captured and reported. What seemed like a slow-moving old-model industry is demonstrating that it still has some innovation life. CNN is demonstrating a beta of something they call "I Report".
Using this capability, all of us become stringers for CNN, local reporters creating content and news that CNN can decide to use, or can at least publish on the iReport site. What CNN has done is combine the power of blogs with the power of the individual and marry that to the content management of YouTube to create a framework for any of us to identify, capture and report news. Now, what seems like news to you may not seem like news to me, so CNN will eventually filter out what's "news" from what's not. But what they've done is to some extent extraordinary. CNN is recognizing that news happens everywhere and that they can't possibly report all of the interesting news. Indeed, what happens on a regular basis is that much news of interest to many people is never reported - either CNN or other networks had no one there to cover the news, or it was spiked somewhere in the editing process. What we see on CNN's website or on the nightly news represents a tiny fraction of what happens that might be interesting on any one day. Now CNN is providing other outlets for that news. You, me, any of us can be reporters and capture the news and report it to iReport. The folks at CNN may then determine to use that report as the basis for a story - or they may not, but your news will still be posted and available for others to see and read. Hmm. This sounds suspiciously like the innovation approach P&G used when it determined that its scientists couldn't possibly think up all of the best new products. Remember that P&G turned to an open innovation model, so many more individuals, companies and researchers could submit ideas to P&G. I guess what CNN is recognizing is that there's no special competitive advantage for capturing the news, other than being in the right place at the right time. Would any of us know Dan Rather if it hadn't been for the hurricane that brought him to prominence? Maybe. 
But CNN and P&G are demonstrating that they believe there is some value in the process - analyzing and determining which stories (in the CNN case) or which ideas and products (in the P&G case) to commercialize. What's nice about the CNN site is that even if your story isn't "picked up" it is still visible and becomes part of the "news" on iReport. With enough interest, time and tools, this could become a site with as large a following as YouTube. I also notice that CNN is building in links to social networking sites, and has some functionalities that would enable iReport to build social networks as well. Somebody at CNN has been paying attention and is at a minimum copying innovative ideas from P&G and Facebook. While that may not seem like a big innovation generally, this is a huge innovation in the media and news world.
Lecture 17 gives an example where test data is used to calculate means for pre-processing training data. It is indicated that doing so will bias the results such that the performance will be inflated when the model is tested on the test set. It makes sense to me that test data should not be used at all for learning parameters of a model, including parameters for pre-processing. After all, when a model is used in production, the pre-processing parameters have to already exist, and can't be a function of online data. However, I am having a difficult time understanding the intuition regarding the example from Lecture 17. Why is it that using test data to calculate means for normalizing the data improves the performance when testing the model? It is more clear to me why the test scores would be inflated if, say, the test labels were somehow incorporated into the training process (maybe by doing feature selection prior to splitting the data). I can think of a hyperbolic example where having access to test inputs could bias a trained model to perform well on the test data. For example, when training the model, training observations that are near points from the test data could be given extra weight, to ensure the model learns to do well on the test data. Any intuition for the original example would still be appreciated though. That is, some intuitive reason for an accuracy bias (a positive accuracy bias in the Lecture 17 example) when normalizing the training data using means and variances that were calculated using both train and test data. As the data set size grows, it seems like the issue would decrease in severity, since the means and variances of test data and training data would probabilistically become closer as n grows. Like I mentioned, it is more clear to me why this is problematic if labels from the test set were used during the training process.
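To make the comparison concrete, here is a minimal, self-contained sketch of the two pre-processing pipelines. The synthetic data and the 1-nearest-neighbor classifier are my own choices for illustration, not the Lecture 17 setup; the point is only that a scale-sensitive model sees genuinely different test inputs under the two schemes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 points, 3 features on very different scales.
X = rng.normal(loc=[0.0, 5.0, -2.0], scale=[1.0, 10.0, 0.5], size=(200, 3))
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

X_train, X_test = X[:100], X[100:]
y_train, y_test = y[:100], y[100:]

def standardize(data, mean, std):
    return (data - mean) / std

# "Snooped" pipeline: mean/std computed from train + test together.
mu_all, sd_all = X.mean(axis=0), X.std(axis=0)
Xtr_snoop = standardize(X_train, mu_all, sd_all)
Xte_snoop = standardize(X_test, mu_all, sd_all)

# Clean pipeline: mean/std computed from the training split only,
# then reused unchanged on the test split (as in production).
mu_tr, sd_tr = X_train.mean(axis=0), X_train.std(axis=0)
Xtr_clean = standardize(X_train, mu_tr, sd_tr)
Xte_clean = standardize(X_test, mu_tr, sd_tr)

def knn_accuracy(Xtr, ytr, Xte, yte):
    # 1-nearest-neighbor: distances depend directly on feature scaling.
    dists = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=2)
    preds = ytr[dists.argmin(axis=1)]
    return (preds == yte).mean()

acc_snoop = knn_accuracy(Xtr_snoop, y_train, Xte_snoop, y_test)
acc_clean = knn_accuracy(Xtr_clean, y_train, Xte_clean, y_test)
print(acc_snoop, acc_clean)
```

Whether the snooped version scores higher depends on the sample, but the two pipelines are not equivalent: the standardized test inputs (and therefore the nearest-neighbor decisions) differ whenever the train and test statistics differ.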
I have heard the same warning given regarding dimensionality reduction (I am not referring to feature selection, where test data labels are used, and I intuitively understand the consequences). In such a case, the warning is the same: when doing PCA (or some other unsupervised dimensionality reduction), do the pre-processing just on the training data and use the parameters to reduce dimensions of test data during evaluation. I also have a hard time intuitively seeing why this would bias results one way or the other. To experiment, I created: 1) a dataset that was standardized using the mean and variance of all observations, and 2) a dataset that used the first half of the original dataset to standardize the features (shifted all data using the mean of the first half, and scaled it using the variance of the first half). I then trained models using the first halves of the two datasets, and tested them on the second halves. I used a linear regression. Both models performed the same. They learned different parameters, but the performance measures were the same for both models. Is there a specific type of model that this type of snooping affects? It did not appear to make a difference on linear regression. I attached the datasets to this post. There are 3 attachments. They are all csv files (but I used txt extensions since uploads didn't work with csv extensions). The first file, cpu-original, has the original data. The next file, cpu-standardized-from-all, has the data that has been standardized using all observations. The last file, cpu-standardized-from-train, has the data which has been standardized using only the parameters (mean and variance) from the first half of the data (i.e., the training data). Note: I'm not familiar with the cpu dataset. I just tried it to see the effects of standardizing using train+test data, versus standardizing only using training data parameters. I also split training and testing sets in half, which seemed inconsequential for the purpose of this experiment. I just got home, so I was able to read through some of chapter 5 on data snooping.
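For what it's worth, the null result with linear regression has a plausible explanation: ordinary least squares with an intercept is invariant to any consistent affine rescaling of the features, so standardizing with train+test statistics and standardizing with train-only statistics yield different weights but identical test predictions. A small sketch (with synthetic data standing in for the cpu dataset, an assumption of mine):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data standing in for the cpu dataset.
X = rng.normal(size=(100, 4))
y = X @ np.array([2.0, -1.0, 0.5, 3.0]) + rng.normal(size=100)

X_train, X_test = X[:50], X[50:]
y_train = y[:50]

def fit_predict(Xtr, ytr, Xte):
    # Ordinary least squares with an explicit intercept column.
    A = np.column_stack([np.ones(len(Xtr)), Xtr])
    w, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return np.column_stack([np.ones(len(Xte)), Xte]) @ w

# Standardize with statistics from ALL observations (train + test).
mu_all, sd_all = X.mean(axis=0), X.std(axis=0)
pred_all = fit_predict((X_train - mu_all) / sd_all, y_train,
                       (X_test - mu_all) / sd_all)

# Standardize with statistics from the training half only.
mu_tr, sd_tr = X_train.mean(axis=0), X_train.std(axis=0)
pred_tr = fit_predict((X_train - mu_tr) / sd_tr, y_train,
                      (X_test - mu_tr) / sd_tr)

# The learned weights differ, but the test predictions coincide,
# because OLS is invariant to a consistent affine rescaling.
print(np.allclose(pred_all, pred_tr))  # True
```

This suggests the experiment would need a scale-sensitive model to expose a difference - e.g. k-nearest-neighbor, or any regularized regression, where the penalty depends on the scale of the weights.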
The exchange rate prediction example seems particularly vulnerable to this problem. I can't express it formally at the moment, but it seems like labels from the test set are making their way into the training set, since the input data consists of data that perfectly matches labels (that is, a label from observation i will be part of the input data of observation i+1, given the way the data set is constructed). I would be interested in the results if the same experiment were run with a much sparser dataset, such that any given rate change only shows up in one row of data. So I suppose that there may be cases where incorporating test input data (not labels, just the raw unsupervised input) may be benign (like the example I gave in earlier posts), but it could have consequences in non-obvious ways. Regarding dimensionality reduction, I've seen references to both negative consequences and benign consequences. I have not run any experiments myself. It sounds like it could have non-obvious consequences (similar to the consequences of using test data for getting normalization parameters from Lecture 17). "I detected only half of the generalization error rate when not redoing the PCA for every surrogate model" As before, any insight would be greatly appreciated, especially if any of these ideas have been formalized elsewhere. Your questions are indeed subtle. Indeed, it is very important to heed the warning at the bottom of page 9-5. I highly recommend problem 9.10 as a concrete example of what can go wrong. The problem that occurs can be illustrated with PCA, which does a form of dimensionality reduction. PCA identifies an `optimal' lower dimensional manifold on which the data sit. If you identify this manifold using the test inputs, then you will (in some sense) be throwing away the least amount of the test inputs' information that you can, retaining only that part of each test input in the optimal lower dimension.
Now, if you did the PCA using only the training data, you will create your lower dimensional manifold to throw away the least amount of information in your training set. When you come to use this lower dimensional manifold on the test data (since it was not optimal for the test data), you will find that you may have thrown away important information in the test inputs, which will hurt your test error. The golden rule is that to make predictions on your test set, you can *only* use information from your training set. That is the way it is in practice, and that is the way you should evaluate yourself during the learning phase. Here is a very simple way to check if you have data snooped. Before you do any learning, assume the data has been split into a training and test set for you. Run your entire learning process and output your final hypothesis g. Now, go and set all your data in your test set to strange values, like 0 for all the inputs and random target labels. Run your entire learning process again on this new pair of training set and perturbed test set and output your final hypothesis g'. If g ≠ g', then there has been data-snooping -- the test set is in some way influencing your choice of g. In learning from data, you must pay a price for any choices made using the data. Sometimes the price can be small or even zero, and sometimes it can be high. With snooping through input-preprocessing, the price is not easy to quantify; however, it is non-zero.
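The check described above can be sketched in a few lines. The pipeline below is a toy, assumed example (standardize, then least squares without an intercept, so the standardization parameters genuinely affect the hypothesis); the snooped variant computes the standardization parameters from the train and test inputs together, while the clean variant uses the training inputs only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 100 rows, 3 features, linear target plus noise.
# The features have nonzero mean so shifting actually matters.
X = rng.normal(loc=2.0, size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
X_tr, X_te, y_tr = X[:70], X[70:], y[:70]

def learn(X_tr, y_tr, X_te):
    """A snooping pipeline: standardization parameters are (wrongly)
    computed from the training AND test inputs."""
    ref = np.vstack([X_tr, X_te])
    mu, sd = ref.mean(axis=0), ref.std(axis=0)
    Z = (X_tr - mu) / sd           # no intercept, so mu and sd matter
    w, *_ = np.linalg.lstsq(Z, y_tr, rcond=None)
    return w

def learn_clean(X_tr, y_tr, X_te):
    """Same pipeline, but parameters come from the training inputs only."""
    mu, sd = X_tr.mean(axis=0), X_tr.std(axis=0)
    Z = (X_tr - mu) / sd
    w, *_ = np.linalg.lstsq(Z, y_tr, rcond=None)
    return w

# The check: perturb the test set to strange values and rerun everything.
X_te_junk = np.zeros_like(X_te)

g, g_prime = learn(X_tr, y_tr, X_te), learn(X_tr, y_tr, X_te_junk)
snooped = not np.allclose(g, g_prime)   # hypothesis depends on the test set

h, h_prime = learn_clean(X_tr, y_tr, X_te), learn_clean(X_tr, y_tr, X_te_junk)
clean = not np.allclose(h, h_prime)     # hypothesis is unchanged

print("snooped pipeline changed:", snooped)
print("clean pipeline changed:", clean)
```

Perturbing the test inputs moves the pooled mean and standard deviation, so the snooped pipeline outputs a different g, flagging the leak; the clean pipeline is unaffected, exactly as the rule above predicts.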
Srinagar: Over 100 youths were rounded up in a crackdown on stone pelters even as fresh clashes erupted today in the old city during a strike call by the hardline faction of the Hurriyat Conference that disrupted normal life in all major towns of the Kashmir Valley. Shortly after Friday prayers, groups of youths took out a procession from Jamia Masjid and started pelting stones when police and paramilitary forces prevented them from marching forward, official sources said. The clashes were continuing till evening at Saraf Kadal, Bohri Kadal and Rajouri Kadal in the old city, the sources said. Police have arrested over 100 youths from different parts of the city and the Baramulla and Sopore towns of north Kashmir for their alleged involvement in stone pelting incidents since Wednesday. The youths are in the age group of 18-23 and were rounded up during raids conducted in different areas of the old city, police sources said. 'We have arrested only those youths whose images have been captured by our videographers. We have got enough evidence to prove that the arrested youths are involved in stone pelting,' police sources said. They said that FIRs have been registered against most of the arrested youths and they would be produced in court soon. However, family members of the arrested youths accused the police of harassing them and their wards. Reports from Sopore and Baramulla in north Kashmir said the police have arrested more than 20 youths in connection with stone pelting incidents. Younis Ali (13) was arrested by police on Wednesday from a school in the old city, his family claimed. A student of class VIII, Younis was arrested from his examination centre at the government boys high school, Rainwari. Police picked him up, alleging the teenaged student participated in stone throwing protests in the area, they said. 'Policemen had gone to his school to arrest him and did not allow him to appear in the exam,' said his mother Atika.
'When Tufil Ahmad Mattoo of Saidakadal was killed, everyone was out on the streets pelting stones. He (Younis) was also among them. He was not the only person pelting stones at the security forces,' Atika said. Meanwhile, shops and business markets remained closed and traffic remained off the roads. However, attendance in government offices and educational institutions was near normal, official sources said. Following the call for strike, the chief secretary, SS Kapur, issued instructions to the divisional commissioners of Kashmir and Jammu, all deputy commissioners and heads of departments to immediately constitute special teams to monitor attendance in government offices and institutions. The authorities also announced that the ongoing 10th class examinations would be held as per schedule today. The hardline faction of the Hurriyat yesterday gave the call for a strike today to protest against alleged human rights violations.
It’s been less than a month since it began, thankfully for most, and yet the hemorrhaging from the University of Texas football program continues. In the short time since the Longhorns ended their disastrous 5-7 season, head coach Mack Brown has gone from an in-control-of-the-program CEO to looking like Scotty Smalls trying to make friends and play backyard baseball in The Sandlot. In other words, he’s got some work to do…and fast. Up until last week, the Longhorns had seen four coaches depart since November – offensive coordinator Greg Davis, offensive line coach Mac McWhorter, defensive coordinator Will Muschamp, and defensive line coach Mike Tolleson. But, to put a wrapper on 2010, wide receiver coach Bobby Kennedy resigned on Dec. 30, as expected, to make a lateral move to become the University of Colorado’s wide receiver coach. Make that five. Throughout a tumultuous December, Longhorn fans across the country spread coaching hire rumors as fast as they could drink a bottle of Salt Lick BBQ sauce. They threw around more names than Santa Claus could rattle off reindeer. And yet the New Year passed with nothing from the halls of Belmont. What exactly was Mack Brown doing over there? Had we been naughty and not nice? On Monday and Tuesday this week, fans began to get some answers – albeit not quite the names or coordinator-level titles fans were expecting. The first presser of 2011 brought us Darrell Wyatt as the new wide receiver coach and co-recruiting coordinator, and you can watch Wyatt’s introductory press conference here. Wyatt is a Texas-born Kansas State alumnus who is a get-to-the-point coach with credible Big 12 Conference experience and has been a wide receiver coach and an offensive coordinator, not to mention a recruiting extraordinaire.
The problem might be, he’s a gypsy of sorts – making his rounds, year after year, at different schools around the country, including Kansas (most recently), Baylor, oklahoma, and Oklahoma State…and those are just his Big 12 Conference stops. In fact, he’s coached at 14 different universities in his 21 years of coaching. That said, Wyatt can downright get kids to come play for him and turn them into top-tier talent – see also Adrian Peterson (oklahoma), Mark Clayton (oklahoma), Rashaun Woods (Oklahoma State), and Mike Thomas (Arizona). He’s recruited from Texas for most of his coaching tenure, including the Dallas/Ft. Worth metroplex, Houston and East Texas, and Central Texas. It’s an exciting addition, and ‘Horns fans can be assured that Wyatt will turn out as much talent to the next level as former offensive coordinator Greg Davis ruined. Another positive for Wyatt – his youth and energy. Brown’s talked about it, and now it’s coming to fruition – a much-needed addition to the retirement home-bound staff that had been residing in Austin the past few seasons. In addition to Wyatt, Mack Brown also announced that Bo Davis, who has served as a Nick Saban disciple at LSU, the Miami Dolphins, and Alabama, is joining the Texas Longhorns staff, making a lateral move to become the ‘Horns defensive line coach. During his tenure with the Crimson Tide, Davis has had a top-10 defense year in and year out in one of the toughest conferences in the country, and he has had several defensive linemen become all-conference or all-American players. Prior to joining the ranks of Saban’s various staffs across the southeast, Davis spent several years coaching at Galena Park North Shore High School in Texas, including coaching former Longhorn DE Cory Redding, and he has relationships with high schools across the state. Given his background as an LSU alumnus and assistant, Davis also brings inroads to the top high schools in Louisiana.
The question now becomes whether Brown is making random hires that he hopes work well together under his tutelage. It seems odd, to this writer anyway, to hire position coaches when the coordinator positions are still up in the air. At least, publicly still up in the air. Maybe Brown’s got his CEO house in order, has lined up more than we know behind the scenes, and has everything but signatures on the dotted line. Maybe he’s building a staff based on input from those to-be-named resources. Rumors are circulating that leading candidates for the offensive and defensive coordinator positions are also in Austin interviewing this week. While many expected Teryl Austin (Florida), Everett Withers (North Carolina), or even former Longhorn Jerry Gray (Seattle Seahawks) to be leading defensive coordinator candidates, it appears as though Brown is after another young, energetic SEC coach instead – none other than Mississippi State’s Manny Diaz. Diaz would be an interesting hire, and what he’s done with a middle-of-the-road SEC team suggests he could flourish with the talent in Texas. On the offensive side of the ball, many have considered Boise State’s or Wisconsin’s coaching gurus to be the focus of the search, and that seems to be more or less true, as the Badgers’ offensive coordinator Paul Chryst is supposedly the top target. But, don’t rule out the Broncos’ OC, although it sounds like he wants some of his boys (namely, his offensive line coach) to come along for the ride if he signs a contract to come to Austin. Only time will tell, but as the college bowl season wraps up and the recruiting window opens up again leading into Signing Day in early February, it’s due time to name some coaching talent, get them to Austin, and get them on the road solidifying what is and could still be the #1 recruiting class of 2011.
What at first appeared to be a blessing in disguise for Texas this off-season is quickly becoming a nightmare for Texas football head coach Mack Brown and the athletic department. Following his first losing season and the worst of his Texas tenure (5-7), pressure was on Brown to replace key coaching positions on his staff where deficiencies were observed. That meant a swift “adios” to long-time offensive coordinator Greg Davis, as well as line coaches Mac McWhorter and Mike Tolleson. Today, in the wake of Florida Gator coach Urban Meyer’s second retirement in Gainesville, athletic director Jeremy Foley has announced the Gators have hired away Brown’s coach-in-waiting for the Longhorns – none other than defensive coordinator Will Muschamp. Holy Davy Crockett with a raccoon hat. Next, we’re going to find out the Confederacy won the Civil War, the French army is something to be reckoned with, and there were no weapons of mass destruction in Iraq. Brown’s got some big holes to fill, and just when he thought things were lining up perfectly for him to retire in the not-too-distant future. The two lead coordinator positions, plus the two line coaches – and maybe a wide receiver coach to boot – lead to a very, very busy off-season. Anyone else think 2011’s becoming a re-building year…again?! Hey, Greg Robinson…tired of working for Rich Rodriguez and getting your door knocked on by the NCAA every other day? Hey, Gene Chizik, when you’re done coaching your Heisman Trophy quarterback in the national championship game this year, would you be interested in coming back to coach the defense in Austin? Hey, Major Applewhite, aren’t you glad you’re sticking around…opportunities are becoming more and more available for you, my man. It’s evident, even more so with this latest departure – EyesOfTX is quickly seeing a very, very young coaching staff taking over the helm in Austin in the next 3-5 years. Texas’ Mack Brown Turns Mafioso?
Suffice it to say, there are a few mixed feelings about offensive coordinator Greg Davis’ departure from the Texas Longhorns football team. But, we all know that head coach Mack Brown loves to control his “empire” and these latest coaching retirements and resignations are no surprise given the heat from the Texas fans and boosters. What’s next? Opening laundromats and restaurants? Well, at least more of them do now – and, of course, I’m referring to Texas Longhorn football coaches. With this week’s announcement that at least three, maybe four, coaches would be resigning from the Texas Longhorns football staff, there is quite a bit of work to be done to fill the void. That said, there is one coach in particular that Texas fans can agree they’ll be glad to see go – offensive coordinator Greg Davis. While he did bring some success to Texas during his tenure (including a 2005 National Championship and the Frank Broyles award for the Top Assistant Coach), it’s been hard to assess whether his impact on his players and the program was good or bad. Now, he’s been reduced to fixing classic cars, ballroom dancing, and illegal gun running, according to some sources. Either way, after 13 years, Texas fans owe Greg Davis a “thank you” for all that he’s given the university, so on your behalf, EyesOfTX has taken a stab at a proper send-off, below. It’s been quite a run you’ve had at the University of Texas and with the Longhorns football program. In your 13 years, you’ve given us many memories, and we couldn’t let you slink off into obscurity without highlighting some of the moments for which we owe you thanks. Thank you for convincing Ricky Williams to stay for one more year. Thank you for benching Chris Simms in place of Major Applewhite in the 2000 Big 12 Championship game; one quarter sooner and we would’ve played for another national title. Thank you for recruiting Vince Young, Colt McCoy, Garrett Gilbert, and Cedric Benson to Texas, but not for recruiting Chris Simms.
Thank you for starting Major Applewhite in the 2001 Holiday Bowl against #21 Washington. Thank you for the bubble screen. Thank you for allowing Vince Young to utilize his skills in the zone read offense. Thank you for never getting the offense off to a fast start. Thank you for figuring out a way to beat oklahoma 6 times (but not for losing to them 7 times). Thank you for the play call to Quan Cosby on the final play of the game in the 2009 Fiesta Bowl. Thank you for the quarterback option call on the fifth play of the 2009 National Championship game against #1 Alabama. Thank you for boosting Colt McCoy’s sense of self-worth by limiting our running backs enough that he was the leading rusher in 9 out of 10 games. Thank you for advising Mack Brown on various weight loss schemes that took him from…in the words of “Can’t Buy Me Love,” geek status, to king status, to no status. We appreciate your time in Austin, but are ready for and in need of an offensive change that doesn’t take three years to implement. We will try to forget your ignorance around teaching the quarterbacks how to read the blitz, your failure to figure out how to run a successful screen pass to the talented running backs, your one-yard wide receiver bubble screens on 3rd and long, your insistence on running a set type of offense with the wrong kind of player personnel, your never getting the most out of the talent on the field, and your thinking you were better than you were while never understanding where you made mistakes or how to fix them. We hope you enjoy your time away from football and the University of Texas; we will. What’s missing, ‘Horns fans? What would you like to “thank” Greg Davis for after all of these years? It’s about time. No one ever wants to see wholesale changes in a coaching staff, especially one that has been together as long as Mack Brown’s Texas Longhorns staff. But, after the first losing season for Texas football since Brown’s arrival in 1997, it is time for some change.
Early reports indicate that several coaches have either resigned or, at a minimum, told their players that they won’t be returning next season. The key departure (good or bad depending on your alliances) is offensive coordinator Greg Davis, who has been with Brown for all 13 years at Texas, not to mention his tenure at North Carolina and Tulane before coming to Austin. In addition, offensive line coach Mac McWhorter and defensive line coach Mike Tolleson have confirmed they are resigning, and wide receiver coach Bobby Kennedy is rumored to also be leaving the staff (although that has not yet been confirmed). That leaves a lot of holes to fill on the coaching staff, but they were all areas where the Longhorns have struggled the past 2-3 years. You can find more on the departures here, and it appears as though Mack Brown will not try to fill the positions until after the bowl season concludes. The resignations will also not be effective until August 31, 2011, when each coach’s contract expires, although they could leave sooner if they are hired away by other teams. Who are the likely candidates to fill some of those roles, you ask? Let’s pontificate, based on some rumors circulating Longhorn nation. Keep in mind, current defensive coordinator and future Texas Longhorns head coach Will Muschamp will also have some pull in hiring the new coaches, so he might help Brown and the staff dig into SEC coaching talent as well. No doubt, with some top coaches departing, the ‘Horns will have some money to spend in the off-season to get top-notch talent. – Dana Holgorsen (Oklahoma State offensive coordinator/quarterbacks coach): Led the nation’s #1 offensive juggernaut in 2010, and made a former Major League Baseball pitching prospect one of the best quarterbacks in the Big 12.
It could be tough to grab Holgorsen, though, as he’s an in-conference coach, and Cowboys alum and millionaire Boone Pickens has plenty of money to donate to keep a winning staff together in Stillwater. – Bob Bostad (Wisconsin running game/offensive line coach): When you think of the Badgers’ offense, the first thing that comes to mind is a stout running game that is based on the success of an offensive line that can run block with the best in the country year in and year out. The downside is Bostad hasn’t called the plays and would have to learn on the fly or defer to another member of the current Texas offensive staff (see also: Major Applewhite). He would be a solid hire as an offensive line coach, but that might not be the type of “promotion” or long-term growth opportunity that is attractive to a successful assistant coach. – Major Applewhite (Texas running backs coach): A natural progression for Applewhite, and more than likely one of the reasons he took a demotion from previous roles to come to Texas in the first place – to be Greg Davis’ replacement. As a former quarterback, he can help groom future talent, and he’s also called the offensive plays for Rice and Alabama (under none other than Nick Saban) in previous stints. He’s young – yet experienced, more than capable, respected by the players, has name recognition and in-roads to Texas-state talent, and it is a guarantee that he wants the job. – Mark Helfrich (Oregon offensive coordinator): As much as EyesOfTX despises all things Ducks, this might be a great hire. Helfrich has only called the plays for the “Zeroes” for two years, but their offense has been prolific during that time – and he’s got the Ducks playing in the national championship game the year after his starting quarterback transferred and his best running back went off to The League. Not bad. This one comes down to how much Nike, er…Phil Knight, er…the university is willing to pay to keep him around.
– Bryan Harsin (Boise State offensive coordinator/quarterbacks coach): This would be an interesting hire, but it might be difficult given Harsin’s a Broncos graduate. It’s hard to get talent out of Boise, as head coach Chris Petersen has a well-oiled machine under his helm. The question remains whether Boise State talent can climb up the rungs and be successful at the next level – see also: former head coach Dan Hawkins, who floundered in his attempt to translate his success in Boise to Boulder and the University of Colorado. – Stacy Searels (Georgia line coach): There is some history with Searels and Will Muschamp, and the SEC connection might help draw him to the Lone Star State. While Searels has seen success at Georgia in previous years, the past several years have been a struggle for the Bulldogs. Is that what Brown and Muschamp want to bring to town? We’ll see how things pan out, but expect some big changes in the next month or two. Luckily for the ‘Horns, the bye week means they’ve got some extra time to try and right the ship. Level their wings. Put their heads on straight. To remember they are football players for one of the most elite college programs in the game. The I-35 bubble in Austin should have a lot of blood, sweat, and even some tears after practice this week. The film room should have cots spread out across the room because players have been spending every waking hour glued to the early-season game tape to find and correct their on-field mistakes. But, it’s deeper than that. So far this season, the seniors are showing how much they feel entitled. On the field, that’s translating to Maverick’s “crashed and burned on the first one, it wasn’t pretty” lingo. The younger guys can’t buy into that – there is too much talent and potential on the depth chart. For the ‘Horns, it’s time to step up or turn in their pads because they’ve “lost the edge.” Just because it says “Texas” on the front of your jersey doesn’t mean you deserve to win.
There is one thing on Nebraska’s mind this week – redemption. December 5, 2009. Just like Texas needed one second back in the upset in Lubbock in 2008, the ‘Huskers want one second back from last year’s Big 12 Championship game. December 5, 2009. There will be no mercy rule; Nebraska is squarely set on putting the wood to Texas on Saturday in Lincoln, the teams’ final Big 12 regular season match-up. December 5, 2009. Make no mistake, this game has been circled on head coach Bo Pelini’s calendar since … December 5, 2009. Even Nebraska’s marketing department was behind an off-season shot at the Texas game (since changed) to get Husker fans excited for the upcoming season – a campaign called “Red Out Around The World.” Their mantra (and doesn’t this sound kind of familiar): “Come early. Be Loud. Wear Red. (Beat Texas).” They’ve sold out of t-shirts at the bookstore bearing the saying: “All my ex’s live in Texas: Iowa State, Kansas, Kansas State and Missouri.” The date “10-16-2010” is plastered all over campus. Don’t become ou fans, Huskers…it’s not a good look for you. His name is Taylor Martinez. His name is Taylor Martinez. His name is Taylor Martinez. Seriously, Tyler Durden probably knows this kid by now. The barely-past-puberty Martinez leads Nebraska at QB in 2010, and brings the word “amazing” to an offense that was anything but in 2009. With the same basic role players on offense, the infusion of Martinez has helped transform what was a horrific scoring attack last year into one of the best in all of college football this year. Jake Locker, eat your heart out. Though Martinez is only a redshirt freshman, ESPN NFL draft analyst Mel Kiper Jr. might have just moved him past Locker on his draft board. Everyone knows Pelini is a defensive-minded coach, and since his return to Nebraska, he’s shown his ability to craft a defensive juggernaut – even giving the “Blackshirts” nickname back to this year’s squad.
But, Martinez is the knight in shining armor for the ‘Huskers 2010 BCS run, running the zone read to perfection. He’s mobile. He’s fast. REALLY fast. Think Looney Tunes’ roadrunner. Get this: the kid is five games in to the season, and he’s already accumulated 737 yards rushing, on 10.8 yards per carry, for 12 TDs. Those are RB stats, folks. A really, really good RB. Passing? Only three TDs. You get the idea. Let’s hope Martinez doesn’t leave Will Muschamp’s Texas defense looking like Wile E. Coyote. Unfortunately, Martinez isn’t the only rushing threat. RBs Roy Helu, Jr. and Rex Burkhead flank Martinez in the backfield, and are more than capable of providing the power running attack as opposed to Martinez’s sideline-to-sideline flair. Is this bringing back UCLA nightmares yet? It should. On the outside, Martinez has the option to throw to several talented and big wide receivers – namely Niles Paul, Mike McNeill and Brandon Kinnie – but quite honestly, he just doesn’t need to. He’s only thrown for 660 yards on the season with three TDs and three INTs. Will they pass? Yes. Do they think they need to? Probably not. Most impressive is that Nebraska’s offense is built around a very inexperienced offensive line, with three new starters in 2010. Maybe Texas’ Mac McWhorter could take some lessons on how to transform on-paper talent to on-field production? The line has given up seven sacks on the season, and with Texas’ stacked defensive line, the Big Red will have their hands full maintaining their gaps and creating running lanes for Martinez, Helu, Jr. and Burkhead. This year’s “Suh” in Lincoln is none other than Suh’s cohort in the trenches last year, defensive tackle Jared Crick. He has 23 tackles and 2.5 sacks on the season, and with opponents focused on protecting against Crick, the rest of the defensive line has opportunities to shine in opponent’s backfields. Ironically, the line isn’t their strength – the ‘Huskers bring the #1 pass defense in the country. 
You’d have to utilize your abacus to add up the number of interceptions they have on the year. With Texas’ lack of a running game, expect Pelini to pressure and contain any semblance of a running game with his front four and have his secondary focus on dropping back into coverage to track down balls a la Willie Mays. Good news. Texas got their butts chewed during the bye week. Bad news. Offensive coordinator Greg Davis is probably sitting up in the press box drawing up a “revised” version of the bubble screen to a different running back or wide receiver. Good news. RB DJ Monroe has used the bye week to “learn the playbook.” OK, maybe not, but he’s getting the call to start in the backfield again this week. Bad news. No matter how well the offense plays on Saturday in Lincoln, Texas won’t win without a big performance by the defense. Offensively, this game lies in the hands of the Texas offensive line. Nebraska is prone to giving up rushing yards (well, at least more than they do through the air). If the o-line can give QB Garrett Gilbert time in the pocket, provide running lanes for the speedy Monroe, and the wide receivers can run routes beyond the first down markers, Texas does indeed have a shot. It hasn’t happened yet this season, but they’ve had their poor performances to date rubbed in their faces for too many weeks now. It’s time to change. It’s time to define the offense…on the field…on a Saturday. With freshman WR Mike Davis back in the line-up, Texas can take some shots down the field, change the dynamic of the game with big plays, and open up the field for the…gasp…running game. Defensively, Muschamp’s boys have their hands full with Nebraska’s three-pronged running attack. But, like any good football coach will tell you, even “Coach’s” Hayden Fox, beating a running team is all about playing assignment football. It’s about maintaining your gaps in the trenches, and utilizing your linebackers and secondary to clean up the mess.
It’s about not making mistakes. It’s about making sure tackles. It’s about beating Martinez to the corner with the right angles. It’s about stripping the ball and winning the turnover battle, and the ‘Huskers have put the ball on the ground 18 times this year, so it’s possible. Nebraska will get their yards on the ground, but this defense has shown glimpses of being an elite unit. They’ll need every piece of that talent and pride to win in Lincoln. Texas will also have to overcome a strong Nebraska kicking game, as the ‘Huskers will use every opportunity to pin Texas deep with punter Alex Henery and make Gilbert and company drive the length of the field, which has been a consistent problem this season. The ‘Horns have to eliminate the mistakes on special teams. Expect to see new kick and punt returners, and with a swift kick in the pants, a different attitude to bring some momentum to the Texas sidelines. It’s going to be chaotic and red in Lincoln, but Texas has a long-shot chance at avoiding a .500 start to the 2010 season and redeeming themselves in the eyes of college football’s elite.
ARE you thinking about making a fresh beginning in the new year? Before you do anything, take some time to reflect on what was good or not so good at work over the past 12 months. Only after that can you ask yourself what you can do to make 2008 a year of career advancement, job enrichment and work performance satisfaction. The following ideas might provide a good starting point for crafting your own career resolutions for the new year. 1. I will widen my network: Each month, find a way to meet half a dozen new people. Are there professional associations you can join? Also try to find time to reconnect with existing contacts. By the end of 2008, you should have made plenty of new connections and sustained important relationships with ongoing networking and communication. If you work diligently on this strategy, you will accomplish your resolution. 2. I will update my resume: Never let your resume become outdated. Allocate time every three months to update your resume to include recent projects, new accomplishments, educational courses which you have completed, technology skills you have picked up and professional affiliations you have made. Review, critique and assess whether your resume will position you effectively in the employment marketplace. Be sure to review your resume critically at least twice a year to make sure it stays focused on your career aspirations. You will then be ready to welcome the opportunity when it comes knocking. Plan well and 2008 could be the year you make a major career transition. 3. I will have a better work-life balance: This could mean a career shift to a part-time or flexible work schedule. If you are responsible for housekeeping, this may involve paying someone to take care of household chores, dining as a family more regularly, or changing jobs to work in an environment that is more accepting of your personal requirements. 
The key is to define what is most important to you and to take steps that will help make your goal of work-life balance a reality. 4. I will take control of my performance: Since your boss or manager generally only recalls your performance from the few months prior to a review, it is up to you to keep track of your accomplishments across the entire year. To do this, spend five to 10 minutes at the end of each week or month writing down what you have worked on, what you have learnt and how you have contributed to the success of your group, department and organisation. You will be prepared with plenty of examples when it is time for your next performance review. 5. I will find a mentor or become one: A mentor can play a critical role in advancing someone's career. Whom you seek out depends on what you want. A mentor inside your organisation might be able to help you navigate the office politics and link you to the informal networks that get you ahead. By contrast, a mentor outside your organisation can be a truly impartial adviser who has your best interests at heart without interference from organisational politics. You can also share your experiences with others by volunteering to be a mentor, perhaps to junior co-workers, recent alumni of your university or polytechnic, or people looking to break into your field. 6. I will learn more: Develop new work-related skills, try new hobbies and, generally, stimulate your mind and heart with learning. Push yourself to try something new. Going back to take that diploma or degree through evening classes or distance learning could be an important move to get ahead. Always be committed to growing your career. Some say New Year resolutions are a waste of time as they are nothing more than a long list of "should-dos" that people never take seriously anyway. Make your New Year resolutions for 2008 different. Take the time to plan them carefully. A long list of resolutions will set you up for failure. 
Think carefully about what you would like to change. Be brutally honest with yourself. Your goals should be achievable and realistic. After you set them, the rest is really up to you. You will have to think positive, be disciplined and committed so that you can say, "I did", rather than "I should have", when the end of next year rolls around.
Make a list of what you are passionate about. Enjoy making the list and enjoy reading your list. Passion is an enjoyable high frequency feeling. What are 5 FEELINGS you choose to feel today that will support your passions? Here are mine: eager, enthusiastic, cosmic reach, zealous, fervent.
Climate change is caused by the excessive release of greenhouse gases into the atmosphere; its results are vast but include an increase in the growing period of warm weather crops, as well as droughts, heat waves, a change in precipitation patterns, stronger hurricanes, rising sea levels (1-4 feet by 2100), and the melting of glaciers. Considering that climate change is happening in every ecosystem on earth, it will inevitably impact the lives of humans within them, in turn impacting the relationships between and within nations. Scientists have found a strong correlation between climate change and human migration. Moreover, within politically unstable countries, the implications of increasing temperatures have been stressors for civil conflict. Some of today’s greatest climate-induced civil wars are unfolding in Darfur and Syria, according to a majority of researchers. In Darfur, a decrease in fertile land availability has led to conflict revolving around the question of equality amongst Africans and Arabs. In Syria, a long-lasting drought was the catalyst for a peaceful revolt turned violent. These wars, in combination with potential future conflicts between Middle Eastern countries over water, will likely spur further international tensions involving migration and resource availability. The Syrian Civil War was inspired by the successful Arab Spring uprising in neighboring countries in 2011. The Arab Spring resulted in the overthrow of the oppressive Tunisian and Egyptian presidents, giving the people in Syria hope for a more democratic future. What began in 2011 as a peaceful protest against unemployment, government corruption, and a lack of political freedom quickly turned into a full-scale war of unjust and brutal death and destruction. Responsibility for the shift in the nature of the civil conflict rests on the shoulders of Syria’s authoritarian leader, President Bashar al-Assad. 
However, considering the ecological circumstances of the protest, scientists today are asking: could it be that 367,965 people died because a drought pushed them to rebel against authoritarian forces? The Gravity Recovery and Climate Experiment (GRACE) has found that Syria is among the nations with the fastest depleting water supplies in the world. In 2008, the absence of rain accounted for a 22% decrease in agricultural production. The drought, which lasted several years, caused numerous farmers to migrate towards the urban centers of Syria; between 2002 and 2009, the urban population increased by 50%. Meanwhile, between 2001 and 2007, employment in the agricultural sector decreased by 33%. Consequently, there were numerous financially challenged men and women struggling to survive in cities. General consensus amongst researchers is that the drought did contribute to the uprising of 2011. As such, the second question to ask is whether the drought was a result of man-induced climate change. Researcher Colin P. Kelley has found that higher temperatures and decreasing rainfalls in the area were greatly affected by the increasing levels of CO2 in the atmosphere. Additionally, it was recently found that human activities contributing to atmospheric CO2 levels, such as the burning of fossil fuels, made the drought two to three times more likely to occur. Although there is room to argue for other contributing factors, it can be assumed that human activities caused an increase in atmospheric CO2 levels and, as a result, the drought. This phenomenon was also a result of irresponsible agricultural practices. Water had been greatly overused in previous years to grow crops like cotton, causing the land to become dry and infertile. Moreover, the government had cancelled finances for power irrigation pumps and produce transportation. 
President Bashar al-Assad failed to respond accordingly or with appropriate speed, in turn affecting millions of Syrian citizens for the next decade, if not longer. The Syrian Civil War, which began with a drought and developed into a full-scale conflict marked by Bashar al-Assad’s ruthless killings, resulted in massive migration towards European countries. In Britain, the influx of refugees contributed to the decision to leave the European Union. In Germany, Italy, and France, it has brought about racist and anti-immigrant activities and political parties. As temperatures increase, more people will migrate, furthering international tensions and questions regarding national security and moral politics. Civil war broke out in Darfur in 2003 when the Justice and Equality Movement (JEM) and the Sudan Liberation Army (SLA) rebelled against the Khartoum government in the name of equality. The violence with which the government responded escalated into a genocide that claimed the lives of nearly 400,000 people. Former United Nations Secretary-General Ban Ki-moon described the war in Darfur as the first climate change war. A study conducted by Edward Miguel reported that conflicts in Africa are expected to increase by 54% by 2030, the equivalent of 393,000 battle deaths, due to rising temperatures. In the years leading up to the war in Darfur, the Sahara desert expanded by a mile each year and rainfall decreased between 15% and 30%. These changes impacted small farm owners and pastoralists the most, creating conflict between the sedentary farmers, who identify as African, the nomadic herders, who are for the most part Arab, and the government. As a result of the global warming-induced war, 1.2 million people have been displaced, of whom only 30% are receiving assistance. Chad, a neighboring country, is currently hosting 200,000 refugees. However, Chad’s limited resources and scarce water supply prevent it from accommodating any more refugees in the future.
Moreover, the entrance of foreign citizens into Chad has put great pressures on locals. It has also sparked cross-border raids by the Janjaweed, a terrorist group supported by the government of Sudan. In the future, the migration of foreign refugees from Darfur into Chad is likely to spark anti-immigration policies and terrorist-related brutality. To top it off, Western nations are hesitant to accommodate refugees for fear of disturbing oil trade relations with the Sudanese government. As such, Darfur is receiving less aid than it otherwise might have. Today, men, women, and children continue to be plagued by violence in Darfur. The increasing impact of global warming on land productivity will remain the greatest challenge to peaceful social relations in Sudan. As nations wrestle with dwindling natural resources, racially inclined prejudice, and the desire for international peace, migration will become the chief problem facing politicians and foreign policy. By the year 2100, temperatures in the Middle East are expected to increase by 1.6 to 1.8 degrees while precipitation is expected to decrease by 50%. The effects of climate change will include water shortages, a decrease in agricultural productivity, migration, more refugees from areas facing land inundation due to rising sea levels, and other various financial difficulties. In Israel, the water shortage is especially critical considering the fact that the country can only drink from its own water sources to avoid poisoning by neighboring countries. The obstacles of global warming will complicate Israel’s ability to comply with the Jordan River and Yarmouk River water-sharing arrangements. These agreements were created to allocate necessary amounts of water to Israel and its neighboring nations for the prevention of conflict. If Israel were to demand more water, the agreements would be impacted and would require revision. 
This necessity would be cause for conflict between Israel and other Middle Eastern countries sharing the same water sources. Global warming has also impacted the quality of Israel’s water and its availability in the territory; 97% of the water in the Gaza Strip is undrinkable. The increasing salinity and pollution of the water available in the Gaza Strip recently caused the death of a five-year-old. Israel’s attempts to fix the problem have been set back by bombings in the Gaza War. According to the UN, the Gaza War in 2014 alone resulted in 30 million dollars’ worth of damage to water infrastructure. The conflict for Israel has only heightened the uninhabitability of Gaza and, in turn, increased emigration from the territory. Climate change has led to desertification, a decrease in drinkable water availability, and droughts. Within any affected country, increasing temperatures lead to unemployment, migration, urbanization, and hunger. In combination with politically repressive governments, these factors either have already resulted in civil war, as in Darfur and Syria, or are likely to lead to it in the future. The conflict in Yemen and the Water War of Cochabamba, during which 9 people died trying to make drinkable water available to the public, indicate that water may become a commodity of the wealthy in developing countries. Meanwhile, until climate change gains the status of a first-world pandemic, insignificant treaties and charters will continue to be drafted in place of substantive actions for change.
Volodymyr the Great (Valdamar, Volodimer, Vladimir), b ca 956, d 15 July 1015 in Vyshhorod, near Kyiv. Grand prince of Kyiv from 980; son of Sviatoslav I Ihorovych and Malusha; half-brother of Yaropolk I Sviatoslavych and Oleh Sviatoslavych; and father of 11 princes by five wives, including Sviatopolk I, Yaroslav the Wise, Mstyslav Volodymyrovych, and Saints Borys and Hlib. In 969 Grand Prince Sviatoslav I named his son Volodymyr the prince of Novgorod the Great, where the latter ruled under the guidance of his uncle, Dobrynia. In 977 a struggle for power broke out among Sviatoslav's sons. Yaropolk I, who was then the grand prince of Kyiv, seized the Derevlianian land and Novgorod, thereby forcing Volodymyr to flee to Scandinavia. In 980 Volodymyr returned to Rus’ with a Varangian force, expelled Yaropolk's governors from Novgorod, and took Polatsk after a battle in which Prince Rogvolod of Polatsk was slain. Volodymyr took Rogvolod's daughter, Rohnida, as his wife. Later that year he captured Kyiv and had Yaropolk murdered, thereby becoming the grand prince, and married Yaropolk's Greek widow. Over the next 35 years Volodymyr expanded the borders of Kyivan Rus’ and turned it into one of the most powerful states in Eastern Europe. After taking the Cherven towns and Peremyshl from Poland (981) and waging successful wars against the Viatichians (981–2) and Radimichians (984) he united the remaining East Slavic tribes, divided his realm into lands, and installed his sons or viceroys to govern them, dispense princely justice, and collect tribute. In 983 Volodymyr waged war against the Yatvingians and thereby gained access to the Baltic Sea. In 985 he defeated the Khazars and Volga Bulgars and secured his state's eastern frontier. Volodymyr devoted considerable attention to defending his southern borders against the nomadic Pechenegs and Chorni Klobuky.
He had lines of fortifications built along the Irpin River, the Stuhna River, the Trubizh River, and the Sula River and founded fortified towns (eg, Vasylkiv, Voin, and Bilhorod) that were joined by earthen ramparts. Volodymyr attributed his victory over Yaropolk I Sviatoslavych to the support he received from pagan forces, and had idols of the deities Perun, Khors, Dazhboh, Stryboh, Symarhl, and Mokosh erected on a hill overlooking his palace in Kyiv. Later he became convinced that a monotheistic religion would consolidate his power, as Christianity and Islam had done for neighboring rulers. His choice was determined after the Byzantine emperor Basil II turned to him for help in defeating his rival, Bardas Phocas. Volodymyr offered military aid only if he was allowed to marry Basil's sister, Anna, and Basil agreed to the marriage only after Volodymyr promised to convert himself and his subjects to Christianity. Volodymyr, his family, and his closest associates were baptized in December 987, when he took the Christian name Vasylii (Basil). Soon afterward he ordered the destruction of all pagan idols. The mass baptism of the citizens of Kyiv took place on 1 August 988 (see Christianization of Ukraine), and the remaining population of Rus’ was slowly converted, sometimes by force. In 988 Volodymyr sent several thousand warriors to help Basil regain power and married Anna, and in 989 he besieged Chersonese Taurica, took it from Bardas Phocas, and returned it to Basil. The Christianization of Rus’ was essentially engineered by Byzantium. Byzantium supplied the first hierarchs and other missionary clergy in Rus’ and introduced Byzantine art, education, and literature there. During Volodymyr's reign the first schools and churches were built, notably the Church of the Tithes in Kyiv. 
The adoption of Christianity as the official religion facilitated the unification of the Rus’ tribes and the establishment of foreign dynastic, political, cultural, religious, and commercial relations, particularly with the Byzantine Empire, Bulgaria, and Germany. Relations with Poland improved after Volodymyr's son Sviatopolk I married the daughter of Prince Bolesław I the Brave in 992. Volodymyr received papal emissaries in 986, 988, 991, 992, and 1000 and sent his own envoys to Rome in 993 and 1001. After Anna's death in 1011, Volodymyr married the daughter of Count Kuno von Enningen. Toward the end of his life his sons Sviatopolk of Turiv and Yaroslav the Wise of Novgorod challenged his rule. Having defeated Sviatopolk, Volodymyr died while preparing a campaign against Yaroslav and was buried in the Church of the Tithes. He was succeeded briefly by Sviatopolk. The Rus’ clergy venerated Volodymyr because of his support of the church, but he was canonized only after 1240. Thereafter he was referred to as ‘the holy, equal to the Apostles, grand prince of Kyiv.’ The oldest extant mention of him as Saint Volodymyr is found in the Hypatian Chronicle under the year 1254, and his feast day, 28 July (15 July OS), was first celebrated in 1263.
Of all the advantageous characteristics to have, I think that perseverance plays the greatest role in one’s success. Other characteristics, such as intelligence, confidence, and honesty, are no doubt important, but they do not necessarily guarantee success. Perseverance offers no guarantees either, but I believe that this trait offers one more opportunity to succeed. There are several reasons why perseverance often leads to success. First of all, a man who has perseverance does not give up after a failure. He tries again and can, therefore, learn from his mistakes. Second, a persistent person is usually a hard worker, and hard work is an important ingredient in success. Last, with perseverance comes a certain amount of confidence — the confidence that one will eventually succeed. For all of these reasons, I believe that perseverance is the most important characteristic to have. It is easily combined with other traits such as diligence and confidence to increase the chances of success. However, without perseverance, one’s success may be more dependent on luck than anything else.
Israel warns Syria and Lebanon not to allow any attacks on Israel from their soil. Israel hopes to avoid reprisals for an Israeli air strike in Syria that killed an Iranian general and senior Hezbollah fighters. Fears of retaliation by Lebanon’s Hezbollah or other groups have risen since Sunday’s attack, prompting Israel to move troops and equipment towards its northern borders with Lebanon and Syria. The Golan Heights region is on high alert. Unlike many other politicians, Defense Minister Moshe Ya’alon usually means what he says and follows through on his threats. Ya’alon is considered a hawk in Israel. It all started when Iran said a general in the elite Iranian Revolutionary Guards force was killed in an Israeli helicopter strike in Syria on Sunday, a strike that also claimed six Hezbollah fighters, fanning concerns that an eight-year truce between the Shiite militia and Israel could weaken. The officer was identified as Brig. Gen. Muhammad Ali Allahdadi, according to Fars, an Iranian news agency affiliated with the guards. A second state-controlled news agency said that Gen. Allahdadi advised the Syrian government on policy toward rebel groups and Israel. Hezbollah, which is based in Lebanon and backed by Iran, held funerals Monday for its fighters killed in the strike near the Golan Heights border, including Jihad Mughniyeh, the son of former military chief Imad Mughniyeh, who was assassinated in 2008. The article above was compiled from two sources of information: Reuters and the Wall Street Journal.
How to cancel Apple Music from a laptop. Create or use an Apple ID without a payment method. I can no longer download anything at all or even update apps. You can create a free Apple ID to use with iCloud on your iPhone, iPad, iPod touch, or Mac. A Wi-Fi or cellular internet connection is required; Wi-Fi-only devices must be connected to the internet via a registered Wi-Fi network to be located. If you’re running the iOS 10.3 beta on your iPad, then you’ve most likely already received a push notification from Apple encouraging you to enable two-factor authentication for your Apple ID. Apple users with devices like the iPhone, iPad, iPod touch, or Mac use an iCloud ID such as [email protected]; Apple gives the option to create a new Apple ID on the iPhone as a free account with an iCloud Apple ID.
Can you tell me a little bit about yourself and your background in product management? Sure. I’ve been doing product management and related product development roles for over 20 years. I’ve worked in big companies, like Oracle, with hundreds of thousands of employees, and small companies, like startups; I was employee number 25 at NetProspex, for example. I’ve started my own company three times at this point. I’ve worked in a variety of different industries: e-commerce software at ATG, and at the intersection of information and technology at both NetProspex and iMarket, among others. I’ve also played a variety of different roles. As you know, product management is a very broad role, it’s very different wherever you go, and it’s very much a coordinating role. You’ve got to coordinate all the different functions within the organization to deliver value to your customer, and if some of those functions don’t exist at your company because it’s a small one, or they’re not being done the way you need them done for the success of your product, then you have to fill in yourself, or you have to find someone. As a result, I’ve done everything from being a Scrum Master, to doing business development and partnerships, and M&A, to being an engineering manager, to being a UX designer, you name it. Other than writing production code, which I haven’t done, I’ve done a lot, including writing SQL scripts and stuff like that. I’ve done most of those roles. I found myself, over time, increasingly in generalist-type roles, where my job, in addition to creating value within product management, is to be an advisor to the whole business on how we do this product development and product delivery bit. Perfect. Currently, you’re a consultant, working with different companies and bringing all of that expertise to them, right? That’s right.
For the last two and a half years [now 4 years as of Dec 2017], I’ve been doing consulting, training, coaching, and mentoring for product development teams: everything from the product management side of things, to the go-to-market, to how we organize the product development team cross-functionally for success. That’s where I think my experience really provides the most value: in helping organize, motivate, and direct the efforts of cross-functional teams. Excellent. In your experience, what is customer feedback, and how important is it for your work as a PM? It’s absolutely critical. It is 100% possible, and it has happened many times in the history of business, to very thoughtfully put together a cross-functional team of people who build an exquisite product that absolutely nobody wants to buy, and the missing ingredient that most often gets you that result is customer input. If you don’t understand the customer, if you can’t put yourself in the shoes of the customer, then you’re very likely to build yourself a better mousetrap in a place where there are no mice. I don’t know if you want me to get into detail, but I’ve developed–. Definitely, we will get into details a bit. I have a few questions on that topic. Let’s maybe start with the discovery side of things. When you have a running product and you’re trying to find out about problem areas, or new things that you could be building, and you’re actively seeking customer feedback, what are you looking for? What kind of methods or techniques do you use to get this kind of input? My favorite tool in the toolbox, in a situation like that, is qualitative interviews, where I call and speak with customers, or other people who I suspect should be customers but aren’t. Rather than treat it like a survey, where I have a canned list of 20 questions that are mostly about my product, I try to have conversations with them.
I try to have a 45-minute to an hour conversation, not about my product, but about their job in the B2B space, or their life in the B2C space. I want to ask them about their job so that when I am thinking about what my product should be, I can put myself in their shoes, because they’ve told me about themselves: what they are trying to accomplish in their job, how they are measured, how their boss rates them, and what metrics they are trying to optimize. Then, I ask them, “Okay, what are the hard things about accomplishing those goals, about doing your job?” Notice I haven’t said anything about my product, although usually they kind of know what my product is, and they’ll kind of tailor their answers to fit the area that I’m asking about. I’m asking them where the challenges are, what the risks are to them in terms of delivering on their goal, and what the consequences are if they don’t succeed. I’m trying to identify which of their various goals are most important, most at risk, and most consequential if they fail. This allows me to dial in on their most important unmet need. I will also ask them a lot of questions about what they’re doing now to address it: whether they are doing something completely manually, or they’ve got a product that maybe isn’t working for them, or they’ve built something in-house, or they’re actively searching for a product. That starts to shift the conversation from “Tell me about your job” into the solutions phase, and where my product fits. At that point, once I have enough background, I can start to ask, “Well, how do you view our product now? How does it solve your problem?
In what ways is it not positioned to solve your problem?” I try to ask questions that would have been much more difficult to ask at the beginning, like, “If we add such and such feature on top of our existing product, would that solve your unmet need?” I can evaluate their answer based on everything they’ve told me about their needs to that point. As you are exploring the problem space, it’s tricky to avoid leading the person to some preconceived idea of a problem that you think they might be having. Are there ways to avoid this kind of bias on our end? Yes. First of all, the main method for that is starting with open-ended questions, starting with “Tell me about your job”, and even asking personal questions, like, “How long have you been in the job? Do you like the job? What’s the most personally frustrating thing about the job?” What you’re doing there is developing a rapport with the person. You’re almost acting a little bit like their therapist, and they become much more transparent and honest with you because they realize that you’re not there to sell them something, that you’re really there for them. You start to get much more honesty from them in the first place. The second thing is, you want to be really sure not to ask them about your product idea until you’ve asked them about the underlying problem, and you’re not asking, “Do you have this problem?”, you’re asking, “What problems do you have?” Eventually, if you don’t hear that they have the problem that you are hunting for, at some point, a half hour or 40 minutes in, you can ask them, “Do you also have this problem?” If you’ve got enough of a background on the rest of their problems, you can ask them, “Can you rank this problem that I’m hunting for on a list of your problems?” to see whether it is a very important problem or other problems need more attention.
And then you’re taking it up a level of abstraction, and you’ve already got them so thoroughly into talking about their problems in general that hopefully you’re not leading them. You can do the same thing with features. You can say, “You’ve asked me for two or three features in the course of this interview, and we have a list of features that other people have asked about that we’re considering. Let’s combine those lists, and now we’ve got a list of six things. Can you rank those for me? Can you tell me which ones resolve your problems best?” Importantly, ask them to explain why they believe each thing should be in the position it is in. It’s not just making up the numbers, but why they believe it. Again, this is really important. I attended a focus group once... more than once, but there’s a particular one I’m thinking of, and in this focus group there was one person who didn’t really have the problem that we were looking for. It became clear after the first ten minutes or so that this person was in a slightly different position than the other people, and they weren’t really going to be able to give us the information we wanted. Unfortunately, they were also the most talkative person in the room, so everything they were saying was speculation. They were like, “I imagine that a person in this position would want or need this,” and that’s completely useless information, so we asked that person to step out of the focus group. My point in telling you this is, you want to make sure that you’re speaking within the bounds of this person’s reality, this person’s job, with your questions, and not asking them to speculate about what other people in different positions might want. People who are in product management or product development speculate about other people’s wants and needs a lot, but they’re the toughest interview, in some ways, and I’ve done a bunch of those, because I often have products in development for product managers.
It’s important, in order to avoid leading the witness, to always ask them to relate and explain any claim that they make, and any preference that they express, back to their own jobs or their own lives: what would make a difference for them. That keeps the conversation honest. The next question I have on that topic is, how do you go from the few people that you’ve talked to to understanding that there’s a real opportunity for the larger market? How do you size that opportunity in a way that gives you a lot more comfort or assurance that you’re making the right decision? Great question. I have two answers for you. One is that it takes surprisingly few of these qualitative interviews to begin to start hearing the same things over and over again, and to develop a set of needs that you’re pretty sure are real and consistent, that you understand. Jakob Nielsen had an article some years ago where he said that in UX testing you start to get seriously diminishing returns after five or so interviews, and I believe that’s often true in market research as well. I usually shoot for eight or so, maybe ten in a given customer segment, to be sure that I have gotten the whole story, but if it’s a relatively homogeneous market, eight is plenty. The second answer is surveys. Before we get to the mistakes people make with them, the purpose of the survey is to size the insights that you come up with. You’ve come up with an idea of the target market, based on your eight or so interviews; you have a description of the type of person who is potentially a customer, and you have a list of problems and potential solutions at that point that you can test in a survey, and you can use the survey to quantify and rank those problems. You can use the survey to find out if problem A is really more common than problem B. It turned out it was more common in your interviews, but that’s not a scientifically valid survey in terms of the sample size, so you want to validate that across a larger sample size.
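The diminishing-returns claim attributed to Jakob Nielsen above can be sketched numerically. This is a minimal sketch assuming Nielsen's commonly cited model, in which each additional session independently uncovers a fixed fraction of the issues (his published average is about 31% per session); the function name and the exact rate are illustrative, not something stated in this interview:

```python
# Nielsen-style rule of thumb: the expected share of distinct issues
# surfaced after n sessions is 1 - (1 - p)^n, with p ~= 0.31 per session
# (an assumed average discovery rate, not a guarantee for any one study).

def share_found(n: int, p: float = 0.31) -> float:
    """Expected fraction of distinct issues surfaced after n interviews."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    for n in range(1, 11):
        print(f"{n:2d} interviews -> {share_found(n):.0%} of issues")
```

With p = 0.31, five sessions surface roughly 84% of the issues and eight about 95%, which lines up with both the "five or so" figure and the "eight is plenty" intuition in the answer above.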
You can also try to sample adjacent markets and see if they have the same problems, or if the problems don’t resonate as well, to get an idea of the size of the market or segment of the market. The biggest mistake that I want to come back to, that I see people making in doing surveys, is doing the surveys before the interviews. They figure, “We’ll get the inside information, and then we’ll zero in on the most promising prospects, and we’ll interview them,” but that’s backwards. The primary reason is that I don’t think you can write a really good survey that will get meaningful answers from your target market until you get to know a few people in that target market better. You can’t write a multiple-choice question until you understand how they categorize things in their minds, and you can’t phrase a question unambiguously until you know how they talk: until you know the words that they use and what they mean, their context. We’ve all taken surveys where it seems like, for every question, none of the four answers is correct. All right. Yeah. You were talking about addressing a market, or maybe an adjacent market, where you think an opportunity might be. That comes with segmentation, and how we define a market. In your view, what’s a proper way to define customer segments or market segments in a way that makes sense, both for user research and for understanding customer feedback that might be coming in later? Everybody has a different opinion about this. My favorite way of doing it is by needs. I want to group people by common sets of needs. To use the car industry as an example, the people who have a need for basic transportation are a different group from those who need to move a large family around, and they’re different from people who are looking for prestige: they want an expensive car that looks expensive.
They all need to get from point A to point B, but the auto manufacturers have become very good at segmenting their customer base by those secondary needs, and the brands are built around that. Right. When you're moving on to building the product, what kind of customer input are you seeking at that point? Mostly validation. I'm a big proponent of the Lean Startup methodology, which I think applies equally well to a new product or an existing product, and you're looking for validation and traction. The important thing, in order to get that from customers during the development process, is to have something to show them, something they can interact with and react to. In a waterfall process, you would write the whole stack, do all the architecture and design, then implement every bit of version one of the product, and then launch it with a lot of fanfare, but that gets back to the problem of lack of product-market fit. No matter how good your research was, your initial ideas of what might meet the market's needs are going to be deficient in some way, possibly some critical way. In fact, likely in a critical way, given how often new products fail. The way to de-risk that is, at every stage, first of all, to have rapid iterations where you have something to show frequently to potential customers. Secondly, to build it in such a way that you've always got something new and more refined to show, no matter how narrow that is. It might be that, initially, you just had wireframes, but I actually much prefer a clickable mockup to a wireframe, because a potential user can interact with a clickable mockup. You can give them a task and see if they can accomplish it, and talk with them while they're attempting it, about the experience, and whether it's meeting their expectations, and in what ways it isn't.
You get real high-fidelity feedback from a clickable mockup, where a bunch of PowerPoint slides with some wireframes don't produce that same "I'm interacting with this, and it's working for me or it's not" kind of reaction from customers. Right. A question I would have on that front is, is there a difference between the reaction of a potential customer and that of an existing customer when you're showing them mockups, even clickable mockups? Actually, I find that existing customers are sometimes tougher on you, because they know how the product works now, and hopefully they're satisfied with it, so anything you show them is a change, and people don't really love change, in general, unless they're in pain. The reason they're willing to participate is that they've told you frequently that there's some significant deficiency in your product, and you're showing them a potential solution. That'd be helpful. It sounded like you had something in mind. Was there some question? The question was a bit more about how you approach that in a way that disarms the disbelief or the negative first reaction, so that you can get their insights on a new version that you might be trying to build. Right. I will say there are a few techniques I use to put the person at ease, and one of them is to make sure they understand that we're testing the software, not them. I know this wasn't exactly what you were asking about, but in a lot of usability tests, people who have never done one before are kind of nervous, and they're like, "I hope I do okay." You have to reassure them that it's about the product, not about them, and that the best thing they can do is tell you what they think is not intuitive, or what they can't figure out, because that's actually what we're looking for.
And the second, closely related thing is, again, people have a tendency to be nice about your product and not tell you what is not working for them, so you actually really want to push them to be critical and to tell you when something is not working, or doesn't fit quite right, or is confusing. I find you often have to remind people multiple times about that. It depends on the person. Sometimes you just get somebody who's cantankerous and doesn't want things to change, and with that person, in order to set them at ease, what I'm thinking about is saying, "We're just trying out some ideas. This is early days, early stages. It's not working software." If I know that person is likely to have that kind of reaction, I want to set the expectation that this is not finished software. There are some things you can do in the presentation of the software to make that clear. If you want to make it look like it's really done and high-fidelity, you can do that these days: you can make clickable mockups in HTML, with really snazzy CSS, that look like production code, like a prototype rather than just a static mockup. Or you can use a paper-prototype technique, either really using paper or using something like Balsamiq, where it looks like a drawing. Have you ever used that? With a paper prototype, or with Balsamiq, you can still simulate interaction; you just have to do it through non-traditional means. To run the prototype, you're the computer. You replace one piece of paper with another, or you remove a sticker and uncover a new state of a button, or whatever, and it takes a little bit longer to conduct the test, but it's really easy to set up, because you're just drawing, and it's really easy to iterate on, because you get out your eraser and draw something else between tests.
You can set up links in Balsamiq so that if a user clicks on something it goes to another screen, and there's also something called PowerPoint prototyping. I don't know if you've ever heard that phrase. You do the same thing: you put screen mockups in PowerPoint, link them together into a flow, and walk the user through a scenario. All right. After building the thing and putting it in front of users, it starts to produce a lot of quantitative and qualitative data, in terms of what we're tracking and what we're receiving from users. It's very difficult to know where to focus, so do you have a general approach to the day-to-day analysis of the metrics and qualitative feedback that come in? Right. Customer requests are simultaneously a blessing and a curse. You have an existing customer base and you're going to get requests for features or other kinds of changes to the product, and sometimes you can get lots of them, hundreds or thousands, depending on how many customers you have. Even if you have a small number of customers, if they're enterprise customers and it's somebody's job to interface with your company, they may feel like part of their value-add is to give you an endless list of requests. You've got all of these customer requests; what do you do with them? The biggest mistake that you can make as a product manager, I think, is to take those requests as-is, figure out a way to prioritize them by, say, frequency, or revenue, or size of customer, and just put them into the backlog and do them without any further examination. Many products have become really bloated and difficult to use because they've become overloaded with features that nobody really uses. Each was somebody's bright idea, but there wasn't a real need, or a good solution to a need. We need only look at Microsoft Word, which has 80 gazillion features, 80% of which nobody uses.
You just want to be able to create a neat document, and Microsoft is actually on record as saying that a very high percentage of the requests they get for new features, something like 80%, I'm not sure exactly, are for features they already have. People are not discovering those features, either. What do you do about this? The way I think about it is, in any list of, say, 100 feature requests, there are probably three to five underlying needs, and the job is to figure out what those underlying needs are. When you look at the actual requests as written, each was probably given to a salesperson or a support person by a customer and then written down, so it's a little bit like a game of telephone. You're not really getting the whole story. It's a secondhand translation of somebody's best guess at the solution to a problem, and the problem itself is usually not stated in the request. You've got all these various requests whose underlying problems aren't stated, so what I usually do is try to group the requests into what I think are different ideas for solving the same problem, and then call up some of those customers and try to validate my assumptions. What I'm asking those customers is: why did you ask for this change or this addition to the product? What problem were you hoping to solve? I have a blog post about this I can send you later. Often, you end up digging under multiple layers of why. Have you heard the expression "five levels of why"? The idea is that you sometimes have to ask why somebody wants something, and then why they want that, and so on with each answer they give, up to five times, before you get to the real, underlying problem they want solved. The idea is that you can get to that underlying problem, and as I said, there are usually far fewer of them in any long list of requests.
Then you can take those problem statements back to your development team and say, "All right, let's figure out the best way to solve this group of problems," and if you do that, then all of those customer requests will usually evaporate, because you've found the best way to solve the problem, rather than the six different ways that other people came up with. You can do that with less engineering time and effort, because instead of having to make 100 changes, you're only making three to five. You touched on two topics there that I wanted to expand a bit more on. The first is cross-team collaboration with sales, support, account managers, and other customer-facing teams. How can we set up a process so that our relationship as product managers with those teams is productive and effective, and gets them on the same page with us in terms of what we're looking for? Good question. There are a variety of techniques that I think are necessary. The core thing that a product manager needs to do in an organization is to talk to everybody, to have good collaborative relationships with all the different departments. My friend Neil Baron has a concept he calls "Circles of Value", which places the customer at the center of a circle of all the different functions within the organization that are involved in delivering value to that customer. That's everybody from, obviously, product development, but also sales and marketing, and even finance, where they're involved in the purchase, in the pricing decisions and the supply-chain decisions. Even HR is involved, because the right kinds of people, with the right background and profile, need to be hired for customer support, for example.
Every department is involved, and since the customer can't be there every day to do this coordination job, the product manager becomes the stand-in for the customer at the center of the Circle of Value, making sure that all of these different parts of the organization are coordinated to deliver that value. That puts a lot of pressure on the product manager to have effective relationships with all the different functions, and as I said, there are a variety of things you can do. The first, most important one, is you've got to make friends with those people. You've got to actually show up on the sales floor, or in the support department, get to know the people, find out what's going on, and help them out now and then with challenges they're having. Because you're the product person wandering through the department, you will get questions, and if you're too busy to answer them, or you tell people to go set up a meeting instead, then you're not establishing a good, collaborative, casual relationship. Be the person who says, "Sure, I've got five minutes. Let's talk about that." If salespeople are asking, "How do we handle this competitor or that competitor?", volunteer to do a little research and get back to them that afternoon with a tidbit: "This is how I might position against that competitor. They seem to have this weakness or that weakness compared to us." Be willing to engage in these informal conversations. That's one. Another thing that I find really helpful is to have a little bit of a formal process, where you get together with other departments on a regular basis. Some companies have a product council, with representation from different departments, where the product management team periodically presents its current thinking about priorities and roadmap, and the current status of projects, monthly, or quarterly, or something like that. That's useful.
Another thing that I believe in, in any sufficiently complex situation, is a prioritization scorecard. I developed one over time in Excel, and the model quickly outgrew Excel's capabilities, so my product Reqqs incorporates it. The scorecard basically provides a transparent way to come up with, explain, and collaborate on prioritization. If we've got a list of 50 initiatives that we want to do, and we've got the bandwidth to fund ten of them, we need a process for deciding which are the ten winners. I prefer a scorecarding approach when you have a number of stakeholders, because it makes the logic clear. A scorecarding approach basically compares the ROI of the proposed features or initiatives, based on the goals you're trying to achieve versus some level of effort. If your goal is strictly revenue, you can put that at the top; if you've got a market-share goal, you can put that at the top; if your goal is getting your first ten customers for a new product, you can put that at the top, but it should be some measurable goal. Usually there are between three and five of these goals, and you score every idea for something you might do that's going to take some effort. You score how much each idea will contribute to those goals, add that up, divide by the level of effort, and I usually also have a confidence score that takes into account risk and uncertainty, and we derive a number from that that you can use to prioritize. The critical piece of that scorecarding methodology is that the product manager can't do it in isolation. They can't just fill in all the scores, publish it, and say, "It is done." Instead, it becomes a framework for having a conversation with all the stakeholders. What I've done in the past is I've taken my scorecard around to all the stakeholders and asked them to give me whatever ideas they have that they think we should be doing.
I put their ideas into the scorecard, I sit with them, and I do the scoring with them. I say, "All right, based on the goals that we all agreed on at the executive meetings, I think this scores a two on this goal, a one on this one, and a zero on the other one," and we have a discussion about whether we agree on those scores, and how certain we are of those numbers. We come to some sort of alignment, even if they don't completely agree with the score that I put down in the end. I believe we've heard them out and given them an opportunity to tell me what they think. Then I repeat that with all the other stakeholders, one on one, and get everybody on the same page about how the scoring is coming out. What that does is give everybody a shared context for how we're making these decisions, and if a project of theirs did not get highly prioritized, at least they feel heard and they know why. The issue there would be how to set the weights for the different goals that you're shooting for. That's the baseline for the prioritization strategy: getting everyone to buy into the fact that you need X percent of investment in growth, or X percent of investment in bug fixing, or whatever other goal it might be, right? Right. There's frequently argumentation about that as well, but you have to start with it. Actually, the mathematical model I use is a very simple scale, and it doesn't require weighting between the different goals, because I use a scale of zero, one, or two. Zero means it doesn't help, one means it helps some, two means it helps a lot. What counts as "some" and what counts as "a lot" are not precisely defined, and the weighting between the goals is not defined either. With that simple scale, I find that you get good separation, as long as you have a few different goals.
Usually, like I said, there are three to five goals, and if you incorporate things like confidence, then you usually get a reasonable outcome without having to set weights. The reason I don't want a very granular scale for scoring against goals, like one to ten or one to a hundred, is that I want to avoid those arguments, because, in the end, they don't matter. It turns out that you can do a pretty good job of prioritizing with a very simple scale. Going back to the other topic that I wanted to dive into from a previous answer of yours: the fact that we need access to customers. You were saying that, to understand the problems behind some customer requests, you would try to talk to a few of those customers, but in many large organizations, PMs have trouble getting access to customers for various reasons. Why do you think that is, and how can they try to overcome it? It's usually a problem of departmental silos. It's usually a problem of territory. The executive who's in charge of sales is usually a different person from the executive who's in charge of product management, and they don't always feel like natural allies. It's just unfortunate. In a large company, the salespeople's job is to control and manage the relationship with the customer, to make sure that it goes well, and that the chance for renewal, or whatever it is they're striving for, is maximized, and the product manager can be an unknown variable in that process. Product managers sometimes ask questions that seem like they're out of left field and don't necessarily support the sales conversation the salesperson is trying to have. There are different kinds of products and services where this is more or less true.
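The scorecard arithmetic described above, a 0-2 score per goal, summed, discounted by confidence, and divided by effort, is simple enough to sketch in a few lines of code. This is only an illustration of the approach, not the actual model behind Reqqs; the goal names, initiatives, and numbers are all made up.

```python
# A minimal sketch of the prioritization scorecard discussed above.
# All names and numbers are illustrative, not from any real product plan.

def score(initiative, goals):
    # Each goal is scored 0 (no help), 1 (helps some), or 2 (helps a lot).
    goal_total = sum(initiative["scores"][g] for g in goals)
    # Confidence (0.0-1.0) discounts risky or uncertain estimates,
    # and the result is divided by the level of effort.
    return goal_total * initiative["confidence"] / initiative["effort"]

goals = ["revenue", "market_share", "first_ten_customers"]

initiatives = [
    {"name": "SSO integration",
     "scores": {"revenue": 2, "market_share": 1, "first_ten_customers": 0},
     "confidence": 0.9, "effort": 3},
    {"name": "Mobile app",
     "scores": {"revenue": 1, "market_share": 2, "first_ten_customers": 2},
     "confidence": 0.5, "effort": 8},
    {"name": "Onboarding revamp",
     "scores": {"revenue": 1, "market_share": 1, "first_ten_customers": 2},
     "confidence": 0.8, "effort": 2},
]

# Rank initiatives by score, highest first.
ranked = sorted(initiatives, key=lambda i: score(i, goals), reverse=True)
for item in ranked:
    print(f"{item['name']}: {score(item, goals):.2f}")
```

Note how the coarse 0-2 scale still separates the options clearly once confidence and effort are folded in, which is the point the speaker makes about not needing fine-grained weights.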
In an enterprise sale, where the customer is very invested in the success and implementation of the product, the product manager can add a great deal of inside information and credibility to the conversation, and I find that in enterprise sales, salespeople are much more likely to want a product manager interacting with the customer. They still probably want to be on the call with you, as a product manager, to make sure everything goes well and expectations are set properly, but they're much more willing to make that introduction. I find it much, much harder to get on the phone with a customer when the product you offer is just a very small part of their job. They're busy, they don't have time for you, and they're not as invested. In the B2C sense, it depends on whether it's a product they care about every day. If you're selling smartphones, people are addicted to those things; they want to give you as much of their opinion and point of view as possible. But if you were selling garden hoses to somebody who doesn't think much about yard work, they would probably have a harder time finding time for you. That's one aspect of it. Going back to the internal picture: there are two or three ways you can go about this, and which ones will get you the most traction depends on your organization. The first one, again, is developing informal relationships with the actual salespeople and support people, where you become helpful to them and they start to view you as an ally, as someone who can help them with their sales problem. If they see you as a trusted advisor, someone who adds value for them, then they will trust you to do that with the customer as well.
That happens at the grassroots level, where an individual salesperson might be willing to bring you in, and if they've asked you to come in and help them with a demo, or to answer some tough questions, or to tell the customer no at a critical time, then when you ask them, "Can you introduce me to a couple of customers in this area?", they'll be much more likely to go ahead and do that. If word gets around that you're this helpful person, then that bubbles up not only to the other salespeople, but up to management, and then it becomes institutionalized policy that of course product managers talk to customers. The other way to go about this is VP to VP: see if you can get an agreement between the VP of product and the VP of sales that product managers can talk to customers, and get the ground rules, the conditions, agreed to. I've been in companies where, because the salespeople were so under the gun to make a lot of outbound calls, the agreement was that the product team could call any customer, any time, on their own, without having to involve a salesperson. They needed to log the call in Salesforce, but they didn't need to go through the salesperson, because nobody wanted to distract the salespeople. It was a very transactional business. That actually made it easy: I could just call the customers directly. I've been in other companies where the two VPs agreed on ground rules that required the salesperson to make the introductions, and the ground rules might or might not include the salesperson being on the phone. I've tried to avoid having the salesperson on the phone, because there are two challenges I find with that.
One is that the customer won't always be completely honest and open with you if they know the salesperson is listening. Customers always feel a little vulnerable in front of salespeople, who are ready to pounce on any sign of weakness, so they've got to be a little careful about what they say. The second problem, I find, is that salespeople will sometimes jump in and answer the questions for the customers, because they're just trying to be helpful. They have background on the account, they've known this person for years, and they want to contribute, but, in fact, I don't want to hear the salesperson's digested version of the situation: I want to hear it in the customer's own words. The customer's situation may also have evolved since the last time the salesperson talked to them, and we've just lost the opportunity for me to get that real, direct information, because the salesperson jumped in and tried to answer. On the issue of busy customers, and the fact that they don't have much time available for you, any particular tips on how to overcome that? I keep a spreadsheet where, if I think I want eight interviews, I line up ten times as many customers as I need. I've got 80 people on the list, and I just go down the spreadsheet: on day one I make a bunch of calls, on day two I call more people, on day three I might try to call back some of the initial people, and I mark all of that in my spreadsheet, including when I get an appointment, and so on. Sometimes product managers don't have time for all of that, so I have sometimes had good luck with hiring a temp to make the calls and send the e-mails to set up appointments, or with having an associate product manager, somebody who's new to the role, make the outbound calls.
I think that's a good technique; then, when an appointment is set up, I get on the phone with the customer. The other thing is, if you do have a good relationship with sales, or, here's a sneaky back door, with support, you can leverage the fact that they need to talk to these people regularly. You can just say, "Hey, I'm looking for eight interviews with this kind of customer. Do you have any people like that you're looking for an excuse to call anyway?" Then I get the salesperson or the support person to make the call, and the reason for the call is that they want to introduce the customer to their product manager. Excellent. Bruce, I'm guessing you're probably at the site you need to get to right now? Thank you very much for your time. It was great talking to you, and I hope it was also good for you. It's always fun to talk about this stuff. I hope it was useful.
Intel was founded in 1968 by Gordon E. Moore (a chemist) and Robert Noyce (a physicist and co-inventor of the integrated circuit) when they left Fairchild Semiconductor. It is noteworthy that Intel competitor AMD was also founded by members of the Traitorous Eight, in 1969. Intel's fourth employee was Andy Grove (a chemical engineer), who ran the company through much of the 1980s and the high-growth 1990s. Grove is now remembered as the company's key business and strategic leader. By the end of the 1990s, Intel was one of the largest and most successful businesses in the world, though fierce competition within the semiconductor industry has since diminished its position. Intel has grown through several distinct phases. At its founding, Intel was distinguished simply by its ability to make semiconductors, and its primary products were static random-access memory (SRAM) chips. Intel's business grew during the 1970s as it expanded and improved its manufacturing processes and produced a wider range of products, still dominated by various memory devices. While Intel created the first commercial microprocessor in 1971, by the early 1980s its business was dominated by dynamic random-access memory (DRAM) chips. However, increased competition from Japanese semiconductor manufacturers had by 1983 dramatically reduced the profitability of this market, and the sudden success of the IBM personal computer convinced then-CEO Grove to shift the company's focus to microprocessors and to change fundamental aspects of that business model. By the end of the 1980s this decision had proven successful, and Intel embarked on a 10-year period of unprecedented growth as the primary (and most profitable) hardware supplier to the PC industry. After 2000, growth in demand for high-end microprocessors slowed and competitors garnered significant market share, initially in low-end and mid-range processors but ultimately across the product range, and Intel's dominant position was reduced.
In the early 2000s, then-CEO Craig Barrett attempted to diversify the company's business beyond semiconductors, but few of these activities were ultimately successful. In 2005 and 2006, CEO Paul Otellini reorganized the company to refocus on its core processor businesses and announced a series of dramatic cuts in Intel's workforce that would ultimately reduce the company's size by over 10%. In September 2006, Intel had nearly 100,000 employees and 200 facilities worldwide. Its 2005 revenues were $38.8 billion, and its Fortune 500 ranking was 49th. Its stock is listed on the NASDAQ under the symbol INTC.
What Should I Consider When Buying Tires? Buying tires can be a confusing task, because most people do not do it very often. Taking a few things into account will make the process easier, and also ensure that you get the right tires for the right job. Good tires can make a major difference: sometimes new tires feel like driving a brand-new car. If you are not entirely certain about your needs, take someone experienced with you when buying tires, and make sure to go to a reputable dealer. The first thing to think about is whether you need tires, and how many you need. Tires are an extremely important part of your car, and good ones make driving safer, easier, and more pleasant. When tires wear down, they can be dangerous, and they also hurt your car's handling. Go outside and look at your tires, and bring a penny along. Inspect the tread to see if it is evenly worn, and if the wear looks the same on all the tires. Stick the penny into the tread with the head pointing down: if you can see the entire head of the figure printed on the penny, you need new tires. Be aware that even if only one of your tires is bad, you will still need to replace two, and you should have the dealer put the new tires on the rear axle for safer handling. Once you have established the need for tires, think about your driving needs. If you live in an area with lots of ice and snow, you may want to consider buying specialty tires designed for that purpose. If you want sportier handling, you can purchase sport tires. Or you can stick with all-weather tires, which will handle reasonably well in a variety of situations and are slightly cheaper. You should also consider the type of car that you drive: some vehicles, such as sport utility vehicles, use specialized tires.
Before you go to the tire dealer, make sure that you know your tire size. This information can be found in the owner's manual for the vehicle, or on the informational placard inside the door. It is important to purchase tires of the right size for your car, so make sure to get this information right. Many tire dealers can also eyeball tire size by looking at your car, especially if it is a common make and model, but knowing the manufacturer's specification when buying tires is a good idea. When you are at the dealer, you will be presented with a number of options. Try to keep your list of needs in mind: don't yield to the temptation to buy expensive specialized tires if you have already decided that all-weather tires will do. Do get information about the expected life of the tires and the warranty, and keep it somewhere secure, so that if you have a problem with your new tires, you can bring them back to the dealer for a refund. Make sure to care for your tires well, too: balance and rotate them periodically, and adjust your car's alignment if you have uneven tread wear.
The first underwater tunnel ever built opened in London in 1843, paving a path for cities everywhere to expand beneath rivers and oceans. Today, the tunnel's grand entrance hall reopens to the public for the first time in 147 years. The underground event space is part of an engineering museum that celebrates the famous family who built the tunnel — and much of London. The Thames Tunnel was designed for horses and carriages to travel under the river, though because of financial problems, the approach for wheeled vehicles was never finished. Instead, the tunnel was embraced by pedestrians and became quite the destination in itself, illuminated by gaslights and lined with vendors. Londoners would come here to take an afternoon stroll along the tunnel's 396m length, and it was famously known as the eighth wonder of the world. A drawing of Londoners strolling the tunnel shortly after it was opened. About two million people a year paid a penny to walk below the river. By 1869, however, the city's burgeoning rail network took over the tunnel, and it was closed to the public. In 2007, the tunnel needed to be retracked, and the Brunel Museum, which honours the legacy of the tunnel's engineers, stepped in to take ownership of the tunnel's entrance hall. In 2010, people were allowed to peek into the space for the first time in over a century, and plans were made to transform it into an accessible venue. A drawing of pedestrians accessing the Grand Entrance Hall. After a 2010 renovation, a concrete raft was placed above the tunnels, creating the new event space above. Marc Brunel, Isambard Kingdom Brunel and Henry Brunel were three generations of engineers who pushed London to the forefront of infrastructural innovation. Building upon each other's accomplishments, the engineers were responsible for key elements of the London Underground, the Great Western Railway, Tower Bridge and the modern ocean liner.
The Thames Tunnel was one project where they all participated in some way. At the age of 19, Isambard assisted his father Marc, and Henry was actually the first person to walk under the Thames (although technically he was carried through as he was a baby). At the time there had been several failed attempts to build a tunnel under the Thames, and the Brunels' plan almost didn't succeed either. The 18-year project was plagued by construction deaths and financial delays. The soft clay led to frequent flooding, so Marc created what was named a "tunnelling shield", where the excavation tools become part of a temporary structure that helps to hold the tunnel in place as it's dug. In fact, what Marc designed became the model for the tunnel boring machines (TBMs) of today, making all future underground rail and vehicle travel possible. The conversion of the tunnel into an event space required some crafty engineering as well. The Grand Entrance Hall of the tunnel is a cavernous 15m wide and 15m deep, yet the only access to the space is a small door at the top corner of the room. Architects at Tate Harmer used what they're calling a "ship in a bottle" approach to build a stairway inside, squeezing the staircase elements into the room piece by piece and assembling them into a freestanding structure that doesn't touch the walls at all. The raw walls, blackened from the steam trains that used to run below the river, were left as-is, making for both a cinematic cultural backdrop and a striking reminder of the city's transportation history.
Finding someone who is compatible, has some emotional maturity and who can be a life partner you can count on is a struggle. Some of us are old souls who mesh well with those who are a little bit older and wiser, and this puts you right in the bucket to consider dating an older man. There can be an allure that comes with dating someone older, but there is a bunch of other stuff to consider too. Dating an older man who is more mature and who has a high level of self-awareness of who he is as a person can shift your world in some pretty unique ways, and this can feel very different from dating someone your own age or younger. Age gap relationships - namely, women dating older men - seem to be something that fascinates a lot of people (rightly or wrongly). We all remember when Ashley Olsen made headlines for reportedly dating Bennett Miller, the director of Moneyball. And, yes, I know some younger men date older women. Kyle Jones, a Pittsburgh guy, was in the news for having a relationship with great-grandmother Marjorie McCool. So I am not being sexist. However, this article is about younger women falling in love with older men. Love knows no bounds and no limits. True love can conquer anything, right?
Birds of a feather flock together for a reason: Like any relationship, ours had its ups and downs, most of them not age-related. But in case you might be falling for an older guy, here are a few of the highs and lows of loving an older man you can look out for. I never intended to date men at all, let alone older men — for most of my early twenties, I was head-over-heels for a woman. She and I had just parted ways when I vowed to stop dating for at least a year, and to try to clean up the hot mess that was my early-twenties self at the time.
If you have a soul, you need to try spinning.
– You're in a dark room.
– House music is blasting, turned up to 11.
– Your super-fab instructor is bopping up and down in front of you and occasionally yelling things at you. His shoulders are enormous.
– There's a disco ball.
Yes, okay, technically you are on top of a stationary bike. However, you are rarely seated — you're standing, moving your arms, and always bouncing your pedals to the beat, making it feel like you're actually floating, since you're dancing in midair. This is the closest YouTube representation I could find — my gym is like this except more of the instructor yelling "OHYEAHHH" in the best way possible every 27 seconds. Normally, I go clubbing in the Castro every couple of weeks, as I have a fairly strict queer dancing quota (this *is* San Francisco, guys). However, the other week, after a Saturday night party at which it was tragically difficult to convince anyone to go out in the Castro with me, I was bummed and feeling antsy. How would I tide myself over for a whole WEEK, Castro-free??!? But I tried spinning that Tuesday for the first time (as per @lewisisgood's recommendation), and let's just say, QUOTA FULFILLED. TL;DR: come spinning with me.
Paula L. WoodsPaula L. Woods, a member of the National Book Critics Circle, is the author of the Det. Charlotte Justice novels, including, most recently, "Strange Bedfellows." EARLY on in Patrick Anderson's survey of crime fiction, "The Triumph of the Thriller," the Washington Post book reviewer writes that "John Grisham is the new James Michener and 'The Da Vinci Code' is our 'Gone With the Wind.' " Forget for a moment the hyperbole in comparing Grisham and Dan Brown with two Pulitzer Prize winners and consider the book's subtitle ("How Cops, Crooks, and Cannibals Captured Popular Fiction") as a question: Have thrillers taken over popular fiction? The answer would be a resounding yes if one compares the Post's fiction bestseller list of 1966, as Anderson does, with its list of 2006, in which nine of 10 novels were firmly in the crime genre. How and why that happened is the more provocative question he sets out to answer, promising to look "back at the origins of modern crime fiction -- to writers like Edgar Allan Poe, Arthur Conan Doyle and Agatha Christie -- to examine not only how the modern thriller has evolved," but also to explain why thrillers have come to be differentiated from mysteries or crime novels. An answer to either question would be most welcome to the vast numbers of crime fiction readers, not to mention the thousands of writers, myself included, who toil in that arena. It would also elevate a genre so often dismissed as brain candy by those who think such books are beneath contempt -- or at least serious criticism. Given the numerous crime novels Anderson has written and his seven years as the Post's weekly thriller columnist, he would seem the perfect person to debunk the critics and say something original. But although I agree with much of what he says about the genre's worthiness and the dreck that some of its biggest brand-name authors are producing, "The Triumph of the Thriller" confuses and frustrates more often than not. 
My problems begin with Anderson's admittedly loose definition of a thriller, which he says encompasses "hard-core noir, in the Hammett-Chandler private-eye tradition, as well as a bigger, broader universe of books that includes spy thrillers, legal thrillers, political thrillers, military thrillers, medical thrillers, and even literary thrillers." I understand that this definition (and his calling chick-lit a conflation of the romance novel with the mystery) makes it easier for Anderson to cast a net over the breadth of crime fiction, and include personal favorites like Michael Connelly's Harry Bosch series, but to throw them all in the thriller pot is the same as saying chitterlings are like foie gras because both are organ meats. Mysteries, as I've come to define and write them, feature a protagonist (like Sue Grafton's P.I. Kinsey Millhone or cop Bosch in Connelly's police procedurals) who sets out to detect the perpetrator of a crime that has already occurred; thrillers take readers on a journey along with the protagonist to prevent or solve crimes that are being committed in the novel's real time. It may be a technical distinction that is muddied in novels like Connelly's "The Poet" or others Anderson rightly includes in "Triumph," but lumping them together seems more commercially than critically driven. This is not to say the book does not have merit or make some valuable points. The first section, which limns the works of 19th and early 20th century pioneers, explains the genre's history and draws firm, if not always original, conclusions about, among other things, Conan Doyle's simplistic solutions and Raymond Chandler's scorn for "foreigners, blacks, homosexuals, rich people, and women." But too often Anderson substitutes plot summaries or biographical sketches of these early authors for criticism, and had he considered more books from his selected writers' oeuvres, he might have reached more interesting conclusions.
Perhaps the book's best part is on the writers Anderson calls "modern masters" -- Connelly, Thomas Harris, George Pelecanos and Dennis Lehane. Here he reins in the biographical material and focuses on more thorough assessments of their work, making some compelling arguments along the way. Particularly insightful are his analyses of the significance of Lehane's breakout novel, "Mystic River," to the writer's career as well as to the genre and Bosch's evolving sense of personal mission that has become the driving force in Connelly's series. Critics, like readers, can like or dislike whomever they choose. But even when I often found myself agreeing with Anderson's opinions, I winced sometimes at how he delivered them. Particularly vitriolic was the chapter "No More Mr. Nice Guy." After briefly mentioning one James Patterson thriller and summarizing the plot of a second, Anderson derides the author as "the absolute pits, the lowest common denominator of cynical, skuzzy, assembly-line writing," without linking the author's skills in design, marketing and advertising (Patterson is a former advertising executive) to publishers' use of these methods to spur the sales of crime writers of both schlock and substance. Anderson also trashes Patricia Cornwell after reading only one of her later Kay Scarpetta novels without acknowledging that her groundbreaking forensic thrillers almost single-handedly created a thriving sub-genre and have even influenced the TV crime drama. (How do we love "CSI"? Let me count the ways.) And the fact that he includes lurid and often-repeated details of Cornwell's personal life only makes his opinion seem more mean-spirited than perhaps he intended.
Cut the eggplants in half lengthwise. Place on a baking sheet, brush with oil, and bake at 400 degrees for 30 minutes. Carefully remove the flesh from the eggplants to a large mixing bowl, keeping the shells intact. Add the remaining oil, the ground pork, ground beef, eggs, breadcrumbs, brandy, salt, and pepper and mix well. Fill the eggplant shells, then sprinkle each with about a half-tablespoon of grated cheese. Bake 30 minutes at 400 degrees.
Routing latency issue with HE and "Layer42"
Apparently, HE's peering point with Layer42 is in San Jose for IPv4 but in Seattle for IPv6, which causes an RTT disparity of about 3:1. Obviously, this may be a BGP peering issue - but would it be possible to peer with them for BOTH protocols at least at San Jose, if not at both locations? Both of you are present at both locations (and they are headquartered in Santa Clara, quite nearby). I believe that your network can support multiple peering points with the same AS; I don't know about theirs (AS 8121). It appears that the route going out to them carries IPv6 via San Jose, so maybe it's their problem? However, I see similar latency via IPv6 either way. My IPv4 route to glorb doesn't go through HE (but through Mzima and twtelecom instead). Might this be possible? Thanks.
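To put a number on the kind of RTT disparity described in this post, here is a minimal, hypothetical Python helper (not part of any HE or Layer42 tooling; the sample values below are illustrative, not real measurements) that compares median round-trip times collected over the IPv4 and IPv6 paths:

```python
from statistics import median

def rtt_disparity(v4_samples_ms, v6_samples_ms):
    """Compare median round-trip times for IPv4 vs IPv6 paths.

    Takes two lists of ping RTT samples in milliseconds and returns
    (v4_median, v6_median, ratio). A ratio > 1 means the IPv6 path
    is that many times slower than the IPv4 path.
    """
    v4 = median(v4_samples_ms)
    v6 = median(v6_samples_ms)
    return v4, v6, v6 / v4

# Hypothetical samples: ~3 ms via a nearby San Jose peering point over
# IPv4, ~9 ms via Seattle over IPv6 -- the 3:1 disparity described above.
v4_med, v6_med, ratio = rtt_disparity([2.9, 3.0, 3.1], [8.9, 9.0, 9.1])
```

Medians rather than means keep a single congested sample from skewing the comparison; the raw samples themselves could come from `ping -4` and `ping -6` runs against the same dual-stacked host.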
Type the name of the person. If you make a mistake, you will have a choice. What's her name? Type it.
The 3rd Armored Division's shoulder sleeve insignia. The 3rd Armored Division ("Spearhead") was an armored division of the United States Army. Unofficially nicknamed the "Third Herd," the division was first activated in 1941, and was active in the European Theater of World War II. The division was stationed in West Germany for much of the Cold War, and participated in the Persian Gulf War. On 17 January 1992, in Germany, the division ceased operations. In October 1992 it was formally inactivated as part of a general drawing down of forces at the end of the Cold War. The 3rd Armored Division was organized as a "heavy" armored division, as was its counterpart, the 2nd Armored Division ("Hell on Wheels"). Later, higher-numbered U.S. armored divisions of World War II were smaller, with a higher ratio of armored infantry to tanks, based on lessons of the fighting in North Africa. As a "heavy" division, the 3rd Armored possessed two armored regiments totaling four medium tank battalions and two of light tanks (18 companies) instead of three tank battalions containing both (12 companies), 232 medium tanks instead of the 168 allotted a light armored division, and with attached units numbered over 16,000 men, instead of the normal 12,000 found in the light armored divisions. Each division type had an infantry component of three mechanized infantry battalions. The division's core units were the 36th Armored Infantry Regiment, the 32d Armored Regiment, the 33d Armored Regiment, the 23d Armored Engineer Battalion, the 83d Armored Reconnaissance Battalion, and the 143d Armored Signal Company. During World War II these were organized operationally into task forces known as combat commands A, B and R (Reserve). In addition to the core units, a number of other units of various kinds were attached to the division during various operations. The division was activated on 15 April 1941 at Camp Beauregard, Louisiana. In June 1941, it moved to Camp Polk Louisiana (now Fort Polk). 
On 9 March 1942, it came under Army Ground Forces and was assigned to the II Armored Corps. In July 1942, it was transferred to Camp Young, CA and from August to October 1942, took part in maneuvers at the Desert Training Center. It left Camp Young in January 1943 and moved to the Indiantown Gap Military Reservation, Pennsylvania. The first elements of the 3rd Armored in France saw combat on 29 June, with the division as a whole beginning combat operations on 9 July 1944. During this time, it was under the command of VII Corps and XVIII Airborne Corps for some time, and assigned to the First Army and the 12th Army Group for the duration of its career. The division "spearheaded" the US First Army through Normandy, taking part in a number of engagements, notably including the Battle of Saint-Lô, where it suffered significant casualties. After facing heavy fighting in the hedgerows, and developing methods to overcome the vast thickets of brush and earth that constrained its mobility, the unit broke out at Marigny, alongside the 1st Infantry Division, and swung south to Mayenne. The engineers and maintenance crews took the large I-Beam Invasion barriers from the beaches at Normandy and used the beams to weld large crossing rams on the front of the Sherman tanks. They would then hit the hedgerows at high speed, bursting through them without exposing the vulnerable underbellies of the tanks. Until this happened, they could not get across the hedgerows. Ordered to help close the Falaise Gap and Argentan pocket which contained the German Seventh Army, the division finished the job near Putanges by 18 August. Six days later the outfit had sped through Courville and Chartres and was located at the banks of the Seine River. On the night of 25 August 1944 the crossing of the Seine by the division started; once over, the 3rd slugged its way across France, reaching Belgium on 2 September 1944. 
Liberated in the path of the division were Meaux, Soissons, Laon, Marle, Mons, Charleroi, Namur and Liege. It was at Mons that the division cut off 40,000 Wehrmacht troops and captured 8,000 prisoners. "Then the division began the first invasion of Germany since the days of Napoleon" is an often-repeated claim deriving from 1947 U.S. Army literature that ignored earlier acts such as the 5th Armored Division's reconnaissance into Germany on 11 September 1944, French troops entering the Saarland in September 1939 during the Saar Offensive, the entry into Germany by imperial Russian troops in 1914, and the French invasion of Alsace in August 1914. Division troops crossing the Siegfried line to Germany. On 10 September 1944, the Spearhead Division fired what it claimed was the first American field artillery shell of the war onto German soil. Two days later, it passed the German border and soon breached the Siegfried Line, taking part in the Battle of Hurtgen Forest. The 3d Armored Division continued fighting during the Battle of the Bulge, far north of the deepest German penetration. The division fought south in an attack designed to help wipe out the bulge and bring First Army's line abreast of Patton's Third Army fighting northward toward Houffalize. It severed a vital highway leading to St. Vith and later reached Lierneux, Belgium, where it halted to refit. After a month of rest the division continued its offensive to the east, and on 26 February, Spearhead rolled back inside Germany as both Combat Commands bolted across the Roer River and seized several towns, crossed the Erft Canal, and at last broke through to the Rhine River to capture Cologne by 7 March. Two weeks later it crossed the Rhine south of Cologne at Honnef. On 31 March, the commander of the division, Major General Maurice Rose, rounded a corner in his jeep and found himself face to face with a German tank.
As he withdrew his pistol either to throw it to the ground or in an attempt to fight back, a young German tank commander, apparently misunderstanding Rose's intentions, shot the general. Beyond Cologne the division swept up Paderborn in its advance, to shut the back door to the Ruhr Pocket. In April, the division crossed the Saale River, north of Halle, and sped on toward the Elbe River. On 11 April 1945, the 3d Armored discovered the Dora-Mittelbau concentration camp. The division first arrived on the scene, reporting back to headquarters that it had uncovered a large concentration camp near the town of Nordhausen. Requesting help from the 104th Infantry Division, the 3d immediately began transporting some 250 ill and starving prisoners to nearby hospital facilities. The last major fighting in the war for the division was the Battle of Dessau, which the division captured on 23 April 1945 after three days of combat. Following the action at Dessau, the division moved into corps reserve at Sangerhausen. Occupational duty near Langen was given the division following V-E Day, a role it filled until inactivation on 10 November 1945. 3rd Armored Division M-60A3 tanks and armored personnel carriers near the Sembach Air Base exit ramp. The division was reactivated on 15 July 1947 at Fort Knox, Kentucky to act as training formation. In 1955 the 3d Armored Division was reorganized for combat and shipped to Germany the next year. It replaced the 4th Infantry Division under a program called Operation Gyroscope. It was the first U.S. armored division to be stationed east of the Rhine River in the Cold War. The 3d Armored Division, headquartered at Frankfurt am Main, served in Cold War Germany for approximately 36 years, from May 1956 to July 1992, with the exception of the time spent in Saudi Arabia and Iraq during the leadup to and Gulf War. 
The three main combat force headquarters for the 3rd AD were: (1) Ayers Kaserne at Kirch-Goens and Schloss Kaserne at Butzbach (the forces at those kasernes initially formed Combat Command "A" [CCA] of the 3d Armored Division); (2) Coleman Kaserne at Gelnhausen (CCB/2d Brigade); and (3) Ray Barracks at Friedberg (CCC/3d Brigade). The 3d Armored's primary mission during the May 1956 to July 1992 period was, in the event of war, to defend the Fulda Gap, alongside other NATO elements, against numerically superior Warsaw Pact forces. In June 1962 USAREUR reached its peak Cold War troop strength; that number was never achieved again. Also in June 1962 the nuclear warheads for the Davy Crocketts arrived in USAREUR (3d AD combat maneuver battalions were issued Davy Crocketts). In late October 1962, during the Cuban Missile Crisis, there was no hotline between Washington and Moscow; Soviet forces, including those in the Group of Soviet Forces in Germany (GSFG), were placed on the highest alert level. Two of the five armies in the GSFG were positioned to advance through the Fulda Gap: the 8th Guards Army (three motor rifle divisions and one tank division) and the 1st Guards Tank Army (four tank divisions and one motor rifle division). From 1963 the Reorganization Objective Army Division (ROAD) changes meant organizational changes within the 3d AD's three combat commands, plus a name changeover to "brigades" (e.g., Combat Command A became 1st Brigade). To prepare their soldiers for an invasion, the 3d Armored Division's units frequently conducted field training in Bavaria at the Hohenfels, Wildflecken and Grafenwöhr training centers, conducting exercises of live fire, movement and communications.
Throughout its time in Cold War Germany, beginning in mid-1956, the division would also frequently take to the German countryside for training maneuvers, including, beginning in January 1969, what became an annually staged war game Reforger, which simulated invasion of Western Europe by Warsaw Pact forces. Note: As indicated in the yearly issues during the Vietnam War of Annual Historical Summary - Headquarters United States Army, Europe and Seventh Army, the USAREUR training maneuver budgets dramatically dried up during the Vietnam War years. Significantly reduced training funds were first mentioned in the 1 January to 31 December 1966 edition of the USAREUR Annual Historical Summary. According to Karl Lowe in his www.usarmygermany.com article, in 1976 the army wanted to move the 2d Brigade's home station from Gelnhausen to Alsfeld, which lay in the Northern Advance Route for Soviet Forces out of the Fulda Gap; however the Germans wouldn't agree to a major share in building brigade facilities at Alsfeld, so the plan was dropped. At the peak of 1980s East/West tensions, as many as nineteen Soviet and East German divisions faced off against NATO forces in the area. Throughout the Cold War, the division headquarters company, the 503d Administrative Company, 503d Adjutant General Company, and 503d MP Company were based at Drake Kaserne, with 143d Signal Battalion and other support units stationed across the street at Edwards Kaserne in Frankfurt, West Germany. A number of its subunits were based in other Kasernes throughout the German state of Hessen, notably Ayers Kaserne (50° 28' 32.44" N 8° 38' 29.24" E) at Kirch-Goens and Schloss Kaserne at Butzbach (CCA/1st Brigade), Gelnhausen (CCB/2d Brigade), Ray Barracks at Friedberg (CCC/3d Brigade) and Fliegerhorst near Hanau (eventually converted to the division's Aviation Brigade). The NCO Academy contained two companies: Co. A was assigned to the medieval castle at Usingen-Kransburg, while Co. B was located in Butzbach. 
The division itself comprised an average of 15,000 soldiers, organized into three combat commands (CCs), later renamed brigades (in the 1963 ROAD reorganization), organizations of comparable size to the World War II combat commands. Each brigade was manned by at least one battalion each of infantry, armor, and artillery, and various supporting units, notably including medical, engineer, and aviation elements. The division was also assigned the dedicated 533d Military Intelligence/CEWI (Combat Electronic Warfare and Intelligence) Battalion by 1980, replacing the 503d MI Company that previously supported the division intelligence staff. Most of the kasernes were located adjacent to or within German communities, leading to lively trade and interaction between soldiers and German civilians. A few, however, were somewhat remotely located, particularly Ayers Kaserne ("The Rock"), where the 1st Brigade was stationed, outside Kirch-Goens. The most famous soldier in the 3d Armored Division during the 1950s was Elvis Presley, assigned to Company A, 1st Medium Tank Battalion, 32d Armor, Combat Command C at Ray Barracks in Friedberg. After his time in service, Presley made the movie G.I. Blues, in which he portrays a 3d Armored Division tank crewman with little field duty but much opportunity for singing, particularly at Frankfurt. In real life Presley was promoted to sergeant (E-5) near the end of his tour in Germany, without attending the 3AD NCO Academy, although he had a reserve obligation after active duty. In the movie he wears the insignia of a Specialist 5 rather than a Sergeant E-5. Colin Powell also served in the division. He was assigned to the 2d Armored Rifle Battalion, 48th Infantry, Combat Command B, Coleman Kaserne, Gelnhausen, between 1958 and 1960.
It started out with his first Army command assignment, as an infantry platoon leader. By 1990, communism in eastern Europe had collapsed, the two German states had reunited, and the Soviet Army was being withdrawn back to the Soviet Union. With these events, the Cold War came to a peaceful conclusion, freeing U.S. Army units in Europe for other deployments. In response to the changing Cold War scenario, 3AD was instructed to begin selectively standing down various division elements during the summer of 1990. Some units, for example the 3d Battalion, 5th Air Defense Artillery, had turned in equipment or cross-leveled with other 3AD units by August 1990, when momentous events in the Middle East developed. That month, Iraq invaded Kuwait, and soon after, President George H. W. Bush committed U.S. troops to the theater, first to defend Saudi Arabia, and then to eject Iraqi troops from Kuwait. Deployment of advance elements of 3AD began in December, with the remaining deploying units arriving by January. Units that had drawn down were replaced or augmented back to full strength. As an example, 3-5 ADA was replaced by the 8th Infantry Division's 5th Battalion, 3d Air Defense Artillery. Other units were attached to 3AD to bring it up to, and even beyond, full strength. The 3d Armored Division, then commanded by Major General Paul Funk, was one of four U.S. heavy divisions deployed with VII Corps. The division and its equipment were shifted from Germany to Saudi Arabia; in some cases, Army National Guard and Army Reserve elements took over their duties in Germany, while in others, kasernes were left virtually empty. This massive redeployment was made possible by the end of the Cold War. After redeployment, the division acclimated to the desert climate, and its troops faced new challenges in mobility, tactics and maintenance in a sandy and hot environment.
Various National Guard and Army Reserve units were then attached to the division for the duration of the conflict, swelling the division's size to over 20,000 troops – 25% larger than during its time in Germany. The majority of the division's troops never received Desert Battle Dress Uniforms due to a shortage, and fought instead in lightweight summer "woodland pattern" uniforms, covered by tanker suits or chemical warfare protective MOPP suits. Finally, after months of training the division moved to the line of departure, alongside the 1st Armored Division on its left flank and the 2d Armored Cavalry Regiment on its right flank. While the Iraqi Army concentrated much of its defenses in and around Kuwait itself, the 3d AD and VII Corps launched a massive armored attack into Iraq, just to the west of Kuwait, taking the Iraqis completely by surprise. Scouts from 2d Brigade crossed on the afternoon of 23 February 1991 just after 1500 hours. Less than two hours later, they had penetrated several miles into Iraq and managed to capture over 200 prisoners. On 24 February, the official first day of action, the division as a whole swung into action as part of a coordinated attack by hundreds of thousands of allied troops. During the first day of battle, the 3d Armored Division pushed 18 miles into Iraq, taking over 200 prisoners. By dawn of the second day, an additional 50 prisoners had been taken, with scouts reporting enemy reinforcements moving to meet the division. At 1115 hours of the second day, all elements of the division were finally across the line of departure. The day was marked by hard pushing to penetrate deep and fast, striking for an objective south of Basra. In the course of its drive, various elements of the division engaged the enemy, taking prisoners, skirmishing, sometimes bypassing enemy strongholds to gain ground, other times engaging in full-scale battle. 
By nightfall of the second day, 3AD had driven 53 miles into Iraq, with dozens of enemy vehicles destroyed, hundreds of POWs captured, and was on the verge of achieving its first objective – an accomplishment that war planners had not anticipated. On the third day of combat, 26 February, the division closed in upon its objective and faced for the first time the Iraqi Republican Guard, a much stronger foe than the forces the division had first engaged, and less inclined to retreat or surrender. Opposing forces included the highly touted Republican Guard "Tawakalna" Division, the Iraqi 52nd Armored Division and elements of the 17th and the 10th Armored Divisions. The division engaged in full scale tank battles for the first time since World War II, and as one of the division's veterans states "There was more than enough action for everyone". Action continued after nightfall, and by 1840 hours, the ground and air elements of the 3d AD could report over 20 tanks, 14 APCs, several trucks and some artillery pieces destroyed. Unfortunately, that same evening, the 4th Battalion, 32d Armor lost the division's first casualties in a Bradley Fighting Vehicle to 25mm cannon fire – with two soldiers killed and three wounded. During the night, both darkness and sandstorms hampered soldiers' visibility, but thermal sighting systems on board the M1A1 Abrams tanks and Bradleys allowed gunners to knock out Iraqi targets. By the fourth day, the division reached its objective, and pursued its now retreating enemy. The division turned east, into Kuwait, continuing to inflict heavy casualties and capture troops as it rolled forward, often hitting new units whose defensive berms and foxholes faced south from their northern flank, rendering their defenses ineffective. By nightfall, forces facing 3AD had been virtually eliminated, with their remnants in full retreat. 
By the fifth day of combat, the division had achieved all objectives and continued to push east to block Iraqi retreat from Kuwait, conducting mopping up operations. One hundred hours after the ground campaign started, President Bush declared a ceasefire. On 28 February the U.S. 3d Armored Division cleared Objective Dorset after meeting stiff resistance and destroying more than 300 enemy vehicles. The 3d Brigade, 3d Armored Division also captured 2,500 enemy prisoners. At the height of the battle, the 3d Armored Division included 32 battalions and 20,533 personnel. It was the largest coalition division in the Gulf War and the largest U.S. armored division in history. In its moving arsenal were 360 Abrams main battle tanks, 340 Bradley Fighting Vehicles, 128 self-propelled 155 mm howitzers, 27 Apache attack helicopters, 9 multiple-launch rocket systems, and more. During the ground war, 3d AD destroyed hundreds of Iraqi tanks and vehicles, and captured more than 2,400 Iraqi prisoners. The 3rd AD served at the Battle of 73 Easting and the Battle of Norfolk. The 3rd Armored Division had three M1A1 Abrams tanks damaged during combat operations. The 3rd Armored Division suffered 15 soldiers killed between December 1990 and late February 1991. Approximately 7 of the soldiers were killed in action and another 27 soldiers from the division were wounded in action during combat operations. Following the war, 3d Armored Division was one of the first units rotated to Camp Doha, Kuwait, providing protection to Kuwait as it rebuilt. Following Desert Storm, a number of the division's units were transferred to the 1st Armored Division. On 17 January 1992, the 3d Armored Division officially ceased operations in Germany, with a ceremony in Frankfurt at Division Headquarters, Drake Kaserne. "Sir, this is my final salute. Mission accomplished," said Maj. Gen. Jerry Rutherford, the division commander. Rutherford preceded the final salute to General Crosbie E. 
Saint, USAREUR Commander, with a loudly shouted "Spearhead!" The division colors were then returned to the United States, with the 3d AD still officially active, since Army Regulations state that Divisional "Casing of the Colors" cannot occur on foreign soil. Official retirement took place at Fort Knox, on 17 October 1992. In attendance at the ceremony were several former Spearhead commanding generals, and division veterans from all eras. In a traditional ceremony, Command Sgt. Major Richard L. Ross, holding the division color with battle streamers, passed it to General Frederick M. Franks, Jr., completing the official retirement of the division, and the 3d Armored Division was removed from the official force structure of the U.S. Army. With the end of the Cold War, several of the division's overseas Kasernes were transferred to other units, particularly the 1st Armored Division. Over time, many were closed, fell into disrepair and were eventually demolished. Some 3d Armored units were also transferred to the 1st Armored, notably the 2nd Battalion, 3rd Field Artillery, later to become semi-famous as the unit portrayed in Gunner Palace. The 1st Battalion, 32nd Armor now resides at Fort Campbell, Kentucky as part of the 101st Airborne Division (Air Assault). The unit was reorganized as the 1st Squadron, 3rd Cavalry Regiment, and is assigned to the 1st Brigade Combat Team of the 101st Airborne Division (Air Assault) as its organic Reconnaissance, Surveillance, and Target Acquisition (RSTA) element. The 1st Battalion, 33d Armor also calls Fort Campbell and the 101st Airborne Division (Air Assault) home. The 1st Battalion, 33d Armor was reorganized as the 1st Squadron, 33d Cavalry Regiment and is assigned to the 3d Brigade Combat Team of the 101st Airborne Division (Air Assault). The 4th Squadron, 7th Cavalry is now part of 1st Brigade, 2d Infantry Division. 
Additionally, the lineage of the 122d Support Battalion (Main) from the Division Support Command was reactivated at Fort Bragg and assigned to the Combat Aviation Brigade, 82d Airborne Division as the 122d Support Battalion (Aviation). Also, the 54th Support Battalion (Main) was reactivated on 16 September 1994 as the 54th Support Battalion (Base) of the 80th Support Group (Area). The 3rd Armored Division had thirty-nine commanders over the course of its history, many of whom went on to four-star rank. Cooper, Belton Y. (1998). Death Traps: The Survival of an American Armored Division in World War II. Novato, CA: Presidio Press. ISBN 0-89141-670-6. OCLC 38753044. A unique look at the war from a maintenance officer's perspective. Rolling Thunder: The True Story of the Third Armored Division (2002) – A History Channel documentary detailing the history of the division from birth to the 1990s. Man, Moment, Machine (season 1, episode 4): "Stormin' Norman and the Abrams Tank" – Featuring footage of the 3rd AD in the Gulf War, and interviews with 3AD tankers. Fury (2014 film) – a 2014 American-British war film written and directed by David Ayer. ^ Haldeman, Rob. "644th Tank Destroyer Battalion". ^ Third Armored Division Association; Members of the Division; Family and Friends. "Third Armored Division Association Archives: An Inventory at the University of Illinois Archives". ^ "3rd Armored Div. - Gulf War - 100-Hour Storm". www.3ad.com. Retrieved 26 August 2015. ^ Scales, Brig. Gen. Robert H.: Certain Victory. Brassey's, 1994, p. 279. ^ "54th Support Battalion". Archived from the original on 3 June 2013. Retrieved 13 June 2012. ^ "Commanders of the 3d Armored Division 1941-1992". Archived from the original on 28 September 2007. Retrieved 14 September 2007. ^ "Rolling Thunder: The True Story Of The 3rd Armored Division DVD". A&E Television Networks. Trauschweizer, Ingo. The Cold War U.S. Army: Building Deterrence for Limited War. Univ. Press of Kansas (2008).
ISBN 978-0-7006-1578-0. Bourque, Stephen A. (2001). Jayhawk! The 7th Corps in the Persian Gulf War. Center of Military History, United States Army. LCCN 2001028533. OCLC 51313637. Third Armored Division (1945). Spearhead in the West, 1941-45 (PDF). Frankfurt am Main: Franz Jos. Henrich, Druckerei und Verlag. OCLC 1262878 – via Central Connecticut State University. 3AD.com – The 3rd Armored Division History Foundation – Covering 1941 to 1992 with high-quality photos, feature articles, documents, audio, and more. Includes, for example, complete text of the 260-page 3AD World War II history "Spearhead in the West"; audio of President Kennedy's speech to the troops in 1963; details on 3AD Cold War nuclear weapons; Spearhead Newspaper's Gulf War reports; and a look at Elvis Presley's Army days. Association of 3d Armored Division Veterans (All-era group) – Extensive historical information, personal photos, and featuring a roster of Operation Desert Storm troops. 3rd Armored Division Association Archives at the University of Illinois. Text-only listings of their large World War II collection, which must be visited in person. 3rd AD Unit page on Military.com. Roll of Honor of the 3d Armored Division during WWII.
Teach Cello with the popular Suzuki Cello School. The Suzuki Method(R) of Talent Education is based on Shinichi Suzuki's view that every child is born with ability, and that people are the product of their environment. According to Shinichi Suzuki, a world-renowned violinist and teacher, the greatest joy an adult can know comes from developing a child's potential so he/she can express all that is harmonious and best in human beings. Students are taught using the "mother-tongue" approach. Each series of books for a particular instrument in the Suzuki Method is considered a Suzuki music school, such as the Suzuki Cello School. Suzuki lessons are generally given in a private studio setting with additional group lessons. The student listens to the recordings and works with their Suzuki cello teacher to develop their potential as a musician and as a person. This Suzuki Book is integral to Suzuki cello lessons. Titles: Berceuse, Wiegenlied or Lullaby, Op. 98, No. 2 (F. Schubert) * Tonalization: The Moon over the Ruined Castle (Taki) * Gavotte (Lully) * Minuet from Sei Quintetti for Archi No. 11, Op. 11, No. 5 in E Major (Boccherini) * Tonalization: The Moon over the Ruined Castle (Taki) * Scherzo (Webster) * Minuet in G, WoO 10, No. 7 for Piano (Beethoven) * Gavotte in C Minor, Gavotte en Rondeau from Suite in G Minor for Klavier, BWV 822 (Bach) * Minuet No. 3, BWV Anh. II 114/Anh. III 183/Anh. II, 115 (Bach) * Humoresque, Op. 101, No. 7 for Piano (Dvořák) * La Cinquantaine (Gabriel-Marie) * Allegro Moderato from Sonata I in G, BWV 1027 for Viola da Gamba (Bach).
Learn the inspiring story of the legendary point guard Jason Kidd! In Jason Kidd: The Inspiring Story of One of Basketball's Greatest Point Guards, you will learn the inspirational story of one of basketball's premier point guards. Jason Kidd was easily one of the best point guards to play the game of basketball throughout the 2000s. With an extraordinary ability to pass the ball at precisely the right time, Kidd was like Steve Nash in that he made the teammates around him fundamentally better. An NBA Champion (with the Dallas Mavericks in 2011), 10-time All-Star, and five-time All-NBA First Team member, Jason Kidd was feared by opponents for his passing and rebounding abilities. He corralled triple-doubles years before flashy athletic point guards like Russell Westbrook began doing so. Here is a preview of what is inside this book: Childhood and Early Life High School Career College Career Jason Kidd's NBA Career Getting Drafted Rookie Campaign First All-Star Season Tensions in Dallas, the Trade to Phoenix, Fresh Start with the Suns, First Playoff Appearance First Full Season with Phoenix, Igniting the Scorching Suns Shortened Season, Another First Round Exit Breaking Through to the Second Round Final Season in Phoenix The Trade to New Jersey, Reaching the NBA Finals Second Consecutive Finals Appearance Teaming Up with Vince Carter Leading a Trio with Carter and Jefferson Final Stretch in New Jersey, the Return to Dallas Contending as a Mav First and Only NBA Championship Final Season with Dallas Stop at the Big 1. Language: English. Narrator: Michael Hanko. Audio sample: http://samples.audible.de/bk/acx0/065970/bk_acx0_065970_sample.mp3. Digital audiobook in aax.

Every action in Martial Arts and self defense aims at discovering the opponent's weak points, exploiting them, and finally disabling him/her without injury or bringing him/her under control.
Although our knowledge of the structure of the human body has multiplied, in many books on the subject of Martial Arts the explanation of the effect of various striking and pressure techniques has been reduced to the bare mention of "causes pain", "paralyzes", or "death". Explanations are missing or are left in the realm of the esoteric. It would therefore be very much appreciated, if only from a standpoint of personal responsibility, if followers of the Martial Arts delved more deeply into the possible medical outcomes of their actions.
Epistemological debates over how best to define the name and the boundaries of the region known as Middle / Central / Inner Asia, or else Eurasia, have not subsided since these terms were invented in the first half of the nineteenth century. The standard arguments recur in cycles at the intersection of scholarly reasoning and pseudo-scientific constructs, politics and geopolitics, established literary turns of phrase and matters of taste. The history of the early stages in the elaboration of the concept of Middle / Central Asia offered in Svetlana M. Gorshenina's book demonstrates the ambiguity of many familiar definitions, showing that all the existing names refer not to an objective geographical order but to a geography of representations. Decoding these imagined criteria, the author shows how such toponyms are linked to geographical determinism, the principle of "centrality", the geopolitical interests of nation-states and transnational groupings, and a linear positivism bound up with Eurocentrism and Western imperialism. This publication is an adapted translation of part of the book L'invention de l'Asie centrale. Histoire du concept de la Tartarie a l'Eurasie (Genève: Droz, 2014). The Russian-language edition is available here. Author: Galiya Dzhanabayeva; editor: Marya S. Rozanova. The book presents the aesthetic features of the decorative and applied arts of the Central Asian peoples, many unique examples of which, together with the skills and knowledge involved in making them, have been recognized by UNESCO as masterpieces of traditional art and inscribed on the World Intangible Cultural Heritage List. Turning to the origins and subsequent development of folk decorative and applied art, artistic crafts, and trades, the author shows the historical continuity of the cultures and national traditions of the peoples of Central Asia. “Islam in Russia, Russia in the Middle East” Initiative.
Eurasia is marked by some of the freest migration in the world through the free labor zone of the Eurasian Economic Union and the visa-free regime of the Commonwealth of Independent States. At the same time, however, it faces restrictions in the form of Soviet-era registration procedures, active use of re-entry bans in Russia, and heavy-handed efforts to regulate emigration in Tajikistan and Uzbekistan. In this context, migration is not only an issue requiring domestic policy attention, but also a critical focus of geopolitical bargaining. Given the political and theoretical salience of migration in the Eurasian region, the NAC-NU Central Asia Studies Program chose as its second theme “external and internal migrations in Central Asia.” The call for papers generated proposals related to the development of Central Asian economies from migration and remittances, the dynamics of migration to Russia (the major destination), rising alternative destinations, and political factors in home and host countries. On the basis of these papers, we convened a conference in Astana in September 2017, which brought together junior and senior scholars with ties to the region and to international academic institutions. This group of scholars is well placed to mediate the empirical work being done in the region and broader theoretical perspectives. The second volume “New Voices from Central Asia: Political, Economic, and Societal Challenges and Opportunities” gives the floor to a young generation of experts and scholars from Central Asia and Azerbaijan. They were fellows at GW’s Central Asia-Azerbaijan Fellowship Program, which aims to foster the next generation of thought leaders and policy experts in Central Asia. The Program provides young professionals (policy experts, scholars, and human rights and democracy activists) with opportunities to develop their research, analytical, and communication skills in order to become effective leaders within their communities. 
The Program serves as a platform for the exchange of ideas and builds lasting intellectual networks of exchange between and amongst Central Asians and the U.S. policy, scholarly, and activist communities. It increases and helps disseminate knowledge about Central Asian viewpoints in both the United States and Central Asia. China’s Belt and Road Initiative (BRI) was announced by Chinese President Xi Jinping in September 2013 at Nazarbayev University. It is therefore natural that, for its launch, the NAC-NU Central Asia Studies Program, in partnership with GW’s Central Asia Program, seeks to disentangle the puzzle of the Belt and Road Initiative and its impact on Central Asia. Selected from over 130 proposals, the papers brought together here offer a complex and nuanced analysis of China’s New Silk Road project: its aims, the challenges facing it, and its reception in Central Asia. Combining methodological and theoretical approaches drawn from disciplines as varied as economics and sociology, and operating at both micro and macro levels, this collection of papers provides the most up-to-date research on China’s BRI in Central Asia. It also represents the first step toward the creation of a new research hub at Nazarbayev University, aiming to forge new bonds between junior, mid-career, and senior scholars who hail from different regions of the world and belong to different intellectual traditions. The economic driver of Central Asia, Kazakhstan stands up for its forward-looking branding and its multivectoral foreign policy. Behind its many successes, the country has been facing difficulties in managing its relationship with foreign investors, avoiding an “oil curse”, and obscuring the growing public debt of its nationalized big firms. 
Kazakhstan has partially failed to avoid social tensions linked to deep regional inequalities and to handle the 2014 economic crisis and the collapse of the national currency, the tenge; at the same time, the role of Islam in public space, both urban and rural, has been evolving dramatically over two decades. In the past few years, Tajikistan’s domestic situation has been shaped by the shrinking place given to the Islamic Renaissance Party (IRPT). The Tajik authorities used the Islamic State threat to liquidate the IRPT, the last structured opposition force, and eventually banned it in late 2015. State-sponsored narratives have been making massive—excessive—use of “Islam” as a tool to better control society (through women’s dress code, for example); to denounce regional warlords and opponents; and to instrumentalize regional powers such as Iran. However, societal evolutions are much more complex than the black and white state narrative would have us believe, with migrations and youth bulge at the core of social transformations, not to mention difficulties in making the Tajik economy—from energy use to agriculture—and public finance viable. Kyrgyzstan has been the most studied country in Central Asia, due to its openness to Western observers and the substantial presence of foreign institutions in its higher education system. Withstanding the pressure of its paradoxical politics, the country combines political pluralism and diverse parliamentary life with state violence, a public administration that is penetrated by criminal groups, and rising street vigilantism. Kyrgyzstan’s economy is also struggling, between the mining rent curse, agricultural survival and migrants’ remittances. In the past few years, Kyrgyz authorities have begun to follow Uzbekistan’s path, placing excessive stress on the theme of Islamic radicalization in order to justify the status quo and the role of law enforcement agencies.
Academic knowledge on Uzbekistan blossomed in the 1990s, before drying up in the 2000s and 2010s with the closure of the country and the increased difficulty of doing fieldwork. However, research has continued, whether directly, on the ground, or indirectly, through secondary sources or diasporic and migrant communities abroad. The death of the ‘father of the nation’, Islam Karimov, in fall 2016, partly changed the conditions and may slowly reopen the country to external observers and to regional cooperation and interaction with the world more broadly. This volume offers a unique collection of articles on Uzbekistan under Karimov, giving the floor to scholars from diverse disciplines. It looks at critical issues of history and memory, at dramatic societal and cultural change the country faced during two decades, at the domestic political order, and at change and continuity in Uzbek regional and foreign policies. The most closed and understudied country in Central Asia, Turkmenistan has been facing profound evolutions since the death of its first president, Saparmurat Niyazov, in December 2006. The political regime has evolved slowly under the second president, Gurbanguly Berdimuhamedow, giving more room to the emerging middle class and allowing for slightly more open patterns of development, particularly in higher education. The country has pursued its neutrality status in a post-Crimea world but has been feeling previously unknown insecurities related to destabilizations coming from its Afghan neighbor. Turkmenistan has also entirely reoriented its gas production and export strategies toward China, which has so far resulted in Ashgabat being essentially held hostage by China’s policies, with little room for maneuver to adjust to a new world energy context. The volume provides academics and policy makers with an introduction to current trends in Southern Eurasia. 
At the collapse of the Soviet Union, Western pundits celebrated the dramatic reshaping of regional interactions in Southern Eurasia to come, with the hope of seeing Russia lose its influence and be bypassed by growing cooperation between the states of the South Caucasus and Central Asia, as well as the arrival of new external powers. This hope has partially failed to come to fruition, as regional cooperation between the South Caucasus and Central Asia never started up, and cooperation within these regions has been hampered by several sovereignty-related and competition issues. However, a quarter of a century after the disappearance of the Soviet Union, strategic nodes in Southern Eurasia have indeed deeply evolved. Some bottom-up dynamics seem to have taken shape and the massive involvement of China has been changing the long-accepted conditions in the wider region. Islamic finance has also emerged, while external actors such as Turkey, Iran, the Gulf countries and Pakistan have progressively structured their engagement with both Central Asia and South Caucasus. Another key node is centered in and around Mongolia, whose economic boom and strategic readjustments may help to shape the future of Northeast Asia. The volume “New Voices from Central Asia: Political, Economic, and Societal Challenges and Opportunities” gives the floor to a young generation of experts and scholars from Central Asia and Azerbaijan. They were fellows at GW’s Central Asia-Azerbaijan Fellowship Program, which aims to foster the next generation of thought leaders and policy experts in Central Asia. The Program provides young professionals (policy experts, scholars, and human rights and democracy activists) with opportunities to develop their research, analytical, and communication skills in order to become effective leaders within their communities.
The Program serves as a platform for the exchange of ideas and builds lasting intellectual networks of exchange between and amongst Central Asians and the U.S. policy, scholarly, and activist communities. It increases and helps disseminate knowledge about Central Asian viewpoints in both the United States and Central Asia. To reflect over a quarter century of independence, we decided to give the floor to local voices exclusively. In this book, Central Asian scholars express what they consider to be the main successes and failures of these 25 years of national sovereignty, as well as the challenges their society will have to face in the near- and long-term future. The book Central Asia at 25. Looking Back, Moving Forward includes essays by 31 authors: Dilorom Abdullaeva, Aida Alymbaeva, Umed Babakhanov, Bakhytzhamal Bekturganova, Denis Berdakov, Alima Bissenova, Konstantin Bondarenko, Svetlana Gorshenina, Shairbek Juraev, Gulzhigit Ermatov, Galym Zhussipbek, Nargis Kassenova, Diana Kudaibergenova, Sobir Kurbanov, Sanat Kushkumbayev, Guzel Majdinova, Parviz Mullodzhanov, Anar Musabaeva, Davron Mukhamadiev, Parviz Mukhamadiev, Madina Nurgalieva, Adil Nurmakov, Mirzohid Rakhimov, Aidos Sarim, Talant Sultanov, Sanjar Sulton, Farhad Tolipov, Valikhan Tuleshov, Umida Hashimova, Alexander Tsay, and Aynabat Yaylymova. The first part of the book insists on three critical elements of the last 25 years: integration processes of the new states in the international scene, an ideology that absolutizes the sovereignty acquired in 1991 as the quintessence of the nation’s achievement, and, domestically, a political path shaped by presidentialism and a fear of pluralism of opinions, seen as a risk for the countries’ stability and essence. The second part of the book looks at the transformation of identities and societies in their ‘post-Sovietness.’ In the third part of the book we look at new social forces at work.
The fourth part looks forward and investigates the new ideological trends that will shape some or all Central Asian countries. The book is available in Russian and English. The Annual Memos group all CAP online publications in an easily downloadable booklet to keep you up to date on current research on the region. This book offers unique insight into the memory of the Central Asian afgantsy, who fought in Afghanistan during the Soviet invasion of 1979-1989. This work of oral history, organized by a team of Kazakh, Tajik, and Uzbek scholars under the supervision of Prof. Laruelle, contributes to a better understanding of the deep influence of the Soviet-Afghan war on the Soviet Central Asian social fabric, and the memories still at play today. This book presents unique material: the memories of the "Afgantsy" veterans from three Central Asian countries, Kazakhstan, Tajikistan, and Uzbekistan, who took direct part in combat operations in Afghanistan over the course of the ten-year war. Thanks to the remarkable work of a team of Kazakh, Tajik, and Uzbek specialists, the Afgantsy of Central Asia have finally been given the opportunity to speak. Masters of their own fates, far removed from today's ideological agendas, they describe the reality of their daily lives, their experience as soldiers and officers, and their relations with the inhabitants of Afghanistan. They define what they consider important to preserve in memory, above all their pride in Soviet achievements. This book is dedicated to the memory of all the Afgantsy of Central Asia who are no longer living and can no longer tell the world their story.
How much impact does yoga have on the recovery of cancer patients? Yoga is an art form that is greatly appreciated for its positive effects on the body. Physical, mental, emotional, and spiritual health are each enhanced through the practice of yoga and other holistic methods. Yoga is a calm and relaxing method of strengthening the body and ridding it of toxins, making it an ideal exercise for patients who have long-term or terminal illnesses. Cancer is a disease that is growing rapidly in today’s world, but few know the benefits of yoga to cancer patients. The illness itself is not the only thing that negatively affects cancer patients; the majority of treatments, such as radiation therapy or chemotherapy, also have long-term detrimental effects. While the symptoms and signs of the disease can be terrible and debilitating, the treatments can be just as harsh on the body. It is important for these patients to find ways to alleviate some of their pain without medication and more potentially painful treatments, as these can sometimes be more harmful than helpful over a long period of illness. Metastatic, malignant cancer cells are not the only toxins circulating in the bodies of cancer patients. The remnants of treatment can remain in the body for long periods of time and may produce illness later. Yoga increases blood flow without increasing blood pressure, and gentle poses will assist in balancing metabolic processes and increasing the activity of the lymphatic system, beginning the elimination of these toxins from the system. The slow movements and deep, therapeutic breathing increase oxygen flow in the body, allowing for further toxin removal. Not only are there physical benefits to teaching yoga to patients recovering from cancer, but the mental and emotional benefits are great. Yoga has been proven to reduce anxiety and stress, alleviate migraines, and relieve tension throughout the entire body.
Anxiety and tension have been linked directly to immunosuppressive effects, and by reducing these feelings in the body, patients are increasing their body’s own natural defense against illness, including cancer. The beginning lessons may be difficult for some cancer patients, particularly if their body has succumbed to the illness greatly, but the benefits of yoga are worth the initial rough start. The deep breathing exercises (pranayama) are also an important aspect of teaching yoga to cancer patients. As time progresses, patients will find that regular, restorative yoga exercises help them cleanse their bodies and give them a sense of comfort and ease, washing away their anxieties and worries.
This function is convenient when encoding a string to be used in the query part of a URL, as an easy way to pass variables to the next page. The string to be encoded. Be careful about variables that may match HTML entities. Things like &amp, &copy and &pound are parsed by the browser and the actual entity is used instead of the desired variable name. This is an obvious hassle that the W3C has been telling people about for years. The reference is here: » http://www.w3.org/TR/html4/appendix/notes.html#h-B.2.2. PHP supports changing the argument separator to the W3C-suggested semi-colon through the arg_separator .ini directive. Unfortunately, most user agents do not send form data in this semi-colon-separated format. A more portable way around this is to use &amp; instead of & as the separator. You don't need to change PHP's arg_separator for this. Leave it as &, but simply encode your URLs using htmlentities() or htmlspecialchars(). The urlencode and rawurlencode functions are mostly based on RFC 1738. However, since 2005 the current standard for URIs has been RFC 3986. Here is a function to encode URLs according to RFC 3986. Since PHP 5.3.0, urlencode and rawurlencode also differ in that rawurlencode does not encode ~ (tilde), while urlencode does. ... should solve this problem. I needed encoding and decoding for UTF-8 URLs, so I came up with these very simple functions. Hope this helps! Be careful when encoding strings that came from simplexml in PHP 5. If you try to urlencode a simplexml object, the script crashes. I got around the problem by using a cast. Don't use urlencode() or urldecode() if the text includes an email address, as it destroys the "+" character, a perfectly valid email address character. Unless you're certain that you won't be encoding email addresses AND you need the readability provided by the non-standard "+" usage, always use rawurlencode() or rawurldecode() instead.
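For readers coming from other languages: the same RFC 1738 vs. RFC 3986 split exists in Python's standard library, which makes a convenient way to see the difference. This is only an illustrative analogue, not PHP behavior: quote() roughly corresponds to rawurlencode() (space becomes %20), and quote_plus() to urlencode() (space becomes +). Note that Python leaves the tilde unencoded in both cases, so the PHP 5.3 tilde difference described above has no exact mirror here; the sample string is made up.

```python
from urllib.parse import quote, quote_plus

s = "report 2005~final"
print(quote(s, safe=""))  # report%202005~final  -- RFC 3986 style, like rawurlencode()
print(quote_plus(s))      # report+2005~final    -- form style, like urlencode(), except "~"
```

Either way the decoded value is the same; what matters is that the encoder and the decoder agree on which convention is in use.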
If you need to use any of these modules and handle paths that contain %2F or %3A (and a few other encoded special URL characters), you'll have to use a different encoding scheme. My solution is to replace "%" with "'". Do not let the browser auto-encode an invalid URL. Not all browsers perform the same encoding; to keep it cross-browser, do it server side. This very simple function makes a valid parameter part of a URL; to me it looks like several of the other versions here encode wrongly, as they do not convert the & separating the variables into &amp;. * @param $params The parameters to be converted into URL with key as name. If there are arrays within the $params array, they will be serialized before being urlencoded. I'm running PHP version 5.0.5 and urlencode() doesn't seem to encode the "#" character, although the function's description says it encodes "all non-alphanumeric" characters. This was a particular problem for me when trying to open local files with a "#" in the filename, as Firefox will interpret this as an anchor target (for better or worse). It seems a manual str_replace is required unless this was fixed in a later PHP version. Keep in mind that if you prepare a URL for a connection and use urlencode on some parameters but not on the rest, the unencoded parameters will not be decoded automatically at the destination if they contain special characters that urlencode would encode. Here the second parameter has spaces, which urlencode converts to '+'. After receiving this URL, the server will see that the second parameter has not been encoded and will not decode it automatically. This took more than two hours to discover; I hope it saves you time. I was testing my input sanitization with some strange character entities. Ones like � and � were passed correctly and were in their raw form when I passed them through without any filtering.
However, some weird things happen when dealing with certain characters (these are HTML entities): &#8252; &#9616; &#9488; and &#920; all misbehave. If you try to pass one in Internet Explorer, IE will *disable* the submit button. Firefox, however, does something weirder: it will convert the character to its HTML entity. It will display properly, but only when you don't convert entities. The point? Be careful with decorative characters. PS: If you try copy/pasting one of these characters into a TXT file, it will translate to a ?. kL's example is badly bugged, since it loops over itself and the encode function works in both directions. Why replace every %27 with ' in the same string in which you replace every ' with %27? Let's say I have the string: Hello %27World%27. It's a nice day. I get: Hello 'World'. It%27s a nice day. In other words, that solution is pretty useless. Just replace %27 with ' when decoding, or simply use urldecode(). // country, postcode, and city with id='country' and so-on. If you have a URL like this: test-blablabla-4>3-y-3<6, or one with any excluded US-ASCII characters (see chapter 2.4.3 of http://www.ietf.org/rfc/rfc2396.txt), you can apply urlencode twice to fix the 403 error. The problem is that the characters are decoded twice: the first time by mod_rewrite, and the second when PHP builds the $_GET array. You can also use this technique to achieve the same result as the more complex functions in other notes. * @param mixed $fe Is first call to function? This example covers all the encodings you need to apply in order to create URLs safely, without problems from any special characters. It is stunning how many people make mistakes with this. - Use urlencode for all GET parameters (things that come after each "="). - Use rawurlencode for parts that come before "?". - Use htmlspecialchars for HTML tag parameters and HTML text content.
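The three rules above can be demonstrated end to end. The snippet below is a Python sketch of the same idea (urllib.parse.quote standing in for rawurlencode, quote_plus for urlencode, and html.escape for htmlspecialchars); the file name and parameter values are invented for the example:

```python
from urllib.parse import quote, quote_plus
from html import escape

# Path part (before "?"): rawurlencode-style, so "#" and spaces can't break the URL.
path = "/files/" + quote("my report#1.txt", safe="")

# Query part (after each "="): urlencode-style.
query = "q=" + quote_plus("C++ & PHP") + "&page=2"

url = path + "?" + query
href = escape(url)  # "&" -> "&amp;" before embedding the URL in HTML

print(url)   # /files/my%20report%231.txt?q=C%2B%2B+%26+PHP&page=2
print(href)  # /files/my%20report%231.txt?q=C%2B%2B+%26+PHP&amp;page=2
```

Note that the literal %26 inside the encoded value is untouched by the HTML-escaping step; only the raw & that separates the two parameters becomes &amp;.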
Decoherence can be viewed as the loss of information from a system into the environment (often modeled as a heat bath), since every system is loosely coupled with the energetic state of its surroundings. Decoherence represents a challenge for the practical realization of quantum computers, since such machines are expected to rely heavily on the undisturbed evolution of quantum coherences. Simply put, they require that coherent states be preserved and that decoherence is managed, in order to actually perform quantum computation. So I am wondering: how can this loss of information be managed? Does this mean that it should be prevented completely, or is it necessary for quantum computing to actually allow some information loss in order to compute? The quantum circuit model describes a quantum computer as a closed quantum system and assumes that there is a system which executes the circuit but is completely isolated from the rest of the universe. In the real world, however, there are no known mechanisms for truly isolating a quantum system from its environment. Real quantum systems are open quantum systems. Open quantum systems couple to their environment and destroy the quantum information in the system through decoherence. When examining the simple evolution of a single quantum system, this system-environment coupling appears to cause errors in the quantum system's evolution (which wouldn't be unitary in this case). A coin has two states, and makes a good bit but a poor qubit, because it cannot remain in a superposition of head and tail for very long, as it is a classical object. A single nuclear spin can be a very good qubit, because a superposition of being aligned with or against an external magnetic field can last for a long time, even days. But it can be difficult to build a quantum computer from nuclear spins because their coupling is so small that it is hard to measure the orientation of a single nucleus.
Note that the constraints are in general opposing: a quantum computer has to be well isolated in order to retain its quantum properties, but at the same time its qubits have to be accessible so that they can be manipulated to perform computation and read out the results. A realistic implementation must strike a balance between these constraints. The first step towards solving the decoherence problem was taken in 1995, when Shor and Steane independently discovered a quantum analogue of classical error correcting codes. Shor discovered that by encoding quantum information, this information could become more resistant to interaction with its environment. Following this discovery, a rigorous theory of quantum error correction was developed. Many different quantum error correcting codes were discovered, and this further led to a theory of fault-tolerant quantum computation. Fully fault-tolerant quantum computation describes methods for dealing with system-environment coupling as well as with faulty control of the quantum computer. Of particular significance was the discovery of the threshold theorem for fault-tolerant quantum computation. The threshold theorem states that if the decoherence interactions are of a certain form and are weaker than the controlling interactions by a certain ratio, quantum computation to any desired precision can be achieved. The threshold theorem for fault tolerance thus settles the question of whether there are theoretical limits to the construction of robust quantum computers. Yes, currently the loss of information is being managed by means of quantum error correction protocols. Ideally, quantum decoherence and the eventual loss of information should be prevented. However, in real-world scenarios, it is hard to completely isolate quantum systems from their environment.
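The intuition behind the error correcting codes mentioned above can be seen in a purely classical toy model (a sketch only — real quantum codes must protect superpositions without measuring the encoded state, which this simulation ignores): the 3-bit repetition code recovers a logical bit by majority vote, suppressing errors whenever the per-bit flip probability is below one half.

```python
import random

def encode(bit):
    # Repetition code: one logical bit stored redundantly in three physical bits.
    return [bit, bit, bit]

def noisy_channel(bits, p_flip):
    # Each physical bit flips independently with probability p_flip.
    return [b ^ 1 if random.random() < p_flip else b for b in bits]

def decode(bits):
    # Majority vote corrects any single bit-flip error.
    return 1 if sum(bits) >= 2 else 0

random.seed(1)
p, trials = 0.1, 20000
raw_errors = sum(random.random() < p for _ in range(trials))
coded_errors = sum(decode(noisy_channel(encode(0), p)) != 0 for _ in range(trials))
# Logical error rate 3p^2(1-p) + p^3 ≈ 0.028 beats the bare rate p = 0.1.
print(coded_errors < raw_errors)
```

The threshold theorem makes the quantum analogue of this tradeoff precise: below a critical physical error rate, adding redundancy drives the logical error rate down rather than up.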
Technically, quantum decoherence is something necessary for the overall operation of the quantum computing system, in order to bring the system to an equilibrium state to initiate or perform another computing operation. Having said that, loss of information due to quantum decoherence during a computing operation is not an ideal thing.
Our cat of 12 years cries all the time... She is not hungry, her litter is clean, she has water, etc. She has always been a crier, but recently, it has gotten worse. We had to put our other cat to sleep in March and she did mourn then. A squirting water bottle does help, but we are quite curious why she is doing this. Also, she has been much more loving lately. She used to have no use for humans except to feed, water and keep her litter clean. Thank you for any input or suggestions you have!! I would recommend not trying to discourage your cat's behavior, but rather, to focus on finding and treating the underlying cause. Cats crying incessantly can be a sign that they are not feeling well. Possible causes for your cat becoming more insecure and crying all of the time include loss of hearing and feline metabolic diseases, such as feline hyperthyroidism, cat diabetes, and kidney disease. Because these cat diseases are deadly if left untreated, and considering your cat's age, I would suggest you bring her to a veterinarian for an exam and bloodwork as soon as possible.
In the first round of CMT Music Awards nominations, Taylor Swift, Carrie Underwood, and Lady Antebellum have all received nods. The fan-voted awards show announced Tuesday that Lady Antebellum, Sugarland and Jason Aldean are nominated in four categories, while stars Taylor Swift, Carrie Underwood, Kenny Chesney, Brooks & Dunn and Brad Paisley were nominated for three each. The first round of voting kicked off April 5 and will run through May 3 at CMT.com. CMT will announce the four finalists in each category except video of the year on May 11. The finalists for the Video of the Year category, which includes 10 nominees, will be announced at the start of the show on June 9. Fans will vote throughout the live show via text message. • Jason Aldean - "The Truth" • Kenny Chesney - "Out Last Night" • Toby Keith - "American Ride" • Lady Antebellum - "Need You Now" • Miranda Lambert - "White Liar" • Brad Paisley - "Welcome to the Future" • Taylor Swift - "You Belong With Me" • Carrie Underwood - "Cowboy Casanova" • Keith Urban - "'Til Summer Comes Around" • Zac Brown Band - "Toes" • Billy Currington - "People Are Crazy" • Tim McGraw - "Southern Voice" • Darius Rucker - "Alright" • Laura Bell Bundy - "Giddy On Up" • Martina McBride - "I Just Call You Mine" • Reba McEntire - "Consider Me Gone" • Kellie Pickler - "Didn't You Know How Much I Loved You" • Lee Ann Womack - "Solitary Thinkin'" • Gloriana - "How Far Do You Wanna Go?" 
• Lady Antebellum - "American Honey" • Rascal Flatts - "Here Comes Goodbye" • Rascal Flatts - "Summer Nights" • Trailer Choir - "Rockin' the Beer Gut" • Zac Brown Band - "Highway 20 Ride" • Bomshel - "Fight Like a Girl" • Brooks & Dunn - "Indian Summer" • Caitlin & Will - "Address in the Stars" • Joey + Rory - "Play the Song" • Steel Magnolia - "Keep On Lovin' You" • Sugarland - "Keep You" • Ryan Bingham - "The Weary Kind" • Luke Bryan - "Do I" • Easton Corbin - "A Little More Country Than That" • Randy Houser - "Boots On" • Justin Moore - "Small Town USA" • Chris Young - "Gettin' You Home" • Brooks & Dunn featuring Billy Gibbons - "Honky Tonk Stomp" • Kenny Chesney with Dave Matthews - "I'm Alive" • John Mellencamp featuring Karen Fairchild - "A Ride Back Home" • Kellie Pickler featuring Taylor Swift - "Best Days of Your Life" • Blake Shelton featuring Trace Adkins - "Hillbilly Bone"
Tonight is the Harvest Moon, but why do we call it that? October’s full moon occurs tonight, and it’s also the full moon closest to the Autumnal Equinox, or first day of fall. Every year, the full moon closest to the first day of fall is commonly referred to as the Harvest Moon in the Northern Hemisphere. That means the Harvest Moon can occur in either September or October, depending on how the moon’s cycle lines up with the Equinox. The Harvest Moon likely got its unique name because the full moon at the end of the growing season would help farmers work later into the night thanks to the light of the moon. A complete moon cycle is about 29.5 days, which adds up to roughly one full moon a month. When two full moons happen in a single month, the second full moon is called a Blue Moon.
In untrustworthy networks, I let OpenVPN tunnel my laptop's traffic. There are certainly alternatives, and I would like to present a particularly simple one: sshuttle. As the name suggests, the tool relies on SSH. The tunnel's endpoint is a leased root server, just like with OpenVPN. Sshuttle is very frugal. It only needs SSH access with user privileges on the server; root privileges are not necessary. Additionally, Python must be installed on the server - that's it. --dns is included here. This means that DNS queries also run through the tunnel, which does not happen automatically. This is sshuttle's Achilles heel: it only transports TCP; ICMP and UDP do not pass through the tunnel, apart from DNS. Whereas other VPN technologies work at packet level and rely on TUN/TAP devices, sshuttle works at session level. It assembles the TCP stream locally, multiplexes it over the SSH connection while keeping track of connection state, and splits it into packets again on the destination side. This avoids the TCP-over-TCP problem which plagues other tools such as OpenVPN: TCP has an overload control (congestion control). The protocol defines a performance limit on the basis of dropped packets. If you tunnel TCP over TCP, you lose congestion control for the inner connection, which can lead to bizarre error patterns. Sshuttle is immune to the problem. Verbose parameters can help if you do need to troubleshoot. Figure 1 shows a connection setup with -v. With the verbose option, sshuttle is very long-winded, so I recommend redirecting the output to a file that can be evaluated in peace. My conclusion: sshuttle is an excellent and simple VPN for people who can do without UDP and ICMP. Figure 1: Sshuttle builds a VPN to a server. Because of -v, the messages are more extensive than without.
I used to have flying dreams a lot. I am not sure if I still do, I don't remember, I think I have different dreams now though. I used to fly over whole continents... In some dreams I was being my present self and in some dreams I was like some kind of wizard, like from a past life or a different life. However, I believe that your soul or consciousness never leaves yours body, but that only a part of your consciousness is actually within your body and the rest is connected through higher chakras. In a way, that your awareness is both local to you and also multi-dimensional. When you learn to observe your life and your thoughts, you become aware of this connection. Sometimes during meditation you can experience your awareness being in different places, as well as if you were everywhere and yet at the same time right there meditating. It's such a fascinating subject and one that I enjoy exploring.
Can Russians exit Russia on an internal passport? Someone I know is a dual citizen of Sweden and Russia, holding a Swedish ID card and Russian internal passport, but neither a Swedish passport nor Russian external passport. She's planning to travel overland Sweden - Finland - Russia - Georgia - Armenia. Georgia accepts Swedish IDs while Armenia accepts Russian internal passports. And obviously entering Russia on an internal passport shouldn't be a problem, as it proves she's Russian. The problem is: I've heard that Russia doesn't let you exit on an internal passport other than to countries accepting them for entry, which the next country, Georgia, doesn't. Is this true? What if she explains that she's a dual national and using another document for Georgia? UPDATE: OK, so I spoke to the concerned person as you had some questions. Her father registered her birth with the embassy in Stockholm and got her an international passport (she's had two, one between 2004-2009 and one between 2009-2014), with which her family visited Russia on several occasions, most recently in 2012. In 2013-2014, she spent an exchange year in St Petersburg, during which she got the internal passport, which is valid until 2019. Shortly after returning to Sweden, her international passport expired and she hasn't got a new one since. UPDATE 2: I finally convinced her to get an (old template) Russian external passport. So apparently she'll use all three documents during the trip: Swedish ID at Schengen and Georgian borders; international Russian passport at the Russian border, and Russian internal passport at the Armenian border. Not possible. There are only a few countries to which Russians can exit with the internal passport, and Georgia isn't one of those. 
Russian citizens can only use the internal passport to visit Belarus, Kazakhstan, Kyrgyzstan, Armenia (a recent addition), the Russian-occupied territories of Georgia that Russia recognizes as independent and Russian-annexed territories of Ukraine. Possible solutions, other than obtaining a Russian external passport, would involve doing part of the journey by air, such as flying from Russia to Armenia using the internal passport. And obviously entering Russia on an internal passport shouldn't be a problem. The law states (translated from Russian): "Citizens of the Russian Federation exit from and enter the Russian Federation using valid documents that identify a citizen of the Russian Federation outside the territory of the Russian Federation." It says, briefly, that Russian citizens have to use the external passport both for entering and leaving the country. In practice, this should not mean that entry will be denied, but the usual situation would be that the person gets detained at the border until the authorities are convinced they are who they claim to be, then the person gets fined and allowed into the country. If your friend also happens to be a resident of Sweden, and not Russia, it might further delay the process of being allowed in on an internal passport. Yes, you can't exit Russia with an internal passport at border points with countries which have no visa-free agreement with Russia. Moreover, there are very few countries which accept Russians without an international passport. So, basically, your friend can't prove her identity with an internal Russian passport outside Russia. Moreover, the absence of it may lead to problems with entering the country, as, strictly speaking, you're still outside Russia, so you need the international passport. But! The federal law on the rules for entering and exiting Russia says that no Russian citizen can be denied entrance to Russia.
In case of losing the passport outside Russia, one should contact the local Russian embassy/consulate to get a document proving one's identity for entering Russia. In other words, your trip is still an option, but it may lead to numerous checks on entering Russia, even if you manage to get the temporary identity document. This is an unusual case for border officers, so you may lose a lot of time. I suggest your friend contact the Russian Consulate in Sweden and ask them about this problem. Maybe they will provide an easy way to resolve this (like providing her the temporary document), but it looks like your friend has to get her international passport, either from Sweden or from Russia. It's about the protocol - she needs a legal ID proving that she can enter and exit Russia, no matter where she goes from/to. Update about the international passports: Right now there are two types of IP available to Russians. The first type is the so-called "old template" (иностранный паспорт старого образца), which is still available and is valid for 5 years from the issue date. The second type is the so-called "new template" (иностранный паспорт нового образца) or "biometric passport" (биометрический паспорт), which is valid for 10 years. The old one doesn't contain biometric information and has a simple photo in it, so for a new one the embassy needs a technology update. The old one is thus valid for half as long as the new one. The old passport can be issued for a child under 12 years without bringing the child to the embassy, and can be done in two weeks. For the old one you need photos; for the new one you don't. So I assume that the old one was the easiest option; that's why the passports were valid for only 5 years. All in all, I strongly suggest your friend contact the embassy for a passport renewal, and only after that do the trip. This is the safest and easiest way to resolve this problem.
She can do that at the Consulate General of the Russian Federation in Gothenburg (Russian link). It will be done in up to 3 months; the price is 276 SEK for the old template, and 736 SEK for a new one.
What are the core ingredients of a happy, healthy family? While there are many ways to answer this question, one usually stands out – that’s balance. However, maintaining this balance is an art and requires constant attention, commitment and dedication. What’s more is that this balance can be easily upset by major life events like the loss of job, financial pressures, marriage, divorce, illness, death, work-life challenges and school pressures. Additionally, the imbalance created by having to deal with mental health issues like depression, substance abuse, chronic illness and eating disorders, can also affect your family. In short, this balance is very fragile and requires constant attention. Our counselors at CareNet will help your family to uncover the issues at the root of your family’s imbalance. Then, they will help each family member learn how to more effectively manage themselves, and interact in ways that are healthier and more constructive to supporting your family’s group dynamic. Our counselors may also recommend individual sessions for some family members who they believe will benefit from this additional counseling. Our Goal: To improve communications, reinforce desired behaviors and help you create, renew and maintain a healthy sense of balance for your entire family.
In 2002, five women are discovered barbarously murdered in Sierra Leone. Reuters Africa correspondent Connie Burns suspects a British mercenary: a man who seems to turn up in every war-torn corner of Africa, whose reputation for violence and brutality is well-founded and widely known. Connie's suspicions that he's using the chaos of war to act out sadistic, misogynistic fantasies fall on deaf ears - but she's determined to expose him and his secret. The consequences are devastating. Connie encounters the man again in Baghdad, but almost immediately she's taken hostage. Released after three desperate days, terrified and traumatized by the experience - fearing that she will never again be the person she once was - Connie retreats to England. She is bent on protecting herself by withholding information about her abduction. But secluded in a remote rented house - where the jealously guarded history of her landlady's family seems to mirror her own fears - she knows that it is only a matter of time before her nightmares become real.
The majority of strokes occur when a blood clot lodges in a blood vessel, blocking blood flow to a portion of your brain (ischemic stroke). The group of brain cells normally nourished by the oxygen in the affected blood vessel dies almost immediately after blood flow is blocked, while surrounding brain cells experience reduced blood flow. Although the benefits of early stroke treatment are clear, only a small percentage of people who have a stroke receive optimal treatment. Almost half the 167,000 people who die of stroke each year die before they ever reach a hospital, and a greater percentage of these people are women. Why? Most of the evidence points toward a delay in seeking or receiving treatment. Knowing the risk factors for stroke, recognizing the warning signs and seeking prompt emergency care can help improve the outcome if you or someone you know has a stroke. The majority of strokes occur when a blood clot lodges in a blood vessel, blocking blood flow to a portion of your brain (ischemic stroke). Similar to a heart attack, a stroke can be considered a "brain attack". The group of brain cells normally nourished by the oxygen in the affected blood vessel dies almost immediately after blood flow is blocked, while surrounding brain cells experience reduced blood flow. Your brain cells can tolerate this slowdown in blood flow only briefly before permanent damage begins to occur. The longer the wait until blood flow is restored, the more damage is done. Stroke is a potentially treatable disease when caught early at its onset. Given the narrow window of opportunity to halt stroke damage and prevent serious complications, prompt treatment is critical to obtaining the best possible outcome. There are many possible reasons why people put off seeking treatment for stroke symptoms. One may be lack of awareness of the symptoms of stroke.
Signs and symptoms of heart attack have been drilled into the public consciousness on a much greater and more widespread level than have the warning signs of stroke. Another important factor - and one that is inherently harder to address - is that symptoms of stroke can be disabling, leading to impaired movement, communication and thinking. This can prevent a person from calling for help and is particularly concerning for the person who lives alone. Surprisingly, perhaps, calling your doctor instead of calling an emergency number such as 911 is another cause for delay. After hearing your symptoms, your doctor will most likely tell you to seek emergency care, but in the meantime, precious minutes are lost. When you experience signs and symptoms of stroke (or heart attack), call 911 or your local emergency number immediately. Individual characteristics also have an effect on how long it takes to seek help. For example, not taking your symptoms seriously, wanting to tough it out or being unaware that you're at risk can all contribute to delay in treatment. More pre-hospital stroke deaths occur among women than among men, and research suggests that women experience longer delays to treatment than men do. Why this occurs is unclear, but part of the reason may be that women, and sometimes their doctors, aren't always fully aware or convinced that they're at risk of heart disease and stroke. Possibly the most effective treatment for ischemic stroke, and the one most likely to improve your chances of a full recovery, is injection of a clot-busting (thrombolytic) drug - such as tissue plasminogen activator (TPA) - to dissolve a blood clot. Some cases of ischemic stroke may not be compatible with TPA therapy. TPA therapy also isn't used to treat hemorrhagic stroke, a less common type of stroke caused by a blood vessel rupturing and bleeding into the brain.
Other treatment options available at some medical centers include use of a tiny instrument called a "retrieval device" that can directly remove the clot from the blocked artery. New treatments are under study, as well. All of these potential treatments require prompt medical attention. Clot-busting therapy must start within three hours of the onset of symptoms. After this period, the risks of the therapy - bleeding and possible brain hemorrhage - begin to outweigh its benefits. After an ischemic stroke, your doctor may perform several tests, including blood tests and an evaluation of your arteries and heart. This will assist your doctor in determining the best way of preventing another stroke. A program to prevent further strokes may include use of certain blood thinners, and your doctor may recommend surgery or a balloon procedure to unblock or widen the arteries to your brain if they're severely narrowed. Women are just as much at risk of stroke as are men, so don't make the mistake of thinking the possibility of a stroke doesn't apply to you. In addition, many factors can increase your risk. Some factors you can't control, such as a family history of stroke and increasing age. But there are other risk factors that are more manageable, including high blood pressure or cholesterol levels, smoking, diabetes, obesity, physical inactivity, drug and alcohol abuse, and cardiovascular disease. The risk associated with these factors can often be reduced through diet, exercise and medications, when needed. There are also risk factors to which women may be particularly susceptible. These include migraines with aura (visual disturbances preceding a migraine); use of oral contraceptives or oral hormone therapy; autoimmune diseases, such as lupus; or a clotting disorder, sometimes indicated by multiple miscarriages, blood clots in your lungs or legs, or a condition marked by purplish, net-like discoloration of your skin (livedo reticularis).
Your doctor can help you estimate your personal risk of developing cardiovascular disease, including stroke, over the next ten years. Knowing what your risk is can motivate you to take the steps needed to prevent a stroke. 1. Sudden numbness, weakness or paralysis of your face, arm or leg - usually on one side of your body. 2. Sudden difficulty speaking or understanding speech (aphasia). 3. Sudden blurred, double or decreased vision. 4. Sudden dizziness, loss of balance or loss of coordination. 5. A sudden, severe, "bolt out of the blue" headache or an unusual headache, which may be accompanied by a stiff neck, vomiting or decreased consciousness. 6. Confusion, or problems with memory, spatial orientation or perception. If these symptoms occur briefly and then go away, you may be experiencing a transient ischemic attack (TIA). A TIA is a temporary interruption of blood flow to a part of your brain. The signs and symptoms of TIA are the same as for a stroke, but they last for a shorter period - several minutes to 24 hours - and then disappear, without leaving apparent permanent effects. A TIA should be taken very seriously. It indicates an underlying risk that a full-blown stroke may follow. See a doctor immediately. Jim Martinez is a National Sales Director with Ameriplan USA, offering discount dental and health plans for individuals or households. Any age or preexisting conditions are accepted and plans start at only $11.95 per month. Be sure to visit the section on health articles for more quality information.
Haskell - Why are instances matched only by their heads? because this, for some reason, means "everything is a Monad (every f), only if it's a Wrapper", instead of "everything that's a Wrapper is a Monad". Similarly you can't define the Monad a => Applicative a and Applicative a => Functor a instances. Another thing you can't do (which is only probably related, I really don't know) is have one class be a superclass of another one and have the subclass provide a default implementation of the superclass. Sure, it's great that class Applicative a => Monad a, but it's much less great that I still have to define the Applicative instance before I can define the Monad one. This isn't a rant. I wrote a lot because otherwise this would quickly be marked as "too broad" or "unclear". The question boils down to the title. I know (at least I'm pretty sure) that there is some theoretical reason for this, so I'm wondering what exactly the benefits are here. As a sub-question, I would like to ask if there are viable alternatives that still keep all (or most) of those advantages, but allow what I wrote. Addition: I suspect one of the answers might be something along the lines of "What if my type is a Wrapper, but I don't want to use the Monad instance that that implies?". To this I ask, why couldn't the compiler just pick the most specific one? If there is an instance Monad MyType, surely it's more specific than instance Wrapper a => Monad a. There's a lot of questions rolled into one here. But let's take them one at a time. First: why doesn't the compiler look at instance contexts when choosing which instance to use? This is to keep instance search efficient. If you require the compiler to consider only instances whose instance contexts are satisfied, you essentially end up requiring your compiler to do back-tracking search among all possible instances, at which point you have implemented 90% of Prolog.
If, on the other hand, you take the stance (as Haskell does) that you look only at instance heads when choosing which instance to use, and then simply enforce the instance context, there is no backtracking: at every moment, there is only one choice you can make. Second: why can't a subclass instance supply default implementations for its superclasses? There have in fact been proposals for exactly this (e.g. default superclass instances). Then once you had provided an instance Monad M where ..., you could simply write instance Applicative M with no where clause and have it Just Work. I don't really know why this wasn't done in the standard library. Last: why can't the compiler allow many instances and just pick the most specific one? The answer to this one is sort of a mix of the previous two: there are very good fundamental reasons this doesn't work well, yet GHC nevertheless offers an extension that does it. The fundamental reason this doesn't work well is that the most specific instance for a given value can't be known before runtime. GHC's answer to this is, for polymorphic values, to pick the most specific one compatible with the full polymorphism available. If later that thing gets monomorphised, well, too bad for you. The result of this is that some functions may operate on some data with one instance and others may operate on that same data with another instance; this can lead to very subtle bugs. If after all this discussion you still think that's a good idea, and refuse to learn from the mistakes of others, you can turn on IncoherentInstances. I think that covers all the questions.
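For contrast, the "pick the most specific one at runtime" strategy is exactly what Python's functools.singledispatch does (an analogy only, not Haskell semantics — the function names here are invented for illustration). Because dispatch happens per call on the concrete runtime type, the same value can hit different implementations in different contexts, which is the coherence hazard the answer above describes:

```python
from functools import singledispatch

@singledispatch
def describe(x):
    # Fallback for unregistered types.
    return "generic"

@describe.register
def _(x: int):
    return "int"

@describe.register
def _(x: bool):  # bool subclasses int, so this is the more specific match
    return "bool"

print(describe(3.14))  # generic
print(describe(7))     # int
print(describe(True))  # bool: most specific registered type wins, at runtime
```

Haskell resolves instances at compile time precisely to avoid this runtime "most specific wins" behavior; IncoherentInstances trades that guarantee away.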
Most voluntary actions rely on neural circuits that map sensory cues onto appropriate motor responses. One might expect that for everyday movements, like reaching, this mapping would remain stable over time, at least in the absence of error feedback. Here we describe a simple and novel psychophysical phenomenon in which recent experience shapes the statistical properties of reaching, independent of any movement errors. Specifically, when recent movements are made to targets near a particular location subsequent movements to that location become less variable, but at the cost of increased bias for reaches to other targets. This process exhibits the variance–bias tradeoff that is a hallmark of Bayesian estimation. We provide evidence that this process reflects a fast, trial-by-trial learning of the prior distribution of targets. We also show that these results may reflect an emergent property of associative learning in neural circuits. We demonstrate that adding Hebbian (associative) learning to a model network for reach planning leads to a continuous modification of network connections that biases network dynamics toward activity patterns associated with recent inputs. This learning process quantitatively captures the key results of our experimental data in human subjects, including the effect that recent experience has on the variance-bias tradeoff. This network also provides a good approximation of a normative Bayesian estimator. These observations illustrate how associative learning can incorporate recent experience into ongoing computations in a statistically principled way. Experience has a profound effect on nearly all human behavior. 
For goal-directed behaviors such as reaching, the neural circuits that map sensory cues onto motor output continuously adapt to maintain behavioral stability in the face of external perturbations or internal noise (Shadmehr and Mussa-Ivaldi, 1994; Thoroughman and Shadmehr, 2000; Scheidt et al., 2001; Smith et al., 2006; Cheng and Sabes, 2007; Kluzik et al., 2008; Diedrichsen et al., 2010; Huang et al. 2011). There is evidence that these adaptive processes follow Bayesian statistical principles (Körding and Wolpert, 2004; Slijper et al., 2009; Wei and Körding, 2010), so that behavior is guided by both current sensory signals and a prior expectation of those signals derived from experience. Similar observations have been made for a variety of perceptual and cognitive processes (Weiss et al., 2002; Kersten et al., 2004; Miyazaki et al., 2005; Knill, 2007; Sato et al., 2007; Lages and Heron, 2008; Lu et al., 2008), suggesting that Bayesian principles may capture a general property of neural computation. Despite this wealth of experimental evidence, it is not well understood how neural circuits could learn such priors from recent experience. To better understand how behavior is shaped by recent actions, we investigated a novel form of experience-dependent learning using visually guided reaching. We found that the sensorimotor system appears to maintain a prior expectation for movement planning that is continually updated based on the sequence of recent reaches. This phenomenon is consistent, at the qualitative level, with adaptive Bayesian estimation. We next explored our hypothesis that the activity of a sensorimotor network could itself create and maintain such priors via Hebbian learning. Specifically, we propose a model in which ongoing activity continuously modifies the structure of synaptic connections within a competitive neural network. 
These changes bias the network dynamics toward recent activity patterns, effectively creating a prior on the network computations. We show that this simple model accurately captures the results from several novel behavioral experiments, suggesting a potential mechanism for adaptive Bayesian estimation. A total of 24 healthy, right-handed participants were tested (10 female, age range: 18–32 years). Subjects were paid for their participation and were naive to the purpose of the experiment. All the experimental procedures were approved by the University of California, San Francisco Human Research Protection Program. Subjects performed a series of trials in which they reached to visual targets in a virtual feedback apparatus (Fig. 1A) (Sober and Sabes, 2003). At the beginning of each trial, subjects placed the tip of their right index finger at a central start location positioned ∼29 cm in front of the midline of their chest. We used an arrow-field paradigm to guide their finger to the start location without providing visual information about absolute position (Sober and Sabes, 2005). After the start location was reached and after a variable delay (500–1500 ms), the target appeared 12 cm from the start location (unfilled circle, 15 mm in radius) and a “go tone” was played. Subjects were instructed to move as soon as possible and reach to put their finger in the center of the target circle. Once the finger had moved a quarter of the distance to the target, continuous feedback of finger position was displayed (filled white circle, 5 mm radius). Trials were terminated when the finger remained still in the target for 200 ms. At the end of each reach, participants received feedback in the form of a bonus score designed to encourage subjects to execute a quick, single, and accurate reach. The score was based on both reaction time and the distance between the target center and the location where the finger first decelerated below 25 mm/s. 
No bonus was given and a warning message appeared when the peak tangential velocity was <650 or >950 mm/s. Participants in experiment 1 (n = 8, three female) were tested on eight blocks of 110 trials in a single session. Each block began with 10 context trials, followed by a randomized sequence of 80 context and 20 probe trials. Probe trial targets were fixed at θ = 150° (relative to rightward axis) for all trial blocks. For the context trials, target angles were selected from a different distribution in each trial block: the repeated-target condition with all trials at the probe-target location; a normal distribution of targets with standard deviation σTarget = 1°, 2°, 3°, 5°, 10°, or 15°; or a uniform distribution of targets on the circle. Participants in experiment 2 (n = 8, three female) were tested on six blocks of 90 trials in each of four experimental sessions. Each block began with 10 context targets, followed by two repetitions of each probe target (14 trials) randomized with 66 context trials. A single context-target distribution was used for each session: the repeated-target condition, a normal distribution with σTarget = 7.5° or 15°, or a uniform target distribution. Seven probe target locations were used in this experiment. These were defined with respect to the repeated target location: θ − θrepeat = 0°, ±30°, ±60°, or ±90°. Half of the subjects were tested with θrepeat = 150° and half with θrepeat = 60°. Participants in experiment 3 (n = 8, four female) were tested on six blocks of 120 trials. Targets were presented in sequential order, starting at 0° and proceeding in 3° increments around the circle. Trial blocks had either clockwise or counterclockwise target rotation and there were three blocks per condition with order randomized. Movement trajectories were obtained from the position of an infrared LED located on the right index fingertip. Generally, positional data were not smoothed before analysis. 
However, on ∼5% of trials there were missing data samples due to obstructed view of the fingertip LED. For these trials, the missing data points were interpolated using a cubic spline method (spline in Matlab). However, removing these trials appeared to have no qualitative effect on the final results. In experiment 3, we estimated the mean angular error, θMV − θtarget, separately for blocks with clockwise and counterclockwise target rotations. Note, however, that this model and the fit value of σLikelihood are only meant to provide qualitative comparisons to the data, as the assumptions that go into the model, in particular that the prior variance matches the context variance, are unlikely to be correct, as described below. where β ∈ [0, 1] is the learning rate. The free parameters β and σLikelihood2 were fit to the experimental data, minimizing the square error between the per-trial MAP estimates of target location and subjects' movement errors (fmincon in Matlab). These fits were performed separately for each subject and each experimental session in experiment 2 (excluding the uniform condition). Cases where the fitted learning rate for a particular subject and session was either the maximum or minimum allowed value (0.999 and 0.001, respectively) were excluded from subsequent analysis (four of 24 fits). With this procedure, the mean estimate for σLikelihood was 10.0° (SD, 6.9°), slightly larger than that found with the normative Bayesian model. The noise has a Fano factor F (ratio of variance to mean), a correlation coefficient ρ between nearest-neighbor units, and a correlation coefficient for other pairs that falls off with the distance between the two preferred directions, with a FWHM of ϕ. With the parameter values used in our simulations (Table 1), we observed an average neuron–neuron correlation coefficient of 0.10 in the input activations. where the parameters a and b were set to the values 0.002 and 0.001, respectively. 
where β is the learning rate and α is the normalization parameter (Oja's α). We chose to use five iterations per simulated trial because this decode converged after ∼3–4 network iterations. The network model parameters used in our simulations are shown in Table 1. These values were obtained by fitting the model to the context-dependent changes in movement variance observed experimentally. Specifically, we fit the model to the variance curves in Figures 2A (experiment 1) and 5A, B (experiment 2) using the full-experiment simulations (see below); the parameters were selected to maximize the sum of the R2 values for each plot. For computational efficiency, this optimization was performed in three steps. First, using a reduced network with N = 90 neurons, we ran a large number of full-experiment simulations with random parameter values (∼3000 runs). We then qualitatively determined the subset of the parameters that did not correlate well with the goodness of fit: the baseline firing rate I0, Fano factor F, and the Oja's α. In the second stage, we ran a large number of optimization runs for the remaining parameters on the N = 90 network with different initial conditions, using a generic nonlinear optimization routine (Matlab's fmincon). From all these runs, we selected the best N = 90 network. Finally, we increased the number of neurons from 90 to 180 and reoptimized the parameters from the best network, with the scale-dependent parameters τ and β adjusted inversely proportional to network size. We first measured the effect of context target distribution by independently simulating blocks of trials with different input distributions. Each simulation consisted of a series of 100 training trials with the inputs θ(n) drawn from a given distribution followed by a series of probe trials in which learning was turned off (β = 0). All context distributions were centered at 0°, which we refer to as the “repeated target angle”. 
Probe inputs were presented from −140° to +140° in 20° intervals and each input was repeated 100 times. From each simulation, we computed the bias and trial-by-trial variance of the network output θMV for the probe targets. For each context distribution, we repeated this simulation 50 times and computed the average variance and bias. where the parameter ε determines the degree of unlearning. The network output was analyzed in the same manner as the experimental data. Lastly, we quantified how closely the network model approximates the normative Bayesian model. Specifically, we set out to ask whether the bias and variance of the network output change as predicted by the Bayesian model as a function of the context and likelihood variances. The network was trained using the blockwise protocol described above with four different context distributions and tested with a series of probe targets and with different input gains, γ (see Fig. 8C–F). We used these parameters to predict the variance and bias of the network for other combinations of gain and context variance (see Fig. 8C–F). Experience-dependent changes in reach planning were evaluated using the well studied paradigm of center-out reaching to visual targets that were arrayed radially about a fixed starting point (Fig. 1A). Across blocks of trials, we manipulated the statistics of recent movements by varying the probability distributions of the target angles for the majority of trials within a block, i.e., for the context trials (90 per block in experiment 1, 76 in experiment 2). If recent experience shapes the sensorimotor map, then we expected that changing the context target distribution should affect both the precision and accuracy of reach planning. We evaluated such changes using a fixed set of probe targets that were randomly interleaved with the context trials (20 per block in experiment 1, 14 in experiment 2). 
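As a concrete illustration of the blocked design described above, one trial block of experiment 1 (10 lead-in context trials, then a shuffled mix of 80 context and 20 probe trials) can be generated as follows. This is a hypothetical sketch in Python, not the authors' code; all names are ours.

```python
import random

def make_block(n_context=80, n_probe=20, n_lead=10,
               probe_angle=150.0, sigma_target=5.0, rng=None):
    """Build one experiment-1 trial block as a list of
    (trial_type, target_angle_deg) pairs."""
    rng = rng or random.Random(0)

    def context_target():
        if sigma_target == 0.0:                # repeated-target condition
            return probe_angle
        if sigma_target == float("inf"):       # uniform condition
            return rng.uniform(0.0, 360.0)
        return rng.gauss(probe_angle, sigma_target)  # normal condition

    lead = [("context", context_target()) for _ in range(n_lead)]
    mixed = ([("context", context_target()) for _ in range(n_context)]
             + [("probe", probe_angle)] * n_probe)
    rng.shuffle(mixed)                         # randomize context/probe order
    return lead + mixed

block = make_block()
```

Varying `sigma_target` across blocks reproduces the eight context conditions, from repeated target (0°) through the normal distributions to uniform.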
Our principal measures of reach planning are the mean of the initial reach direction (θMV, for movement vector) (Fig. 1A) and its standard deviation across trials (σMV). These measures are made 100 ms after movement onset, before feedback can affect the action (Desmurget and Grafton, 2000). The reaching task. A, Subjects reached to visual targets with virtual visual feedback of the index fingertip (black dot) available ∼100 ms after movement onset. For experiment 1, the central gray target is both the probe target and the center of the context-target distributions. For experiment 2, all seven probe targets were used and the center target was located either at 150°, as shown, or at 60° (randomized across subjects). The initial movement direction, θMV, was determined 100 ms after movement onset. B–D, Example trial blocks for three context conditions in experiment 1 (black circles, context trials; white circles, probe trials). Insets, Context target histograms. In the first experiment, we measured how the variance of initial reach directions to a single probe target (Fig. 1A, gray circle) depended on the spread of context targets about the probe location. Eight different target distributions were used, ranging from the repeated-target condition, with a single target location for all probe and context reaches (Fig. 1B), to the uniform condition, with context trial targets selected uniformly about the circle (Fig. 1D). The remaining conditions had normally distributed target angles, with the mean at the probe target and with different standard deviations (Fig. 1C). We found that the variance of reaches to the probe target changed across conditions (repeated measures, F(7,49) = 4.33, p < 0.001), with less variable contexts generally leading to less variable probe reaches (Fig. 2A). These data show that the repetition of similar reaching movements improves performance on those actions, i.e., “practice makes perfect” on a short timescale. Experimental results. 
A, Reach variability of the initial movement direction in experiment 1. B, Reach bias in experiment 2, with positive values reflecting bias toward the repeated-target position. The value of zero bias at the repeated-target position (0°) is nominal, since bias is defined as the error toward that location; there was no significant change across context conditions in the average angular error at this target (F(2,21) = 1.13, p = 0.34). C, Angular error in the CW and CCW trial blocks of experiment 3. Error bars represent SEs. In a second experiment, we measured how the distribution of context targets affects the mean movement error (bias) at an array of probe targets (Fig. 1A, white circles). When context reaches are all made to a single target, in this case the center probe target (Fig. 1A, gray circle), movements to the other probe targets are strongly biased inwards toward the center position compared with trial blocks with uniformly distributed context targets (Fig. 2B). This bias is stronger for probe locations further from the center target position; this effect attenuates as the distribution of context targets becomes more variable (repeated measures context × target interaction: F(9,63) = 5.20, p < 0.001). These results show that the reduced movement variability for repeated target directions comes at the cost of increased movement bias for other target directions. We will argue below that the experience-dependent changes in reaching shown in Figure 2 are the result of an automatic learning process. However, it is also possible that these effects reflect a high-level strategy, where subjects simply predict future target positions given recent history. To distinguish these possibilities, we conducted a third experiment in which future targets are predictable from recent reaches yet different from them. 
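The direction-dependent error measure used in experiment 3 (the angular error θMV − θtarget, compared between CW and CCW blocks) requires wrapping angular differences into (−180°, 180°]. A small helper of our own making, not taken from the paper's analysis code:

```python
def angular_error(theta_mv_deg, theta_target_deg):
    """Signed angular difference wrapped to (-180, 180], so that a
    reach at 359 deg toward a 1 deg target counts as -2 deg of error
    rather than +358 deg."""
    d = (theta_mv_deg - theta_target_deg + 180.0) % 360.0 - 180.0
    return 180.0 if d == -180.0 else d
```

Averaging this signed error separately over CW and CCW blocks gives the two bars of Figure 2C; a lag toward recently presented targets shows up as errors of opposite sign in the two block types.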
Within each block of 120 trials, targets were presented sequentially around the circle in either the clockwise (CW) or counterclockwise (CCW) direction, stepping in 3° increments. Subjects were aware of this pattern and could have predicted the location of the next target; if they relied on such a predictive strategy, little or no direction-dependent error should result. In contrast, if future movements are always biased toward recently presented targets, then subjects should demonstrate a CW bias during CCW trial blocks and vice versa. Indeed, significant direction-dependent errors were observed (t(7) = 3.28, p < 0.025) (Fig. 2C), confirming that these experience-dependent effects are not the result of predictive, cognitive strategies. The tradeoff we observed between variance and bias can be qualitatively understood within a Bayesian framework for reach planning. In this framework, sensory signals, x, give rise to a likelihood function for the current target position, L(θ, x). Following Bayes' rule, this likelihood is combined with a prior expectation of the target, taking the form of a probability distribution p(θ), to yield a posterior distribution p(θ|x). The peak of p(θ|x) is the MAP estimate of the target location and is used to select the appropriate motor response (see Materials and Methods, above) (Fig. 3A). If p(θ) is an adaptive prior, then the bias and variance of reaching will reflect recent movement statistics. For example, when repeated movements are made to the center probe target, there is an increasing expectation that future movements will also be made in that direction; i.e., the prior probability distribution tightens about the center target. A tighter prior decreases the variance of the MAP estimate, but also biases it toward the center target. The width of the prior distribution modulates the degree of these effects (Fig. 3B,C). Variance–bias tradeoff in the normative Bayesian model. A, The components of the normative Bayesian model. 
B, Change in MAP estimator variance as a function of context variance. C, Output bias of the MAP estimator as a function of input location and context variance. The variance and bias effects of the simple Bayesian model follow the same trends as those observed experimentally; however, the detailed shapes of these curves are not the same (compare Fig. 3B,C with Fig. 2A,B). More sophisticated Bayesian estimation models can capture some of these details. For example, a robust Bayesian model (Körding and Wolpert, 2004; Knill, 2007) would predict that bias scales less than linearly with target distance, as seen in our data (Fig. 2B). However, we show later that the details of Figure 2 can be largely explained as the result of an iterative learning process acting on the specific order of trials used in our experiments. If the variance–bias tradeoff in Figure 2 arises from a process of adaptive Bayesian estimation, then we should be able to see the effects of learning evolve over time. Figure 4 illustrates how the average reach bias evolves for the ±90° probe targets over the course of an experimental session in experiment 2. We did not observe a large increase in bias across the session, although there is a trend for a slight increase within the first trial block, at least for the repeated-target and 15° contexts. While these data might seem inconsistent with an adaptive Bayesian model, it is important to note that before the first ±90° probe target of a trial block occurred, a minimum of 10 context trials and an average of >22 context trials had already taken place. If learning occurs on a relatively fast timescale, then it would have already approached the asymptotic value for that context by the time of the first probe trials. Evolution of reach bias across trial blocks. Data show the mean (±SE) across subjects of the bias at the ±90° probe targets for each trial block within a session, separately for each context. 
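With a Gaussian likelihood and a Gaussian prior, the MAP estimate in this framework has a closed form that makes the variance–bias tradeoff explicit: it is a precision-weighted average of the observation and the prior mean. A minimal sketch with made-up numbers, not the paper's fitted values:

```python
def map_estimate(x, sigma_lik2, theta_prior, sigma_prior2):
    """MAP of the target angle given a noisy observation x with
    likelihood variance sigma_lik2 and a Gaussian prior
    N(theta_prior, sigma_prior2): the weight w on the data
    shrinks as the prior tightens."""
    w = sigma_prior2 / (sigma_prior2 + sigma_lik2)
    return w * x + (1.0 - w) * theta_prior

# Tight prior (variance 25 deg^2): strong pull toward the prior mean,
# while the estimate's trial-by-trial variance shrinks by w**2.
tight = map_estimate(90.0, 100.0, 0.0, 25.0)   # -> 18.0
# Wide prior (variance 1e4 deg^2): nearly unbiased, little shrinkage.
wide = map_estimate(90.0, 100.0, 0.0, 1e4)     # -> ~89.1
```

The same weight w that pulls the estimate toward the prior mean (bias, Fig. 3C) also multiplies the observation noise (variance reduction, Fig. 3B), which is the tradeoff described above.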
To measure the effective learning rates in our experiments, we used a simple incremental learning algorithm to model an adaptive Bayesian estimator. After each trial, the algorithm updates its estimate of the prior distribution, specifically the mean θ̄ and the variance σPrior2, based on that trial's target location (Eq. 6) (see Materials and Methods, above). The model includes two free parameters, the likelihood variance σLikelihood2 and a learning rate β that determines the weight given to the last trial in the update algorithm. In the limit of β = 0, the system is not adaptive and the initial conditions for θ̄ and σPrior2 are used for all trials. In the limit of β = 1, the estimates have no memory, e.g., the mean of the prior is set to the target on the previous trial. We fit this model separately to the data from each subject and session in experiment 2 (excluding sessions with the uniform context), minimizing the sum-squared prediction error for movement angle. The best-fit learning rates show a large degree of heterogeneity across subjects, with a median value of 0.25 and a positive skew (SD = 0.27, mean = 0.29). Still, learning was generally fast. For example, with the median learning rate, the estimated mean of the prior would reach 66% of its asymptotic value within four trials. The presence of such fast learning rates explains why we see little change in the measured bias across a session (Fig. 4). The adaptive Bayesian model provides a much better account of the mean bias data than the simple normative model, predicting a lower magnitude for the bias and better capturing the dependence of the bias on probe distance (compare Fig. 5A to Fig. 3C). This difference is not due to the difference in fit likelihoods for the two models (mean σ = 7.2° for the normative model and σ = 10.0° for the adaptive model), since the larger mean variance used in the adaptive model would only increase the magnitude of the bias. 
Rather, the improvement in fit is due to the effects of trial-by-trial learning and the actual sequence of targets experienced by the subjects. In particular, the presence of the probe trials prevents the prior distribution from converging to the context target distribution. Comparison of adaptive Bayesian model and experimental results. A, B, Reach bias (A) and reach variance (B) as a function of probe target angle and context target distribution for experiment 2. Dashed lines show mean results from the iterative Bayesian model fit to each experimental session in experiment 2. Solid lines show the observed data (mean ± SE). C, Reach variance for experiment 1. Dashed lines show predictions from the adaptive Bayesian model with the mean per-session parameters used in A and B. Behavioral data in A and C are replotted from Figure 2. The influence of trial-by-trial learning can also be seen in the variability of movements. A notable feature of the experimental data is that the reach variability in experiment 2 depends significantly on the interaction between the context and the target location (repeated measures context × target, F(9,63) = 2.26, p = 0.029) (Fig. 5B). In the context of the normative Bayesian model, this result is unexpected, since the model predicts that reach variance should decrease monotonically with context variance, independent of target location (Eq. 4; Fig. 3B). We observed this predicted trend at the repeated target location for both experiments 1 (Fig. 2A) and 2. However, the opposite trend was seen for probe targets further from the repeated target location, i.e., an increase in reach variability was observed as the context distribution narrows (Fig. 5B, solid lines). The increased variability is not simply due to a large change in bias during the early phase of learning (Fig. 4), as the patterns are qualitatively unchanged when the data are analyzed separately for each of the six trial blocks in the session (data not shown). 
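The trial-by-trial update of the prior used in this adaptive model can be sketched as an exponentially weighted running estimate of the prior's mean and variance. This is our paraphrase of the fitting procedure described above; the paper's exact Eq. 6 is not reproduced here.

```python
def update_prior(theta_bar, sigma_prior2, theta_trial, beta):
    """Move the prior's mean and variance toward the most recent
    target with weight beta (beta = 0: fixed prior; beta = 1: the
    prior mean is just the previous target)."""
    new_mean = (1.0 - beta) * theta_bar + beta * theta_trial
    new_var = ((1.0 - beta) * sigma_prior2
               + beta * (theta_trial - theta_bar) ** 2)
    return new_mean, new_var

# With the median fitted rate beta = 0.25, the mean closes
# 1 - (1 - beta)**4, roughly two-thirds of the gap to a new
# asymptote, within four trials.
theta_bar, var = 0.0, 0.0
for _ in range(4):
    theta_bar, var = update_prior(theta_bar, var, 1.0, 0.25)
```

Because probe trials also feed this update, the prior never fully converges to the context distribution, which is the point made above about the improved fit.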
where θ reflects the true value of the target (i.e., the mean value of x) and the parameters k1 and k2 only depend on the current values of the likelihood and prior variances. As predicted, the experimental data show an approximately quadratic increase in movement variance with target location (Fig. 5B, solid lines). Furthermore, the adaptive Bayesian model reproduces the variance effects seen in our data from experiment 2 (Fig. 5B, dashed lines). The adaptive Bayesian model also provides a much better account of movement variance at the center target than the simple normative model (compare Fig. 5C with Fig. 3B). While the dependence on context is comparable for the two models, only the adaptive Bayesian model provides an accurate estimate of the lower bound on movement variance. Specifically, the simple normative model predicts no residual variability in the repeated-target case, while the adaptive Bayesian model accurately predicts this value. Again, this difference is due to the effects of trial-by-trial learning. While the nominal target is at the same location on every trial, sensory noise injects trial-by-trial variability into the parameters of the prior distribution and prevents the variance of the prior from converging to zero. Together, these results suggest that the context-dependent changes in reach variance and bias are indeed the effect of a trial-by-trial learning process. These findings also illustrate that the process of learning can itself be responsible for a large portion of the trial-by-trial variability observed in sensorimotor tasks (Cheng and Sabes, 2006, 2007), even in a case such as this where the apparent goal of learning is the reduction of movement variability. While the adaptive Bayesian model captures many of the key features of the experimental data, it does not address the underlying mechanism for learning. 
In particular, we are interested in discovering candidate neural mechanisms that can link normative models of learning to the neural circuits that control behavior. Here we propose a parsimonious approach to adaptive Bayesian estimation within cortical sensorimotor circuits, similar to that proposed in previous studies (Wu et al., 2002, 2003; Wu and Amari, 2005). Consider a recurrently connected network of neurons (Fig. 6A) with dynamics that allow it to efficiently extract information from noisy inputs (Pouget et al., 1998; Deneve et al., 1999, 2001; Latham et al., 2003). On each simulated trial, neurons receive input activation that is determined by the current target angle, the neuronal tuning curves (mean activation vs target angle), and correlated noise (Wu et al., 2002), features that are consistent with physiological observations of sensorimotor cortex (Burnod et al., 1999; Georgopoulos et al., 1986). The pattern of activity across the network is driven by both the input activation and recurrent activation between neurons with similar tuning curves. At the end of each trial (five iterations of the network dynamics), the planned movement direction is read out from the pattern of activity across the network using a population vector decoder (Georgopoulos et al., 1986, 1988). Variance–bias tradeoff in the adaptive network model. A, The network model. Top, Mean input activation (black dashed line) and a single example input (gray lines) for a target at θ = 0°. Bottom, Recurrent connections reflect network topography. B, Change in network output variance as a function of context variance. C, Bias in network output as a function of input location and context variance. In order for the network to learn from experience, a normalized Hebbian learning rule (Hebb, 1949; Oja, 1982) is applied to the recurrent connections so that the changes in connectivity between any two units reflect the trial-by-trial correlations in their firing rates. 
On every trial, this learning rule acts to strengthen connections that give rise to the pattern of activity associated with the current movement direction, slightly biasing the dynamics of the network toward that pattern. We expected that repeated presentations of a narrow range of targets would strengthen the associated patterns of activity, thereby creating an effective prior on subsequent trials. We first tested this idea by examining whether the variance and bias of the network change with the statistics of recent experience in a manner similar to that observed experimentally. For each simulated trial block, the network was initialized with a weight structure that has been shown to yield nearly optimal outputs for the case of a flat prior, i.e., that approximates maximum likelihood estimation (Pouget et al., 1998; Latham et al., 2003), and with network parameters fit to the experimental data (see Materials and Methods, above). The network was then trained on a set of context trials with the input target angle drawn from one of the distributions used experimentally (see Blockwise simulations, Materials and Methods, above). After training, learning was turned off and a series of simulated probe trials was performed to measure the trial-by-trial variance and bias of the network output. As expected, the distribution of training targets affects both the variance and bias of the network output. After training with repeated inputs to the same target (i.e., σ = 0°), the network's output variability is greatly reduced compared with the case of training with uniform targets, while intermediate training distributions lead to intermediate output variability (Fig. 6B). Training with repeated inputs also resulted in a marked bias toward the repeated target (Fig. 6C). These effects become smaller with wider training target distributions. 
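A minimal sketch of one training trial in this spirit, with Gaussian-tuned input activity on a ring of direction-tuned units, an Oja-style normalized Hebbian weight update, and a population-vector readout. This is our simplification: it omits the recurrent dynamics and input noise, and none of the parameter values are the fitted ones from Table 1.

```python
import numpy as np

def train_trial(W, theta_deg, beta=0.05, alpha=0.1, width=30.0):
    """One trial: compute tuned activity for the target angle, then
    apply a normalized Hebbian (Oja-style) update that strengthens
    connections between co-active units."""
    n = W.shape[0]
    pref = np.arange(n) * 360.0 / n                       # preferred directions
    diff = np.angle(np.exp(1j * np.deg2rad(pref - theta_deg)), deg=True)
    r = np.exp(-0.5 * (diff / width) ** 2)                # Gaussian tuning
    # Hebbian growth (outer product) with Oja-style decay for stability.
    W += beta * (np.outer(r, r) - alpha * (r ** 2)[:, None] * W)
    return W, r

def population_vector(r, n=180):
    """Read out the planned direction as the angle of the sum of
    preferred-direction unit vectors weighted by activity."""
    pref = np.deg2rad(np.arange(n) * 360.0 / n)
    return np.rad2deg(np.angle(np.sum(r * np.exp(1j * pref))))

# Repeated-target training strengthens the weights around 0 deg,
# deepening that basin of attraction, as in Fig. 8B.
W = np.zeros((180, 180))
for _ in range(100):
    W, r = train_trial(W, theta_deg=0.0)
```

With recurrent dynamics included, the strengthened connections near the trained direction would pull activity patterns for nearby targets toward 0°, producing the bias and variance effects in Figure 6.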
While the block-wise network simulations in Figure 6 show the same variance and bias trends as the behavioral data, they do not account for the finer details of the data. This is perhaps to be expected. In the real experiment, learning is never turned off, so the probe trials themselves contribute to the learned effects. Also, the influences of learning can carry over between blocks, either because learning is slow compared with the timescale of a block or because of the presence of multiple timescales of sensorimotor learning (Avillac et al., 2005; Smith et al., 2006; Körding et al., 2007). This makes the results dependent on the specific ordering of trials and conditions. This effect was particularly important for the variance experiment, because the distribution of context targets differed across blocks within the same experimental session. Thus, information learned in one block could carry over and influence performance in future blocks. We therefore conducted network simulations with the exact sequence of trials that subjects experienced in each session. Because subjects were allowed to rest between trial blocks and learned information may have been lost during this delay (Körding et al., 2007), we also allowed for some unlearning between simulated blocks, with the network weights partially decaying back to initial values. With these full-experiment simulations, we found that the network was able to reproduce the human psychophysical data with good accuracy. The model matches much of the apparent noise in variance-by-context effects in experiment 1 (Fig. 7A), suggesting that these features can be explained by the specific sequence of trials and blocks used in our experiment. For example, the relatively low variance in the uniform context condition appears to be due to the fact that this condition was often presented in one of the last two blocks of the session, when the cumulative effects of learning were greatest. 
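The partial unlearning between blocks can be modeled as a decay of the learned weights back toward their pre-training values. The fraction ε is the unlearning parameter named in the Methods, though this particular functional form is our assumption, not necessarily the one used in the simulations:

```python
def unlearn(W, W_init, epsilon):
    """Relax weights a fraction epsilon of the way back toward their
    initial values between trial blocks (epsilon = 0: keep everything
    learned; epsilon = 1: full reset to W_init)."""
    return W_init + (1.0 - epsilon) * (W - W_init)
```

Applied between simulated blocks, this leaves later blocks with a damped trace of earlier contexts, which is what makes block order matter in the full-experiment simulations.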
The model also naturally captures the pattern of context- and target-dependent variability that we observed in experiment 2 (Fig. 7C). Finally, even though the network parameters were fit on the reach variance data, the network model is able to predict the pattern of reach biases that were observed in both experiments 2 (Fig. 7B) and 3 (Fig. 7D). Comparison of network behavior (colored lines) and experimental results (gray lines) when the network is simulated with the same trial sequences that subjects experienced in experiments 1–3. A, Reach variance in experiment 1. B, Reach bias in experiment 2. C, Reach variance in experiment 2. D, Reach bias in experiment 3. E, Evolution of reach bias across trial blocks in experiment 2. Behavioral data are replotted from Figures 2 and 4. The same network parameters were used in all plots and were fitted to the data in A and C. The good match between the data and the adaptive network model for the variance in experiment 2 (Fig. 7C) suggests that the effective learning rate of the network model is also in the same range as that observed experimentally. We explored this issue further by looking at how the within-session changes in network bias compare with the changes in bias observed in experiment 2 (Fig. 7E). The network provides a good match to the data for wider context distributions. However, as the context variance becomes smaller, there are more pronounced learning-dependent increases in bias across trial blocks for the adaptive network model than for the experimental data, particularly during the repeated-target condition. This difference is offset by a larger bias in the initial trial block for the experimental data, suggesting that the effective learning rate in the model is slower than that observed experimentally. As noted in the Discussion below, we hypothesize that this difference results from the fact that the network does not include a sufficiently strong stabilizing force that would cause learning to asymptote. 
We have shown that the network model can accurately emulate the experience-dependent changes in reach variance and reach bias that we observed behaviorally. In the network, these changes arise from modifications to the recurrent connections, represented by the matrix of connection weights, with element (i, j) representing the connection strength from unit j to unit i (Fig. 8A). In the repeated-target condition, the activity pattern representing the repeated 0° target is reinforced by Hebbian learning on every trial, causing an increase in the connection strengths between units with preferred directions near this target (Fig. 8B). Similar, but attenuated, changes are apparent when the network is trained with broader target distributions centered at 0°. The enhancement in recurrent connectivity around the 0° unit effectively shifts the energy landscape of the network, deepening the basin of attraction in that region.

Figure 8. Adaptive network model approximates Bayesian estimation. A, Matrix of initial recurrent weights before training, with units arranged topographically by preferred direction (PD); 0° represents the repeated target angle. B, Changes in the central portion of the weight matrix following 100 training trials with different context target distributions. C–E, Comparison of network output and a matched normative Bayesian model as a function of context variance during training and input gain during testing. C, Solid lines, network output variance; dashed lines, predictions from a matched Bayesian model with likelihoods determined after training with the uniform context variance (gray curve) and priors determined by the network results with 60 Hz input gain (gray vertical bar); the Bayesian model necessarily matches the network model for those data points. D, Same plot as in C but using a network with baseline firing rates lowered to 1 Hz; priors for this network were also estimated with the 60 Hz input gain (not shown). E, Network output bias (solid lines) for θ = 10° compared with the prediction of the matched Bayesian model from D (dashed lines).

These learning-related changes in recurrent connections, and the resulting changes in network dynamics, alter the variance and bias of the network output. We characterized these effects with a range of input gains following training with different target distributions and then compared the network performance to the predictions of a matched normative Bayesian model. The matched value of σLikelihood² for each input gain was determined from the output variance of the network trained on the uniform context condition (Fig. 8C, gray curve). The matched values of σPrior² were then determined from the network outputs in the 60 Hz input condition (Fig. 8C, vertical gray bar). Details of these computations are given in the Materials and Methods, above. Figure 8C shows the network output variance as a function of training context and testing input gain. At lower input gains, the network and Bayesian models diverge sharply. This difference arises from the fact that in the normative Bayesian model the prior distribution is known, so there is no noise in estimating its mean (see Materials and Methods, above). Thus, when the input gain is low, σLikelihood² is high and the MAP estimate is dominated by the noise-free prior. In contrast, the network never achieves a noise-free prior because there is persistent and stochastic baseline input (see Materials and Methods). As a result, the network exhibits monotonically increasing output variance as the input gain is reduced. If, however, we reduce the level of baseline noise (from 5 Hz in the network simulation to 1 Hz), then the network model provides a very close approximation to the matched normative Bayesian model (Fig. 8D). Furthermore, the same Bayesian model provides a close match to the network bias across training context and input gain (Fig.
8E), even though the parameters of the Bayesian model were determined using only the network variance. These simulations show that the adaptive network model can provide a close approximation to an ideal Bayesian estimator.

We have shown that visually guided reaching exhibits a novel form of experience-dependent learning in which the statistics of recent movements affect future actions. This learning produces a variance–bias tradeoff that is qualitatively consistent with the process of Bayesian estimation. In particular, we show that repeated performance of movements with similar goal parameters (i.e., target locations) improves the precision of subsequent actions with these goals, resulting in a short-term version of “practice makes perfect.” However, this advantage comes at the cost of reduced accuracy for movements with dissimilar goals. Others have reported evidence for Bayesian processes in sensorimotor control, primarily reflected as changes in movement bias (Körding and Wolpert, 2004; Miyazaki et al., 2005; Körding et al., 2007) or learning rates (Huang and Shadmehr, 2009; Wei and Körding, 2010). Here we show that such Bayesian effects can be observed even in the absence of the feedback perturbations used in previous studies (Körding and Wolpert, 2004; Wei and Körding, 2010). We show that changes in both movement bias and variance are consistent with the presence of an adaptive prior. In particular, an iteratively adaptive Bayesian model can capture many of the key features of our experimental data. One limitation of the experimental design used here is that we cannot determine the stage at which these bias and variance effects occur, e.g., target selection, movement vector estimation, or both (Sober and Sabes, 2003). A recent report by Diedrichsen et al. (2010) shows a potentially related form of use-dependent learning (see also Huang et al., 2011). In one of their experiments, passive movements were made to one side or another of an elongated target.
This resulted in a bias toward the same side of the target during subsequent voluntary movements. Because their training movements were passive and the target was not precisely defined, it seems likely that the effect is movement-dependent, not target-dependent. To the extent that the same mechanism is at play in our study, our effect is also likely to be at least partly movement-dependent. With the adaptive Bayesian model, we have argued that these effects are the result of a trial-by-trial learning process. With this model, we were able to estimate the rate of learning and illustrate features of the bias and variance effects that likely result from the presence of trial-by-trial learning. We then showed that Hebbian learning in a simple network model is a plausible candidate for the mechanism underlying this adaptive estimation process, providing a quantitative match to the experience-dependent changes in movement variance and bias that we observed experimentally (Fig. 7A–D). It is important to note that the adaptive Bayesian model and the network model serve very different purposes in this paper. Therefore, they were fit differently to the data and have different numbers of free parameters. Thus, the goodness-of-fit of the two models cannot be directly compared, and we do not claim that one provides a better account of the data than the other. An important point of comparison, however, is the effective learning rates of the two models. The learning rates estimated from the adaptive Bayesian model are fast, consistent with the evolution of the reach bias across blocks in experiment 2. In contrast, the best-fit network model appears to have a slower effective learning rate (Fig. 7E). This slower rate is most likely due to the fact that the model does not include a sufficiently strong stabilizing force that causes learning to asymptote in the way empirical learning does (Fig. 4).
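The interplay between an adaptive prior and precision-weighted (MAP) combination can be illustrated with a toy simulation. This is only a sketch of the idea, not the fitted model: for a Gaussian likelihood and prior, the MAP estimate is a variance-weighted average, and letting the prior track recent targets trial by trial yields both the variance reduction and the bias toward the repeated target. The learning rate, noise variance, and prior-variance floor below are our illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 0.2                 # assumed trial-by-trial learning rate (illustrative)
var_likelihood = 9.0        # assumed sensory noise variance (deg^2)
prior_mean, prior_var = 0.0, 100.0   # initially broad prior over target angle

reaches = []
for trial in range(200):
    target = 0.0            # repeated-target condition
    obs = target + rng.normal(0.0, np.sqrt(var_likelihood))
    # MAP estimate: precision-weighted average of observation and prior
    w = prior_var / (prior_var + var_likelihood)
    reaches.append(w * obs + (1.0 - w) * prior_mean)
    # Update the prior toward recent experience (exponential forgetting),
    # letting it sharpen down to an assumed floor
    prior_mean = (1.0 - alpha) * prior_mean + alpha * obs
    prior_var = max((1.0 - alpha) * prior_var, 1.0)

late_var = np.var(reaches[100:])     # reach variance once the prior has sharpened

# A probe reach to a 10 deg target is now biased toward the repeated 0 deg target
w = prior_var / (prior_var + var_likelihood)
probe = w * 10.0 + (1.0 - w) * prior_mean
```

Once the prior has sharpened, `late_var` falls below the sensory noise variance (practice improves precision), while `probe` is pulled toward 0°: the variance–bias tradeoff described above.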
This may simply result from the choice of learning rule: Oja's rule guarantees stability for the square norm of the weights converging onto a cell, but only in the asymptotic limit. A detailed study of weight normalization in this model and its effects on trial-by-trial learning is left to future work. Finally, we have shown that the network model can implement a close approximation to a normative Bayesian estimator. Other network models have been proposed for how Bayesian priors can be incorporated into cortical networks (Pouget et al., 2003; Wu et al., 2003; Deneve and Pouget, 2004; Wu and Amari, 2005; Ma et al., 2006). In many of these models, priors are explicitly represented by a separate set of units whose inputs act as an additional source of information. The adaptive network approach offers two advantages. First, the prior naturally emerges within the network connections, removing the need for another set of units to encode this information. Second, Hebbian learning provides a simple mechanism by which these priors can be learned. A similar approach has been previously explored by Wu and colleagues (Wu et al., 2003; Wu and Amari, 2005). They showed that Hebbian learning in a very similar network model acts to smooth stochastic input signals over time, reducing output variance when the input is stationary at the cost of slow reaction to changing inputs (i.e., increased bias). This smoothing approximates a form of Bayesian estimation, i.e., a simple Kalman filter. We have extended these results, showing that a recurrent network with Hebbian learning approximates Bayesian estimation across a range of input statistics and on behaviorally relevant timescales. Furthermore, we showed that the network output provides a good quantitative match to psychophysical data.
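Oja's rule, mentioned above, augments the plain Hebbian update with a multiplicative decay that normalizes the weights; the square norm of the weights converging onto a unit is stabilized, but only asymptotically. A self-contained sketch for a single linear unit (all parameter values are our illustrative choices, not the network model's):

```python
import numpy as np

rng = np.random.default_rng(1)

eta = 0.01                      # learning rate (assumed)
w = rng.normal(size=2)          # weights converging onto one linear unit

# Zero-mean inputs whose largest variance lies along the (1, 1) direction
cov = np.array([[3.0, 2.0],
                [2.0, 3.0]])
for _ in range(5000):
    x = rng.multivariate_normal([0.0, 0.0], cov)
    y = w @ x                   # output of the linear unit
    # Oja's rule: Hebbian term (eta * y * x) minus a normalizing decay
    w += eta * y * (x - y * w)

# After training, ||w|| hovers near 1 and w aligns with the leading
# principal component of the input covariance, here (1, 1)/sqrt(2).
```

Because the normalization only takes hold asymptotically, a rule like this need not make learning asymptote on the short timescale of an experiment, consistent with the stabilizing-force caveat raised above.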
Overall, these results suggest that prior experience may be incorporated into sensorimotor planning via Hebbian learning and as a natural byproduct of the ongoing dynamics of the underlying neural circuits.

Supplemental material for this article is available at http://arxiv.org/abs/1106.2977. The supplement presents an analysis of the effects of recent inputs on the steady-state dynamics of the network model, providing additional insight into why the network approximates Bayesian estimation. This material has not been peer reviewed.

This work was supported by NIH Grant P50 MH077970 (Conte Center) and the Swartz Foundation. We thank Charles Biddle-Snead for help with the network models.

References
(2005) Reference frames for representing visual and tactile locations in parietal cortex. Nat Neurosci 8:941–949.
(1999) Parieto-frontal coding of reaching: an integrated framework. Exp Brain Res 129:325–346.
(2006) Modeling sensorimotor learning with linear dynamical systems. Neural Comput 18:760–793.
(2007) Calibration of visually guided reaching is driven by error-corrective learning and internal dynamics. J Neurophysiol 97:3057–3069.
(2004) Bayesian multisensory integration and cross-modal spatial links. J Physiol Paris 98:249–258.
(1999) Reading population codes: a neural implementation of ideal observers. Nat Neurosci 2:740–745.
(2001) Efficient computation and cue integration with noisy population codes. Nat Neurosci 4:826–831.
(2007) Optimal sensorimotor integration in recurrent cortical networks: a neural implementation of Kalman filters. J Neurosci 27:5744–5756.
(2010) Use-dependent and error-based learning of motor behaviors. J Neurosci 30:5159–5166.
(1986) Neuronal population coding of movement direction. Science 233:1416–1419.
(1988) Primate motor cortex and free arm movements to visual targets in three-dimensional space. II. Coding of the direction of movement by a neuronal population. J Neurosci 8:2928–2937.
(1950) Sample criteria for testing outlying observations. Ann Math Stat 21:27–58.
(1949) The organization of behavior: a neurophysiological theory (Wiley, New York).
(2009) Persistence of motor memories reflects statistics of the learning event. J Neurophysiol 102:931–940.
(2011) Rethinking motor learning and savings in adaptation paradigms: model-free memory for successful actions combines with internal models. Neuron 70:787–801.
(2008) Reach adaptation: what determines whether we learn an internal model of the tool or adapt the model of our arm? J Neurophysiol 100:1455–1464.
(2007) Robust cue integration: a Bayesian model and evidence from cue-conflict studies with stereoscopic and figure cues to slant. J Vis 7:5.1–5.24.
(2007) The dynamics of memory as a consequence of optimal adaptation to a changing body. Nat Neurosci 10:779–786.
(2008) Motion and disparity processing informs Bayesian 3D motion estimation. Proc Natl Acad Sci U S A 105:E117.
(2003) Optimal computation with attractor networks. J Physiol Paris 97:683–694.
(2008) Bayesian generic priors for causal learning. Psychol Rev 115:955–984.
(2005) Testing Bayesian models of human coincidence timing. J Neurophysiol 94:395–399.
(1982) A simplified neuron model as a principal component analyzer. J Math Biol 15:267–273.
(1998) Statistically efficient estimation using population coding. Neural Comput 10:373–401.
(2002) A computational perspective on the neural basis of multisensory spatial representations. Nat Rev Neurosci 3:741–747.
(2003) Inference and computation with population codes. Annu Rev Neurosci 26:381–410.
(2007) Bayesian inference explains perception of unity and ventriloquism aftereffect: identification of common sources of audiovisual stimuli. Neural Comput 19:3335–3355.
(2001) Learning to move amid uncertainty. J Neurophysiol 86:971–985.
(1994) Adaptive representation of dynamics during learning of a motor task. J Neurosci 14:3208–3224.
(2009) Statistics predict kinematics of hand movements during everyday activity. J Mot Behav 41:3–9.
(2006) Interacting adaptive processes with different timescales underlie short-term motor learning. PLoS Biol 4:e179.
(2003) Multisensory integration during motor planning. J Neurosci 23:6982–6992.
(2005) Flexible strategies for sensory integration during motor planning. Nat Neurosci 8:490–497.
(2000) Learning of action through adaptive combination of motor primitives. Nature 407:742–747.
(2010) Uncertainty of feedback and state estimation determines the speed of motor adaptation. Front Comput Neurosci 4:11.
(2002) Motion illusions as optimal percepts. Nat Neurosci 5:598–604.
(2005) Computing with continuous attractors: stability and online aspects. Neural Comput 17:2215–2239.
(2002) Population coding and decoding in a neural field: a computational study. Neural Comput 14:999–1026.
(2003) Sequential Bayesian decoding with a population of neurons. Neural Comput 15:993–1012.
Radio-frequency (RF) impairments, which are inherent in wireless communication systems, can severely limit the performance of multiple-input multiple-output (MIMO) systems. Although compensation schemes can mitigate some of these impairments, a certain amount of residual impairment always persists. In this paper, we consider a training-based point-to-point MIMO system with residual transmit RF impairments (RTRI) using spatial multiplexing transmission. Specifically, we derive a new linear channel estimator for the proposed model, and show that RTRI create an estimation error floor in the high signal-to-noise ratio (SNR) regime. Moreover, we derive closed-form expressions for the signal-to-interference-plus-noise ratio (SINR) distributions, along with analytical expressions for the ergodic achievable rates of zero-forcing, maximum ratio combining, and minimum mean-squared error receivers, respectively. In addition, we optimize the ergodic achievable rates with respect to the training sequence length and demonstrate that finite-dimensional systems with RTRI generally require more training at high SNRs than those with ideal hardware. Finally, we extend our analysis to large-scale MIMO configurations, and derive deterministic equivalents of the ergodic achievable rates. It is shown that, by deploying large receive antenna arrays, the extra training requirements due to RTRI can be eliminated. In fact, with a sufficiently large number of receive antennas, systems with RTRI may even need less training than systems with ideal hardware.
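The estimation error floor can be illustrated with a toy Monte Carlo experiment. This is a simplified single-antenna construction of ours with a least-squares pilot estimate, not the estimator derived in the paper: the residual impairment is modeled, as is common, as additive Gaussian distortion whose power is a fixed fraction κ² of the signal power, so increasing the SNR cannot shrink the distortion term.

```python
import numpy as np

rng = np.random.default_rng(2)

def ls_mse(snr_db, kappa, trials=20000):
    """MSE of a least-squares channel estimate from one unit pilot symbol.

    kappa is the assumed residual-impairment level (kappa = 0: ideal hardware).
    The distortion is modeled as additive Gaussian noise with power kappa^2
    relative to the signal power -- a simplification of the RTRI model.
    """
    snr = 10.0 ** (snr_db / 10.0)
    cn = lambda scale: (rng.normal(size=trials) + 1j * rng.normal(size=trials)) * scale
    h = cn(1.0 / np.sqrt(2.0))                 # Rayleigh-fading channel
    dist = cn(kappa / np.sqrt(2.0))            # residual transmit distortion
    noise = cn(1.0 / np.sqrt(2.0 * snr))       # thermal noise
    y = h * (1.0 + dist) + noise               # received pilot (pilot symbol = 1)
    h_hat = y                                  # LS estimate for a unit pilot
    return float(np.mean(np.abs(h_hat - h) ** 2))

ideal_40dB = ls_mse(40, kappa=0.0)   # keeps falling as SNR grows
rtri_40dB = ls_mse(40, kappa=0.1)    # saturates near kappa^2
```

With ideal hardware the MSE tracks 1/SNR, while with κ = 0.1 it saturates near κ² = 0.01 regardless of SNR, which is the error-floor behavior the paper establishes analytically for the MIMO case.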
Is America's future prosperity crumbling? OHIO/TENNESSEE/GEORGIA -- Every working day, once in the morning, and again in the evening, Sarah Blazak drives at a snail's pace in heavy traffic across one of the most dangerous bridges in America. It's the Brent Spence bridge, and it spans the Ohio River, linking the state of Ohio to the north with Kentucky to the south. And according to US safety officials, less than 50 years after it was built, it is now "functionally obsolete". It carries more than twice as much traffic as it was designed for, it has no emergency lanes for vehicles that have broken down, and its traffic lanes are too narrow. Accidents are frequent - I saw a car that had smashed into the side of the bridge when I drove across just a few days ago, causing chaos as police struggled to remove it - and there have been at least two fatalities in the past two years. "Every day when I get across, I breathe a sigh of relief," Sarah told me. "I'm closer to where I need to be, and I'm safer." The Brent Spence bridge is just one example of a problem that is of increasing concern to the US - its crumbling infrastructure. Roads, bridges, ports and airports -- many are in desperate need of repair or replacement - and the resulting delays are costing the nation billions of dollars a year. So why don't they build a new bridge? Simple answer: because they can't agree on who should pay for it. The present one was built mainly with funds from the Federal government in Washington - but there's no cash available from that source any more, and neither Kentucky nor Ohio much like the idea of picking up the tab themselves. None of this would matter very much to people outside the immediate region, perhaps, if it wasn't a typical example of a much wider problem. The US has long been the world's dominant economy, a global leader in manufacturing and technological innovation - but the question is for how much longer?
Consider this: each year, the US turns out something like 100,000 newly qualified engineers. They're the ones who build the roads and the bridges. India and China, on the other hand, each produce a million new engineers, which means they have a lot more people available to build that all-important infrastructure without which no developed economy can prosper. If you want to take a gloomy view of America's economic future, you could point to its continuing sluggish economy, an education system that isn't producing anything like enough mathematicians and scientists, and a corporate environment in which cash for research and development may soon start drying up as CEOs worry whether steady growth will ever return. On the other hand, if you come to Atlanta, Georgia, where I spent the day yesterday, you'll find plenty of people at the Georgia Institute of Technology who are full of hope for the future. Lots of new ideas are bubbling away, they say - new materials to replace steel, new ways of producing cleaner energy, even new ways to produce robots with a sense of music - and yes, there's still money to fund the research. Next Tuesday, we'll be broadcasting a special programme to explore some of these themes, with the help of a panel of experts at the Council on Foreign Relations in Washington DC. It'll include my report from the Brent Spence bridge, and later in the week, we hope to broadcast my report from Georgia Tech. Meanwhile, if you're on Facebook, do take a look at The World Tonight Facebook page, where you can see some wonderful photos from our travels in Illinois, Ohio, Tennessee and Georgia, taken by producer and ace photographer Dan Isaacs.
Serious injuries, illnesses, and deaths happen every day. In many cases, these are unfortunate events that naturally occur in our imperfect world. However, there are times when a personal injury or wrongful death occurs as the result of another party’s negligence or misconduct. For example, a drunk driver may cause a collision that kills another person, or a business may fail to take reasonable precautions when manufacturing a product. In these situations, the party at fault has a civil responsibility to compensate the injured party for their damages. - An Indiana jury awarded $35 million to a man who was rendered a quadriplegic in a car crash caused by a drunk driver. - A Georgia jury awarded $1 billion to a young woman who was sexually assaulted by an armed security guard employed by her apartment complex. - A San Francisco jury awarded $289 million, which was later reduced by a judge to $78 million, to a man whose cancer was caused, according to the court’s verdict, by Roundup weedkiller. What factors convince a jury to rule in favor of an injured person and to award large amounts of compensatory and punitive damages? This is a hard question to answer definitively, but there are several factors that play a role. An experienced personal injury lawyer knows how to leverage these factors and use them to construct a compelling combination of factual evidence and emotional arguments for a jury trial. One factor is that jurors seem to be less offended today by plaintiff requests for multi-million dollar verdicts and more willing to award them. Court-watchers speculate that American citizens have been affected by the barrage of online stories highlighting exorbitant professional athlete contracts, lottery jackpots, and the huge gap that has developed between CEO compensation and worker wages. 
As a result, jurors tend to believe that a corporate defendant can easily afford a large payout and must be made to feel the pain of their mistakes through punitive damages, and that the injured party deserves the compensation. Another factor is more widespread feelings of anger and distrust against “the elite.” Part of this stems from the growing division between the highest-paid and the lowest-paid workers in America. This has caused more anger and distrust toward the leadership of large corporations, who are viewed as getting rich at the expense of their workers. Also, with so much “fake news” going around on social media, people are becoming more distrustful in general. In some cases, jurors even distrust the injured person’s attorney and increase their award to the injured person just to “make sure they are taken care of” after the plaintiff’s attorney takes their share of an award. Short attention spans are a third variable that personal injury lawyers must increasingly take into account when developing their trial arguments. Younger generations do not have the patience to sit through days of oral testimony by technical experts. Attorneys need to use more graphics, videos, and even virtual reality recreations of an accident scene in order to make testimony more compelling and impactful to jurors. Attorneys must also be sensitive to the way the injured person is portrayed and the way their story is told. Juries who are emotionally touched by a well-told story can be swayed toward one side or the other. For example, when jurors see that the injured party is part of a likeable, hard-working family that they can relate to, they are more likely to favor the injured party. The injured person’s attorney may also encourage jurors to look at the injured person and think, “What if it were me?” When such feelings are strong enough, they can cause jurors to override arguments that the defendant acted according to reasonable standards of care. 
If you have been injured through another person’s or corporation’s negligence or wrongdoing, you could have grounds for a personal injury lawsuit and be eligible to receive compensation for your injuries. The first step is to discuss your case with an experienced Barrington personal injury lawyer. The attorneys of Drost, Gilbert, Andrew & Apicella, LLC have decades of experience obtaining due compensation for our clients in personal injury and wrongful death cases. Contact us at 847-934-6000 to schedule a free consultation.

Although the United States is home to just 5 percent of the world’s population, it is the location of 31 percent of all mass shootings. In fact, statistics indicate that at least one event occurs each month. Depending on the details of the situation, victims and surviving families may be owed compensation after a mass shooting. Learn more, and discover how an experienced attorney can assist, with help from the following information.

Mass shootings can occur in any space, including private homes, but the majority (an estimated 73 percent) happen at business establishments. Schools, including colleges, come in a close second. When the staff or management of these places act negligently, perhaps by not properly training their employees to handle a mass shooting or by not keeping fire exits free and clear so that patrons have a safe way to escape, they may be held liable. Sadly, when victims attempt to pursue compensation on their own, they are at a massive disadvantage – and not just because they are trying to cope with the grief of a loss or the upending of their life after an injury. Businesses often have teams of lawyers to represent them, and most schools are agents of the federal government, which dramatically complicates the process for pursuing compensation. Thankfully, victims do not have to face the process alone.
Victims may not be required to have an attorney while pursuing compensation after a mass shooting, but the aid of one is highly encouraged. Able to protect your rights and best interests in a mass shooting lawsuit, an attorney can negotiate a fairer settlement for you and your loved ones while also increasing your odds of a positive outcome. Another major benefit for victims who hire an attorney is that the attorney can handle all the legal aspects of the case. This can give the family more time to heal and grieve the losses and injuries they have experienced. If you or someone you love has been a victim of a mass shooting, do not delay. Contact Drost, Gilbert, Andrew & Apicella, LLC for assistance. Dedicated and experienced, our Rolling Meadows personal injury lawyers can examine your case, explain your options, and aggressively pursue the most favorable outcome possible. Start by scheduling a personalized consultation. Call our offices at 847-934-6000 today.

While car crashes can often lead to injuries and property damage, victims typically do survive. However, statistics now suggest that auto accident deaths are on the rise. Worse yet, the rate of accident fatality has reached a nine-year high. If someone you love has been killed in a crash, the following information can help you understand what rights you may have, including the right to pursue full and fair compensation. You will also learn how an experienced attorney can help. Data compiled by the National Highway Traffic Safety Administration (NHTSA) shows that auto accident deaths rose by 5.6 percent over the past year. With 37,461 people killed during 2016, that places traffic fatalities at a nine-year high (in 2007, there were 41,259 killed). NHTSA says there are many contributing factors, including distracted driving, which has been a continuous problem over the last several years.
However, it appears that pedestrian deaths, which rose by 9 percent, and drunk driving deaths, which rose by 1.7 percent, were also contributing factors. If someone you loved was killed in an automobile crash, pedestrian accident, drunk driving crash, or some other type of traffic accident, you may be entitled to compensation. Sadly, the claims process to obtain that compensation is riddled with obstacles. When you are trying to recover from the loss of a family member, that is the last thing you need. To make matters even worse, insurance companies often delay or reduce payouts to try and get victims to settle for less. Some will even attempt to shift as much of the blame as possible over to the victim. If successful enough in doing this, they may even be able to outright deny a valid claim, leaving the family of the victim responsible for any final costs and expenses. Do not let this happen to you! Instead, employ the assistance of an experienced attorney. At Drost, Gilbert, Andrew & Apicella, LLC, we aggressively protect the rights of victims, including their right to pursue full and fair compensation for any losses they may have experienced. Committed to your best interest, our Rolling Meadows wrongful death lawyers will stand by your side, every step of the way, and pursue the most favorable outcome possible. Get started by scheduling a personalized consultation. Call 847-934-6000 today.

After Headlines Name Medical Error as the Third Leading Cause of Death, Hospitals Vow to Change – Honest Commitment or Just Empty Promises?

Hospitals used to be the place that patients went to be treated. Now it is the place that more than 250,000 people per year never leave. These patients – the ones that never return home – are not victims of circumstance, taken from their families by an injury or illness too severe to treat. Instead, they are the victims of preventable medical error. They die by the very hands that are meant to save them.
Prevalence of these errors has increased to the point that they are now the third leading cause of death in America. Headlines and news outlets have announced this from the rooftops and all over the internet. In response, hospitals have vowed to change, but are they making a real, honest commitment, or are they simply feeding concerned citizens a bunch of empty promises? Back in 1999, the Institute of Medicine published the report, To Err is Human. This ground-breaking and then-controversial study determined that as many as 44,000 patients died each year from medical mistakes. Many hospitals and physicians refuted its accuracy, but they also promised to do better. They reportedly implemented new systems, technology, and control mechanisms to improve patient outcomes. Yet, new evidence suggests that all of their work and efforts (if there were, in fact, any made) have failed miserably. More patients are dying today from their mistakes than they were just a little more than a decade ago, and now we have been given more promises that things will change. Even most children know that one should own up to their mistakes and apologize when they have done wrong. Unfortunately, hospitals have shied away from this form of human decency. Out of fear of a medical malpractice lawsuit, they have taught physicians to avoid accepting any blame. Some have even attempted to cover up any mistakes that have been made, leaving the families of patients confused, heartbroken, and feeling as though something was “off” about the entire situation. These victims are right, of course, but no one would give them the honest answers that they deserved. Studies have indicated that patients are more likely to bring a lawsuit if they feel as though the hospital or the physician are being dishonest or deceptive. Furthermore, many physicians have claimed that they feel uncomfortable and anxious when they are prompted to hide a mistake. 
This, in turn, has caused hospitals to reconsider how they handle medical errors. Some have even moved to a more transparent stance, being honest and open when a mistake is made. Will more hospitals take that leap? Only time will tell for certain. Patients who are injured or killed because of a medical error have the right to seek compensation for their losses. Unfortunately, the legal process for doing so is highly complex and is full of potential pitfalls. For this reason, victims and their families should always seek assistance from an experienced medical malpractice attorney when filing a malpractice claim. At Drost, Gilbert, Andrew & Apicella, LLC, we have the skills and experience needed to effectively represent you and your loved ones in a medical malpractice claim. Dedicated to your best interests, and in helping you achieve justice, we will aggressively fight to get you the compensation you deserve. Schedule an initial consultation with our Crystal Lake medical malpractice attorneys today to learn more. Call us at 847-934-6000. In some cases, the negligence or wrongdoing of another person may result in the death of a victim before the victim has a chance to pursue legal action against those individuals responsible for the death. For surviving spouses or children, recovery is still possible through a wrongful death action. These actions can be very important in helping secure the financial stability of survivors. Wrongful death lawsuits allow surviving spouses and children of individuals who die as a result of the negligence or wrongdoing of another person to recover monetary damages from the person responsible for the death. A wrongful death claim is usually brought by the representative of the decedent’s estate on behalf of the survivors. The survivors are called the “real parties in interest.” For the claim to be successful, the plaintiffs must demonstrate the death would not have occurred but for the actions of the defendant. 
Wrongful death actions allow for two types of damage awards: economic and non-economic. Economic damages are tangible or the actual financial costs of the decedent’s death. They may include costs like lost expected future earnings or medical and funeral expenses. Non-economic damages are for items like mental anguish or pain and suffering. While these damages are often more difficult to determine than economic damages, they may result in much greater awards. One other form of damages that you may hear about is punitive damages. This type of damage is intended to punish the defendant for exceptionally bad conduct. However, punitive damages are not available to survivors in wrongful death actions in Illinois. Under Illinois law, a jury may award damages that they deem as being fair and just “compensation with reference to the pecuniary injuries resulting from death, including damages for grief, sorrow, and mental suffering.” Pecuniary damages are economic, like a decedent’s wages or the costs of the funeral. “Grief, sorrow, and mental suffering” refer to non-economic damages. Another important issue to be aware of is the statute of limitations. In most cases, the wrongful death action must be filed within two years after the death of the decedent. However, an action against a defendant arising from a crime committed by a defendant in whose name an escrow account was established under the Criminal Victims’ Escrow Account Act must be filed within two years after the establishment of the account. If negligence is the cause of action for the decedent’s death, contributory negligence must be considered. While contributory negligence is not a defense for the defendant, if the decedent’s death was caused in whole or in part by the decedent’s actions, the damage award is reduced by the percentage of fault assigned to the decedent. Therefore, the recovery amount may be reduced in some cases.
If you would like more information about the possible methods of recovery for injuries you or a loved one have suffered, speak with an experienced Illinois personal injury law attorney today. Our firm proudly serves the communities of the northwest suburbs, including areas such as Crystal Lake, Buffalo Grove, Arlington Heights, Des Plaines, and Deer Park.
Steps on how to preserve your summer bounty for the lean days of winter. I will not lie, when I first started canning food, I was terrified. What if I accidentally poisoned someone? What if the jars overheated and exploded in the water bath? Luckily, as I continued my canning research I found a few easy tips to ensure the safety of the process, and to make sure that no one winds up dead from eating my canned food (always a plus). I have been canning successfully for several years now, and below is an agglomeration of all that knowledge. Heed the warnings concerning the acid content of the food, and safely follow the procedures. A lot of the equipment you are working with will be extremely hot. A lot of my initial information came from Canning for a New Generation. Large Pot: the pot will need to fit your canning jars comfortably. When your jars are submerged there needs to be a couple of inches of clearance between the jars and the sides of the pot. Jar Basket: this is not strictly needed for canning, but it will make the process a lot easier. It is a metal rack that you can load your jars into in order to lower and raise them from the boiling water. Wide Funnel: this will help you pour your concoctions into your jars without dirtying the jar rims. Dirty jar rims can lead to mold growing on the outside of the sealed jar lid, making it hard to open the jars without also contaminating the sterile insides. Jar Lifter: rubber-coated tongs that allow you to securely lift hot jars out of boiling water. Lid Lifter: a wand with a magnetic end that allows you to lift the sterilized jar lids out of the water. Ladle: you need a heat-resistant ladle to transfer the hot liquid fillings into the jars. Chopstick: you can use a sterilized chopstick to move the food around the inside of your jars or to poke out any air bubbles. Fill your pot with water, insert the jar basket, and bring the water to a rolling boil.
While you are waiting for the water to boil, wash out your jars with warm soapy water. Prepare the food you intend to can. Make sure your food has a high enough acidity content to make it safe for canning. If the food has 5% acidity or greater, bacteria will not grow in it. Most pickled foods require vinegar of at least 5% acidity, and a lot of fruits naturally have a high level of acidity in them. When in doubt, add in some lemon juice, and that will aid in upping the acid content. Sterilize your jar lids and tools. Place the jar lids in a deep glass (heat-resistant) bowl along with the magnetic end of the lid lifter, the ladle, the chopstick, and the funnel. Now you have a choice of ladling the boiling water directly out of the large pot and into the lid bowl in order to sterilize them, or you can boil water separately for the lids. I find it easier to boil the water separately in a kettle, and then pour it right into the lid bowl. That way I do not have to worry about the water getting low in my water bath. Place the empty jars in the boiling pot for a few minutes to sterilize them before filling them with food. Remove the jars from the water using the jar lifter. They should be hot enough to dry within a few seconds of their removal from the pot. I lay a thick, clean towel down on the countertop to cushion the jars from the hard surface. Fill the jars with your preserves. Use the funnel to cleanly fill your jars with the food you are canning. Leave at least a quarter of an inch or a centimeter of space at the top of your jars. This will help with vacuum sealing the lids. For pickles and whole foods, you can arrange them in the jars with the chopstick or clean fingers, before carefully pouring the still-boiling pickling solution over them. Make sure that the pickling liquid fully covers the food. For jams and jellies, you can use the sterilized ladle to fill the jars while the jams/jellies are still boiling hot. Cover your jars with the lids.
Use the lid lifter to place the lid seals onto the top of each jar. Use the lid lifter to place the lid rims onto each jar. Lightly tighten the lids. The lids should be loose enough to allow for the air to escape from underneath them, but tight enough that they will not fall off in the water bath. Sink your jars into the boiling water for 8-10 minutes. Keep your jars upright, and a couple of inches away from the sides of the pot. Make sure that there are a few inches of water above them to further pressurize them. Use the jar lifter to carefully remove the jars from the water bath and allow them to cool. As the jars cool you should hear the lid seals popping into place. After a few hours of cooling, revisit your jars and check to make sure that each of your lids has sealed into place. You should be able to tell if they have sealed or not by pressing down on the middle of each lid. If the lid pops back up, it has not been sealed properly and the food will spoil. If you have any jars that have not sealed properly you can store the food safely in the fridge for up to a month. Tighten the lids before storing.
(CNN) -- Justice Clarence Thomas issued a temporary stay in the death penalty case of Ronald Smith, an Alabama death row inmate whose execution was scheduled for later Thursday evening. The order is the latest of a flurry of unusual filings issued Thursday night by the Court that reflect its divisions on issues related to the death penalty. At 8:37 ET Thursday night, the Court issued an order saying that it would allow Smith's execution to go forward. The order noted that four justices, Ruth Bader Ginsburg, Stephen Breyer, Elena Kagan and Sonia Sotomayor, had voted to grant a stay of execution. But lawyers for Smith immediately filed a motion for reconsideration noting the 4-4 split and asking the justices to take another look at the case. It is that motion that prompted Thomas to grant a temporary stay. The Court could rule on the motion at any time. The Supreme Court normally provides little to no information about its reasoning behind emergency motions. Thursday's order raised questions concerning the fact that last month Chief Justice John Roberts agreed to provide a "courtesy vote" in a different case when the four liberals had voted to grant relief. Smith was convicted in Alabama of the robbery and murder of Casey Wilson, a convenience store clerk. Lawyers for Smith argue that although the jury rendered a verdict of life without parole, the trial court overrode the jury's verdict and sentenced Smith to death. Smith argued in part that he should be given life without parole because Alabama's sentencing scheme is similar to that of Florida's, which the Court struck down in an opinion called Hurst v. Florida. Lawyers for Alabama argue that the case should be allowed to proceed and stress that Hurst v. Florida has no retroactive application to Smith.
Italian Wars, 1494–1559, series of regional wars brought on by the efforts of the great European powers to control the small independent states of Italy. Renaissance Italy was split into numerous rival states, most of which sought foreign alliances to increase their individual power. It thus became prey to the national states that had begun to emerge in Europe. Foremost among those were France and Spain, whose prolonged struggle for supremacy in Italy was to curtail Italian liberties for more than three centuries. The wars began when, in 1494, Charles VIII of France invaded Italy and seized (1495) Naples without effort, only to be forced to retreat by a coalition of Spain, the Holy Roman emperor, the pope, Venice, and Milan. His successor, Louis XII, occupied (1499) Milan and Genoa. Louis gained his next objective, Naples, by agreeing to its conquest and partition with Ferdinand V of Spain and by securing the consent of Pope Alexander VI. Disagreement over division of the spoils between the Spanish and the French, however, flared into open warfare in 1502. Louis XII was forced to consent to the Treaties of Blois (1504–5), keeping Milan and Genoa but pledging Naples to Spain. Trouble began again when Pope Julius II formed (1508) an alliance against Venice with France, Spain, and Holy Roman Emperor Maximilian I (see Cambrai, League of). But shortly after the French victory over the Venetians at Agnadello (1509), Julius made peace with Venice and began to form the Holy League (1510) in order to expel the French barbarians from Italy. The French held their own until the Swiss stormed Milan (1512)—which they nominally restored to the Sforzas—routed the French at Novara (1513), and controlled Lombardy until they were defeated in turn by Louis's successor, Francis I, at Marignano (1515). By the peace of Noyon (1516), Naples remained in Spanish hands and Milan was returned to France.
The rivalry between Francis I and Charles V, king of Spain and (after 1519) Holy Roman emperor, reopened warfare in 1521, and the French were badly defeated in the Battle of Pavia (1525), the most important in the long wars. Francis was forced to sign the Treaty of Madrid (1526), by which he renounced his Italian claims and ceded Burgundy. This he repudiated, as soon as he was liberated, by forming the League of Cognac with Pope Clement VII, Henry VIII of England, Venice, and Florence. To punish the pope, Charles V sent Charles de Bourbon against Rome, which was sacked for a full week (May, 1527). The French, after an early success at Genoa, were eventually forced to abandon their siege of Naples and retreat. The war ended (1529) with the Treaty of Cambrai (see Cambrai, Treaty of) and the renunciation of Francis's claims in Italy. France's two subsequent wars (1542–44 and 1556–57) ended in failure. Francis died in 1547, having renounced Naples (for the third time) in the Treaty of Crépy. Complete Spanish supremacy in Italy was obtained by the Treaty of Cateau-Cambrésis (1559), which gave the Two Sicilies and Milan to Philip II. The wars, though ruinous to Italy, had helped to spread the Italian Renaissance in Western Europe. From the military viewpoint, they signified the passing of chivalry, which found its last great representative in the seigneur de Bayard. The use of Swiss and German mercenaries was characteristic of the wars, and artillery passed its first major test. See F. L. Taylor, Art of War in Italy, 1494 to 1529 (1921).
There are many actions you can take to obtain a better golf score. It depends on what areas of your game you want to improve and setting milestones for when to achieve such goals. As you review ways to improve your game, create a plan to start getting results. In a nutshell, you just have to practice and work on building golf skills. Think about what you do well and how it can help on the course while working on weaknesses. Here are a few highlights of what you can do to lower your golf score. Assess your play ability. First, figure out which areas of your play need improvement. Then, determine what you need to do to get lower scores. This may include a combination of actions, but you can have an experienced player help assess your play and make suggestions. Which clubs give you a hard time? Consider ways to improve your play by practicing with clubs you struggle with. Find golf drills to help work with the troublesome club and get on the green to practice. Do you have any physical issues, such as lack of concentration or fatigue? Do you notice a lack of energy on the course? Think about what you can do before getting on the green to stimulate your mind so your strategies can be played more efficiently. Practice putting and chipping at the driving range. Review tips for reading greens and playing fast greens. Work on your short game. This element is important since it can really help with improving scores. Knowing how to read the grass helps in understanding how to play the ball. Get yourself in shape for golf with regular exercise. Work on activities that encourage flexibility, strength, and balance. Being in good physical shape is important for reducing risk of injury. This encourages taking time to warm up before each game while helping you be on point with different plays requiring more movement. Work on distance and swing control while being in control of your shots.
Distance and speed control are other important aspects of score improvement, as they force players to work on being more accurate with their strokes. Take golf lessons or work with an instructor. Lessons provide one-on-one guidance to personally improve your play skills.
You have small children, and so you may think you will never travel again. To a certain extent, this is true. Going on vacation with kids is difficult at best, and a disaster at worst. Your success depends largely on your choice of destinations. Try one of these 10 best picks for a great holiday for the whole family. Pack up your flip-flops and head to one of the most family-friendly casual destinations in the U.S. San Diego features miles of white, sandy beaches, with lots of fun for the kids. Visit the San Diego Zoo or Sea World, and follow it up with a fun romp in the waves at the Silver Strand.
Club records have been maintained continuously since 1968, with some records going back to 1964. Age group records are inclusive, in that a swimmer could break a record not only for his/her current age group, but also for higher age divisions. Prior to the 1979/1980 season, swim times were only measured and recorded to one decimal place. Prior to 2012, three times had to be recorded before a record could be acknowledged, with the middle time being the record time. Since 2011/12 and the introduction of electronic stopwatches, only two times have to be recorded, with the greater time being the record time. The Club Age Championships were re-introduced in the 2012/13 season and records have been maintained since then. Club Age Championship records are exclusive: a swimmer can only break a record for his/her current age division.
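The two timing rules described above can be sketched as a small helper function (illustrative only; the function name and sample times are hypothetical, not club data):

```python
def official_time(times):
    """Return the official time from a list of stopwatch readings.

    Pre-2011/12 rule: three times recorded, the middle value stands.
    Current rule: two times recorded, the greater (slower) stands.
    """
    if len(times) == 3:
        return sorted(times)[1]  # middle of three
    if len(times) == 2:
        return max(times)        # greater of two
    raise ValueError("expected two or three recorded times")

# Hypothetical readings:
official_time([31.2, 30.9, 31.4])  # middle of three -> 31.2
official_time([30.95, 31.02])      # greater of two -> 31.02
```

Taking the middle of three (or the slower of two) readings guards against a single fast misclick producing an unearned record.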
Meg initially has 3 hours of pop music and 2 hours of classical music in her collection. Every month onwards, the hours of pop music in her collection is 5% more than what she had the previous month. Her classical music does not change. Which function shows the total hours of music she will have in her collection after x months? The answer is D, since it is the pop music that increases by 5% each month, not the classical music. The classical music is a constant 2 hours added to the growing pop music, giving f(x) = 3(1.05)^x + 2.
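The reasoning above corresponds to the function f(x) = 3(1.05)^x + 2, which a quick sketch can check numerically (the function name is illustrative):

```python
def total_hours(x):
    """Total hours of music after x months: 3 hours of pop growing
    5% per month, plus a constant 2 hours of classical."""
    return 3 * 1.05 ** x + 2

total_hours(0)  # -> 5.0 (the starting 3 + 2 hours)
total_hours(1)  # about 5.15 hours (3 * 1.05 + 2)
```

Only the 3-hour pop term is multiplied by the growth factor; the 2-hour classical term stays outside the exponential.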
Fertilizing the Lawn by a Professional Service Company
There are valid reasons why lawn fertilization is best entrusted to a professional service company, even though fertilizing your lawn can be done on your own: you may not apply the right type of fertilizer for the grass type, you may end up applying the fertilizer unevenly, resulting in patchy green spots, and you run the risk of applying too much fertilizer, which can lead to thatch build-up, thus damaging your lawn more than helping it. The professional service of a known lawn and landscaping company knows the kind of fertilizer that your lawn needs, how to apply it properly, and when to apply it based on soil, climate, and regional conditions. Before a fertilization process is started, the professional service company first performs an analysis of your lawn, and its specialist will inspect these factors: type of grass and soil, turf density, thatch level, shade and sun exposure, presence of insect and disease problems, and presence of grassy and broadleaf weeds. The correct fertilization approach to a lawn involves three factors: proper ingredients, which are basically nitrogen, phosphorus, and potassium; proper amounts of fertilizer; and proper scheduling of applying the fertilizer. A typical fertilizer mix formulation is 20-5-10 per bag, which means 20% nitrogen, 5% phosphate (phosphorus), and 10% potassium; the rest of the mix contains filler material that helps ensure an even application to the lawn when fertilizing is done on the right schedule. The trend in fertilizer formulation is adopting slow-release products because with slow release, the lawn is fertilized every 6 to 8 weeks, instead of every four weeks, and the outcome is that it provides your lawn the needed nutrients as your grass grows throughout the seasons.
Proper fertilization scheduling must be observed to achieve a healthy, green lawn. Scheduling takes into account the nitrogen release rate: the speed at which nitrogen is released determines how quickly the grass will green up, how much it will grow, and how long the greening will last. Proper scheduling must be accompanied by using the proper amount of ingredients. With slow-release fertilizers, just enough quick-release nitrogen is delivered to produce fast greening, and the remaining balance of nitrogen is released gradually by microbial action in the soil over a period of 8 weeks; therefore, there is a constant supply of nitrogen for the grass.
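The grade percentages translate directly into nutrient weight per bag. As a rough illustrative sketch (the 50 lb bag weight, the default grade, and the function name are hypothetical examples, not a product specification):

```python
def nutrient_pounds(bag_weight_lb, grade=(20, 5, 10)):
    """Pounds of nitrogen, phosphate, and potassium in a bag,
    given its N-P-K grade as weight percentages."""
    return tuple(bag_weight_lb * pct / 100 for pct in grade)

# A hypothetical 50 lb bag of 20-5-10 fertilizer:
nutrient_pounds(50)  # -> (10.0, 2.5, 5.0) lb of N, P, and K
```

The remaining weight of the bag (35 lb in this example) is the filler material that helps spread the nutrients evenly.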
Administrative assistants are important to the success of companies in a variety of industries. In order to be a successful administrative assistant, individuals must be proficient in Microsoft Office and marketing, and also have an understanding of administrative activities and management. Many different undergraduate majors produce thriving administrative assistants. Common degrees for administrative assistants include psychology, business administration, communication, English language and literature, and marketing. Administrative assistant jobs are an integral part of many companies. They are more likely to be employed in the higher education and hospital & healthcare industries than in financial services. Other industries that employ administrative assistants are staffing and recruiting and non-profit organization management. So, in which cities do recent graduates work? Administrative assistants often find employment opportunities in cities such as New York, Los Angeles, and Chicago, where they perform routine clerical and organizational tasks.
The Lockheed Corporation was an American aerospace company. Lockheed was founded in 1926 and later merged with Martin Marietta to form Lockheed Martin in 1995. The founder, Allan Lockheed, and his brother Malcolm Loughead had operated an earlier, similarly named aircraft company, the Loughead Aircraft Manufacturing Company, which was operational from 1912 to 1920. The company built and operated aircraft for paying passengers on sightseeing tours in California and had developed a prototype for the civil market, but folded in 1920 due to the flood of surplus aircraft deflating the market after World War I. Allan went into the real estate market while Malcolm had meanwhile formed a successful company marketing brake systems for automobiles. In 1926, Allan Lockheed, John Northrop, Kenneth Kay and Fred Keeler secured funding to form the Lockheed Aircraft Company in Hollywood (the spelling was changed phonetically to prevent mispronunciation). This new company utilized some of the same technology originally developed for the Model S-1 to design the Vega Model. In March 1928, the company relocated to Burbank, California, and by year's end reported sales exceeding one million dollars. From 1926 to 1928 the company produced over 80 aircraft and employed more than 300 workers who by April 1929 were building five aircraft per week. In July 1929, majority shareholder Fred Keeler sold 87% of the Lockheed Aircraft Company to Detroit Aircraft Corporation. In August 1929, Allan Loughead resigned. The Great Depression ruined the aircraft market, and Detroit Aircraft went bankrupt. A group of investors headed by brothers Robert and Courtland Gross, and Walter Varney, bought the company out of receivership in 1932. The syndicate bought the company for a mere $40,000 ($660,000 in 2011).
Ironically, Allan Loughead himself had planned to bid for his own company, but had raised only $50,000 ($824,000), which he felt was too small a sum for a serious bid. In 1934, Robert E. Gross was named chairman of the new company, the Lockheed Aircraft Corporation, which was headquartered at what is now the airport in Burbank, California. His brother Courtlandt S. Gross was a co-founder and executive, succeeding Robert as chairman following his death in 1961. The company was renamed the Lockheed Corporation in 1977. The first successful design that was built in any number (141 aircraft) was the Vega, first built in 1927, best known for its several first- and record-setting flights by, among others, Amelia Earhart, Wiley Post, and George Hubert Wilkins. In the 1930s, Lockheed spent $139,400 ($2.29 million) to develop the Model 10 Electra, a small twin-engined transport. The company sold 40 in the first year of production. Amelia Earhart and her navigator, Fred Noonan, flew it in their failed attempt to circumnavigate the world in 1937. Subsequent designs, the Lockheed Model 12 Electra Junior and the Lockheed Model 14 Super Electra, expanded their market. The Lockheed Model 14 formed the basis for the Hudson bomber, which was supplied to both the British Royal Air Force and the United States military before and during World War II. Its primary role was submarine hunting. The Model 14 Super Electra was sold abroad, and more than 100 were license-built in Japan for use by the Imperial Japanese Army. At the beginning of World War II, Lockheed, under the guidance of Clarence (Kelly) Johnson, who is considered one of the best-known American aircraft designers, answered a specification for an interceptor by submitting the P-38 Lightning fighter aircraft, a twin-engined, twin-boom design. The P-38 was the only American fighter aircraft in production throughout American involvement in the war, from Pearl Harbor to Victory over Japan Day.
It filled ground-attack, air-to-air, and even strategic bombing roles in all theaters of the war in which the United States operated. The P-38 was responsible for shooting down more Japanese aircraft than any other U.S. Army Air Forces type during the war; it is particularly famous for being the aircraft type that shot down Japanese Admiral Isoroku Yamamoto's airplane. The Lockheed Vega factory was located next to Burbank's Union Airport which it had purchased in 1940. During the war, the entire area was camouflaged to fool enemy aerial reconnaissance. The factory was hidden beneath a huge burlap tarpaulin painted to depict a peaceful semi-rural neighborhood, replete with rubber automobiles. Hundreds of fake trees, shrubs, buildings, and even fire hydrants were positioned to give a three-dimensional appearance. The trees and shrubs were created from chicken wire treated with an adhesive and covered with feathers to provide a leafy texture. Lockheed ranked tenth among United States corporations in the value of wartime production contracts. All told, Lockheed and its subsidiary Vega produced 19,278 aircraft during World War II, representing six percent of war production, including 2,600 Venturas, 2,750 Boeing B-17 Flying Fortress bombers (built under license from Boeing), 2,900 Hudson bombers, and 9,000 Lightnings. During World War II, Lockheed, in cooperation with Trans-World Airlines (TWA), had developed the L-049 Constellation, a radical new airliner capable of flying 43 passengers between New York and London at a speed of 300 mph (480 km/h) in 13 hours. Once the Constellation (nicknamed Connie) went into production, the military received the first production models; after the war, the airlines received their original orders, giving Lockheed more than a year's head-start over other aircraft manufacturers in what was easily foreseen as the post-war modernization of civilian air travel. 
The Constellation's performance set new standards which transformed the civilian transportation market. Its signature tri-tail was the result of many initial customers not having hangars tall enough for a conventional tail. Lockheed produced a larger transport, the double-decked R6V Constitution, which was intended to make the Constellation obsolete. However, the design proved underpowered. In 1943, Lockheed began, in secrecy, development of a new jet fighter at its Burbank facility. This fighter, the Lockheed P-80 Shooting Star, became the first American jet fighter to score a kill. It also recorded the first jet-to-jet aerial kill, downing a Mikoyan-Gurevich MiG-15 in Korea, although by this time the F-80 (as it was redesignated in June 1948) was already considered obsolete. Starting with the P-80, Lockheed's secret development work was conducted by its Advanced Development Division, more commonly known as the Skunk Works. The name was taken from Al Capp's comic strip Li'l Abner. This organization has become famous and spawned many successful Lockheed designs, including the U-2 (late 1950s), SR-71 Blackbird (1962) and F-117 Nighthawk stealth fighter (1978). The Skunk Works often created high-quality designs in a short time and sometimes with limited resources. In 1954, the Lockheed C-130 Hercules, a durable four-engined transport, flew for the first time. This type remains in production today. In 1956, Lockheed received a contract for the development of the Polaris Submarine Launched Ballistic Missile (SLBM); it would be followed by the Poseidon and Trident nuclear missiles. Lockheed developed the F-104 Starfighter in the late 1950s, the world's first Mach 2 fighter jet. In the early 1960s, the company introduced the C-141 Starlifter four-engine jet transport. During the 1960s, Lockheed began development for two large aircraft: the C-5 Galaxy military transport and the L-1011 TriStar wide-body civil airliner. Both projects encountered delays and cost overruns.
The C-5 was built to vague initial requirements and suffered from structural weaknesses, which Lockheed was forced to correct at its own expense. The TriStar competed for the same market as the McDonnell Douglas DC-10; delays in Rolls-Royce engine development caused the TriStar to fall behind the DC-10. The C-5 and L-1011 projects, the cancelled U.S. Army AH-56 Cheyenne helicopter program, and embroiled shipbuilding contracts caused Lockheed to lose large sums of money during the 1970s. Drowning in debt, in 1971 Lockheed (then the largest US defense contractor) asked the US government for a loan guarantee, to avoid insolvency. The measure was hotly debated in the US Senate. The chief antagonist was Senator William Proxmire (D-Wis), the nemesis of Lockheed and its chairman, Daniel J. Haughton. Following a fierce debate, Vice President Spiro T. Agnew cast a tie-breaking vote in favor of the measure (August 1971). Lockheed finished paying off the $1.4 billion loan in 1977, along with about $112.22 million in loan guarantee fees. The Lockheed bribery scandals were a series of illegal bribes and contributions made by Lockheed officials from the late 1950s to the 1970s. In late 1975 and early 1976, a subcommittee of the U.S. Senate led by Senator Frank Church concluded that members of the Lockheed board had paid members of friendly governments to guarantee contracts for military aircraft. In 1976, it was publicly revealed that Lockheed had paid $22 million in bribes to foreign officials in the process of negotiating the sale of aircraft including the F-104 Starfighter, the so-called Deal of the Century. The scandal caused considerable political controversy in West Germany, the Netherlands, Italy, and Japan. In the US, the scandal led to passage of the Foreign Corrupt Practices Act, and nearly led to the ailing corporation's downfall (it was already struggling due to the poor sales of the L-1011 airliner). Haughton resigned his post as chairman. 
In the late 1980s, leveraged buyout specialist Harold Simmons conducted a widely publicized but unsuccessful takeover attempt on the Lockheed Corporation, having gradually acquired almost 20 percent of its stock. Lockheed was attractive to Simmons because one of its primary investors was the California Public Employees' Retirement System (CalPERS), the pension fund of the state of California. At the time, the New York Times said, "Much of Mr. Simmons's interest in Lockheed is believed to stem from its pension plan, which is over funded by more than $1.4 billion. Analysts said he might want to liquidate the plan and pay out the excess funds to shareholders, including himself." Citing the mismanagement by its chairman, Daniel M. Tellep, Simmons stated a wish to replace its board with a slate of his own choosing, since he was the largest investor. His board nominations included former Texas Senator John Tower, the onetime chairman of the Armed Services Committee, and Admiral Elmo Zumwalt Jr., a former Chief of Naval Operations. Simmons had first begun accumulating Lockheed stock in early 1989 when deep Pentagon cuts to the defense budget had driven down prices of military contractor stocks, and analysts had not believed he would attempt the takeover since he was also at the time pursuing control of Georgia Gulf.
1912: The Alco Hydro-Aeroplane Company established.
1916: Company renamed Loughead Aircraft Manufacturing Company.
1926: Lockheed Aircraft Company formed.
1929: Lockheed becomes a division of Detroit Aircraft.
1932: Robert and Courtland Gross take control of company after the bankruptcy of Detroit Aircraft. Company renamed Lockheed Aircraft Corporation, reflecting the company's reorganization under a board of directors.
1943: Lockheed's Skunk Works founded in Burbank, California.
1954: First flight of the Lockheed C-130 Hercules.
1954: Maiden flight of the Lockheed U-2.
1961: Grand Central Rocket Company acquired as Lockheed Propulsion Company.
1962: First flight of the A-12 Blackbird. 1964: First flight of the Lockheed SR-71 Blackbird. 1970 First flight of the L-1011 TriStar. 1976: The Lockheed bribery scandals. 1977: Company renamed Lockheed Corporation, to reflect non-aviation activities of the company. 1978: The company's Hollywood-Burbank Airport is sold to its nearby cities and becomes Burbank-Glendale-Pasadena Airport (later renamed Bob Hope Airport in 2003). 1981: First flight of the F-117 Nighthawk. 1985: Acquires Metier Management Systems. 1986: Acquires Sanders Associates electronics of Nashua, New Hampshire. 1991: Lockheed, General Dynamics and Boeing begin development of the F-22 Raptor. 1992: All aerospace related activities end at the Burbank facility. 1993: Acquires General Dynamics' Fort Worth aircraft division, builder of the F-16 Fighting Falcon. 1995: Lockheed Corporation merges with Martin Marietta to form Lockheed Martin. Lockheed-California Company (CALAC), Burbank, California. Lockheed-Georgia Company (GELAC), Marietta, Georgia. Lockheed Advanced Aeronautics Company, Saugus, California. Lockheed Aircraft Service Company (LAS), Ontario, California. Lockheed Air Terminal, Inc. (LAT), Burbank, California, now Bob Hope Airport and owned by the Burbank-Glendale-Pasadena Airport Authority. Lockheed Missiles & Space Company, Inc., Sunnyvale, California. Lockheed Propulsion Company, Redlands, California. Lockheed Space Operations Company, Titusville, Florida. Lockheed Engineering and Management Services Company, Inc., Houston, Texas. Lockheed Electronics Company, Inc., Plainfield, New Jersey. Lockheed Shipbuilding Company, Seattle, Washington. Lockport Marine Company, Portland, Oregon. Advanced Marine Systems, Santa Clara, California. Datacom Systems Corporation, Teaneck, New Jersey. Lockheed Data Plan, Inc., Los Gatos, California. DIALOG Information Services, Inc, Palo Alto, California. Metier Management Systems, London, England. Integrated Systems and Solutions, Gaithersburg, Maryland. 
A partial listing of aircraft and other vehicles produced by Lockheed.

Kakuei Tanaka, "Chapter 4: The Lockheed Scandal," A Political Biography of Modern Japan, covers the Japanese side of the scandal, in which the Kodama organization, a Yakuza gang, became mixed up.

This article is based on the Wikipedia article Lockheed_Corporation and is used under the terms of the GNU Free Documentation License; the list of authors is available on Wikipedia.
The Space Telescope Collection is a group of records compiled by Robert W. Smith. The material in this collection spans the years 1952 to 1991, but most of the records are concentrated in the 1970s and 1980s. The collection contains correspondence files, scientific plans, reports, publications, administrative records, and subject files from various NASA centers, universities, research groups, and contractors who were active in the development of the Hubble Space Telescope and the Space Telescope Science Institute (STScI).

Robert W. Smith compiled the Space Telescope Collection while writing his book The Space Telescope: A Study of NASA, Science, Technology and Politics. The book, published in 1989, was one of the main goals of the Space Telescope History Project (STHP). The STHP was the result of the efforts of historians from the Johns Hopkins University and the National Air and Space Museum. The project was partially funded by NASA and the National Science Foundation, and in the 1980s it was housed at STScI on the Johns Hopkins University's Homewood campus. The purpose of the project was to provide a history of the Space Telescope Project from its origins to the time when the telescope was placed in orbit. The project also developed a large body of historical resource files and oral history tapes. The oral histories and many of the files are housed at the National Air and Space Museum. The Space Telescope Collection is a portion of these working resource files.

The Hubble Space Telescope was the product of collaboration among NASA, private industry, and the academic world. Unlike telescopes on Earth, whose observations are degraded by the atmosphere and by light pollution, the Space Telescope allows astronomers to gather very accurate and detailed data. A telescope of this type was needed by astronomers, physicists, and space science researchers.
However, with so many diverse organizations contributing to the telescope's development, a central location became necessary to coordinate and manage the overall project. For this reason, STScI was established in 1981. The final location, the Steven Muller Building of the Johns Hopkins University, was selected after a series of proposals and presentations to AURA, the Association of Universities for Research in Astronomy, the consortium responsible for running STScI. The Space Telescope Science Institute, previously housed in the Johns Hopkins University's Rowland Hall, moved into its new location in 1982. The design and construction of the Space Telescope and its components continued under its direction throughout the 1980s. In April 1990, the completed Hubble Space Telescope was launched aboard the Space Shuttle Discovery and put into orbit.

Because this record group is an artificial collection dealing with the development of the Space Telescope Project, the documents it contains have been gathered from a variety of sources. The records do not provide complete documentation of either the Space Telescope or the agencies that contributed to its development. Twenty-four boxes of material were transferred to the Archives by Robert W. Smith in 1998.
Additional materials from Robert Smith's collection were also transferred to the National Air and Space Museum Archives and the Smithsonian Institution Archives.

The following publication cited or described this collection: Smith, Robert W. The Space Telescope: A Study of NASA, Science, Technology and Politics. Cambridge: Cambridge University Press, 1989.

This collection was processed by Holly Callahan.

Preferred citation: [Name of folder or item], [Date], [Box number], [Folder number], [Collection title], [Collection number], Special Collections, The Johns Hopkins University. http://aspace.library.jhu.edu/repositories/3/resources/9 Accessed April 22, 2019.
Has a monetary reward at work ever made you happier/more motivated? I have a question for you for an article I'm writing: Have you ever received a bonus or other monetary reward at work that was given in a way that made you happier at work and/or more motivated? If so, what was it about the reward that worked for you? Let me know in a comment.

In the past I've received a surprise bonus at the end of a big project, and it was a moment of happiness and motivation: "Hey, these people appreciate the work we did!" But when the next three projects finished up and no such bonus appeared, it was demoralizing, in that the Board appeared to have lost interest in or appreciation for the years of work that went into the projects. On a similar note, small unexpected thank-yous, such as gift cards to coffee shops or just special verbal recognition at a team meeting, go a long way toward motivating continued dedicated work on a project. Team celebrations at milestones, like cake and a quick break, may not seem meaningful when they happen regularly, but when they completely disappear over time, the company ends up feeling lifeless, without any sort of culture, and completely demoralized.