text (string, lengths 8 to 5.74M) | label (string, 3 classes) | educational_prob (list, length 3)
---|---|---|
Once you have created the perfect craft product trailer in Apple Motion, with the best image pans, text and transitions, it's time to use it as your prototype for a Motion template with drop zones. Drop zones are placeholders for your product images and .mov clips. They appear as just a gray box with an arrow, indicating that something else should be placed at that location on the timeline. However, it's the keyframe transitions and other effects attached to the drop zone that are important. Once you have those set up for each placeholder, the effects will work the same way when you add your product images to the drop zone. So to use a Motion template, you only need to replace the placeholder image in the drop zones or the text in the text boxes on the screen. That's what we will do in the next few tutorials. First we will take a look at our prototype product trailer (see below). We will only be duplicating the middle section of the trailer that contains the product images. We will not be using the existing beginning logo reveal or the closing animation. Instead, we will leave the first 10 seconds of the timeline empty so that we can import any logo reveal of our choice, and another empty 9 seconds at the end for importing a closing. Of course, the background audio file will be replaced to reflect the product. So what we need to do now is dissect the prototype to get some information about the size of the product images, the duration of each image on the screen, and the duration of the transitions between the images. As you can see from the list below, we have eight product images and each is on the screen for five seconds, with a half-second fade in at the beginning and a half-second fade out at the end of those five seconds. The fade out of the previous product image also overlaps the fade in of the next product image. Below is the breakdown of the prototype, showing the starting and ending positions of each image on the screen and on the timeline, i.e. each drop zone. Remember that Motion's Timing Display shows the location of the playhead on the timeline as hours, minutes, seconds and frames (HR:MIN:SEC:FR). For example, one and one half seconds on the timeline will be displayed as 00:00:01:15, because we have 30 frames per second; a short timecode sketch follows below.
1 - 10 seconds - Import a pre-made logo reveal
9:15 - 14:15 - First image pans from top left to center of screen
14:00 - 19:00 - Second image pans from bottom left to center of screen | High | [
0.697892271662763,
37.25,
16.125
]
|
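To make the timecode arithmetic in the Motion tutorial above concrete, here is a small Python sketch. It is my own illustration, not part of the original tutorial, and the to_timecode helper is a hypothetical name; it converts timeline positions in seconds to Motion-style HR:MIN:SEC:FR timecode at 30 fps and prints the overlapping drop-zone windows described in the breakdown.

```python
FPS = 30  # the tutorial's project runs at 30 frames per second

def to_timecode(seconds: float) -> str:
    """Convert a timeline position in seconds to HR:MIN:SEC:FR."""
    total_frames = round(seconds * FPS)
    frames = total_frames % FPS
    total_seconds = total_frames // FPS
    sec = total_seconds % 60
    minutes = (total_seconds // 60) % 60
    hours = total_seconds // 3600
    return f"{hours:02d}:{minutes:02d}:{sec:02d}:{frames:02d}"

print(to_timecode(1.5))  # -> 00:00:01:15, as in the tutorial

# Drop-zone windows: each image is on screen for 5 s, and its 0.5 s fade-in
# overlaps the previous image's 0.5 s fade-out, after a 10 s logo reveal.
image_duration, overlap, logo_end = 5.0, 0.5, 10.0
start = logo_end - overlap  # first image starts at 00:00:09:15
for i in range(8):
    end = start + image_duration
    print(f"image {i + 1}: {to_timecode(start)} - {to_timecode(end)}")
    start = end - overlap
```

Running this reproduces the 9:15 - 14:15 and 14:00 - 19:00 windows listed in the breakdown above.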
CASA in invertebrates. Sperm movement has been described in several phyla of invertebrates. Yet, sperm motility has only been quantified using computer-aided sperm analysis (CASA-Mot) in externally fertilising species (broadcast spawners) of two phyla, molluscs and echinoderms. In the present study we quantified in detail the nature of the sperm tracks, percentage motility groupings and detailed kinematics of rapid-, medium- and slow-swimming spermatozoa in the oyster Crassostrea gigas and four species never previously studied by CASA-Mot, namely the molluscs Choromytilus meridionalis, Donax serra and Haliotis midae and the echinoderm Parechinus angulosus. A feature common to all these species is the helical track, the diameter of which seems to be species-specific. Using CASA-Mot, the behaviour of spermatozoa was also studied over time and in the presence of egg water and Ca2+ modulators such as caffeine and procaine hydrochloride. For the first time, we show that hyperactivation can be induced in all species in the presence of egg water (sea water that was mixed with mature eggs and then centrifuged) and/or caffeine, and these hyperactivated sperm tracks were characterised using CASA-Mot. We relate the different patterns of sperm motility and behaviour to reproductive strategies such as broadcast spawning and spermcasting, and briefly review studies using CASA-Mot on other invertebrates. | High | [
0.6683937823834191,
32.25,
16
]
|
Mississippi senator says solar company on special session agenda A Mississippi senator says a proposed solar project is on Friday's agenda for the legislative special session called by Gov. Haley Barbour. JACKSON, Mississippi — A Mississippi lawmaker said Friday's special legislative session includes incentives to bring a California-based company to Lowndes County. Republican Sen. Terry Brown of Columbus told The Associated Press on Wednesday that he has been briefed by local development officials about plans by Calisolar to open a manufacturing plant that would create 900 jobs. The company uses silicon to make cells that are used in energy-producing solar panels. The Commercial Dispatch newspaper first reported Monday that there had been local speculation about Calisolar coming to Columbus, Miss. Officials say Ohio also was competing for the Calisolar plant. Brown told the AP that the new jobs could pay $40,000 to $50,000 a year. He said he didn't know what kind of incentives the state will offer. Calisolar officials did not immediately return calls to the AP on Wednesday. Gov. Haley Barbour on Wednesday afternoon will announce specific projects to be considered during the session, which begins at 10 a.m. Friday. Barbour said Monday that one economic development project will be on the session's agenda, and officials were trying to finish details of another project that could be considered. The chairman of Calisolar's board of directors is John D. Correnti, according to the company's website. "I trust him with my life," Brown said of Correnti, who was CEO of a steel mill that opened in Columbus, Miss., in 2007. The mill originally was called SeverCorr. It was taken over by a Russian company and changed its name to Severstal. Severstal bought out the shares of SeverCorr's senior management, including Correnti. The state issues bonds as long-term debt to finance big projects such as construction or repair of highways or public buildings, or to provide incentives to lure companies to Mississippi. After legislators authorize bond debt, bonds are issued by the state Bond Commission, made up of the governor, the state treasurer and the attorney general. The commission's next meeting is Sept. 19. Barbour said Monday that the special session is timed to come before that meeting "so that these large projects can get started this winter, if the Legislature approves them." Only a governor can call a special session, and he determines which issues lawmakers can consider. This will be the first special session since lawmakers wrapped up their three-month regular session in early April. | Low | [
0.536203522504892,
34.25,
29.625
]
|
// Copyright 2000-2017 JetBrains s.r.o. Use of this source code is governed by the Apache 2.0 license that can be found in the LICENSE file.
package qunar.tc.decompiler.struct.gen.generics;

public class GenericFieldDescriptor {

  public final GenericType type;

  public GenericFieldDescriptor(GenericType type) {
    this.type = type;
  }
} | Low | [
0.410430839002267,
22.625,
32.5
]
|
<?php

namespace Oro\Bundle\ContactUsBundle\Tests\Unit\Form\Type;

use Oro\Bundle\ContactUsBundle\Form\Type\ContactReasonSelectType;
use Oro\Bundle\FormBundle\Form\Type\OroEntitySelectOrCreateInlineType;
use Symfony\Component\Form\Test\TypeTestCase;
use Symfony\Component\OptionsResolver\OptionsResolver;

class ContactReasonSelectTypeTest extends TypeTestCase
{
    /** @var ContactReasonSelectType */
    private $formType;

    /**
     * {@inheritdoc}
     */
    protected function setUp(): void
    {
        $this->formType = new ContactReasonSelectType();
    }

    public function testGetParent()
    {
        $this->assertEquals(OroEntitySelectOrCreateInlineType::class, $this->formType->getParent());
    }

    public function testGetBlockPrefix()
    {
        $this->assertEquals('oro_contactus_contact_reason_select', $this->formType->getBlockPrefix());
    }

    public function testConfigureOptions()
    {
        /* @var $resolver OptionsResolver|\PHPUnit\Framework\MockObject\MockObject */
        $resolver = $this->createMock(OptionsResolver::class);
        $resolver->expects($this->once())
            ->method('setDefaults')
            ->with($this->isType('array'))
            ->willReturnCallback(
                function (array $options) {
                    $this->assertArrayHasKey('autocomplete_alias', $options);
                    $this->assertArrayHasKey('create_form_route', $options);
                    $this->assertArrayHasKey('configs', $options);
                    $this->assertEquals('contact_reasons', $options['autocomplete_alias']);
                    $this->assertEquals('oro_contactus_reason_create', $options['create_form_route']);
                    $this->assertEquals(
                        [
                            'placeholder' => 'oro.contactus.form.choose_contact_reason'
                        ],
                        $options['configs']
                    );
                }
            );

        $this->formType->configureOptions($resolver);
    }
} | High | [
0.6683673469387751,
32.75,
16.25
]
|
Q: Signature below listings

How can I put an automatic signature below my code listings? Now I put my code listings like this:

\begin{lstlisting}
MY CODE
\end{lstlisting}

Settings of listings:

\usepackage{color}
\definecolor{bluekeywords}{rgb}{0.13,0.13,1}
\definecolor{greencomments}{rgb}{0,0.5,0}
\definecolor{redstrings}{rgb}{0.9,0,0}
\usepackage{listings}
\lstset{language=[Sharp]C,
  showspaces=false,
  showtabs=false,
  breaklines=true,
  numbers=left,
  frame=single,
  showstringspaces=false,
  breakatwhitespace=true,
  escapeinside={(*@}{@*)},
  commentstyle=\color{greencomments},
  keywordstyle=\color{bluekeywords},
  stringstyle=\color{redstrings},
  basicstyle=\ttfamily
}

A: You can use the caption key to obtain the captions; use captionpos=b so that the captions appear below the listing (the default position is above). The command \lstlistoflistings gives you a list of all listings for which you declared a caption.

\documentclass{article}
\usepackage{listings}
\usepackage{xcolor}
\definecolor{bluekeywords}{rgb}{0.13,0.13,1}
\definecolor{greencomments}{rgb}{0,0.5,0}
\definecolor{redstrings}{rgb}{0.9,0,0}
\usepackage{listings}
\lstset{language=[Sharp]C,
  showspaces=false,
  showtabs=false,
  breaklines=true,
  numbers=left,
  frame=single,
  showstringspaces=false,
  breakatwhitespace=true,
  escapeinside={(*@}{@*)},
  commentstyle=\color{greencomments},
  keywordstyle=\color{bluekeywords},
  stringstyle=\color{redstrings},
  basicstyle=\ttfamily,
  captionpos=b
}

\begin{document}
\lstlistoflistings

\begin{lstlisting}[caption={this is some description of the first listing}]
MY CODE
\end{lstlisting}

\begin{lstlisting}[caption={this is some description of the second listing}]
MY CODE
\end{lstlisting}

\begin{lstlisting}[caption={this is some description of the third listing}]
MY CODE
\end{lstlisting}

\end{document} | High | [
0.673417721518987,
33.25,
16.125
]
|
Les Moonves: Trump's run is 'damn good for CBS' Donald Trump’s candidacy might not be making America great, CBS Chairman Les Moonves said Monday, but it’s great for his company. "It may not be good for America, but it's damn good for CBS," Moonves said at the Morgan Stanley Technology, Media & Telecom Conference in San Francisco, according to The Hollywood Reporter — perfectly distilling what media critics have long suspected was motivating the round-the-clock coverage of Trump's presidential bid. "Most of the ads are not about issues. They're sort of like the debates," Moonves said, noting, "[t]here's a lot of money in the marketplace." The 2016 campaign is a "circus," he remarked, but "Donald's place in this election is a good thing." "Man, who would have expected the ride we're all having right now? ... The money's rolling in and this is fun," Moonves went on. "I've never seen anything like this, and this going to be a very good year for us. Sorry. It's a terrible thing to say. But, bring it on, Donald. Keep going.” | Mid | [
0.552016985138004,
32.5,
26.375
]
|
New strategies in polypeptide and antibody synthesis: an overview. The synthesis of radioligands can benefit considerably from optimized recombinant protein production, both in terms of production economy and in improving the targeting and pharmacokinetics of the ligand. This paper first describes a general production optimization strategy, and then elaborates on a protein design strategy tailored to targeting applications. Production in Escherichia coli will benefit from economy of goods and time as compared to other organisms. In order to increase the chance of finding a successful production system in this host, we have assembled a large number of expression strategies in a single, uniform expression system (FastScreen). The system allows rapid optimization of direct production of native proteins or production via a fusion protein strategy with subsequent recovery of the desired protein. As an example of recombinant radioligand synthesis for improved targeting and clearing, a manifold of intermediate molecular size was synthesized by fusing one Fab and two single-chain variable fragment (scFv) antibody-binding fragments into a trifunctional molecule (Tribody). Due to the use of the specific heterodimerization of the Fab chains, trispecific, bispecific, or trivalent antibody-derived targeting reagents can easily be obtained. Recombinant production techniques also allow for specific incorporation of amino acids favoring site-specific labeling (labeling tags). | High | [
0.706199460916442,
32.75,
13.625
]
|
Q: Using a single large controller or multiple controllers

Rails 3.2, Ruby 2.1.5

I am working on an application to create tickets (service orders). A ticket is going to have a number of sections. Rather than create a single large controller, would it be better if I had multiple controllers/models, one for each section of the ticket, and a single view to display the sections in a single form? I would also have a views/tickets/shared set of views (one for each section), and from the main view, I would render each as needed inside a div in the main view.

A: You should use multiple controllers for an application like this; otherwise the single controller object will start to violate the Single Responsibility Principle, making future changes to the code base difficult and error-prone. | Mid | [
0.5977011494252871,
32.5,
21.875
]
|
[Optic coherent tomography: a new high-resolution technology of visualization of tissue structures. Communication II. Optical images of benign and malignant entities]. This is the second communication of a series of publications on Russian studies in the field of optical coherent tomography (OCT), the newest noninvasive, high-resolution technology for visualization of the structure of biological tissues. By using the investing tissues as an example, this paper demonstrates the universal types of changes in their optical properties. Optical images permit differentiation of benign and malignant processes with a high degree of diagnostic accuracy. Diverse benign processes occurring in the epithelium are detected on the OCT images as changes in its height, the scattering properties and the course of the basilar membrane. The absence of any structure on the image is the main OCT criterion for malignancy. The diagnostic efficiency of OCT is high in recognizing neoplasia of various mucous membranes: the sensitivity of the technique is 77-98%; its specificity and diagnostic accuracy are 71-96% and 81-87%, respectively. | High | [
0.686813186813186,
31.25,
14.25
]
|
Confluence Now Patching Leaks Following Dec. Dealer Letter Confluence management has been left scrambling to patch holes in a retail ship that sprang leaks faster than an air mattress lying on a carpet of thumbtacks following a now infamous dealer letter. In fact, the letter itself may go down as one of the most well intentioned, but most poorly conceived and executed letters in the industry's history. If you haven't already read it, please click here to view the text of the letter retyped from a widely faxed copy that made the rounds of incredulous dealers and manufacturers, eventually ending up on SNEWS® desks. Confluence CEO, Bill Medlin, told SNEWS® that, "Kelley's (Woolsey) background with O'Neil is operative here, and while he was there the company made the decision to broaden distribution beyond purely specialty channels and make it more mainstream. And that is not entirely inconsistent with what our view is." Medlin went on to tell us that the traditional specialty paddlesports store is the company's bread and butter and where the majority of business is conducted. But he also added that Confluence eyes two other categories of retailer: One, a middle-tier hybrid along the lines of REI and EMS where a few of the stores are specialty in nature and manage a real focus on paddlesports and, two, the more mainstream store that was meant by the letter Woolsey sent out -- the Galyan's, Dicks, Sunny's, West Marine, and GI Joes of the world. "We have no desire to sell to big boxes or Jumbo sports or Wal-Mart or Kmart," says Medlin. Medlin added that the company's goal is to create brand equity as well as model equity. "It is all theory and we are still working it out, but essentially, we want to create a level of product that would be considered entry level -- be it Mad River or Wave Sport or Wilderness Systems -- that would be available to that mainstream store such as West Marine. And then, if the customer wants to upgrade that product with premium accessories or wants a higher-end product, they will have to go to the specialty dealer. Intentions aside, Medlin doesn't disagree with the passionate and sometimes vitriolic response the dealer letter generated. "If I were in their shoes (the dealers) I can't say I wouldn't have felt the same way when I read the letter; however, many folks are making assumptions and drawing conclusions that are not fair." When all the dust settles, Medlin told SNEWS® that Confluence wants to be known first and foremost for what the company sells through the specialty chain, with boats and add-on features that are -- and will only be -- available at specialty. Did the company learn anything from this experience? "First of all, Kelley and I need to spend more time talking with each other before we head out and do things. Secondly, this industry is largely one based on relationships, and it is very clear we need to get out there even more to establish and re-establish relationships in the paddlesport community."
SNEWS® View: Confluence is definitely paddling furiously in some very stormy waters of its own making, and Woolsey needs to realize that. He created the storm with an ill-conceived letter that broke a cardinal rule in this industry -- don't surprise your key dealers. Many of those dealers, as well as SNEWS®, are still waiting for a return call from Woolsey. One key dealer, who sells Confluence boats in the hundreds, was promised a return call to explain the letter in December -- he's still waiting. We would firmly suggest to Woolsey that he consider calling each and every dealer quickly -- first, to apologize, and second, to listen. We are particularly stunned at his pointing to the ski industry and the tennis industry as models upon which to build a distribution foundation. Please! Woolsey's logic that placing boats in a sporting goods store with a wider audience will encourage more buyers at the specialty level certainly hasn't been proven true in either the ski or tennis industries, and attempts by a few outdoor and paddlesports manufacturers to dabble in similar distribution schemes in the past have failed miserably. Perhaps he knows something we don't? If so, we'd certainly love to hear it. Finally, to end the letter that is filled with wonderful words like "peace" and "partnership" and "prayers" with a statement with a meaning something like, if you don't sign the agreement Confluence has created (never mind the fact we never discussed it with you first) we will just cut off your access to our boats, is not only short-sighted, it is patently foolish. Of course, there are a few companies in this industry -- and you know who you are -- that took unfair advantage of this situation too, and one in particular committed an act that we feel should be beneath the integrity and values of any legitimate business in this industry. Reprinting portions of Confluence's dealer letter as well as spreading a very one-sided view of the situation in your own letter to industry dealers is simply fanning flames and needs to stop. Now. We do agree with Woolsey on one point made in his letter: "We need to realize and understand that our greatest competition is not each other, but other sports in general." If you believe in the well-being and future of this industry, then work to build it up, not tear down pieces of it for others to take fleeting pleasure in stomping on. | Low | [
0.49118942731277504,
27.875,
28.875
]
|
Parasite genetics and the immune host: recombination between antigenic types of Eimeria maxima as an entrée to the identification of protective antigens. The genomes of protozoan parasites encode thousands of gene products and identification of the subset that stimulates a protective immune response is a daunting task. Most screens for vaccine candidates identify molecules by capacity to induce immune responses rather than protection. This paper describes the core findings of a strategy developed with the coccidial parasite Eimeria maxima to rationally identify loci within its genome that encode immunoprotective antigens. Our strategy uses a novel combination of parasite genetics, DNA fingerprinting, drug-resistance and strain-specific immunity and centres on two strains of E. maxima that each induce a lethal strain-specific protective immune response in the host and show a differential response to anti-Eimeria chemotherapy. Through classical mating studies with these strains we have demonstrated that loci encoding molecules stimulating strain-specific protective immunity or resistance to the anti-coccidial drug robenidine segregate independently. Furthermore, passage of populations of recombinant parasites in the face of killing in the immune host was accompanied by the elimination of some polymorphic DNA markers defining the parent strain used to immunise the host. Consideration of the numbers of parasites recombinant for the two traits implicates very few antigen-encoding loci. Our data provide a potential strategy to identify putative antigen-encoding loci in other parasites. | High | [
0.6762028608582571,
32.5,
15.5625
]
|
187 Cal.App.3d 1344 (1986) 232 Cal. Rptr. 588 JOAN M. YOUNG, Plaintiff and Appellant, v. DAVID J. BRUNICARDI, Defendant and Respondent. Docket No. A030566. Court of Appeals of California, First District, Division Two. November 17, 1986. *1347 COUNSEL Jeanette K. Shipman, Sterns, Smith & Walker and Sterns, Smith, Walker, Pesonen & Grell for Plaintiff and Appellant. David F. Beach and James D. Biernat for Defendant and Respondent. OPINION ROUSE, J. Plaintiff, Joan Young, appeals from a judgment entered on a special verdict finding that defendant, David Brunicardi, was not negligent. Plaintiff appeals on the ground of jury misconduct and on the ground that the verdict is against the weight of evidence. This is an action for personal injuries sustained in a head-on automobile accident which occurred on April 23, 1981. The case was arbitrated on September 16, 1983, with an award to plaintiff. Defendant rejected the arbitration award and filed a request for a jury trial pursuant to rule 1616(a), California Rules of Court. Trial began on October 24, 1984. During voir dire, a venireman stated she was "familiar with the case because it did go through arbitration," and was duly excused from the panel. No evidence concerning arbitration proceedings was introduced at trial. Trial concluded on October 30, 1984, and the jury returned a verdict on a vote of nine to three finding defendant was not negligent. Judgment was entered on November 26, 1984. On November 13, 1984, plaintiff filed notice of her intention to move for a new trial, citing jury misconduct and insufficiency of the evidence as grounds for the motion. (Code Civ. Proc., § 657, subds. 2, 6.) Plaintiff submitted four juror affidavits to support the impeachment of the verdict. Plaintiff and her attorney also filed affidavits disclaiming they had knowledge of jury deliberations and potential misconduct prior to rendition of the verdict. Defendant submitted two juror counter declarations to support his opposition to the motion for new trial. The motion was argued on December 21, 1984, and denied without comment on December 27, 1984. Appeal from the judgment was timely made. *1348 I. Plaintiff claims that the trial court improperly denied her motion for new trial[1] made on grounds that there had been prejudicial jury misconduct. Defendant argues that the grant or denial of a motion for a new trial rests so completely with the discretion of the trial court that an appellate court will not interfere unless abuse of discretion is shown. (1) Defendant's assertion is partially correct in that extraordinary deference is usually shown to the trial judge's determination in appeals from orders granting a new trial. (Weathers v. Kaiser Foundation Hospitals (1971) 5 Cal.3d 98, 109 [95 Cal. Rptr. 916, 485 P.2d 1132]; Andrews v. County of Orange (1982) 130 Cal. App.3d 944, 954-955 [182 Cal. Rptr. 176].) However, where the trial judge denies the motion, the situation is different, and calls for a different approach. In our review of such an order denying a new trial, as distinguished from an order Granting a new trial, we are mindful that the appellate court has a constitutional obligation (Cal. Const., art. VI, § 13) to review the entire record, including the evidence, so as to make an independent determination as to whether the act of jury misconduct, if it occurred, was prejudicial to the complaining party's right to a fair trial. (Hasson v. Ford Motor Co. (1982) 32 Cal.3d 388, 417, fn. 10 [185 Cal. Rptr. 654, 650 P.2d 1171]; City of Los Angeles v. 
Decker (1977) 18 Cal.3d 860, 872 [135 Cal. Rptr. 647, 558 P.2d 545]; Tapia v. Barker (1984) 160 Cal. App.3d 761, 765 [206 Cal. Rptr. 803]; Andrews v. County of Orange, supra, 130 Cal. App.3d at 955.) (2) Once juror misconduct is established in either a criminal or civil case, a presumption of prejudice will arise. (People v. Honeycutt (1977) 20 Cal.3d 150, 156 [141 Cal. Rptr. 698, 570 P.2d 1050]; Hasson v. Ford Motor Co., supra, 32 Cal.3d 388, 416-417.) "However, the presumption is not conclusive; it may be rebutted by an affirmative evidentiary showing that prejudice does not exist or by a reviewing court's examination of the entire record to determine whether there is a reasonable probability of actual harm to the complaining party resulting from the misconduct. [Citing Smith v. Covell (1980) 100 Cal. App.3d 947, 953-954 (161 Cal. Rptr. 377).] Some of the factors to be considered when determining whether the presumption is rebutted are the strength of the evidence that misconduct occurred, the nature and seriousness of the misconduct, and the probability that actual prejudice may have ensued." (Hasson v. Ford Motor Co., supra, 32 Cal.3d at p. 417.) *1349 II. Plaintiff's allegation of juror misconduct is based upon certain statements attributed to Juror Rudolph Milon who, in the course of his voir dire questioning, had stated that he had retired as a police sergeant seven years before after serving more than 27 years on the force; that, as a policeman, he had occasion to investigate vehicular accidents and that, currently, he was vice-president of the San Francisco Police Credit Union. Plaintiff claims that prejudicial misconduct occurred when, in the course of jury deliberation, Mr. Milon gave erroneous instructions on the law to the other jurors; also, when the jury discussed and speculated about why a police report was not introduced into evidence. In support of her motion for a new trial, plaintiff submitted affidavits from six jurors. Two of those jurors, Smith and Michela, referred to Juror Milon as the "retired police officer." According to their affidavits, Milon told his fellow jurors that they "should have been able to see the police report which would have indicated the presence of negligence" (Juror Smith) or that the "jurors needed to see the police report on the accident to find negligence ..." (Juror Michela). According to both Smith and Michela, Juror Milon stated that the plaintiff must have had something to hide, otherwise the jurors could have looked at the report. Jurors Smith and Michela also reported that Juror Milon said defendant could not be negligent if there was no violation of the Vehicle Code. According to them, Juror Milon then asked that the jury be polled on the issue of whether there had been a Vehicle Code violation. Their story was corroborated by Juror Kutches, who noted that "[s]everal jurors discussed the fact that the police officer wasn't asked by the plaintiff's attorney whether he had issued a citation. These jurors said that with no citation there was no violation of the law and no negligence." Mr. Milon did not refute these affirmations in his counterdeclaration but stated that "the issue of the Defendant's possible negligence was discussed" in the course of the jury's deliberations and conceded that "[s]ome jury members, including myself, also felt that important evidence had not been produced, including a police report." 
(3) Jurors cannot, without violation of their oath, receive or communicate to fellow jurors information from sources outside the evidence in the case. (Smith v. Covell, supra, 100 Cal. App.3d 947, 952.) Communication to fellow jurors of information on an issue under litigation except in open court and in the manner provided by law constitutes misconduct. (Andrews v. County of Orange, supra, 130 Cal. App.3d 944, 958.) (4) When extraneous *1350 law enters a jury room i.e., a statement of law not given to the jury in the instruction by the court the defendant is denied his constitutional right to a fair trial unless the People can prove that no actual prejudice resulted. (In re Stankewitz (1985) 40 Cal.3d 391, 397 [220 Cal. Rptr. 382, 708 P.2d 1260], citing Noll v. Lee (1963) 221 Cal. App.2d 81, 87-94 [34 Cal. Rptr. 223].) (5) Certain evidence is admissible to impeach a verdict: "Upon an inquiry as to the validity of a verdict, any otherwise admissible evidence may be received as to statements made, or conduct, conditions, or events occurring, either within or without the jury room, of such a character as is likely to have influenced the verdict improperly." (Evid. Code, § 1150, subd. (a).) It is settled that jurors are competent witnesses to prove objective facts under this provision. (People v. Hutchinson (1969) 71 Cal.2d 342, 351 [78 Cal. Rptr. 196, 455 P.2d 132].) By contrast, the Legislature has declared evidence of certain other facts to be inadmissible for this purpose: "No evidence is admissible to show the effect of such statement, conduct, condition, or event upon a juror either in influencing him to assent or dissent from the verdict or concerning the mental processes by which it was determined." (Evid. Code, § 1150, subd. (a); italics added.) "Thus, jurors may testify to `overt acts' that is, such statements, conduct, conditions, or events as are `open to sight, hearing, and the other senses and thus subject to corroboration' but may not testify to `the subjective reasoning processes of the individual juror....' (People v. Hutchinson, supra, at pp. 349-350.)" (In re Stankewitz, supra, 40 Cal.3d 391, 398.) (6) Among the overt acts that are admissible and to which jurors are competent to testify are erroneous statements of law made by another juror. (In re Stankewitz, supra, 40 Cal.3d 391, 398-400.) In Stankewitz, defendant was convicted of first degree murder and robbery by a jury which had been told by one juror, a retired police officer, that a robbery occurs when a person forcibly takes personal property from someone else, regardless of whether the malefactor intends to keep the property. (Id., at p. 396.) The Supreme Court found that the officer had, in effect, consulted his own experience in law enforcement on a question of law. (Id., at p. 399.) "[V]ouching for [the] correctness [of his statement of the elements of robbery] on the strength of his long service as a police officer, he stated it again and again to his fellow jurors and thus committed overt misconduct." (Id., at p. 400.) (7) The determination by a trial court of a motion for a new trial submitted on affidavits which present conflicting facts is a determination of those *1351 controverted facts in favor of the prevailing party. (Weathers v. Kaiser Foundation Hospitals, supra, 5 Cal.3d 98, 108; Andrews v. County of Orange, supra, 130 Cal. App.3d 944, 957.) Here the affidavits and declarations before the trial court were not in conflict. 
(8) Here, the jury was admonished to follow the law on which they were instructed by the court. Yet Juror Milon, in facts similar to those in Stankewitz, violated the court's instructions and described his own outside experience as a police officer on a question of law. He erroneously instructed his fellow jurors, some of whom apparently repeated his legal advice, that defendant was not negligent if he was not cited for a Vehicle Code violation as a consequence of the accident. Juror Milon's erroneous statement of law carried substantial authority because he was, as his fellow jurors knew, a veteran police officer who had retired after more than 27 years on the force. His erroneous legal advice to his fellow jurors constituted an overt act of misconduct. That misconduct raises a presumption that plaintiff suffered prejudice. (People v. Honeycutt, supra, 20 Cal.3d 150, 156; Hasson v. Ford Motor Co., supra, 32 Cal.3d 388, 416-417.) Defendant has failed to rebut the presumption of prejudice that arises from jury misconduct. There was no conflict presented by the affidavits and declarations. Nor did Juror Milon's affidavit refute acts of misconduct ascribed to him. Defendant argues that, in any case, the jury did not believe that plaintiff had submitted sufficient evidence to establish negligence and that it was the jury's determination that plaintiff had not met her burden of proof which led to the discussion of the police report.[2] Inquiry into what the jurors "felt" about the quantum of evidence presented by plaintiff is evidence of the mental processes by which the jury reached its verdict; as such it is inadmissible. (Evid. Code, § 1150, subd. (a); see also In re Stankewitz, supra, 40 Cal.3d 391, 402-403.) In this case the jury's verdict for defendant was reached on a nine-to-three vote, with Juror Milon voting with the majority. Had he not applied an erroneous legal standard to plaintiff's negligence claim, then a verdict for defendant might have been lacking his vote. (See Andrews v. County of Orange, supra, 130 Cal. App.3d 944, 959.) Furthermore, as recounted in the affidavit of Juror Kutches, it appears that other jurors stated the applicable law as Milon set it out if there was no citation, there was no negligence. *1352 After an examination of the entire record, we conclude that there is, at the very least, a reasonable probability that plaintiff did suffer harm as a result of the misconduct. (Hasson v. Ford Motor Co., supra, 32 Cal.3d 388, 417.) III. (9) Plaintiff asserts two additional instances of alleged jury misconduct. The first of these rests upon a discussion by jurors of the source of money to pay a potential judgment. Plaintiff suggests that discussion of where the money for the judgment was to come from was somehow tied to the jury's impression that defendant was a "nice guy." Hence, plaintiff suggests, the jury declined to find defendant liable because it was concerned about the financial impact a verdict would have upon him. Looking to the affidavits we find the following references: Juror Smith states that "one juror asked where the money would come from if the verdict was in favor of the Plaintiff," where upon several of the jurors discussed this subject; Juror Michela states that "some of the jurors wondered where the money was going to come from if the jury found in favor of the plaintiff"; Michela notes, in a separate paragraph, that "one male juror ... said that David Brunicardi was a `nice guy.' Other jurors agreed with this statement...." 
We find nothing in these affidavits which is sufficient to establish a bias in favor of defendant or to cause the jury to avoid imposing the financial burden on a judgment upon him. (10) Plaintiff also alleges there was jury misconduct based upon a discussion of prior arbitration proceedings. Juror Smith's affidavit states that "one juror made reference to a dismissed juror's comment that the case had already been to arbitration, and that if that was the case and no decision could be made at that time, it was probably a weak case." Juror Kutches' affidavit states that "there was discussion by members of the jury that the case was already four years old and that this was probably not the first time that the case went through the court system because the case was so old." The venireman's remark about prior arbitration was made in response to a question addressed to the whole venire as to whether any of them were familiar with the events in question. She said she was, and mentioned that the case had been in arbitration. Apparently, plaintiff's counsel assumed that the reference to arbitration was not potentially prejudicial, and thus did not ask for an admonition. His tactical choice seems to have been correct *1353 in that only one affidavit mentions the arbitration, which apparently did not loom large in the jury's discussion. Having reviewed the entire record, we conclude that there was jury misconduct here which prejudiced the plaintiff and prevented her from receiving a fair trial.[3] The judgment is reversed and the matter remanded to the trial court for further proceedings. Plaintiff shall recover her costs on appeal. Kline, P.J., and Smith, J., concurred. NOTES [1] Although an order denying a new trial is nonappealable, it is reviewable on an appeal from the judgment under Code of Civil Procedure section 906. [2] Defendant relies upon a statement which appears in the affidavit of Juror Greene and in the counterdeclarations of Jurors Milon and Simpson. It reads, in pertinent part, as follows: During the course of deliberations, the issue of defendant's possible negligence was discussed. Because a majority of the jury members did not feel that enough evidence had been presented to make a fair determination regarding negligence or non-negligence, a majority of the jurors determined that they could not find negligence. [3] Because we reverse on other grounds, we need not reach plaintiff's additional contention that the verdict is against the weight of evidence. | Mid | [
0.558704453441295,
34.5,
27.25
]
|
Queer Erotica: Pony Play. Reclaiming the devilish I wanted to go, but $80 was too much. She was coming I thought it would be wonderful to see her again… I said “I’ll be yours for the night if you have a plus one.” It was too late, I’d said it and she’d said yes. I was going and I was quite excited. A bold move on my part, such bolshy confidence I hadn’t felt in such a long time, metered with the overthinking after thought of “was that rude, what if she says yes, what does that involve, what have I signed up for?!” It was too late, I’d said it and she’d said yes. I was going and I was quite excited. I knew one of my best friends would be there if I needed so I knew I’d be safe, but the adventure of the unknown was intoxicating. I’d not been out on a wild unknown limb in a while, and certainly hadn’t let anyone any near my body. The day grew closer and she sent me a picture of a pony bridle and bit, and asked if I was into pony play. My mind raced, I didn’t know what that would entail for her.. I’d participated in a few other play scenarios before mind you, with less industrial equipment shall we say. Unicorns are ponies I thought, I love those rainbow tails you can get, I jumped online and put one on a wishlist, thinking the always come with such wee plugs, maybe that’s so it’s more comfortable to wear over a longer period of time. Anyway back to the story. The day arrived and I’d cried three times before it was near time to get there. I’d woken up feeling low, tired and lonely… tears flowed in the shower as I pulled myself together to face the day, dance practice was next. I was looking forward to this, a blat of exercise to shake up the adrenaline and shift the mood so I’d be bouncy and ready to dance later. Queue a wonderful lesson, on preparing for dance competitions by being kind to your inner child – and tears. I love this work and have much to say to my inner child and much to re-write. Shaking that off I was on to the next thing. I don’t do busy days by halves I thought, and at least it was a comedy show, laughs and light heartedness that I love to shift the mood so I’d be bounce and ready to dance later. The universe really had other plans for my day. Hannah Gadsby was doing her retirement show “Nannette”. She’s an amazing woman, and boy did she share her story and the ringer she’s been put through. Powerfully she announced she’s retiring. Boldly exposing how so much comedy is based around self deprecating, self humiliating and reinforces one’s own attachment to emotional repression, an inability to communicate or ask for help when hurt, frustrated or angry. That much “humour” is mocking someone, something or calling oneself terrible things in order to garner a laugh from an audience. She was standing up for herself, her self worth and refusing make herself the brunt of the joke anymore. Humour is amazing but boy does it conceal or shut down emotional openness and deflect from a world of hurt or acute fear of vulnerability. Queue more tears, me and the rest of the entire theatre. This isn’t the sexy story you thought you’d be reading but it has a happy ending I promise.I was shaken, the universe had wanted to get a point across to me, and I was listening. I was fragile but being kind to myself again. Sitting in my vulnerability, I thought “I guess I’m ready for a dance now”. This was not the mood shift or energy I had been expecting. 
The club was dark, mirror ball covered dangly light installations decorated the ceiling and rainbow flashes danced about the walls and across the faces of all the shadowy people in the venue. I was late, they’d all been there for a few hours, but I crept in ready to be swallowed by a crowd of faceless bodies, rolling to the waves of the bass as it thumped from the speakers. I wiggled my way to the midst of the madness, my skin taking in the temperature difference from outside to the damp warmth inside. And there she was. Legs crossed in lotus position, arms out beside her, oosing the power of the goddess to the very tips of her long tallon’d fingers. She was floating a good metre off the floor, a spider web of ropes woven all around her in a beautifully symmetrical arch that made her look like she was floating on a throne. She didn’t move, her limbs hugged tight by beautiful purple bonds, “it is her favourite colour”, I thought. Her head masked in glossy black latex, like a bald cap that came all the way over to mysteriously hide her eyes, ending elegantly just above her nose, highlighting her cheekbones. The mask sported a glossy black latex halo, a solid dark shiny disk that framed her head, with silken tassels hanging down past each ear. This was a powerful goddess of the night. I was barely clothed, covered mostly in golden bronze metallic paint. Feeling freer without clothes trying to force me into a certain shape or cover up the beautiful ink that I’ve etched into my skin over the years. My hair was high, and filled with colourful flowers, my neck draped with a heavy necklace of tiny cocaine spoons. My body strapped into a beautiful pink harness that glowed like magick under the lights. My boobs sported matching weighted twirling tassels that I knew I’d show off later. The music was hypnotic, wooing me into it’s dark rhythms, most of the humans that surrounded me, naked or equally dressed in little clothing. I’d brought my flogger with me, feeling proud it was a well made piece and beautifully colour coordinated with the other harness pieces I was wearing. My mind had started to wander, so I asked a person dancing close to me if they’d want a gentle flogging or if they wanted to flog me. My offer was quickly accepted and we moved to part of the club there was room to swing. My body warm, my skin warming up too as the sensation of tickling, teasing, and soft leather smacking into me repeatedly building up to an intoxicating sting. My shoulders leaning into the pleasure of this pain, the thud then the tickle of the ends of the straps as it brushed up my bare back. The sting and tingle as it flicked around to the soft sensitive skin of my inner thighs. My butt cheeks framed by a little delicate black hassling and hanging sequins were bare and flushed pink with the blood flow of excited skin. My body didn’t wince, or jump, it leaned into the intoxicating sensations all over my skin. My mind ceased to be in my body, it felt like it was simply consumed by sensation. A gentle hand runs over the raised skin checking in to see that I am okay, and if I wish to continue. Hips press onto my ass, my body leans closer into the brick wall in front of me as I feel skin against skin, and breath whispering into my ear. I haven’t had another person’s skin against mine in what feels like an eternity. I return to my body, suddenly feeling very raw and vulnerable. 
The music floods back into my brain as I come back down to the environment around me, and we slink back to the dance floor to be enveloped again into the safety of the crowds, suddenly aware of the audience behind us hiding in the shadows enjoying the play we were having, sensing the energy of wild abandon and tactile pleasure. She was there in the crowd, released from her suspended throne of purple ropes. She kissed me on the cheek and I blush. I feel like a kid around someone they admire and look up to. Suddenly all my experiences of kink and all things of the underworld melt away and I feel like an innocent creature next to her. She is covered in beautiful tattoos, the long silken tassels from her latex halo frame her as she looks around then back to smile at me. I tingle with excitement and uncertainty. These things are never rushed, or non consensual but still I was still feeling very vulnerable. Where was the sassy creature that wanted to be hers? I didn’t know but I was enjoying myself regardless. My energy open with a “wise” innocence calmly just letting what ever was going to happen unfold around me. I sighed, this was beautiful, I was safe, cared for and surrounded by wonderful humans who knew what they were doing and had warm sexual energy and love. Flash forward through my body moving and getting lost in the hypnotic rhythm and thump of the music, I was warm sweaty and happy, some how letting go over the tension that had built up and the emotional overwhelm of the day. This was the energy and and mood I’d hoped for… the universe had rewarded me for my patience through the lessons I’d needed to learn that day. There was a small room off the side of the dance floor, it’s roof a web of shibari rope she and I had woven for hours the day before. Suspended in the middle was a giant tire, as if it were her prey and she were the Queen of her web and it was caught in her clutches. The master behind the rope works of art lurked in the shadows, as we pressed our bodies together. A few moments later we are lashed together, a happy sweaty pile, teasing, scratching and writhing around. It’s curious, I thought, this is not quite what I expected tonight. Later I sit on a little crate as she is pleasured by the master, and the other person I’d played with earlier with the flogger. The exhibitionist in me is excited, I am not yet ready to participate, but I love being a voyeur. My body is excited by the unfamiliarity of it, yet not surprised that this beautiful collision of sexual energy has culminated in a beautiful puddle of wonderful people. We all writhe around in pleasure, me on my wee crate and them on and around the suspended tire ropes, with plenty to grip as our legs turn to jelly. Someone, maybe the rope master, I don’t remember – grips, pinches and roughly twists and squeezes my nipples as the tassels had come unstuck from my sweaty skin. Fingernails scratch my skin. The tattoos on my back are dancing with sensations raised above my skin like icing on a cake. I remember how much I love roughness, that fine line between pleasure /pain and being thrown around, and my body sighs in pleasure. Willing to take risks, willing to adventure to push my boundaries and grow. Learning my limits by testing them. Taking my philosophy on emotional intimacy and connection and put it to the practical test. Living life to the fullest, putting intellectual beliefs to the front of my lived experience and holding space for myself and where my mental headspace was at. 
My body, glowing, glistening with dampness, the taste of my pleasure on my lips. My limbs shaking, overwhelmed and on sensation overload – torn between wanting more and not being comfortable all at the same time, outside of the four walls of my temple boudoir. I was in my power, open and vulnerable, willing to share intimacy and connection. Rewriting rejection with scratch marks, practicing self-love with welts across my skin and positively reframing ‘neediness’ with raised red lines over my body. That desire for affection, craving intimacy and wanting the comfort of physical touch are not weaknesses, nor should I be ashamed of my desires and emotive affections. I am not broken, I am just rediscovering my sparkle – she is wonderful, but tonight I reclaimed that devilish part of me and fell in love with myself again. Flossy, the photographer behind the vision of the Queer Tarot Cards, is a geeky queer witch hailing from New Zealand, now living in Melbourne. A creative writer and self-employed web developer, she runs a site called Create Magick. Her work brings self-love and positive queer politic together, telling personal stories about manifesting magick, freedom and creativity. | Mid | [
0.585034013605442,
32.25,
22.875
]
|
On May 1st, 2019, VRDB.com and SlideDB.com were closed. We no longer support VR, AR, iOS-only or Android-only games. We are focused on PC, console and moddable games. If this is your project and you would like to release it on Indie DB, please contact us with the details. A fork of the Xash3D engine ported to Android. It allows playing Half-Life out of the box. NOT available on Indie DB: The news you are trying to read is not available on Indie DB. Only articles related to content released on Indie DB are listed. You can read Xash3D FWGS 0.19.2 on Mod DB. We recommend you return to the news list and browse the links from there. | Low | [
0.485537190082644,
29.375,
31.125
]
|
INTRODUCTION ============ Calcaneal malunion causes complications of traumatic subtalar arthritis, peroneal tendon lesions like as tendinitis, entrapment, anterior ankle impingement syndrome and varus or valgus hindfoot deformity.[@B1],[@B2],[@B3],[@B4],[@B5],[@B6],[@B7],[@B8],[@B9] Although subtalar arthrodesis is capable of relieving subtalar arthritic pain, if arthritis is accompanied by severe deformity, it is difficult to correct calcaneal height, talar declination angle, or talocalcaneal angle caused by malunion. Subtalar distraction arthrodesis for restoration of calcaneal height was introduced by Carr, et al.[@B4] This involves a combined surgery with subtalar arthrodesis and realignment surgery for hindfoot deformity using iliac crest bone block graft. Since the operation was introduced, many researchers have reported it to be effective. Previous authors reported the results of subtalar distraction arthrodesis using a single bone block; however, we experienced subsidence of grafted bone during long-term follow-up.[@B10] In order to solve this problem, we performed subtalar distraction arthrodesis using double bone-blocks and analyzed the results at mid-term follow up. MATERIALS AND METHODS ===================== From January 2004 to June 2007, we carried out retrospective analysis on 6 patients (10 cases) who underwent operation for calcaneal malunion and subtalar arthritis resulting from a complication of intra-articular calcaneal fracture. The average follow-up period was 58 months (from 32 to 113 months). There were 5 males (9 cases) and 1 female (1 case), four of which presented with bilateral calcaneal malunion. The average age thereof was 41 years (ranging from 26- to 64-years-old). The initial treatments for calcaneal fracture comprised three conservative approaches with a cast and seven surgical approaches, including four percutaneous pinnings and three open reductions and internal fixation with a plate and screws. The average period until they received arthrodesis after their initial injury or surgery was 23 months, except for 1 patient who visited the hospital because of complication due to a fracture that occurred 30 years prior. All patients complained of severe pain along the distal fibula and subtalar joint. Also, decreased talo-calcaneal height and subtalar arthritis were found radiographically. The patients reported being treated with conservative treatments, such as medication, physical therapy, and orthopedic shoes, but these were ineffective. Severe range of motion limitations of the subtalar joint were found in all cases upon physical examination, and severe range of motion limitations of the ankle in the sagittal plane were found in three of these. Two cases of valgus deformity of the hindfoot and 1 case of flatfoot deformity was observed. Also, hammer toe deformity of the second, third, and fourth toes were observed in another 1 case. Plain X-rays were generated for the bilateral foot, taken in the same condition with full weight-bearing to evaluate the degree of deformity. In lateral views, the degree of hindfoot deformity in the sagittal plane was assessed by measuring the talo-calcaneal height, the talo-calcaneal angle, the talar declination angle, and the talo-first metatarsal angle. The talo-calcaneal height comprised the distance from the base of the calcaneus to the dome of the talus. Measurement of the talo-calcaneal angle was made along the long axis of the talus and its intersection with the longitudinal axis of the calcaneus. 
The talar declination angle was measured at the axis of the talus and the plane of support. The talo-first metatarsal angle was measured from the axis of the talus to the axis of the first metatarsal bone ([Fig. 1](#F1){ref-type="fig"}).[@B4],[@B11] The degree of valgus and varus deformity in the coronal plane of the calcaneus was estimated in the calcaneal axial view. Computed tomography was performed on all patients to examine calcaneofibular impingement due to a bony prominence of the lateral calcaneus or talar arthritis.[@B12] Based on these examinations, we planned surgery taking into consideration the extent of subtalar distraction, removal range of bony prominences, and correction degrees of valgus and varus deformity.[@B13] Radiological parameters were assessed and compared at 12 months after surgery and at final follow-up. In radiologic analysis, we used paired t-test (SPSS 12.0, SPSS Inc., Chicago, IL, USA, *p*\<0.05) and confirmed if there are significant differences between points preoperatively and points at final follow-up. Operative technique ([Fig. 2](#F2){ref-type="fig"}) --------------------------------------------------- The patient was placed in the lateral decubitus position with the affected side up and a compressive thigh tourniquet was applied. An extensile lateral approach via an L-shaped incision along the lateral aspect was applied ([Fig. 2A](#F2){ref-type="fig"}).[@B8] After the sural nerve and peroneal tendon were identified and protected, an incision was made to the calcaneal periosteum and bony prominences of the lateral calcaneus were removed. Using a lamina spreader, the subtalar joint was exposed, and then the residual cartilage was debrided and the subchondral surface prepared ([Fig. 2B](#F2){ref-type="fig"}). While holding the subtalar joint in distraction, we paid attention to avoid injury to the flexor digitorum longus tendon and to maintain the hindfoot in a neutral or a little everted position after the medial articular capsule of the subtalar joint was separated enough. The degree of distraction was determined by measuring the loss of height in comparison with an unaffected site. In bilateral cases, this was determined according to the correction degree of the talo-first metatarsal angle. After fluoroscopic analysis confirmed the corrected height of the hindfoot, talar declination angle, alignment, and stability, two 6.5 mm cannulated screws were inserted from the posteroinferior calcaneus to the dome of the talus. During screw insertion, the tricortical double bone-blocks needed to be protected ([Fig. 2C](#F2){ref-type="fig"}). During the operation, we first removed sclerotic portions of the subtalar joint and subchondral surfaces. Then, the subtalar joint space was measured again and tricortical double bone-blocks ([Fig. 3](#F3){ref-type="fig"}) to fit the space were harvested from the iliac crest. Then, the bone block was placed in the subtalar joint with cancellous bone. The patient started ROM exercises of the ankle at postoperative 4 weeks and was allowed to bear weight gradually from postoperative 8 weeks. Full weight bearing was allowed when radiological findings showed subtalar joint union. Clinically, the American Orthopaedic Foot and Ankle Society (AOFAS) ankle-hindfoot score was assessed before and after operation. At the final follow-up, physical examination and ankle-hindfoot score were assessed. 
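As a point of reference for the paired t-test described in the radiologic analysis above, the test compares, for each case, the preoperative value of a given parameter with its value at final follow-up; this is the standard formula (not data from this study):

$$
t \;=\; \frac{\bar{d}}{s_d/\sqrt{n}}, \qquad d_i = x_i^{\text{final}} - x_i^{\text{pre}}, \qquad \text{df} = n-1,
$$

where $\bar{d}$ and $s_d$ are the mean and standard deviation of the paired differences and $n$ is the number of feet analyzed; *p*\<0.05 was taken as significant.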
RESULTS
=======

Clinical assessment
-------------------

AOFAS Ankle-Hindfoot scores (100 points) were obtained both before surgery and at final follow-up, and pain, function, and alignment were evaluated. A score of 100 points means that the patient has no pain, no limitation of ROM, is stable, and is in good alignment. It also means that the patient can walk over six blocks without any help and has no problem performing daily activities. Of the possible 100 points on the hindfoot score, the maximum possible for a patient who has undergone subtalar arthrodesis is 94 because of loss of subtalar movement.[@B14],[@B15] Patients were assessed with regard to the pain score and the total AOFAS Ankle-Hindfoot score. The mean pain score was 11.4 points before operation and 35.7 points after operation. The mean AOFAS ankle-hindfoot score, including the pain score, was 43.3 points (12 to 66) before operation and 84.0 points (range, 74 to 91) after operation. These increases represent favorable clinical results. At the final follow-up, the mean dorsiflexion of the ankle was 10° and plantar flexion was 25°. Only one patient, who developed nonunion of the operation site, underwent an additional bone graft surgery.

Radiological assessment
-----------------------

In the radiological analysis, all cases, including the one patient who underwent reoperation because of nonunion, exhibited complete bone union at the final follow-up. The mean talocalcaneal height increased from 66.7 mm (61.0 to 72.5) preoperatively to 73.1 mm (69.0 to 79.5) postoperatively and measured 72.3 mm (55.0 to 78.5) at the final follow-up. The mean talocalcaneal angle increased from 22.2° (16.0 to 30.0) preoperatively to 29.2° (23.0 to 33.0) postoperatively and measured 24.0° (15.0 to 33.0) at the final follow-up. The mean talar declination angle increased from 13.5° (7.0 to 19.0) preoperatively to 21.3° (16.0 to 32.0) postoperatively and measured 18.6° (9.0 to 26.0) at the final follow-up. The mean talo-first metatarsal angle improved from 6.1° (-2.6 to 12.1) preoperatively to -1.3° (-12.1 to 5.3) postoperatively and measured 0.8° (-5.2 to 12.5) at the final follow-up ([Table 1](#T1){ref-type="table"}). These results indicate improvement from the pre- to the postoperative status and some loss of correction by the final follow-up. Nevertheless, statistical analysis showed significant improvement from the preoperative values to both the postoperative and the final follow-up values (*p*\<0.05).

Case review ([Fig. 4](#F4){ref-type="fig"})
-------------------------------------------

A 26-year-old man sustained a right intra-articular calcaneal fracture in a fall from a height, for which he underwent axial pinning as the initial operation. Conservative treatment for persistent postoperative ankle pain did not relieve his symptoms. He underwent subtalar distraction arthrodesis using a double bone-block 3 years and 7 months after the initial surgery. A short leg cast was retained postoperatively for 5 weeks, after which the patient started ROM exercises of the ankle and was allowed to bear weight gradually. At 10 weeks after surgery, full weight bearing was allowed after radiological subtalar joint union was obtained. Eleven months after surgery, the pinning was removed. At the final follow-up, 72 weeks after surgery, the arthrodesis was well maintained and the deformity remained corrected. Walking pain was also much improved postoperatively.
The patient\'s Ankle-Hindfoot score increased from 52 points (pain score 20 points) preoperatively to 90 points (pain score 40 points) postoperatively.

DISCUSSION
==========

Subtalar arthrodesis is an effective surgical treatment for symptomatic subtalar arthritis due to calcaneal malunion after calcaneal fracture. It is often combined with decompression of the lateral wall to relieve symptoms effectively.[@B16] Subtalar arthritis is usually accompanied by anatomical deformities, such as decreased calcaneal height. In such cases, *in situ* subtalar arthrodesis alone is not suitable for relieving these symptoms and can result in poor prognosis and functional limitations.[@B17],[@B18],[@B19] Therefore, correction of deformities at the time of arthrodesis has received attention. In 1943, Gallie[@B20] introduced subtalar distraction arthrodesis using a bone-block as a new solution for treating such deformity; the bone-block was harvested from the middle of the tibia on the affected side. In 1977, Kalamchi and Evans[@B21] introduced a new way to obtain a bone-block from the lateral wall of the calcaneus, improving upon Gallie\'s method. This method can correct deformity in the coronal plane, but not in the sagittal plane. In 1988, Carr, et al.[@B4] introduced subtalar distraction arthrodesis using an iliac crest bone-block and reported good results in 6 of 8 patients.

Patients with calcaneal malunion, traumatic subtalar arthritis, and anterior ankle impingement syndrome due to loss of hindfoot height and talar declination angle are commonly indicated for subtalar distraction arthrodesis.[@B22],[@B23] Many studies have reported favorable results for such patients ([Table 2](#T2){ref-type="table"}).[@B1],[@B6],[@B7],[@B11],[@B13],[@B15],[@B16],[@B24] Myerson and Quill[@B7] suggested guidelines for subtalar distraction arthrodesis using a bone block: the indications for surgery included a loss of talo-calcaneal height of more than 8 mm and radiologically proven anterior tibiotalar impingement because of an abnormal talar declination angle. Pain in the anterior aspect of the ankle was not a prerequisite for this surgery.

Several operative techniques have been suggested as research on subtalar bone block distraction has grown; among them are the posterolateral approach and the lateral extensile approach. The goal of the posterolateral Gallie incision is to restore the talo-calcaneal height and lateral talo-calcaneal angle.[@B15],[@B16],[@B19],[@B20],[@B25],[@B26],[@B27],[@B28],[@B29] However, this approach cannot adequately expose the lateral wall, makes it difficult to operate on the calcaneocuboidal joint, and does not allow peroneal tendon dislocation to be addressed. Meanwhile, a lateral extensile approach is favorable for exposing the lateral wall and for correcting alignment during insertion of a bone block.[@B29],[@B30],[@B31] Also, there is no significant difference in the correction of the talo-calcaneal angle between the lateral extensile approach and Carr\'s technique. In the present study, we experienced no complications with this procedure, such as problems with wound closure after subtalar distraction, postoperative wound necrosis, or infection.

Generally, a tricortical bone-block from the iliac crest is used as the graft material. Most studies that have used an autograft have reported high rates of union, except for patients who smoke. Fresh frozen or lateral calcaneal wall grafts can also be utilized as graft materials.
Studies have reported no complications of donor site pain or infection, and surgical time can be reduced, when femoral head allograft materials are used. However, there are concerns about a lower rate of union compared with an autograft, and loss of talo-calcaneal height may occur during follow-up. Allograft rejection or infection should also be considered.[@B27],[@B32]

This study used a tricortical double bone-block from the iliac crest, harvested in the shape of a truncated wedge, which is wider than a single bone block. Considering the width of the subtalar joint surface, using a double bone-block can increase the rate of union because it fixes the union surface more widely than a single bone-block. It can also help to correct varus deformity, by using bone-blocks of different heights, and prevent the loss of talo-calcaneal height during follow-up.[@B33] Since the height of grafted bone is determined by the height of sclerotic bone and subchondral bone removed intraoperatively, as well as by preoperative measurements of the decreased talo-calcaneal height, a considerably higher bone-block may be needed than that determined preoperatively.

Screws are used to fix the bone-block and the subtalar joint and to limit movement of the subtalar joint. Mostly, 6.5 mm or larger cylindrical screws, which must bear a tremendous amount of force, are used, either partially or fully threaded, to obtain sufficient compression force and prevent collapse or subsidence of the graft. Carr, et al.[@B4] used a fully threaded 6.5 mm stainless-steel lag screw (core diameter 3.2 mm), which, inserted in non-lag mode, was not enough to compress the union site. If the screw\'s size and core diameter are small, resistance to bending force at the union site can be weak, leading to failure of the fixation or malunion. Clare, et al.[@B30] suggested using a 7.3 or 8.0 mm titanium-alloy large-fragment cannulated screw to address this problem. Titanium is better matched to bone than stainless steel in terms of modulus of elasticity and can reduce the chance of damage to the internal fixation.[@B30] Meanwhile, Pollard and Schuberth[@B25] reported that partially threaded screws are favorable for bone graft because they fix the location of the bone-block and allow compression from surrounding tissues. There are two ways to insert screws to complete this procedure: 1) two cannulated screws are inserted toward the talar dome from the posteroinferior calcaneal tuberosity,[@B30] taking care not to break the bone-block during the procedure; or 2) an additional cannulated screw is inserted to put pressure on the anterior aspect of the subtalar joint.[@B25] In this study, two 6.5 mm partially threaded titanium cannulated screws were used, considering the size of a typical Korean\'s calcaneus.

The mean rate of union reported in the literature is 96% (83 to 100) ([Table 2](#T2){ref-type="table"}).[@B34] This is similar to 94% for *in situ* subtalar arthrodesis after calcaneal fracture[@B35] and 97% for primary subtalar arthrodesis.[@B36] Autologous bone is commonly used, but homologous bone can also be used.[@B16],[@B27],[@B37] Trnka, et al.[@B16] noted 4 cases of nonunion among five cases of allograft, while other authors have reported union rates over 90% even though they used allograft bone.[@B16],[@B27],[@B37] Chen, et al.[@B13] reported satisfactory rates of union through sufficient decortication of cortical bone from the subtalar joint, removal of avascular bone, and cancellous bone graft.
In this study, radiological bony union was achieved in all patients at an average of 6 months. The one patient who showed nonunion at follow-up also achieved union after a reoperation with autogenous bone graft.

The AOFAS Ankle-Hindfoot scale (maximum 94 points) has been used for clinical evaluation of patients to compare and analyze pre- and postoperative results. Most patients show good results clinically, with increased scores after surgery and an average of 73 points (64-83) at final follow-up.[@B34] In this study, scores increased from 43.3 preoperatively to 82.8 postoperatively, similar to the results of previous studies.

Radiological assessment of the talo-calcaneal height and talar declination angle is typically undertaken to evaluate improvement of hindfoot alignment in the sagittal plane. In our results, the increase in talo-calcaneal height after surgery, compared with before surgery, and the subsequent decrease at final follow-up are regarded as outcomes of weight-bearing and absorption of the bone-block. Myerson and Quill[@B7] noted unfavorable results in seven of fourteen cases after operation by Carr\'s method, mainly attributed to loss of talo-calcaneal height due to absorption of the bone-block. Chan and Alexander[@B6] improved on Carr\'s method and grafted a double bone-block; after this procedure, the reported loss of talo-calcaneal height was only 1.4 mm, an excellent result compared with a reduction in height of 4.7 mm when a single bone-block was used. Also, Garras, et al.[@B27] used structural allograft from the femoral head, while Zion, et al.[@B38] used a ramp cage made of carbon composite material instead of a bone-block; both studies reported satisfactory results. In this study, the loss of height was 0.7 mm in the short term (mean 19 months), as previously published.[@B33] At final follow-up, the loss of height was 0.8 mm and the decrease in width was 0.1 mm. These results seem favorable because there was little difference between the short-term and mid-term follow-up ([Table 3](#T3){ref-type="table"}).

Limited range of motion was unchanged or barely improved in our study, as in other studies. The average range of motion of the ankle was 10° of dorsiflexion and 25° of plantar flexion, essentially unchanged after surgery. Degenerative changes in adjacent joints, usually the talonavicular or calcaneocuboidal joint, are noted in 0 to 26% of cases.[@B30] Generally, additional surgery is not needed, and the symptoms of degenerative changes improve with conservative treatment.

Although patients are able to obtain satisfactory results after subtalar distraction arthrodesis, such as alleviation of symptoms and recovery of daily living skills (e.g., walking, return to work), a number of complications have been reported as a result of the complexity of the procedure, soft tissue problems, and so on. Postoperative complications such as wound infection, nonunion, sural nerve neuralgia, varus malunion, persistent heel pain due to implant extrusion, transposition and dislocation of the grafted bone-block, and failure of internal fixation have been reported.[@B1],[@B4],[@B6],[@B7],[@B14],[@B39] After restoration of height by a bone-block in subtalar distraction arthrodesis, traction neuralgia can occur, and the sural and tibial nerves are also at risk for injury.
A complex regional pain syndrome may occur or may worsen because of such injuries.[@B15] In most studies, this depends on the severity of lateral calcaneal extrusion, and it is recommended that the lateral wall be removed in order to reduce the width of the calcaneus and to decompress the peroneal tendon and the sural nerve.[@B29]

As described above, the indications for subtalar distraction arthrodesis are relatively well formulated. However, previous studies[@B21],[@B22] still lack a specific protocol for the number of bone blocks to use for distraction according to the type of calcaneal malunion. Therefore, the purpose of this study was to provide useful mid-term follow-up results for developing better indications for surgery.

Subtalar distraction arthrodesis is an effective surgery for patients with various anatomical deformities, including subtalar arthritis and loss of talo-calcaneal height due to malunion after displaced intra-articular calcaneal fracture, and, as many studies have reported, satisfactory results were obtained from this surgery. The subtalar distraction arthrodesis using a double bone-block that we describe here led to excellent results, not only for alleviation of pain but also for functional recovery and restoration of the anatomical structures by correcting deformities. Accordingly, this procedure should be considered for patients with serious loss of talo-calcaneal height and severe pain due to subtalar arthritis.

The authors have no financial conflicts of interest.

{#F1}

{#F2}

{#F3}

{#F4}

###### Mean Range of Radiological Measurements before and after Operation

TCH, talo-calcaneal height; TCA, talo-calcaneal angle; TDA, talar declination angle; TFMA, talo-first metatarsal angle.

###### Summary of the Main Results from the Literature

###### Comparison of Mean Range of Radiological Measurements Short Term Follow Up and Mid-Term Follow Up

TCH, talo-calcaneal height; TCA, talo-calcaneal angle; TDA, talar declination angle; TFMA, talo-first metatarsal angle.
0.597285067873303,
33,
22.25
]
|
Meloetta C-Gear Skin Approaches

Do you enjoy Pokémon Global Link promotions? How about those exclusive Pokémon-themed C-Gear Skins? If you answered yes to either one of these questions, you're in for a treat. Starting on March 7th, you can get a Meloetta-themed C-Gear Skin to complement your Pumpkin Pikachu, Keldeo, and Custom Klink C-Gear Skins. This special Meloetta C-Gear Skin is only available for the Pokémon Black Version 2 and Pokémon White Version 2 game cards for the Nintendo DS.

Is that still not enough to get you singing? On that same day, the Pokémon Global Link is also offering the Meloetta Musical as part of the promotion. Just head over to Nimbasa City with your Meloetta and start directing it in a brand-new stage performance. It's a performance you're not going to want to miss!
0.6293103448275861,
36.5,
21.5
]
|
In order to survive in today's world, you have to get REALLY good at suffering. There's a way, actually many ways, to become tougher. And I can teach them to you. You can thank me later. We are NOT destined to live with the amount of willpower that we were born with. There are STRATEGIES, and there are METHODS that can help us get closer to our potential. And in contrast to so much else, this isn’t just empty talk. It’s almost stupid how many books are out there competing for our attention, all claiming that they can help us do the things we previously thought were impossible. Some are garbage. You probably know this already. But there are some phenomenally inspiring books out there, filled with wisdom, and they are out there just waiting for you to open them. What I’ve done here is go through my list of the first 100 books that I ever read, and picked out the best ones relating to self-discipline and willpower. It turns out that there were 37 on that list. In a future article, I will go from 101-200 and pick out the best books from there. The link above is a non-profit fundraising campaign for Doctors Without Borders. That is my main cause, and I’ve given everything to them. My entire website is non-profit and I urge you to give what you can to this worthwhile cause. I am giving away all of my book notes in exchange for donations to support their life-saving work around the world. And you can download your free copy of my OWN book, The Godlike Discipline Handbook, by following this link HERE. It features 13 concepts that are absolutely critical to achieving superhuman self-control, and gives you 64 specific, actionable strategies to help you master self-discipline and willpower. Lastly…remember always, that the person who reads books lives a thousand lives, but alas, the non-reader lives but once. This is one of the seminal works on the science of self-control, and Roy is referenced so many times it’s almost impossible to read a self-improvement book without seeing him mentioned. There’s a reason for that: This book is powerful beyond measure. Major Lessons: Willpower can be depleted and replenished You have a finite amount of willpower that is depleted as you use it Do not attempt any important tasks while running low on glucose The time it takes to complete a task expands to fill the amount of time allotted to it Making decisions saps your willpower Train yourself to face worse conditions than you will ever actually face Use pre-commitment to conserve willpower Forge ahead one day at a time Form a specific willpower implementation plan to be followed when confronted by certain temptations Monitor yourself every day Tell yourself you can have some unhealthy food later if you pass up on it now I am a Steven Pressfield evangelist. The man inspires me daily to do my best work, and he’s written four of my favorites. The War of Art introduces us to the idea of the “Resistance”, or basically anything that stops us from achieving something great. I read this way back in 2014 and I believe it was one of the first times that I ever hugged a book. 
Major Lessons: Resistance can’t be reasoned with Resistance will say anything and do anything to prevent you from doing your work Resistance is strongest close to the end There will never be a moment when we are unable to change our destiny Resistance can and has been beaten Respect Resistance because it can beat you on any given day The artist pursuing his calling has volunteered for hell, whether he knows it or not Taking a few blows is the price of standing in the arena and not on the sidelines Resistance is like a telemarketer: Once you so much as say hello, you’re finished It’s better to be in the arena getting stamped by the bull, than to be up in the stands or in the parking lot “Whatever you can do, or believe you can, begin it. Boldness has genius, magic, and power in it. Begin it now.” – Goethe Dreams and inspiration are as common as dirt. So are sunrises. But that doesn’t make them any less of a miracle. We most fear that We Will Succeed If we were born to throw off the order of injustice and ignorance of the world, then it’s our job to realize it, and get down to business The artist must do his work for its own sake Ask, “If I were the last person on earth, would I still do what I’m planning to do?” Now here’s a man with a simple and powerful message. All your habits have the same structure: Cue –> Habit –> Reward. This means that when we experience a certain cue, say, driving by a fast-food restaurant, we execute a habit. The reward associated with that habit is grease in this example. But you can change your “habit loop”, as Duhigg calls it, into anything you want. This book explains how. It’s been immeasurably valuable to me personally. Major Lessons: Find a simple and obvious cue Clearly define the rewards Cultivate a craving to fuel adoption of the habit The cue must trigger a craving for the reward Champions do ordinary things but they do them without thinking Use the same cue and the same reward but a different routine Go for small wins Mentally rehearse how to respond and deal with failure and setbacks Find the absolute root causes of problems Crises afford the opportunity for changes Your habits are what you choose them to be We need to see small victories in order to believe that a long battle can be won Plan for setbacks and don’t let them get you off track Once you diagnose the cue, routine, and reward, you gain power over your habits Have you heard about this book? I hadn’t. I was traveling to a work conference and read pretty much the entire book on the plane. I wasn’t expecting there to be so much wisdom here, but I guess that’s just another case of me being wrong, now isn’t it? 
Major Lessons: The enemy is our self concept which was based on past performances and our beliefs about what we can and cannot do A heart built on the love of temporary things will have insecurity as a constant companion Winning doesn’t necessarily mean that you were great, or even good Sacrifice pride and status for growth and experience Keep death in mind at all times Extraordinary performance often comes as the result of pursuing extraordinary experiences Tomorrow may never come It may be only through eyes rinsed with tears that we can see who we really are When you’re attached to something you can’t control, you live in constant fear of losing hold of it The more you give in, the easier it is to do Beauty and presence as well as focus are always there regardless of whether you choose to experience them or not The more you look for beauty, the more you will find Try and find beauty everywhere Everything that is around you was meant to be there What you don’t have in this moment, you don’t need Take a break from work every hour and a half to really see and feel and be present Pressure comes from what we think about the situation and not the situation itself Living your dreams means loving what you’re doing and not the outcome of what you’re trying to do Be grateful when your opponents do well or fight hard Accept every circumstance that comes your way just as if you chose it Everything happens twice: first in your mind and then in your life Be comfortable being uncomfortable Your main pursuit is absolute fullness of life Consistently challenge what you know Winning is dangerous because we don’t learn anything, or at least it is easier not to learn anything Master the in-between moments of life by realizing that nothing is more important than the present moment Leo runs one of the most popular self-improvement blogs on the internet, called Zen Habits. He’s been around for a while, and has made a name for himself out there in the sea of mediocrity otherwise known as the real world. Zen Habits is all about simplicity, and it’s something I’ve always appreciated in my own life. I make everything as simple as possible, but not simpler. His is a great book with some very practical advice. You may have been wondering when we were going to get into some Tony Robbins! He’s one guy that has known what he’s been talking about for decades. He’s read even more books than I have (at the time of THIS writing, anyway!), and the scope of his knowledge is simply awe-inspiring. He’s the best in the business, and there’s a legitimate reason for that. Personally, I find his book titles dumb as anything, but you can’t argue with the results that his books have helped me to achieve. 
Major Lessons: Winners have a sense of certainty Make a decision not to be less than you could be It is what we do consistently that shapes our lives Never leave the scene of a decision before taking a concrete action Achievers rarely see a problem as permanent You travel in the direction of your focus You will get a better answer if you ask a better question Move your body in the direction you want to go Put yourself in a state of determination instead of trying to push yourself Cut off any other possibility except success Giant goals produce giant motivation Decide whether you are absolutely committed to achieving your goals that you set Spend 90% of your time on the solution and only 10% on the problem Act congruently with your values Spend less than you earn and invest the difference Scarcity is an illusion Spend so much time improving yourself that you have no time to criticize others You want MORE from Tony Robbins? Well ok, he’s more than able to deliver. Unlimited Power (again, dumb title, in my opinion) has changed more lives than Netflix. If you have time to devote to a larger book like this, then it’s absolutely worth it. Remember the guy who started Chicken Soup for the Soul? Well he’s packed this book with useful advice and game-changing insights. I was actually extremely surprised by how much I took away from this one, and it seemed like I was taking notes on every page. There is some major substance here, and Jack knows how to get an idea off the ground at all costs. Get past the title, and draw strength from this book. Major Lessons: “If we did all the things we were capable of, we would literally astound ourselves” You create everything that happens to you No matter how small the decision, make one We often achieve exactly what we anticipate Vague goals achieve vague results Pursue a breakthrough goal that would change everything if you reached it Stop thinking the same thoughts Ask repeatedly for what you want Measure what you want more of It’s always too soon to quit Provide more service than that for which you are being paid Use the end of the day to reflect and plan because that is what the unconscious mind focuses on during sleep As soon as we feel as if we should do something, we create an internal resistance against doing it You have handled everything that has ever happened to you A 100% commitment is easier to keep When you’re happy doing what you love, you’re already successful Work on your core genius and pay people to do everything else Everything that you need to solve any problem, or achieve any goal, is already inside of you For those of you who haven’t heard that phrase before, it comes from Mark Twain. Basically, the idea is that if you eat a live frog at the beginning of the day, then you can go the rest of the day with the comfort of knowing that the worst is behind you. Do you see how you can relate this to your most dreaded tasks and activities? Brian Tracy is literally one of the superstars of the productivity and discipline space, and I’ve read a large number of his books. They really got me into the idea of taking massive action. It’s still something I’m working on, of course, but that’s what great books do. They light the way. Pressfield for the win. Again. His books are short (at least his non-fiction ones are), but his ideas cut to the heart of what really holds us back from achievement. I can’t recommend him enough, and I urge you to check out both “Turning Pro“, and “The War of Art“. “The Warrior Ethos” is also incredible. 
Major Lessons: It all starts with a decision Our lives are entirely up to us The professional says “One day at a time” The professional acts in anticipation of inspiration The real enemies lie inside whereas the physical opponents are just stand-ins The hero wanders, the hero suffers, and the hero returns to give his gift. You are that hero. I listened to this one as an audio-book, which I don’t normally do. But regardless, I stopped the tape numerous times and took notes. Procrastination is a manifestation of the “Resistance” that Steven Pressfield talks about, and Neil Fiore has been working on developing an answer for decades. The Now Habit is his answer. Major Lessons: Become your own source of approval Procrastination has been learned, so it can be unlearned Use work to give more pleasure than procrastination can provide Think about worse reasons for not starting You don’t have to do anything in order to be a worthwhile person Schedule play time so that it becomes legitimate and guilt free Feeling overwhelmed is natural and should not lead you to believe that you won’t be able to do the task at all I feel as if Robert Greene doesn’t get enough credit for being the brilliant man that he is. Sure, “The 48 Laws of Power” is the #1 most-requested book in prison libraries, but that doesn’t seem like enough recognition to me. Mastery breaks down the exact process everyone goes through when they become really, really good at something. It’s almost poetry, and so it definitely deserves a spot on this list. Major Lessons: Mastery is the latent power within us all You are setting an example for humanity concerning what we can accomplish Mute your desire to impress and be the focus of attention in favor of learning Value learning over money Our minds close to other possibilities if we feel we already know something Trust in the learning process and move past negative emotions Resist the temptation to be nice to yourself in your criticisms You must continually start over and challenge yourself Masters are those who have struggled in order to get where they are Adopt the philosophy of complete and radical acceptance of human nature Speak through the power of your work It is the choice of where to direct your creative energy that makes the master Cultivate the ability to entertain two contradictory thoughts at the same time and doubt your previous beliefs Manufacture deadlines for yourself Look for that one thing that will yield amazing results when capitalized upon but not at the expense of equal success Cultivate profound dissatisfaction in your work and the need to constantly improve your ideas The problem you are working on should always be connected with something larger The time that leads to mastery is directly dependent upon our level of intensity and focus Quiet the anxiety you feel when confronted with anything that seems beyond your capabilities Your experience of something that occurs in the world physically alters your brain Another excellent addition to this list from the one and only Brian Tracy. This man is a productivity genius, and he’s going to affect you in positive ways. Give this book a chance and you’ll be rewarded with greater self-discipline and ultimate willpower. 
Major Lessons: Self discipline is the magic quality that makes all other success possible Everything is hard before it is easy Be willing to pay the price Do what needs to be done even when you don’t feel like doing it To become someone that you’ve never been before means that you have to do something that you’ve never done before You must do the things that average people don’t like to do Your mind can only hold one thought at a time, so make it a helpful one More gold from Brian Tracy. If you haven’t noticed the trend, it’s that this man can do wonders for your self-discipline and your willpower. Get him in your corner, and get his ideas working for you, and you’re going to surprise yourself with your progress. This book surprised me, and I surprised MYSELF by even buying it in the first place. I always kind of thought of him as this wacky TV-personality that didn’t actually have anything to do with discipline and self-control. Wrong again, Matt Karamazov! The Big Picture comes across as a well thought out meditation on asserting control over your actions, and setting yourself up for continued success. It’s almost intimate, as he’s speaking directly to you about what might work in your life. He definitely gained a new fan, even though I’ll never watch one of his P90X videos. Major Lessons: Do your best and forget the rest Have a plan Switch things up that no longer work for you Just doing it will immediately make you feel better afterwards Every meal should support your goals and lifestyle choices You can add intensity to everything you do Gradual progressive overload can be used anywhere and even outside the gym Curveball! You will not find this book in the self-improvement or business section of your local bookstore. But as with all great fiction, it contains profound truths about what we are capable of, and what it might look like once we set out on our way. This book is extremely easy to get through, and I finished it at work, all in one sitting. Granted, it was a slow night at the bar and I spent it with my head buried in a book…but it was worth it. Major Lessons: Each day, each hour is part of the good fight Believe yourself worthy of what you fought so hard to get It’s the possibility of having a dream come true that makes life interesting Everybody seems to know how other people should lead their lives, but no idea about how to live their own There is one great truth on this planet: Whoever you are and whatever it is that you do, when you really want something, it’s because that desire originated in the soul of the universe When you can’t go back, you can only think about the best way of moving forward Every day is here to be lived or to mark our departure from the world The fear of suffering is worse than the suffering itself Dying in the midst of pursuing your personal legend is better than dying like those endless millions who never even discover what their personal legend is The world we live in will become better or worse depending on whether we become better or worse. That’s where love comes in. Because when we love, we strive to become better than we are. Finally, we get to one of my favorite authors of all. Seth Godin runs one of the most popular business blogs on the planet (and we can only assume, the universe), and is mostly revered by all. I place myself in that group of course. This man’s whole life seems to have been dedicated to getting people to become remarkable. Different. And he uses short sentences. A lot. For impact. He’s better at it than I am. Clearly. 
I’m sure that he doesn’t know that I’m in love with him…but he will. I’m determined to get him as a guest for Godlike Discipline, but so far that hasn’t happened. I also found out about a speech of his in my city the day after it happened. Not cool. So who is this guy? Chris is a Canadian astronaut who has spent time on the ISS. The man knew he was going to be an astronaut before it was even technically possible for Canadians to BECOME astronauts. Read this treasure of a book to find out how. Major Lessons: You have a lot of choices and every decision matters What you do each day determines the kind of person that you will become Do the things that move you in the direction of your dreams, but make sure those things interest you so that whatever happens, you’re happy Be as ready as possible, just in case All you can control is your attitude If you have the time, use it to get ready Picture the most demanding challenge and then visualize what you would need to do to meet it Fear comes from being unprepared and without control over what will happen Have a plan for dealing with problems as they arise Helping someone else look good doesn’t make you look worse As a leader, set up your team for success, then stand back and let them shine This handy book was written by a CEO. Most of those books are worthwhile because not everyone gets to be CEO. You have to bring something special to your organization in order to be trusted with the top spot (the good ones, anyways), and Mr. Pozen has some valuable insights to share. And it’s good to get a little personal with the CEO too. Robert comes across as very likeable and knowledgeable, and his book is definitely worth reading. Major Lessons: Focus on the results that you want to achieve, instead of the hours that you work Plan your work around your strengths and skills Spend a higher percentage of your time on high priority tasks and objectives Be aware of spending more time on a project than necessary and of when the project is good enough Respond immediately when possible instead of wasting time in the future getting reacquainted with the request or task A strange choice? Perhaps. But there is wisdom in this book, and centering yourself in the present moment will do wonders for your productivity. I’ll put a disclaimer out there though that this book gets a little New Age-y at times. But that’s ok. If you can tolerate a few chapters of that, there is some major wisdom to be gained. I certainly took a lot from it and it continues to affect me in positive ways. Eckhart Tolle claims to have been homeless and living on a bench in a state of blissful gratitude. Do you believe that? Strangely, I certainly do. Major Lessons: Become intensely conscious of the present The past doesn’t exist anymore and the future will never exist The present moment is all that you will ever have Death is a stripping away of all that is not you until you realize that there is no you and there is no death Withdraw attention from the past and future whenever they are not needed Learning from a mistake makes it no longer a mistake Have a stillness inside you that never leaves you The stillness and vastness that enables the universe to be is also present within you Do not make living and dying into a problem Become like a deep lake; still at the bottom, no matter what is going on at the surface Only those who have transcended the world can bring about a better world Brian Tracy appears so many times on this list for a reason. 
I urge you to check out some of his stuff if you are serious about self-improvement and productivity. He’s right up there with Tony Robbins as being one of the best of the best. We can all learn a thing or two from both of them. Major Lessons: Do what other successful people are doing If you want the effects, simply repeat the causes You can choose what your attitude will be every minute of every day You perform as well as you believe yourself capable of performing Carry on with your goals in the same mood as when they were set in the first place Tim Ferriss broke out of obscurity with this instant classic, and I thoroughly enjoyed reading it. It’s actually one of the very few books that I’ve read twice. It’s that good. He basically coined the term “lifestyle design” and he has set up his entire life to be one great big classroom. Lately, he’s been deconstructing top performers and teaching others how to elevate their game. This man is a hero to many, and to me as well. Major Lessons: There is hardly any competition for the top Success can be measured by the number of uncomfortable conversations that you are willing to have The most important actions are never comfortable Be productive instead of busy Ask, “If this is the only thing that I accomplish today, will I be happy?” There are seldom any real emergencies Let a few small bad things happen in order to focus on making the important big things happen These two pretty much go together, so I put them together here as well. Tim basically experimented on himself constantly, and compiled all the results of his experiments into this sensational book. You don’t have to take ice baths and a lot of crazy supplements if you don’t want to, but he’s done it all, and brought us the best of what works. The best way to read this book is like a reference book. I read it all the way through, but by all means, skip to the chapter on weight loss/gain, or running faster etc, if that interests you more. There’s something here for everybody. Major Lessons: The decent method you follow is better than the perfect method you don’t Doing the uncommon requires uncommon behavior Develop singular focus on the process Take at least one nap throughout the day Trust data instead of the masses There is nothing in biology yet discovered that points to the inevitability of death You getting tired of hearing this guy’s name mentioned on this list? Well you can always go to another discipline-related site. Wait…no…don’t do that! This is the last Brian Tracy book on this list. As always, I have notes on every single one of these books, so if you’d like me to send them to you all at once, go HERE. Jason Selk is another good one. I discovered him back in 2014 and I remember looking at everything differently after reading this one. A number of the books on this list will do that for you. This one is all about the mental game, and properly preparing yourself to compete. It doesn’t matter if you’re competing against your to-do list, or the boxer across the ring from you, Jason Selk will give you an edge. Or rather, he’ll help you give yourself an edge. 
Major Lessons: If you are thinking about what is going wrong in your life, then you won’t be able to think about what you need to do in order to make it better People end up accomplishing what they believe themselves capable of accomplishing The self image will eventually regulate behavior and outcomes in accordance with the dominant beliefs of the individual Continually tell yourself that you have what it takes to be the kind of person you want to be 5 percent of the people do 95% of the winning; most people will not be as prepared as you are Relentlessly focus on solutions A solution exists for your problem You must breathe life into every solution you identify Make success permanent and failure temporary Mental toughness can be said to be present when the mind can control the body enough in order to do what needs to be done to be successful I dare say this book was a little advanced for me when I first read it. Of course, I didn’t think so at the time, but I’m sure I’ll have to read it again at some point in order to truly get everything out of it. David Deida is a very smart man, he’s done what most people only think about doing, and I highly recommend the book. It definitely explained some things to me that I’m still learning to this day. Solid addition to this list. Ah, Napoleon Hill. What a guy. Look past the fact that he got rich writing self-help books and was never actually successful until that point. I use the term “successful” very loosely of course, because success comes in many different forms. The back story behind this book is that he was asked by Andrew Carnegie (the steel magnate) to interview all his most powerful friends and find out what they all did and didn’t do when it came to becoming successful. Hill spent 25 years doing this, and the result is this book. A classic. Major Lessons: The belief that success for you is inevitable makes you into an effectively new person and the world can’t help but change for you Cultivate the burning desire to win The practical dreamers will always be the ones to drive progress Every failure brings with it the seeds of an equivalent success No one has ever been defeated until defeat has been accepted as a reality Our only limitations are those that we set up in our own minds Faith removes limitations We rise or stay at the bottom due to conditions that we may decide to control The conversion of desire into its monetary equivalent is no more miraculous than the formation of the universe Definiteness of purpose must be the starting point Temporary defeat is not permanent failure No leader is ever too busy to do what is required of him as a leader The subconscious mind works day and night and responds to all manner of stimuli This book may appear like a strange choice for me, and indeed it was. But I regret nothing, as it more than lived up to what I heard about it. 37Signals (now BaseCamp) is the web development company that Jason Fried co-founded. He re-thinks old business knowledge in this book and shares some of his insights about real productivity and progress in the modern world. It’s a short book, and anyone in business (or most other people, for that matter) can find a lot of value in it. 
Major Lessons: That “real world” may be real for some people, but you don’t have to live in it Evolution has always built upon what has worked and so should you The real hero is not the workaholic but the person who got home early because they found a solution to the problem Ask what you really need to get started See how far you can get with what you have Cut out the good stuff and leave only the great When you’re stuck on something, that means you’re not doing other things Don’t throw good time over bad work You build momentum by finishing one thing and then moving on to the next thing Wayne Dyer is a personal role model of mine, and don’t let his place on this list mislead you. He was one of the most brilliant men on the planet and one of the greatest influences on my entire life. I cannot overstate that fact. Wayne taught me so much about the world and how it’s possible to be happy in it, and a lot of his stuff can be translated into self-discipline and self-control as well. The man did whatever it took to get his message to the people, and this message has literally transformed millions of lives. My own included. Read the damn book already! Major Lessons: Be willing to do what it takes to make your visualization into reality Any Napoleon Hill fans here? You may or may not know that he also wrote this one, called “The Law of Success”. Now, it’s a long one (actually 16 booklets), but I definitely got some value out of it. The writing style is, well…old. But it doesn’t feel like strenuous reading. I’d read “Think and Grow Rich” first if you’re just getting into Napoleon Hill, but this one is also out there should you choose to pick it up. I for one certainly don’t regret it. Major Lessons: Power is applied knowledge Acting with initiative constantly will make it more likely in the future Teaching others will develop the same skills within you as well Only excellence inspires jealousy; mediocrity is ignored Thought is the only thing over which you have total control Nothing great is achieved without temporary defeat Refrain from labeling anything a failure until you have had enough time with which to reflect There can be success without happiness, but it’s never worth it Guard your thoughts because of how easily they can be influenced Any kind act or thought, regardless of whether it is reciprocated, has a positive effect on your own character Your reputation is made by others, but your character is made by you The most successful people reach decisions quickly and stand by them firmly until they are carried out Do you remember the last time you hugged a book? I’ve hugged many books since this one, but Man’s Search for Meaning is REQUIRED READING. I cannot overstate the importance of this book, and it’s not just an account of one psychiatrist’s imprisonment in several concentration camps during world war two; there are valuable discipline lessons to be learned from it as well. I urge you to pick this one up at some point in the next 3 months. Major Lessons: Happiness and success must be reached indirectly The last of human freedoms: to choose one’s attitude in any given set of circumstances, to choose one’s own way Without suffering and death, human life cannot be complete Ask what life expects from you The hopelessness of our struggle does not detract from its dignity or meaning A meaningful life can and should include all of your sufferings Man does not simply exist, but also decides what he will become in the next moment Chris is one of those guys who is just doing it right. 
When you read any of his stuff, you see that he’s not an internet marketer, and he doesn’t want your money unless you feel as though he’s helped you. Well, he’s helped me and he’s gotten some of my money! As the title might suggest, the reader is made to realize that he or she doesn’t have to live their lives according to anyone else’s rules. That takes discipline, and there is a large helping of it here. Another solid addition to the list. Major Lessons: You don’t have to live your life the way other people expect you to The key to a better lifestyle is not less work, but better work The most memorable times of our lives are often the most challenging Work on the meaningful stuff that is meaningful both now and in the future Start taking your dreams very, very seriously In the end you probably won’t be satisfied with a life that revolved solely around you Momentum drives progress and growth The person who says something is impossible should not interrupt the person who is doing it We just saw this guy! Again, don’t let his placement in the list take anything away from him. Chris is world-class when it comes to making everyone around him better, as well as himself. He doggedly pursued the goal of traveling to every single country on earth, and he did it. That alone would be impressive, but he runs a very successful blog (The Art of Non-Conformity), and a yearly summit. He’s one of the most accessible guys that do this kind of thing for a living, and his competitive advantage is that he actually cares. Major Lessons: Whatever we want to learn, the possibility is readily available for each of us The journey produces its own rewards Don’t save anything for later Cultivate an emotional awareness of death instead of just an intellectual one You have to be deliberate about doing what matters to you Any real trial will challenge you to your core Value the overall experience enough to persevere We tend to overestimate what we can do in a day, but underestimate what we can do in a year If your family and friends don’t support you, then you need to find people who do Know when to quit or change tactics If anything is going to keep you up at night, let it be the fear of not following your dream Regret is what you should fear the most Be afraid of settling As you gain confidence, “I can do this!” becomes “What else can I do?” Entrepreneurs are willing to work 24 hours a day for themselves, but not a single hour doing something they hate I don’t recall how this book came into my life, but I still think about it after all this time has elapsed since I finished it. I took about 4 pages of notes in all, and I give this book my heartfelt recommendation here. They go all out with the exercises in this book, and each one is worth trying out. I often wonder, how many people actually do the exercises in books like these? Well I went ahead and placed my faith in these two authors, and I was not disappointed. They will help you cultivate courage, creativity, and willpower in abundance. 
Major Lessons: If you want different results, you’ll have to do things differently Adversity is the “weight” with which you build up your inner strength The moments when you want to quit are the moments when it’s most important not to quit Asking pain to stop is like asking for your education to stop Commitment requires an endless series of small painful actions Anger puts your life on hold while the world moves forward without you If unchallenged, negative thoughts will just grow stronger A human being can never be more than a work in progress The future is yours to lose or gain Your future is in jeopardy every moment, and that develops incredible urgency The future may bring you darkness, but it can’t take away your ability to create light In case you’re joining this list late, Wayne Dyer is one of my major role models. I even try to speak like this guy sometimes. Seriously, listen to him talk. He’s the calmest, wisest, most caring person I know. And I know a lot of calm, wise, caring people. In Change Your Thoughts, Change Your Life, Wayne breaks down the 81 verses of the Tao Te Ching, and distills Lao Tzu’s ideas into 81 essays. I listened to this as an audio-book, and I was untouchable for the rest of the month and beyond. Major Lessons: Nature doesn’t create storms that never end In every moment, you have a choice The cure to a life of unrest is to choose stillness If you realize that you have enough, then you are truly rich For you to know weakness, you must have once felt strength Challenges confronted do not arise Simplify and take on difficulties while they are still small Every individual action is simple Take one single simple step Take preventative control over your health and affairs One action or non-action, one day at a time When people know that they don’t know, they can find their own way Be like the water in the ocean and never put yourself above anyone Never assume that you know what’s best for anyone and not even yourself Without the graciousness of your competitor, there could be no winning or losing This next work of art is one of those books that gets referenced so often that it’s difficult to ignore. I tend to gravitate towards books like that because I know they are probably popular for a reason. If you’re just becoming a student of discipline and achievement, you’re going to hear the term “deliberate practice” over and over again. This describes the process of systematically becoming better during each and every practice session, or assigned task. Deliberate practice will separate you from the “also-rans” and Geoff Colvin digs deep in this one. Major Lessons: There are so few people today who are truly excellent at what they do Focus on the skills that will create dominance An observer can point to our mistakes much better than we can Deliberate practice is difficult and you can take solace in the fact that most people won’t do it The small things that elite performers do take lots of practice to implement successfully Look further ahead Best performers set precise roadmaps to get to where they want to go You learn more during a crisis situation than during any other time Creativity is rarely a burst of inspiration and more often the result of deliberate practice You’ve made it through 37 books! I wonder how long it will take you to finish them all. It would probably take me 45-50 days, but everyone is different! Laura’s thesis is that we all have more time than we think we do. In this, I think she is absolutely correct. 
She even stopped by Godlike Discipline to answer some of my questions on the subject in an interview, located HERE. Being intentional about how we spend our time, and tracking it to keep ourselves accountable can really be life-changing if we commit to it. She gives us the ins and outs in this very accessible and enlightening book. Major Lessons: Slow down and actually live There is enough time to do everything Plan your week instead of your day You can choose how you spend your 168 hours You have more time than you think When estimating how long we work, we tend to unconsciously shift to cultural pressures or norms Create a blank spreadsheet with 168 hours on it Complete a time log for one week Any “work” that is not moving you towards your professional and personal goals should not be labeled as work Follow through on anything you tell yourself you’ll do, as a matter of personal integrity The world is not going to make it easy for you to stick to your priorities Change your meeting mindset: You were invited because you don’t have anything better to do You get 30-60 hours per week, or 1500-3000 hours per year at work Time spent doing one thing is time not spent doing another There is time for anything you really want to do WILLPOWER AND WHAT TO DO NOW That’s it! 37 of the best books for increasing your self-discipline and willpower. I hope the journey has been enlightening for you. I loved each of these books, and especially for what they taught me. If you know of anyone else who could benefit from reading one of these, please share this article with them. Godlike Discipline is completely non-profit (and always will be!), mostly in support of Doctors Without Borders. Your help would mean the world to us, and all the people whose lives you will be helping to save with us. The major lessons above were taken from my personal notes that I keep for each book. If you’d like a personal copy of those notes, along with every note for every book that I’ve ever read (FULL LIST HERE), then simply contribute to our non-profit campaign HERE. And you can download your free copy of my OWN book, The Godlike Discipline Handbook, by following this link HERE. It features 13 concepts that are absolutely critical to achieving superhuman self-control, and gives you 64 specific, actionable strategies to help you master self-discipline and willpower. Do You Love Books? So Do I. I Read 100+ Every Single Year And I Want To Share With You What Reading Has Done For Me. What's In It For You: I give my readers everything. For starters, you can have my own book on developing self-discipline for FREE (Value = $30), my NEXT book on beating procrastination for FREE (Value = $30), FREE access to my daily email course on the "Great Books" (Value = $275), and massive discounts on everything I am developing in the future. Just for extending me the privilege of having your email address. My goal is to have read and taken notes on 1,000 books before I turn 30, and I want you there with me! | Mid | [
0.634382566585956,
32.75,
18.875
]
|
Overview An applet that uses extensions is packaged as a signed JAR file including a manifest. When an applet is downloaded and run with Java Plug-in, Java Plug-in checks the manifest of the applet JAR file. The manifest will contain a list of all extensions that the applet requires. An extension consists of one or more JAR files to be installed into the <jre>/lib/ext directory. In general, for each extension the applet manifest will list name, vendor, and version information of the extension JARs; it will also list URLs from which the JARs, or an installer for them, may be obtained if the JARs are not already installed in <jre>/lib/ext or are out of date. A URL may directly specify one of the extension JARs, or it may specify an installer, native or Java, that will install the extension JARs. The rules for deciding that an update is required are described in Optional Package Versioning. To use Java Plug-in for deploying Java Extensions, information about the extensions must be specified in three different manifest files: Manifest of the applet JAR file To deploy Java extensions with an applet, the applet must be packaged as a JAR file. Moreover, the manifest file of the applet JAR must define the list of extensions it requires and specify the URLs from which the extensions can be downloaded, along with other information about the extensions, according to Optional Package Versioning. For example, consider an applet manifest that lists two extensions. In this example, two extensions are deployed with the applet: RectangleArea and RectanglePerimeter. Each has a single JAR file. If they have not been installed or if updated versions are needed, the proper versions will be downloaded from the Implementation-URL specifications. Notice that an Implementation-URL must point to a JAR file. Extension-List names and attribute prefixes There are two basic scenarios here: An extension may have a single JAR file, or it may have multiple JAR files. Extension-List names and attribute prefixes are discussed below for these two scenarios: Extension with single JAR file For an extension with a single JAR file (as in the example above), the name in the Extension-List, and the prefix of the related manifest attributes, should be the name of the extension JAR file. Extension with multiple JAR files Some extensions consist of multiple JAR files. For example, the Java 3D extension consists of the following JAR files: j3daudio.jar, j3dcore.jar, j3dutils.jar, and vecmath.jar. There are two scenarios that need to be considered: (1) The JARs are installed by a native or Java installer or (2) no installer is used (i.e., raw installation of the extension JARs). If a native or Java installer is used to install an extension, then only one of the JAR file names should be used in the Extension-List, and only one set of attributes, using that name as the prefix, should appear. Usually an extension has a main JAR file; if so, you should use its name in the Extension-List and as the prefix for the related manifest attributes. If there is no main JAR file, you can use the name of any JAR file in the optional package. For the Java 3D extension, for example, j3dcore.jar is the main JAR file, so its name would appear in the Extension-List and serve as the prefix for the related attributes. For a raw installation with multiple JAR files, the story is different: You must treat each JAR file as though it were a separate extension and list each according to its name in the Extension-List. A sketch of what this looks like is shown below.
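To make the raw multi-JAR case concrete, here is a hedged sketch of the relevant applet manifest entries for a hypothetical extension split across two JAR files, chartcore.jar and chartutil.jar. Every name, version number and URL below is invented purely for illustration, and the base file names (without the .jar suffix) are used as the Extension-List entries and attribute prefixes:

Extension-List: chartcore chartutil
chartcore-Extension-Name: com.example.chart.core
chartcore-Specification-Version: 1.0
chartcore-Implementation-Version: 1.0.2
chartcore-Implementation-Vendor-Id: com.example
chartcore-Implementation-URL: http://www.example.com/ext/chartcore.jar
chartutil-Extension-Name: com.example.chart.util
chartutil-Specification-Version: 1.0
chartutil-Implementation-Version: 1.0.2
chartutil-Implementation-Vendor-Id: com.example
chartutil-Implementation-URL: http://www.example.com/ext/chartutil.jar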
Each one listed then must have its own set of manifest attributes, where the prefix for an attribute set is the name of the related JAR file. Manifest of each extension JAR file Here we are talking about the JAR files that Plug-in can obtain from the URLs specified by Implementation-URL. The URL-obtainable extension JARs may be directly obtained (raw installation) or they may be obtained via a Java or native installer. In either case they are installed into <jre>/lib/ext. The extensions that the applet requires are listed in the applet manifest. This allows Plug-in to examine the JAR files present in the <jre>/lib/ext directory when an applet is launched and to decide if it needs to install missing or out-of-date extensions. In general, the manifest of an extension JAR obtained via an Implementation-URL needs to include various name, version, and vendor information. Thus, when such an extension JAR is installed, it will be possible in the future for Java Plug-in to compare this information to the information about an extension that an applet requests; and Plug-in will be able to determine if an extension needs to be installed/upgraded. Prior to any applet ever requesting an extension, it is more than likely that no extension is installed in <jre>/lib/ext, or that no or incomplete manifest information is present in the installed extension JAR. For an extension with a single JAR file, the JAR file must be signed and include a manifest file with the standard extension attributes (i.e., Extension-Name, Specification-Version, and so on). If an extension consists of more than one JAR file and the extension is installed with a native/Java installer, then only the JAR file whose name is listed in the Extension-List of the applet manifest needs to have extension information (i.e., Extension-Name, Specification-Version, etc.). If no installer is used, then all JAR files must include extension information. Manifest of the Implementation-URL JAR file This is the JAR file which the applet refers to with the Implementation-URL attribute in its manifest. It is the URL from which the extension can be obtained if no extension is installed in <jre>/lib/ext, or an extension is installed but it is out of date. If the Implementation-URL JAR is a native or Java installer, this is indicated in the manifest via two special attributes: Main-Class indicates a Java installer; Extension-Installation indicates a native installer. Note that if no installer is indicated, then the Implementation-URL JAR file is simply the extension JAR file itself. As implied from the above, there are three ways that extensions can be installed by Java Plug-in: Raw installation With raw installation of an extension, each extension JAR is installed by Java Plug-in into the <jre>/lib/ext directory without an installer (Java or native); i.e., Java Plug-in is the "installer" for each JAR. If an extension has a single JAR file, then the URL of that JAR is shown as the Implementation-URL in the applet JAR manifest; and Java Plug-in knows it is a raw extension because the manifest of the extension JAR file includes neither a Main-Class nor an Extension-Installation attribute. Suppose we have an extension called javax.mediax with a single JAR, mediax.jar. In that case the applet manifest lists mediax as its only extension, and its Implementation-URL points directly at mediax.jar. Now suppose we have another version, javax.mediax-2, that has two JARs: mediax_core.jar and mediax_codex.jar. Then we must treat the two JAR files as though they were separate extensions and list each in the applet JAR manifest. Java Installer An extension can be installed through a Java installer.
The Java installer must be bundled as a JAR file, and the resulting JAR file must be specified as the Implementation-URL in the applet JAR manifest file. During installation the JAR file will be downloaded and verified, and the Main-Class of the Java installer inside the JAR file will be executed to start the installer. It is the job of the Java installer to copy the extension JAR files, normally bundled with the installer, into the right location of the Java 2 Runtime (i.e., <jre>/lib/ext). Though we are now dealing with an application JAR file, the attributes in its manifest should be the same as those shown for the extension JAR whose name is listed in the Extension-List of the applet manifest, with the addition of the Main-Class attribute. In this case, because Main-Class is present in the manifest, the JAR will be treated as a Java installer, and its Main-Class will be invoked. It is the job of the Java installer to copy the extension JAR files into the <jre>/lib/ext directory. Note that each extension JAR file must contain proper versioning information. Native Installer An extension can also be installed through a native installer. The native installer must be bundled as a JAR file, and the resulting JAR file must be specified as the Implementation-URL in the applet JAR manifest file. During installation the JAR file will be downloaded and verified, and the native installer will be started. It is the job of the native installer to copy the extension JAR files, normally bundled with the installer, into the right location of the Java 2 Runtime (i.e., <jre>/lib/ext). Though we are now dealing with an application JAR file, the attributes in its manifest should be the same as those shown for the extension JAR whose name is listed in the Extension-List of the applet manifest, with the addition of the Extension-Installation attribute. In this case, because Extension-Installation is present in the manifest, the JAR will be treated as a native installer; and the installer itself will be launched. It is the job of the native installer to copy the Java extensions into the <jre>/lib/ext directory. Note that each Java extension JAR file must contain proper versioning information. Security When an installed extension needs to be updated, the extension will be downloaded and verified to ensure that it is correctly signed. If it is valid, the Plug-in will pop up a security dialog providing three options: Grant always: If selected, the Implementation-URL JAR will be granted the AllPermission permission. Any applet or extension signed with the same certificate will be trusted automatically in the future, and no security dialog will pop up when this certificate is encountered again. This decision can be changed from the Java Plug-in Control Panel. Grant this session: If selected, the Implementation-URL JAR will be granted the AllPermission permission. Any applet or extension signed with the same certificate will be trusted automatically within the same browser session. Deny: If selected, the installation is cancelled. Once the user selects an option from the security dialog, the extension installation will be executed in the corresponding security context. The applet will not be started until the extensions are properly installed. Because Java extensions are downloaded and installed into the Java 2 Runtime <jre>/lib/ext directory, each must be signed. Once the extensions are installed, they will have the permissions granted to Java extensions through the policy file.
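To summarize the two installer cases just described, here are hedged sketches of what the manifest of an Implementation-URL JAR might contain. All names and version numbers are invented for illustration; in particular, the value shown for Extension-Installation is an assumption, since (per the text above) it is the presence of the Main-Class or Extension-Installation attribute that tells Java Plug-in which kind of installer it is dealing with.

For a Java installer JAR (illustrative):

Extension-Name: com.example.chart
Specification-Version: 1.0
Implementation-Version: 1.0.2
Implementation-Vendor-Id: com.example
Main-Class: com.example.chart.install.ChartInstaller

For a native installer JAR (illustrative):

Extension-Name: com.example.chart
Specification-Version: 1.0
Implementation-Version: 1.0.2
Implementation-Vendor-Id: com.example
Extension-Installation: chart-setup.exe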
For more information about the jar tool, see the Tools and Utilities documentation for your platform. Signing the JAR file is going to take some trouble. In outline form, this is what you can do: Use the keytool -genkey option to generate a key pair. Use the keytool -certreq option to generate a certificate request for a Certificate Authority (CA), such as VeriSign or Thawte. Email the request to the CA. After the CA has confirmed your identity, it will respond with a certificate chain via email. Copy the certificate chain to a file. You can then use the keytool -import option to import the chain into the keystore. You can now use the jarsigner tool to sign the JAR and the -verify option to check that it is signed. For more information about keytool and jarsigner, see the Tools and Utilities documentation for your platform. More information on this topic, along with examples, is given in the chapter called How to Sign Applets Using RSA-Signed Certificates. Although that chapter discusses how to sign an applet JAR file, the process is identical to signing an extension JAR file. II. Create/obtain the Implementation-URL JAR files If no installer is to be used: The extension JAR files described in step I are the Implementation-URL JARs. If an installer is to be used: Create/obtain the installer. Create the manifest for the JAR of the installer and any bundled extensions that need to go into it. For a Java installer include the Main-Class attribute in the manifest; for a native installer include the Extension-Installation attribute. JAR the installer, the manifest, and any bundled extensions that need to be included, and sign the JAR. (The steps for JARing and signing are the same as described in step I above.) Example Suppose we have an applet that requires Sun's Java Advanced Imaging as an installed extension. You can download this here: http://java.sun.com/products/java-media/jai/downloads/download.html Suppose you select the "Windows JRE Install" version. The following file will be downloaded: jai-1_1_1_01-lib-windows-i586-jre.exe This installer bundles the following JAR files, which it will install into the <jre>/lib/ext directory: jai_codec.jar jai_core.jar mlibwrapper_jai.jar You need to create the manifest for a JAR file that contains the .exe installer above, and you need to sign the JAR file. Now JAR up the installer as jai_win.jar, together with the manifest file. You don't need to include the extension JAR files, as they are bundled with the .exe installer in this case. Be sure to include the .jar extension in the JAR file name. Now sign jai_win.jar. III. Create the applet JAR 1. Create a manifest file for the applet, listing the javax.media.jai extension and giving jai_win.jar as its Implementation-URL. Some optional packages come packaged in different JAR files for different operating systems. If you want your applet to work on different OSs, you can use the $(os-name)$ construction in the Implementation-URL manifest attribute. The $(os-name)$ will translate to the target OS that the applet is being run on, i.e., SunOS, Linux, Windows-98, Windows-NT, Windows-2000, Windows-Me. optpkg-Implementation-URL: http://.../optpkg-$(os-name)$.jar 2. JAR up the *.class files, and any other supporting files the applet needs, together with the applet's manifest file, and sign the JAR. (The procedure for JARing your files and signing the JAR is the same as discussed previously.) Be sure to include the .jar extension in the JAR file name. IV. Generate the HTML to launch the applet Create the HTML page for the applet.
You can do this manually or you can use the HtmlConverter that comes with the JDK. It is recommended that you use the HtmlConverter. But if you want to do it manually, see Using OBJECT, EMBED and APPLET Tags in Java Plug-in for information on how to do it. Note that the applet JAR file should go in the archive attribute. Suppose your applet is called JaiApplet, the JAR file you created for it is called JaiApplet.jar, and the main class is JaiApplet.class. When you run the applet, Java Plug-in will display a Java Security Warning if the extension is not already installed, informing you that the applet requires installation of the extension javax.media.jai from http://myserver.com/jai_win.jar. If you grant permission to install the extension, the installer will install the JAR files in the <jre>/lib/ext directory. Once the installation is complete your applet will run. Known Limitations and Other Notes If an Implementation-URL JAR file is not signed properly, Java Plug-in will fail silently. For any extension, be sure that a newer version of the extension contains at least the same set of JAR file names as the older version. Otherwise, installing a newer extension may not overwrite all of the older extension's JARs, and there will be a mix of different versions of an extension in <jre>/lib/ext. The results will be unpredictable. If a Java installer is used, make sure the program does not exit the Main-Class until the installation is done. In some cases, a Java installer may create an AWT window, switch control to a different thread, and return immediately from the Main-Class. Returning control from the Main-Class will force the applet to be loaded and started immediately, even if the Java installer is still in the process of installation. This will cause the applet to fail to load because the extension is not installed yet. | Mid | [
0.641791044776119,
32.25,
18
]
|
Jones J, Bion J, Brown C, Willars J, Brookes O, Tarrant C; On behalf of the PEARL collaboration. Reflection in practice: How can patient experience feedback trigger staff reflection in hospital acute care settings?. Health Expect. 2020;23:396--404. 10.1111/hex.13010 **Funding information** PEARL is funded by the NIHR HS&DR programme (Ref 14/156/23). CB is supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care \[West Midlands\] (NIHR CLAHRC WM). The views expressed in this article are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care. 1. INTRODUCTION {#hex13010-sec-0006} =============== Patient and staff experiences provide important insights into care quality, but health systems have difficulty using these data to improve care. Evidence suggests that organizations struggle to manage the data they collect and to make improvements based on patient experience feedback, and that clinicians often fail to change their practice based on patient experience feedback[1](#hex13010-bib-0001){ref-type="ref"}, [2](#hex13010-bib-0002){ref-type="ref"}. One particular challenge in acting on patient experience feedback is that, when patients express dissatisfaction with their care, they often identify problems with staff‐patient interactions.[3](#hex13010-bib-0003){ref-type="ref"} Around one‐third of patient complaints relate to staff‐patient relationships[4](#hex13010-bib-0004){ref-type="ref"} such as communication, empathy, courtesy, consideration and compassion demonstrated by front‐line staff; these aspects of care are critical for positive patient experiences.[5](#hex13010-bib-0005){ref-type="ref"}, [6](#hex13010-bib-0006){ref-type="ref"}, [7](#hex13010-bib-0007){ref-type="ref"} Evidence suggests, however, that patient experience data currently available in the NHS tend to be used to stimulate changes in care processes which are technical in nature, rather than tackling the more difficult task of changing clinician behaviour.[8](#hex13010-bib-0008){ref-type="ref"} Although staff are unlikely to intentionally behave in ways that are detrimental to the patient experience, they may lack insight into how their behaviours affect patients or how to modify those behaviours. One approach for promoting insight and change is reflective learning. Reflection involves engagement in retrospection, self‐evaluation and re‐orientation[9](#hex13010-bib-0009){ref-type="ref"} based on individuals\' own experiences or feedback on their performance, or the experiences of others. Reflection can take different forms. 
It can be an individual or group activity.[10](#hex13010-bib-0010){ref-type="ref"} It may happen 'in action' when an event gives immediate cause for thought or can be a deliberative process looking back 'on action' to generate new perspectives and intentions for change.[11](#hex13010-bib-0011){ref-type="ref"} The idea that reflection will lead to learning and improvement is based on the work of Dewey from the 1930s[12](#hex13010-bib-0012){ref-type="ref"} and continued with models such as Schön and Gibbs designed to support reflective practice.[13](#hex13010-bib-0013){ref-type="ref"}, [14](#hex13010-bib-0014){ref-type="ref"}, [15](#hex13010-bib-0015){ref-type="ref"}, [16](#hex13010-bib-0016){ref-type="ref"} Whether reflection prompts learning and change has been questioned, although some studies have identified changes in behaviour as a direct result of reflection taking place within clinical practice settings.[17](#hex13010-bib-0017){ref-type="ref"}, [18](#hex13010-bib-0018){ref-type="ref"}, [19](#hex13010-bib-0019){ref-type="ref"}, [20](#hex13010-bib-0020){ref-type="ref"} Reflective practice is now mandated for most health professionals, with documented evidence of reflecting on patient and colleague feedback required for continuing professional development and revalidation. Despite the focus on retrospective written reflection, increasingly, arguments are being made that reflection, and in particular reflection in action, should instead be fully embedded within the multiple contexts of clinical practice.[21](#hex13010-bib-0021){ref-type="ref"}This requires clinicians to make reflection part of everyday routines and practices, and develop skills to recognize and act on prompts or triggers for reflection.[22](#hex13010-bib-0022){ref-type="ref"} Reflection requires a prompt or trigger: 'a "disorientating dilemma" or a period of uncertainty in what should be done---that leads to exploration with a critical perspective, challenging underlying assumptions, beliefs, motives and values'.[22](#hex13010-bib-0022){ref-type="ref"} By definition, reflection involves a switch from automatic processing, to enhanced cognitive awareness and deeper processing and learning.[12](#hex13010-bib-0012){ref-type="ref"} The ability of a trigger to prompt an emotional response is considered to be critical for stimulating reflection; indeed, reflective learning is argued to involve an interplay between cognition and emotion.[23](#hex13010-bib-0023){ref-type="ref"} In principle, feedback about patients\' experiences can be a powerful trigger or prompt for reflection, opening up the opportunity for personal insight development and changes in attitudes and practice. Several studies have assessed how reflective learning has been enhanced by providing a patient experience trigger and measuring its impact, usually as an intervention study. 
For example, video vignettes have been used by dental undergraduates,[24](#hex13010-bib-0024){ref-type="ref"} facilitated patient experience feedback has been shown to improve nursing care,[25](#hex13010-bib-0025){ref-type="ref"} and studies have shown how patient narratives can serve as reflective devices for health‐care professionals.[26](#hex13010-bib-0026){ref-type="ref"}, [27](#hex13010-bib-0027){ref-type="ref"} Qualitative research has identified patient experience feedback as a trigger for reflection in everyday clinical practice, along with other triggers, including difficult interpersonal interactions with patients and their families or between staff members; uncertainty about clinical care; unexpected clinical outcomes; emotional responses to high stakes situation; and external feedback on performance[28](#hex13010-bib-0028){ref-type="ref"} A wide variety of patient experience data is available in the health‐care setting, ranging from surveys and questionnaires, to compliments, informal feedback to PALS and suggestion boxes,[29](#hex13010-bib-0029){ref-type="ref"}, [30](#hex13010-bib-0030){ref-type="ref"}, [31](#hex13010-bib-0031){ref-type="ref"} It is not clear, however, that current approaches to managing feedback about patients\' experiences maximize the value of this feedback as a trigger for reflection in practice.[32](#hex13010-bib-0032){ref-type="ref"} Little attention has been paid to understanding how different types of patient experience feedback can act as a prompt to reflection in practice in the natural clinical setting, rather than as part of an intervention study. We aimed to identify the ways in which different types of patient experience feedback act as a trigger or prompt for engagement in reflection in clinical practice in acute hospital settings and identify important considerations for enhancing the value of patient experience data for reflective learning in clinical practice. 2. METHODS {#hex13010-sec-0007} ========== 2.1. Setting {#hex13010-sec-0008} ------------ We conducted an ethnographic study of reflection on patient experience feedback in eight acute care units in three NHS hospital trusts in England, including observations and interviews with staff working in acute medical units (AMUs) and intensive care units (ICUs), as part of the Patient Experience and Reflective Learning (PEARL) project.[33](#hex13010-bib-0033){ref-type="ref"} The three trusts were purposively selected as serving diverse, predominantly urban populations with high‐volume workloads. The eight participating units included three AMUs and five ICUs on four hospital sites. The core project team involved patient and carer representatives as active team members; local project teams also included patient and carer representatives who had experience of care in the participating units (named in the acknowledgements). The PEARL Project received ethics approval from the London Brent Research Ethics Committee (REC Ref 16/LO/224). 2.2. Sample {#hex13010-sec-0009} ----------- Interview participants were selected to include staff from across the different units and to include nursing, medical and managerial staff with different levels of organizational and individual involvement in patient experience data and reflective practice. 2.3. 
Data collection {#hex13010-sec-0010} -------------------- Observations and interviews were conducted between May and December 2017, and focused on exploring how patient experience feedback was collected and used, how and why staff reflected on patient feedback, and the structures, processes and activities that facilitated or obstructed staff engagement in reflection in clinical practice. Over 140 hours of observations were conducted in the acute care units by JW, a non‐clinical researcher with extensive qualitative research experience. Observations involved the researcher spending time in the clinical setting, observing day‐to‐day practice, shadowing staff while they performed their tasks, talking to staff informally in clinical and social areas and attending relevant meetings (eg patient coffee mornings and clinical governance meetings). The researcher observed and questioned staff specifically about activities around the collection and use of patient experience data, and engagement in and support for reflection in practice. The researcher documented 81 informal conversations with a wide range of staff about feedback of patient experience data and reflection on patient experience. We collected relevant documents such as newsletters and photographs of patient experience displays within the units. The researcher made written field notes during observations, which were summarized as audio‐recorded debriefs. Semi‐structured interviews were conducted by JW with a purposive sample of 45 members of staff, between 14 and 16 in each hospital trust. Interviews were conducted in two rounds. Round 1 (36 interviews) focused on the collection and use of patient experience data and reflection on patient experiences. Round 2 (nine interviews) focused in on reflection in practice---triggers, barriers and facilitators---to explore emergent themes around reflection in practice in more depth. Informed consent was obtained for interviews. Interviews were recorded and transcribed verbatim, and anonymized during transcription. 2.4. Topic guides {#hex13010-sec-0011} ----------------- Observations were guided by a sensitizing observation guide, which focused observations on the collection and use of patient feedback, and the structures, process and activities in place in sites that impacted on reflection in practice. The topic guide was used to help anchor the observations to the research questions while leaving the researcher room to pursue lines of enquiry in the field. The topic guides for interviews explored staff experience of the collection and use of patient experience data, how feedback on patient experience stimulated reflection, and the barriers and facilitators to reflection in clinical practice. The topic guide was modified for the second round of interviews. 2.5. Analysis {#hex13010-sec-0012} ------------- We took a thematic analysis approach to analysing the data.[34](#hex13010-bib-0034){ref-type="ref"} Interview and observational data were analysed together through the analysis process. A subset of interviews and observation debriefs were read in close detail by JJ and CT and then open‐coded to create a coding frame and initial thematic categories; these were discussed with the wider study team. The coding frame was then applied to the remaining interviews and observational data transcripts. The coding frame was modified and extended as new themes arose. NVivo 11 software was used to support the management, coding and querying of the data. 
We used narrative summaries and visual displays to interpret and synthesize the data. We conducted regular team debriefs during the data collection and analysis period (involving JJ, JW, and CT) to reflect on emerging findings and guide ongoing data collection and discussed findings with the wider team. As thematic analysis showed similar staff responses regardless of site or setting, we did not do a comparative analysis between hospitals or between ICUs and AMUs. Differences in the types of feedback available to staff in ICU and AMU settings are discussed as part of our findings. 3. FINDINGS {#hex13010-sec-0013} =========== We distinguish between formal patient experience data sources: data purposively collected and collated to capture the patient experience of care (generally at organizational level, including surveys, complaints and comments); and informal sources of feedback on the patient experience recognized by staff alongside the formal data. We also identified patient narratives as an 'in between' source of data. These three sources of patient experience feedback differ in their intrinsic qualities and hence their utility for triggering reflection and the extent to which they can be systematized as part of strategies to promote reflection in practice. 3.1. Formal patient experience feedback {#hex13010-sec-0014} --------------------------------------- Formal sources of patient experience feedback, generated through organizational activities including patient experience surveys and systems for recording complaints, were shared widely with front‐line staff through poster displays, reports, emails and information in meetings. Formal patient experience feedback was seen by staff as having value for organizational performance monitoring and identifying areas for quality improvement, but tended to be less impactful in stimulating individual reflection and attitude change in practice. This was particularly the case for surveys employing quantitative or semi‐quantitative methods without qualitative or narrative components. ### 3.1.1. Lack of meaningfulness or emotional response to survey data {#hex13010-sec-0015} Staff identified issues that limited the extent to which they were motivated to engage effort in processing feedback from patient experience surveys including concerns about local or personal relevance, timeliness and lack of granularity in the data. In the main, however, survey data feedback that was purely numerical and lacked free‐text components was relatively ineffective for promoting reflection and individual attitude and behaviour change, because the personal meaningfulness was limited, and affective cues generating an emotional response were lacking."I had a conversation with an HCA \[...\] And she went \"oh, I think we display \[patient experience survey feedback\]\", and then she went over to the board,\"this is it\" And \[...\] she was looking at it then and she was saying \"well but that doesn\"t mean anything to me\" (observation)We have figures about \[patient experience surveys\] and I look at them and I just think I\'m not acting upon that, I\'m not changing my practice based upon that (interviewee 009, nurse)" By contrast, qualitative feedback such as survey free‐text or individual complaints or compliments triggered spontaneous individual reflection and prompted changes in individuals\' attitudes and practice. 
Formal feedback prompted reflection when staff members were able to relate to it personally and experienced an emotional response that led them to think carefully about their actions and future practice in their interactions with patients."The forms I\'ve read with the patient experience, I\'ve noticed that sometimes they \[feel\] like, that they\'re treated sometimes by their illness rather than as a person. \[...\] That\'s made me feel awful that person\'s felt like that. So on reflection, I think it\'s made me try and personalise care, and try and remember at the end of the day there\'s a person in that bed, and we\'re not just treating what they\'ve come to hospital with. (interviewee 032, nurse)" ### 3.1.2. Reflecting on formal feedback: Feedback needs to be made meaningful and relevant {#hex13010-sec-0016} Staff suggested that formal feedback could be used purposefully to stimulate reflection, but needed to be curated and digested to make it meaningful and relevant to staff. Also, efforts were required to engage staff in reflecting on formal sources of patient experience such as survey feedback or complaints as part of routine clinical practice, including allocating time for processing and reflecting together on the information. Having organizational systems in place to actively disseminate feedback and encourage reflection made it more likely that formal patient experience feedback would be recognized as a prompt for reflection, and that opportunities for reflecting based on this feedback would be taken up."We produce a monthly complaints mailer \[...\] essentially saying these are two or three themes that we\'ve identified through complaints, this is what\'s happened \[...\] reflect on it, reflect on the practice in your area, could this happen essentially to your patients? (interviewee 047, admin)" We observed, however, that on the whole organizational efforts gravitated towards highlighting and acting to address *negative* feedback and identifying areas for improvement. Staff described how their organizations disseminated negative feedback from formal systems, particularly complaints, to promote cross‐organizational learning and improvement, but that this same approach was not always taken to ensure positive feedback was shared across the organization. We also observed examples where potential triggers for reflection and learning based on positive feedback were passed over."At the clinical governance meeting \[...\] they\'d just spent an hour discussing incidents \[...\] but when it came to the compliments literally it was really skipped over. \[Feedback from the patient was read out:\] \"\[Person 1\] \'s kind words and use of hands to squeeze was gratefully appreciated.\" And the staff at the meeting went 'oh, great to squeeze hands'. And they sort of dismissed it really. (observation)" 3.2. Informal feedback on the patient experience {#hex13010-sec-0017} ------------------------------------------------ Alongside formal patient experience feedback solicited by the organization, staff recognized a large and diverse field of informal sources of feedback. Usually unsolicited, this included conversations with patients and relatives at the bedside, thank you cards and gifts, a hug from a patient or relative. This type of feedback was more often described as personally relevant and highly emotionally engaging, and as a valuable trigger for stimulating spontaneous reflection. ### 3.2.1. 
Informal feedback from patients and colleagues had relevance and emotional salience {#hex13010-sec-0018} This informal feedback received by staff from patients and relatives as part of their daily practice was usually valued and had the potential to incur a sense of personal responsibility in staff to consider their behaviours and relationships with patients. Staff described the discomfort of receiving personal negative feedback; this could motivate them to reflect and elaborate on the experience and think about the implications for their practice."I said to the nurse \[about a patient\] \"I think he\'s definitely got diabetes 'cause he\'s got a large BMI\" and then later on the patient said \"Oh I heard you saying large BMI\" and told me how he found it quite offensive and how he was upset by me saying that. \[...\] So I think that experience, has changed the way I talk about patients (interviewee 056, doctor)" Staff also recognized that their colleagues could provide insight into the way they communicated and engaged with patients and how this impacted on the patient experience. Although staff may not always be comfortable in speaking out to colleagues about their practice, feedback from colleagues could be a valuable stimulus for reflection on and improvement in relational aspects of care."The nurse said to me that \"the family said that you were not believing them.\" \[...\] I thought, because I was in stress probably I asked a question more than two or three times. So \[...\] from then on I take my time when I interact with them. So, I do reflect. And that, that has obviously \[...\] changed my approach. (interviewee 026, doctor)" Staff recognized that staff groups had different opportunities for informal feedback: nurses felt that they were more likely to get informal feedback from patients and relatives at the bedside, positive feedback in particular, whereas doctors felt they often missed out on this opportunity. Informal feedback may not even reach staff, meaning they have no opportunity for the reflective learning that could be triggered."And this one particular doctor said to me \"even if a patient may have made a comment to a nurse about 'oh, wasn\'t the doctor lovely', that won\'t get fed back to the doctor, because it\'s not \[nurses\'\] priority to do that, and the nurses are too busy. \[...\] That feedback just doesn\'t reach them\". (observation)" Staff in AMU felt they were less likely than those working in ICU to have the opportunity to build rapport with patients and their relatives due to the short length of stay and felt that they were less likely to get this type of informal feedback from relatives or patients under their care. ### 3.2.2. Power of positive feedback {#hex13010-sec-0019} Staff described the powerful impact of informal positive feedback for reflection and learning. Informal positive feedback on patient experience, whether in the shape of a comment from a patient or colleague, a thank you card, a box of chocolates or a hug from a relative, often did more than just make staff feel good. Such feedback could have an impact by stimulating staff to reflect on what they had done well and generate learning about aspects of their practice they should maintain and develop. 
Staff in ICU described how positive feedback helped assuage their fears about whether they were 'doing the right thing' and to reinforce for them the value of the sometimes distressing treatments and interventions they had to implement."You\'ll get a card or a letter, maybe months down the line that \[...\] they\'ve appreciated the care that the patient\'s received and the time we\'ve given them, the discussions that we\'ve had, how open we\'ve been. And having that at least takes some of the sting out of the \[...\] moral distress \[...\] that you feel ‐ that you\'re torturing \[patients in ICU\], with the best of intention, but you\'re torturing in what you do. (interviewee 011, nurse)" Staff accounts demonstrated how positive feedback could be a powerful source of learning in terms of bringing their attention to what they were doing well and reinforcing aspects of their practice that contributed to positive patient experiences. Positive feedback also contributed to staff well‐being and a sense of worth in their professional role. ### 3.2.3. Reflecting on informal feedback: Recognizing and responding to a trigger {#hex13010-sec-0020} Reflection on informal feedback could be unstructured: staff commonly described thinking through a trigger (such as bedside feedback from a patient or colleague) themselves or discussing with colleagues, and in itself this could generate valuable learning and impact on practice in their future interactions with patients. Staff sometimes also used informal triggers as the basis of more formal reflective activity, often linked with the requirement for them to demonstrate reflective learning as part of revalidation or continuing professional development. Although informal feedback was seen as highly powerful, it is serendipitous: opportunistic and unsystematic. Precisely because of the informal and unsystematic nature of this feedback, the use of it for reflection was dependent on staff being able to recognize it as a prompt or a trigger for reflection, to manage their own emotional reactions to the feedback (which could include defensiveness and denial in the case of negative feedback) and to have the mental capacity and ability to engage in reflection either in the moment or at a later point in time, which could be difficult when staff were tired or stressed."It\'s the ability of the individual to accept that and I suppose if I heard anything negative or bad, your initial reaction is \"they\'re wrong\"! (interviewee 005, admin)" 3.3. Patient narratives---'in between' feedback {#hex13010-sec-0021} ----------------------------------------------- Patient narratives were identified by staff as impactful for stimulating reflection; this source of feedback sat between the formal patient experience data 'economy' and the milieu of informal sources of feedback that staff were exposed to in their day‐to‐day practice. Staff described initiatives that elicited patient experience of care directly from the patients themselves in the form of stories or narratives. In some cases, these initiatives involved purposively identifying and using narratives as a prompt for learning, and in others, the reflection and learning were incidental. An example of the former was the collation and use of video narratives from patients about their experiences, to trigger reflection and learning. Incidental reflection arose in the case of patient coffee mornings, observed in one of the participating ICUs. 
These coffee mornings were arranged for patients who had stayed in ICU to return and talk about their experiences; the primary purpose was to support the patient\'s rehabilitation through helping them to reconstruct what had happened to them while in the hospital. An unintended consequence was that staff got to hear first‐hand about the patient experience in the ICU. Staff gained considerable insight from hearing patients\' personal stories and found that they were challenged to think more deeply about their attitudes and behaviours, and as a result had changed their approach to communicating and interacting with patients in the ICU. These types of activities, where patients return to the ward to recount their experiences, did not happen on AMUs."I feel like I\'ve certainly become more empathetic towards patients \[...\] after \[coffee morning\] and reading the experiences online. I actually, I feel like I take it more seriously, \[...\] if there was anything that we can do to help them sleep better, because obviously sleep deprivation can increase the chance of hallucinations. I also find that I do regularly orientate my patients more now than I ever have. (interviewee 012, nurse)" 4. DISCUSSION {#hex13010-sec-0022} ============= In this study, we used interviews and observations in acute care settings to assess how staff used feedback from patients to reflect, learn and modify their behaviour. We categorized patient experience feedback into two broad categories: formal feedback and informal feedback. Formal feedback which was collected and collated at organizational level (eg through patient surveys) had limited value for triggering reflection unless efforts were made to make it meaningful and flag it as a stimulus for reflection, and opportunities created for staff to take time to reflect on the feedback. Informal feedback (such as bedside comments and gifts of thanks---sometimes considered as 'soft' data[35](#hex13010-bib-0035){ref-type="ref"}) was more likely to trigger spontaneous reflection but access to this type of feedback and use of it for reflection in practice was highly unsystematic. In between these two categories were patient stories---actively solicited and sometimes (but not always) purposefully used to stimulate reflection and learning. The impact of different types of patient feedback in triggering reflection primarily depended on the extent to which the feedback was experienced as personally relevant, meaningful and emotionally salient.[23](#hex13010-bib-0023){ref-type="ref"} This finding is in line with theory‐based predictions about the influence of different types of message in changing attitudes and behaviour, in particular, that messages perceived as personally relevant are more likely to prompt deeper processing.[36](#hex13010-bib-0036){ref-type="ref"} We also identified the value of positive feedback for reflection and learning. When we observed discussion of formal patient feedback, there was a strong tendency to focus on the negative, with efforts to try to identify concrete lessons for improvement and change. Positive feedback attained through organizational patient feedback systems, while acknowledged, was commonly overlooked in terms of its potential for generating learning---perhaps because it did not highlight things that needed 'fixing', in line with quality improvement goals. 
In contrast, staff described many examples of positive informal feedback, and how this had supported their learning, reinforced their practice and provided reassurance about their approach to care. We also identified that access to the types of feedback that are most impactful in stimulating reflection could vary between staff groups and settings. In particular, staff working in ICU settings described having more access than AMU staff to informal and individual patient feedback, such as through bedside comments and coffee mornings, providing them with more potential triggers for reflection. Patient experience feedback is multi‐faceted, but our study suggests that all types of feedback could be harnessed more effectively to prompt reflection. This could include active efforts to maximize the value of formal feedback as a trigger for reflection, through work to make it meaningful and emotionally salient. Ensuring the feedback is comprehensible, the local relevance is made clear, and individual patient experiences provided verbatim alongside graphs and percentages, is likely to enhance the value of formal feedback for reflection, not just for quality improvement. Our findings also highlight the importance of focusing on sharing and learning from positive feedback, to reinforce or enhance current practice. In addition, expanding opportunities for staff to hear patient stories, capitalizing on serendipitous feedback and engaging in efforts to purposefully share informal feedback to enable collective learning, will help increase the exposure of staff to effective triggers for reflection and learning. Key study findings are included in Box [1](#hex13010-fea-0001){ref-type="boxed-text"}. ###### Key study findings {#hex13010-sec-0028} Patient experience feedback has most value for stimulating reflection if it is personally relevant, meaningful, and emotionally salient.Positive feedback has value for reflection and reinforcement of good practice, as well as providing comfort and reassurance to staff.Informal or serendipitous feedback can be a powerful trigger for reflection but may be overlooked in terms of its potential for generalisable learningOrganisations should consider ways to maximise the capabilities and opportunities for staff to use feedback, particularly informal and serendipitous feedback, for reflection and improvement. This paper has focused on how staff respond to different types of patient feedback as potential prompts or triggers for reflection. We have identified the features of feedback that make it more effective as a trigger for reflection, notably, emotional salience and personal relevance. We found, however, that staff did not always recognize and respond to prompts for reflection that arose from patient feedback, either because the prompt was not acknowledged as a stimulus for reflection or because they lacked the capacity or opportunity to actively engage in reflection in the context of their clinical practice. Apart from appraisals, revalidation and responding to complaints---all mandatory and described by some as 'ritualistic',[22](#hex13010-bib-0022){ref-type="ref"} there were few occasions where staff mentioned being actively encouraged to reflect on patient experience data, and few opportunities in routine clinical practice for staff to take time to reflect. 
We did not focus in this paper on describing reflective activities or exploring the broader barriers and facilitators to reflection in practice, such as organizational resources or infrastructure, but this will be the focus of a subsequent paper. Although trusts have well‐established systems for using patient feedback, particularly negative feedback, for quality improvement, there is a lack of infrastructure to enable improvement through reflection in practice. We need to consider how to provide the tools and create an environment that supports reflection in practice, enabling attitude and behaviour change. Deeper cognitive processing is dependent on ability to process, including capacity to engage with the message.[36](#hex13010-bib-0036){ref-type="ref"} As such, effective reflection is dependent on staff having the ability to process---for example there might have been an effective trigger but the ability to reflect may be limited through stress, overwork, tiredness and burnout; in addition, negative feedback can be demoralizing. Work is needed to understand how staff can be supported to enable them to have capacity to reflect, as well as having opportunities to engage in reflection in their day‐to‐day clinical practice. While toolkits have been developed to support the use of patient experience feedback for quality improvement,[29](#hex13010-bib-0029){ref-type="ref"}, [37](#hex13010-bib-0037){ref-type="ref"} no equivalent toolkit exists for the use of feedback in reflection. As part of the wider Pearl study, we aim to map barriers and enablers to embedding reflection in clinical practice based on behaviour change theory [38](#hex13010-bib-0038){ref-type="ref"} and to develop a practical toolkit to support reflection on the patient experience in practice. Our research involved in‐depth study of the use of patient experience feedback for reflection, and reflection in practice, across three trusts, including eight individual acute care units. A wide range of staff were interviewed and observed within the acute care settings so that the views of medical, nursing, administrative and managerial staff were captured. The study only encompassed three sites and focussed on acute care settings; while this might limit generalizability, the findings resonate with other studies investigating patient experience which have taken place in other health‐care environments.[29](#hex13010-bib-0029){ref-type="ref"}, [39](#hex13010-bib-0039){ref-type="ref"} Staff who agreed to be interviewed may be biased towards the importance of patient experience and reflective practice, and however, dissenting views were heard during the interviews and casual conversations. We conducted the research in two types of acute care unit, AMUs and ICUs. This enabled us to gain insight into reflection in practice across a range of settings. We have focused in this paper on commonalities in staff response to patient feedback across these settings. We did not attempt to make comparisons across the different types of units, although we acknowledge that the nature of patient feedback in each unit was qualitatively different---in particular, because patients tended to have longer stays on ICUs staff had more opportunity to get bedside feedback from patients and relatives, were more likely to receive cards and chocolates, and to hear from patients who returned to the unit following discharge. Taking into account, these local contextual differences will be important in efforts to develop interventions to support reflection in practice. 
5. CONCLUSION {#hex13010-sec-0023} ============= Most formal organizational‐level feedback of patient experience lacks immediacy for many staff and therefore tends not to stimulate reflective learning. The free‐text responses from surveys and hearing the patient stories at coffee mornings tend to have more impact on staff than aggregated quantitative data. Individuals are prompted to reflect when receiving informal personal feedback from patients, relatives or other members of staff, but this feedback is largely unrecognized at an organizational level. Staff value positive feedback, while organizations tend to respond to negative feedback such as complaints. All types of patient experience feedback-- formal and informal, qualitative and quantitative, positive and negative--have the potential to stimulate reflective learning for staff in acute care settings, but maximizing this potential requires work to support staff in recognizing triggers for reflection and having the capacity and opportunity to reflect and learn from patient experience feedback. CONFLICT OF INTEREST {#hex13010-sec-0025} ==================== The authors declare that there is no conflict of interest. The PEARL project team are grateful to participating hospitals, staff and local PPI representatives and the study Steering Committee for their invaluable support: Prof Rebecca Lawton, Mr Harry Turner, Prof James Neuberger and Prof Stephen Brett and the PPI representatives Duncan and Lisa Marie Buckley. The PEARL collaboration: C Higenbottam; F Wyton; E Fellows; K Moss; L Cooper; L Flavell; J Flavell; J Raeside; M Hawkesford; H Laugher; T Jones; S Nevitt; K Naylor; J Sampson; J Mann; S Ballinger; T Melody; G Buggy; L Linhartova; J Thompson; S Majid; P Diviyesh; P Thorpe; A Shaha; R Carvell; A Joshi; K Kneller; H Halliday; C Iles; I O\'Neil; G Yeoman; C Randell; H Korovesis; C Scott; H Doherty; K Protheroe; E Swann; L Dunn; K McCourt; S Perks; T Chakravorty; D Wolstenholme; C Grindell; R Bec; L Duffy; E Tracey, C Nee; S Vince; I Barrow; N Alderson; C Straughan; K Cullen; I Spencer; M Thomas; J Archer; I Clement; F Evison; F Gao Smith; C Gibbins; E Hayton; R Lilford; R Mullhi; G Packer; G Perkins; J Shelton; C Snelson; P Sullivan; I Vlaev; S Wright. DATA AVAILABILITY STATEMENT {#hex13010-sec-0027} =========================== The data that support the findings of this study are available from the corresponding author upon reasonable request. | High | [
0.6899441340782121,
30.875,
13.875
]
|
Q: Entity Framework: The context is being used in Code First mode with code that was generated from an EDMX file
I am developing a WPF application with the EF 6 database-first approach. I have one project in my solution, and whenever I run it this error appears:
The context is being used in Code First mode with code that was generated from an EDMX file for either Database First or Model First development. This will not work correctly. To fix this problem do not remove the line of code that throws this exception. If you wish to use Database First or Model First, then make sure that the Entity Framework connection string is included in the app.config or web.config of the start-up project. If you are creating your own DbConnection, then make sure that it is an EntityConnection and not some other type of DbConnection, and that you pass it to one of the base DbContext constructors that take a DbConnection. To learn more about Code First, Database First, and Model First see the Entity Framework documentation here: http://go.microsoft.com/fwlink/?LinkId=394715
A: My mistake was using a standard connection string in the constructor:
Server=test\test; Database=DB; User Id=test_user; Password=test
but Entity Framework needs a different format:
metadata=res://*/DBModel.csdl|res://*/DBModel.ssdl|res://*/DBModel.msl;provider=System.Data.SqlClient;provider connection string="data source=test\test;initial catalog=DB;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework" providerName="System.Data.EntityClient"
Edit: Changed code to be formatted as code so it's easier to read.
A: EF makes assumptions based on the presence or absence of a metadata section in the connection string. If you receive this error you can add the metadata section to the connection string in your config file. E.g. if your connection string looks like this:
<add name="MyModel" connectionString="data source=SERVER\INSTANCE;initial catalog=MyModel;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework" providerName="System.Data.SqlClient" />
Prepend metadata=res://*/MyModel.csdl|res://*/MyModel.ssdl|res://*/MyModel.msl; so that it looks like this:
<add name="MyModel" connectionString="metadata=res://*/MyModel.csdl|res://*/MyModel.ssdl|res://*/MyModel.msl;data source=SERVER\INSTANCE;initial catalog=MyModel;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework" providerName="System.Data.SqlClient" />
A: One thing you can do (if it is Database First): open the .edmx [Diagram] -> right click -> "Update Model from database" and see whether the "Add", "Refresh" and "Delete" tabs appear. If they don't, your connection is probably broken and Visual Studio will instead show the dialog for creating a new connection string. =)
0.622727272727272,
34.25,
20.75
]
|
PC Insanity May Mean The End Of Universities Once upon a time, universities were institutions dedicated to the pursuit of truth and the transmission of the highest values of our civilization. Today, most are dedicated to the destruction of those values. It is past time to call them to account. People used to talk about the ends of the university and how the academic establishment was failing its students. Today, more and more people are talking about the end of the university, the idea being that it is time to think about closing them rather than reforming them. Last month at a conference in London, the distinguished British philosopher Sir Roger Scruton added his voice to this chorus when responding to a questioner who complained of the physical violence meted out to conservative students at Birkbeck University. There were two possible responses to this situation, Sir Roger said. One was to start competing institutions, outside the academic establishment, that welcomed conservative voices. The other possibility was “get rid of universities altogether.” That response was met with enthusiastic applause. Sir Roger went on to qualify his recommendation, noting that a modern society required institutions to pursue science and engineering. But the humanities, which at most colleges and universities have devolved into cesspools of identity politics and grievance studies, should be starved of funding and ultimately shut down. It’s an idea that is getting more and more traction. In a remarkable essay in Quillette titled “After Academia,” Allen Farrington summed up the growing consensus. “We need to stop wringing our hands over how to save academia and acknowledge that its disease is terminal.” Is he right? It is too soon to say for sure. But if so, Farrington is correct that its demise “need not be cause for solemnity.” On the contrary, the end of academia “can inspire celebration,” because it could “allow us to shift our energies away from the abject failure of modern education and to refocus on breathing new life into the classical alternative.” A huge amount of attention and public anxiety has been expended on the plight of free speech on campus. Every season the situation seems to get a little worse. Guest speakers are routinely shouted at, de-platformed, or disinvited. Students and teachers alike are bullied into silence or craven apology by self-appointed virtue-crats in college administrations and among designated victim groups among the students. But the issue isn’t really, or not only, free speech. Bret Weinstein, a former biology professor, was hounded out of Evergreen State College when he objected to a “Day of Absence” rally that insisted that all whites stay off campus for a day. Since then, he has been frequently invited to talk about free speech on college campuses. But he notes that the real crisis in education isn’t about free speech. Rather, it is about “a breakdown in the basic logic of civilization.” Academia is the crucible, the engine room of this rot. But the breakdown of which Weinstein speaks isn’t confined to college campuses. The revolutionary intolerance that has made college campuses so inhospitable to free expression and the impulses of civilization has also deeply affected the woke mandarins of social media and Big Tech. It has made serious inroads into the HR departments of the Fortune 500 and elsewhere in the world of business. 
And it has insinuated itself into the values and practices of most governmental agencies, many of which have yet to meet a politically correct left-wing cause they do not embrace. The economist Herb Stein once observed that what cannot go on forever, won't. In the coming decade, we will see many so-called liberal-arts colleges close their doors. We will also see more alternatives to traditional colleges. Many of these will be on-line. Some will be local, ad hoc ventures. All will be rebelling against the poisonous hand of identity politics.
0.615183246073298,
29.375,
18.375
]
|
Sexual orientation and mental health: results from a community survey of young and middle-aged adults. Community surveys have reported a higher rate of mental health problems in combined groups of homosexual and bisexual participants, but have not separated these two groups. To assess separately the mental health of homosexual and bisexual groups compared with heterosexuals. A community survey of 4824 adults was carried out in Canberra, Australia. Measures covered anxiety, depression, suicidality, alcohol misuse, positive and negative affect and a range of risk factors for poorer mental health. The bisexual group was highest on measures of anxiety, depression and negative affect, with the homosexual group falling between the other two groups. Both the bisexual and homosexual groups were high on suicidality. Bisexuals also had more current adverse life events, greater childhood adversity, less positive support from family, more negative support from friends and a higher frequency of financial problems. Homosexuals reported greater childhood adversity and less positive support from family. The bisexual group had the worst mental health, although homosexual participants also tended to report more distress. | Mid | [
0.627551020408163,
30.75,
18.25
]
|
{{template "member/top.html" .}}
<div class="m-b-md"> <h3 class="m-b-none">{{.userInfo.Username}}, {{msg . "welcomeToLeanote"}}.</h3></div>
<section class="panel panel-default">
<div class="row m-l-none m-r-none bg-light lter">
<div class="col-sm-6 col-md-3 padder-v b-r b-light">
<span class="fa-stack fa-2x pull-left m-r-sm"> <i class="fa fa-circle fa-stack-2x text-warning"></i> <i class="fa fa-file-o fa-stack-1x text-white"></i> </span>
<a class="clear" href="javascript:;"> <span class="h3 block m-t-xs"><strong>{{.countNote}}</strong></span> <small class="text-muted text-uc">{{msg . "note"}}</small> </a>
</div>
<div class="col-sm-6 col-md-3 padder-v b-r b-light">
<span class="fa-stack fa-2x pull-left m-r-sm"> <i class="fa fa-circle fa-stack-2x text-info"></i> <i class="fa fa-bold fa-stack-1x text-white"></i> </span>
<a class="clear" href="javascript:;"> <span class="h3 block m-t-xs"><strong>{{.countBlog}}</strong></span> <small class="text-muted text-uc">{{msg . "blog"}}</small> </a>
</div>
</div>
</section>
<!-- Latest activity -->
<section class="panel panel-default">
<h4 class="font-thin padder"> {{msg . "leanoteEvents"}} </h4>
<ul class="list-group" id="eventsList"></ul>
</section>
<!-- <section class="panel panel-default"> <form> <textarea class="form-control no-border" rows="3" placeholder="Suggestions to leanote"></textarea> </form> <footer class="panel-footer bg-light lter"> <button class="btn btn-info pull-right btn-sm"> POST </button> <ul class="nav nav-pills nav-sm"> </footer> </section> -->
{{template "member/footer.html" .}}
<script>
$(function() {
    // Latest leanote activity, fetched from the leanote blog
    var url = "https://leanote.com/blog/listCateLatest/5446753cfacfaa4f56000000";
    function renderItem(item) {
        return '<li class="list-group-item"><p><a target="_blank" href="http://leanote.com/blog/post/' + item.NoteId + '">' + item.Title + '</a></p><small class="block text-muted"><i class="fa fa-clock-o"></i> ' + goNowToDatetime(item.PublicTime) + '</small></li>';
    }
    $.ajax({
        dataType: "jsonp", // cross-domain request, so dataType must be "jsonp"
        url: url,
        type: "GET",
        jsonp: "callback",
        jsonpCallback: "jsonpCallback",
        success: function(data) {
            if(typeof data == "object" && data.Ok) {
                var list = data.List;
                var html = "";
                for(var i = 0; i < list.length; ++i) {
                    var item = list[i];
                    html += renderItem(item);
                }
                $("#eventsList").html(html);
            }
        }
    });
});
</script>
{{template "member/end.html" .}}
0.519015659955257,
29,
26.875
]
|
Q: Use Values from Raster Statistics (e.g. sum of all cells) in Raster Calculator I need to do calculations on raster layers which require the sum of all cells of the raster. I've used the zonal statistics tool for that but it gets a little annoying to do those extra steps (run the plugin, open the Attribute table, copy the sum...). Is there a way I can calculate and use the values of raster statistics directly inside a raster calculator? Even better would be to calculate those values from a raster created in the same step: A*B*C / Sum (A*B*C) A: A calculation like A*B*C / Sum (A*B*C) will perform the A*B*C operation twice. To avoid that duplicate effort instead do it in two steps X = A*B*C X / Sum(X) Storing A*B*C has an immediate payoff. To implement the Sum operation, use a zonal summary operator with the entire raster as the zone. That requires placing a constant, non-null value at every non-null cell of the raster. A simple way to accomplish this is to equate the raster with itself, thus: ZonalSum(X, X==X) (The syntax for ZonalSum will depend on the platform and the version of the software.) The full workflow therefore is X = A*B*C X / ZonalSum(X, X==X) | High | [
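A minimal sketch of the same two-step idea in NumPy, for readers who would rather script it than use a raster calculator. This is not the syntax of any particular GIS; it assumes the three rasters have already been read into float arrays of the same shape, with NaN marking null cells, and the function and variable names are hypothetical.
import numpy as np

def normalize_by_global_sum(a, b, c):
    # Step 1: compute the product once and keep it (avoids evaluating A*B*C twice).
    x = a * b * c
    # Step 2: the "zonal sum" with the whole raster as a single zone; nansum skips null cells.
    total = np.nansum(x)
    # Null (NaN) cells stay null in the result; the non-null cells now sum to 1.
    return x / total

# Tiny worked example with hypothetical 2x2 rasters (NaN marks a null cell).
a = np.array([[1.0, 2.0], [np.nan, 4.0]])
b = np.ones((2, 2))
c = np.ones((2, 2))
print(normalize_by_global_sum(a, b, c))  # [[1/7, 2/7], [nan, 4/7]]
The same pattern carries over to the GIS workflow above: store the product raster first, then divide it by its zonal sum over the whole raster.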
0.6560364464692481,
36,
18.875
]
|
Anime Problems: An interview with Terumi Nishii, Part 1 Terumi Nishii is an animation director and character designer who has had a long career working on such hits as One Piece, Pokémon, and JoJo's Bizarre Adventure: Diamond Is Unbreakable. In April 2019, Terumi made headlines when she tweeted in English about difficult working conditions within the anime industry, flat out telling her audience of mostly foreigners, "No matter how much you like anime, it is not advisable to come to Japan and participate in anime work. Because the animation industry is usually overworked". In Part 1 of our interview with Terumi, we zero in on some of the biggest problems facing animators today, with some possible solutions and rays of hope to be revealed in Part 2, printed in the December 2019 issue currently on newsstands. What made you want to work in the anime industry? Actually, I originally wanted to be a manga artist. I was working with someone from the Shonen Jump editorial team to get my work published, but then I saw Evangelion and I decided I wanted to make anime instead. It just looked cooler. What were the conditions like when you first entered the industry? It was really fun. It was the best time in my career and in my life. I was in sort of a training program/test period in an anime studio called Cockpit. I could do the thing I loved and get some money for it. It wasn't enough to live on---I was only getting paid 2800 yen a month (about US$25.00)---but I was only doing tracing of other people's drawings. A few months later I was offered a job there for around 50000 yen a month (about US$450) doing phone operator work, sales, and some project managing. How did your career develop from there? I worked under Kagawa Hisashi, the animation director of Sailor Moon, for a while. Then I worked under the My Hero Academia character designer and animation director Yoshihiko Umakoshi for 10 years. Next, I worked for director Kunihiko Ikuhara on Penguindrum (2011) and that's when I started working more independently and getting bigger projects. Recently, I worked on the Netflix version of Saint Seiya, but it's probably been JoJo's Bizarre Adventure that has gotten me the most international attention. What made you want to speak out recently about the negative side of the anime industry? Around 2014, I was working on the Mushishi TV anime, but it was taken off the air because they didn't meet the production schedule and had to take a whole season off to catch up. I felt really bad about that. Then I started looking around the industry and saw that things like that were happening more and more. Shows were not able to meet their deadlines. And that's when I started to realize there was a problem. JoJo's began having similar issues as well around 2015-2016, and it just felt like there weren't enough people who could do the work sufficiently. The industry was getting into a situation where no one could even make storyboards correctly and the big studios could no longer find outside vendors who could do the work. What do you think are the root causes of these staffing problems? There's a situation now where there are more and more anime shows than there used to be, and you are not allowed to reduce the quality, so there's a lot of overwork. There didn't use to be so much outsourcing in the industry before, but now there is lots of it. And that has increased the number of people who have to work on each project.
In the past, it might take two months to complete a job, but these days you have double the number of people working to try and complete a project in one month. So if you have a series that goes from a 12-episode season, to 24 episodes, and 36 episodes, and keeps continuing then it expands the number of people who are working on the project. It requires more management and just makes everything more complicated. Instead of an anime project being made in one studio, it is outsourced to 10 different studios and everyone is working on multiple projects at the same time. And if you have to keep the quality high, while trying to shrink the timeline down to complete projects, then that just makes the job tougher and tougher. The project managers really can’t sleep. They are working hard 24/7. You tweeted that “with the increase of the number of works in recent years, some people have broken mind and body.” Do you have more specific examples? Two of my sempai died in their 40s and I definitely think it was because of overwork. A lot of people have had aneurysms or heart attacks because of overwork. Lots of people working on projects have to be stopped because of doctor’s orders telling them they need to rest. I know someone who was working as a project line manager who had an issue with a blood clot in his leg and couldn’t walk and had to take time off. There are cases where people die, and those often make the news, but there are a lot of cases that you don’t hear about where people are overworked and have to take a break for medical reasons. Terumi Nishii Links Twitter (English): www.twitter.com/nishiiterumi1 Patreon: www.patreon.com/NISHII_Terumi You can read Part 2 of this interview in the December 2019 issue, which is on newsstands now through November 5th, and available online here. You can also get a print copy of Part 1 in the October 2019 issue. | Mid | [
0.616740088105726,
35,
21.75
]
|
BBP SEED Fifth Annual Scholarship Award At the recent Senior Awards Ceremony, SEED Chairman Bob Draffin presented SEED's fifth annual scholarship to this year's winner, Phillip Tubiolo. Phil wrote an excellent essay in which he suggested ways that SEED could help promote more scientific enrichment programs and allow students to apply directly for SEED Funding Projects. The BBP SEED Foundation wishes Phil the best of luck as he pursues a major in biomedical engineering at Stony Brook University.
0.71625344352617,
32.5,
12.875
]
|
/* * Copyright (C) 2015 Apple Inc. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS'' * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF * THE POSSIBILITY OF SUCH DAMAGE. */ #pragma once #ifndef UIScriptContext_h #define UIScriptContext_h #include <JavaScriptCore/JSRetainPtr.h> #include <wtf/HashMap.h> #include <wtf/RefPtr.h> #include <wtf/text/WTFString.h> namespace WebCore { class FloatRect; } namespace WTR { class UIScriptController; class UIScriptContextDelegate { public: virtual void uiScriptDidComplete(const String& result, unsigned callbackID) = 0; }; const unsigned firstNonPersistentCallbackID = 1000; typedef enum { CallbackTypeInvalid = 0, CallbackTypeWillBeginZooming, CallbackTypeDidEndZooming, CallbackTypeDidShowKeyboard, CallbackTypeDidHideKeyboard, CallbackTypeDidEndScrolling, CallbackTypeDidStartFormControlInteraction, CallbackTypeDidEndFormControlInteraction, CallbackTypeDidShowForcePressPreview, CallbackTypeDidDismissForcePressPreview, CallbackTypeNonPersistent = firstNonPersistentCallbackID } CallbackType; class UIScriptContext { WTF_MAKE_NONCOPYABLE(UIScriptContext); public: UIScriptContext(UIScriptContextDelegate&); ~UIScriptContext(); void runUIScript(const String& script, unsigned scriptCallbackID); void requestUIScriptCompletion(JSStringRef); // For one-shot tasks callbacks. unsigned prepareForAsyncTask(JSValueRef taskCallback, CallbackType); void asyncTaskComplete(unsigned taskCallbackID); // For persistent callbacks. 
unsigned registerCallback(JSValueRef taskCallback, CallbackType); JSValueRef callbackWithID(unsigned callbackID); void unregisterCallback(unsigned callbackID); void fireCallback(unsigned callbackID); unsigned nextTaskCallbackID(CallbackType); JSObjectRef objectFromRect(const WebCore::FloatRect&) const; JSGlobalContextRef jsContext() const { return m_context.get(); } private: JSRetainPtr<JSGlobalContextRef> m_context; bool hasOutstandingAsyncTasks() const { return !m_callbacks.isEmpty(); } bool currentParentCallbackIsPendingCompletion() const { return m_uiScriptResultsPendingCompletion.contains(m_currentScriptCallbackID); } bool currentParentCallbackHasOutstandingAsyncTasks() const; void tryToCompleteUIScriptForCurrentParentCallback(); struct Task { unsigned parentScriptCallbackID { 0 }; JSValueRef callback { nullptr }; }; HashMap<unsigned, Task> m_callbacks; HashMap<unsigned, JSStringRef> m_uiScriptResultsPendingCompletion; UIScriptContextDelegate& m_delegate; RefPtr<UIScriptController> m_controller; unsigned m_currentScriptCallbackID { 0 }; unsigned m_nextTaskCallbackID { 0 }; }; } #endif // UIScriptContext_h | Mid | [
0.5640000000000001,
35.25,
27.25
]
|
Abstract After a hole injection layer is inserted into a polymer light-emitting diode (PLED), the positive polaron is easily injected into the polymer layer. An applied electrical field drives the positive polaron to approach and collide with the nonemissive triplet exciton. The collision between the positive polaron and neutral triplet exciton induces the exciton to emit light. Based on this physical picture, the maximum quantum efficiency of the PLEDs, 61.6%, is consistent with the experimental result of 60%. With the help of an external magnetic field, a structure of PLEDs with high electroluminescent efficiency is designed. Received 27 July 2009. Accepted 03 September 2009. Published online 13 October 2009. Acknowledgments: We thank Shi-Yang Liu and Xin Sun for helpful discussions. Sheng Li also thanks R. B. Tao for a visit to the Center for Quantum Manipulation at Fudan University. This work was supported by the National Science Foundation of China under Grant No. 20804039 and the Zhejiang Provincial Natural Science Foundation of China under Grant No. Y4080300.
0.672064777327935,
31.125,
15.1875
]
|
760 F.Supp.2d 779 (2011) Brandelyn ZAR, Plaintiff, v. Officer Jason PAYNE, et al., Defendants. Case No. 09-cv-249. United States District Court, S.D. Ohio, Eastern Division. January 12, 2011. *781 Phillip Douglas Lehmkuhl, Mt. Vernon, OH, for Plaintiff. Kenneth Eugene Harris, Carl Andrew Anthony, Freund, Freeze & Arnold, Columbus, OH, for Defendants. OPINION AND ORDER ALGENON L. MARBLEY, District Judge. I. INTRODUCTION This matter is before the Court on the Motion of Defendants Jason Payne and Justin Trowbridge for Summary Judgment. Motion, Doc. No. 17. Defendants request summary judgment on all claims against them. Complaint, Doc. No 2. For the reasons that follow, Defendants' motion is GRANTED in part and DENIED in part. II. FACTUAL BACKGROUND A. April 25, 2008 At approximately 10:30 p.m. on April 25, 2008, Htut Zar ("Mr. Zar") called the Mt. Vernon Police Department and waited outside of his house at 308 East Burgess Street in Mt. Vernon, Ohio for them to arrive. Officers Jason Payne ("Payne") and Justin Trowbridge ("Trowbridge") of the Mt. Vernon Police Department responded to Mr. Zar's call and arrived separately. Upon approaching Mr. Zar, Trowbridge noticed a substance on Mr. Zar's right shoulder and neck that appeared to him to be blood. Both parties agree that Mr. Zar explained that the substance was chocolate but disagree as to the particulars of Mr. Zar's explanation. The Defendants assert, supported by their own sworn affidavits, that Mr. Zar informed the officers that he and his wife, Brandelyn Zar ("Ms. Zar"), had had an argument during which Ms. Zar had thrown a glass dish containing ice cream and chocolate sauce at him. According to the Defendants, Mr. Zar told the officers that he was holding his and Ms. Zar's infant son in his arms when Ms. Zar threw the glass dish at him. Mr. Zar told Trowbridge that he was concerned for the safety of his son, then in the house with Ms. Zar. The Plaintiff asserts, supported by a sworn affidavit from Mr. Zar, that Mr. Zar told the officers that his wife had thrown a paper cup containing a Wendy's Frosty at a window of the house during an argument. Some of the Frosty had spattered on Mr. Zar. He further asserts in his affidavit that he did not tell the officers that he was concerned for the safety of his child and that he had only called the police for help in getting some of his clothing from the house and for advice on where to spend the night. While the officers were speaking with Mr. Zar, Ms. Zar was throwing Mr. Zar's belongings onto the front lawn and yelling that she wanted Mr. Zar to leave. When Payne tried to approach Ms. Zar, she closed her door and turned off her interior lights. Payne then knocked on her door. Ms. Zar opened the door in response to the knock with her sister, Kaitlyn Meadway ("Meadway"), standing behind her. Meadway was holding the Zar child in her arms; she and the child were present during the entire encounter. Ms. Zar told Payne that she did not want to talk to the police or to her husband and shut the door. Trowbridge joined Payne at the door and knocked. Ms. Zar again opened the door, stood in her doorway, told the officers she did not want to speak with them, and tried to shut the door. According to Ms. Zar, and supported by her own affidavit and the affidavits of Mr. Zar and Meadway, before Ms. Zar could *782 close the door, Trowbridge reached into her house, grabbed her wrist, and jerked her outside. Payne grabbed her other wrist and helped Trowbridge pull her from her home. 
At no time did either officer warn Ms. Zar that she was under arrest or would be arrested. Meadway has stated that the officers' decision to arrest Ms. Zar appeared to her to be the result of an epithet Ms. Zar directed at the officers just before they grabbed her. The Defendants allege that Ms. Zar was agitated, cursing at the officers, and pointing her finger in Trowbridge's face. Trowbridge decided to place Ms. Zar under arrest for impeding their investigation into the safety of the infant. He informed Ms. Zar that she was under arrest and tried to take hold of her right wrist to secure it. The accounts of the arrest from this point forward diverge even between the Defendants. Trowbridge in his Supplementary Report recounted that Ms. Zar pulled her wrist back, closed her fist, and punched him in the mouth. The officers again grabbed Ms. Zar and moved her away from the house and towards a patrol car. Because Ms. Zar was flailing her body, the officers took her to the ground in their attempt to regain control. Once they had gotten her to the ground, the officers secured her hands behind her back, handcuffed her, and advised her she was under arrest. While trying to walk her to a patrol car, Trowbridge picked Ms. Zar up by the right arm, allowing her to flip around and kick Payne in the throat and face. When this happened, she slipped from Trowbridge's grip and fell to the ground. Payne reported an account that differs in the details but is materially similar to Trowbridge's. He recounts that Ms. Zar punched Trowbridge in the face after they had both grabbed one of her arms and were escorting her away from the house. The officers continued to escort Ms. Zar towards the patrol car, and Ms. Zar continued to struggle by moving her legs and arms around. Trowbridge then picked Ms. Zar up, she kicked Payne in the throat, and Trowbridge dropped her on the ground. Once Ms. Zar was on the ground, the officers handcuffed her. The Plaintiff's version, based on the recollections of Meadway and Mr. Zar,[1] presents the picture of two large police officers forcefully throwing around a small woman; Meadway even avers that the officers savagely beat Ms. Zar. The Plaintiff's witnesses recount that the officers tossed Ms. Zar into her yard, face down in the mud; jumped on top of her; and handcuffed her. The officers did not tell her she was under arrest until after she was handcuffed. The Plaintiff admits that she kicked Payne but states that it happened accidentally when the officers picked her up into the air. When Payne was kicked, the officers dropped Mrs. Zar back into the mud, causing her great pain. The Plaintiff asserts that at no time did she punch Trowbridge in the face.[2] The Defendants allege that both officers sustained physical injuries and tears in their clothing as a result of their encounter with Ms. Zar. Ms. Zar also claims that she *783 sustained physical injuries as a result of the encounter. B. Arrest and Conviction Ms. Zar was arrested for persisting disorderly conduct under Mt. Vernon Ordinance 509.03(E), with resisting arrest under Mt. Vernon Ordinance 525.09, and with assault on a police officer. With the assistance of counsel, the Plaintiff entered a plea of no contest to the charge of persisting disorderly conduct on January 29, 2009. She was found guilty of that charge and sentenced to a fine of $150 plus court costs. On February 20, 2009, the Plaintiff pleaded guilty to one count of obstructing official business in violation of Ohio Rev. Code. Ann. 
§ 2921.31(A) in exchange for the dismissal of the assault charge. The Plaintiff was sentenced on March 17, 2009 to three years of Community Control, thirty days in the Knox County Jail, submission to an out-patient drug and alcohol treatment program, and submission to drug and alcohol use monitoring. The balance of the Plaintiff's jail sentence was suspended on March 23, 2009. C. This Lawsuit The Plaintiff filed her complaint before this Court on April 1, 2009 against Defendants Jason Payne and Justin Trowbridge. Complaint, Doc. No. 2. She alleges two claims under the Fourth and Fourteenth Amendments to the United States Constitution: (1) that the Defendants unlawfully violated her rights by arresting her in her home without a warrant; and (2) that the Defendants employed unnecessary and unjustified physical force against her. The Plaintiff is seeking compensatory damages, punitive damages, and attorneys' fees and costs. The Defendants filed their Motion for Summary Judgment on March 29, 2010 requesting dismissal of all claims. Motion, Doc. No. 17. This motion has been fully briefed and argued and is now ripe for decision by this Court. III. STANDARD OF REVIEW Summary judgment is proper if "there is no genuine issue as to any material fact [such that] the movant is entitled to judgment as a matter of law." Fed.R.Civ.P. 56(c). But "summary judgment will not lie if the ... evidence is such that a reasonable jury could return a verdict for the non-moving party." Anderson v. Liberty Lobby, Inc., 477 U.S. 242, 248, 106 S.Ct. 2505, 91 L.Ed.2d 202 (1986). In considering a motion for summary judgment, a court must construe the evidence in the light most favorable to the non-moving party. Matsushita Elec. Indus. Co. v. Zenith Radio Corp., 475 U.S. 574, 587, 106 S.Ct. 1348, 89 L.Ed.2d 538 (1986). The movant therefore has the burden of establishing that there is no genuine issue of material fact. Celotex Corp. v. Catrett, 477 U.S. 317, 322-23, 106 S.Ct. 2548, 91 L.Ed.2d 265 (1986); Barnhart v. Pickrel, Schaeffer & Ebeling Co., 12 F.3d 1382, 1388-89 (6th Cir.1993). The central inquiry is "whether the evidence presents a sufficient disagreement to require submission to a jury or whether it is so one-sided that one party must prevail as a matter of law." Anderson, 477 U.S. at 251-52, 106 S.Ct. 2505. But the non-moving party "may not rest merely on allegations or denials in its own pleading." Fed.R.Civ.P. 56(e)(2); see also Celotex, 477 U.S. at 324, 106 S.Ct. 2548; Searcy v. City of Dayton, 38 F.3d 282, 286 (6th Cir.1994). The non-moving party must present "significant probative evidence" to show that there is more than "some metaphysical doubt as to the material facts." Moore v. Philip Morris Co., 8 F.3d 335, 339-40 (6th Cir.1993). When ruling on a motion for summary judgment, a district court is not required to sift through the entire record to drum up facts that might support the nonmoving party's claim. InterRoyal Corp. v. Sponseller, *784 889 F.2d 108, 111 (6th Cir.1989). Instead, the Court may rely on the evidence called to its attention by the parties. Id. IV. LAW AND ANALYSIS The Plaintiff has sued under 42 U.S.C. § 1983, which "by its terms does not create any substantive rights but rather `merely provides remedies for deprivations of rights established elsewhere.'" Radvansky v. City of Olmsted Falls, 395 F.3d 291, 302 (6th Cir.2005) (quoting Gardenhire v. Schubert, 205 F.3d 303, 310 (6th Cir.2000)). 
To prevail on her § 1983 claims, the Plaintiff "`must establish that a person acting under color of state law deprived [her] of a right secured by the Constitution or laws of the United States.'" Id. (quoting Waters v. City of Morristown, 242 F.3d 353, 358-59 (6th Cir.2001)). The Defendants have moved for summary judgment on both of the claims against them; the Plaintiff opposes the motion in its entirety. As a preliminary matter, the Court notes that there is some confusion in the briefs as to whether the Plaintiff's First Claim is for warrantless arrest alone or for the warrantless arrest and warrantless entry into the Plaintiff's home. The First Claim of the Complaint reads as follows: "The Conduct of Defendants Payne and Trowbridge of arresting Plaintiff Zar in her home without an arrest warrant constitutes a violation of Zar's rights guaranteed by the Fourth and Fourteenth Amendments to the U.S. Constitution and 42 U.S.C. Section 1983." Both parties have submitted arguments with respect to the warrantless arrest and the warrantless entry claims. The Court therefore concludes that the Defendants have received adequate notice of both allegations under the "course of proceedings" test applied to ambiguous § 1983 claims. Cummings v. City of Akron, 418 F.3d 676, 681 (6th Cir.2005) ("We apply a `course of the proceedings' test to determine whether defendants in a § 1983 action have received notice of the plaintiff's claims where the complaint is ambiguous.") (quoting Moore v. City of Harriman, 272 F.3d 769, 774 (6th Cir.2001) (en banc) ("Subsequent filings in a case may rectify deficiencies in the initial pleadings.")). The Court will address the warrantless arrest, warrantless entry, and excessive use of force claims in turn. A. Warrantless Arrest The Defendants argue that the Plaintiff's warrantless arrest claim is barred because the Plaintiff's state court convictions are entitled to preclusive effect in this court and establish as a matter of law that there was probable cause for her arrest. The Sixth Circuit has ruled that pleas of guilty or no contest in a state court preclude a person from later bringing a § 1983 action alleging unlawful arrest in violation of the Fourth Amendment. Walker v. Schaeffer, 854 F.2d 138 (6th Cir.1988). The two plaintiffs in Walker were arrested for disorderly conduct and reckless driving, respectively, under Ohio law. Id. at 140. Both of the plaintiffs entered no-contest pleas in open court with the assistance of attorneys; the state court found them guilty and sentenced them to a fine. Id. In their federal § 1983 action, they asserted that the arresting officers violated their rights under the Fourth Amendment "by arresting them without probable cause and then placing them in jail." Id. at 140. The district court denied the defendants' qualified immunity defense raised in their motion for summary judgment, but the Sixth Circuit reversed, concluding that "the pleas in state court made by [the plaintiffs] and the finding of guilt and imposition of fines by that court estop plaintiffs from now asserting in federal *785 court that the defendant police officers acted without probable cause." Id. at 142. See also Daubenmire v. City of Columbus, 507 F.3d 383, 389-90 (6th Cir.2007) (declining to overrule Walker and holding that "Plaintiffs are estopped by their pleas in state court from now challenging the reasonableness of their arrest"). Walker compels the same result in this case. 
Much like the plaintiffs in Walker, the Plaintiff in the case sub judice was arrested and convicted under Ohio law following her plea of no contest to the charge of persisting disorderly conduct and her plea of guilty to the charge of obstructing official business. Through these pleas, the Plaintiff has admitted to the factual basis for her arrest, see Walker, 141-42, and now cannot challenge the existence of probable cause. The Defendants' Motion is accordingly GRANTED and the Plaintiff's claim for unlawful arrest (First Claim) is DISMISSED.[3] B. Qualified Immunity The Defendants argue that the doctrine of qualified immunity forecloses the Plaintiff's claims for warrantless entry into her home and use of excessive force. According to the doctrine of qualified immunity, "government officials performing discretionary functions are generally shielded from liability for civil damages insofar as their conduct does not violate clearly established statutory or constitutional rights of which a reasonable person should have known." Harlow v. Fitzgerald, 457 U.S. 800, 818, 102 S.Ct. 2727, 73 L.Ed.2d 396 (1982). Qualified immunity involves the following three-step inquiry: First, we determine whether, based upon the applicable law, the facts viewed in the light most favorable to the plaintiffs show that a constitutional violation has occurred. Second, we consider whether the violation involved a clearly established constitutional right of which a reasonable person would have known. Third, we determine whether the plaintiff has offered sufficient evidence to indicate that what the official allegedly did was objectively unreasonable in light of the clearly established constitutional rights. If the answer to all three questions is "yes," qualified immunity is not proper. Champion v. Outlook Nashville, Inc., 380 F.3d 893, 901 (6th Cir.2004) (citations omitted). The Court need not consider these questions in a particular order. Pearson v. Callahan, 555 U.S. 223, 129 S.Ct. 808, 821, 172 L.Ed.2d 565 (2009) ("[T]here will be cases in which a court will rather quickly and easily decide that there was no violation of clearly established law before turning to the more difficult question [of] whether the relevant facts make out a constitutional question at all."). The second prong requires the Court to determine whether a right was "clearly established" by examining "whether it would be clear to a reasonable officer *786 that his conduct was unlawful in the situation he confronted." Saucier v. Katz, 533 U.S. 194, 202, 121 S.Ct. 2151, 150 L.Ed.2d 272 (2001). Because most rights are "clearly established" at some level of generality, the analysis of whether a right is "clearly established" must be "undertaken in light of the specific context of the case, not as a broad general proposition." Floyd v. City of Detroit, 518 F.3d 398, 405 (6th Cir.2008). Summary judgment should be denied if the undisputed facts, taken in the light most favorable to the plaintiff, show that the defendants violated clearly established rights or if, under the third prong, there is a factual dispute "such that it cannot be determined before trial whether the defendant did acts that violate clearly established rights." Poe v. Haydon, 853 F.2d 418, 426 (6th Cir.1988) (citing Green v. Carlson, 826 F.2d 647, 650-52 (7th Cir. 1987)); see also Vakilian v. 
Shaw, 335 F.3d 509, 515 (6th Cir.2003) (stating that summary judgment on qualified immunity grounds is improper "if genuine issues of material fact exist as to whether the officer committed acts that would violate a clearly established right"); Sova v. City of Mt. Pleasant, 142 F.3d 898, 903 (6th Cir.1998) ("Where, as here, the legal question of qualified immunity turns upon which version of the facts one accepts, the jury, not the judge, must determine liability. This is especially true considering that the District Court must view the facts in the light most favorable to the plaintiff on a motion for summary judgment.") (internal citations omitted). Against this backdrop and as explicated below, the Court concludes, first, that the facts taken in the light most favorable to the Plaintiff show that the Defendants violated her clearly established right to be free of warrantless entry into her home and, second, that factual disputes preclude a determination that the Defendant is entitled to qualified immunity on the excessive force claim. The Court therefore DENIES summary judgment with respect to Defendants' qualified immunity defense as applied to the Plaintiff's Fourth Amendment claims for warrantless entry and excessive use of force. 1. Warrantless Entry The Plaintiff alleges that the Defendants violated her right to be free from warrantless entry when they arrested her while she was inside her home. Defendants argue that they are entitled to qualified immunity because they were constitutionally permitted to effectuate their arrest of the Plaintiff in her doorway; the Plaintiff argues that no such right exists. In the alternative, the Defendants argue that there were exigent circumstances for their entrance into the Plaintiff's home. The Defendants specifically invoke the need to assure the safety of the Zar child and the need to assure their own safety after the Plaintiff punched Defendant Trowbridge in the face. The Plaintiff contends that reasonable officers in the Defendants' position would not have concluded that exigent circumstances existed because Mr. Zar never told the Defendants that his child was in danger, the officers could see the child safely in Meadway's arms during the encounter, and the Plaintiff never punched Defendant Trowbridge. a. Whether a Constitutional Violation Has Occurred "A `person may not be arrested at home without a warrant, regardless of the existence of probable cause, absent exigent circumstances.'" Estate of Bing v. City of Whitehall, 456 F.3d 555, 564 (6th Cir.2006) (quoting United States v. Bradley, 922 F.2d 1290, 1293 (6th Cir.1991), overruled on other grounds by United States v. McGlocklin, 8 F.3d 1037, 1047 (6th Cir.1993) (en banc)); see also United States v. Rohrig, 98 F.3d 1506, 1513 (6th *787 Cir.1996) ("`[S]earches and seizures inside a home without a warrant are presumptively unreasonable.'") (quoting Payton v. New York, 445 U.S. 573, 586, 100 S.Ct. 1371, 63 L.Ed.2d 639 (1980)). "Exigent circumstances are situations where real immediate and serious consequences will certainly occur if the police officer postpones action to obtain a warrant." Thacker v. City of Columbus, 328 F.3d 244, 253 (6th Cir.2003) (internal quotations omitted). There are four general categories of exigent circumstances justifying a warrantless entry into a home: "(1) hot pursuit of a fleeing felon, (2) imminent destruction of evidence, (3) the need to prevent a suspect's escape, and (4) a risk of danger to the police or others." Rohrig, 98 F.3d at 1515. 
In addition, the Court may consider three factors in determining whether exigent circumstances existed: "(1) whether the government has demonstrated that the need for immediate action would have been defeated if the police had taken the time to secure a warrant; (2) whether the government's interest is sufficiently important to justify a warrantless search; and (3) whether the defendant's conduct somehow diminished the reasonable expectation of privacy he would normally enjoy." Thorne v. Steubenville Police Officer, 463 F.Supp.2d 760, 772 (S.D.Ohio 2006) (citing Rohrig, 98 F.3d at 1518), aff'g in part 243 Fed.Appx. 157 (6th Cir.2007). The Defendants argue that there was no constitutional violation because the prohibition against warrantless entry into a person's home does not apply to arrests made in a person's doorway; such arrests may be effected solely on the basis of probable cause. The Defendants cite two cases from within the Sixth Circuit in which courts have concluded that an arrest begun in someone's doorway or at the threshold to their house can be considered "an outsidenot insidearrest." United States v. Archibald, 589 F.3d 289, 297 (6th Cir.2009) (doorstep arrests are considered made outside the home); United States v. McLemore, 2006 WL 572353, at *8 (E.D.Wis.2006) (arrest made at doorway of apartment is outside the home). In these cases, unlike in the instant case, the primary question to be answered was the legitimacy of the police officers' search inside the defendants' homes following a threshold arrest. In other cases directly concerning the constitutionality of warrantless doorway arrests, however, the Sixth Circuit has concluded that threshold arrests made without a warrant in the absence of exigent circumstances can be a violation of the Fourth Amendment. Denton v. Rievley, 353 Fed.Appx. 1 (6th Cir. 2009); Cummings v. City of Akron, 418 F.3d 676 (6th Cir.2005); United States v. Saari, 272 F.3d 804 (6th Cir.2001); Hameline v. Wright, 2008 WL 2696920, 2008 U.S. Dist. LEXIS 49643 (W.D.Mich.2008). Procedurally and factually similar to the matter now before the Court, these cases are ultimately more persuasive. In Cummings, two police officers were investigating a domestic disturbance call at 1115 Peerless Avenue in Akron, Ohio. Cummings, 418 F.3d at 679. The plaintiff's girlfriend told them that the plaintiff was at his house at 1125 Peerless Avenue, and the officers went there to question him. Id. The plaintiff's front door consisted of an outside screen door and an inside entry door. Id. One of the officers opened the outside screen door, knocked on the inside entry door, and waited for the plaintiff. Id. The plaintiff initially came to a window from inside his home and only came to the front door after the officers requested that he do so. Id. He partially opened his inside door and denied the officers' request to enter his house. Id. While speaking with the plaintiff, one of the officers smelled the odor of marijuana emanating *788 from inside the plaintiff's home. Id. When asked about it, the plaintiff attempted to close his front door and end the encounter. Id. He was unable to, however, because one of the police officers had placed his foot inside the doorway. Id. The officers then pushed the inside door open and entered the plaintiff's home. Id. The plaintiff filed a § 1983 lawsuit against the officers for, among other things, warrantless entry into his home in violation of the Fourth Amendment. Id. at 680. 
The court granted the defendants' motion for summary judgment on the warrantless entry claim on the ground that defendants were entitled to qualified immunity. Id. The Sixth Circuit reversed, distinguishing United States v. Santana, 427 U.S. 38, 96 S.Ct. 2406, 49 L.Ed.2d 300 (1976). In Santana, the Supreme Court concluded that a person standing in the doorway to her house was in a public place for the purposes of the Fourth Amendment and therefore able to be arrested on the basis of probable cause alone. Id. at 42, 96 S.Ct. 2406. The essential difference between the defendant in Santana and the plaintiff in Cummings was their expectations of privacy as revealed through their behavior. In Santana, the defendant stood in her doorway before the police arrived in full view of the general public. Santana, 427 U.S. at 42, 96 S.Ct. 2406. In Cummings, in contrast, the plaintiff only appeared at his doorway at the command of the officers, never fully opened his door, refused their request to enter his house and tried to end his encounter with them by closing his door; through these actions, he "manifested his intent to keep the inside of his home private." Cummings, 418 F.3d at 685. Because the arrest was initiated after the plaintiff manifested his desire for privacy, the warrantless arrest, unsupported by consent or exigent circumstances, violated the plaintiff's right to be free of warrantless entry into his home. Id. at 686-87. Likewise, in Hameline, the plaintiff in a § 1983 suit opened his inside entry door in response to a knock from the police. Hameline, 2008 WL 2696920, at *1-2, 2008 U.S. Dist. LEXIS 49643, at *4-5. The plaintiff refused the police officer's request to step outside his house, attempted to close his door, and was prevented from doing so by the officer's foot in the doorway. Id. at *2, 2008 U.S. Dist. LEXIS 49643 at *5. The plaintiff tried to back away from the officer into his home, but the officer grabbed his wrist. Id. Because he did not release his grip on the plaintiff, the officer was drawn into the plaintiff's house as the plaintiff tried to back away from the officer. Id. The plaintiff was subsequently arrested inside his living room. Id. at *2, 2008 U.S. Dist. LEXIS 49643 at *6. The plaintiff brought a § 1983 suit for, among other claims, warrantless entry into his home. Id. at *1, 2008 U.S. Dist. LEXIS 49643 at *1. Defendants filed a motion for summary judgment alleging qualified immunity; the district court denied the motion, concluding that the plaintiff "had a reasonable expectation of privacy as he spoke with" the officer at his door. Id. at *5, 2008 U.S. Dist. LEXIS 49643 at *13. Because the plaintiff came to the door only in response to the officer's knock, indicated throughout the conversation his desire to limit his contact with the police, refused to come outside the house, and tried to close the door, the plaintiff "clearly manifested his desire to keep his home private." Id. at *6, 2008 U.S. Dist. LEXIS 49643 at *16. The officer's entrance into the plaintiff's housecommitted when he reached into the house to grab the plaintiff's armwas therefore unreasonable, and, because no exigent circumstances existed, was in violation of the Fourth Amendment. Id. at *5-8, 2008 U.S. Dist. LEXIS 49643 at *16-17, *22. *789 The facts of the case sub judice are nearly indistinguishable from those in Cummings and Hameline, and the Court therefore concludes that the Defendants' entrance into the Plaintiff's home was unreasonable. 
The Plaintiff, like the plaintiffs in Cummings and Hameline, only came to her door in response to knocks by the police officer. The Plaintiff, not once but three times, tried to close her door on the Defendants. She turned off her interior lights and repeatedly told the Defendants in no uncertain terms that she wanted them to leave. In the Plaintiff's version of events, which the Court must credit, she did not extend her arm across her doorway or otherwise exit her home. She did not "expose [her]self to public view" and, through every word and gesture, indicated her intent to "maintain[] an expectation of privacy;" she was "therefore in [her] home and not a public place." Denton, 353 Fed. Appx. at 5. When the Defendants reached into the Plaintiff's home to grab her wrist and arrest her, they violated her right to be free of warrantless entry into her home. The arrest of Ms. Zar at her doorway thus constitutes a violation of her constitutional rights unless the Defendants can show the existence of an exigent circumstance justifying a warrantless entry into her home. The Defendants first argue that the need to ensure the safety of the infant was such an exigent circumstance. Assuming the Plaintiff's version of the facts to be true, as the Court must, no reasonable officer would have believed that the infant's safety was in jeopardy such that immediate action was required. According to the Plaintiff, Mr. Zar did not tell the police that his baby was in danger, and the baby was clearly visible in his aunt's arms while the police were questioning the Plaintiff. That the baby was crying could have meant any number of things, but, once the child appeared at the doorway unharmed, is not sufficient of itself to support a fear that the baby's safety was in jeopardy. The Defendants next argue that their own safety constituted an exigent circumstance. Under the Plaintiff's version of events, the Plaintiff never punched or otherwise touched the Defendants; at worst, she yelled and cursed at them. Indeed, even under Defendant Payne's account, the Defendants entered the Plaintiff's home before she punched Defendant Trowbridge. The Plaintiff's clear intent was to close her door and cease her encounter with the Defendants, who could have no reason to fear a woman who they significantly outweighed and who was behind a closed door in her own home. The Defendants have therefore not rebutted the presumption of unreasonableness, and the Court concludes that Defendants violated Plaintiff's constitutional right to be free from warrantless arrest inside her home. b. Whether the Violation Involved a Clearly Established Constitutional Right of Which a Reasonable Person Would Have Known The Plaintiff's right to be free of warrantless entry into her home when she has manifested her desire to keep her home private has been clearly established at least since the Sixth Circuit decided Cummings in 2005, three years before the Defendants' actions in this case. It is therefore clearly established law that the Defendants' entry into the Plaintiff's home was presumptively unreasonable. The absence of exigent circumstances was also clearly established. The risk of danger exception to the warrant requirement necessitates "a risk of serious injury posed to the officers or others that required swift action." United States v. Huffman, 461 F.3d 777, 783 (6th Cir.2006) *790 (citing Whren v. United States, 517 U.S. 806, 813, 116 S.Ct. 1769, 135 L.Ed.2d 89 (1996)). In Thacker v. 
City of Columbus, 328 F.3d 244, 254 (6th Cir.2003), the Sixth Circuit concluded that it was "a close question" whether officers who responded to a report of stabbing and were met at the door to the residence by a person with blood on his legs and shorts, with a bleeding hand, and appearing intoxicated satisfied the risk of danger exception. The facts known to the officers in Thacker presented a much stronger case for violating the constitutional sanctity of a the home than in the instant case, and it would be clear to a reasonable officer that the Defendants' conduct was unlawful in the situation they confronted. Saucier, 533 U.S. at 202, 121 S.Ct. 2151; see also Burchett v. Kiefer, 310 F.3d 937, 945 (6th Cir.2002) ("[A] right can be clearly established even if there is no case involving `fundamentally similar' or `materially similar' facts. Rather, a right is clearly established when `the reasoning, though not the holding,' of a prior court of appeals decision puts law enforcement officials on notice, or when the `premise' of one case `has clear applicability' to a subsequent set of facts.") (quoting and interpreting Hope v. Pelzer, 536 U.S. 730, 743-44, 122 S.Ct. 2508, 153 L.Ed.2d 666 (2002)). c. Whether the Plaintiff Has Offered Sufficient Evidence To Indicate That What the Defendants Allegedly Did Was Objectively Unreasonable In Light of the Clearly Established Constitutional Rights The Plaintiff has provided affidavits from herself and two witnesses to support her allegations that the Defendants entered her home without a warrant after she had manifested her desire to maintain the privacy and sanctity of her home. Her evidence rebuts the Defendants' contention that she exited her home by poking her finger in the Defendants' faces or by punching Defendant Trowbridge and constitutes sufficient evidence to support the claim that the Defendants unreasonably entered her home without a warrant in violation of her clearly established constitutional rights. The Defendants have therefore failed to show that they are entitled to qualified immunity on the Plaintiff's warrantless entry claim, and the Court DENIES the Defendants' motion for summary judgment on this count. 2. Excessive Force The Plaintiff alleges that the Defendants' use of force against herbeginning with grabbing her wrists to pull her out of her home and continuing with throwing her to the ground and dragging her to the patrol carwas unreasonable under the Fourth Amendment. The Defendants in their Motion contend that this claim is barred by the doctrine of qualified immunity because the use of force against the Plaintiff was not unreasonable and because, if a constitutional violation occurred, it was not clearly established.[4] There is a constitutional right to be free from excessive force during an arrest. Graham v. Connor, 490 U.S. 386, 109 S.Ct. 1865, 104 L.Ed.2d 443 (1989). Claims for excessive force in the course of an arrest, stop, or seizure are "properly analyzed under the Fourth Amendment's `objective reasonableness' standard." Id. at 388, 109 S.Ct. 1865. In assessing an excessive force claim in a motion for summary judgment, the Court must construe all the facts in the record in the light most *791 favorable to the Plaintiff. Schreiber v. Moe, 596 F.3d 323, 332 (6th Cir.2010). 
In order to determine whether the force used during an arrest or seizure was objectively unreasonable, the Court must balance "the nature and quality of the intrusion on the individual's Fourth Amendment interests against the countervailing governmental interests at stake." Graham, 490 U.S. at 396, 109 S.Ct. 1865 (internal citations omitted). The Court should look to "the severity of the crime at issue, whether the suspect poses an immediate threat to the safety of the officers or others, and whether he is actively resisting arrest or attempting to evade arrest by flight." Graham, 490 U.S. at 396, 109 S.Ct. 1865. The reasonableness must be judged from the point of view of the officer on the scene at the time the force was used. Id. In a qualified immunity analysis, the last prong requires the Court to determine "whether the plaintiff offered sufficient evidence to indicate that what the [government] official allegedly did was objectively unreasonable in light of the clearly established constitutional rights;" it cannot be decided on summary judgment where there are contentious factual disputes over the reasonableness of the use of force. Sova v. City of Mt. Pleasant, 142 F.3d 898, 903 (6th Cir.1998); see also Sample v. Bailey, 337 F.Supp.2d 1012, 1021 (N.D.Ohio 2004) (citing Sova and finding summary judgment inappropriate because "whether [the police officer's actions] were reasonable is contingent on the fact-finder's resolution of [the relevant] factual conflict."). Under the Graham v. Connor analysis, the Court must consider the totality of the circumstances to ascertain whether the police used excessive force. In this case, however, many of the underlying facts that would determine whether the officers' use of force was reasonable are in dispute (i.e. whether the Plaintiff threw a glass dish at Mr. Zar and the infant, whether she punched Defendant Trowbridge in the face, and whether she purposefully kicked Defendant Payne in the throat). If the jury determines that the Plaintiff had physically assaulted Mr. Zar and the Defendants, then the Defendants' actions were reasonable under the Fourth Amendment. Bouggess v. Mattingly, 482 F.3d 886, 891 (6th Cir.2007) ("Merely resisting arrest by wrestling oneself free from officers and running away would justify the use of some force to restrain the suspect."); Burchett v. Kiefer, 310 F.3d 937, 944 (6th Cir.2002) (concluding that use of force to handcuff a person who was twisting and turning was reasonable). If, on the other hand, the jury determines that the Plaintiff did not physically assault anyone, then the Defendants' actions were not reasonable under the Fourth Amendment. Solomon v. Auburn Hills Police Dep't, 389 F.3d 167, 174 (6th Cir.2004) (denying qualified immunity on excessive force claim against officers who knocked plaintiff onto the ground, shoved her into a display case, and twisted her arm during arrest for trespassing when plaintiff was significantly smaller than either officer and was not resisting arrest). Because "the reasonableness of the use of force is the linchpin of the case" and "the legal question of qualified immunity turns upon which version of the facts one accepts, the jury, not the judge, must determine liability." Sova, 142 F.3d at 903. Thus, Defendants' Motion for Summary Judgment on Plaintiff's excessive force claim is DENIED. V. CONCLUSION For the reasons set forth in this Opinion, Defendants' Motion for Summary Judgment is GRANTED in part and DENIED in part. 
The Motion is GRANTED with respect to the Plaintiff's claim for warrantless arrest in violation of the *792 Fourth Amendment (First Claim). The warrantless arrest claim is accordingly DISMISSED against all Defendants. The Motion is DENIED with respect to the Plaintiff's claims for warrantless entry (First Claim) and excessive use of force (Second Claim) under the Fourth Amendment. IT IS SO ORDERED. NOTES [1] Ms. Zar alleges that she recalls only bits and pieces after the initiation of the arrest at her doorstep. Prone to panic attacks, she blacked out and has only a partial memory of the rest of the night. [2] The Defendants attempt to allege that the Plaintiff admits that she punched Trowbridge. In her deposition testimony, the Plaintiff admits that she was struggling to get away from the officers and says that if Trowbridge got hit, it was accidentally during that struggle. She affirmatively states that she did not hit Trowbridge in the face. [3] Because the Court's conclusion under Walker disposes of this claim, the Court does not need to address the Defendants' arguments that the arrest was supported by probable cause and that they are entitled to qualified immunity. It also does not need to address the Defendants' contention that the claim is barred by Heck v. Humphrey, 512 U.S. 477, 486-87, 114 S.Ct. 2364, 129 L.Ed.2d 383 (1994) (holding that § 1983 plaintiff may not seek damages where judgment in plaintiff's favor "would necessarily imply the invalidity" of a conviction or sentence not yet invalidated on direct appeal, by executive order, or through habeas corpus). In that regard, however, the Court notes that the Plaintiff would in any case be entitled to the Heck-exception established in Powers v. Hamilton County Public Defender Comm'n, 501 F.3d 592, 601 (6th Cir.2007) for plaintiffs "precluded `as a matter of law' from seeking habeas redress." [4] It should be noted that the Plaintiff's claim for excessive force is not barred under Heck by her convictions in state court. Donovan v. Thames, 105 F.3d 291, 294 (6th Cir. 1997) (state conviction for resisting arrest did not bar federal excessive force claim). | Low | [
0.47474747474747403,
29.375,
32.5
]
|
Wonderla Bamba A high-thrill 18-seater ride with the Wonderla mascot, Chikku, which takes you up and down in clockwise and anti-clockwise directions, with sudden, unexpected accelerations and abrupt drops from a height of 5.5 meters. Thrilling enough to pump up the adrenaline, but secure enough to keep your mind at ease. Recommended age: 12 years and above. Not recommended for heart patients or people suffering from high BP. | Low | [
0.453302961275626,
24.875,
30
]
|
BEAT DOWN IN BELO HORIZONTE A look at the “Red Wedding” of football REUTERS/Kai Pfaffenbach I grew up in a football mad nation. The capital city of Dhaka would come to a virtual standstill when national heavyweights Abahani & Mohammedan would lock horns in a league match or a Cup final. I never had the privilege of attending one of these games at the National Stadium, and television coverage was often not guaranteed. Instead, my ears would be glued to our radio, listening to the play-by-play commentary provided by the legendary Khoda Box Mridha. This was of course before the success of the National Cricket Team, which would propel Cricket into the national consciousness as the number one sport in the country. With success came the riches, and the Bangladesh Cricket Board soon became the wealthiest sports federation in the country, while Football on the other hand started to slink back to the dark ages, with lack of funds preventing proper investments in grassroots programs and overall infrastructures. A once proud South Asian footballing nation, Bangladesh would often give perennial favourites India a good run for their money. However, decades of mismanagement meant that the 8th most populous country in the world would fall behind tiny nations like Bhutan and The Maldives. But Football and Cricket are different beasts. While the game of Cricket is typically played between nations, Football is more club oriented, aside from the big tournaments like the World Cup and the European Championship which comes around once every four years. In the meantime, you have a year-round dose of the glitzy English Premier League and the ultimate competition in club football the Champions League. When I would visit my friends to work on a school project, I would typically catch them watching an Arsenal vs Manchester United game, rather than a South Africa vs Australia cricket match. Whatever the state of the game might have been in the country, the passion for Football was well and truly alive in bedrooms across the nation. For every Bangladesh Cricket jersey on the street, you were likely to see five Real Madrid shirts. When your native country is regularly ranked between 150th-200th in the world, you seek to attach yourself to a more established footballing nation. For some reason, the history behind which I have not researched, Bangladeshis associated themselves with the two South American superpowers, Argentina and Brazil. Bangladesh was the land of Pelé. Bangladesh was the land of Maradona. And every four years, half the country would be painted blue and white, and the other half resplendent in yellow, blue and green. While ninety percent of the country (including every single person in my immediate and extended family) danced to the tune of either Samba or Tango, I fell in the tiny minority of the population who threw their support behind a slew of assorted European nations. The earliest memory I have of watching the World Cup is Italia 1990, when the Lothar Matthäus led German team lifted the Jules Rimet Trophy. Thus, began a life-long love affair with Die Mannschaft. I was hooked. I zealously started reading about the history of this great footballing nation, Gerd Müller’s poster would soon adorn my bedroom wall. I watched every old game tape I could get my hands on, and never missed another game going forward, be it friendly or competitive. Just as I was starting to get old enough to have meaningful discussions on the sport, Brazil won their fourth World Cup in 1994. 
This would be the beginning of a frustrating 20-year journey. In between classes, after the current teacher’s departure and the arrival of the next; in between punishing cricket drills under the thirty-five degree Dhaka sun; there would always be just enough time to have an animated discussion about Football. Who was the best striker at the moment? Which goalkeeper would you pick in your team? What formation do you prefer? Etcetera. But it would always end with one question, which country is the best? And inevitably, after exhausting every single logic, after dotting all the I’s and crossing all the T’s, the argument would end with the kid in the yellow shirt going, “Yeah whatever, who has the most World Cups? Thought so.” Drop Mic. Exit stage. Every single time. Every single goddamn time. Oh, how I longed for Die Mannschaft to meet the Canarinho’s on the football pitch. To beat them. My yet young foolish heart dismissive of any other result. Even though I knew when I would go back to those kids to brag about it, their terse reply would be “Who cares, we still have more World Cups.” Then 2002 came as a Godsend. Now battle hardened and in my mid-teens, the moment I had been waiting for had finally arrived. The itch would finally be scratched. Years of festering irritation would finally be scrubbed clean. This was supposed to be THE moment. Remarkably, for two World Cup power houses with the most semifinal appearances and the most matches played in the tournament, this was the first time they would meet in the World Cup. That too in a final! If ever the expression kill two birds with one stone was applicable, this was it. Sure, the Seleçãos had Il Fenomeno, the greatest goal scorer in World Cup history. But we had Der Titan the best player of the tournament! Nietzsche said man is the cruelest animal. Under the Yokohama sky, Il Fenomeno proved to be the cruelest animal in the world…to me. Sometimes when life gives you lemons, lemonade is just not good enough. Sometimes when life gives you lemons, you have to grab the salt and tequila. We shall overcome, we shall overcome, we shall overcome some day. July 8, 2014. Estádio Mineirão, Belo Horizonte. The Red Wedding. Twelve years after that night in Yokohama we’d get another Dance of the Dragons, this time, on Brazilian soil. Almost all pundits, experts, commentators, statistician, odds makers had Brazil beating Germany in their semi-final clash of the 2014 World Cup. Some of them, vomited out ridiculous odds citing home field advantage, the fact that Germany had never beaten Brazil in a competitive match and the fact that Brazil hadn’t lost a competitive match on home soil since 1975. After all, this was Brazil’s destiny they said, the sole reason she wanted to host the event again was to bury the ghost of their 1950 final defeat by Uruguay at the Maracanã, the only other time Brazil had hosted the tournament. This was supposed to atone for that defeat which entered Brazilian culture as the “Maracanãzo (the tragedy of the Maracanã).” What everyone conveniently chose to ignore was the fact that Germany had made the semi-finals or better in 12 of the last 16 World Cups. That Brazil was an utterly shitty team who conceded the tournament opener on an own goal, needed some officiating favours to salvage its dignity against Croatia, was saved by the woodwork (twice) against a superior Chile side in the last 16 and in the quarterfinals, took tactical fouling to a new level to get past Columbia. 
And oh, Brazil would also be missing their best player in Neymar (injured) and their captain Thiago Silva (suspended). But nobody cared of course, because everyone was running on a high octane mix of emotion and energy, of hype and hysteria, fueled by a fan base whipped into frenzy by a media dreaming of the ultimate glory. But, this would be the day, when the chickens would come home to the roost, the day when the world would see that the Emperor had no clothes. As the pre-match warmups started, the difference was evident. One team kicked a lot, practicing their football with a rugged bluster. The other team played with a fluid and flowing artistry. Can you guess which team did which? In 2014, the answer wasn’t what it used to be. For what it’s worth, Brazil had the most expensive team, with a squad worth over $700 million. A good portion of this was made up by the recent transfer of David Luiz from Chelsea to Paris Saint-Germain for a fee of £50 million, a world-record transfer for a defender. In a bizarre show of support, thousands of cardboard Neymar masks were handed out to fans, the whole team came into the stadium wearing baseball caps with Força Neymar written on them, and during the raucous rendition of the national anthem, stand-in-captain David Luiz and goalkeeper Julio Cesar held up a jersey of Neymar. It was all very touching indeed and by the way Julio Cesar was the first active MLS player to participate in a World Cup semifinal, therefore a special shout out to my hometown team of Toronto FC! The referee was to be Marco Rodriguez. Yes, the same guy whose nickname is the Mexican Dracula. The same guy who failed to spot Luis Suarez’s bite on Giorgio Chiellini. You just couldn’t make this stuff up. For all the pre-match Neymar hullabaloo, the host nation foolishly didn’t have the courtesy to extend Real Madrid and Argentina legend Alfredo Di Stéfano a minute’s silence before kick-off. The famous Argentinian had passed away the day before, and after half an hour, was surely sniggering away up in the heavens, laughing at how well his divine retribution had worked. With every passing second, the home supporters seemed more and more like hostages. Millions across Brazil were in dazed, damp-eyed disbelief. Brazil was having a meltdown, and the entire world was watching. Die Mannschaft was handing down a beat down of a lifetime, a beat down that will echo through the generations. 5–0 after 30 mins of play. Cinco bloody Cero. To say records were tumbling would be an understatement. Perhaps symbolically the most important one was when Miroslav Klose surpassed Il Fenomeno to become the greatest goal scorer in World Cup history, in his own motherland. Hulk was in the starting XI for the home side, but alas they needed more Avengers than that. Apparently as all of this was going on, a thunderstorm had struck the ESPN main studio, it was truly Armageddon in Belo Horizonte. At half-time, the players trudged off the field on the verge of tears; their fans however, were unable to show such restraint. It was hard to say what would have hurt the most, losing in the final, or seeing Germany stop trying too hard because it was too embarrassing. If this were a Brazilian churrascaria, one might have expected Germany to turn the stone over to the red side, signifying that their appetite was sated. No más! No más! For all the energy spent on celebrating the memory of Neymar, the player whose presence was clearly felt the most was Thiago Silva. 
Without their captain manning the defensive lines, the Seleçãos abandoned the ship at the first sign of danger, and without Silva, there was no one to yell at David Luiz and make sure he wasn’t being…well, David Luiz. As the newly minted £50 million central defender kept on charging into the offensive zone without any definite plan or purpose, at one point he threw a body check at Thomas Müller then lectured him for having the temerity to go down. This, with the score now on 6–0. Brilliant. I guess in a way you could argue that the team paid a beautiful tribute to Neymar. Neymar couldn’t play, so neither did the rest of the team. Dante, who replaced Thiago Silva in the lineup must have felt that he was in all nine circles of hell simultaneously by the time Schürrle thumped in the seventh goal. As a member of the historic treble winning Bayern Munich team only a year ago, he must have been used to being on the right side of the ledger most of the time, but unfortunately in this game, most of his club mates were on the opposite side of the pitch. The great Spanish side, perhaps the best of all time, won the 2010 World Cup scoring eight total goals. By the time the curtains came down in Estádio Mineirão, Germany had scored seven in a single game against the mighty Brazil in Brazil. Even Apollo Creed had put up a better fight against Drago. In all my days of watching and reading about sports, I can’t think of very many other occasions, where the score-line ended up being this one-sided in a game of such magnitude. The one that promptly comes to mind is the 1940 NFL Championship, where the Bears pulverized the Redskins 73–0, on the road no less. This was Brazil’s worst defeat in 94 years. Neymar and his colleagues shared a combined 533 shots on Instagram during the tournament. Maybe if they approached their game with a similar vigor they would have mustered more than the 112 shots they managed during their seven World Cup matches. The post-game headlines wrote themselves: “Brazilians waxed!” Brazil did not deserve to be on the same field as Germany. Home-field advantage was the only thing that allowed them to make the semifinals, and it wasn’t anywhere close to enough to save them against the Germans. Germany was not particularly brilliant; they didn’t have to be. But they were good, of that make no mistake. Their relentlessness as they powered on to get to the final tally of seven had a terrible beauty about it, the total subjugation of an opponent who, before the match, seemed to think it had the divine right to be in the final. This was not just a humiliation, a beating, a complete and utter demolition job. It was the ritual disemboweling of a team, the deconstruction not just of a squad of footballers, but of a nation’s hopes and dreams. After almost two decades of losing my voice in frustrating arguments about why Germany and Brazil were on opposite trajectories to success, about how the Germans simply had more talent than Brazil, about how Germany was simply better than Brazil, I finally had some tangible evidence to point to. It’s not 1970 Brazilian fans, your glory days are over. In the 24 months since Brazil’s World Cup degradation, there have been growing calls for the national game to undergo a “Germanification”. A group of Brazilian businessmen asked Borussia Dortmund and Bayern Munich whether they’d be interested in bringing their academies to Brazil; they refused. 
The Brazilian Football Federation contacted Double Pass, a Belgian football consultancy firm credited with revolutionizing German youth football, while clubs like Atlético Paranaense have hired companies like EXOS who advises the German national team on training and nutrition. Such is the new-found respect for the German game that players in the Brasileiro are now being renamed after Joachim Löw’s heroes. Flamengo’s Jonas is known as “Schweinsteiger do Nordeste (the Schweinsteiger of the Northeast)”. Ceará’s Uillian Correia is now “Uillian Kroosreia”. Nobody was referring to Ronaldinho as the “buck-toothed Jeremies”. Times have certainly changed. I know when I go back to argue with my friends who are Brazil fans, they will have the same closing remark. Who has more World Cups? Fair enough. But now, I can pass them a 7-UP and say “Drink up boys!’. Mic drop. Exit stage. For July 8, 2014, will never, ever, be forgot. You can support us here >>> https://www.patreon.com/theballpoint | Mid | [
0.5762711864406781,
34,
25
]
|
Ephraim Douglass Adams Ephraim Douglass Adams (December 18, 1865 in Decorah, Iowa – September 1, 1930 in Stanford, California) was an American educator and historian, regarded as an expert on the American Civil War and British-American relations. He was known as a great teacher, with the ability to inspire teachers and researchers, and his presentation style was copied by Stanford historian Thomas A. Bailey. Born in Iowa in 1865, he graduated from the University of Michigan in 1887, earning a Ph.D. in 1890. In the same year he was appointed special agent in charge of street railways for the Eleventh U.S. Census (1890). His earlier work was done at the University of Kansas, where he became assistant professor (1891) and associate professor (1894) of history and sociology, and in 1899 professor of European history. In 1902 he was made associate professor of history at Leland Stanford Junior University, and in 1906, full professor of history at Stanford University. His work is widely cited. He is best known for The Power of Ideals in American History (1913). Bibliography The Control of the Purse in the United States Government (1894) The Influence of Grenville on Pitt's Foreign Policy, 1787-1798 (1904) British Interests and Activities in Texas, 1838-1846 (Albert Shaw Lectures, Johns Hopkins University, 1910) Lord Ashburton and the Treaty of Washington (1912) The Power of Ideals in American History (1913) Great Britain and the American Civil War (2 vols.) (1925) References External links Ephraim Douglass Adams Papers Category:American political writers Category:American male non-fiction writers Category:People from Decorah, Iowa Category:University of Michigan alumni Category:1865 births Category:1930 deaths Category:Stanford University Department of History faculty | High | [
0.8039024390243901,
25.75,
6.28125
]
|
--- abstract: | To provide constraints on their inversion, ocean sound speed profiles (SSPs) often are modeled using empirical orthogonal functions (EOFs). However, this regularization, which uses the leading order EOFs with a minimum-energy constraint on their coefficients, often yields low resolution SSP estimates. In this paper, it is shown that dictionary learning, a form of unsupervised machine learning, can improve SSP resolution by generating a dictionary of shape functions for sparse processing (e.g. compressive sensing) that optimally compress SSPs; both minimizing the reconstruction error and the number of coefficients. These learned dictionaries (LDs) are not constrained to be orthogonal and thus, fit the given signals such that each signal example is approximated using few LD entries. Here, LDs describing SSP observations from the HF-97 experiment and the South China Sea are generated using the K-SVD algorithm. These LDs better explain SSP variability and require fewer coefficients than EOFs, describing much of the variability with one coefficient. Thus, LDs improve the resolution of SSP estimates with negligible computational burden.\ © 2016 Acoustical Society of America\ \ **Keywords:** Ocean acoustics; geophysics; dictionary learning; machine learning; compressive sensing author: - Michael Bianco - Peter Gerstoft title: Dictionary learning of sound speed profiles --- \[sec:intro\]Introduction ========================= Inversion for ocean sound speed profiles (SSPs) using acoustic data is a non-linear and highly underdetermined problem.[@gerstoft94] To ensure physically realistic solutions while moderating the size of the parameter search, SSP inversion has often been regularized by modeling SSP as the sum of leading order empirical orthogonal functions (EOFs).[@leblanc80]^–^[@huang08] However, regularization using EOFs often yields low resolution estimates of ocean SSPs, which can be highly variable with fine scale fluctuations. In this paper, it is shown that the resolution of SSP estimates are improved using dictionary learning,[@rubenstein2010]^–^[@engan2000] a form of unsupervised machine learning, to generate a dictionary of regularizing shape functions from SSP data for parsimonious representation of SSPs. Many signals, including natural images[@hyvarinen2009]^,^[@jpeg2000], audio[@gersho1991], and seismic profiles[@taylor79] are well approximated using sparse (few) coefficients, provided a dictionary of shape functions exist under which their representation is sparse. Given a $K$-dimensional signal, a dictionary is defined as a set of $N$, $\ell_2$-normalized vectors which describe the signal using few coefficients. The sparse processor is then an $\ell_2$-norm cost function with an $\ell_0$-norm penalty on the number of non-zero coefficients. Signal sparsity is exploited for a number of purposes including signal compression and denoising.[@elad2010] Applications of compressive sensing,[@candes06] one approximation to the $\ell_0$-norm sparse processor, have in ocean acoustics shown improvements in beamforming,[@edel11]^–^[@choo16] geoacoustic inversion,[@yardim14] and estimation of ocean SSPs.[@bianco16] Dictionaries that approximate a given class of signals using few coefficients can be designed using dictionary learning.[@elad2010] Dictionaries can be generated ad-hoc from common shape functions such as wavelets or curvelets, however extensive analysis is required to find an optimal set of prescribed shape functions. 
Dictionary learning proposes a more direct approach: given enough signal examples for a given signal class, learn a dictionary of shape functions that approximate signals within the class using few coefficients. These learned dictionaries (LDs) have improved compression and denoising results for image and video data over ad-hoc dictionaries.[@elad2010; @schnass14] Dictionary learning has been applied to denoising problems in seismics [@beckouche14] and ocean acoustics [@taroudakis15; @wang16], as well as to structural acoustic health monitoring.[@alguri16] The K-SVD algorithm,[@aharon06] a popular dictionary learning method, finds a dictionary of vectors that optimally partition the data from the training set such that the few dictionary vectors describe each data example. Relative to EOFs which are derived using principal component analysis (PCA),[@hannachi2007; @monahan2009] these LDs are not constrained to be orthogonal. Thus LDs provide potentially better signal compression because the vectors are on average, nearer to the signal examples (see Fig. \[fig:featureSpace\]).[@engan2000] In this paper, LDs describing 1D ocean SSP data from the HF-97 experiment[@carbone2000]^,^[@hodgkiss2002] and from the South China Sea (SCS)[@pinkel] are generated using the K-SVD algorithm and the reconstruction performance is evaluated against EOF methods. In Section II, EOFs, sparse reconstruction methods, and compression are introduced. In Section III, the K-SVD dictionary learning algorithm is explained. In Section IV, SSP reconstruction results are given for LDs and EOFs. It is shown that each shape function within the resulting LDs explain more SSP variability than the leading order EOFs trained on the same data. Further, it is demonstrated that SSPs can be reconstructed up to acceptable error using as few as one non-zero coefficient. This compression can improve the resolution of ocean SSP estimates with negligible computational burden. *Notation*: In the following, vectors are represented by bold lower-case letters and matrices by bold uppercase letters. The $\ell_p$-norm of the vector $\mathbf{x}\in\mathbb{R}^{N}$ is defined as $\|\mathbf{x}\|_p=\big(\sum^N_{n=1}\big|x_n\big|^p\big)^{1/p}$. Using similar notation, the $\ell_0$-norm is defined as $\|\mathbf{x}\|_0=\sum^N_{n=1}\big|x_n\big|^0=\sum^N_{n=1}1_{|x_n|>0}$. The $\ell_p$-norm of the matrix $\mathbf{A}\in\mathbb{R}^{K\times M}$ is defined as $\|\mathbf{A}\|_p=\big(\sum^M_{m=1}\sum^K_{k=1}\big|a_k^m\big|^p\big)^{1/p}$. The Frobenius norm ($\ell_2$-norm) of the matrix $\mathbf{A}$ is written as $\|\mathbf{A}\|_\mathcal{F}$. The hat symbol $\widehat{}$ appearing above vectors and matrices indicates approximations to the true signals or coefficients. EOFs and compression ==================== EOFs and PCA ------------ Empirical orthogonal function (EOF) analysis seeks to reduce the dimension of continuously sampled space-time fields by finding spatial patterns which explain much of the variance of the process. These spatial patterns or EOFs correspond to the principal components, from principal component analysis (PCA), of the temporally varying field. [@hannachi2007] Here, the field is a collection of zero-mean ocean SSP anomaly vectors $\mathbf{Y}=[\mathbf{y}_1,...,\mathbf{y}_M]\in\mathbb{R}^{K\times M}$, which are sampled over $K$ discrete points in depth and $M$ instants in time. The mean value of the $M$ original observations are subtracted to obtain $\mathbf{Y}$. 
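The data layout can be made concrete with a short NumPy sketch. The array names (`profiles`, `Y`) and the synthetic values are illustrative assumptions only; the sketch simply stacks $M$ profiles sampled at $K$ depths and removes the mean profile to form the anomaly matrix $\mathbf{Y}$ used in the equations that follow.

```python
import numpy as np

# Hypothetical stack of M sound speed profiles, each sampled at K depths:
# profiles[k, m] is the sound speed at depth index k for time sample m.
rng = np.random.default_rng(0)
K, M = 30, 1000
profiles = 1500.0 + rng.standard_normal((K, M))   # placeholder SSP values (m/s)

# Zero-mean anomaly matrix Y: subtract the mean profile over the M observations.
Y = profiles - profiles.mean(axis=1, keepdims=True)
```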
The variance of the SSP anomaly at each depth sample $k$, $\sigma_k^2$, is defined as $$\sigma_k^2=\frac{1}{M}\sum_{m=1}^M \big(y_m^k\big)^2 \label{eq:sspMeanSub2}$$ where $[y_1^k,...,y_M^k]$ are the SSP anomaly values at depth sample $k$ for $M$ time samples. The singular value decomposition (SVD)[@hastie2009] finds the EOFs as the eigenvectors of $\mathbf{Y}\mathbf{Y}^{\rm{T}}$ by $$\mathbf{Y}\mathbf{Y}^{\rm{T}}=\mathbf{P}\mathbf{\Lambda}^2\mathbf{P}^{\rm{T}}, \label{eq:pcaAnal}$$ where $\mathbf{P}=[\mathbf{p}_1,...,\mathbf{p}_L]\in\mathbb{R}^{K\times L}$ are EOFs (eigenvectors) and $\mathbf{\Lambda}^2=\rm{diag}([\lambda_1^2,...,\lambda_L^2])\in\mathbb{R}^{L\times L}$ are the total variances of the data along the principal directions defined by the EOFs $\mathbf{p}_l$ with $$\sum_{k=1}^K\sigma_k^2=\frac{1}{M}\rm{tr}\big{(}\mathbf{\Lambda}^2\big{)}. \label{eq:blah}$$ The EOFs $\mathbf{p}_l$ with $\lambda_1^2\ge ... \ge \lambda_L^2$ are spatial features of the SSPs which explain the greatest variance of $\mathbf{Y}$. If the number of training vectors $M\ge K$, $L=K$ and $[\mathbf{p}_1,...,\mathbf{p}_L]$ form a basis in $\mathbb{R}^{K}$. SSP reconstruction using EOFs ----------------------------- Since the leading-order EOFs often explain much of the variance in $\mathbf{Y}$, the representation of anomalies $\mathbf{y}_m$ can be compressed by retaining only the leading order EOFs $P<L$ $$\widehat{\mathbf{y}}_m=\mathbf{Q}_P\widehat{\mathbf{x}}_{P, m} \label{eq:pcaAnal222}$$ where $\mathbf{Q}_P\in\mathbb{R}^{K\times P}$ is here the dictionary containing the $P$ leading-order EOFs and $\widehat{\mathbf{x}}_{P, m}\in\mathbb{R}^{P}$ is the coefficient vector. Since the entries in $\mathbf{Q}_P$ are orthonormal, the coefficients are solved by $$\widehat{\mathbf{x}}_{P,m}=\mathbf{Q}_P^{\rm{T}}\mathbf{y}_m. \label{eq:eofPseudo}$$ For ocean SSPs, usually no more than $P= 5$ EOF coefficients have been used to reconstruct ocean SSPs.[@huang08]^,^[@gerstoft96] Sparse reconstruction --------------------- A signal $\mathbf{y}_m$, whose model is sparse in the dictionary $\mathbf{Q}_N=[\mathbf{q}_1 ,...,\mathbf{q}_N]\in\mathbb{R}^{K\times N}$ ($N$-entry sparsifying dictionary for $\mathbf{Y}$), is reconstructed to acceptable error using $T\ll K$ vectors $\mathbf{q}_n$.[@elad2010] The problem of estimating few coefficients in $\mathbf{x}_m$ for reconstruction of $\mathbf{y}_m$ can be phrased using the canonical sparse processor $$\widehat{\mathbf{x}}_{m}=\underset{\mathbf{x}_m\in\mathbb{R}^N}{\arg\min} \|\mathbf{y}_m-\mathbf{Q}\mathbf{x}_m\|_2 \ \ \text{subject to} \ \ \|\mathbf{x}_m\|_0\le T. \label{eq:sparseObject}$$ The $\ell_0$-norm penalizes the number of non-zero coefficients in the solution to a typical $\ell_2$-norm cost function. The $\ell_0$-norm constraint is non-convex and imposes combinatorial search for the exact solution to Eq. (\[eq:sparseObject\]). Since exhaustive search generally requires a prohibitive number of computations, approximate solution methods such as matching pursuit (MP) and basis pursuit (BP) are preferred.[@elad2010] In this paper, orthogonal matching pursuit (OMP)[@pati93] is used as the sparse solver. 
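Continuing the sketch above, the depth-wise variance, the EOFs of Eq. (\[eq:pcaAnal\]), and the $P$-term reconstruction of Eqs. (\[eq:pcaAnal222\]) and (\[eq:eofPseudo\]) follow directly from an SVD of `Y`. This is a minimal illustration under the assumptions stated above, not the processing used for the experiments reported below.

```python
import numpy as np

# Depth-wise variance sigma_k^2 of the anomalies.
sigma2 = np.mean(Y**2, axis=1)

# The left singular vectors of Y are the eigenvectors of Y Y^T (the EOFs p_l),
# and the squared singular values are the variances lambda_l^2.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)

P = 5                       # number of leading-order EOFs retained
Q_P = U[:, :P]              # EOF dictionary Q_P
X_P = Q_P.T @ Y             # coefficients: columns of Q_P are orthonormal
Y_hat = Q_P @ X_P           # P-term reconstruction of every anomaly profile

explained = 1.0 - np.sum((Y - Y_hat)**2) / np.sum(Y**2)
print(f"fraction of anomaly variance explained by {P} EOFs: {explained:.3f}")
```

Selecting instead the best combination of $T$ EOF coefficients, as compared in the results below, replaces the fixed leading-order selection with a sparse solver such as OMP.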
For small $T$, OMP achieves similar reconstruction accuracy relative to BP methods, but with much greater speed.[@elad2010] It has been shown that non-orthogonal, overcomplete dictionaries $\mathbf{Q}_N$ with $N>K$ (complete, $N=K$) can be designed to minimize both error and number of non-zero coefficients $T$, and thus provide greater compression over orthogonal dictionaries.[@gersho1991]^,^[@engan2000]^,^[@elad2010] While overcomplete dictionaries can be designed by concatenating ortho-bases of wavelets or Fourier shape functions, better compression is often achieved by adapting the dictionary to the data under analysis using dictionary learning techniques.[@aharon06; @engan2000] Since Eq. (\[eq:sparseObject\]) promotes sparse solutions, it provides criteria for the design of dictionary $\mathbf{Q}$ for adequate reconstruction of $\mathbf{y}_m$ with a minimum number of non-zero coefficients. Rewriting Eq.(7) with $$\underset{\mathbf{Q}}{\min}\big\{\underset{\mathbf{X}}{\min} \|\mathbf{Y}-\mathbf{Q}\mathbf{X}\|^2_\mathcal{F} \ \ \text{subject to} \ \ \forall_m,\|\mathbf{x}_m\|_0\leq T\big\}, \label{eq:dLearnObjective}$$ where $\mathbf{X=[x}_1 ,...,\mathbf{x}_M]$ is the matrix of coefficient vectors corresponding to examples $\mathbf{Y}=[\mathbf{y}_1 ,...,\mathbf{y}_M]$, reconstruction error is minimized relative to the dictionary $\mathbf{Q}$ as well as relative to the sparse coefficients. In this paper, the K-SVD algorithm, a clustering based dictionary learning method, is used to solve Eq.(\[eq:dLearnObjective\]). The K-SVD is an adaptation of the K-means algorithm for vector quantization (VQ) codebook design (a.k.a. the generalized Lloyd algorithm).[@gersho1991] The learned dictionary (LD) vectors $\mathbf{q}_n$ from this technique partition the feature space of the data rather than $\mathbb{R}^{K}$, increasing the likelihood that $\mathbf{y}_m$ is as a linear combination of few vectors $\mathbf{q}_n$ in the solution to Eq. (\[eq:sparseObject\]) (see Fig. \[fig:featureSpace\]). By increasing the number of vectors $N\ge K$ for overcomplete dictionaries, and thus the number of partitions in feature space, the sparsity of the solutions can be increased further.[@engan2000] Vector quantization ------------------- Vector quantization (VQ)[@gersho1991] compresses a class of $K$–dimensional signals $\mathbf{Y}=[\mathbf{y}_1,..., \mathbf{y}_M]\in\mathbb{R}^{K\times M}$ by optimally mapping $\mathbf{y}_m$ to a set of code vectors $\mathbf{C}=[\mathbf{c}_1,..., \mathbf{c}_N]\in\mathbb{R}^{K\times N}$ for $N<M$, called a codebook. The signals $\mathbf{y}_m$ are then quantized or replaced by the best code vector choice from $\mathbf{C}$.[@gersho1991] The mapping that minimizes mean squared error (MSE) in reconstruction $$\rm{MSE}(\mathbf{Y},\widehat{\mathbf{Y}})=\frac{1}{N}\|\mathbf{Y}-\widehat{\mathbf{Y}}\|_\mathcal{F}^2, \label{eq:distortion}$$ where $\widehat{\mathbf{Y}}=[\widehat{\mathbf{y}}_1,..., \widehat{\mathbf{y}}_M]$ is the vector quantized $\mathbf{Y}$, is the assignment of each vector $\mathbf{y}_m$ to the code vectors $\mathbf{c}_n$ based on minimum $\ell_2$–distance (nearest neighbor metric). Thus the $\ell_2$–distances from the code vectors $\mathbf{c}_n$ define a set of partitions $(R_1,..., R_N)\in\mathbb{R}^K$ (called Voronoi cells) $$R_n=\left\{i\mid\forall_{l\neq n},\|\mathbf{y}_i-\mathbf{c}_n\|_2<\|\mathbf{y}_i-\mathbf{c}_l\|_2\}\right ., \label{eq:clustering}$$ where if $\mathbf{y}_i$ falls within the cell $R_n$, $\widehat{\mathbf{y}}_i$ is $\mathbf{c}_n$. 
These cells are shown in Fig. \[fig:kmeans\_vs\_dlearn\](a). This is stated formally by defining a selector function $S_n$ as $$S_n(\mathbf{y}_m)=\bigg\{ \begin{matrix} \ \ 1\ \ \text{if}\ \mathbf{y}_m\in\mathit{R_n} \\ \ 0\ \ \text{otherwise}. \end{matrix} \label{eq:selector}$$ The vector quantization step is then $$\widehat{\mathbf{y}}_m=\sum_{n=1}^N S_n(\mathbf{y}_m)\mathbf{c}_n. \label{eq:quantize}$$ The operations in Eq. (\[eq:clustering\]–\[eq:selector\]) are analogous to solving the sparse minimization problem $$\widehat{\mathbf{x}}_{m}=\underset{\mathbf{x}_m\in\mathbb{R}^N}{\arg\min} \|\mathbf{y}_m-\mathbf{C}\mathbf{x}_m\|_2 \ \ \text{subject to} \ \ \|\mathbf{x}_m\|_0= 1, \label{eq:altCluster}$$ where the non-zero coefficients $x_m^n=1$. In this problem, selection of the coefficient in $\mathbf{x}_m$ corresponds to mapping the observation vector $\mathbf{y}_m$ to $\mathbf{c}_n$, similar to the selector function $S_n$. The vector quantized $\mathbf{y}_m$ is thus written, alternately from Eq. (\[eq:quantize\]), as $$\widehat{\mathbf{y}}_m=\mathbf{C}\widehat{\mathbf{x}}_m. \label{eq:altQuantize}$$ K-means ------- Given the MSE metric (Eq. (\[eq:distortion\])), VQ codebook vectors $[\mathbf{c}_1,..., \mathbf{c}_N]$ which correspond to the centroids of the data $\mathbf{Y}$ within $(R_1,..., R_N)$ minimize the reconstruction error. The assignment of $\mathbf{c}_n$ as the centroid of $\mathbf{y}_j\in R_n$ is $$\mathbf{c}_n=\frac{1}{|R_n|}\sum_{j\in R_n}\mathbf{y}_j, \label{eq:centroid}$$ where $|R_n|$ is the number of vectors $\mathbf{y}_j\in R_n$. The K-means algorithm shown in Table \[algo:kmeans\], iteratively updates $\mathbf{C}$ using the centroid condition Eq. (\[eq:centroid\]) and the $\ell_2$ nearest–neighbor criteria Eq. (\[eq:clustering\]) to optimize the code vectors for VQ. The algorithm requires an initial codebook $\mathbf{C}^0$. For example, $\mathbf{C}^0$ can be $N$ random vectors in $\mathbb{R}^K$ or selected observations from the training set $\mathbf{Y}$. The K-means algorithm is guaranteed to improve or leave unchanged the $\rm{MSE}$ distortion after each iteration and converges to a local minimum.[@gersho1991]^,^[@aharon06] Given: Training vectors $\mathbf{Y}=[\mathbf{y}_1,...,\mathbf{y}_M]\in\mathbb{R}^{K\times M}$ ----- -------------------------------------------------------------------------------------------------------------------------------------------- Initialize: index $i = 0$, codebook $\mathbf{C}^{0}=[\mathbf{c}_1^0,...,\mathbf{c}_N^0]\in\mathbb{R}^{K\times N}$, $\rm{MSE}^0$ solving Eq. (\[eq:distortion\])–(\[eq:quantize\]) I: Update codebook 1\. Partition $\mathbf{Y}$ into $N$ regions $(R_1,..., R_N)$ by $R_n=\left\{i\mid\forall_{l\neq n},\|\mathbf{y}_i-\mathbf{c}_n^i\|_2<\|\mathbf{y}_i-\mathbf{c}_l^i\|_2\}\right.$ (Eq. (\[eq:clustering\])) 2\. Make code vectors centroids of $\mathbf{y}_j$ in partitions $R_n$ $\mathbf{c}_n^{i+1}=\frac{1}{|R_n^i|}\sum_{j\in R_n^i}\mathbf{y}_j$ II. Check error 1\. Calculate $\rm{MSE}^{i+1}$ from updated codebook $\mathbf{C}^{i+1}$ 2\. If $|\rm{MSE}^{i+1}-\rm{MSE}^{i}|<\eta$ $i=i+1$, return to I else end : The K-means algorithm (Ref. .)[]{data-label="algo:kmeans"} Dictionary learning =================== Two popular algorithms for dictionary learning, the method of optimal directions (MOD)[@engan2000] and the K-SVD,[@aharon06] are inspired by the iterative K-means codebook updates for VQ (Table \[algo:kmeans\]). 
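A compact NumPy sketch of the codebook updates in Table \[algo:kmeans\] is given below. The function name, the fixed iteration count, and the initialization from randomly chosen training vectors are illustrative assumptions, and the MSE-based stopping test of the table is omitted for brevity.

```python
import numpy as np

def kmeans_codebook(Y, N, iters=50, seed=0):
    """VQ codebook training: alternate the nearest-neighbor partition of the
    columns of Y (Voronoi cells) with centroid updates of the code vectors."""
    rng = np.random.default_rng(seed)
    K, M = Y.shape
    C = Y[:, rng.choice(M, size=N, replace=False)].astype(float)  # C^0 from training vectors
    for _ in range(iters):
        # partition: l2 distance from every column y_m to every code vector c_n
        dists = np.linalg.norm(Y[:, :, None] - C[:, None, :], axis=0)   # shape (M, N)
        assign = dists.argmin(axis=1)                                   # cell index for each y_m
        for n in range(N):
            members = Y[:, assign == n]
            if members.shape[1] > 0:
                C[:, n] = members.mean(axis=1)                          # centroid condition
    return C
```

Replacing the hard assignment with a solved gain and constraining the code vectors to unit norm gives the gain-shape variant that the dictionary updates below generalize.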
The $N$ columns of the dictionary $\mathbf{Q}$, like the entries in codebook $\mathbf{C}$, correspond to partitions in $\mathbb{R}^K$. However, they are constrained to have unit $\ell_2$-norm and thus separate the magnitude (coefficients $\mathbf{x}_n$) from the shapes (dictionary entries $\mathbf{q}_n$) for the sparse processing objective Eq.(\[eq:sparseObject\]). When $T=1$, the $\ell_2$-norm in Eq. (\[eq:sparseObject\]) is minimized by the dictionary entry $\mathbf{q}_n$ that has the greatest inner product with example $\mathbf{y}_m$.[@elad2010] Thus for $T=1$, $[\mathbf{q}_1,..., \mathbf{q}_N]$ define radial partitions of $\mathbb{R}^K$. These partitions are shown in Fig. \[fig:kmeans\_vs\_dlearn\](b) for a hypothetical 2D ($K=2$) random data set. This corresponds to a special case of VQ, called gain-shape VQ.[@gersho1991] However, for sparse processing, only the shapes of the signals are quantized. The gains, which are the coefficients $\mathbf{x}_m$, are solved. For $T>1$, the sparse solution is analogous to VQ, assigning examples $\mathbf{y}_m$ to dictionary entries in $\mathbf{Q}$ for up to $T$ non-zero coefficients in $\mathbf{x}_m$. Given these relationships between sparse processing with dictionaries and VQ, the MOD[@engan2000] and K-SVD[@aharon06] algorithms attempt to generalize the K-means algorithm to optimization of dictionaries for sparse processing for $T\ge1$. They are two-step algorithms which reflect the two update steps in the K-means codebook optimization: (1) partition data $\mathbf{Y}$ into regions $(R_1,..., R_N)$ corresponding to $\mathbf{c}_n$ and (2) update $\mathbf{c}_n$ to centroid of examples $\mathbf{y}_m\in R_N$. The K-means algorithm is generalized to the dictionary learning problem Eq.(\[eq:dLearnObjective\]) as two steps: 1. Sparse coding: Given dictionary $\mathbf{Q}$, solve for up to $T$ non-zero coefficients in $\mathbf{x}_m$ corresponding to examples $\mathbf{y}_m$ for $m=[1,...,M]$ 2. Dictionary update: Given coefficients $\mathbf{X}$, solve for $\mathbf{Q}$ which minimizes reconstruction error for $\mathbf{Y}$. The sparse coding step (1), which is the same for both MOD and K-SVD, is accomplished using any sparse solution method, including matching pursuit and basis pursuit. The algorithms differ in the dictionary update step. The K-SVD Algorithm ------------------- The K-SVD algorithm is here chosen for its computational efficiency, speed, and convergence to local minima (at least for $T=1$). The K-SVD algorithm sequentially optimizes the dictionary entries $\mathbf{q}_n$ and coefficients $\mathbf{x}_m$ for each update step using the SVD, and thus also avoids the matrix inverse. For $T=1$, the sequential updates of the K-SVD provide optimal dictionary updates for gain-shape VQ.[@aharon06]^,^[@gersho1991] Optimal updates to the gain-shape dictionary will, like K-means updates, either improve or leave unchanged the MSE and convergence to a local minimum is guaranteed. For $T>1$, convergence of the K-SVD updates to a local minimum depends on the accuracy of the sparse-solver used in the sparse coding stage.[@aharon06] In the K-SVD algorithm, each dictionary update step $i$ sequentially improves both the entries $\mathbf{q}_n\in\mathbf{Q}^i$ and the coefficients in $\mathbf{x}_m\in\mathbf{X}^i$, without change in support. Expressing the coefficients as row vectors $\mathbf{x}_T^n\in\mathbb{R}^N$ and $\mathbf{x}_T^j\in\mathbb{R}^N$, which relate all examples $\mathbf{Y}$ to $\mathbf{q}_n$ and $\mathbf{q}_j$, respectively, the $\ell_2$-penalty from Eq. 
(\[eq:dLearnObjective\]) is rewritten as $$\begin{aligned} \begin{split} \|\mathbf{Y}-\mathbf{Q}\mathbf{X}\|^2_\mathcal{F} &=\bigg\|\mathbf{Y}-\sum_{n=1}^N\mathbf{q}_n\mathbf{x}^n_T\bigg\|^2_\mathcal{F} \\ &= \|\mathbf{E}_j-\mathbf{q}_j\mathbf{x}^j_T\|^2_\mathcal{F}, \label{eq:ksvdSeparate} \end{split}\end{aligned}$$ where $$\mathbf{E}_j = \bigg{(}\mathbf{Y}-\sum_{n\ne j}\mathbf{q}_n\mathbf{x}^n_T\bigg{)}. \label{eq:ksvdSeparate2}$$ Thus, in Eq. (\[eq:ksvdSeparate\]) the $\ell_2$-penalty is separated into an error term $\mathbf{E}_j=[\mathbf{e}_{j,1},...,\mathbf{e}_{j,M}]\in\mathbb{R}^{K\times M}$, which is the error for all examples $\mathbf{Y}$ if $\mathbf{q}_j$ is excluded from their reconstruction, and the product of the excluded entry $\mathbf{q}_j$ and coefficients $\mathbf{x}_T^j\in\mathbb{R}^N$. An update to the dictionary entry $\mathbf{q}_j$ and coefficients $\mathbf{x}_T^j$ which minimizes Eq. (\[eq:ksvdSeparate\]) is found by taking the SVD of $\mathbf{E}_j$, which provides the best rank-1 approximation of $\mathbf{E}_j$. However, many of the entries in $\mathbf{x}_T^j$ are zero (corresponding to examples which don’t use $\mathbf{q}_j$). To properly update $\mathbf{q}_j$ and $\mathbf{x}_T^j$ with SVD, Eq. (\[eq:ksvdSeparate\]) must be restricted to examples $\mathbf{y}_m$ which use $\mathbf{q}_j$ $$\|\mathbf{E}_j^R-\mathbf{q}_j\mathbf{x}^j_R\|^2_\mathcal{F}, \label{eq:restricted}$$ where $\mathbf{E}_j^R$ and $\mathbf{x}_R^j$ are entries in $\mathbf{E}_j$ and $\mathbf{x}_T^j$, respectively, corresponding to examples $\mathbf{y}_m$ which use $\mathbf{q}_j$, and are defined as $$\begin{aligned} \begin{split} \mathbf{E}_j^R=\big\{\mathbf{e}_{j,l}\big|\forall_l, \ x_l^j \ne 0\big\}, \ \mathbf{x}_R^j=\big\{x_l^j\big| \ \forall_l, \ x_l^j \ne 0\big\}. \label{eq:restricted} \end{split}\end{aligned}$$ Thus for each K-SVD iteration, the dictionary entries and coefficients are sequentially updated as the SVD of $\mathbf{E}^R_j=\mathbf{USV}^{\rm{T}}$. The dictionary entry $\mathbf{q}_j^i$ is updated with the first column in $\mathbf{U}$ and the coefficient vector $\mathbf{x}^j_R$ is updated as the product of the first singular value $\mathbf{S}(1,1)$ with the first column of $\mathbf{V}$. The K-SVD algorithm is given in Table \[algo:ksvd\]. Given: $\mathbf{Y}\in\mathbb{R}^{K\times M}$, $\mathbf{Q}^0\in\mathbb{R}^{K\times N}$, $T\in\mathbb{N}$, and $i=0$ ------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Repeat until convergence: 1. $m = 1:M$ solve Eq. (\[eq:sparseObject\]) using any sparse solver a: $\widehat{\mathbf{x}}_{m}=\underset{\mathbf{x}_m\in\mathbb{R}^N}{\arg\min} \|\mathbf{y}_m-\mathbf{Q}^i\mathbf{x}_m\|_2 \ \ \text{subject to} \ \ \|\mathbf{x}_m\|_0\le T$ b: $\mathbf{X} = [\widehat{\mathbf{x}}_1 ,...,\widehat{\mathbf{x}}_M]$ 2. for $j = 1:N$ a: compute reconstruction error $\mathbf{E}_j$ as $\mathbf{E}_j=\mathbf{Y}-\sum\limits_{n\ne j}\mathbf{q}_n^i\mathbf{x}^n_T$ b: obtain $\mathbf{E}_j^R$, $\mathbf{x}_R^j$ corresponding to nonzero $\mathbf{x}_T^j$ c: apply SVD to $\mathbf{E}_j^R$ $\mathbf{E}_j^R=\mathbf{USV}^{\rm{T}}$ d: update $\mathbf{q}_j^{i}$: $\mathbf{q}_j^{i}=\mathbf{U}(:,1)$ e: update $\mathbf{x}_R^j$: $\mathbf{x}_R^j=\mathbf{V}(:,1)\mathbf{S}(1,1)$ f: $\mathbf{Q}^{i+1}=\mathbf{Q}^{i}$ $i=i+1$ : The K-SVD Algorithm (Ref. 
)[]{data-label="algo:ksvd"} The dictionary $\mathbf{Q}$ is initialized using $N$ randomly selected, $\ell_2$-normalized examples from $\mathbf{Y}$.[@aharon06]^,^[@elad2010] During the iterations, one or more dictionary entries may become unused. If this occurs, the unused entries are replaced using the most poorly represented examples $\mathbf{y}_m$ ($\ell_2$-normalized), determined by reconstruction error. Experimental results ==================== To demonstrate the usefulness of the dictionary learning approach, we here analyze two data sets: (1) thermistor data from the HF-97 acoustics experiment,[@carbone2000]^,^[@hodgkiss2002] conducted off the coast of Point Loma, CA, and (2) conductivity, temperature, and depth (CTD) data collected across the Luzon Strait near the South China Sea (SCS).[@pinkel] Training data $\mathbf{Y}$ were derived from the data sets by converting raw thermistor and CTD data to SSPs and subtracting the mean. The HF-97 thermistor data was recorded every 15 s, over a 48 hour period, from 14 to 70 m depth, with 4 m spacing (15 points). The full 11,488 profile data set was down-sampled to $M=1000$ profiles for the training set, and SSPs were interpolated to $K=30$ points using a shape-preserving cubic spline. The SCS CTD data was recorded at about 1 m resolution from 116 to 496 m depth (384 points). From the SCS data set, $M=755$ profiles were used as the training set, and the profiles were uniformly down-sampled to $K=50$ points. The SSP data sets are shown in Fig. \[fig:sspHeat\]. Both data sets have small and large spatiotemporal variations. EOFs were calculated from the SVD (Eq. \[eq:pcaAnal\]) and LDs (learned dictionaries) were generated with the K-SVD algorithm (Table \[algo:ksvd\]), using OMP for the sparse coding stage. The number of non-zero coefficients solved with OMP for each dictionary was held fixed at exactly $T$ non-zero coefficients. The initial dictionary $\mathbf{Q}^0$ was populated using randomly selected examples from the training sets $\mathbf{Y}$. Learning SSP dictionaries from data ----------------------------------- Here, LDs and EOFs were generated using the full SSP data from HF-97 ($M=1000$) and SCS ($M=755$). The EOFs and LDs from HF-97 are shown in Fig. \[fig:eofsVSlds\]–\[fig:lds\_hf97\] and from the SCS in Fig. \[fig:lds\_soChina\]. The HF-97 LD, with $N=K$ and $T=1$, is compared to the EOFs ($K=30$) in Fig. \[fig:eofsVSlds\]. Only the leading order EOFs (Fig. \[fig:eofsVSlds\](a)) are informative of ocean SSP variability whereas all shape functions in the LD (Fig. \[fig:eofsVSlds\](b)) are informative (Fig. \[fig:eofsVSlds\](c)–(d)). This behavior is also evident for the SCS data set (Fig. \[fig:lds\_soChina\]). The EOFs ($K=50$) calculated from the full training set are shown in Fig. \[fig:lds\_soChina\](a), and the LD entries for $N=50$ and $T=1$ sparse coefficient are shown in Fig. \[fig:lds\_soChina\](b). The overcomplete LDs for the HF-97 data are shown in Fig. \[fig:lds\_hf97\] and for the SCS data in Fig. \[fig:lds\_soChina\](c). As illustrated in Fig. \[fig:featureSpace\], by relaxing the requirement of orthogonality for the shape functions, the shape functions can better fit the data and thereby achieve greater compression. The Gram matrix $\mathbf{G}$, which gives the coherence of matrix columns, is defined for a matrix $\mathbf{A}$ with unit $\ell_2$-norm columns as $\mathbf{G}=|\mathbf{A}^{\rm{T}}\mathbf{A}|$. The Gram matrix for the EOFs (Fig. 
\[fig:eofsVSlds\](e)) shows the shapes in the EOF dictionary are orthogonal ($\mathbf{G=I}$, by definition), whereas those of the LD (Fig. \[fig:eofsVSlds\](f)) are not. Reconstruction of SSP training data ----------------------------------- In this section, EOFs and LDs are trained on the full SSP data sets $\mathbf{Y}=[\mathbf{y}_1,..., \mathbf{y}_M]$. Reconstruction performance of the EOF and LDs are then evaluated on SSPs within the training set, using a mean error metric. The coefficients for the learned $\mathbf{Q}$ and initial $\mathbf{Q}^0$ dictionaries $\widehat{\mathbf{x}}_m$ are solved from the sparse objective (Eq. (\[eq:sparseObject\])) using OMP. The least squares (LS) solution for the $T$ leading-order coefficients $\mathbf{x}_L\in\mathbb{R}^{T}$ from the EOFs $\mathbf{P}$ were solved by Eq. (\[eq:eofPseudo\]). The best combination of $T$ EOF coefficients was solved from the sparse objective (Eq. (\[eq:sparseObject\])) using OMP. Given the coefficients $\mathbf{X}=[\mathbf{x}_1,...,\mathbf{x}_m]$ describing examples $\mathbf{Y}=[\mathbf{y}_1,...,\mathbf{y}_m]$, the reconstructed examples $\widehat{\mathbf{Y}}=[\widehat{\mathbf{y}}_1,...,\widehat{\mathbf{y}}_m]$ are given by $\widehat{\mathbf{Y}}=\mathbf{Q}\widehat{\mathbf{X}}$. The mean reconstruction error $\rm{ME}$ for the training set is then $$\text{ME}=\frac{1}{KM}\|\mathbf{Y}-\widehat{\mathbf{Y}}\|_1. \label{eq:meanErrorTrain}$$ We here use the $\ell_1$-norm to stress the robustness of the LD reconstruction. To illustrate the optimality of LDs for SSP compression, the K-SVD algorithm was run using EOFs as the initial dictionary $\mathbf{Q}^0$ for $T=1$ non-zero coefficient. The convergence of $\text{ME}$ for the K-SVD iterations is shown in Fig. \[fig:ini\_dSize\](a). After 30 K-SVD iterations, the mean error of the $M=1000$ profile training set is decreased by nearly half. The convergence is much faster for $\mathbf{Q}^0$ consisting of randomly selected examples from $\mathbf{Y}$. For LDs, increasing the number of entries $N$ or increasing the number of sparse coefficients $T$ will always reduce the reconstruction error ($N$ and $T$ are decided with computational considerations). The effect of $N$ and $T$ on the mean reconstruction error for the HF-97 data is shown in Fig. \[fig:ini\_dSize\](b). The errors are calculated for the range $N=K$ to $N=4K$ and the dictionaries were optimized to use a fixed number non-zero coefficients ($T$). The reconstruction error using the EOF dictionary is compared to results from LDs $\mathbf{Q}$ with $N=3K$, using $T$ non-zero coefficients. In Fig. \[fig:error\_vs\_sparsity\]\[(a) and (c)\] results are shown for the HF-97 ($N=90$) and SCS ($N=150$) data, respectively. Coefficients describing each example $\mathbf{y}_m$, were solved (1) from the LD $\mathbf{Q}$, (2) from $\mathbf{Q}^0$, the dictionary consisting of $N$ randomly chosen examples from the training set (to illustrate improvements in reconstruction error made in the K-SVD iterations), (3) the leading order EOFs, and (4) the best combination of EOFs. The mean SSP reconstruction error using the LDs trained for each sparsity $T$ is less than EOF reconstruction, for either leading order coefficients or best coefficient combination, for all values of $T$ shown. The best combination of EOF coefficients, chosen approximately using OMP, achieves less error than the LS solution to the leading order EOFs, with added cost of search. 
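To make the LD training loop concrete, the following sketch implements the two stages described above: OMP sparse coding (also used here to pick the best combination of EOF coefficients) followed by sequential SVD updates of each dictionary entry, as in Table \[algo:ksvd\]. The function names, the fixed iteration count, and the rule for replacing unused entries are assumptions for illustration, not the authors' implementation; the $\ell_1$ mean error of Eq. (\[eq:meanErrorTrain\]) is evaluated at the end.

```python
import numpy as np

def omp(Q, y, T):
    """Greedy OMP: select up to T unit-norm columns of Q, least-squares fit y on them."""
    resid = y.astype(float).copy()
    support, x = [], np.zeros(Q.shape[1])
    for _ in range(T):
        n = int(np.argmax(np.abs(Q.T @ resid)))        # atom most correlated with residual
        if n not in support:
            support.append(n)
        coef, *_ = np.linalg.lstsq(Q[:, support], y, rcond=None)
        resid = y - Q[:, support] @ coef
    x[support] = coef
    return x

def ksvd(Y, N, T, iters=30, seed=0):
    """K-SVD sketch: alternate T-sparse coding of all profiles with sequential
    rank-1 (SVD) updates of each dictionary entry and its nonzero coefficients."""
    rng = np.random.default_rng(seed)
    K, M = Y.shape
    Q = Y[:, rng.choice(M, size=N, replace=False)].astype(float)   # Q^0 from examples (N <= M)
    Q /= np.linalg.norm(Q, axis=0)                                 # unit l2-norm columns
    for _ in range(iters):
        # stage 1: sparse coding of every training profile
        X = np.column_stack([omp(Q, Y[:, m], T) for m in range(M)])
        # stage 2: update each entry q_j and the coefficients that use it
        for j in range(N):
            used = np.flatnonzero(X[j, :])
            if used.size == 0:                          # unused entry: replace with the
                err = np.linalg.norm(Y - Q @ X, axis=0) # worst-represented example
                worst = Y[:, int(err.argmax())]
                Q[:, j] = worst / np.linalg.norm(worst)
                continue
            E_R = Y[:, used] - Q @ X[:, used] + np.outer(Q[:, j], X[j, used])
            U, s, Vt = np.linalg.svd(E_R, full_matrices=False)
            Q[:, j] = U[:, 0]                           # updated dictionary entry
            X[j, used] = s[0] * Vt[0, :]                # updated coefficients
    return Q, X

# Example: train an overcomplete LD with N = 3K entries and T = 1, then report
# the l1 mean reconstruction error over the training set.
Q, X = ksvd(Y, N=3 * Y.shape[0], T=1)
print("ME:", np.mean(np.abs(Y - Q @ X)))
```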
Just one LD entry achieves the same $\rm{ME}$ as more than 6 leading order EOF coefficients, or greater than 4 EOF coefficients chosen by search (Fig. \[fig:error\_vs\_sparsity\]\[(a) and (c)\]). To illustrate the representational power of the LD entries, both true and reconstructed SSPs are shown in Fig. \[fig:sspEst\_examps\](a) for the HF-97 data and in Fig. \[fig:sspEst\_examps\](b) for the SCS data. Nine true SSP examples from each training set, for HF-97 (SCS) taken at 100 (80) point intervals from $m=100$ to 900 (80 to 720), are reconstructed using one LD coefficient. It is shown for each case, that nearly all of the SSP variability is captured using a single LD coefficient. Cross-validation of SSP reconstruction -------------------------------------- The out of sample SSP reconstruction performance of LDs and EOFs is tested using K-fold cross-validation.[@hastie2009] The entire SSP data set $\mathbf{Y}$ of $M$ profiles, for each experiment, is divided into $J$ subsets with equal numbers of profiles $\mathbf{Y}=[\mathbf{Y}_1,...,\mathbf{Y}_J]$, where the fold $\mathbf{Y}_j\in\mathbb{R}^{K\times (M/J)}$. For each of the $J$ folds: (1) $\mathbf{Y}_j$ is the set of out of sample test cases, and the training set $\mathbf{Y}_{tr}$ is $$\mathbf{Y}_{tr}=\big\{\mathbf{Y}_l\big| \ \forall_{l\ne j}\big\}; \label{eq:restricted}$$ (2) the LD $\mathbf{Q}_j$ and EOFs are derived using $\mathbf{Y}_{tr}$; and (3) coefficients estimating test samples $\mathbf{Y}_j$ are solved for $\mathbf{Q}_j$ with sparse processor Eq. (\[eq:sparseObject\]), and for EOFs by solving for leading order terms and by solving with sparse processor. The out of sample error from cross validation $\rm{ME}_{CV}$ for each method is then $$\text{ME}_{CV}=\frac{1}{KM}\sum_{j=1}^J\|\mathbf{Y}_j-\widehat{\mathbf{Y}}_j\|_1. \label{eq:kFolds}$$ The out of sample reconstruction error $\rm{ME}_{CV}$ increases over the within-training-set estimates for both the learned and EOF dictionaries, as shown in Fig. \[fig:error\_vs\_sparsity\]\[(b) and (d)\] for $J=10$ folds. The mean reconstruction error using the LDs, as in the within-training-set estimates, is less than the EOF dictionaries. For both the HF-97 (SCS) data, more than 2 (2) EOF coefficients, choosing best combination by search, or more than 3 (equal to 3) leading-order EOF coefficients solved with LS, are required to achieve the same out of sample performance as one LD entry. Solution space for SSP inversion -------------------------------- Acoustic inversion for ocean SSP is a non-linear problem. One approach is coefficient search using genetic algorithms.[@gerstoft94] Discretizing each coefficient into $H$ values, the number of candidate solutions for $T$ fixed coefficients indices is $$S_\text{fixed}=H^T. \label{eq:restricted}$$ If the coefficient indices for the solution can vary, as per dictionary learning with LD $\mathbf{Q}\in\mathbb{R}^{K\times N}$, the number of candidate solutions $S_\text{comb}$ is $$S_\text{comb}=H^T\frac{N!}{T!(N-T)!}. \label{eq:restricted}$$ Using a typical $H=100$ point discretization of the coefficients, the number of possible solutions for fixed and combinatorial dictionary indices are plotted in Fig. \[fig:searchSize\]. Assuming an unknown SSP similar to the training set, the SSP may be constructed up to acceptable resolution using one coefficient from the LD ($10^4$ possible solutions, see Fig. \[fig:searchSize\]). To achieve the similar ME, 7 EOFs coefficients are required ($10^{14}$ possible solutions, Fig. 
Conclusion
==========

Given sufficient training data, dictionary learning generates optimal dictionaries for sparse reconstruction of a given signal class. Since these LDs are not constrained to be orthogonal, the entries fit the distribution of the data such that each signal example is approximated using few LD entries. Relative to EOFs, each LD entry is more informative of the signal variability.

The K-SVD dictionary learning algorithm is applied to ocean SSP data from the HF-97 and SCS experiments. It is shown that the LDs generated describe ocean SSP variability with high resolution using fewer coefficients than EOFs. As few as one coefficient from an LD describes nearly all the variability in each of the observed ocean SSPs. This performance gain is achieved by the larger number of informative elements in the LDs over EOF dictionaries. Provided sufficient SSP training data is available, LDs can improve SSP inversion resolution with negligible computational expense. This could provide improvements to geoacoustic inversion,[@gerstoft94] matched field processing,[@bag93]^,^[@verlinden15] and underwater communications.[@carbone2000]

The authors would like to thank Dr. Robert Pinkel for the use of the South China Sea CTD data. This work is supported by the Office of Naval Research, Grant No. N00014-11-1-0439.

[99]{} P. Gerstoft, “Inversion of seismoacoustic data using genetic algorithms and *a posteriori* probability distributions," J. Acoust. Soc. Am. **95**(2), 770–782 (1994). L.R. LeBlanc and F.H. Middleton, “An underwater acoustic sound velocity data model," J. Acoust. Soc. Am. **67**(6), 2055–2062 (1980). M. I. Taroudakis and J. S. Papadakis, “A modal inversion scheme for ocean acoustic tomography," J. Comp. Acoust. **1**(4), 395–421 (1993). P. Gerstoft and D.F. Gingras, “Parameter estimation using multifrequency range-dependent acoustic data in shallow water," J. Acoust. Soc. Am. **99**(5), 2839–2850 (1996). C. Park, W. Seong, P. Gerstoft, and W. S. Hodgkiss, “Geoacoustic inversion using backpropagation," IEEE J. Ocean. Eng. **35**(4), 722–731 (2010). B. A. Tan, P. Gerstoft, C. Yardim, and W. S. Hodgkiss, “Broadband synthetic aperture geoacoustic inversion," J. Acoust. Soc. Am. **134**(1), 312–322 (2013). C. F. Huang, P. Gerstoft, and W. S. Hodgkiss, “Effect of ocean sound speed uncertainty on matched-field geoacoustic inversion," J. Acoust. Soc. Am. **123**(6), EL162–EL168 (2008). R. Rubinstein, A.M. Bruckstein, and M. Elad, “Dictionaries for sparse representation modeling," Proceedings of the IEEE **98**(6), 1045–1057 (2010). M. Elad, *Sparse and Redundant Representations*, Springer, New York (2010). I. Tosic and P. Frossard, “Dictionary learning," IEEE Sig. Proc. Mag. **28**(2), 27–38 (2011). K. Schnass, “On the identifiability of overcomplete dictionaries via the minimisation principle underlying K-SVD," App. and Comp. Harm. Anal. **37**(3), 464–491 (2014). M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," IEEE Trans. Sig. Proc. **54**(11), 4311–4322 (2006). K. Engan, S.O. Aase, and J.H. Husøy, “Multi-frame compression: theory and design," Signal Processing **80**(10), 2121–2140 (2000). A. Hyvärinen, J. Hurri, and P.O. Hoyer, *Natural Image Statistics: A Probabilistic Approach to Early Computational Vision*, Springer Science and Business Media (2009). C. Christopoulos, A.
Skodras, and T. Ebrahimi, “The JPEG2000 still image coding system: an overview," IEEE Trans. Cons. Elec. **46**(4), 1103–1127 (2000). A. Gersho and R.M. Gray, *Vector quantization and signal compression*. Norwell, MA: Kluwer Academic, 1991. H.L. Taylor, S.C. Banks, and J.F. McCoy, “Deconvolution with the $\ell_1$-norm," Geophysics **44**(1), 39–52 (1979). E. Candès, “Compressive sampling," *Proceedings of the international congress of mathematicians*, Vol. 3, 1433–1452 (2006). G. Edelmann and C. Gaumond, “Beamforming using compressive sensing," J. Acoust. Soc. Am. **130**(4), EL232–EL237 (2011). A. Xenaki, P. Gerstoft, and K. Mosegaard, “Compressive beamforming," J. Acoust. Soc. Am. **136**(1), 260–271 (2014). P. Gerstoft, A. Xenaki, and C. F. Mecklenbräuker, “Multiple and single snapshot compressive beamforming," J. Acoust. Soc. Am. **138**(4), 2003–2014 (2015). Y. Choo and W. Song, “Compressive spherical beamforming for localization of incipient tip vortex cavitation," J. Acoust. Soc. Am. **140**(6), 4085–4090 (2016). C. Yardim, P. Gerstoft, W. S. Hodgkiss, and J. Traer, “Compressive geoacoustic inversion using ambient noise," J. Acoust. Soc. Am. **135**(3), 1245–1255 (2014). M. Bianco and P. Gerstoft, “Compressive acoustic sound speed profile estimation," J. Acoust. Soc. Am. EL **139**(3), EL90–EL94 (2016). S. Beckouche and J. Ma, “Simultaneous dictionary learning and denoising for seismic data," Geophysics **79**(3), 27–37 (2014). M. Taroudakis and C. Smaragdakis, “De-noising procedures for inverting underwater acoustic signals in applications of acoustical oceanography," *Euronoise 2015 Maastricht*, pp. 1393–1398 (2015). T. Wang and W. Xu, “Sparsity-based approach for ocean acoustic tomography using learned dictionaries," *OCEANS 2016 Shanghai IEEE*, pp. 1–6 (2016). K.S. Alguri and J.B. Harley, “Consolidating guided wave simulations and experimental data: a dictionary learning approach," Proc. SPIE, Health Monitoring of Structural and Biological Systems **9805**, 98050Y-1–98050Y-10 (2016). A. Hannachi, I.T. Jolliffe, and D.B. Stephenson, “Empirical orthogonal functions and related techniques in atmospheric science: a review," International Journal of Climatology **27**(9), 1119–1152 (2007). A.H. Monahan, J.C. Fyfe, M.H. Ambaum, D.B. Stephenson, and G.R. North, “Empirical orthogonal functions: the medium is the message," Journal of Climate **22**(24), 6501–6514 (2009). N. Carbone and W.S. Hodgkiss, “Effects of tidally driven temperature fluctuations on shallow-water acoustic communications at 18 kHz," IEEE Journal of Ocean. Eng. **25**(1), 84–94 (2000). W.S. Hodgkiss, W.A. Kuperman, and D.E. Ensberg, “Channel impulse response fluctuations at 6 kHz in shallow water," *Impact of littoral environmental variability of acoustic predictions and sonar performance*, Springer, Netherlands, pp. 295–302 (2002). C.T. Liu, R. Pinkel, M.K. Hsu, J. M. Klymak, H.W. Chen, and C. Villanoy, “Nonlinear internal waves from the Luzon Strait," Eos Trans. AGU **87**(42), 449–451 (2006). T. Hastie, R. Tibshirani, and J. Friedman, *The elements of statistical learning: data mining, inference and prediction*, 2nd Ed. Springer (2009). Y.C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, “Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," in IEEE Proc. 27th Annu. Asilomar Conf. Signals, Systems and Computers, 40–44 (1993). A.B. Baggeroer, W.A. Kuperman, and P.N. Mikhalevsky, “An overview of matched field methods in ocean acoustics," IEEE Journal of Ocean. Eng.
**18**(4), 401–424 (1993). C.M. Verlinden, J. Sarkar, W.S. Hodgkiss, W.A. Kuperman, and K.G. Sabra, “Passive acoustic source localization using sources of opportunity," J. Acoust. Soc. Am. **138**(1), EL54–EL59 (2015). | High | [
0.693823915900131,
33,
14.5625
]
|
Tuesday, October 16, 2012 Shhhh!Don't spread the word! Three-day weekend. House party.White Rock House on Henry Island.You do not want to miss it.It was supposed to be the weekend of their lives—three days on Henry Island at an exclusive house party. Best friends Meg and Minnie each have their own reasons for wanting to be there, which involve their school's most eligible bachelor, T. J. Fletcher, and look forward to three glorious days of boys, bonding, and fun-filled luxury.But what they expect is definitely not what they get, and what starts out as fun turns dark and twisted after the discovery of a DVD with a sinister message: Vengeance is mine.Suddenly, people are dying, and with a storm raging outside, the teens are cut off from the rest of the world. No electricity, no phones, no internet, and a ferry that isn't scheduled to return for three days. As the deaths become more violent and the teens turn on each other, can Meg find the killer before more people die? Or is the killer closer to her than she could ever imagine? REVIEW So I just finished Ten, and I have to say, wow is it gripping. Fans of horror movies will see a lot of similar themes and scares in Ten, but even so and even knowing where it might go, it was utterly gripping. I honestly couldn’t put it down even though I knew I shouldn’t be reading this at night, (as now I’m scared and have to read something happy and more carefree), but wow if it wasn’t captivating. Ten is filled with action, death, surprises, and as mentioned before is gripping in a way that will keep readers stuck to their seats until the last page. I’m not a fan of horror movies or books, but I do recommend this for anyone looking for a shocking and captivating book, especially if you’re a fan of the horror movie genre. Perfect for the Halloween season. Tuesday, October 9, 2012 And then there was a car crash, a horrible injury, and a hospital. But before Evening Spiker's head clears a strange boy named Solo is rushing her to her mother’s research facility. There, under the best care available, Eve is left alone to heal. Just when Eve thinks she will die – not from her injuries, but from boredom—her mother gives her a special project: Create the perfect boy. Using an amazingly detailed simulation, Eve starts building a boy from the ground up. Eve is creating Adam. And he will be just perfect . . . won’t he? REVIEW Do you remember the Animorphs series? That was the product of the collaboration of Michael Grant and Katherine Applegate. Since then, I’ve also been a fan of Grant’s BZRK series, so I couldn’t wait to see what these two had up there sleeves in Eve and Adam. Eve and Adam started from a re-imagining of the story of Adam and Eve, and the two authors have done a wonderful job putting a new and fascinating spin on the old tale. Readers will be spellbound as Eve struggles with the bonds and uncomfortable disconnects of creating something as complex as a human, let alone the perfect boy. But this story isn’t just about a girl making the perfect boy instead it’s filled with action, shocking secrets, moral decisions, and of course love. I loved the way the book stars with a terrible car crash reeling the readers in immediately, and setting a precedent of action and adventure for the rest of the book. While the book doesn’t meet this precedent throughout the entire novel, there are plenty of action packed moments. Besides the action, readers will enjoy the alternating POV of the different characters. My favorite character probably was Solo. 
He’s a strong, vengeful character, who is forced to reconsider his motivation after seeing the world through the eyes of Eve. He is likable and full of a take charge attitude that helps keep the story going at a nice pace. Of course, while I really enjoyed the action and the characters, I did find a bit of the love triangle aspect of the book a little much. I understand that is one of the basic premises of the book, but I would have preferred it span over a longer period of time before such drastic decisions were made on these feelings. All in all I liked Eve & Adam and can’t wait for the next book in the series. As the story leaves off with readers itching to find out more about Adam and the others. He left his countryside home on the empty promise of a stranger, only to become a captive in a luxurious prison: Coudenberg Palace, the royal court of the Spanish Infanta. Nobody warned Jepp that as a court dwarf, daily injustices would become his seemingly unshakable fate. If the humiliations were his alone, perhaps he could endure them; but it breaks Jepp’s heart to see his friend Lia suffer. After Jepp and Lia attempt a daring escape from the palace, Jepp is imprisoned again, alone in a cage. Now, spirited across Europe in a kidnapper’s carriage, Jepp fears where his unfortunate stars may lead him. But he can't even begin to imagine the brilliant and eccentric new master—a man devoted to uncovering the secrets of the stars—who awaits him. Or the girl who will help him mend his heart and unearth the long-buried secrets of his past. REVIEW Rarely do I read a book that is written with such advanced skill as to evoke images of great poetry, while at the same time possessing a level of ease allowing even the most reluctant of reader to be swept into the story. All of this leaves only one word to describe Jepp, Who Defied the Stars. Magical. Now because you probably want more from a review than just one word I’ll say a bit more about the book. Jepp, Who Defied the Stars is one of the best written books I’ve read in a long while. It is full of beautiful written scenery and descriptions, yet gripping. It is filled with three dimensional characters that you see the faults in, yet still understand there decisions, as they’ve been built as humans not stick figured characters. And it moves at a startlingly quick pace for a book that spends so much time describing the surrounding world and characters in such in-depth detail. Jepp is a fantastic character. He is smart, likable, and fallible. Of course this ability to make mistakes and be swayed by emotion is one of the reasons readers will love Jepp, as they will commiserate with his journey and growth, getting chocked up at the low points and feeling elated at the high. He is a well crafted character who is just a kid learning as he’s forced into impossible situations away from home. Of course, besides making Jepp a wonderful character, Marsh also does a fantastic job recreating the world and situations of the 15 and 1600s. My personal favorite fact based location/character was Tycho and Uraniborg, which I found fascinating, reading as Tycho mapped the stars with nothing more than math they discovered and rudimentary materials. Absolutely fascinating. As you can probably tell I very much Enjoyed Jepp, Who Defied the Stars. I thought it was well written, gripping, original, and balanced story building with plot development very well. All in all I would easily recommend this book to ANYONE I happen to come across, adult or teen. Go buy it. 
Saturday, October 6, 2012 A god has died, and it’s up to Tara, first-year associate in the international necromantic firm of Kelethres, Albrecht, and Ao, to bring Him back to life before His city falls apart.Her client is Kos, recently deceased fire god of the city of Alt Coulumb. Without Him, the metropolis’s steam generators will shut down, its trains will cease running, and its four million citizens will riot.Tara’s job: resurrect Kos before chaos sets in. Her only help: Abelard, a chain-smoking priest of the dead god, who’s having an understandable crisis of faith.When Tara and Abelard discover that Kos was murdered, they have to make a case in Alt Coulumb’s courts—and their quest for the truth endangers their partnership, their lives, and Alt Coulumb’s slim hope of survival. REVIEW The first thing I have to say about this book is wow. From the 1st page to the last I was absolutely riveted. Even now having finished the book, the first thing I did after reading the last page was look up the authors website and see when the next book he’s writing is coming out. If that’s not a sign of a good book, I don’t know what is. Three Parts Dead has a fantastic combination of characters. All the characters are compromised, all them are filled with histories filled with shadows, and yet you find yourself rooting for these broken characters. I know you’re supposed to love the protagonist, but in this case I really did. Tara is strong, smart, and full of piss and vinegar. Her history is almost as interesting as the back-story to the world here (which I personally could have read an entire series on and loved (not just a book but an entire series.)) That brings me to the second thing I absolutely loved about Three Parts Dead, the rich history of the God Wars and the system of magic that encompassed the book. As I mentioned before I could read an entire series about the God Wars, what was mentioned about this worlds history sound fascinating and thrilling. Gods battling against craftsmen and women (basically human magicians), and the transformation that occurs to these craftsmen over time, changing them from their human bodies, to something of the stars, all in all this whole history and back-story was fascinating. As for the system of magic, as a law student I loved the fact that Gladstone created an entire system of magic based on and for lawyers. Magic is based on contracts and agreements, with the craftiest and sliest being those with the most power. Those who are not in law school may be tripped up by terms such as law of perpetuities, but it is not a huge part of the story, but something that will bring a smile to those who do understand. All in all I loved Three Parts Dead and can’t wait for the next book by Max Gladstone. It had great characters, plenty of action, and is filled with fascinating back-story and system of magic. Definitely recommended for fans of a darker sort of magic. | Mid | [
0.613513513513513,
28.375,
17.875
]
|
Q: There is no Action mapped for action name testAction

I made my first struts2 application, and after I tried to launch it I have the following error: There is no Action mapped for action name testAction. If the namespace is correct, what is the problem? struts.xml is located in the src folder:

    <struts>
        <package name="default" namespace="/home/jsp/" extends="struts-default">
            <action name="testAction" class="com.myapp.common.action.TestAction">
                <result name="success">/success.jsp</result>
            </action>
        </package>
    </struts>

A:

    <package name="default" namespace="/home/jsp" extends="struts-default">
        <action name="testAction" class="com.myapp.common.action.TestAction">
            <result name="success">success.jsp</result>
        </action>
    </package>

| High | [
0.6633416458852861,
33.25,
16.875
]
|
Q: multiply list of rows with one column by condition (DataFrame)

I want to multiply a list of columns by one column, so I have the list of the columns = cols, but I want to multiply only the rows where name=="A" by the column "multi".

    data = {"col1": [2,3,4,5], "col2": [4,2,4,6], "col3": [7,6,9,11],
            "col4": [14,11,22,8], "name": ["A","A","V","A"], "multi": [1.4,2.5,1.6,2.2]}
    df = pd.DataFrame.from_dict(data)
    cols = list(df.columns)
    for x in ["multi", "name", "col4"]:
        cols.remove(x)
    df

Something like this:

    df[cols] = df.loc[df["name"]=="A"] * df["multi"]

A: try this

    df.loc[df.name=='A', cols] = df[df.name=='A'].apply(lambda r: r[cols]*r['multi'], axis=1)

| High | [
0.661904761904761,
34.75,
17.75
]
|
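A short follow-up to the pandas question above: the accepted answer's row-wise `apply` can be replaced with a vectorized multiply, which is usually faster on large frames. This is an illustrative alternative using the question's own column names; it is not part of the original answer.

    import pandas as pd

    data = {"col1": [2, 3, 4, 5], "col2": [4, 2, 4, 6], "col3": [7, 6, 9, 11],
            "col4": [14, 11, 22, 8], "name": ["A", "A", "V", "A"],
            "multi": [1.4, 2.5, 1.6, 2.2]}
    df = pd.DataFrame(data)
    cols = ["col1", "col2", "col3"]

    # Multiply only the rows where name == "A", broadcasting "multi" across cols.
    mask = df["name"] == "A"
    df.loc[mask, cols] = df.loc[mask, cols].mul(df.loc[mask, "multi"], axis=0)
    print(df)

Here `mul(..., axis=0)` aligns the `multi` series with the selected rows by index, so only the rows matching the mask are scaled.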
Normocapnic anaesthesia with trichloroethylene for intraocular surgery. Measurements of intraocular pressure (IOP) by applanation tonometry in twelve patients undergoing lens extraction showed that a normocapnic anaesthetic technique using 0.4% trichloroethylene with controlled ventilation of the lungs (IPPV) with large tidal volumes (14 ml/kg) reduced IOP by 13--20%. There was only a small reduction in arterial pressure. Normocapnia was easy to achieve by use of the single-limb co-axial Penlon (Bain type) anaesthetic breathing circuit in conjunction with an electrically-driven, small and inexpensive lung ventilator. The anaesthetic technique described using trichloroethylene is suitable for lens extraction surgery when it is desired to avoid a halothane anaesthetic for any reason. | High | [
0.6666666666666661,
34.75,
17.375
]
|
Q: MySQL: split value in column to get multiple rows I have some data in a table like so: product_id | categories ----------------+------------- 10 | 9,12 11 | 8 12 | 11,18,5 I want a select statement that would produce this output: product_id | category_id ----------------+------------- 10 | 9 10 | 12 11 | 8 12 | 11 12 | 18 12 | 5 I don't know how to phrase this scenario to be able to google it. A: What you are looking for is the inverse of a GROUP BY aggregate query using the GROUP_CONCAT. If you are willing to store the results in a temp table, I got just the thing. First, here is the code to use you sample data in a table called prod and a temp table called prodcat to hold the results you are looking for. use test drop table if exists prod; drop table if exists prodcat; create table prod ( product_id int not null, categories varchar(255) ) engine=MyISAM; create table prodcat ( product_id int not null, cat int not null ) engine=MyISAM; insert into prod values (10,'9,12'),(11,'8'),(12,'11,18,5'); select * from prod; Here it is loaded mysql> use test Database changed mysql> drop table if exists prod; Query OK, 0 rows affected (0.00 sec) mysql> drop table if exists prodcat; Query OK, 0 rows affected (0.00 sec) mysql> create table prod -> ( -> product_id int not null, -> categories varchar(255) -> ) engine=MyISAM; Query OK, 0 rows affected (0.07 sec) mysql> create table prodcat -> ( -> product_id int not null, -> cat int not null -> ) engine=MyISAM; Query OK, 0 rows affected (0.06 sec) mysql> insert into prod values -> (10,'9,12'),(11,'8'),(12,'11,18,5'); Query OK, 3 rows affected (0.00 sec) Records: 3 Duplicates: 0 Warnings: 0 mysql> select * from prod; +------------+------------+ | product_id | categories | +------------+------------+ | 10 | 9,12 | | 11 | 8 | | 12 | 11,18,5 | +------------+------------+ 3 rows in set (0.00 sec) mysql> OK, you need query to put together each product_id with each category. 
Here it is: select concat('insert into prodcat select ',product_id,',cat from (select NULL cat union select ', replace(categories,',',' union select '),') A where cat IS NOT NULL;') ProdCatQueries from prod; Here it is executed mysql> select concat('insert into prodcat select ',product_id,',cat from (select NULL cat union select ', -> replace(categories,',',' union select '),') A where cat IS NOT NULL;') ProdCatQueries from prod; +----------------------------------------------------------------------------------------------------------------------------------+ | ProdCatQueries | +----------------------------------------------------------------------------------------------------------------------------------+ | insert into prodcat select 10,cat from (select NULL cat union select 9 union select 12) A where cat IS NOT NULL; | | insert into prodcat select 11,cat from (select NULL cat union select 8) A where cat IS NOT NULL; | | insert into prodcat select 12,cat from (select NULL cat union select 11 union select 18 union select 5) A where cat IS NOT NULL; | +----------------------------------------------------------------------------------------------------------------------------------+ 3 rows in set (0.00 sec) mysql> Let me run each line by hand mysql> insert into prodcat select 10,cat from (select NULL cat union select 9 union select 12) A where cat IS NOT NULL; Query OK, 2 rows affected (0.07 sec) Records: 2 Duplicates: 0 Warnings: 0 mysql> insert into prodcat select 11,cat from (select NULL cat union select 8) A where cat IS NOT NULL; Query OK, 1 row affected (0.00 sec) Records: 1 Duplicates: 0 Warnings: 0 mysql> insert into prodcat select 12,cat from (select NULL cat union select 11 union select 18 union select 5) A where cat IS NOT NULL; Query OK, 3 rows affected (0.00 sec) Records: 3 Duplicates: 0 Warnings: 0 mysql> OK, good. The queries work. Did the prodcat table populate properly? mysql> select * from prodcat; +------------+-----+ | product_id | cat | +------------+-----+ | 10 | 9 | | 10 | 12 | | 11 | 8 | | 12 | 11 | | 12 | 18 | | 12 | 5 | +------------+-----+ 6 rows in set (0.00 sec) mysql> OK Great. It has the data. To be honest, I think SQL Server can perform all of this in a single pivot query without a handmade temp table. I could have taken it to another level and concatenated all the queries into a single query, but the SQL would have been insanely long. If your actual query had 1000s of rows, a single MySQL would not have been practical. Instead of running the 3 INSERT queries by hand, you could echo the 3 INSERT queries to a text file and execute it as a script. Then, you have a table with the products and categories combinations individually written. A: There isn't a built-in MySQL trickery per se, but you can store a custom procedure that will accomplish your goal in a clever way. 
Assuming your products table has the columns product_id, categories, and the new category_id: DELIMITER $$ CREATE FUNCTION SPLIT_STRING(val TEXT, delim VARCHAR(12), pos INT) RETURNS TEXT BEGIN DECLARE output TEXT; SET output = REPLACE(SUBSTRING(SUBSTRING_INDEX(val, delim, pos), CHAR_LENGTH(SUBSTRING_INDEX(val, delim, pos - 1)) + 1), delim, ''); IF output = '' THEN SET output = null; END IF; RETURN output; END $$ CREATE PROCEDURE TRANSFER_CELL() BEGIN DECLARE i INTEGER; SET i = 1; REPEAT INSERT INTO products (product_id, category_id) SELECT product_id, SPLIT_STRING(categories, ',', i) FROM products WHERE SPLIT_STRING(categories, ',', i) IS NOT NULL; SET i = i + 1; UNTIL ROW_COUNT() = 0 END REPEAT; END $$ DELIMITER ; CALL TRANSFER_CELL() ; Afterwards, I would delete all rows WHERE categories NOT NULL. | High | [
0.663536776212832,
26.5,
13.4375
]
|
Send questions and comments to [email protected] and follow me on Twitter!Question: Looking at the Best Drama shortlist from last year as an example, do you think many of the usual suspects like Mad Men and Breaking Bad may have their ... Question: Looking at the Best Drama shortlist from last year as an example, do you think many of the usual suspects like Mad Men and Breaking Bad may have their best days behind them (maybe not so much objectively as much as in short-attentioned minds of many voters), along with Homeland seeming to have edged ever-so-slightly into ludicrousness (get pacemaker serial number and induce heart attack, all without Chloe opening a socket), Downton Abbey now having a "perennial obligatory nominee" vibe, and Boardwalk Empire maybe not even deserving to make the final cut anymore, could this be the year that Game of Thrones finally breaks out of the fantasy ghetto and gets enough votes to have its name called when the big envelope is opened? Scenes so far this year involving Lannisters sitting across a table from and talking to (or yelling at) each other are among the most enjoyable I've seen from any show all season. Thrilling scenes such as Dany acquiring her army of soon-to-be freed slaves, the Night's Watch rebellion and the Hound's trial by combat leave the viewer breathless. Quietly mesmerizing speeches such as Jaime confessing to a selfless act that saved thousands but was dubiously honored with "Kingslayer" for his trouble (all while holding his stump in front of him, reminding us that, yes, the show actually did go there). And since producers have kept the broad strokes close to George R.R. Martin's original story, there's no reason not to expect many viewers to require oral surgery by the end of the season given how hard jaws will hit the floor as the events those of us who are reading ahead are giddily awaiting will finally come to pass. In spite of passing within shouting distance of crazytown, I still think Homeland had a very good season, I'd say that Breaking Bad has not shown a dropoff in quality, I believe that Justified reached and arguably exceeded the greatness of season 2, and although those are the only dramas that come close, Game of Thrones leaves them all in the dust. Now that we're close to the end of the current TV season, how do you think Game of Thrones stacks up this year? - Mike Page 2 of 9 - Matt Roush: This has been a remarkable season so far for Game of Thrones for all of the reasons you so cleverly articulate. It absolutely deserves the Emmy nods it will get, and if the momentum continues as we expect it to, up to and including the "events" you hint at thankfully without spoiling, Thrones could end up a front-runner, and wouldn't that be great. Mad Men for me is only now starting to catch fire this season, and between Breaking Bad's incomplete half-season (to me, a corporate misstep on AMC's part to split the final season into two) and Homeland's lurch into ludicrousness in the second half of the season, this does present an opportunity for Thrones. I would take issue that Downton Abbey's inclusion among the top dramas is a rubber-stamped inevitability. Few current dramas bring such joy to so many, and I'd hate to see it ignored. Ditto The Good Wife, which had another tremendous season. I'd also like to see FX's The Americans acknowledged for its smart, taut first season, although maybe not at the expense of Justified. 
And then there's Rectify, which like Breaking Bad suffers from feeling incomplete with only six hours in the first season. But back to Thrones' chances of winning: Just as it took the Lord of the Rings movies several tries before winning the big Oscar, there is a sense now that Thrones at its best - and this season it is at its best - transcends genre and is so magnificently produced that it deserves to be taken very seriously as a contender. Question: Three things are on my mind this week: 1) Having seen all of it now, Sundance Channel's Rectify is a bona fide masterpiece, up there with Breaking Bad for me (no coincidence that two of its producers are from that show), so thanks for highlighting it in a recent piece. It's great news that it's coming back for a 10-episode second season, though I have to wonder how this show didn't land on Sundance's sister network AMC considering the wider exposure there and the sad fact that AMC will be losing its two best shows over the next year. 2) Is Tatiana Maslany of Orphan Black now TV's most versatile actress or merely its hardest-working? She has to be pulling 16-hour days to do this, and I'm in awe of the way she completely differentiates her characters. 3) The moment I wake up I say a little prayer asking that the "Untitled Greg Garcia Project" isn't picked up. That sounds mean, but Margo Martindale is simply too valuable to The Americans for her to be lost to a sitcom, don't you agree? Seeing her go toe-to-toe with Felicity Elizabeth is one of the great joys of this TV season, almost as much fun as Pete Campbell vs. Lane Pryce on Mad Men last year. - Brian Page 3 of 9 - Matt Roush: You must be eavesdropping on my own private conversations, where each of these topics has been front and center lately. 1) Given the deceptively quiet nature and slow dramatic build of Rectify, Sundance may be a more compatible home for this series, but I was glad to see AMC give the pilot episode a boost by re-airing it after Mad Men recently. This is definitely worth seeking out. 2) I've been in discussion with other TV critics recently on a project, and Tatiana Maslany's tour de force on Orphan Black keeps coming up as something we're all impressed by. So fun to welcome new talents to the TV party, and that includes Aden Young's haunting work on Rectify as well. 3) I'm just happy to know that the wonderful Margo Martindale is so in demand. But yes, one of the highlights of The Americans was the crackling hostility between Elizabeth (Keri Russell, another breakout star this season) and her handler Claudia aka "Granny," who is much deadlier and cannier than she appears. Margo has great comic chops as well, so wherever she lands, I'll be happy to see her - and career-wise, landing a sitcom gig on CBS is nothing to scoff at. But I'll secretly be hoping Claudia's time in Russia is short-lived as well. Question: How did The Following go so badly wrong? I started watching largely due to your review of the pilot and the strength of Kevin Bacon and James Purefoy. How did they go from the gripping pilot to the clichéd ending of a dark and deserted lighthouse and the villain "dying" in a fire after a knock down fight the hero goaded him into by insulting his favorite writer? And of course there are enough question marks around the body that the villain may still be alive, sure didn't see that one coming. Did the writers use up what little originality they actually had writing the pilot? 
- Jason Matt Roush: The sad fate of Agent Parker aside (and nice cell reception, by the way, in her underground tomb), that finale was such a paint-by-numbers collection of hoary and lazy horror-show clichés, from the fiery "demise" of Joe Carroll to the "shock" ending of Ryan's psycho neighbor lurking to stab our heroes for a cynically dark finish. As Ryan and Claire kept snarling at Joe throughout the ridiculous finale: So predictable! Still, I have no reservations about having recommended this show based on its scary pilot with its unexpectedly nasty twists, and for a good part of the season, the intrigues among Carroll's "following" (especially the Emma-Jacob-Paul triangle) helped make up for silly episodic plotting that made the good guys look like they'd been trained by Barney Fife. But ultimately, The Following couldn't sustain the diabolical nature of its premise over the course of a season, which often happens to shows that might have made better movies or miniseries. I'm sure I'll check into the second season out of curiosity next year, but my expectations certainly have been significantly lowered. Page 4 of 9 - Question: Shut the front door! The Good Wife had one of the best finales of this season or any season. Fun in the courtroom, making out in a limo, political intrigue, office politics, a deluge of guest stars and even Mother Florrick had a nice surprise twist. Can't wait to see where it all lands next year. A little worried for Cary, though; I don't think you would ever want to cross Kalinda. This episode has got to win an Emmy for best writing. - Gary T Matt Roush: Couldn't agree more, as my own write-up in advance of the episode's airing will remind you. This is everything I could want in a season finale: pure entertainment, great twists, superb performances (including some of the best guest-star work in the business), smart writing. But that's just our opinion. Here's an opposing viewpoint. Question: We've always enjoyed The Good Wife and found it to be a cut above CBS' standard procedural fare, but the recent season finale was, to say the least, not the show's finest hour. I don't need strict realism in my TV viewing, but the show's many "twists" were so far-fetched as to be eye-rolling, and frequently took us out of the action. In what universe would a candidate's wife serve as the attorney of record in an election fraud case where the primary witness was her (and the candidate-in-question's) son? Or how about the complete lack of media attention to an all-night emergency election hearing that could dictate the outcome of a gubernatorial race? Yeah, we all know how hands-off the Chicago culture is when it comes to politics. And if those completely implausible scenarios weren't enough, Alicia leaves her husband's victory celebration and no one, including the media, notices? And to do what? Join a new, upstart law firm after just being made partner at the one she's at? Regardless of the silliness of the job change idea, I don't see how a sitting Governor's spouse, particularly of a state as large and influential as Illinois, could even practice courtroom law at all, with almost everything being a conflict-of-interest with her husband as the state government's leader. A problem The Good Wife has always skirted with Alicia frequently going up against her husband's ADA's, something that could never occur in the real world. (Just ask Hillary Clinton or Michelle Obama.) 
Sure, a governor's wife can work - it's not 1950 - but as a lawyer she'd likely be relegated to teaching, advising or reviewing other's work simply due to ethical guidelines, not to mention political expediency. There's no way Alicia could realistically continue to be a courtroom presence with Peter in the governor's mansion. Page 5 of 9 - And let's not forget the mega plot-hole where Peter Florrick's "fixer" is found to be stuffing a ballot box ... the same ballot box that Peter's son saw tampered with, which is what prompted campaign manager Eli Gold to get the fixer involved in the first place with his "Do I wanna' know?" order to increase votes. Meaning: he couldn't have been already out stuffing boxes when Zach spotted the broken seal. He'd need a time machine to be the culprit! And how on earth would only one side think to try to obtain surveillance footage in the wake of this allegation? Such a request is basic evidence-gathering 101 at this point. All of it dumb, dumb, dumb, something I never thought I'd accuse The Good Wife of being. So what's your take, has this once-great show jumped the shark? - Susan Matt Roush: I don't understand the timeline issue in regards to the fixer and the surveillance video, but I'll let that go, because clearly this is a case where nit-picking got in the way of enjoying a sensational hour of heightened (which isn't always a bad thing) entertainment. To enjoy The Good Wife, you have to accept the premise that Alicia is pursuing her own high-profile career, often in an uncomfortable media spotlight, while her (up-to-now) estranged husband chases his own political goals. I felt the episode made a pretty good case for the extraordinary (and secret) nature of these late-night emergency proceedings, and even while I also personally wondered about the propriety of Alicia grilling her own son on the stand, I enjoyed the drama - and was astonished at what it revealed about the corruption within Peter's campaign. Which like Nixon with the Watergate scandal wasn't even necessary, because he would have won anyway. (And you know this is going to come back to haunt him, and challenge the Florrick marriage, in the future.) I have to give shows like this some leeway to be outrageous and maybe even incredible to accommodate such thrilling game-changers as unfolded in the finale. So no, I clearly don't agree The Good Wife jumped the shark here. In fact, it presented a new shark tank full of possibilities I can't wait to dive into. Question: I absolutely love History Channel's Vikings! It is a fantastic show in every sense. I cannot get enough! But I was crushed to discover the return date as being sometime in 2014. Do you think there is any chance of it returning in the fall (2013) like most series? Vikings is like a drug for me - I'm hooked! Thank you for any info you can offer up. - Cheri Page 6 of 9 - Matt Roush:Vikings operates on a cable, not network, schedule, and produces a considerably fewer number of episodes per season than "most series" that premiere in the fall, again comparing it to the network model. So no, despite its success and all of your exclamation points, Vikings isn't likely to return until winter/spring 2014 at the earliest. It doesn't start production on the new episodes until this summer, and this is a fairly ambitious physical production, so it's probably best to give them time to do it right and be patient. 
Question: I was wondering if you could give your thoughts on why you feel The Voice is so popular compared to American Idol in particular. All I hear is about how Idol's ratings are falling, even though this year's top 5 is one of the strongest in the show's history. The Voice is okay, but I gave up on the last two seasons midway through, and this year I DVR it but always watch Idol live. I much preferred The X Factor to The Voice last fall as well. The judges' banter on Voice gets old really fast, and they never have anything bad to say about the contestants after performances. People say Idol is too much about the judges this year, but making the judges mentors/coaches only makes it more about the judges. Plus, the footage of contestants preparing with their coaches for the battle and knockout rounds is so boring. The only somewhat valid complaints I hear about Idol are a) the weaker crop of males this year and b) the tired song choices. But Idol's ratings have only gotten worse over the last few weeks, even though the guys are gone and the songs have been a bit more current (especially last week). I know I'm in the minority in being lukewarm on The Voice, so I'd love to hear your thoughts. - Melissa Matt Roush: As always, these things tend to be subjective, and it's not like the Idol fan base has vanished altogether, but a lot of the current shift is probably due to what seems newer and fresher, and in that regard, The Voice has Idol beat, especially during the first stages of the Voice seasons. Those blind auditions are terrific TV, and adding the "steal" to the "battle" rounds also keeps the suspense high. I tend to agree that once the teams are chosen, some of the energy goes out of The Voice, and judges doubling as coaches harms The X Factor to an even greater degree - the only thing I can see that X Factor has brought to TV is a further and faster weakening of the Idol brand on Fox. The franchise may never recover from losing Simon Cowell to his greed for forcing The X Factor on American TV. But I disagree about The Voice's mentoring segments and the sparkling chemistry of its judges' banter. Process is almost always an important part of these competition shows, whether it's the glimpses of rehearsal with the Dancing With the Stars teams or, in one of Idol's best moves all season, showing the singers getting pointers (whether they choose to listen or not) from a charismatic pro like Harry Connick, Jr. - and he was right about over-singing the classics. There may also be some truth in the theory that given Idol's popularity with tween girls, the lack of a male heartthrob this season may have hurt the show, despite the overall excellence of the Top 5 girls (go, Candice) and the fact that Idol provides a better showcase and springboard for individual talents than The Voice has to date. But speaking of divas, if the Idol producers thought that it would be a good idea to cast two prima-donna judges whose apparent contempt for each other casts a pall over the entire panel, they were again sorely mistaken. In conclusion: Go, Candice. Page 7 of 9 - Question: Is it just me, or is Smash becoming a good show? - Renee Matt Roush: Ironically, Smash did become a better show - if not exactly a good one - once it was banished to Saturdays, because the storylines became even more focused on the shows within the show: the Broadway opening of Bombshell and the off-Broadway non-profit-theater development of The Hit List. 
This is where Smash excels, and the musical numbers, whether uptown or down, remain a very persuasive reason to watch. But if we're talking shark-jumping (something I prefer not to do in most cases), I doubt Smash could ever recover from Karen and Derek's stupefying joint decision to abandon Broadway for the faux Rent - after all the who'll-play-Marilyn melodrama of the first season - which was almost as believable as Bobby resurrected in the shower in Dallas. And the petulant Jimmy Collins character has developed into one of the most aggravating nuisances in recent TV history: going on stage high, clashing with superiors who know better, expressing no gratitude for those who've sacrificed for his show, and in one of the more eye-rolling moments, brawling with his drug-dealing brother at Bombshell's opening-night party. When Derek bristled about still being in high school after one of Jimmy's antics, I had to agree. Even Glee seems a paragon of subtlety and realism compared to the writing on this show. But I'll stick with it for the music, for my helpless devotion for anything that tries to capture the magic of Broadway, and to admire its ambition despite its flaws. Question: Do you know the reason why the titles of Covert Affairs episodes were taken from David Bowie song titles? For example, the last episode of the season was titled "Lady Stardust." Just very curious as I could not see the connection. - DK Matt Roush: Here's an explanation from executive producers Matt Corman and Chris Ord: "Each season we pick a band we love, and name the episodes after their song titles. We try to pick song titles that fit the themes of the episodes, but sometimes they're just songs we love!" (According to the show, Season 1 was Led Zeppelin, Season 2 was R.E.M., Season 3 was Bowie, and it looks like Season 4 will be the Pixies.) For those marking time until the hectic summer TV season gets underway, Covert Affairs starts its fourth season July 16, paired with Suits. Page 8 of 9 - Question: Several years ago, it used to be that just about every TV series showed an episode name and it was close to the beginning of every program. Now on most TV shows, it is missing, and only a very few TV series have a name of the episode. Why is that? - Abel Matt Roush: Many producers still assign titles to their episodes, and sometimes, as in the previous question, they're in homage to a favorite artist (also, think Desperate Housewives and Stephen Sondheim) or they serve as playful variations on a theme, like the mischievous food titles on episodes of Hannibal ("Aperitif," Amuse Bouche," "Entrée"). Whether those titles are shown on screen at the start of an episode seems to be an individual creative choice, and I can't really say why some do it and others don't. You can usually find the episode titles by going online or to the on-screen guide. Question: I really enjoy your column. Thank you for taking the time to really answer the questions asked of you in detail. You get to the nuts and bolts of things. My question is about sci-fi programming. When Sci-Fi became Syfy (a day I rue), they seemed to really kick true "sci-fi" to the curb. With the genre being so very strong and with the glut of cable programming being so obnoxious, why is there not more slotting available for proven programs like the following: Babylon 5, Star Trek (TNG, Voyager, DS9), Stargate (SG-1, Atlantis, Universe), Earth: Final Conflict, Andromeda, Farscape, etc, etc, etc. 
All of these shows have multiple seasons and plenty of content, not to mention tie-in movies. Why is it (traditional sci-fi) seemingly ignored? The Science Channel does Firefly and Fringe to some success. Why not more on another network? Heck, even one devoted to the genre. - Andy Matt Roush: I wish I had a better or more satisfying answer than the obvious one, but whenever I've seen Syfy (and other) execs address this question, it almost always boils down to one thing: ratings. That said, I'm hopeful that if Defiance continues to hold up, it will encourage Syfy programmers to get more ambitious in their series development, perhaps once again reaching for and beyond the stars. | Mid | [
0.589928057553956,
30.75,
21.375
]
|
Ca2+ current in rabbit carotid body glomus cells is conducted by multiple types of high-voltage-activated Ca2+ channels. Ca2+ current in rabbit carotid body glomus cells is conducted by multiple types of high-voltage-activated Ca2+ channels. J. Neurophysiol. 78: 2467-2474, 1997. Carotid bodies are sensory organs that detect changes in arterial oxygen. Glomus cells are presumed to be the initial sites for sensory transduction, and Ca2+-dependent neurotransmitter release from glomus cells is believed to be an obligatory step in this response. Some information exists on the Ca2+ channels in rat glomus cells. However, relatively little is known about the types of Ca2+ channels present in rabbit glomus cells, the species in which most of the neurotransmitter release studies have been performed. Therefore we tested the effect of specific Ca2+ channel blockers on current recorded from freshly dissociated, adult rabbit carotid body glomus cells using the whole cell configuration of the patch-clamp technique. Macroscopic Ba2+ current elicited from a holding potential of -80 mV activated at a Vm of approximately -30 mV, peaked between 0 and +10 mV and did not inactivate during 25-ms steps to positive test potentials. Prolonged ( approximately 2 min) depolarized holding potentials inactivated the current with a V1/2 of -47 mV. There was no evidence for T-type channels. On steps to 0 mV, 6 mM Co2+ decreased peak inward current by 97 +/- 1% (mean +/- SE). Nisoldipine (2 mu M), 1 mu M omega-conotoxin GVIA, and 100 nM omega-agatoxin IVa each blocked a portion of the macroscopic Ca2+ current (30 +/- 5, 33 +/- 5, and 19 +/- 3% after rundown correction, respectively). Simultaneous application of these blockers revealed a resistant current that was not affected by 1 mu M omega-conotoxin MVIIC. This resistant current constituted 27 +/- 5% of the total macroscopic Ca2+ current. Each blocker had an effect in every cell so tested. However, the relative proportion of current blocked varied from cell to cell. These results suggest that L, N, P, and resistant channel types each conduct a significant proportion of the macroscopic Ca2+ current in rabbit glomus cells. Hypoxia-induced neurotransmitter release from glomus cells may involve one or more of these channels. | High | [
0.6995645863570391,
30.125,
12.9375
]
|
1. Field of the Invention

The invention relates to providing emergency route guidance to guide users away from a disaster, and more particularly, to using a portable electronic device to provide navigation instructions away from a disaster when the user requests emergency route guidance from the portable electronic device.

2. Description of the Prior Art

Global Positioning System (GPS) based navigation devices are well known and are widely employed as in-car navigation devices. Common functions of a navigation device include providing a map database for generating navigation instructions that are then shown on a display of the navigation device. These navigation devices are often mounted on or in the dashboard of a vehicle using a suction mount or other mounting means. The term "navigation device" refers to a device that enables a user to navigate to a pre-defined destination. The device may have an internal system for receiving location data, such as a GPS receiver, or may merely be connectable to a receiver that can receive location data. The device may compute a route itself, or communicate with a remote server that computes the route and provides navigation information to the device, or be a hybrid device in which the device itself and a remote server both play a role in the route computation process. Portable GPS navigation devices are not permanently integrated into a vehicle but instead are devices that can readily be mounted in or otherwise used inside a vehicle. Generally (but not necessarily), they are fully self-contained—i.e. include an internal GPS antenna, navigation software and maps and can hence plot and display a route to be taken.

Personal navigation devices strive to guide users on the best possible route in order to minimize the time needed to travel from one point to another. However, in the event of a disaster occurring near the user, the user may be less concerned about arriving at a particular destination, and may be more concerned about simply getting away from the user's current location. Recent natural disasters around the globe highlight the possible need for users to be able to receive guidance to a safe area. How to get to a safe area might be known to some local residents. Unfortunately, tourists visiting an unfamiliar city or country cannot easily benefit from the knowledge of the local residents. In the past, confusion among drivers has even caused drivers to drive their vehicles straight toward a disaster, such as a tsunami or flood, instead of driving away from it. For tourists not very familiar with their current location, finding an appropriate destination to use for escaping a disaster would be very difficult. Even for those users having a navigation device, choosing an escape route is not always a simple matter in the case of a disaster. This is because there is not usually a given destination that the user must enter into the navigation device. Instead, the user often is trying to get away from the user's current location. In other words, the navigation device needs to guide the user away from a given location, which is the opposite of the navigation device's typical job of guiding the user to a destination. Therefore, there exists a need for a navigation device which can provide emergency route guidance in order to guide users away from a disaster.

| Mid | [
0.6224256292906171,
34,
20.625
]
|
Q: Any way to prevent bedwetting before it's actually starting? All bedwetting questions here, as well as the Wikipedia article deal with treating the problem when it occurs, after it already started. In that aritlce is says: Bedwetting has a strong genetic component. Children whose parents were not enuretic have only a 15% incidence of bedwetting. When one or both parents were bedwetters, the rates jump to 44% and 77% respectively So my daughter has high chance to become a bedwetter - still didn't discuss it with my wife, so it might even be 77%. I would like to know if there is anything I can do as parent to help my daughter "in advance"? I won't go for medical treatment of course, was thinking maybe giving more focus when potty training her on certain things and explain. This is all pretty far in the horizon as we still did not start potty training her, but better be prepared in my opinion. If anyone has same experience or some good advice, it will be welcome. A: I have four kids. My youngest two (twin boys) are four, my daughter is six and my son is seven. Of the four, my youngest son (youngest by two minutes) has absolutely no problems staying dry through the night. I honestly don't remember the last time he had an accident. His twin brother is the exact opposite and always needs diapers/pull-ups at night. My daughter usually stays dry through the night, but has occasional problems. My oldest has the most problems. He has problems staying dry through the night and occasionally will have accidents during the day. We tried everything with my oldest to try to get him to stop bed wetting. We tried alarm clocks, cutting fluids before bed, rewards, etc... We recently bought a book called, Waking Up Dry: A Guide to Help Children Overcome bed wetting. Out of all of them, the book seems to be the most helpful, but not in the way you might thing. The book helps the child and parent understanding the underlying causes of bed wetting and make the child (and parent) realize wetting the bed does not mean there is anything wrong with you. So, if I had to give any suggestion, I would first suggest by reading this book or some similar material. It will help give you a good understanding of the causes and techniques to prevent bed wetting. Second, I would suggest not punishing or scolding your child . I honestly and regretfully have to admit that we made this mistake. Lastly, relax. Bed wetting is normally not a serious medical condition. It is more likely to be emotionally harmful than physical harmful. | Mid | [
0.6384039900249371,
32,
18.125
]
|
XIII, the 2003 stealth first-person shooter based on a graphic novel of the same name, is getting a remake this fall. The remake is being handled by PlayMagic who say it will retain the style of the original, with a targeted release date of November 13, 2019 on Xbox One, PS4, Switch, and PC. | Low | [
0.45194274028629805,
27.625,
33.5
]
|
I have been reading this Cato Institute legal research paper by Michael Cannon and its making me realize that the GOP governors could really make Obamacare a political liablity for the Democrats by refusing to set up a state run health exchange. It is a fascinating read and I highly recommend it. In summary, it looks like the Obama Adminstration ultimately is going to have to count on a second John Roberts rescue. Heres why. First, from Philip Klein: With the election over and Obama reelected, repealing the law is not going to happen over the next four years. So 30 Republican governors will have to make a decision about whether they want to help the federal government implement Obamacare, or keep the onus on the Obama administration. One of the silver linings of the Supreme Court decision is that it gave states the ability to opt out of the Medicaid expansion. Medicaid is one of the programs that is crushing state budgets and if implemented as intended, Obamacare will add 18 million beneficiaries to the programs rolls. Though the federal government lures states with a honey pot in the short term covering all of the expansion through 2016, by 2020 the states will be asked to kick in 10 percent of the cost, amounting to billions of dollars of spending imposed on states nationwide each year. It would be to the long-term benefit of governors to opt (out) of the expansion... Never surrender and never give up hope. In a few years this monstrosity will be absorbing so much of the federal budget that the only alternative for the left will be to print money and bring on massive monetary inflation. We may have to endure some real hard times to open some folks eyes but that will do the trick. Unfortunately, some (many actually) will never learn. They have never in their life been taught to think logically, only emotionally. Here is a good article that explains why we shouldn't (if possible) just let the system fall apart. I feel the same as you, but the article brings up some good historical evidence for not allowing that to happen. The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States; but all Duties, Imposts and Excises shall be uniform throughout the United States; There are two and only two choices. Either the tax has to be apportioned or it must be uniform. Smaller states would never have allowed the larger states to tax me and not thee. It is becoming more clear that 0bamacare taxpenalty is neither. Scalia pointed out in his opinion that this is still a pending issue because neither side was given the opportunity to address it in the original case. But when a federal appeal court rules that banning race from consideration in admission to public universities is unconstitutional, the federal bench as become little for than a sad joke. This insurance fine cr** isn't going to work. You watch...they'll change it and will double what they take out of your check for Medicare.... And if the penalty is a tax....then every person who is working will pay...including those making less than $250,000...you know....the middle class folks who will NEVER be asked to pay more....cough, cough What the Republican governors should do is remove the negotiating rights of state workers and force them into the same type of health insurance and retirement program as private sector workers. 
Right now, the SEIU members and teachers believe that their health insurance coverage is sacrosanct, that they will never be forced to into the same rationed care system that they have pushed on the rest of us. We need to make sure that they suffer the same fate as the rest of us and know that it is coming, especially the teachers. Oh, that goes double for politicians. Not true, methinks. All indirect taxes must be uniform and almost all direct taxes must be apportioned. The income tax, which is a direct tax, does not need to be apportioned, per the 16th amendment. No doubt the Obamacare mandate taxalty/penatax will be deemed an income tax. By the way, the correct answer for what is the taxalty is that it is a tax on insurance owning status, and therefore an illegal direct tax. SCOTUS will, if it hasn’t already, declare it either an income tax or to not know what the hell it is except that it’s legal. “a tax on inactivity does not fall within the prescribed functions for which taxes may be levied” It is not so much a tax on nonaction as on the status of not possessing insurance. Which makes it akin to all manner of direct taxes, for instance the tax on having earned so much income in the last year or a tax on ownersio of property. “nor the forms of taxes permitted.” True. If it us a direct tax, and it is, it must be apportioned. If it is an income tax, which it isn’t, it wouldn’t need to. But it isn’t, so it doesn’t. Yes and no. Yes in the sense that the income tax, for instance, is a tax is effectively a tax on the activity that earned you income. No in the sense that the income tax, for instance, is a tax on the status of having earned a certain amount of money in a certain period of time. I see where you’re coming from; it’s the same place that recognizes the absurdity of regulating inactivity. You can’t regulate something that doesn’t exist. But taxation is different: you can tax something that doesn’t exist by taxing the status resulting from its nonexistence. “Actually it’s not as tax at all — it’s a penalty” Absolutely, but as in many things SCOTUS forces us to play makebelieve. I believe this is what will happen in 2014. It will take a while for a suit by someone with standing—in this case, a taxpayer—to wend its way upward, and possibly the Obamination will have several wise Latinas in place by then, so it will all be for schloe, but one can hope. Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works. | Mid | [
0.5508474576271181,
32.5,
26.5
]
|
Avner Shaki Avner-Hai Shaki (, 5 February 1926 - 28 May 2005) was an Israeli politician who served as a government minister in the late 1980s and early 1990s. Biography Born in Safed during the Mandate era, Shaki studied law, gaining a PhD from the Hebrew University of Jerusalem. He later worked as a lecturer in universities in Canada and the United States. On 16 July 1970 he entered the Knesset on the National Religious Party's list as a replacement for the deceased Haim-Moshe Shapira, and in September that year Shaki was appointed Deputy Minister of Education and Culture. He left the cabinet on 17 July 1972, and on 22 May 1973 he left the party to sit as an Independent, remaining an MK until the next elections. Shaki returned to the NRP and was elected to the Knesset in 1984. Re-elected in 1988, he was appointed Minister without Portfolio responsible for Jerusalem Affairs in December that year. In 1990 he was appointed Minister of Religious Affairs, serving until the Likud-led coalition lost power following the 1992 elections. Shaki was re-elected in 1992 and 1996, but lost his seat in the 1999 elections. References External links Category:1926 births Category:2005 deaths Category:People from Safed Category:Jews in Mandatory Palestine Category:Hebrew University of Jerusalem alumni Category:Israeli educators Category:Israeli lawyers Category:National Religious Party leaders Category:Members of the 7th Knesset (1969–1974) Category:Members of the 11th Knesset (1984–1988) Category:Members of the 12th Knesset (1988–1992) Category:Members of the 13th Knesset (1992–1996) Category:Members of the 14th Knesset (1996–1999) Category:Deputy ministers of Israel | High | [
0.6912,
27,
12.0625
]
|
The Obama Administration once again treating an American ally badly … I guess this is one way of making Hillary Clinton look good. The Daily Beast is reporting that Sec. of State John Kerry said during a closed door meeting of the Trilateral Commission on Friday that if Israel doesn’t make peace soon, it could become ‘an apartheid state,’ like the old South Africa. That’s the way John Kerry, piss off one of America’s greatest allies, that will win them over. Hey Kerry, I must have missed which one want to eradicate Israeli’s? And who did Fatah just embracelast week … can you say Hamas. This is unbecoming of a Secretary of State and usually a president would be asking for a resignation, but not when they are doing Obama’s foreign policy bidding. So Barack, then I told them if they do not accept our all or nothing peace solution, Israel could become ‘An Apartheid State’. I see John, you do realize that was not meant to be repeated publicly, right? The secretary of state said that if Israel doesn’t make peace soon, it could become ‘an apartheid state,’like the old South Africa. Jewish leaders are fuming over the comparison. If there’s no two-state solution to the Israeli-Palestinian conflict soon, Israel risks becoming “an apartheid state,” Secretary of State John Kerry told a room of influential world leaders in a closed-door meeting Friday. Senior American officials have rarely, if ever, used the term “apartheid” in reference to Israel, and President Obama has previously rejected the idea that the word should apply to the Jewish state. Kerry’s use of the loaded term is already rankling Jewish leaders in America—and it could attract unwanted attention in Israel, as well. It wasn’t the only controversial comment on the Middle East that Kerry made during his remarks to the Trilateral Commission, a recording of which was obtained by The Daily Beast. Kerry also repeated his warning that a failure of Middle East peace talks could lead to a resumption of Palestinian violence against Israeli citizens. He suggested that a change in either the Israeli or Palestinian leadership could make achieving a peace deal more feasible. He lashed out against Israeli settlement-building. And Kerry said that both Israeli and Palestinian leaders share the blame for the current impasse in the talks. | Low | [
0.530973451327433,
30,
26.5
]
|
Plug in your board and wait for Windows to begin its driver installation process. After a few moments, the process will fail, despite its best efforts. Click on the Start Menu, right click on "Computer", and select "Manage". Select "Device Manager" from the left, then look for "Unknown device" or "Arduino *Board Name*" under "Other devices". Right click on the "Unknown device" or "Arduino *Board Name*" port and choose the "Update Driver Software" option. Next, choose the "Browse my computer for Driver software" option. Finally, navigate to and select the Arduino driver file, named "Arduino*BoardName*.inf", located in the "Drivers/Arduino" folder of the ROBOTC software (typically in C:/Program Files/Robomatter Inc/ROBOTC Development Environment/). If Windows is unable to verify the publisher of this driver, please select "Install this driver software anyway". | Mid | [
0.6507592190889371,
37.5,
20.125
]
|
Association between early advanced life support and good neurological outcome in out-of-hospital cardiac arrest: A propensity score analysis. Out-of-hospital cardiac arrest (OHCA) is an important public health problem. The French organization, combining OHCA basic life support (BLS) and advanced life support (ALS), has recently been questioned. The study was conducted to evaluate the association between early ALS (E-ALS) arrival and good neurological outcome at 1 month in nontraumatic OHCA patients. Retrospective cohort study using data from RéAC, a multicentre OHCA database in use since June 2011. Adult patients with nontraumatic cardiac arrest were identified, and firefighters' (BLS) arrival time was recorded. The main analysis was performed after multiple imputation, using propensity score matching with a variable ratio. Sensitivity analyses were also performed. The exposure was early ALS (E-ALS), defined as the start of ALS before, or simultaneously with, BLS. The primary outcome was the cerebral performance category (CPC) at day 30 after the cardiac arrest (1-2 vs 3-5), while the cumulative incidence of return of spontaneous circulation (ROSC) defined the secondary outcomes. Between January 2013 and January 2016, a total of 30 672 adult nontraumatic OHCA with resuscitation were identified, of whom 20 804 were included, 2711 in the E-ALS group and 18 093 in the control group. Based on the matched sample, patients in the E-ALS group had a significantly lower rate of good neurological outcome than those in the control group (OR, 0.95; 95% CI, 0.93-0.96). Sensitivity analyses were mostly consistent with this result. The cumulative incidence of ROSC was higher in the delayed ALS (D-ALS) group. This study showed that patients in the E-ALS group were less likely to have a good neurological outcome. One explanation of this unexpected result could be the total duration of resuscitation performed, which may be interrupted prematurely in cases of E-ALS. | Mid | [
0.620408163265306,
38,
23.25
]
|
Enhanced sensitivity and long-term G2 arrest in hydrogen peroxide-treated Ku80-null cells are unrelated to DNA repair defects. While the Ku complex, comprised of Ku70 and Ku80, is primarily involved in the repair of DNA double-strand breaks, it is also believed to participate in additional cellular processes. Here, treatment of embryo fibroblasts (MEFs) derived from either wild-type or Ku80-null (Ku80(-/-)) mice with various stress agents revealed that hydrogen peroxide (H(2)O(2)) was markedly more cytotoxic for Ku80(-/-) MEFs and led to their long-term accumulation in the G2 phase. This differential response was not due to differences in DNA repair, since H(2)O(2)-triggered DNA damage was repaired with comparable efficiency in both Wt and Ku80(-/-) MEFs, but was associated with differences in the expression of important cell cycle regulatory genes. Our results support the notion that Ku80-mediated cytoprotection and G2-progression are not only dependent on the cell's DNA repair but also may reflect Ku80's influence on additional cellular processes such as gene expression. | Mid | [
0.617283950617283,
31.25,
19.375
]
|
Combine water, 1 teaspoon cumin, 4 sliced garlic cloves, and chicken in a large saucepan. Cover and bring to a boil over medium-high heat. Reduce heat to medium-low; cook 10 minutes or until chicken is done. Drain, and place chicken on a cutting board. Cut chicken across grain into thin slices; keep warm.Remove 2 tablespoons adobo sauce from can; set aside. Remove 2 chipotle chiles from can; finely chop and set aside. Reserve remaining chiles and adobo sauce for another use. Split rolls in half; arrange in a single layer, cut sides up, on a baking sheet. Broil 1 minute or until lightly toasted. Remove top halves of rolls from baking sheet. Divide chicken mixture evenly among bottom halves of rolls, and top chicken mixture evenly with cheese. Broil chicken-topped rolls 2 minutes or until cheese melts. Remove from oven; top with onion and top roll halves. Serve immediately. --------------------IBS-A for 20 years with terrible bloating and gas. On the diet since April 2004. Remember this from Heather's information pages: "You absolutely must eat insoluble fiber foods, and as much as safely possible, but within the IBS dietary guidelines. Treat insoluble fiber foods with suitable caution, and you'll be able to enjoy a wide variety of them, in very healthy quantities, without problem." Please eat IF foods! LEGAL DISCLAIMER - This website is not intended to replace the services of a physician, nor does it constitute a doctor-patient relationship. Information on this web site is provided for informational purposes only and is not a substitute for professional medical advice. You should not use the information on this web site for diagnosing or treating a medical or health condition. If you have or suspect you have an urgent medical problem, promptly contact your professional healthcare provider. Any application of the recommendations in this website is at the reader's discretion. Heather Van Vorous, HelpForIBS.com, and Heather & Company for IBS, LLC are not liable for any direct or indirect claim, loss or damage resulting from use of this website and/or any web site(s) linked to/from it. Readers should consult their own physicians concerning the recommendations on these message boards. | Mid | [
0.5674044265593561,
35.25,
26.875
]
|
Pipes containing fluid under pressure are prone to leak from a number of causes, including corrosion, freezing, deterioration of fixtures, etc. This is of particular concern in a household residence where protracted leakage of plumbing pipes can cause much property damage to the structure and contents. A number of systems have been devised to automatically shut off water in a plumbing system in the event of a leak. One example is found in U.S. Pat. No. 5,038,820 to Phillip Ames, et al. This system uses a "pivotal flapper" positioned within a pipe, which flapper is pivoted upward due to water flow in the pipe. The flapper then operates a switch which starts a timer which, in turn, operates a motor to control a valve after the expiration of a preset time period. Another example is found in U.S. Pat. No. 4,589,435 to Aldrich, which is very similar in that it uses a probe positioned within a pipe and which is moved by fluid flow to trigger a timing circuit. The timer, in turn, controls a solenoid which closes off a valve to shut down fluid flow after a preset time period. Neither the Ames or the Aldrich patent allows their systems to be set to be triggered at different flow rates. This is a problem, for example, where it is desirable to allow a certain minimal flow volume for humidifiers, ice makers, etc., but to shut off in response to a larger flow volume. In addition, both Ames or Aldrich are relatively complex systems which makes them expensive. It is clear, then, that an improved automatic shut-off device for closing plumbing or other fluid carrying pipes is needed. Such a device should preferably be simple and inexpensive, but be capable of adjustment to allow for different flow thresholds upon installation. | Mid | [
0.592445328031809,
37.25,
25.625
]
|
Q: Prove f is analytic and periodic Suppose that there are entire functions $\{f_n\}$ so that for all complex numbers $x+iy$ $$\sum_{n=1}^{\infty} |f_n(x+iy)|^{\frac{1}{n}} \leq e^x$$ Show that $f(z)=\sum_{n=1}^{\infty} f_n(x+iy)$ is analytic on $\{\Re(z) < 0\}$ and has period $2\pi i$. I don't know how to get rid of $\frac{1}{n}$. Can anybody give me some ideas? A: For every $n$, the function $e^{-nz}f_n(z)$ is entire and bounded by $1$ in $\mathbb C$, hence constant by Liouville's theorem. In other words, $f_n(z)=c_n e^{nz}$ where $|c_n|\le 1$. Any compact subset $K$ of the left halfplane is contained in some halfplane $x\le x_0$ with $x_0<0$. On $K$ we have $|f_n(z)|\le (e^{x_0})^n$ for all $n$. Since $e^{x_0}<1$, the Weierstrass test for uniform convergence applies. And since all $f_n$ are $2\pi i$-periodic, so is $f$.
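Spelling out the first bound of the answer as a display (this is only a restatement of what the answer already uses): the hypothesis controls each $|f_n|$ by $|e^{nz}|$, which is exactly what makes Liouville's theorem apply. $$|f_n(z)|^{1/n} \le e^x = |e^z| \quad\Longrightarrow\quad |f_n(z)| \le e^{nx} = |e^{nz}| \quad\Longrightarrow\quad \left|e^{-nz} f_n(z)\right| \le 1 \ \text{ on all of } \mathbb{C}.$$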
0.6817576564580561,
32,
14.9375
]
|
Kenya parliament votes to withdraw from ICC MPs vote to withdraw country from jurisdiction of International Criminal Court, as president and deputy face charges. 05 Sep 2013 21:20 GMT Kenya's parliament has voted to back a call for the government to pull out of the International Criminal Court, where the country's president and his deputy are facing trial for crimes against humanity. The motion "to suspend any links, cooperation and assistance" to the court was overwhelming approved by the National Assembly on Thursday. Parliament is dominated by the alliance that brought President Uhuru Kenyatta and his deputy William Ruto to power in a March vote. I am setting the stage to redeem the image of the Republic of Kenya Aden Duale, MP who proposed motion The two men are accused of orchestrating post-election bloodshed more than five years ago. Both deny the charges. Many Kenyan politicians have branded the ICC a "neo-colonialist" institution that only targets Africans, prompting the debate on a possible departure from the Rome Statute of the ICC. "I am setting the stage to redeem the image of the Republic of Kenya," Aden Duale, the majority leader from Kenyatta's Jubilee coalition, said on behalf of the motion. Opposing the resolution, minority leader Francis Nyenze warned: "We'll be seen as a pariah state, we'll be seen as people who are reactionary and who want to have their way." Al Jazeera's Catherine Soi, reporting from Nairobi, said that Kenya had the support of African Union in this matter, and that other African countries could now follow suit. "This motion and what comes after is very significant in many ways. Not only is it a show of defiance against the International Criminal Court, it also sets a precedent for other African countries that would feel aggrieved enough to start processes of their own," she said after the vote. Voluntary sign-up The Hague-based court was set up in 2002 to try the world's worst crimes, and countries voluntarily sign up to join. Any actual withdrawal requires the submission of a formal request to the United Nations, a process that would take at least a year. A withdrawal could however preclude the ICC from investigating and prosecuting any future crimes. We'll be seen as a pariah state, we'll be seen as people who are reactionary and who want to have their way Francis Nyenze, opposition MP Cases could then only be brought before the court if the government decides to accept ICC jurisdiction or the UN Security Council makes a referral. Amnesty International condemned Kenya's move. "This move is just the latest in a series of disturbing initiatives to undermine the work of the ICC in Kenya and across the continent," said Netsanet Belay, Amnesty's Africa director. The rights group called on "each and every parliamentarian to stand against impunity and reject this proposal," warning that "a withdrawal would strip the Kenyan people of one of the most important human rights protections and potentially allow crimes to be committed with impunity in the future". Kenya's 2007 elections were marred by allegations of vote rigging, but what began as political riots quickly turned into ethnic killings and reprisal attacks, plunging Kenya into its worst wave of violence since independence in 1963. Kenyatta and Ruto were fierce rivals in the 2007 vote, but teamed up together and were elected in March in peaceful polls. Judicial process 'in motion' Earlier on Thursday, the ICC's prosecutor said that justice must run its course in the cases against Kenyatta and Ruto. 
"The judicial process is now in motion at the International Criminal Court. Justice must run its course," said Fatou Bensouda, the court's chief prosecutor, in a video statement on the court's website. Ruto's trial comes about two months ahead of that of Kenyatta, who faces five charges of crimes against humanity, including murder, rape, persecution and deportation. Both Kenyatta and Ruto have said they will cooperate fully with the court and deny the charges against them. William Schabas, an international legal expert, told Al Jazeera that Kenya's obligations under the Rome Statute regarding those who are already being prosecuted "continue even if the country decides to pull out of the courts". "In a strictly legal sense, there's no obstacle [to their continuing prosecution] but it's probably going to be harder to get Kenya to cooperate," he said. Regarding popular support for the motion, Al Jazeera's Soi said: "It really depends on which side you look at. The ruling coalition says this debate is being supported, but surveys show that Kenyans don't want their own country to pull out of the court." | Mid | [
0.547826086956521,
31.5,
26
]
|
Q: overflow-x not working and only a few columns are displaying on the screen Below is the CSS used for my table. I have 15 columns. Even though I use overflow-x: scroll, it has no effect on the table, and only half of the columns are displayed on the screen. Can anyone help with this? .table {
font-family: Roboto,"Helvetica Neue",sans-serif;
border-collapse: collapse;
background-color: white;
  /* width: 1000px; */
overflow-x: scroll;
overflow-y: scroll;
}
.table td {
border: 1px solid #ddd;
padding: 6px;
min-width: 150px;
font-size: small;
}
.table th {
padding-top: 4px;
padding-bottom: 4px;
text-align: left;
background-color: whitesmoke;
font-size: medium;
font-weight: bold;
color: black;
border: 1px solid #ddd;
} <div class="rule-container mat-elevation-z20">
<div fxLayout="row wrap" style="background-color: #a8b4c6">
<div fxFlex="85" fxLayoutAlign="center">
<p class="rules-class"><b>RULES</b></p>
</div>
<div fxFlex="15" fxLayoutAlign="flex-end">
<mat-icon class="close-table" (click)="closeMe()">close</mat-icon>
</div>
</div>
<table class="method-rules-table">
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
---
</tr>
<tr *ngFor="let element of dataSource2" [style.color]="element.rowColor">
<td style="min-width: 250px;">Row1</td>
<td>Row2</td>
<td>Row3</td>
<td>Row4</td>
<td>Row5</td>
----
</tr>
</table>
</div> A: You need to define table-layout and fix the width and height of your table in the .table class, and then wrap the table in a container element to get the scroll bars. Your table rule becomes:
table {
  font-family: arial, sans-serif;
  border-collapse: collapse;
  table-layout: fixed;
  width: 400px;
  height: 100px;
}
Then add an outer container, for example a .table-container:
.table-container {
  overflow: scroll;
  width: 400px;
  height: 200px;
}
Working link: http://jsbin.com/kutokabojo/edit?html,css,output
0.602739726027397,
33,
21.75
]
|
Category Archives: Set Theory Post navigation A while back we featured a post about why learning mathematics can be hard for programmers, and I claimed a major issue was not understanding the basic methods of proof (the lingua franca between intuition and rigorous mathematics). I boiled these down to the “basic four,” direct implication, contrapositive, contradiction, and induction. But in mathematics there is an ever growing supply of proof methods. There are books written about the “probabilistic method,” and I recently went to a lecture where the “linear algebra method” was displayed. There has been recent talk of a “quantum method” for proving theorems unrelated to quantum mechanics, and many more. So in continuing our series of methods of proof, we’ll move up to some of the more advanced methods of proof. And in keeping with the spirit of the series, we’ll spend most of our time discussing the structural form of the proofs. This time, diagonalization. Diagonalization Perhaps one of the most famous methods of proof after the basic four is proof by diagonalization. Why do they call it diagonalization? Because the idea behind diagonalization is to write out a table that describes how a collection of objects behaves, and then to manipulate the “diagonal” of that table to get a new object that you can prove isn’t in the table. The simplest and most famous example of this is the proof that there is no bijection between the natural numbers and the real numbers. We defined injections, and surjections and bijections, in two earlier posts in this series, but for new readers a bijection is just a one-to-one mapping between two collections of things. For example, one can construct a bijection between all positive integers and all even positive integers by mapping to . If there is a bijection between two (perhaps infinite) sets, then we say they have the same size or cardinality. And so to say there is no bijection between the natural numbers and the real numbers is to say that one of these two sets (the real numbers) is somehow “larger” than the other, despite both being infinite in size. It’s deep, it used to be very controversial, and it made the method of diagonalization famous. Let’s see how it works. Theorem: There is no bijection from the natural numbers to the real numbers . Proof. Suppose to the contrary (i.e., we’re about to do proof by contradiction) that there is a bijection . That is, you give me a positive integer and I will spit out , with the property that different give different , and every real number is hit by some natural number (this is just what it means to be a one-to-one mapping). First let me just do some setup. I claim that all we need to do is show that there is no bijection between and the real numbers between 0 and 1. In particular, I claim there is a bijection from to all real numbers, so if there is a bijection from then we could combine the two bijections. To show there is a bijection from , I can first make a bijection from the open interval to the interval by mapping to . With a little bit of extra work (read, messy details) you can extend this to all real numbers. Here’s a sketch: make a bijection from to by doubling; then make a bijection from to all real numbers by using the part to get , and use the part to get by subtracting 1 (almost! To be super rigorous you also have to argue that the missing number 1 doesn’t change the cardinality, or else write down a more complicated bijection; still, the idea should be clear). Okay, setup is done. 
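As a concrete preview of the diagonal trick carried out below, here is a toy Python sketch on finite truncations (the table and names are purely illustrative): given the first n binary digits of the first n numbers in a purported list, it builds a number that differs from the k-th number in its k-th digit, and so cannot appear anywhere in the list.

# table[k][j] is the j-th binary digit (after the point) of the k-th listed number
def diagonal_digits(table):
    # flip the k-th digit of the k-th row
    return [1 - table[k][k] for k in range(len(table))]

table = [
    [0, 1, 0, 1],   # 0.0101...
    [1, 1, 1, 1],   # 0.1111...
    [0, 0, 0, 0],   # 0.0000...
    [1, 0, 1, 0],   # 0.1010...
]
d = diagonal_digits(table)           # [1, 0, 1, 1]
for k, row in enumerate(table):
    assert d[k] != row[k]            # d disagrees with row k in position k

The real argument below does the same thing with infinitely many rows and infinitely many digits.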
We just have to show there is no bijection between and the natural numbers. The reason I did all that setup is so that I can use the fact that every real number in has an infinite binary decimal expansion whose only nonzero digits are after the decimal point. And so I’ll write down the expansion of as a row in a table (an infinite row), and below it I’ll write down the expansion of , below that , and so on, and the decimal points will line up. The table looks like this. The ‘s above are either 0 or 1. I need to be a bit more detailed in my table, so I’ll index the digits of by , the digits of by , and so on. This makes the table look like this It’s a bit harder to read, but trust me the notation is helpful. Now by the assumption that is a bijection, I’m assuming that every real number shows up as a number in this table, and no real number shows up twice. So if I could construct a number that I can prove is not in the table, I will arrive at a contradiction: the table couldn’t have had all real numbers to begin with! And that will prove there is no bijection between the natural numbers and the real numbers. Here’s how I’ll come up with such a number (this is the diagonalization part). It starts with 0., and it’s first digit after the decimal is . That is, we flip the bit to get the first digit of . The second digit is , the third is , and so on. In general, digit is . Now we show that isn’t in the table. If it were, then it would have to be for some , i.e. be the -th row in the table. Moreover, by the way we built the table, the -th digit of would be . But we defined so that it’s -th digit was actually . This is very embarrassing for (it’s a contradiction!). So isn’t in the table. It’s the kind of proof that blows your mind the first time you see it, because it says that there is more than one kind of infinity. Not something you think about every day, right? The Halting Problem The second example we’ll show of a proof by diagonalization is the Halting Theorem, proved originally by Alan Turing, which says that there are some problems that computers can’t solve, even if given unbounded space and time to perform their computations. The formal mathematical model is called a Turing machine, but for simplicity you can think of “Turing machines” and “algorithms described in words” as the same thing. Or if you want it can be “programs written in programming language X.” So we’ll use the three words “Turing machine,” “algorithm,” and “program” interchangeably. The proof works by actually defining a problem and proving it can’t be solved. The problem is called the halting problem, and it is the problem of deciding: given a program and an input to that program, will ever stop running when given as input? What I mean by “decide” is that any program that claims to solve the halting problem is itself required to halt for every possible input with the correct answer. A “halting problem solver” can’t loop infinitely! So first we’ll give the standard proof that the halting problem can’t be solved, and then we’ll inspect the form of the proof more closely to see why it’s considered a diagonalization argument. Theorem: The halting program cannot be solved by Turing machines. Proof. Suppose to the contrary that is a program that solves the halting problem. We’ll use as a black box to come up with a new program I’ll call meta-, defined in pseudo-python as follows. In words, meta- accepts as input the source code of a program , and then uses to tell if halts (when given its own source code as input). 
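A minimal sketch of the pseudo-python described here, assuming a hypothetical black-box function T(program, input) that returns True exactly when the given program halts on the given input (both passed as source code):

def metaT(program_source):
    # Ask the assumed halting-problem solver T whether this program
    # halts when fed its own source code as input.
    if T(program_source, program_source):
        while True:       # if it would halt, loop forever instead
            pass
    else:
        return            # if it would loop forever, halt immediately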
Based on the result, it behaves the opposite of ; if halts then meta- loops infinitely and vice versa. It’s a little meta, right? Now let’s do something crazy: let’s run meta- on itself! That is, run metaT(metaT) So meta. The question is what is the output of this call? The meta- program uses to determine whether meta- halts when given itself as input. So let’s say that the answer to this question is “yes, it does halt.” Then by the definition of meta-, the program proceeds to loop forever. But this is a problem, because it means that metaT(metaT) (which is the original thing we ran) actually does not halt, contradicting ‘s answer! Likewise, if says that metaT(metaT) should loop infinitely, that will cause meta- to halt, a contradiction. So cannot be correct, and the halting problem can’t be solved. This theorem is deep because it says that you can’t possibly write a program to which can always detect bugs in other programs. Infinite loops are just one special kind of bug. But let’s take a closer look and see why this is a proof by diagonalization. The first thing we need to convince ourselves is that the set of all programs is countable (that is, there is a bijection from to the set of all programs). This shouldn’t be so hard to see: you can list all programs in lexicographic order, since the set of all strings is countable, and then throw out any that are not syntactically valid programs. Likewise, the set of all inputs, really just all strings, is countable. The second thing we need to convince ourselves of is that a problem corresponds to an infinite binary string. To do this, we’ll restrict our attention to problems with yes/no answers, that is where the goal of the program is to output a single bit corresponding to yes or no for a given input. Then if we list all possible inputs in increasing lexicographic order, a problem can be represented by the infinite list of bits that are the correct outputs to each input. For example, if the problem is to determine whether a given binary input string corresponds to an even number, the representation might look like this: 010101010101010101... Of course this all depends on the details of how one encodes inputs, but the point is that if you wanted to you could nail all this down precisely. More importantly for us we can represent the halting problem as an infinite table of bits. If the columns of the table are all programs (in lex order), and the rows of the table correspond to inputs (in lex order), then the table would have at entry a 1 if halts and a 0 otherwise. here is 1 if halts and 0 otherwise. The table encodes the answers to the halting problem for all possible inputs. Now we assume for contradiction sake that some program solves the halting problem, i.e. that every entry of the table is computable. Now we’ll construct the answers output by meta- by flipping each bit of the diagonal of the table. The point is that meta- corresponds to some row of the table, because there is some input string that is interpreted as the source code of meta-. Then we argue that the entry of the table for contradicts its definition, and we’re done! So these are two of the most high-profile uses of the method of diagonalization. It’s a great tool for your proving repertoire. A while back Peter Norvig posted a wonderful pair of articles about regex golf. The idea behind regex golf is to come up with the shortest possible regular expression that matches one given list of strings, but not the other. “Regex Golf,” by Randall Munroe. 
In the first article, Norvig runs a basic algorithm to recreate and improve the results from the comic, and in the second he beefs it up with some improved search heuristics. My favorite part about this topic is that regex golf can be phrased in terms of a problem called set cover. I noticed this when reading the comic, and was delighted to see Norvig use that as the basis of his algorithm. The set cover problem shows up in other places, too. If you have a database of items labeled by users, and you want to find the smallest set of labels to display that covers every item in the database, you’re doing set cover. I hear there are applications in biochemistry and biology but haven’t seen them myself. If you know what a set is (just think of the “set” or “hash set” type from your favorite programming language), then set cover has a simple definition. Definition (The Set Cover Problem): You are given a finite set called a “universe” and sets each of which is a subset of . You choose some of the to ensure that every is in one of your chosen sets, and you want to minimize the number of you picked. It’s called a “cover” because the sets you pick “cover” every element of . Let’s do a simple. Let and Then the smallest possible number of sets you can pick is 2, and you can achieve this by picking both or both . The connection to regex golf is that you pick to be the set of strings you want to match, and you pick a set of regexes that match some of the strings in but none of the strings you want to avoid matching (I’ll call them ). If is such a regex, then you can form the set of strings that matches. Then if you find a small set cover with the strings , then you can “or” them together to get a single regex that matches all of but none of . Set cover is what’s called NP-hard, and one implication is that we shouldn’t hope to find an efficient algorithm that will always give you the shortest regex for every regex golf problem. But despite this, there are approximation algorithms for set cover. What I mean by this is that there is a regex-golf algorithm that outputs a subset of the regexes matching all of , and the number of regexes it outputs is such-and-such close to the minimum possible number. We’ll make “such-and-such” more formal later in the post. What made me sad was that Norvig didn’t go any deeper than saying, “We can try to approximate set cover, and the greedy algorithm is pretty good.” It’s true, but the ideas are richer than that! Set cover is a simple example to showcase interesting techniques from theoretical computer science. And perhaps ironically, in Norvig’s second post a header promised the article would discuss the theory of set cover, but I didn’t see any of what I think of as theory. Instead he partially analyzes the structure of the regex golf instances he cares about. This is useful, but not really theoretical in any way unless he can say something universal about those instances. I don’t mean to bash Norvig. His articles were great! And in-depth theory was way beyond scope. So this post is just my opportunity to fill in some theory gaps. We’ll do three things: Show formally that set cover is NP-hard. Prove the approximation guarantee of the greedy algorithm. Show another (very different) approximation algorithm based on linear programming. Along the way I’ll argue that by knowing (or at least seeing) the details of these proofs, one can get a better sense of what features to look for in the set cover instance you’re trying to solve. 
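Before the proofs, a tiny concrete instance may help; the universe and subsets below are made up for illustration and are not Norvig's data.

# A toy set cover instance: a universe U and named candidate subsets.
U = {1, 2, 3, 4, 5, 6}
sets = {
    "A": {1, 2, 3},
    "B": {4, 5, 6},
    "C": {1, 4},
    "D": {2, 5},
    "E": {3, 6},
}

def is_cover(chosen):
    # a choice of names covers U if the union of the chosen sets is all of U
    return set().union(*(sets[name] for name in chosen)) == U

assert is_cover({"A", "B"})          # a cover of size 2 (the optimum here)
assert is_cover({"C", "D", "E"})     # a larger cover of size 3
assert not is_cover({"A", "C"})      # misses 5 and 6

In the regex golf setting, each named set would be the set of target strings matched by one candidate regex, and the goal is to cover all the targets with as few regexes as possible.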
We’ll also see how set cover depicts the broader themes of theoretical computer science. NP-hardness The first thing we should do is show that set cover is NP-hard. Intuitively what this means is that we can take some hard problem and encode instances ofinside set cover problems. This idea is called a reduction, because solving problem will “reduce” to solving set cover, and the method we use to encode instance of as set cover problems will have a small amount of overhead. This is one way to say that set cover is “at least as hard as” . The hard problem we’ll reduce to set cover is called 3-satisfiability (3-SAT). In 3-SAT, the input is a formula whose variables are either true or false, and the formula is expressed as an OR of a bunch of clauses, each of which is an AND of three variables (or their negations). This is called 3-CNF form. A simple example: The goal of the algorithm is to decide whether there is an assignment to the variables which makes the formula true. 3-SAT is one of the most fundamental problems we believe to be hard and, roughly speaking, by reducing it to set cover we include set cover in a class called NP-complete, and if any one of these problems can be solved efficiently, then they all can (this is the famous P versus NP problem, and an efficient algorithm would imply P equals NP). So a reduction would consist of the following: you give me a formula in 3-CNF form, and I have to produce (in a way that depends on !) a universe and a choice of subsets in such a way that has a true assignment of variables if and only if the corresponding set cover problem has a cover using sets. In other words, I’m going to design a function from 3-SAT instances to set cover instances, such that is satisfiable if and only if has a set cover with sets. Why do I say it only for sets? Well, if you can always answer this question then I claim you can find the minimum size of a set cover needed by doing a binary search for the smallest value of . So finding the minimum size of a set cover reduces to the problem of telling if theres a set cover of size . Now let’s do the reduction from 3-SAT to set cover. If you give me where each is a clause and the variables are denoted , then I will choose as my universe to be the set of all the clauses and indices of the variables (these are all just formal symbols). i.e. The first part of will ensure I make all the clauses true, and the last part will ensure I don’t pick a variable to be both true and false at the same time. To show how this works I have to pick my subsets. For each variable , I’ll make two sets, one called and one called . They will both contain in addition to the clauses which they make true when the corresponding literal is true (by literal I just mean the variable or its negation). For example, if uses the literal , then will contain but will not. Finally, I’ll set , the number of variables. Now to prove this reduction works I have to prove two things: if my starting formula has a satisfying assignment I have to show the set cover problem has a cover of size . Indeed, take the sets for all literals that are set to true in a satisfying assignment. There can be at most true literals since half are true and half are false, so there will be at most sets, and these sets clearly cover all of because every literal has to be satisfied by some literal or else the formula isn’t true. The reverse direction is similar: if I have a set cover of size , I need to use it to come up with a satisfying truth assignment for the original formula. 
But indeed, the sets that get chosen can’t include both a and its negation set , because there are of the elements , and each is only in the two . Just by counting if I cover all the indices , I already account for sets! And finally, since I have covered all the clauses, the literals corresponding to the sets I chose give exactly a satisfying assignment. Whew! So set cover is NP-hard because I encoded this logic problem 3-SAT within its rules. If we think 3-SAT is hard (and we do) then set cover must also be hard. So if we can’t hope to solve it exactly we should try to approximate the best solution. The greedy approach The method that Norvig uses in attacking the meta-regex golf problem is the greedy algorithm. The greedy algorithm is exactly what you’d expect: you maintain a list of the subsets you’ve picked so far, and at each step you pick the set that maximizes the number of new elements of that aren’t already covered by the sets in . In python pseudocode: Theorem: If it is possible to cover by the sets in , then the greedy algorithm always produces a cover that at worst has size , where is the size of the smallest cover. Moreover, this is asymptotically the best any algorithm can do. One simple fact we need from calculus is that the following sum is asymptotically the same as : Proof. [adapted from Wan] Let’s say the greedy algorithm picks sets in that order. We’ll set up a little value system for the elements of . Specifically, the value of each is 1, and in step we evenly distribute this unit value across all newly covered elements of . So for each covered element gets value , and if covers four new elements, each gets a value of 1/4. One can think of this “value” as a price, or energy, or unit mass, or whatever. It’s just an accounting system (albeit a clever one) we use to make some inequalities clear later. In general call the value of element the value assigned to at the step where it’s first covered. In particular, the number of sets chosen by the greedy algorithm is just . We’re just bunching back together the unit value we distributed for each step of the algorithm. Now we want to compare the sets chosen by greedy to the optimal choice. Call a smallest set cover . Let’s stare at the following inequality. It’s true because each counts for a at most once in the left hand side, and in the right hand side the sets in must hit each at least once but may hit some more than once. Also remember the left hand side is equal to . Now we want to show that the inner sum on the right hand side, , is at most . This will in fact prove the entire theorem: because each set has size at most , the inequality above will turn into And so , which is the statement of the theorem. So we want to show that . For each define to be the number of elements in not covered in . Notice that is the number of elements of that are covered for the first time in step . If we call the smallest integer for which , we can count up the differences up to step , we get The rightmost term is just the cost assigned to the relevant elements at step . Moreover, because covers more new elements than (by definition of the greedy algorithm), the fraction above is at most . The end is near. For brevity I’ll drop the from . And that proves the claim. I have three postscripts to this proof: This is basically the exact worst-case approximation that the greedy algorithm achieves. In fact, Petr Slavik proved in 1996 that the greedy gives you a set of size exactly in the worst case. 
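Before the remaining postscripts, here is a sketch of the greedy routine in the spirit of the python pseudocode mentioned above (the function and variable names are illustrative):

def greedy_set_cover(U, sets):
    # repeatedly pick the set covering the most still-uncovered elements
    uncovered = set(U)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda S: len(S & uncovered))
        if not best & uncovered:
            raise ValueError("the given sets cannot cover U")
        chosen.append(best)
        uncovered -= best
    return chosen

cover = greedy_set_cover({1, 2, 3, 4, 5}, [{1, 2, 3}, {2, 4}, {3, 5}, {4, 5}])
# picks {1, 2, 3} first, then {4, 5}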
This is also the best approximation that any set cover algorithm can achieve, provided that P is not NP. This result was basically known in 1994, but it wasn’t until 2013 and the use of some very sophisticated tools that the best possible bound was found with the smallest assumptions. In the proof we used that to bound things, but if we knew that our sets (i.e. subsets matched by a regex) had sizes bounded by, say, , the same proof would show that the approximation factor is instead of . However, in order for that to be useful you need to be a constant, or at least to grow more slowly than any polynomial in , since e.g. . In fact, taking a second look at Norvig’s meta regex golf problem, some of his instances had this property! Which means the greedy algorithm gives a much better approximation ratio for certain meta regex golf problems than it does for the worst case general problem. This is one instance where knowing the proof of a theorem helps us understand how to specialize it to our interests. Norvig’s frequency table for president meta-regex golf. The left side counts the size of each set (defined by a regex) The linear programming approach So we just said that you can’t possibly do better than the greedy algorithm for approximating set cover. There must be nothing left to say, job well done, right? Wrong! Our second analysis, based on linear programming, shows that instances with special features can have better approximation results. In particular, if we’re guaranteed that each element occurs in at most of the sets , then the linear programming approach will give a -approximation, i.e. a cover whose size is at worst larger than OPT by a multiplicative factor of . In the case that is constant, we can beat our earlier greedy algorithm. The technique is now a classic one in optimization, called LP-relaxation (LP stands for linear programming). The idea is simple. Most optimization problems can be written as integer linear programs, that is there you have variables and you want to maximize (or minimize) a linear function of the subject to some linear constraints. The thing you’re trying to optimize is called the objective. While in general solving integer linear programs is NP-hard, we can relax the “integer” requirement to , or something similar. The resulting linear program, called the relaxed program, can be solved efficiently using the simplex algorithm or another more complicated method. The output of solving the relaxed program is an assignment of real numbers for the that optimizes the objective function. A key fact is that the solution to the relaxed linear program will be at least as good as the solution to the original integer program, because the optimal solution to the integer program is a valid candidate for the optimal solution to the linear program. Then the idea is that if we use some clever scheme to round the to integers, we can measure how much this degrades the objective and prove that it doesn’t degrade too much when compared to the optimum of the relaxed program, which means it doesn’t degrade too much when compared to the optimum of the integer program as well. If this sounds wishy washy and vague don’t worry, we’re about to make it super concrete for set cover. We’ll make a binary variable for each set in the input, and if and only if we include it in our proposed cover. Then the objective function we want to minimize is . 
If we call our elements , then we need to write down a linear constraint that says each element is hit by at least one set in the proposed cover. These constraints have to depend on the sets , but that’s not a problem. One good constraint for element is In words, the only way that an will not be covered is if all the sets containing it have their . And we need one of these constraints for each . Putting it together, the integer linear program is The integer program for set cover. Once we understand this formulation of set cover, the relaxation is trivial. We just replace the last constraint with inequalities. For a given candidate assignment to the , call the objective value (in this case ). Now we can be more concrete about the guarantees of this relaxation method. Let be the optimal value of the integer program and a corresponding assignment to achieving the optimum. Likewise let be the optimal things for the linear relaxation. We will prove: Theorem: There is a deterministic algorithm that rounds to integer values so that the objective value , where is the maximum number of sets that any element occurs in. So this gives a -approximation of set cover. Proof. Let be as described in the theorem, and call to make the indexing notation easier. The rounding algorithm is to set if and zero otherwise. To prove the theorem we need to show two things hold about this new candidate solution : The choice of all for which covers every element. The number of sets chosen (i.e. ) is at most times more than . Since , so if we can prove number 2 we get , which is the theorem. So let’s prove 1. Fix any and we’ll show that element is covered by some set in the rounded solution. Call the number of times element occurs in the input sets. By definition , so . Recall was the optimal solution to the relaxed linear program, and so it must be the case that the linear constraint for each is satisfied: . We know that there are terms and they sums to at least 1, so not all terms can be smaller than (otherwise they’d sum to something less than 1). In other words, some variable in the sum is at least , and so is set to 1 in the rounded solution, corresponding to a set that contains . This finishes the proof of 1. Now let’s prove 2. For each , we know that for each , the corresponding variable . In particular . Now we can simply bound the sum. The second inequality is true because some of the are zero, but we can ignore them when we upper bound and just include all the . This proves part 2 and the theorem. I’ve got some more postscripts to this proof: The proof works equally well when the sets are weighted, i.e. your cost for picking a set is not 1 for every set but depends on some arbitrarily given constants . We gave a deterministic algorithm rounding to , but one can get the same result (with high probability) using a randomized algorithm. The idea is to flip a coin with bias roughly times and set if and only if the coin lands heads at least once. The guarantee is no better than what we proved, but for some other problems randomness can help you get approximations where we don’t know of any deterministic algorithms to get the same guarantees. I can’t think of any off the top of my head, but I’m pretty sure they’re out there. For step 1 we showed that at least one term in the inequality for would be rounded up to 1, and this guaranteed we covered all the elements. A natural question is: why not also round up at most one term of each of these inequalities? 
It might be that in the worst case you don’t get a better guarantee, but it would be a quick extra heuristic you could use to post-process a rounded solution. Solving linear programs is slow. There are faster methods based on so-called “primal-dual” methods that use information about the dual of the linear program to construct a solution to the problem. Goemans and Williamson have a nice self-contained chapter on their website about this with a ton of applications. Additional Reading Williamson and Shmoys have a large textbook called The Design of Approximation Algorithms. One problem is that this field is like a big heap of unrelated techniques, so it’s not like the book will build up some neat theoretical foundation that works for every problem. Rather, it’s messy and there are lots of details, but there are definitely diamonds in the rough, such as the problem of (and algorithms for) coloring 3-colorable graphs with “approximately 3” colors, and the infamous unique games conjecture. I wrote a post a while back giving conditions which, if a problem satisfies those conditions, the greedy algorithm will give a constant-factor approximation. This is much better than the worst case -approximation we saw in this post. Moreover, I also wrote a post about matroids, which is a characterization of problems where the greedy algorithm is actually optimal. Set cover is one of the main tools that IBM’s AntiVirus software uses to detect viruses. Similarly to the regex golf problem, they find a set of strings that occurs source code in some viruses but not (usually) in good programs. Then they look for a small set of strings that covers all the viruses, and their virus scan just has to search binaries for those strings. Hopefully the size of your set cover is really small compared to the number of viruses you want to protect against. I can’t find a reference that details this, but that is understandable because it is proprietary software. Discussion: Let’s prove correctness. Say that is the unknown value that occurs more than times. The idea of the algorithm is that if you could pair up elements of your stream so that distinct values are paired up, and then you “kill” these pairs, then will always survive. The way this algorithm pairs up the values is by holding onto the most recent value that has no pair (implicitly, by keeping a count how many copies of that value you saw). Then when you come across a new element, you decrement the counter and implicitly account for one new pair. Let’s analyze the complexity of the algorithm. Clearly the algorithm only uses a single pass through the data. Next, if the stream has size , then this algorithm uses space. Indeed, if the stream entirely consists of a single value (say, a stream of all 1’s) then the counter will be at the end, which takes bits to store. On the other hand, if there are possible values then storing the largest requires bits. Finally, the guarantee that one value occurs more than times is necessary. If it is not the case the algorithm could output anything (including the most infrequent element!). And moreover, if we don’t have this guarantee then every algorithm that solves the problem must use at least space in the worst case. In particular, say that , and the first items are all distinct and the last items are all the same one, the majority value . If you do not know in advance, then you must keep at least one bit of information to know which symbols occurred in the first half of the stream because any of them could be . 
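The single-counter algorithm analyzed in this discussion matches the classic Boyer–Moore majority vote; a minimal sketch follows (the stream can be any iterable):

def majority_candidate(stream):
    # keep one candidate and one counter; distinct values cancel in pairs
    candidate, count = None, 0
    for x in stream:
        if count == 0:
            candidate, count = x, 1
        elif x == candidate:
            count += 1
        else:
            count -= 1          # "kill" a pair of distinct values
    return candidate            # guaranteed correct only if a true majority exists

assert majority_candidate([1, 2, 1, 3, 1, 1, 2]) == 1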
So the guarantee allows us to bypass that barrier. This algorithm can be generalized to detect items with frequency above some threshold using space . The idea is to keep counters instead of one, adding new elements when any counter is zero. When you see an element not being tracked by your counters (which are all positive), you decrement all the counters by 1. This is like a -to-one matching rather than a pairing. Greedy algorithms are by far one of the easiest and most well-understood algorithmic techniques. There is a wealth of variations, but at its core the greedy algorithm optimizes something using the natural rule, “pick what looks best” at any step. So a greedy routing algorithm would say to a routing problem: “You want to visit all these locations with minimum travel time? Let’s start by going to the closest one. And from there to the next closest one. And so on.” Because greedy algorithms are so simple, researchers have naturally made a big effort to understand their performance. Under what conditions will they actually solve the problem we’re trying to solve, or at least get close? In a previous post we gave some easy-to-state conditions under which greedy gives a good approximation, but the obvious question remains: can we characterize when greedy algorithms give an optimal solution to a problem? The answer is yes, and the framework that enables us to do this is called a matroid. That is, if we can phrase the problem we’re trying to solve as a matroid, then the greedy algorithm is guaranteed to be optimal. Let’s start with an example when greedy is provably optimal: the minimum spanning tree problem. Throughout the article we’ll assume the reader is familiar with the very basics of linear algebra and graph theory (though we’ll remind ourselves what a minimum spanning tree is shortly). For a refresher, this blog has primers on both subjects. But first, some history. History Matroids were first introduced by Hassler Whitney in 1935, and independently discovered a little later by B.L. van der Waerden (a big name in combinatorics). They were both interested in devising a general description of “independence,” the properties of which are strikingly similar when specified in linear algebra and graph theory. Since then the study of matroids has blossomed into a large and beautiful theory, one part of which is the characterization of the greedy algorithm: greedy is optimal on a problem if and only if the problem can be represented as a matroid. Mathematicians have also characterized which matroids can be modeled as spanning trees of graphs (we will see this momentarily). As such, matroids have become a standard topic in the theory and practice of algorithms. Minimum Spanning Trees It is often natural in an undirected graph to find a connected subset of edges that touch every vertex. As an example, if you’re working on a power network you might want to identify a “backbone” of the network so that you can use the backbone to cheaply travel from any node to any other node. Similarly, in a routing network (like the internet) it costs a lot of money to lay down cable, it’s in the interest of the internet service providers to design analogous backbones into their infrastructure. A minimal subset of edges in a backbone like this is guaranteed to form a tree. This is simply because if you have a cycle in your subgraph then removing any edge on that cycle doesn’t break connectivity or the fact that you can get from any vertex to any other (and trees are the maximal subgraphs without cycles). 
As such, these “backbones” are called spanning trees. “Span” here means that you can get from any vertex to any other vertex, and it suggests the connection to linear algebra that we’ll describe later, and it’s a simple property of a tree that there is a unique path between any two vertices in the tree. [Figure: an example of a spanning tree.] When your edges e in E have nonnegative weights w_e, we can further ask to find a minimum cost spanning tree. The cost of a spanning tree T is just the sum of its edge weights, and it’s important enough of a definition to offset. Definition: A minimum spanning tree T of a weighted graph G (with weights w_e >= 0 for e in E) is a spanning tree which minimizes the quantity w(T) = sum of w_e over e in T. There are a lot of algorithms to find minimal spanning trees, but one that will lead us to matroids is Kruskal’s algorithm. It’s quite simple. We’ll maintain a forest F in G, which is just a subgraph consisting of a bunch of trees that may or may not be connected. At the beginning F is just all the vertices with no edges. And then at each step we add to F the edge whose weight is smallest and also does not introduce any cycles into F. If the input graph G is connected then this will always produce a minimal spanning tree. Proof. Call F_i the forest produced at step i of the algorithm. Then F_0 is the set of all vertices of G and F_{n-1} is the final forest output by Kruskal’s (as a quick exercise, prove all spanning trees on n vertices have n-1 edges, so we will stop after n-1 rounds). It’s clear that F_{n-1} is a tree because the algorithm guarantees no F_i will have a cycle. And any tree with n-1 edges is necessarily a spanning tree, because if some vertex were left out then there would be n-1 edges on a subgraph of at most n-1 vertices, necessarily causing a cycle somewhere in that subgraph. Now we’ll prove that F_{n-1} has minimal cost. We’ll prove this in a similar manner to the general proof for matroids. Indeed, say you had a tree T whose cost is strictly less than that of F_{n-1} (we can also suppose that T is minimal, but this is not necessary). Pick the minimal weight edge e of T that is not in F_{n-1}. Adding e to F_{n-1} introduces a unique cycle C in F_{n-1} together with e. This cycle has some strange properties. First, e has the highest cost of any edge on C. For otherwise, Kruskal’s algorithm would have chosen it before the heavier weight edges. Second, there is another edge in C that’s not in T (because T was a tree it can’t have the entire cycle). Call such an edge e'. Now we can remove e' from F_{n-1} and add e. This can only increase the total cost of F_{n-1}, but this transformation produces a tree with one more edge in common with T than before. This contradicts that T had strictly lower weight than F_{n-1}, because repeating the process we described would eventually transform F_{n-1} into T exactly, while only increasing the total cost. Just to recap, we defined sets of edges to be “good” if they did not contain a cycle, and a spanning tree is a maximal set of edges with this property. In this scenario, the greedy algorithm performed optimally at finding a spanning tree with minimal total cost. Columns of Matrices Now let’s consider a different kind of problem. Say I give you a matrix like this one: [matrix omitted in this copy] In the standard interpretation of linear algebra, this matrix represents a linear function f from one vector space V to another W, with the basis v_1, ..., v_n of V being represented by columns and the basis w_1, ..., w_m of W being represented by the rows. Column j tells you how to write f(v_j) as a linear combination of the w_i, and in so doing uniquely defines f. Now one thing we want to calculate is the rank of this matrix. That is, what is the dimension of the image of V under f? 
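Before developing the linear-algebra analogy, here is a minimal Python sketch (my own illustration, not code from the original post) of the Kruskal's algorithm described above. It uses a small union-find structure as the "does this edge close a cycle?" test; all names are illustrative.

def kruskal(n, edges):
    # n: number of vertices labeled 0..n-1
    # edges: list of (weight, u, v) tuples; assumes the graph is connected
    parent = list(range(n))

    def find(x):                      # union-find root lookup with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):     # consider the cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                  # adding (u, v) does not create a cycle
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# A square 0-1-2-3-0 with one diagonal 0-2; the two heaviest edges are skipped.
print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)]))
# -> [(0, 1, 1), (1, 2, 2), (2, 3, 3)]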
By linear algebraic arguments we know that this is equivalent to asking “how many linearly independent columns of A can we find”? An interesting consequence is that if you have two sets of columns that are both linearly independent and maximally so (adding any other column to either set would necessarily introduce a dependence in that set), then these two sets have the same size. This is part of why the rank of a matrix is well-defined. If we were to give the columns of A costs, then we could ask about finding the minimal-cost maximally-independent column set. It sounds like a mouthful, but it’s exactly the same idea as with spanning trees: we want a set of vectors that spans the whole column space of A, but contains no “cycles” (linearly dependent combinations), and we want the cheapest such set. So we have two kinds of “independence systems” that seem to be related. One interesting question we can ask is whether these kinds of independence systems are “the same” in a reasonable way. Hardcore readers of this blog may see the connection quite quickly. For any graph G, there is a natural linear map defined on its edges, so that a linear dependence among the columns (edges) corresponds to a cycle in G. This map is called the incidence matrix by combinatorialists and the first boundary map by topologists. The map is easy to construct: for each edge (u, v) you add a column with a 1 in the u-th row and a -1 in the v-th row. Then taking a sum of edges gives you zero if and only if the edges form a cycle. So we can think of a set of edges as “independent” if they don’t contain a cycle. It’s a little bit less general than independence over the real numbers, but you can make it exactly the same kind of independence if you change your field from real numbers to Z/2Z. We won’t do this because it will detract from our end goal (to analyze greedy algorithms in realistic settings), but for further reading this survey of Oxley assumes that perspective. So with the recognition of how similar these notions of independence are, we are ready to define matroids. The Matroid So far we’ve seen two kinds of independence: “sets of edges with no cycles” (also called forests) and “sets of linearly independent vectors.” Both of these share two trivial properties: there are always nonempty independent sets, and every subset of an independent set is independent. We will call any family of subsets with this property an independence system. Definition: Let X be a finite set. An independence system over X is a family I of subsets of X with the following two properties. (1) I is nonempty. (2) If A is in I, then so is every subset of A. This is too general to characterize greedy algorithms, so we need one more property shared by our examples. There are a few things we could do, but here’s one nice property that turns out to be enough. Definition: A matroid M = (X, I) is a set X and an independence system I over X with the following property: If A, B are in I with |A| = |B| + 1, then there is an element x in A \ B such that B together with {x} is in I. In other words, this property says if I have an independent set that is not maximally independent, I can grow the set by adding some suitably-chosen element from a larger independent set. We’ll call this the extension property. For a warmup exercise, let’s prove that the extension property is equivalent to the following (assuming the other properties of a matroid): For every subset Y of X, all maximal independent sets contained in Y have equal size. Proof. 
For one direction, if you have two maximal sets that are not the same size (say is bigger), then you can take any subset of whose size is exactly , and use the extension property to make larger, a contradiction. For the other direction, say that I know all maximal independent sets of any have the same size, and you give me . I need to find an that I can add to and keep it independent. What I do is take the subset . Now the sizes of don’t change, but can’t be maximal inside because it’s smaller than ( might not be maximal either, but it’s still independent). And the only way to extend is by adding something from , as desired. So we can use the extension property and the cardinality property interchangeably when talking about matroids. Continuing to connect matroid language to linear algebra and graph theory, the maximal independent sets of a matroid are called bases, the size of any basis is the rank of the matroid, and the minimal dependent sets are called circuits. In fact, you can characterize matroids in terms of the properties of their circuits, which are dual to the properties of bases (and hence all independent sets) in a very concrete sense. But while you could spend all day characterizing the many kinds of matroids and comatroids out there, we are still faced with the task of seeing how the greedy algorithm performs on a matroid. That is, suppose that your matroid has a nonnegative real number associated with each . And suppose we had a black-box function to determine if a given set is independent. Then the greedy algorithm maintains a set , and at every step adds a minimum weight element that maintains the independence of . If we measure the cost of a subset by the sum of the weights of its elements, then the question is whether the greedy algorithm finds a minimum weight basis of the matroid. The answer is even better than yes. In fact, the answer is that the greedy algorithm performs perfectly if and only if the problem is a matroid! More rigorously, Theorem: Suppose that is an independence system, and that we have a black-box algorithm to determine whether a given set is independent. Define the greedy algorithm to iteratively adds the cheapest element of that maintains independence. Then the greedy algorithm produces a maximally independent set of minimal cost for every nonnegative cost function on , if and only if is a matroid. It’s clear that the algorithm will produce a set that is maximally independent. The only question is whether what it produces has minimum weight among all maximally independent sets. We’ll break the theorem into the two directions of the “if and only if”: Part 1: If is a matroid, then greedy works perfectly no matter the cost function.Part 2: If greedy works perfectly for every cost function, then is a matroid. Proof of Part 1. Call the cost function , and suppose that the greedy algorithm picks elements (in that order). It’s easy to see that . Now if you give me any list of independent elements that has , I claim that for all . This proves what we want, because if there were a basis of size with smaller weight, sorting its elements by weight would give a list contradicting this claim. To prove the claim, suppose to the contrary that it were false, and for some we have . Moreover, pick the smallest for which this is true. Note , and so we can look at the special sets and . Now , so by the matroid property there is some between and so that is an independent set (and is not in ). 
But then , and so the greedy algorithm would have picked before it picks (and the strict inequality means they’re different elements). This contradicts how the greedy algorithm runs, and hence proves the claim. Proof of Part 2. We’ll prove this contrapositively as follows. Suppose we have our independence system and it doesn’t satisfy the last matroid condition. Then we’ll construct a special weight function that causes the greedy algorithm to fail. So let be independent sets with , but for every adding to never gives you an independent set. Now what we’ll do is define our weight function so that the greedy algorithm picks the elements we want in the order we want (roughly). In particular, we’ll assign all elements of a tiny weight we’ll call . For elements of we’ll use , and for we’ll use , with for everything else. In a more compact notation: We need two things for this weight function to screw up the greedy algorithm. The first is that , so that greedy picks the elements in the order we want. Note that this means it’ll first pick all of , and then all of , and by assumption it won’t be able to pick anything from , but since is assumed to be non-maximal, we have to pick at least one element from and pay for it. So the second thing we want is that the cost of doing greedy is worse than picking any maximally independent set that contains (and we know that there has to be some maximal independent set containing ). In other words, if we call the size of a maximally independent set, we want This can be rearranged (using the fact that ) to The point here is that the greedy picks too many elements of weight , since if we were to start by taking all of (instead of all of ), then we could get by with one fewer. That might not be optimal, but it’s better than greedy and that’s enough for the proof. So we just need to make large enough to make this inequality hold, while still maintaining . There are probably many ways to do this, and here’s one. Pick some , and set It’s trivial that and . For the rest we need some observations. First, the fact that implies that . Second, both and are nonempty, since otherwise the second property of independence systems would contradict our assumption that augmenting with elements of breaks independence. Using this, we can divide by these quantities to get This proves the claim and finishes the proof. As a side note, we proved everything here with respect to minimizing the sum of the weights, but one can prove an identical theorem for maximization. The only part that’s really different is picking the clever weight function in part 2. In fact, you can convert between the two by defining a new weight function that subtracts the old weights from some fixed number that is larger than any of the original weights. So these two problems really are the same thing. This is pretty amazing! So if you can prove your problem is a matroid then you have an awesome algorithm automatically. And if you run the greedy algorithm for fun and it seems like it works all the time, then that may be hinting that your problem is a matroid. This is one of the best situations one could possibly hope for. But as usual, there are a few caveats to consider. They are both related to efficiency. The first is the black box algorithm for determining if a set is independent. In a problem like minimum spanning tree or finding independent columns of a matrix, there are polynomial time algorithms for determining independence. These two can both be done, for example, with Gaussian elimination. 
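To make the role of that black-box independence check concrete, here is a minimal Python sketch (my own illustration, not code from the post) of the generic greedy procedure from the theorem; the oracle is passed in as an ordinary function. Plugging in "these edges contain no cycle" as the oracle recovers Kruskal's algorithm, and plugging in "these columns are linearly independent" gives a cheapest column basis.

def greedy_basis(ground_set, weight, is_independent):
    # ground_set:     iterable of elements
    # weight:         dict mapping each element to a nonnegative cost
    # is_independent: black-box oracle taking a set and returning True/False
    # Builds a maximal independent set by repeatedly adding the cheapest element
    # that keeps the set independent; by the theorem, the result has minimum
    # total weight exactly when the independence system is a matroid.
    basis = set()
    for x in sorted(ground_set, key=lambda e: weight[e]):
        if is_independent(basis | {x}):
            basis.add(x)
    return basis

# Toy example: the uniform matroid on {a, b, c, d} whose independent sets are
# the subsets of size at most 2.
w = {'a': 3, 'b': 1, 'c': 2, 'd': 5}
print(greedy_basis(w.keys(), w, lambda s: len(s) <= 2))  # -> {'b', 'c'}, the two cheapest

Note how often the oracle gets called: once per candidate element, every time we consider growing the set.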
But there’s nothing to stop our favorite matroid from requiring an exponential amount of time to check if a set is independent. This makes greedy all but useless, since we need to check for independence many times in every round. Another, perhaps subtler, issue is that the size of the ground set X might be exponentially larger than the rank of the matroid. In other words, at every step our greedy algorithm needs to find a new element to add to the set it’s building up. But there could be such a huge ocean of candidates, all but a few of which break independence. In practice an algorithm might be working with X implicitly, so we could still hope to solve the problem if we had enough knowledge to speed up the search for a new element. There are still other concerns. For example, a naive approach to implementing greedy takes quadratic time, since you may have to look through every element of X to find the minimum-cost guy to add. What if you just have to have faster runtime than quadratic? You can still be interested in finding more efficient algorithms that still perform perfectly, and to the best of my knowledge there’s nothing that says that greedy is the only exact algorithm for your favorite matroid. And then there are models where you don’t have direct/random access to the input, and lots of other ways that you can improve on greedy. But those stories are for another time. | Mid | [
0.646108663729809,
27.5,
15.0625
]
|
/* * libjingle * Copyright 2013 Google Inc. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright notice, * this list of conditions and the following disclaimer in the documentation * and/or other materials provided with the distribution. * 3. The name of the author may not be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef TALK_APP_WEBRTC_AUDIOTRACKRENDERER_H_ #define TALK_APP_WEBRTC_AUDIOTRACKRENDERER_H_ #include "talk/media/base/audiorenderer.h" #include "webrtc/base/thread.h" namespace webrtc { // Class used for AudioTrack to get the ID of WebRtc voice channel that // the AudioTrack is connecting to. // Each AudioTrack owns a AudioTrackRenderer instance. // AddChannel() will be called when an AudioTrack is added to a MediaStream. // RemoveChannel will be called when the AudioTrack or WebRtc VoE channel is // going away. // This implementation only supports one channel, and it is only used by // Chrome for remote audio tracks." class AudioTrackRenderer : public cricket::AudioRenderer { public: AudioTrackRenderer(); ~AudioTrackRenderer(); // Implements cricket::AudioRenderer. virtual void AddChannel(int channel_id) OVERRIDE; virtual void RemoveChannel(int channel_id) OVERRIDE; private: int channel_id_; }; } // namespace webrtc #endif // TALK_APP_WEBRTC_AUDIOTRACKRENDERER_H_ | Mid | [
0.5900990099009901,
37.25,
25.875
]
|
@model DisplayPackageViewModel
@{
ViewBag.Tab = "Packages";
Bundles.Reference("Content/dist/chocolatey.slim.css");
Bundles.Reference("Content/account.css");
Bundles.Reference("Scripts");
}
<section id="secondaryNav">
@Html.Partial("~/Views/Shared/_AuthenticationSubNavigation.cshtml")
</section>
<section class="container py-3 py-xl-5" id="account">
<div class="row">
<div class="col-xl-10 mx-auto">
<h2 class="text-center text-xl-left"><em>@Model.Title @Model.Version</em> Listing</h2>
<hr />
<p><strong>Permanently deleting packages is not supported, but you can control how they are listed.</strong></p>
<p>
Unlisting a package hides the package from search results and all NuGet commands, but packages
are still available for download. For example, they can still be downloaded as dependencies to
other packages.
</p>
@if (!Model.Listed && Model.Status != PackageStatusType.Approved && Model.Status != PackageStatusType.Exempted)
{
<div class="callout callout-danger">
<p>Until this package is approved, it is not allowed to be listed.</p>
</div>
}
else
{
<div class="row mt-5">
<div class="col-lg-8 mx-auto">
<div class="card">
<div class="card-body">
@using (Html.BeginForm())
{
<fieldset class="form" id="unlist-form">
<legend class="d-none">Edit @Model.Title Package</legend>
@Html.AntiForgeryToken()
<div class="form-field my-1 d-flex justify-content-center">
<label for="Listed" class="checkbox">
@Html.EditorFor(package => package.Listed)
List @Model.Title @Model.Version in search results.
<span class="checkmark"></span>
</label>
</div>
<p>
Unchecking this box means your package cannot be installed directly and it will
not show up in search results.
</p>
<button class="btn btn-primary d-block mt-3 mx-auto" type="submit" value="Save" title="Save Changes">Save Changes</button>
<p class="mb-0 mt-2 text-center"><small><a class="cancel" href="@Url.Action("DisplayPackage")" title="Cancel Changes and go back to package page.">Cancel</a></small></p>
</fieldset>
}
</div>
</div>
</div>
</div>
}
<h3 class="mt-5">Why can’t I delete my package?</h3>
<p>
Our policy is to only permanently delete Chocolatey packages that really need it, such as
packages that contain passwords, malicious/harmful code, etc. This policy is very similar
to the policies employed by other package managers such as
<a href="http://help.rubygems.org/kb/gemcutter/removing-a-published-rubygem" title="">Ruby Gems</a>.
</p>
<p>
Unlisting the package will remove the package from
being available in the Chocolatey Gallery. The package is still available for download as a dependency for
two main reasons.
</p>
<ul>
<li>
Other packages may depend on that package. Those packages might not necessarily be in this gallery.
</li>
<li>
Helps ensure that important community owned packages are not mass deleted.
</li>
</ul>
<p class="mb-0">
If you need the package permanently removed, click on the <a href="@Url.Action(MVC.Packages.ReportAbuse(Model.Id, Model.Version))" title="Report Abuse">Report Abuse</a> link and we'll take care
of it for you. PLEASE ONLY DO THIS IF THERE IS AN URGENT PROBLEM WITH THE PACKAGE.
(Passwords, malicious code, etc). Even if you remove it, it’s prudent to immediately
reset any passwords/sensitive data you accidentally pushed instead of waiting for us to delete
the package.
</p>
</div>
</div>
</section> | Mid | [
0.5570175438596491,
31.75,
25.25
]
|
1961 1566590851821 httpcache-v1 Method: POST URL: https://www.notion.so/api/v3/getRecordValues Body:+110 { "requests": [ { "id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "table": "block" } ] } Response:+1761 { "results": [ { "role": "comment_only", "value": { "alive": true, "content": [ "1c037d70-9871-4d47-b5cb-745f51df1643", "fb620f08-308b-458a-944f-99262db24b80", "02c8376c-ab07-4b00-9f50-c8ac7a638f41", "439b2c76-15aa-47be-b68c-9fe1c35427a3", "c43aee68-4757-4aee-bd06-c15cc3a1f4c4", "006b34ae-9e1a-40ac-b121-264652a4291d", "d2cd2b1e-c9a0-4167-a041-9ba8d9ab908c", "df6e7739-35a4-409b-9f76-c95d9268d440", "803b511f-2530-46c7-982f-dd5fe4952222", "34767039-05c7-4026-94c3-5b9b1a355df7", "c590bdff-a684-441d-9574-558d4c4fd332", "df8d9505-0bd4-46f3-92c3-6cbb34a89ff6", "531bd61e-95a7-4aa8-a821-3d4d2633694e", "c62cf4bb-4a93-4ebc-888f-f0c38fcc4a1a", "0258bf63-2d8f-4833-94d6-aa1402fd49a8", "4462df51-4b75-435a-9e2c-02e0bf3addb0", "40f2b7ca-0506-4eec-b563-ab10ee4166ce", "2949210a-33da-48ce-a4e8-3faf7417f2d2", "cb0a943a-4ab2-4bf4-9f76-db48c8280a2a" ], "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293067469, "format": { "page_full_width": true, "page_small_text": true }, "id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293067469, "parent_id": "1b338041-238f-4b08-83a8-9e23a3989c88", "parent_table": "block", "properties": { "title": [ [ "Rewritten - tales of rewriting software from X to Go" ] ] }, "type": "page", "version": 0 } } ] } 24809 1566590851822 httpcache-v1 Method: POST URL: https://www.notion.so/api/v3/loadPageChunk Body:+152 { "chunkNumber": 0, "cursor": { "stack": [] }, "limit": 50, "pageId": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "verticalColumns": false } Response:+24568 { "cursor": { "stack": [] }, "recordMap": { "block": { "006b34ae-9e1a-40ac-b121-264652a4291d": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "006b34ae-9e1a-40ac-b121-264652a4291d", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "properties": { "title": [ [ "Node.js to Go:" ] ] }, "type": "text", "version": 0 } }, "0258bf63-2d8f-4833-94d6-aa1402fd49a8": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "0258bf63-2d8f-4833-94d6-aa1402fd49a8", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "properties": { "title": [ [ "ResourceGuru", [ [ "a", "http://blog.resourceguruapp.com/go-language-google-cloud-platform/" ] ] ] ] }, "type": "bulleted_list", "version": 0 } }, "02c8376c-ab07-4b00-9f50-c8ac7a638f41": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "02c8376c-ab07-4b00-9f50-c8ac7a638f41", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "properties": { "title": [ [ "Scala to Go:" ] ] }, "type": "text", "version": 0 } }, "1b338041-238f-4b08-83a8-9e23a3989c88": { "role": "comment_only", "value": { "alive": true, "content": [ "9afe3485-f220-4f1b-b432-17d70f7b87d4", 
"87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "513bb8df-0858-4d93-8a58-166db0b3994f" ], "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512356289876, "id": "1b338041-238f-4b08-83a8-9e23a3989c88", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1530074957024, "parent_id": "6f70163e-a5b8-4ba9-928a-faa2e45d1f51", "parent_table": "block", "properties": { "title": [ [ "Advocacy" ] ] }, "type": "toggle", "version": 3 } }, "1c037d70-9871-4d47-b5cb-745f51df1643": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293076282, "id": "1c037d70-9871-4d47-b5cb-745f51df1643", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293076282, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "type": "text", "version": 0 } }, "2949210a-33da-48ce-a4e8-3faf7417f2d2": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "2949210a-33da-48ce-a4e8-3faf7417f2d2", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "properties": { "title": [ [ "https://github.com/golang/go/wiki/FromXToGo", [ [ "a", "https://github.com/golang/go/wiki/FromXToGo" ] ] ] ] }, "type": "bulleted_list", "version": 0 } }, "34767039-05c7-4026-94c3-5b9b1a355df7": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "34767039-05c7-4026-94c3-5b9b1a355df7", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "properties": { "title": [ [ "C++ to Go:" ] ] }, "type": "text", "version": 0 } }, "40f2b7ca-0506-4eec-b563-ab10ee4166ce": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "40f2b7ca-0506-4eec-b563-ab10ee4166ce", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "properties": { "title": [ [ "Other lists:" ] ] }, "type": "text", "version": 0 } }, "439b2c76-15aa-47be-b68c-9fe1c35427a3": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "439b2c76-15aa-47be-b68c-9fe1c35427a3", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "properties": { "title": [ [ "trading data analysis", [ [ "a", "http://blog.fmpwizard.com/blog/go_making_a_program_70_faster_by_avoiding_common_mistakes" ] ] ], [ ", at " ], [ "Ascendant Compliance Management", [ [ "a", "https://www.ascendantcompliancemanager.com/" ] ] ], [ ", rewriten sometime in 2014" ] ] }, "type": "bulleted_list", "version": 0 } }, "4462df51-4b75-435a-9e2c-02e0bf3addb0": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "4462df51-4b75-435a-9e2c-02e0bf3addb0", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", 
"parent_table": "block", "properties": { "title": [ [ "Skroutz", [ [ "a", "https://engineering.skroutz.gr/blog/rewriting-web-analytics-tracking-in-go/?utm_source=golangweekly\u0026utm_medium=email" ] ] ] ] }, "type": "bulleted_list", "version": 0 } }, "531bd61e-95a7-4aa8-a821-3d4d2633694e": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "531bd61e-95a7-4aa8-a821-3d4d2633694e", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "properties": { "title": [ [ "SendGrid's incoming server", [ [ "a", "https://www.reddit.com/r/golang/comments/49955m/ask_golang_who_is_using_go/d0q7yzq" ] ] ], [ ", 130x improvement" ] ] }, "type": "bulleted_list", "version": 0 } }, "568ac4c0-64c3-4ef6-a6ad-0b8d77230681": { "role": "comment_only", "value": { "alive": true, "content": [ "08e19004-306b-413a-ba6e-0e86a10fec7a", "623523b6-7e15-48a0-b525-749d6921465c", "25a256f9-0ce4-4eb7-8839-0ecc3cf9cd65", "d61b4f94-b10d-4d80-8d3d-238a4e7c4d10", "4da97980-9fb6-45cb-886a-51c656751d35", "aea20e01-890c-4874-ae08-4557d7789195", "c9bef0f1-c8fe-40a2-bc8b-06ace2bd7d8f", "ee0eee35-e706-4e75-9b2f-69d1d03125b2", "9a07ca64-c0c1-4dc0-9e8b-d134b348678d", "db9e9c03-e3e8-4287-a51d-4da5d507138b", "c5210d90-4251-437b-95d8-87da49bd8706", "ec1723d0-39f3-4a5c-a305-68a0deb2ad76", "e4132d5a-4401-4b2a-ad81-d8158c803ad1", "03ece883-f7df-4ce7-8596-73d04811479e", "36859b86-c5ac-423e-a037-4f3a4331b814" ], "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1528059171080, "format": { "page_full_width": true, "page_small_text": true }, "id": "568ac4c0-64c3-4ef6-a6ad-0b8d77230681", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1555525560000, "parent_id": "bc202e06-6caa-4e3f-81eb-f226ab5deef7", "parent_table": "space", "permissions": [ { "role": "editor", "type": "user_permission", "user_id": "bb760e2d-d679-4b64-b2a9-03005b21870a" }, { "allow_duplicate": false, "allow_search_engine_indexing": false, "role": "comment_only", "type": "public_permission" } ], "properties": { "title": [ [ "Website" ] ] }, "type": "page", "version": 370 } }, "6f70163e-a5b8-4ba9-928a-faa2e45d1f51": { "role": "comment_only", "value": { "alive": true, "content": [ "1b338041-238f-4b08-83a8-9e23a3989c88", "25c1809f-e05f-43c0-8b3d-af1cce2d5945", "56c7102b-120f-42a3-81ff-d4673507a0d3", "1cae71c4-e3f2-40a6-8cbf-380eac594d37", "5a832dad-dc7e-45c9-9025-807c013cfa8b", "fe3aac0b-2171-4dd8-8a69-f6889f05a8ac", "accb7fc5-d702-4e86-9ab0-41fd211dfe15", "e0c915d3-04e0-4da7-b455-6aa03929dfca", "b1cff481-c77e-43e4-a604-6b5582c12fdf", "74400c4f-5c50-4d60-9893-22638b9e5037", "98890a03-3ba4-445c-8d2a-3bcd3894ceea", "08e41706-8555-46c3-a074-dae41cf910d8", "ed2347d7-29bd-4c41-9b2b-52fed11e4ec7", "591cfada-b10d-443a-bd75-1a3365cbeef9", "d19571fb-1515-49bf-b875-c26c46f75837", "e3aa0199-ee36-492a-90fb-92aa3fe8ba25", "bc5ada73-f538-449e-91d3-61f6857e2ebc", "6435bf2e-2453-4c41-94bf-08cb397eeda3", "bf2363a5-6186-4fa5-8c08-a3c6d2305f97", "9d54f7ea-1a6e-4b3c-b788-efc3cfcf9e92" ], "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1474753820011, "format": { "page_full_width": true, "page_small_text": true }, "id": "6f70163e-a5b8-4ba9-928a-faa2e45d1f51", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1537164927987, "parent_id": "aea20e01-890c-4874-ae08-4557d7789195", "parent_table": "block", 
"properties": { "title": [ [ "Go" ] ] }, "type": "page", "version": 27 } }, "803b511f-2530-46c7-982f-dd5fe4952222": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "803b511f-2530-46c7-982f-dd5fe4952222", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "properties": { "title": [ [ "Scaledrone's " ], [ "websocket servers", [ [ "a", "http://blog.scaledrone.com/posts/nodejs-to-go" ] ] ] ] }, "type": "bulleted_list", "version": 0 } }, "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe": { "role": "comment_only", "value": { "alive": true, "content": [ "1c037d70-9871-4d47-b5cb-745f51df1643", "fb620f08-308b-458a-944f-99262db24b80", "02c8376c-ab07-4b00-9f50-c8ac7a638f41", "439b2c76-15aa-47be-b68c-9fe1c35427a3", "c43aee68-4757-4aee-bd06-c15cc3a1f4c4", "006b34ae-9e1a-40ac-b121-264652a4291d", "d2cd2b1e-c9a0-4167-a041-9ba8d9ab908c", "df6e7739-35a4-409b-9f76-c95d9268d440", "803b511f-2530-46c7-982f-dd5fe4952222", "34767039-05c7-4026-94c3-5b9b1a355df7", "c590bdff-a684-441d-9574-558d4c4fd332", "df8d9505-0bd4-46f3-92c3-6cbb34a89ff6", "531bd61e-95a7-4aa8-a821-3d4d2633694e", "c62cf4bb-4a93-4ebc-888f-f0c38fcc4a1a", "0258bf63-2d8f-4833-94d6-aa1402fd49a8", "4462df51-4b75-435a-9e2c-02e0bf3addb0", "40f2b7ca-0506-4eec-b563-ab10ee4166ce", "2949210a-33da-48ce-a4e8-3faf7417f2d2", "cb0a943a-4ab2-4bf4-9f76-db48c8280a2a" ], "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293067469, "format": { "page_full_width": true, "page_small_text": true }, "id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293067469, "parent_id": "1b338041-238f-4b08-83a8-9e23a3989c88", "parent_table": "block", "properties": { "title": [ [ "Rewritten - tales of rewriting software from X to Go" ] ] }, "type": "page", "version": 0 } }, "aea20e01-890c-4874-ae08-4557d7789195": { "role": "comment_only", "value": { "alive": true, "content": [ "6f70163e-a5b8-4ba9-928a-faa2e45d1f51", "ed055f63-753e-42ef-9025-e11ac9062c35" ], "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1530068313902, "id": "aea20e01-890c-4874-ae08-4557d7789195", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1554270960000, "parent_id": "568ac4c0-64c3-4ef6-a6ad-0b8d77230681", "parent_table": "block", "properties": { "title": [ [ "Programming:" ] ] }, "type": "text", "version": 48 } }, "c43aee68-4757-4aee-bd06-c15cc3a1f4c4": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "c43aee68-4757-4aee-bd06-c15cc3a1f4c4", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "properties": { "title": [ [ "rewriting most of the code", [ [ "a", "http://jimplush.com/talk/2015/12/19/moving-a-team-from-scala-to-golang/" ] ] ], [ " at CrowdStrike" ] ] }, "type": "bulleted_list", "version": 0 } }, "c590bdff-a684-441d-9574-558d4c4fd332": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "c590bdff-a684-441d-9574-558d4c4fd332", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": 
"87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "properties": { "title": [ [ "Google's " ], [ "Flywheel", [ [ "a", "http://matt-welsh.blogspot.com/2013/08/rewriting-large-production-system-in-go.html" ] ] ], [ ", " ], [ "source 2", [ [ "a", "http://matt-welsh.blogspot.com/2015/04/flywheel-googles-data-compression-proxy.html" ] ] ] ] }, "type": "bulleted_list", "version": 0 } }, "c62cf4bb-4a93-4ebc-888f-f0c38fcc4a1a": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "c62cf4bb-4a93-4ebc-888f-f0c38fcc4a1a", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "properties": { "title": [ [ "Ruby or Ruby on Rails to Go:" ] ] }, "type": "text", "version": 0 } }, "cb0a943a-4ab2-4bf4-9f76-db48c8280a2a": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293080067, "id": "cb0a943a-4ab2-4bf4-9f76-db48c8280a2a", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293080067, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "type": "text", "version": 0 } }, "d2cd2b1e-c9a0-4167-a041-9ba8d9ab908c": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "d2cd2b1e-c9a0-4167-a041-9ba8d9ab908c", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "properties": { "title": [ [ "Digg's " ], [ "S3 cache", [ [ "a", "https://medium.com/@theflapjack103/the-way-of-the-gopher-6693db15ae1f#.bpnrzcq4y" ] ] ] ] }, "type": "bulleted_list", "version": 0 } }, "df6e7739-35a4-409b-9f76-c95d9268d440": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "df6e7739-35a4-409b-9f76-c95d9268d440", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "properties": { "title": [ [ "Poptip", [ [ "a", "https://www.youtube.com/watch?v=mBy20FgB68Q" ] ] ] ] }, "type": "bulleted_list", "version": 0 } }, "df8d9505-0bd4-46f3-92c3-6cbb34a89ff6": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "df8d9505-0bd4-46f3-92c3-6cbb34a89ff6", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "properties": { "title": [ [ "Python to Go:" ] ] }, "type": "text", "version": 0 } }, "fb620f08-308b-458a-944f-99262db24b80": { "role": "comment_only", "value": { "alive": true, "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512293077032, "id": "fb620f08-308b-458a-944f-99262db24b80", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512293077032, "parent_id": "87df2b0f-c4ff-4fd5-bc41-313afc0f02fe", "parent_table": "block", "properties": { "title": [ [ "This is a compilation of articles that describe rewriting a software project from language X to Go." 
] ] }, "type": "text", "version": 0 } } }, "notion_user": { "bb760e2d-d679-4b64-b2a9-03005b21870a": { "role": "reader", "value": { "clipper_onboarding_completed": true, "email": "[email protected]", "family_name": "Kowalczyk", "given_name": "Krzysztof", "id": "bb760e2d-d679-4b64-b2a9-03005b21870a", "mobile_onboarding_completed": true, "onboarding_completed": true, "profile_photo": "https://s3-us-west-2.amazonaws.com/public.notion-static.com/2dcaa66c-7674-4ff6-9924-601785b63561/head-bw-640x960.png", "version": 182 } } }, "space": {} } } 9859 1566590851822 httpcache-v1 Method: POST URL: https://www.notion.so/api/v3/getRecordValues Body:+286 { "requests": [ { "id": "513bb8df-0858-4d93-8a58-166db0b3994f", "table": "block" }, { "id": "9afe3485-f220-4f1b-b432-17d70f7b87d4", "table": "block" }, { "id": "ed055f63-753e-42ef-9025-e11ac9062c35", "table": "block" } ] } Response:+9483 { "results": [ { "role": "comment_only", "value": { "alive": true, "content": [ "278a93ee-90b4-4bd8-b162-2c9c8fb8ed82", "520d5737-a3a8-4d8a-aae1-24c3054d0a7f", "838dd60d-5edf-4e92-bd77-e9a7ae6a2904", "4c8f9794-7de2-4dbe-a886-d77b67b6fb88", "ffbb3f39-8938-4f24-bf75-5848a722f62f", "67918a92-e15b-4fe2-a926-36b87640ce45", "61ca196c-c5e1-49ae-8c4c-b6122118c725", "d8ee28aa-da86-4c9d-aa55-d12b19d1f31d", "7c615b77-27f9-461a-8787-3d9281df9350", "cd83e5b1-195c-4f3a-ab45-2506bb47b8f4", "a1a7d2fb-2067-4e22-bc4a-e6fff8ce40e1", "7d40426b-ede3-4f32-a3a1-a16891be472a", "ba7a948d-6b2b-4f81-8b73-181845cfdf36", "69860904-09e4-42cd-ade4-7384065fe062", "fa1682a0-222e-4185-8f2c-36a54dc82ce7", "3f6c3766-cab7-40d6-b110-4aa1ce8becdc", "daec0e3e-4a09-4053-a4dc-044e80a85d12", "c258d483-1eca-47cb-9699-88dd9838d352", "06cefd17-389f-4833-83da-5a4be36a985e", "c66a3bcf-9118-4147-8d64-589cd9f74af7", "6395accd-52b7-41b7-bac9-acfa7720877d", "7d403a45-72c0-4d9c-a0e1-3bc499f10b93", "78af8845-c563-42ba-9d8a-f9143e232a6b", "b6eccc8e-b2ef-4fd0-ba80-44c30350c1fa", "1942a52d-0404-493e-83b6-7e224c63aa94", "47056204-914f-45f2-9fea-bda0534e4c26", "345ebb78-aeac-4459-a90a-bff2df0f8542", "35e182ff-be92-4fc2-b8a2-c89b4866838b", "a42b8e43-91cb-4b59-bd77-e3fb56b5c1fe", "970781fd-dde4-4de6-9588-c69b8061dbdf", "d05ef6b4-1956-4524-bba2-052498dc5a8e", "50a36a07-80da-4f98-9ae9-eb1581ab866d", "d3437c1d-8a98-47be-b6c2-a99557fd32b3", "4a5f9dca-cea7-46bf-be28-89a83a5d2b83", "714c3d8e-c5bb-4dc9-aec6-4dbd0d968173", "15634ca3-532e-4cbc-b4e2-2f2fe5088944", "98c03cdd-7aa4-4a3b-aed4-7438b5e09ce7", "3550c250-7a9f-452a-83d5-1c4356ab1aa7", "01238923-b3cc-4ccb-a897-4324ad6e4349", "cb09739d-f7ab-4312-8e1f-24477351aa38", "dd63bd9b-a772-46fd-a9e6-9a17f1fab8bb", "de51e2a1-7c3b-48f1-bd07-d5744b13af61", "26fa85a6-39f8-448e-9dd6-55f7be50d83f", "a425a14e-53ef-4c37-8979-2a170a55171e", "0429dae4-beac-46e1-b192-4bec148e3f21", "855b36a0-8800-4f5e-ae0b-cab010611580", "6060e6a4-849b-46c1-9a0b-8faf529f067e", "668e31da-fda7-45da-9c0d-2e43f00ce938", "769d2742-d492-408e-9673-5fa18ddc8070", "2918e2aa-fffe-4af8-9ea0-fa62bc3e01bc", "e574b504-645b-4169-bbda-9dd8e3c4bc28", "9affd9cc-f3d6-4383-beff-70094f65a8b8", "6f4ac9c7-4ac1-4b10-9702-fc17e17129ec", "a275957f-0698-44b7-8c75-8451f9f2c9e1", "b1d65ee1-45a3-4e00-8e82-4a70ec36ec4d", "43f385ab-1f50-4fa8-ba60-62cbd4b4af78", "1445cd70-0521-4574-8506-ee086765e8a5", "6c9de30f-5627-45d4-90a1-da0513a1a911", "98f1eccf-2369-419e-9ee7-d51b9815201c", "462948ce-b443-40da-9165-63cef6c7b85d", "29b1c7b5-0c33-4ccc-97b4-57b97aba25b1", "9ffd55ea-1925-4f59-9063-b6ef63f9b2c9", "6d61c38f-659c-4721-8273-7aac0743fa66", "09bb8011-b0dc-474b-8dd3-391c10d162cb", 
"30bd102c-54c8-4926-ac0d-b3a2d010102b", "0d236162-3bc1-4859-839b-c17248c05f09", "49172ab0-441f-4883-8712-8653586274ee", "83aaabdf-a796-4a5f-9896-b286a23e1798", "39d527ba-a0d2-491d-96ee-338131dc80d2", "26f93ce5-4ef7-41dc-9950-b8ac0518f500", "39fda21e-37de-48e1-a335-5f954fd97dd7", "1dc3bc9f-4bac-465c-a8c7-f65b0307139f", "9d965ca9-a33a-42db-9091-b37d44916dfd", "201f6ab3-9245-40a5-939d-a989b8c854b0", "7e9e92ca-601e-46bf-97f5-56ee1721f32d", "6a271e4b-46b7-4478-80cf-f919f984a707", "83d72b74-e58c-4816-b810-cec33f808b84", "545eaa87-e2b4-46e4-a6e8-c0cfbc8e8129", "e3d927ca-3806-40af-8f8e-3bb1a06a47fc", "dfae00f8-42a3-4618-8af7-03166b399ab2", "d764ef52-e0aa-4940-b4d8-7918556d6b64", "e2b208ee-16ee-4409-82cd-67ddeedaedbb", "ee9e7fe8-5fda-46c8-b42b-ef0fb8891c6f", "5f7adeb0-8e9b-41e2-9fef-9604adb12969", "74e053e3-2d38-47a7-98d6-61d024702a30", "f400b525-365e-4d45-b1fc-7e485a911991", "62875172-fb95-410a-bf6c-430b36f2d42c", "fb7fab52-e993-4809-a4b5-83bc0d050dfc", "2ac917a9-dbd4-4d34-9702-d43cfbf3f27f", "101bb42c-9ca6-4055-96b6-ee29d3e96f13", "cbc60ea7-7833-474c-96b6-fc8b90eaa815", "e207fd47-56f4-4b2f-916c-2470dbf85219", "41af6b61-f58a-4823-84dc-04c681a9d7d3", "98895b0e-22a6-4a6f-b46c-c977a04b9707", "dee96158-058b-4dd4-9036-c70f76a2b1c5", "2f6c2ae1-49f4-44e0-a876-d2771431d464" ], "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512335490738, "format": { "page_full_width": true, "page_small_text": true }, "id": "513bb8df-0858-4d93-8a58-166db0b3994f", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512335490738, "parent_id": "1b338041-238f-4b08-83a8-9e23a3989c88", "parent_table": "block", "properties": { "title": [ [ "companies using go" ] ] }, "type": "page", "version": 0 } }, { "role": "comment_only", "value": { "alive": true, "content": [ "4d357ec7-c389-4335-adb8-4385187752be" ], "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1512356347567, "id": "9afe3485-f220-4f1b-b432-17d70f7b87d4", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1512356347567, "parent_id": "1b338041-238f-4b08-83a8-9e23a3989c88", "parent_table": "block", "properties": { "title": [ [ "Big projects written in Go" ] ] }, "type": "page", "version": 0 } }, { "role": "comment_only", "value": { "alive": true, "content": [ "029ce27b-00a6-4965-9293-7b39ea0c09ca", "37c6f481-9b67-492d-ab22-bde3ee34d74f", "6f7a3240-5fb0-4720-9c2c-fc0195de32a2", "616c08e7-0c86-428c-a12c-3bf6742f50cb", "c117379a-1178-42fc-958b-914b5ec633fe", "104f2fcc-6bb4-4caa-9164-92429978c6bd", "341257f8-9f5e-4da5-8342-b5d483714a22", "6b8c6ff9-35bf-4dc6-995f-2e808886815a", "6d840b71-269e-40bc-b238-618f2392636a", "b66503d6-af00-4808-8fdf-e08a519f6b42", "212d1b62-edfa-45f7-a9b0-79c0859f845e", "054b9945-cd4b-4aa8-82b1-4142e123dcc9", "a2cf6835-0ff1-4b06-8641-6bb712970115", "aa6503d1-331e-40c2-aec7-f095d570f09b", "273e0f95-302a-4cd8-8713-9a3a53e0d833", "4f06fa0b-e001-455b-bbbf-4e7f76d654db", "66e8d90a-7ce6-4807-8150-1be702f63c73", "5baeed82-cfe4-400d-96bc-90dbb2d216bf", "03703920-2597-4d4b-bd22-1b1ee16ef9f1", "c6407faa-1749-48e3-a046-c422e1282c2e", "034bb138-5c30-4b74-b002-f5eaf318a885", "e83a79e1-e787-440c-ae2e-e66fe2bbd9f6", "46629540-c145-4cb4-9247-b9294610a822", "ad35f4e1-c713-4378-9bca-776777a64062", "16960a0b-18f8-4b77-8b2d-f53a23d38233", "a9b1b85b-6aec-4a86-87c6-5f51b76c6ea2", "f54d25c9-a933-4407-8fb6-81954de04386", "e860028f-1a9e-45fe-a230-38900def85df", "9637571e-f4ab-4f37-ba35-9b0e9e6af1dc", "bce2d6d8-5326-47eb-aa57-7611465cbbea", "b8e269b2-a4c6-4aa3-aa05-8e9011138841", 
"4aa47bab-3afd-4117-9d9b-a8503f5c0eb7", "242a5d98-d3d9-4e3a-8f0c-eb0c0854014a", "57d799bc-74d1-4bd9-8516-fb2fe88ae7e9", "2a1d61ae-2207-4571-8ed2-b76b6ea90ee3", "517c8d5f-058a-440c-8d5a-8fc6fcd98ef6", "1a9c0c00-e3b4-472a-ace2-26ec1d331fa0", "bf444c6d-9a7d-4407-b967-06bb6970d336", "6ff371ec-e9b0-4565-abdb-02aae2160ead", "2d0aa106-98e6-4432-aa01-96c03b9b6cf6", "9edd6eb0-11f2-4f9d-b628-85c0f2a1c392", "36a1a27f-8dcf-4609-8754-96f7122eaf96", "08e7ba49-aeca-4432-b9fa-37168f841114", "76a34532-cc84-43dc-80b5-43e188704277", "45eb2162-74fd-48e2-b6e9-8d57d0bd673c", "02cbb85a-6baf-4061-82d3-5bb1b11b422d", "685312b2-9fc2-4b04-b0e1-02b0348fafe0", "a5c85fe6-1334-4e97-9826-460e6463e1d5", "4dbe88c5-650b-4a46-b73b-71ccadc13647" ], "created_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "created_time": 1495403335718, "format": { "page_full_width": true, "page_small_text": true }, "id": "ed055f63-753e-42ef-9025-e11ac9062c35", "last_edited_by": "bb760e2d-d679-4b64-b2a9-03005b21870a", "last_edited_time": 1545566399253, "parent_id": "aea20e01-890c-4874-ae08-4557d7789195", "parent_table": "block", "properties": { "title": [ [ "C++" ] ] }, "type": "page", "version": 43 } } ] } | Mid | [
0.573951434878587,
32.5,
24.125
]
|
Anti-inflammatory and wound healing effects of gel containing Kaempferia marginata extract. Kaempferia marginata Carey (Zingiberaceae family) has been traditionally used in the treatment of inflammation. The whole plant decoction is used for treatment of fever. Ethanol extracts which exert potent anti-inflammatory properties are selected for wound healing assay. This study aimed to investigate activities of the extract and gel formulation on anti-inflammatory and wound healing activities. The anti-inflammatory and wound healing properties from this plant could support its traditional uses and obtain a new pharmaceutical product with good physical, chemical and biological stabilities. The anti-inflammatory activity was tested on anti-nitric oxide (NO) production using RAW264.7 cells and wound healing assay using human dermal fibroblast (HDF) cells. The results found that the anti-inflammatory activity of gel containing K. marginata at 10% w/w showed the highest activity with an IC50 value of 12.50 μg/ml (Diclofenac gel, IC50 value = 64.90 μg/ml). It also revealed that wound healing activities of K. marginata gel (5% w/w) showed the highest % cell viability (134.05%) and the highest % cell migration at 85.20 using HDF cells. The present study shows that gel containing K. marginata extracts have good anti-inflammatory and wound healing properties in vitro. | High | [
0.6747572815533981,
34.75,
16.75
]
|
Q: SemanticImport fails with all files on Mac OS 10.13 MacOS 10.13 Mathematica 11.2.0.0 This is part of an assignment - my Semanticimport fails with all files though. the error: Dataset ExtractRawData::dataextr: Data extraction failed. The code is in the screenshot attached - all semanticImports fail, not only this specific csv. If I do a simpler import like: dataSemantic = Import[("/Users/dave/Dropbox/Uni/appliedempirical/DATA A0 2.csv")]; dataA0 = Dataset@dataSemantic; it all works out. Can somebody help me with this? EDIT: I have now tried to use semanticImportString and drag and dropped the file. it seems to import it, but after querying I still get an error: SemanticImportString[Missing["PartInvalid", "age"], Missing["KeyAbsent", "age"]] A: This is a bug specific to macOS 10.13 ("High Sierra"). It is caused by incompatibility between the operating system and the golang runtime library used by the binary component of SemanticImport (and of course SemanticImportString). A fix for this issue has been released via paclet update. The update should be installed automatically when SemanticImport is first used in a fresh session, and the following code may be run to obtain the update manually PacletSiteUpdate /@ PacletSites[]; PacletUpdate["SemanticImport"] after which the paclet version should be 0.0.457. | High | [
0.6675824175824171,
30.375,
15.125
]
|
Accurate evaluation of the angular-dependent direct correlation function of water. The direct correlation function (DCF) plays a pivotal role in addressing the thermodynamic properties with non-mean-field statistical theories of liquid state. This work provides an accurate yet efficient calculation procedure for evaluating the angular-dependent DCF of bulk SPC∕E water. The DCF here represented in a discrete angles basis is computed with two typical steps: the first step involves solving the molecular Ornstein-Zernike equation with the input of total correlation function extracted from simulation; the resultant DCF is then polished in second step at small wavelength for all orientations in order to match correct thermodynamic properties. This function is also discussed in terms of its rotational invariant components. In particular, we show that the component c112(r) that accounts for dipolar symmetry reaches already its long-range asymptotic behavior at a short distance of 4 Å. With the knowledge of DCF, the angular-dependent bridge function of bulk water is thereafter computed and discussed in comparison with referenced hard-sphere bridge functions. We conclude that, even though such hard-sphere bridge functions may be relevant to improve the calculation of Helmholtz free energies in integral equations or density functional theory, they are doomed to fail at a structural level. | High | [
0.6604026845637581,
30.75,
15.8125
]
|
[Post-traumatic lumbar hernia and abdominal wall reconstruction technique. A case report]. A little less than half of acquired lumbar hernias are caused by traumatisms: direct parietal contusions, iliac crest biopsies or fractures. Regarding their frequency, they are rare but generally underdiagnosed. Abdominal wall reconstruction is motivated by the risk of hernia strangulation, but also aims to rebuild continent abdominal muscles, eliminating discomfort and the risk of worsening as well as allowing the resumption of physical activities. We report a case of parietal reconstruction of a traumatism-induced lumbar hernia in a 59-year-old male patient. A CT scan showed lumbar disinsertion of the abdominal transversus and both obliquus externus and internus muscles. The pressure exerted on the abdominal muscles, greater than the elastic resistance of the insertion aponeurosis, caused their tearing. The flexibility and elasticity of the skin allowed it to remain intact. We applied Welti-Eudel's technique to suture the dorsal edge of the transverse and internal oblique muscles to a flap coming from the lumbo-dorsal fasciae of the sacrospinalis muscles. A parietal prosthesis was inserted between this deep level and the obliquus externus, which was restored. Results at fifteen months, both morphological and functional, are excellent. A follow-up CT scan shows anatomical restitution of the abdominal muscles. CT scanning of the abdominal muscles is the leading complementary exam. It is repeated after an interval, so that the hematoma does not disturb its interpretation. Surgical indication is definite for active adults. Parietal prosthetic reinforcement, inserted between two muscular levels, avoids late loosening. It has no immediate mechanical role, which is instead provided by an abdominal girdle during healing. | High | [
0.688741721854304,
32.5,
14.6875
]
|
Cytotoxic lignans from the stems of Herpetospermum pedunculosum. A bioassay-guided chemical investigation on the ethyl acetate extract of the stems of Herpetospermum pedunculosum led to the isolation and identification of 22 lignans including 6 previously undescribed ones, herpetosiols A-F. Their structures including stereochemistries were elucidated by analysis of NMR, HRMS and ECD data. The in vitro cytotoxic activities of all isolates were studied against human gastric carcinoma SGC7901, lung carcinoma A549, breast carcinoma MDA-MB-231 and hepatocellular carcinoma HepG2 cell lines. Among them, eight lignans exhibited anti-proliferative effects against four tumor cell lines with IC50 ranging from 1.7 ± 0.1 to 32.6 ± 1.1 μM. Hedyotol-B displayed potent inhibitory effect with IC50 values of 1.7 ± 0.1 μM against SGC7901 and 6.1 ± 0.5 μM against A549, respectively. | Mid | [
0.6535626535626531,
33.25,
17.625
]
|
#!/bin/sh
#
# Copyright (c) 2014-2016 Martin Raiber
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

set -e

. SYSCONFDIR/mariadbxtrabackup.conf

if [ "x$MARIADB_TEMP_INCRDIR" = "x" ]
then
    echo "Mariadb incremental temp dir is empty"
    exit 1
fi

[ ! -e "$MARIADB_TEMP_INCRDIR/last" ] || rm -R "$MARIADB_TEMP_INCRDIR/last"
mv $MARIADB_TEMP_INCRDIR/last.new $MARIADB_TEMP_INCRDIR/last

# Argument one null means main client
if [ $1 = 0 ]
then
    ! [ -e $MARIADB_TEMP_INCRDIR/last_names ] || rm $MARIADB_TEMP_INCRDIR/last_names
    mv $MARIADB_TEMP_INCRDIR/blockalign.data.new $MARIADB_TEMP_INCRDIR/blockalign.data
else
    CURR_BACKUP=$(cat "$MARIADB_TEMP_INCRDIR/curr_name")
    echo "$CURR_BACKUP" >> "$MARIADB_TEMP_INCRDIR/last_names"
    rm "$MARIADB_TEMP_INCRDIR/curr_name"
fi
| Low | [
0.5,
31.125,
31.125
]
|
Evaluation of the water sorption of luting cements in different solutions. To evaluate and compare the water sorption of three luting cements in three different solutions: distilled water and artificial saliva with different pH values (7.4 and 3.0). Resin-modified glass-ionomer cement (GC Fuji Plus) and two resin cements (Multilink Automix and Variolink II) were used. A total of 45 specimens - 15 specimens (15 x 1 mm) for each cement - were prepared according to ISO standard 4049:2009. The water sorption of the cements was calculated by weighing the specimens before and after immersion and desiccation. Nonparametric statistical methods were applied. GC Fuji Plus cement showed significantly higher values of water sorption in all three solutions than both resin cements (p<0.009), and significantly higher values of sorption in artificial saliva pH 3.0. Multilink Automix showed significantly higher values of water sorption compared with Variolink II in artificial saliva pH 7.4, and higher values of sorption in this solution compared with pH value 3.0. Water sorption values are mainly influenced by the proportion of hydrophilic matrix, the type and composition of filler, and the pH value of the solutions. | Mid | [
0.6106557377049181,
37.25,
23.75
]
|
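The abstract above states only that sorption was obtained by weighing the specimens before and after immersion and desiccation; it does not give the formula. As a rough, hypothetical sketch of the usual ISO 4049-style arithmetic (the variable names, units and sample masses below are assumptions, not data from the study), the calculation amounts to:

```c
#include <stdio.h>

/* Assumed ISO 4049-style quantities:
   m1 = conditioned mass before immersion (ug)
   m2 = mass after water immersion (ug)
   m3 = mass after re-desiccation (ug)
   v  = specimen volume (mm^3)                                    */
static double water_sorption(double m2, double m3, double v)
{
    return (m2 - m3) / v;   /* ug/mm^3 */
}

static double solubility(double m1, double m3, double v)
{
    return (m1 - m3) / v;   /* ug/mm^3 */
}

int main(void)
{
    /* Hypothetical masses for one 15 x 1 mm disc (volume ~176.7 mm^3). */
    double v = 176.7;
    printf("sorption:   %.2f ug/mm^3\n", water_sorption(353000.0, 348000.0, v));
    printf("solubility: %.2f ug/mm^3\n", solubility(350000.0, 348000.0, v));
    return 0;
}
```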
Q: "comparison between signed and unsigned integer expressions" with only unsigned integers This warning should not appear for this code should it? #include <stdio.h> int main(void) { unsigned char x = 5; unsigned char y = 4; unsigned int z = 3; puts((z >= x - y) ? "A" : "B"); return 0; } z is a different size but it is the same signedness. Is there something about integer conversions that I'm not aware about? Here's the gcc output: $ gcc -o test test.c -Wsign-compare test.c: In function ‘main’: test.c:10:10: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] puts((z >= x - y) ? "A" : "B"); ^ $ gcc --version gcc (Debian 4.9.1-15) 4.9.1 If z is an unsigned char I do not get the error. A: The issue is that additive operators perform the usual arithmetic conversions on arithmetic types which. In this case it results in the integer promotions being performed on the operands, which results in unsigned char being converted to int since signed int can represent all the values of the type of unsigned char. A related thread Why must a short be converted to an int before arithmetic operations in C and C++? explains the rationale for promotions. | Mid | [
0.597619047619047,
31.375,
21.125
]
|
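For readers who want to see the promotion rather than take it on faith, a minimal variation of the program above (not part of the original question or answer) makes the effect visible and shows one common way to silence the warning; the printed size is platform-dependent, typically 4:

```c
#include <stdio.h>

int main(void)
{
    unsigned char x = 5;
    unsigned char y = 4;
    unsigned int  z = 3;

    /* Both operands of '-' undergo the integer promotions, so x - y is
       computed as (signed) int; the comparison then mixes int with
       unsigned int, which is what -Wsign-compare complains about. */
    printf("sizeof(x - y) = %zu\n", sizeof(x - y));   /* sizeof(int), e.g. 4 */

    /* Casting the result back to unsigned removes the warning. Note the
       cast would wrap around if x - y were negative, so it is only safe
       when x >= y is known to hold. */
    puts((z >= (unsigned int)(x - y)) ? "A" : "B");

    return 0;
}
```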
A former London postman who joined Isis has been charged with involvement in a mass execution in Syria. Harry Sarfo, who is already serving three years in a German prison for terror offences, was not accused of murder until footage of the massacre emerged last year. The federal public prosecutor’s office said he was charged with six counts of murder and violating human rights law at a specialist state security court in Hamburg. “In mid-June 2015 the so-called Islamic State had six prisoners executed on Palmyra’s market square,” a spokesperson said. “Sarfo belonged to the six-member squad that carried out the execution and he was armed with a pistol. “Together with other members of his group, he guarded the prisoners and prevented them from escaping.” Harry Sarfo (top right) was seen pointing a gun towards captives during an Isis execution in Palmyra, Syria, in June 2015 (Washington Post) Prosecutors said Sarfo led one of the captives to the middle of the street, where they were then shot, adding: “During the shooting, he stopped at the side of the road so as not to be hit by a bullet himself. “From there, he took aim and fired at the bodies lying on the ground.” In an interview with The Independent in January 2016, the 28-year-old said he never fought for the terror group during his three months in the “caliphate”. While failing to mention his own involvement in the atrocity, Sarfo named his worst memory of Syria as the “execution of six men shot in the head by Kalashnikovs”, identifying it as one of the events that drove him to flee the terrorist group’s “barbarity”. But Sarfo, who grew up in the UK after moving from Germany as a child, was caught on video herding captives to be executed in the Syrian city of Palmyra. Footage of the massacre obtained by the Washington Post shows Sarfo with a group of Isis fighters led by Austrian Isis fighter Mohamed Mahmoud and German militant Yamin Abou-Zand. He had already appeared in a propaganda video that showed the pair shooting Syrian captives dead in the ancient ruins of Palmyra, while calling on Isis supporters to travel to Isis territories or “kill infidels wherever you find them” in Europe. Former London student in Isis execution video In the second video cited by German prosecutors, which was not released by Isis’ propaganda agency, Sarfo is seen apparently herding one of six captives wearing combat fatigues with their hands bound into a public square in Palmyra. Sarfo stands immobile by a wall for opening seconds of the fusillade, but he then pulls out a pistol and aims it at the men on the ground. The camera is briefly obscured but Sarfo appears to fire towards unmoving victims. It is unclear whether a bullet hit and whether the captives were already dead. German prosecutors said five of those killed were members of the Syrian army, while the sixth was a Sunni preacher condemned by Isis, which itself claims to represent Sunni Muslims. Footage of the massacre was leaked by a source inside Isis, which is intensifying efforts to discredit defectors and featured Sarfo in a recent propaganda magazine decrying “fools who strayed” and spread “lies and falsehoods”. Since being jailed he has spoken out against Isis’ ideology and said he wants to work with young men and women at risk of radicalisation. “I've realised that what they are claiming to be Islamic is totally un-Islamic,” he told The Independent in an interview conducted via his lawyer from prison. 
“I came to the conclusion that this is not the path to paradise, it is the path to hell.” Sarfo was sentenced to three years in prison for membership of a foreign terrorist organisation in July last year, having travelled to Syria in March 2015. Prosecutors have also opened a separate case into accusations of war crimes, which continues. Sarfo fled back to Germany in July 2015 and was immediately arrested upon his arrival at Bremen airport. A German citizen of Ghanaian descent, he converted to Islam aged 20 in London, where he attended Leyton Sixth Form College and Newham College of Further Education. He worked at Wickes and as a postman for Royal Mail before being sent back to Germany to serve a prison sentence for involvement in a 2010 armed robbery at a supermarket. After being jailed with a known al-Qaeda recruiter, Sarfo said he “learned the ideology of jihad” and joined an extremist mosque after being freed, later deciding to join Isis after being repeatedly searched, detained and questioned by counter-terror police. He said he trained in Isis’ special forces in its Syrian territories but fled the group before taking part in any operations, maintaining he did not kill anyone and refused to launch terror attacks in Europe. | Low | [
0.493827160493827,
25,
25.625
]
|
Q: Using SIMPLEPIE to fetch Facebook Fan Page RSS
I'm using simplepie to fetch the rss of my a facebook fanpage and it works fine except that it repeats the images several times for all the posts that were inserted on that facebook fanpage through RSS for Pages that I use to get all my behance updates directly on facebook.
Don't know how to do a proper jsfiddle with the simplepie API so can't show you much besides my current code.
<?php
// Make sure SimplePie is included. You may need to change this to match the location of simplepie.inc.
require_once('php/simplepie.inc');
// We'll process this feed with all of the default options.
$feed = new SimplePie();
// Set which feed to process.
$feed->set_feed_url('http://www.facebook.com/feeds/page.php?format=rss20&id=242469109162998');
// Run SimplePie.
$feed->init();
// This makes sure that the content is sent to the browser as text/html and the UTF-8 character set (since we didn't change it).
$feed->handle_content_type();
// Let's begin our XHTML webpage code. The DOCTYPE is supposed to be the very first thing, so we'll keep it on the same line as the closing-PHP tag.
?><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<title>Sample SimplePie Page</title>
<meta http-equiv="content-type" content="text/html; charset=UTF-8" />
</head>
<body>
<div id="getall">
<?php
/*
Here, we'll loop through all of the items in the feed, and $item represents the current item in the loop.
*/
foreach ($feed->get_items() as $item):
?>
<div class="item">
<h2><a href="<?php echo $item->get_permalink(); ?>"><?php echo $item->get_title(); ?></a></h2>
<p><?php echo $item->get_description(); ?></p>
<p><small>Posted on <?php echo $item->get_date('j F Y | g:i a'); ?></small></p>
</div>
<?php endforeach; ?>
</div>
What do y'all have to say? Any work around?
A: Why not to use the php sdk and get the feed with an api call?
# After de facebook object instantiation
$feedArray = $facebook->api('pageID/feed');
Then you only apply the foreach to the $feedArray and you display it as you want it. | Low | [
0.5316973415132921,
32.5,
28.625
]
|
Electric vehicles and electric-hybrid vehicles are gaining in popularity with consumers. The electric motors in these vehicles are typically powered from multiple storage batteries disposed in a battery pack in the vehicle. If the battery needs to be recharged while the vehicle is parked, a wired coupling device is connected to the vehicle, typically by the vehicle operator. However, some operators object to having to ‘plug-in’ their vehicle each time the vehicle is parked. Wireless or connector less battery chargers have been proposed, see U.S. Pat. No. 5,498,948 issued Mar. 12, 1996 to Bruni et al. and U.S. Pat. No. 8,008,888 issued Aug. 30, 2011 to Oyobe et al. A known wireless battery charger includes a source resonator or charging pad lying on a parking surface under the vehicle being charged, and a corresponding capture resonator mounted underneath the vehicle. Such wireless battery chargers are most efficient when the vehicle is parked such that the source resonator and capture resonator are horizontally (i.e. laterally and longitudinally) aligned. However, as the source resonator and the capture resonator are underneath the vehicle and/or out of the vehicle operator's view, it is difficult for the vehicle operator to judge where to park the vehicle so that the source resonator and the capture resonator are aligned. Some current wireless charging systems rely on methods to align the capture resonator attached to the undercarriage of a vehicle with its corresponding source resonator using trial and error positioning of the vehicle relative to the source resonator. These methods are time intensive, with poor repeatable results. Other wireless charging systems utilize wheel stops to align the capture resonator on the vehicle with the source resonator. While these systems may provide precise alignment for one particular vehicle configuration, they are unlikely to provide adequate alignment for a wide variety of vehicles wherein the spatial relationship between the wheels and capture resonator differ. Still other wireless charging systems provide a magnetic beacon signal to guide the vehicle to align the capture resonator with the source resonator. Examples of such systems are described in U.S. patent application Ser. No. 13/677,362 and U.S. patent application Ser. No. 13/677,369, both filed Nov. 15, 2012. The subject matter discussed in the background section should not be assumed to be prior art merely as a result of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches, which in and of themselves may also be inventions. | Mid | [
0.633569739952718,
33.5,
19.375
]
|
Car -1 -1 -10 405 182 484 243 1.52 1.65 4.09 -4.48 1.46 20.09 -1.67 1.00 Car -1 -1 -10 82 187 277 275 1.49 1.66 4.52 -8.40 1.54 14.56 -1.68 1.00 Car -1 -1 -10 524 179 581 235 1.61 1.66 3.69 -1.85 1.45 22.74 -1.69 1.00 Car -1 -1 -10 290 187 371 229 1.38 1.64 4.24 -9.88 1.48 25.92 -1.66 1.00 Car -1 -1 -10 507 181 529 199 1.45 1.59 3.43 -7.34 1.14 59.14 -1.63 0.97 Car -1 -1 -10 348 181 401 214 1.49 1.63 4.04 -11.27 1.34 34.97 -1.67 0.89 Car -1 -1 -10 260 184 320 214 1.50 1.63 4.01 -16.37 1.45 37.20 -1.66 0.85 Car -1 -1 -10 -14 189 172 261 1.45 1.64 4.06 -12.12 1.55 16.79 -1.65 0.81 Car -1 -1 -10 153 170 245 218 1.55 1.63 3.73 -14.29 1.08 25.30 -1.74 0.41 Car -1 -1 -10 -3 182 65 233 1.46 1.47 3.53 -17.94 1.39 22.41 0.96 0.02 | Low | [
0.48243559718969503,
25.75,
27.625
]
|
If anthropology is global, then so are its ethical dilemmas. This paper reviews three personal situations of the author as a student: an NGO activist in the women's movement and a UN official working on women's health and tobacco. Each situation, the kinds of ethical issue posed and lessons for the future direction of ethics in anthropology are outlined. The paper argues that applied anthropology is embedded in the position of the field of anthropology within the politics of the State and that this defines the parameters of individual choice. Contradictions in ethical situations and decision-making are posed by changing from the distant observer to active involvement and these may not be resolved by logic alone. Furthermore, governance and defining global ethical standards such as for the behavior of the tobacco multinational corporations is a fascinating new realm for anthropological ethical resolve. | High | [
0.6616915422885571,
33.25,
17
]
|
BACKGROUND ========== Surgical correction of elongated labia minora (LM) is one of the most sought reconstructive procedures after plastic surgery.^[@R1],[@R2]^ The illusion of simplicity of such operations and failure to observe important technical aspects often lead to total resection of the labia, causing suffering of patients and problems for operating doctor. In such cases, reconstructive surgery is the only possible way to rectify the situation. MATERIALS AND METHODS ===================== We performed 10 reconstructions of LM. In 6 cases, the absence of LM is the result of the surgical correction of elongated LM \[excessive resection edge; 4 cases: bilateral absence and 2 cases: unilateral (after correction of asymmetry LM)\]. In 2 cases, unilateral absence of LM was the result of cysts excision of LM and in the other 2 cases a result of injuries in childhood. Saved the hood of clitoris, formed by the lateral pedicle of LM, an adequate supply of these tissues, or extra folds of skin extending from the anterior commissure to the middle third of LM^[@R1],[@R2]^ was the plastic material for reconstruction. We have formed 2 (bilateral) or 1 (from the side of the lesion when unilateral absence) long flap with back pedicle that has been raised above the base, laid back, and sewn to defect of LM.^[@R2],[@R3]^ RESULTS AND DISCUSSION ====================== The causes of acquired deformities of the LM are injuries and surgery operations, especially excessive resection, performed for the treatment of elongation or hypertrophy. Reconstruction of LM is a difficult task, because such corrections are made with a limited amount of local tissue. Psychological aspects are also very important because the patient's expectations, "ideal" result of the operation, cannot always be achieved. Nevertheless, in most cases, you can create normal aesthetic and functional LM. Flaps have a good blood supply, which allows to create flaps without the risk of necrosis. CONCLUSIONS =========== \(1\) Absence of LM is often a result of aesthetic surgery; therefore, these operations must be performed by qualified specialist depending on the length of the LM and preoperative marking, avoiding excessive resection. (2) Reconstruction of LM from local tissue allows the surgeon to create perfect normal labia and save patients from suffering. Presented at the Plastic Surgery, Esthetic Medicine and Cosmetology, IV National Congress, Moscow, Russia, December 3--5, 2015. **Disclosure:** The authors have no financial interest to declare in relation to the content of this article. The Article Processing Charge was paid for by the authors. **Plastic Surgery, Esthetic Medicine and Cosmetology, IV National Congress**, in Moscow, Russia on December 3-5 2015. | High | [
0.6697459584295611,
36.25,
17.875
]
|
China’slargest bank has backed out of a deal to finance a proposed Iran-to-Pakistan gas pipeline that is opposed by the United States, a potential sign of the lengthening reach of U.S. economic sanctions on Iran. Pakistani officials confirmed Wednesday that Industrial and Commercial Bank of China had withdrawn from plans to head a consortium that would finance the $1.6-billion Pakistani portion of the cross-border pipeline, apparently over concern that the bank could be excluded from the U.S. economy. The move suggests that even Chinese companies, which have staunchly resisted U.S. and European efforts to punish Iran for its nuclear program, are beginning to bend to the sanctions on Tehran. China’s unwillingness to fully cooperate had been one of the greatest challenges to the international effort to put economic pressure on Iran. China’s decision is a setback for the Pakistani government, which fears that dire energy shortages could lead to civil unrest as well as economic strain. Pakistani officials said they would press ahead with the project, which would deliver more than 750 million cubic feet of natural gas per day from Iran’s South Pars field. They said they would find replacement financing. “There are always a multiplicity of funding sources which are available for any project,” Pakistani Foreign Minister Hina Rabbani Khar told reporters in Islamabad, the capital. “This is a fairly viable project and we hope we will not have any problem in trying to find ways and means of ensuring its funding.” Secretary of State Hillary Rodham Clinton has warned that there would be “damaging consequences” if Pakistan continued with the pipeline because U.S. sanctions bar American financial institutions and companies from doing business with any concerns that have ties to Iran. The U.S. and its allies believe Iran is attempting to obtain nuclear weapons, despite Tehran’s insistence that its nuclear research is for peaceful purposes. Washington backs an alternative pipeline project that would transport natural gas from Turkmenistan through Afghanistan and into Pakistan and India. But because of the war in Afghanistan, it’s unclear whether that pipeline will be built. China has questioned the legitimacy of unilateral national sanctions on Iran, and several Chinese energy companies have ignored them. But Chinese financial institutions that do business with the United States apparently don’t want to risk those dealings. “U.S. banks increasingly are not willing to do business with foreign financial institutions doing business with Iran,” said Mark Dubowitz, an energy specialist at the Foundation for Defense of Democracies, a nonpartisan think tank. [email protected] [email protected] Richter reported from Washington and Rodriguez from Islamabad. | Mid | [
0.6140724946695091,
36,
22.625
]
|
[youtube=http://youtu.be/6dI-dNE2yQ0] One of the most exciting developments to emerge out of the maker community is the growth of robotic prosthetics. Robots that play games, throw balls, and follow lines around a room are neat, but it’s great to see the development of robots that do something useful, like improving the lives of people who’ve lost their hands. The reasons for this trend are many. Eighteen-year-old roboticist Raj Singh told me that after building a series of game-playing robots he simply wanted a greater challenge and set out to build a robotic arm that helped amputees and people with disabilities. Rising maker star Easton LaChapelle had similar motivations. The price of materials and available equipment has been a driver, too. The low cost of 3D printing has brought prohibitively expensive robotic prosthetics down to earth and opened the doors to a new class of maker-made robotic hands. The Dextrus robotic hand by Project Open Hand is a newcomer that caught our eye. Sensors mounted on the user’s forearm move the hand. The hand can be connected to an existing prosthesis with a standard connector. The plan is to sell the hands for less than $1,000. | Mid | [
0.632124352331606,
30.5,
17.75
]
|
Top Of The Town Mare Rooftop’s igloos are a pretty - and practical - addition Posted Wednesday, March 13, 2019 11:37 am Rooftop orbs take Wayland Square to new heights Photo by Ryan Pickering By Megan Schmit Mare Rooftop has become a trendy destination tucked away from downtown. Since their opening in summer of 2018, they’ve won guests over with their sweeping views, specialty drinks, shareable plates, a recently added brunch menu, and, as of December, luxurious outdoor igloos. Thanks to these five translucent bubbles lining the rooftop deck, Mare has managed to make their winning feature enjoyable during all four seasons. At night, they glow in iridescent shades of blue and pink. Each one is heated, can hold up to 10 guests, and makes for a stunning backdrop to the already picturesque bar/restaurant. Visitors can enjoy selections from a special Igloo Menu, featuring gourmet hot drinks like cider and spiked coffee, plus chilled or hot bites and treats. Finally, a rooftop cocktail isn’t just a summer indulgence – it’s a year-round affair. | Mid | [
0.650537634408602,
30.25,
16.25
]
|
//========= Copyright © 1996-2002, Valve LLC, All rights reserved. ============
//
// Purpose:
//
// $NoKeywords: $
//=============================================================================
#ifndef VGUI_COMBOKEY_H
#define VGUI_COMBOKEY_H
#include<VGUI.h>
namespace vgui
{
enum KeyCode;
class ComboKey
{
public:
ComboKey(KeyCode code,KeyCode modifier);
public:
bool isTwoCombo(KeyCode code,KeyCode modifier);
protected:
bool check(KeyCode code);
protected:
KeyCode _keyCode[2];
friend class Panel;
};
}
#endif | Low | [
0.406542056074766,
21.75,
31.75
]
|
There are a number of applications where it is desirable to be able to identify an unknown location of an object which emits a signal. One example occurs when planning an indoor wireless local area network (LAN) having one or more RF or microwave emitters. Of course precisely defining an object's location requires specifying coordinates in three dimensions (e.g., longitude, latitude, and altitude). In the discussion to follow, for simplicity of explanation it is assumed that the third coordinate (i.e., altitude) is either known or is otherwise easily determined once the other two coordinates (e.g., latitude and longitude) are identified. Those skilled in the art will be able to extrapolate the discussion to follow to the case where all three coordinates are to be determined. There are a few known methods to locate signal emitters using a plurality of distributed sensors, or receivers, which are spaced apart from each other. Among the most common of these methods are: Time Difference of Arrival (TDOA), Time of Arrival (TOA), Angle of Arrival (AOA), and Received Signal Strength (RSS). The TDOA method, also known sometimes as multilateration or hyperbolic positioning, is a process of locating an emitter by accurately computing the time difference of arrival (TDOA) of a signal emitted from the emitter to three or more sensors. In particular, if a signal is emitted from a signal emitter, it will arrive at slightly different times at two spatially separated sensor sites, the TDOA being due to the different distances to each sensor from the emitter. For given locations of the two sensors, there is a set of emitter locations that would give the same measurement of TDOA. Given two known sensor locations and a known TDOA between them, the locus of possible locations of the signal emitter lies on a hyperbola. In practice, the sensors are time synchronized and the difference in the time of arrival of a signal from a signal emitter at a pair of sensors is measured. With three or more sensors, multiple hyperbolas can be constructed from the TDOAs of different pairs of sensors. The location where the hyperbolas generated from the different sensor pairs intersect is the most likely location of the signal emitter. In the TOA method, a signal emitter transmits a signal at a predetermined or known time. Three or more sensors each measure the arrival time of the signal at that sensor. The known time of arrival leads to circles of constant received time around each sensor. The locations where the circles from the three or more sensors intersect are the most likely location of the signal emitter. In the AOA method, the angle of arrival of the signal is measured with special antennas at each receiver. This information is combined to help locate the signal emitter. In the RSS method, the power of the received signal at each sensor is measured, and this information is combined to help locate the signal emitter. There are a few different emitter location procedures that employ RSS. For example, one commonly used method in planning indoor wireless LAN systems in a building of interest is to map the received signal strength at various locations around the building during a setup phase. From this map, a variety of algorithms can be used to locate the signal emitter based on computed received power at three or more sensors. A more detailed explanation of principles employed in an RSS method of locating a signal emitter will now be provided, particularly illustrating a case involving an RF emitter and RF sensors. FIG. 
1 illustrates a general case of an RF emitter 110 and two RF sensors 122 and 124. In free space, the received power of a signal transmitted by RF emitter 110 decreases with the square of the distance from RF emitter 110. For indoor or dense urban environments the power fall-off is even steeper, for example r^-3 or r^-4, where r is the distance from RF emitter 110. In general, given a transmitted power P0 measured at distance r0, the power P1 received at first RF sensor 122 is: P1 = P0 (r0/r1)^n, (1) where r1 is the distance between RF emitter 110 and first RF sensor 122, and n is the exponential rate at which the power decreases with distance. Likewise the received power P2 at second RF sensor 124 is: P2 = P0 (r0/r2)^n, (2) where r2 is the distance between RF emitter 110 and second RF sensor 124. This leads to: P1/P2 = (r2/r1)^n (3) With a bit of manipulation this yields: 10^(log(P1/P2)/n) = r2/r1 = const = α (4) It can be shown that this leads to a circle of a given radius and centered on the line defined by the two RF sensors. FIG. 2 illustrates an exemplary circle generated by power measurements of a signal transmitted by RF emitter 110 and received at RF sensors 122 and 124. With at least three RF sensors, three such circles are generated, and the location of RF emitter 110 can be found where the three circles intersect. With many sensors, it is possible to increase the accuracy by determining the point where most of the generated circles intersect. However, the addition of measurement uncertainty and noise makes this a difficult problem to solve analytically with a high degree of accuracy. Moreover, using just the measured signal power, as is typical in most RSS methods, multiple emitters transmitting from different locations at the same time with the signals having the same characteristics (e.g., frequency, bandwidth, etc.) leads to confusing results for the emitter location. Furthermore, with existing equipment, it is often difficult for a troubleshooter to easily and efficiently view all of the relevant data of interest to allow a clear picture of any coverage and interference issues. More robust data analysis and data presentation capabilities are needed. In particular, methods are needed that are robust when multiple emitters are present that transmit signals at the same time and on the same frequency. What is needed, therefore, is a method and system for locating signal emitters that addresses one or more of these shortcomings. | Mid | [
0.651515151515151,
32.25,
17.25
]
|
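The circle mentioned after equation (4) is easy to make concrete numerically. The short C sketch below illustrates that relationship only; the sensor coordinates, received powers and path-loss exponent are invented example values rather than figures from the patent, and it assumes α ≠ 1 (with equal received powers the locus degenerates into the perpendicular bisector of the segment between the sensors):

```c
#include <math.h>
#include <stdio.h>

/* Illustrative sketch of equation (4) and the resulting circle of
   candidate emitter locations for one pair of sensors. All numeric
   inputs are made-up example values. */

typedef struct { double x, y; } point;

int main(void)
{
    point s1 = {0.0, 0.0};      /* first RF sensor  */
    point s2 = {10.0, 0.0};     /* second RF sensor */
    double p1 = 2.0e-6;         /* received power at s1 (W) */
    double p2 = 0.5e-6;         /* received power at s2 (W) */
    double n  = 3.0;            /* assumed indoor path-loss exponent */

    /* Equation (4): r2/r1 = 10^(log10(P1/P2)/n) = (P1/P2)^(1/n) = alpha */
    double alpha = pow(p1 / p2, 1.0 / n);

    /* Points X with |X - s1| / |X - s2| = 1/alpha lie on a circle whose
       centre sits on the line through the two sensors (alpha != 1). */
    double k  = 1.0 / alpha;
    double k2 = k * k;
    point centre = { (s1.x - k2 * s2.x) / (1.0 - k2),
                     (s1.y - k2 * s2.y) / (1.0 - k2) };
    double d = hypot(s2.x - s1.x, s2.y - s1.y);
    double radius = k * d / fabs(1.0 - k2);

    printf("alpha = %.3f, centre = (%.2f, %.2f), radius = %.2f\n",
           alpha, centre.x, centre.y, radius);
    return 0;
}
```

With a third sensor, two more such circles can be formed pairwise and their common intersection taken as the likely emitter position, as the passage describes.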
"Brought to you by WITH S2 Written In The Heavens Subbing Squad" "Episode 24" "Notice Suspension of Business" "We weren't prepared for this at all." "We never even dreamed this could happen." "Aigoo." "People's lives are so unpredictable." "A person's life is truly nothing." "Those people that came and went like this was their home when Father was alive, how could they just completely sever ties like that?" "Hey Seo Tae Jo!" "How dare you come here!" "What the hell are you thinking stepping in here again!" "Gap Soo Ahjussi!" "Hey, you punk." "How do you think Master got that way?" "Because of who?" "He died because of everything you caused!" "What are you going to do now?" "What are you going to do about it, punk?" "Huh?" " What are you going to do about it?" " Calm down." "Even ingratitude has its limits!" "Aigoo, Master!" "Aigoo, Master!" "Aigoo, Master!" "Stop it, all of you, and calm down." "Tae Jo, come in." "You should at least pay your last respects to the Master." "What are you doing?" "Why don't you come in to pay your final respects?" "Tae Jo." "Come on in." "I didn't know it would be so barren when Father made his final journey." "Over a few lines printed in the newspapers, how could people cut him out so completely?" "It makes me fear that humanity is really like this." "It may be that people are late because it was such a sudden announcement." "He was a man that only ever knew bread." "To think that he had to suffer such a disgrace in his final days... truly, there are no words that can describe my feelings." "The Master said this, that the truth will always prevail." "I'm sure the people who knows our teacher still respect him as always." "It's time for the funeral procession, Boss." "Oh, right." "But, Il Jung Hyungnim, do you really think it will be all right?" "He is forever my Master." "It is just proper that I escort him on his final journey." "I am truly so sorry." "We just received the announcement." "We ran like mad from all over to get here on time and only just arrived." "Reverently, we pray, may the Master rest in peace." "Even though we're considered merely as bread guys baking bread worth a couple hundred Won," "we are master artisans that spend our lives in the pursuit of creating the taste of that bread." "Don't ever forget that, Tak Gu." "Here is today's schedule, President." "Are you all right?" "President." "Yes, I'm all right." "My arm is just a little sore." "Let's resume last night's new product development meeting." "Yes, all right." "Starting the day you went for the funeral services, for 3 days straight, he's been having meetings with the Board of Directors." "I believe President Il Jung's judgment is getting more and more cloudy." "At this rate, it looks like our Geosung managerial rights will be passed on to mere little bakeries." "Does that make any sense?" "At next week's Board of Director's Meeting," "I think we'll need to show him exactly where we stand." "[Fermentation Journal]" "Tae Jo," "If you are reading this letter, it probably means you've taken my fermentation journal, right?" "Did you want to see me, Boss?" "Yeah, come in." "Okay." "How are you feeling?" "Don't worry." "I'm okay." "I called you in here because I have something to give you." "It looks like Father left this for you." "This is the task" "I gave you all for the 3rd round of the competition." "[The world's happiest bread]" "Finding the world's most filling bread was in the spirit of thinking of others." "Finding the world's most interesting bread... 
was in the spirit of enjoying yourself." "Finding the world's happiest bread... is in the spirit of finding the bread you are to make for all the days of your life." "This is the final task I will give to you, so I hope in earnest that you will carry it out." "Master." "Master." "Master!" "What you asked me before..." "Does your request still stand?" "Are you saying you'll help me?" "You said that if I couldn't do it for you, do it for Tak Gu." "Then, will it also be a revenge against the people who hurt the Pal Bong Bakery?" "It can also be dangerous." "What is it I need to do?" "What are you saying?" "You haven't been able to obtain that 8% of stocks?" "It appears that there's some nervousness in supporting Ma Jun." "What do I need to do?" "The marriage you mentioned to Seo Chang Produce family's daughter," "I think it would be a good idea to move that along more quickly." "I understand what you're saying." "I'll give it priority." "So, Na Jin, when do you return to the US?" "Next week." "Ah, I see." "This may sound sudden, but our Ma Jun has returned and all..." "I'm wondering, how about an engagement in the fall?" "Of course right now, Na Jin, school is probably more important to you, right?" "That doesn't matter." "If we agree on a date, I can come out briefly in the middle of the session." "Then, shall we plan on it, Na Jin?" "What do you think, Madam Lee?" "Well, if our Na Jin wants to, well..." "I'm sorry, but I can't." "Na Jin," "I'm not going to marry you." "I won't go through with this marriage." "Then, I'll be leaving now." "Ma Jun." "Ma Jun." "You stop right there." "What on earth are you doing?" "Why throw ash on a completely set dinner table!" "Can you please stop doing this!" "Did I ever agree to that marriage?" "I never agreed to it!" "Is this marriage bad for you?" "This is all for your benefit." "This is to lay the foundation for your support!" "I told you, it's not just one or two people that have staked their lives for you." " So, why can't you come to your senses?" " Who asked them to do that for me!" " I never asked for that!" " Gu Ma Jun!" "I love that girl!" "What?" "I love Shin Yu Kyung, Mom." "I can't live without her." "If even for a moment, she's not with me," "I get anxious and it drives me crazy." "If I don't see her, I miss her so much I can't breathe, and at that moment, it feels like I'm dying." "Now, I don't have anyone left." "Yu Kyung is the only one I have left." "You know that?" "What do you mean she's all you have?" "You have a mother." "You have Geosung!" "Other than you, other than Geosung, I have no one left." "You little half-wit." "Are you really going to disappoint your mother like this?" "Up to now, I've believed in you and you only." "I lived all my life for this moment, and you... you betray me like this?" "Mom, please!" "I don't want to hurt you." "So, please leave me alone." "Let me breathe, please!" "You're the one that needs to let go of this cowardice and focus on what you need to do!" "There's no way, I'm going to lose you to a worthless girl like Shin Yu Kyung!" "I absolutely can't, I won't!" "Mom!" "The more you act this way," "Shin Yu Kyung's life will become more miserable, Ma Jun." "And you, you will never win over this mother of yours." "Understand?" "We'll plan for an engagement with Na Jin before the end of September." "That's how it's going to be." "I'll take you home." "Get in." "That's okay." "You can just go." "It's 15 minutes by bus." "What do I need to do to become happy?" 
"What can I do to obtain that so called happiness?" "Do you know?" "You want to become happy?" "If I live with you, could it happen?" "I don't know." "But you and I are people who can't hope for happiness, aren't we?" "After all, I'm someone who's using you to win against your mother, and you're using me to hurt Tak Gu, and take me away from him." "We are together for reasons that's so far from happiness." "Could we actually ever attain happiness?" "That's true." "Hearing you say that, I see that's the case." "I have something to give you." "This weekend when you come to dinner, make sure you wear this bracelet." "What meaning does it hold?" "It means that I'm going to marry you." "Perhaps it means, it may cause the beginning of misery." "And, if knowing that, you will continue on this path with me," "I'll take it to the very end." "It means something like that." "I'll see you this weekend." "I'll come and pick you up." "Mi Sun." "Why... didn't I know when he was around?" "How great his presence was." "I miss Grandpa." "My grandpa..." "I miss my grandpa so much." "I miss my grandpa so much, Tak Gu!" "Grandpa!" "[The world's happiest bread]" "[Baker King, Kim Tak Gu]" "Isn't that the smell of bread baking?" "Excuse me?" "Uh." "It is." "It smells like it's coming from our bakery." "Who in the world is it?" "So early in the morning?" "Come and eat, everyone!" "Breakfast is here!" "Hmm?" "What is all this?" "Fresh baked breads." "I brought it for your breakfast." "I don't know how it will taste." "But, I made it just as Master made it for me last." "Aigoo, the bread is really moist." "Is this bread or collagen?" "It's so soft." "It's really good, really good, eh?" "I think the kneading must have been perfect." "The flavor is just right, and the outside has a nice crisp sheen and color." "And the individual shapes came out really nicely, Boss." "Looks like we can eat these now without jam or cream." "So, I guess it means you're now making some proper edible bread." "Boss..." "It's not just edible." "It's tasty." "It's really good, Tak Gu." "It's a lot like the bread Father used to make." "Thank you." "Why are you just standing there?" "Come sit, huh?" "Sit down and eat with us." "Yes, Boss." "Now then." "What?" "Jin Gu Hyungnim left?" "Mmm." "He said it would be awkward saying goodbye if everyone knew, so he just said goodbye to my father and left early this morning." "Still, how could he leave without a word to any of us?" "So, where did he go?" "What bakery?" "I don't know." "He didn't say anything to Dad either." "He just said he couldn't return for a while." "That's all he said." "Where on earth has he gone?" "It's good you came." "The real fight begins now." "I'm looking forward to what you can do." "Madam, you've come." " Where's the President?" " He's not in right now." "He's out for a dinner appointment." "Where did he go?" "We don't really know." "Place a call to him." "Excuse me?" "Driver Yoon must have taken him." "Call him." "Yes, Madam." "President." "No, don't stand up." "So, what did you want to see me about?" "There's no reason." "I just wanted to see you and have dinner with you, like this." "What just happened?" "14 years ago, when I was in an accident," "I hurt my eyes a bit." "Occasionally, I have problems with my vision." "I'm sorry." "I am so very sorry." "If words like that could make me forget all of the past, how nice that would be." "What should I do?" "If I ask you to do something, could you even do it?" 
"In regards to what you are planning right now," "I can't do anything." "The reason suddenly dawned on me, the reason, why you suddenly stunned me about my mother's death," "You thought I wouldn't know the reason?" "This kind of revenge isn't right." "Your hurt will only deepen." "Oh, how grateful I should be." "You're not possibly showing concern for me, are you?" "Can't you forgive what happened in the past?" "Starting now, I'll rectify the things that have gone wrong." "Is it not possible?" "Why didn't you take care of him?" "Our Tak Gu..." "You promised to look after him, so why didn't you?" "Why?" "Look..." "Those people, do you know what they did to our Tak Gu?" "They tried to send him off onto a deep sea fishing boat." "They tried to send off a 12 year old child onto a fishing boat, the young Madam and Manager Han!" "What are you saying?" "Living under the heavens, people can't do such dreadful things." "As a human being, those are things one absolutely mustn't do!" "Are you... really saying, my wife did those things?" "Forgive what's happened in the past?" "For the past 14 years, my heart's been pierced and resentment has been festering in my blood, and you ask me to forget all those years?" "I can't do that." "I absolutely won't forgive." "If you leave like this, what am I supposed to do?" "Follow them." "Excuse me?" "What are you waiting for?" "Follow that car immediately." "Ah, ah, yes." "President!" "Are you all right?" "I'm all right." "My hand is just a bit sore." "Um, but President..." "Actually, outside..." "Madam was here." "What are you talking about?" "She followed the woman you were with earlier." "The President's wife's car is still following us." "Shall I lose her?" "Let's go there." "Excuse me?" "Let's take her there." "All right, I understand." "Hello." "Ah yes, President." "We're headed toward Cheongsan." "[The world's happiest bread]" "You must be tired following such a long way." "Let's talk." "I had something to talk to you about as well." "Shall we walk?" "I have some place to show you too." "Where in the world are you going?" "We're almost there." "Follow me just a little longer." "They went towards the cliff." "What?" "It was here." "What?" "This is where" "I fell and died, 14 years ago." "Did you bring me all this way to complain about that?" "What could I do to make your life difficult?" "What could I do to take revenge on you painfully?" "For the last 14 years, that's all I thought of, again and again." "Then, you've succeeded somewhat." "Because right now, I'm seriously upset by you." "If you insist on crossing a line you shouldn't with my husband, then I won't leave you alone any longer." "Let's end it here." "A heart full of hate for a person has no end." "The more pain you cause, the more painful things happen." "You..." "What are you thinking right now?" "You wouldn't..." "Let's end it right here, Madam." "You're crazy!" "What the hell are you doing?" "Let go!" "Let go!" "With the two of us gone, everything will be fine." "Let's go together." "Even if I were to die, do you think I'll die with you?" "Let go of me." "Let go immediately!" "I can't do that." "Let's go!" "Stop it right now!" "Honey!" "Let go of that hand." "I don't want to." "Please let it go." "The person you need to punish is not this person, but me." "All of this was my fault." "It happened because of my order." "It was me that sent someone to protect you." "The person who sent someone to separate you and Tak Gu was also me." "So the person you need to punish is not her, but me." 
"Why... why..." "Tak Gu, that child," "I wanted to bring him up solely as my legitimate son." "I wanted to separate you from that child, and make him solely the eldest son of Geosung." "All of it was my fault." "So, please stop." "Even though it's only but an empty shell, but to me," "it's a family I must protect." "She is my wife, and they are my children." "So, please stop now." "Even if it's only for Tak Gu, whom you miss so much, please stop now." "What?" "Tak Gu... your son." "You think you'll deceive me again with those words?" "Do I look like I'm deceiving you right now?" "Is it true?" "Is my Tak Gu... truly, is alive?" "He's grown up to be a fine young man." "He's now an excellent baker." "My..." "Tak Gu?" "Tak Gu." "Tak Gu." "President!" "President!" "President, what's the matter?" "What's the matter, President?" "President!" "Ahjumma, I'll get it." "Hello?" "Mom." "Why's your voice sound like that?" "What's going on?" "What?" "The President?" "Yes." "They are currently headed to the hospital." "I think you should go, Manager." "Honey!" "Open your eyes." "Honey, what's the matter?" " Cerebral hemorrhage?" " Yes" "Then, when will he wake up?" "We can't guarantee anything." "Even if he regains consciousness, he may be paralyzed on one side." "Right now, all we can do is wait..." "You're a doctor." "Is that all a doctor can say?" "My husband is lying there unconscious, and you say, all we can do is wait?" "All we can do is wait!" "Mom, calm down." "We understand what you're saying for now, Doctor." "We'll wait." "What do you mean we'll wait!" "Immediately, bring my husband back to consciousness." "Bring my husband back to consciousness!" "Doctor, you can go." "Save my husband first." "Where are you going?" "Where are you going?" "Stop it, Mom!" "The doctor has done everything he can." "If all we can do is wait, we have to wait!" "Please, calm down." "If you act like this, it becomes harder for all of us." "Please, let's just pray that Dad can wake up safely, huh?" "Save him!" "Save him!" "Save him!" "Why didn't I stop?" "It's all my fault." "This is all happening because of me." "Doctor, please save the president." "Will you?" "No matter what, he has to wake up." "Only then... only then, can I face the senior Madam after I'm gone." "Only then, can I look for Tak Gu." "Is this guy living okay?" "You look exhausted." "Yeah, a little." "How did you know?" "I'm supposed to return to the secretary's office on Monday." "I was in the office talking about that." "That secretary's office, do you have to go back there?" "How's the President?" "Is he okay?" "I don't know." "What about you?" "Are you okay?" "And what's this now?" "Are you being polite, or are you really concerned about me?" "Go on and check in on the President." "I better go and catch the last bus." "Just stay 5 minutes." "Just 5 minutes." "You, what are you doing here right now?" "I'm asking you." "What are you doing here!" "It's gone." "What?" "It's not at the office and it's not here." "What's not?" "I can't find the President's company stock shares certificate." "Are you in your right mind?" "He's lying there, and we have no idea when he may wake up, and you can look for that sort of thing right now!" "Because the President is on his sick bed, of course, we need to get our hands on it first." "Have you forgotten that his eldest son according to the family registry is Tak Gu?" "If by any chance, the President's stocks are passed on by a will or default order, the beneficiary is not Ma Jun, but Tak Gu." 
"We have to block that situation." "What brings you here, sir?" "Are you President Gu Il Jung's eldest son, Kim Tak Gu?" "Yes, that's right." "But, who are you?" "I'm Geosung Foods' legal counsel, Park In Taek." "It's not here." "Where is it?" "Mom, we have to go see Father at the hospital..." "Mom!" "Where the hell did he put it?" "What happened here?" "Did you do this?" "It's not here." "Excuse me?" "Where on earth did he put it?" "Where'd he put it?" "What is this?" "It's something that the President asked me to keep, about a month ago." "He told me that if by chance, any problems arise with his health or life, that I should find Kim Tak Gu at Pal Bong Bakery and deliver it myself to you." "What are you saying?" "What do you mean any problems with his health or life?" "Then, are you saying..." "Last night, President Gu Il Jung suddenly collapsed due to cerebral hemorrhage." "*(bleeding into the brain tissue)*" "[Power of Attorney]" "Tak Gu, if by any chance, something happens to me, the only one who could act in my place is you." "[Register of Geosung Shareholders]" "[Title Deed, Gu Il Jung]" "[Stock Certificate]" "All my deeds and stocks I'm entrusting to you, so please, look after Geosung, Tak Gu." "How's his condition?" "In these last few days, his vital signs have improved quite a bit." "The issue is still the uncertainty of when will he regain consciousness." "I see." "Then, for now, his life is not in danger." "Is that correct?" "Yes." "We've passed the critical phase for now." "Then, that's fine." "Then, I think we'll be taking the President home." "Mom." "Your father absolutely hates hospitals." "If we take him home and he's comfortable, his recovery will be much faster." "I'll inform Chief Ju of that plan, so please get everything ready, Doctor Jung." "Yes, well..." "But anyhow, have you seen the papers?" "I took a glimpse, and it looks like Il Jung's condition is really critical." "Do you think we should go and see him?" "What's the matter, Tak Gu?" "Aren't you eating?" "Oh." "I had some bread earlier, and I feel full." "Then, if you'll excuse me..." "I'm sorry, President." "I can't return to Geosung." "I don't have many people around me I can trust, Tak Gu." "Now, I'm not sure who to trust and who to be suspicious of." "It looks like Il Jung's condition has gotten really critical." "He suddenly collapsed due to cerebral hemorrhage." "Why... didn't I know when he was around?" "How great his presence was." "Who are you?" "Who are you?" "Whoa." "Oh my gosh!" "I didn't know there were houses so big!" "It's really big!" "Wow!" "Let's go in." "Who's house is it?" "Is it someone you know?" "It's your father's house." "Who are you?" "I'm Kim Tak Gu," "Geosung Food President Gu Il Jung's eldest son." "Please let them know Kim Tak Gu is here." "Yes." "Let him in." "They say a guest has arrived." "Who is it?" "You came?" "Yeah." "I came." "Brought to you by WITH S2 Written In The Heavens Subbing Squad" "Main Translator: meju" "Spot Translators: ai*, serendipity" "Timer: julier" "Editor/QC:" "PTTaT" "Coordinators: mily2, ay_link" "Watch dramas legally at dramafever. com | crunchyroll. com" | Mid | [
0.558766859344894,
36.25,
28.625
]
|
Sarwat Nazir Sarwat Nazir () is a fiction writer, novelist, screenwriter, and playwright. She is best known for her screen play Main Abdul Qadir Hoon and Umm-e-Kulsoom. Novels Complete Novels List of Sarwat Nazir: Faislay Ka Lamha Roshan Sitara Main Abd-ul-Qadir Hum Sitamgar Umm-e-Kulsoom Roshan Sitara Muhabbat Aisa Darya ha Sirat-e-Mustaqeem Gawah Rehna Khuwab Hain Hum Sach ki Pari Faslay ka Lamha Besharam Plays and dramas She has written a number of plays in the past and she is writing more screenplays than novels nowadays: Some notable dramas are Main Abdul Qadir Hoon Besharam (TV series) Shikwa (TV series) Mumkin Aik Pal Umm-e-Kulsoom Roshan Sitara Sitamgar (TV series) Sirat-e-Mustaqeem Tere Baghair Noor-e-Zindagi Sehra Main Safar Choti Si Zindagi Tanhai (TV series) Khud parast ''Khaas Awards and nominations Nomination Best Writer Drama Serial for Roshan Sitara at 1st Hum Awards 2013. References External links Category:Living people Category:People from Sialkot Category:Pakistani women writers Category:Pakistani writers Category:Pakistani novelists Category:Hum Award winners Category:Pakistani screenwriters Category:Lux Style Award winners Category:Pakistani television writers Category:Writers from Lahore Category:Pakistani dramatists and playwrights Category:Women novelists Category:Women dramatists and playwrights Category:Women television writers Category:Year of birth missing (living people) | High | [
0.688783570300158,
27.25,
12.3125
]
|
TWO LEINSTER COUNTIES have turned to Kerry natives to fill their senior football managerial vacancies. Kerry duo John Evans and John Sugrue. Source: INPHO John Evans was last night confirmed by the Wicklow county board as the choice to be their new senior football manager as he gets set to take over from Johnny Magee. While it is reported by the Laois Today website that John Sugrue is to be put forward for ratification as the new Laois boss after Peter Creedon’s departure after a year in charge. Evans has extensive experience in inter-county management. He was in charge of Tipperary between 2008 and 2012, claiming two league promotions and a Munster U21 football title in 2010. John Evans celebrates Kerry's 2010 Munster U21 final win. Source: James Crombie/INPHO In 2012 he came on board during that year’s championship as part of Seamus McEnaney’s backroom team in Meath, before then taking over as Roscommon manager for three campaigns between 2013 and 2015. Previously Evans has been at the helm when Laune Rangers won the All-Ireland senior club football title in 1996. John Sugrue, a native of Renard in South Kerry, has built up an impressive coaching CV in recent seasons. Sugrue, who lives in Portlaoise, trained Kerry during Pat O’Shea’s tenure in 2007 and 2008. He also worked as a physio with Kerry in 2011 and then with Laois in 2012 and 2013, when Justin McNulty was at the helm. As a player Sugrue won Kerry senior football medals between 2004 and 2006 with divisional outfit South Kerry before then managing that side to county senior glory in 2015. Captain Bryan Sheehan celebrates South Kerry's triumph in 2015. Source: Donall Farmer/INPHO Source: The42 Podcasts/SoundCloud Subscribe to The42 podcasts here: | High | [
0.71859296482412,
35.75,
14
]
|
Q: Can't locate the GET request Here's the page I'm looking at: http://beta.fortune.com/fortune500/walmart-1 The only relevant XHR that I see under Chrome Dev Tools Network tab is this: http://fortune.com/api/v2/company/wmt/expand/1 But the response to that doesn't contain all the data of the page, only the pricing data. I've been trying to locate the request being made for the data you see at the top of the page on black background (Previous Rank, Revenues ($M), Rev Change, etc.). What's the GET request for this data? Or are those fields being populated in some other way? A: Take a closer look at the webpage source code (XHR response from http://beta.fortune.com/fortune500/walmart-1), you will see the following HTML fragment (I just beautified it to make more clear): <div data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0"> <div class="ranking-slide brand-revenue-slide" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Previous Rank" style="transform:translateX(0px);-webkit-transform:translateX(0px);"> <a class="ranking-caption" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Previous Rank.0"><span class="title" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Previous Rank.0.0">Previous Rank</span><span class="data" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Previous Rank.0.1">1</span></a> </div> <div class="ranking-slide brand-revenue-slide" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Revenues ($M)" style="transform:translateX(0px);-webkit-transform:translateX(0px);"> <a class="ranking-caption" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Revenues ($M).0"><span class="title" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Revenues ($M).0.0">Revenues ($M)</span><span class="data" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Revenues ($M).0.1">$482,130</span></a> </div> <div class="ranking-slide brand-revenue-slide" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Rev Change" style="transform:translateX(0px);-webkit-transform:translateX(0px);"> <a class="ranking-caption" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Rev Change.0"><span class="title" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Rev Change.0.0">Rev Change</span><span class="data" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Rev Change.0.1">-0.7%</span></a> </div> <div class="ranking-slide brand-revenue-slide" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Profits ($M)" style="transform:translateX(0px);-webkit-transform:translateX(0px);"> <a class="ranking-caption" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Profits ($M).0"><span class="title" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Profits ($M).0.0">Profits ($M)</span><span class="data" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Profits ($M).0.1">$14,694</span></a> </div> <div class="ranking-slide brand-revenue-slide" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Profit Change" style="transform:translateX(0px);-webkit-transform:translateX(0px);"> <a class="ranking-caption" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Profit Change.0"><span class="title" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Profit Change.0.0">Profit Change</span><span class="data" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Profit Change.0.1">-10.2%</span></a> </div> <div class="ranking-slide brand-revenue-slide" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Assets ($M)" 
style="transform:translateX(0px);-webkit-transform:translateX(0px);"> <a class="ranking-caption" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Assets ($M).0"><span class="title" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Assets ($M).0.0">Assets ($M)</span><span class="data" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Assets ($M).0.1">$199,581</span></a> </div> <div class="ranking-slide brand-revenue-slide" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Employees" style="transform:translateX(0px);-webkit-transform:translateX(0px);"> <a class="ranking-caption" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Employees.0"><span class="title" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Employees.0.0">Employees</span><span class="data" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Employees.0.1">2,300,000</span></a> </div> <div class="ranking-slide brand-revenue-slide" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Mkt Value as of 3/31/16 ($M)" style="transform:translateX(0px);-webkit-transform:translateX(0px);"> <a class="ranking-caption" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Mkt Value as of 3/31/16 ($M).0"><span class="title" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Mkt Value as of 3/31/16 ($M).0.0">Mkt Value as of 3/31/16 ($M)</span><span class="data" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Mkt Value as of 3/31/16 ($M).0.1">$215,356</span></a> </div> <div class="ranking-slide brand-revenue-slide" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Morning Consult Brand Index" style="transform:translateX(0px);-webkit-transform:translateX(0px);"> <a class="ranking-caption" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Morning Consult Brand Index.0" href="https://morningconsultintelligence.com/examine?v=YnJhbmRzX3RyZW5kX3dhbG1hcnQ&d=dHNkYXQ&s=bW9ybmluZyBjb25zdWx0&ref=Zm9ydHVuZQ"><span class="title" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Morning Consult Brand Index.0.0">Morning Consult Brand Index</span><span class="data" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Morning Consult Brand Index.0.1">A-</span></a><a class="morning-consultant" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Morning Consult Brand Index.1" href="javascript:void(0)"><svg data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Morning Consult Brand Index.1.0" height="19" viewbox="0 0 19 19" width="19"> <path d="M17.153 9.368c0 4.293-3.492 7.785-7.785 7.785-4.292 0-7.785-3.492-7.785-7.785 0-4.292 3.493-7.785 7.785-7.785 4.293 0 7.785 3.493 7.785 7.785M19 9.5C19 4.262 14.738 0 9.5 0S0 4.262 0 9.5 4.262 19 9.5 19 19 14.738 19 9.5m-7.96-4.005c.195-.196.307-.468.307-.745 0-.28-.112-.55-.308-.747-.197-.197-.47-.31-.748-.31-.277 0-.55.113-.746.31-.198.196-.31.468-.31.747 0 .277.112.55.31.745.197.197.467.31.746.31.278 0 .55-.113.747-.31m-2.044 9.81c-.164 0-.326-.054-.458-.16-.21-.17-.31-.44-.26-.705l1.03-5.184-.567.317c-.35.196-.796.072-.993-.277-.198-.35-.074-.79.277-.988l1.947-1.09c.25-.14.556-.12.787.045.23.166.343.45.288.728L9.98 13.363l.873-.378c.37-.16.8.008.96.375.162.367-.007.794-.377.954l-2.15.93c-.093.042-.192.062-.29.062" data-reactid=".16d1cbmhnfi.1.0.4.1:1.3.1.0.0.0.$slide-Morning Consult Brand Index.1.0.0" fill="#1BAAE1"></path></svg></a> </div> </div> That data fully represents the webpage content you are asking about: | Low | [
0.5011990407673861,
26.125,
26
]
|
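A possible way to extract those figures programmatically, assuming the `requests` and `beautifulsoup4` packages (the CSS classes below come straight from the fragment quoted in the answer above):

```python
# Sketch: the figures are server-rendered into the HTML, so no extra XHR is needed;
# parse the <span class="title"> / <span class="data"> pairs inside each ranking caption.
import requests
from bs4 import BeautifulSoup

url = "http://beta.fortune.com/fortune500/walmart-1"  # page from the question
html = requests.get(url, timeout=30).text

soup = BeautifulSoup(html, "html.parser")
metrics = {}
for caption in soup.select("a.ranking-caption"):
    title = caption.select_one("span.title")
    data = caption.select_one("span.data")
    if title and data:
        metrics[title.get_text(strip=True)] = data.get_text(strip=True)

print(metrics)  # e.g. {'Previous Rank': '1', 'Revenues ($M)': '$482,130', ...}
```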
Conduction-coupled Tesla transformer. A proof-of-principle Tesla transformer circuit is introduced. The new transformer exhibits the high voltage-high power output signal of shock-excited transformers. The circuit, with specification of proper circuit element values, is capable of obtaining extreme oscillatory voltages. The primary and secondary portions of the circuit communicate solely by conduction. The destructive arcing between the primary and secondary inductors in electromagnetically coupled transformers is ubiquitous. Flashover is eliminated in the new transformer as the high-voltage inductors do not interpenetrate and so do not possess an annular volume of electric field. The inductors are remote from one another. The high voltage secondary inductor is isolated in space, except for a base feed conductor, and obtains earth by its self-capacitance to the surroundings. Governing equations, for the ideal case of no damping, are developed from first principles. Experimental, theoretical, and circuit simulator data are presented for the new transformer. Commercial high-temperature superconductors are discussed as a means to eliminate the counter-intuitive damping due to small primary inductances in both the electromagnetic-coupled and new conduction-coupled transformers. | High | [
0.6716417910447761,
33.75,
16.5
]
|
AQUA-DISPERSIONS These high-quality pigment dispersions will allow you to make paint for all of the water-based techniques. All you need to do is mix the dispersions with any of the water-based binders: acrylic polymer emulsion for acrylic paints, gum Arabic dispersion for watercolor or gouache, egg for egg tempera, etc. What is a pigment dispersion? What makes our AQUA-DISPERSIONS different is that they have been selected for their high level of performance with respect to the demanding needs of artists. Furthermore, they are not inhibited by any binders, extenders, fillers, blends or tints that could alter or control their performance, strength or interchangeability from one medium to another. This allows the artist to change from acrylic to watercolor, egg tempera, gouache or any water-miscible paints without purchasing each color in those specific mediums. All of the above translates into tremendous savings for the artist or student who enjoys working in different disciplines and wants the quality of materials that reflects dedication at the highest level. The pigments that make our paints vibrant and long-lasting are the primary tool of painters and artists, and we believe that our product should be included in all studios. As we are aware that most artists have never had the opportunity to buy or use dispersions of this quality in their working process, we encourage you to ask questions and try this new product for yourself. | High [
0.663484486873508,
34.75,
17.625
]
|
KLK3, PCA3, and TMPRSS2-ERG expression in the peripheral blood mononuclear cell fraction from castration-resistant prostate cancer patients and response to docetaxel treatment. To monitor systemic disease activity, the potential of circulating tumor cells (CTCs) bears great promise. As surrogate for CTCs we measured KLK3, PCA3, and TMPRSS2-ERG messenger RNA (mRNA) in the peripheral blood mononuclear cell (PBMC) fraction from a castration-resistant prostate cancer (CRPC) patient cohort and three control groups. Moreover, biomarker response to docetaxel treatment was evaluated in the patient group. Blood samples from 20 CRPC patients were analyzed at four different time points (prior to docetaxel treatment, at 9 weeks, 27 weeks, and 2 months after treatment). Blood was drawn once from three control groups (10 age-matched men, 10 men under 35 years of age, 12 women). All samples were analyzed for KLK3, PCA3, and TMPRSS2-ERG mRNA by using a quantitative nucleic acid amplification assay with gene-specific primers in the complementary DNA synthesis. At baseline, mRNA for KLK3 was detected in 17 (89%, 95% CI 76-100%), PCA3 in 10 (53%, 95% CI 30-75%), and TMPRSS2-ERG in seven of 19 evaluable patients (37%, 95% CI 15-59%). In contrast, the blood samples from all 32 healthy volunteers were reproducible negative for all markers. In response to docetaxel treatment, KLK3 levels decreased in 80% (95% CI 60-100%), PCA3 in 89% (95% CI 68-100%), and TMPRSS2-ERG in 86% (95% CI 60-100%) of patients. The feasibility of a highly sensitive modified nucleic acid amplification assay to assess KLK3, PCA3, and TMPRSS2-ERG mRNA in the PBMC fraction from CRPC patients was demonstrated. Moreover, response of these markers to systemic treatment was shown. | High | [
0.692737430167597,
31,
13.75
]
|
Aref Lorestani Aref Lorestani (February 4, 1972 – April 15, 2017) was an Iranian actor. Lorestani started his professional career in 1998 with a role in the sitcom Jong 77 by the well-known Iranian director Mehran Modiri, and continued appearing in Modiri's later series, including Man With Two Thousand Faces (2009), Bitter Coffee (2010), My Villa (2012), I’m Just Kidding (2014) and In the Margins (2015). One of the most famous of these was Qahveye Talkh (Bitter Coffee), the first episodes of which were released in 2010, in which he played a corrupt police officer. Lorestani also played roles in a number of cinema films, including Mani and Neda by Parviz Sabri, Moadeleh (Equation) and Sham-e Arousi (Wedding Dinner), both by Ebrahim Vahidzadeh, and Entekhab (Selection) by Touraj Mansouri. References Category:1972 births Category:2017 deaths Category:Iranian male actors Category:Iranian comedians Category:People from Kermanshah | High [
0.673295454545454,
29.625,
14.375
]
|
This invention relates to an article formed from a composite material, and is particularly, although not exclusively, concerned with such an article in the form of an aerofoil component such as a fan blade of a turbofan engine, turbo prop, ducted fans and other such turbomachinery. | Mid | [
0.547842401500938,
36.5,
30.125
]
|
the cross ROADS Brownsboro Road and Story Avenue The meeting of the two streets listed above — which in tandem make up the southwest terminus of U.S. 42, whose other end lies in Cleveland — is an odd juncture indeed, mostly because of the severity of that left-hand turn for commuters speeding to work downtown. Fact is, there used to be a well-used hard right, too, that took drivers from Story onto Litterle Road (aka Cut-Off Road) and into a long-gone, French-favored neighborhood informally called The Point. According to my Coleman's 1949 map of Louisville, the road ran along the western bank of the Beargrass Creek Cut-Off channel and led to a grid of streets with such names as Clinton, Lloyd, Irvine, Fulton, Marion and Lombard on what is now acres of no-man's-land surrounding I-71, including the city's vehicle impoundment lot. Before the Cut-Off was dug in 1854, dividing the Point in two, the creek swung west and made a beeline for the downtown. Until the 1937, 1945 and smaller subsequent floods carried away or otherwise ruined most of the Point's homes, streets such as Barbour, Pope, Richmond and Shiloh — bustling with shops, groceries, churches and schools — met up with Mellwood Avenue. It's next to impossible to see evidence of the old Butchertown-Point neighborhood connection at the Brownsboro-Story corner. Behind the pretty little shotgun houses that dress up the short piece of Story east of Brownsboro, a flood berm obstructs your view. The giant Beargrass Creek Pumping Station hides the engineered duodenal change in the course of the creek. But right at Story's eastern end you can take the Butchertown Greenway trail over a rise as it follows the path of Litterle Road. Soon you'll walk through the giant pillars supporting the expressway and find yourself at Thruston Park along River Road. — Jack Welch | Mid [
0.551876379690949,
31.25,
25.375
]
|
U-M student research may help astronauts burn fuel on Mars ANN ARBOR, Mich.—One of the big problems with space travel is that one cannot over pack. Suppose astronauts reach Mars. How do they explore the planet if they cannot weigh down the vessel with fuel for excursions? A team of undergraduate aerospace engineering students at the University of Michigan is doing research to help astronauts make fuel once they get to Mars, and the results could bring scientists one step closer to manned or extended rover trips to the planet. Their research proposal won the five-student team a highly competitive trip to NASA's Johnson Space Center in Houston to participate in the Reduced Gravity Student Flight Opportunities Program. In Houston, the students conducted zero-gravity experiments using iodine as a catalyst to burn magnesium. Magnesium is a metal found on Mars that can be harvested for fuel—fossil fuels don't burn on Mars because of the planet's carbon dioxide (CO2) atmosphere, but metals do burn in a CO2 atmosphere. The idea for the students' experiments evolved from previous research done by Margaret Wooldridge, an associate professor in mechanical engineering and the team's adviser. Wooldridge's research showed that while magnesium is a promising fuel source, burning magnesium alone—without a catalyst such as iodine—has several challenges. Preliminary results from the student experiments showed that using iodine as a catalyst helped make the magnesium burn better, said Arianne Liepa, aerospace engineering undergrad and team member. The experiments also showed that using the iodine, magnesium, CO2 system worked even better in a microgravity environment. "That bodes well for a power source on Mars where the gravity is approximately one-third that of Earth," Wooldridge said. The students—Greg Hukill, Arianne Liepa, Travis Palmer, Carlos Perez and Christy Schroeder—who conducted the experiments over a nine-day period in March, flew on a specially modified Boeing KC 135A turbojet transport. The plane flies parabolic arcs to produce weightless periods of 20 to 25 seconds at the apex of the arc. | High | [
0.7134670487106011,
31.125,
12.5
]
|
[High-dose chemotherapy with autologous bone marrow transplantation: 11 years' experience in Zurich]. High-dose chemotherapy with autologous bone marrow or peripheral blood stem cell transplantation has gained widespread acceptance for the treatment of certain malignancies. Since the introduction of this therapy in 1988 we have treated 272 patients. Indications for high-dose chemotherapy were high-risk large cell lymphoma and lymphoblastic or Burkitt lymphoma in first remission (73 patients), non-Hodgkin's lymphoma in chemosensitive relapse (65 patients), Hodgkin's lymphoma in relapse (52 patients), germ cell tumours with inadequate response to chemotherapy (34 patients), multiple myeloma (29 patients), and other malignancies (19 patients). Treatment mortality was 1.8%. The 3-year event-free survival and overall survival for all patients were 48 and 61% respectively. High-dose chemotherapy with autologous stem cell transplantation has become a safe procedure and is considered the treatment of choice for relapsed large cell lymphoma, relapsed Hodgkin's disease, stage II or III multiple myeloma, and germ cell tumours with inadequate response to cisplatin-based chemotherapy. In other situations, including aggressive lymphoma with risk factors, acute leucaemia or breast cancer, the superiority of high-dose over conventional chemotherapy remains to be proven. Patients with such diseases should not receive high-dose chemotherapy outside a controlled clinical study. | High | [
0.723472668810289,
28.125,
10.75
]
|
--- abstract: 'This paper presents new classes of consensus protocols with fixed-time convergence, which enable the definition of an upper bound for the time to reach consensus as a parameter of the consensus protocol, ensuring its independence from the initial conditions of the nodes. We demonstrate that our methodology subsumes current classes of fixed-time consensus protocols that are based on vector fields that are homogeneous in the bi-limit. Moreover, the proposed framework enables the development of consensus protocols that are not required to be homogeneous in the bi-limit. This proposal offers extra degrees of freedom to implement consensus algorithms with enhanced convergence features, such as reducing the gap between the actual convergence time and the upper bound chosen by the user. We present two classes of fixed-time consensus protocols for dynamic networks consisting of nodes with first-order dynamics, and provide sufficient conditions to set the upper bound for the convergence time a priori as a consensus protocol parameter. The first protocol converges to the average value of the initial conditions of the nodes, even when the network topology switches. Unlike the first protocol, which requires, at each instant, one evaluation per neighbor of the non-linear predefined-time consensus function introduced below, the second protocol requires only a single evaluation and ensures predefined-time consensus for static topologies and fixed-time convergence for dynamic networks. Predefined-time convergence is proved using Lyapunov analysis, and simulations are carried out to illustrate the performance of the suggested techniques. The presented results have been applied to the design of formation control protocols with predefined-time convergence to exemplify their main features.' address: - 'Multi-agent autonomous systems lab, Intel Labs, Intel Tecnología de México, Av. del Bosque 1001, Colonia El Bajío, Zapopan, 45019, Jalisco, México.' - 'Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Av. General Ramón Corona 2514, Zapopan, 45201, Jalisco, México.' - 'CINVESTAV, Unidad Guadalajara, Av. del Bosque 1145, colonia el Bajío, Zapopan , 45019, Jalisco, México.' - 'Research Laboratory on Optimal Design, Devices and Advanced Materials -OPTIMA-, Department of Mathematics and Physics, ITESO, Periférico Sur Manuel Gómez Morín 8585 C.P. 45604, Tlaquepaque, Jalisco, México.' author: - 'R. Aldana-López' - 'David Gómez-Gutiérrez' - 'E. Jiménez-Rodríguez' - 'J. D. Sánchez-Torres' - 'A. G. Loukianov' title: 'On predefined-time consensus protocols for dynamic networks' --- Predefined-time convergence, fixed-time consensus, multi-agent systems, average consensus, self-organizing systems Introduction ============ Consensus algorithms allow a network of agents to agree on a value for its internal state in a distributed fashion by using only communication among neighbors [@Olfati-Saber2007]. For this reason, they have attracted a great deal of attention in the fields of automatic control, self-organizing systems and sensor networks [@liu2017reliable], with applications, for instance, to flocking [@Olfati2006], formation control [@Oh2015; @Ren2007; @Li2013], distributed resource allocation [@Xu2017; @Xu2017b], distributed map formation [@Aragues2012] and reliable filter design for sensor networks with random failures [@liu2017reliable].
For agents with first-order dynamics, a consensus protocol with asymptotic convergence to the average value of the initial conditions of the node has been proposed in [@Olfati-Saber2007]. Using the stability results of the switching systems [@Liberzon2003] it can be shown that such protocols reach a consensus even on dynamic networks by arbitrarily switching between highly connected graphs [@Olfati-Saber2007; @Cai2014]. Consensus protocols with enhanced convergence properties have been suggested based on finite-time [@Bhat2005; @Utkin1999], and fixed-time [@Polyakov2012; @Parsegov2012] stability theory. In [@Sayyaadi2011; @Shang2012; @Wang2010; @zhu2013; @Franceschelli2013; @Gomez-Gutierrez2018] continuous and discontinuous protocols with finite-time convergence were proposed. However, the convergence-time is an unbounded function of the initial conditions. A remarkable extension of the previous methods is the fixed-time convergent consensus. In this case, there exists a bound for the convergence time that is independent of the initial conditions [@Andrieu2008; @Polyakov2012]. Therefore, for the design of high-performance consensus protocols, the fixed-time convergence is a desirable property. Several consensus protocol have been proposed based on the fixed-time stability results from [@Polyakov2012; @Parsegov2012], see e.g., [@Zuo2014; @Defoort2015; @Parsegov2013; @Sharghi2016; @Hong2017; @Ning2018; @Wang2017b]. However, these consensus protocols have been justified only for static networks. Another fixed-time consensus algorithm was proposed by [@Zuo2014a], which is a consensus protocol for dynamic networks. However, similar to [@Sharghi2016; @Hong2017], the convergence analysis is based on the upper estimate of the convergence-time given in [@Polyakov2012], which is known to be a too conservative upper bound [@Aldana-Lopez2019]. Recently, to enable the application of fixed-time consensus algorithms in scenarios with time constraints, there has been an effort in finding the least upper bound of the settling time function of the class of fixed-time stable systems given in [@Polyakov2012 Lemma 1]. First, in [@Parsegov2012] the least upper bound was found for a subclass of systems, which has lead to consensus protocols for dynamic networks such as [@Parsegov2013; @Zuo2014; @Ni2017; @Ning2017b; @Wang2017b]. Recently, in [@Aldana-Lopez2019] the least upper bound for the settling time was found for the general class of fixed-time stable systems given in [@Polyakov2012 Lemma 1], based on this result in [@AldanaConsensus2019] consensus protocols for dynamic networks, subsuming those given in [@Ni2017; @Ning2017b], were presented, where an upper bound of the convergence time can be set a priori as a parameter of the protocol, in view of this feature, this class of consensus algorithms is known as predefined-time consensus. Another approach to derive predefined-time consensus algorithms has been addressed via a linear function of the sum of the errors between neighboring nodes together with a time-varying gain, for instance, using time base generators [@Morasso1997], see e.g., [@Yong2012; @Liu2018; @Wang2017; @Wang2018; @Colunga2018b; @Zhao2018; @Ning2019]. However, these methods require that all nodes have a common time-reference because the same value of the time-varying gain should be applied to all nodes. Thus, this approach is not suitable in GPS-denied environments or in scenarios where having a common time reference is a strong assumption. 
Moreover, such time-varying gain becomes singular at the pre-set time, either because the gain goes to infinite as the time tends to the pre-set time [@Yong2012; @Zhao2018] or because it produces Zeno behavior (infinite number of switching in a finite-time interval) as the time tends to the pre-set time [@Liu2018]. Contribution ============ This paper aims to provide new classes of consensus protocols, to obtain fixed-time convergence in dynamic networks arbitrarily switching among connected topologies. Sufficient conditions are derived, such that the upper bound for the convergence time is selected a priori as a parameter of the protocol. Such protocols are referred as predefined-time consensus protocols. Two classes of consensus protocols for networks composed of nodes with first-order dynamics are proposed. The first one is presented to solve the average-consensus problem with predefined convergence under dynamic networks. The second protocol is shown to have predefined-time convergence on static networks and fixed-time convergence under dynamic networks. Unlike the first protocol that requires, at each time instant, one evaluation of the nonlinear predefined-time consensus function (hereinafter introduced) per neighbors, the second protocol only requires a single evaluation, with the trade-off of not ensuring the convergence to the average value of the nodes’ initial conditions. Contrary to consensus protocols with predefined convergence based on time-varying gains as in [@Yong2012; @Liu2018; @Wang2017; @Wang2018; @Colunga2018b], the proposed classes of protocols does not require the strong assumption of a common time-reference for all nodes. Moreover, unlike autonomous fixed-time consensus protocols [@Ning2017b; @Wang2017b], which are based on a subclass of the fixed-time stable systems given in [@Polyakov2012 Lemma 1], which uses homogeneous in the bi-limit [@Andrieu2008] vector fields, in this paper a methodology for the design of new consensus protocols is presented, showing that predefined-time consensus can be achieved with a broader class of consensus protocols, that are not required to be homogeneous in the bi-limit. This result provides extra degrees of freedom to select a protocol, for instance, to reduce the slack between the predefined upper bound for the convergence and the exact convergence time. This methodology generalizes the recent results [@Ning2017b; @AldanaConsensus2019] on fixed-time consensus for dynamic networks formed by agents with first-order dynamics. The rest of the paper is organized as follows. Section \[Sec.Preliminaries\] introduces the preliminaries on graph theory and predefined-time stability. Section \[Sec.MainResult\] presents two new classes of consensus protocols with predefined-time convergence together with illustrative examples showing the performance of the proposed approach. In Section \[Sec:Formation\] these results are applied to the design of formation control protocols with predefined-time convergence. Finally, Section \[Sec.Conclu\] provides the concluding remarks and discusses future work. Preliminaries {#Sec.Preliminaries} ============= Graph Theory {#SubSec.GraphTheory} ------------ The following notation and preliminaries on graph theory are taken mainly from [@godsil2001]. An undirected graph $\mathcal{X}$ consists of a vertex set $\mathcal{V}(\mathcal{X})$ and an edge set $\mathcal{E}(\mathcal{X})$ where an edge is an unordered pair of distinct vertices of $\mathcal{X}$. 
Writing $ij$ denotes an edge, and $j\sim i$ denotes that the vertex $i$ and vertex $j$ are adjacent or neighbors, i.e., there exists an edge $ij$. The set of neighbors vertex of $i$ in the graph $\mathcal{X}$ is expressed by $\mathcal{N}_i(\mathcal{X})=\{j:ji\in \mathcal{E}(\mathcal{X})\}$. A path from $i$ to $j$ in a graph is a sequence of distinct vertices starting with $i$ and ending with $j$ such that consecutive vertices are adjacent. If there is a path between any two vertices of the graph $\mathcal{X}$ then $\mathcal{X}$ is said to be connected. Otherwise, it is said to be disconnected. A weighted graph is a graph together with a weight function $\mathcal{W}:\mathcal{E}(\mathcal{X})\to \mathbb{R}_{+}$. If $\mathcal{X}$ is a weighted graph such that $ij\in\mathcal{E}(\mathcal{X})$ has weight $a_{ij}$ and $n=|\mathcal{V}(\mathcal{X})|$. Then the incidence matrix $D(\mathcal{X})$ is a $\vert \mathcal{V}(\mathcal{X})\vert \times \vert \mathcal{E}(\mathcal{X})\vert$ matrix, such that if $ij\in \mathcal{E}(\mathcal{X})$ is an edge with weight $a_{ij}$ then the column of $D$ corresponding to the edge $ij$ has only two nonzero elements: the $i-$th element is equal to $\sqrt{a_{ij}}$ and the $j-$th element is equal to $-\sqrt{a_{ij}}$. Clearly, the incidence matrix $D(\mathcal{X})$, satisfies $\mathbf{1}^TD(\mathcal{X})=0$. The Laplacian of $\mathcal{X}$ is denoted by $\mathcal{Q}(\mathcal{X})$ (or simply $\mathcal{Q}$ when the graph is clear from the context) and is defined as $\mathcal{Q}(\mathcal{X})=D(\mathcal{X})D(\mathcal{X})^T$. The Laplacian matrix $\mathcal{Q}(\mathcal{X})$ is a positive semidefinite and symmetric matrix. Thus, its eigenvalues are all real and non-negative. When the graph $\mathcal{X}$ is clear from the context we omit $\mathcal{X}$ as an argument. For instance we write $Q$, $D$, etc to represent the Laplacian, the incidence matrix, etc. [@godsil2001] \[lemma:Lambda2\] Let $\mathcal{X}$ be a connected graph and $\mathcal{Q}$ its Laplacian. The eigenvalue $\lambda_1(\mathcal{Q})=0$ has algebraic multiplicity one with eigenvector $\mathbf{1}=[1\ \cdots\ 1]^T$. The smallest nonzero eigenvalue of $\mathcal{Q}$, denoted by $\lambda_2(\mathcal{Q})$ satisfies that $\lambda_2(\mathcal{Q})=\underset{x\perp \mathbf{1},x\neq 0}{\min}\dfrac{x^T \mathcal{Q}x}{x^Tx}$. It follows from Lemma \[lemma:Lambda2\] that for every $x\bot\mathbf{1}$, $x^T\mathcal{Q}x\geq \lambda_2(\mathcal{Q}) \Vert x \Vert_2^2>0$. $\lambda_2(\mathcal{Q}(\mathcal{X}))$ is known as the algebraic connectivity of the graph $\mathcal{X}$. A switched dynamic network $\mathcal{X}_{\sigma(t)}$ is described by the ordered pair $\mathcal{X}_{\sigma(t)}=\langle\mathcal{F},\sigma\rangle$ where $\mathcal{F}=\{\mathcal{X}_1,\ldots,\mathcal{X}_m\}$ is a collection of graphs having the same vertex set and $\sigma:[t_0,\infty)\rightarrow \{1,\ldots m\}$ is a switching signal determining the topology of the dynamic network at each instant of time. In this paper, we assume that $\sigma(t)$ is generated exogenously and that there is a minimum dwell time between consecutive switchings in such a way that Zeno behavior in network’s dynamic is excluded, i.e., there is a finite number of switchings in any finite interval. Notice that, no maximum dwell time is set, thus the system may remain under the same topology during its evolution. 
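The following numerical sketch is only illustrative and is not part of the formal development; the small graph, the edge weights and the use of Python/NumPy are assumptions made for the example. It shows how the incidence matrix $D$, the Laplacian $\mathcal{Q}=DD^T$ and the algebraic connectivity $\lambda_2(\mathcal{Q})$ from Lemma \[lemma:Lambda2\] can be computed for a small weighted connected graph.

```python
# Sketch: build D and Q = D D^T for a small weighted undirected graph and
# compute lambda_2, the second-smallest eigenvalue (algebraic connectivity).
import numpy as np

n = 4
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 0, 0.5)]  # (i, j, a_ij), illustrative

D = np.zeros((n, len(edges)))
for k, (i, j, a) in enumerate(edges):
    D[i, k] = np.sqrt(a)
    D[j, k] = -np.sqrt(a)

Q = D @ D.T                                 # graph Laplacian
eigvals = np.sort(np.linalg.eigvalsh(Q))    # all real and non-negative
print(np.allclose(Q @ np.ones(n), 0))       # True: the vector 1 is in the kernel of Q
print(eigvals[1])                           # lambda_2 > 0 since the graph is connected
```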
Fixed-time stability with predefined upper bound for the settling time ---------------------------------------------------------------------- The preliminaries on predefined-time stability are taken from [@aldana2019design]. Consider the system $$\label{eq:sys} \dot{x}=-\frac{1}{T_c}f(x), \ \forall t\geq t_0, \ f(0)=0,\ \ x(t_0)=x_0,$$ where $x\in\mathbb{R}^n$ is the state of the system, $T_c>0$ is a parameter and $f:\mathbb{R}^n\to\mathbb{R}^n$ is nonlinear, continuous on $x$ everywhere except, perhaps, at the origin. We assume that $f(\cdot)$ is such that the origin of is asymptotically stable and, except at the origin, has the properties of existence and uniqueness of solutions in forward-time on the interval $[t_0,+\infty)$. The solution of with initial condition $x_0$ is denoted by $x(t;x_0)$. [@Polyakov2014](Settling-time function) The *settling-time function* of system is defined as $T(x_0,t_0)=\inf\{\xi\geq t_0: \lim_{t\to\xi}x(t;x_0)=0\}-t_0$. [@Polyakov2014] \[def:fixed\](Fixed-time stability) System is said to be *fixed-time stable* if it is asymptotically stable [@Khalil2002] and the settling-time function $T(x_0,t_0)$ is bounded on $\mathbb{R}^n\times\mathbb{R}_+$, i.e. there exists $T_{\text{max}}\in\mathbb{R}_+\setminus\{0\}$ such that $T(x_0,t_0)\leq T_{\text{max}}$ if $t_0\in\mathbb{R}_+$ and $x_0\in\mathbb{R}^n$. Thus, $T_{\text{max}}$ is an Upper Bound of the Settling Time (*UBST*) of $x(t;x_0)$. \[Assump:AsympSys\] Let $\Psi(z)=\Phi(|z|)^{-1}\mbox{sign}(z)$, with $z\in\mathbb{R}$, where $\Phi:\mathbb{R}_+\to\Bar{\mathbb{R}}_+\setminus\{0\}$ is a function satisfying $\Phi(0)=+\infty$, $\Phi(z)<+\infty$ for all $z\in\mathbb{R}_+\setminus\{0\}$, and $$\label{Eq:Finite_Improper} \int_0^{+\infty} \Phi(z)dz = 1.$$ \[Lemma:TimeScale\] Let $\Psi(z)$ be a function satisfying Assumption \[Assump:AsympSys\]; then, the system $$\label{Eq:TSFunc} \dot{x}=-\frac{1}{T_c}\Psi(x), \ \ x(t_0)=x_0,$$ is asymptotically stable and the least *UBST* function $T(x_0)$ is given by $$\label{Eq:Time_integral1} \sup_{x_0 \in \mathbb{R}^n} T(x_0)=T_c.$$ (Lyapunov characterization for fixed-time stability with predefined *UBST*) \[thm:weak\_pt\] If there exists a continuous positive definite radially unbounded function $V:\mathbb{R}^n\to\mathbb{R}$, such that its time-derivative along the trajectories of satisfies $$\label{eq:dV_weak} \dot{V}(x)\leq-\frac{1}{T_c}\Psi(V(x)), \ \ x\in\mathbb{R}^n\setminus\{0\},$$ where $\Psi(z)$ satisfies Assumption \[Assump:AsympSys\], then, system is fixed-time stable with $T_c$ as the predefined *UBST*. Main Result {#Sec.MainResult} =========== A multi-agent system composed of $n$ agents is assumed, in which each agent is able to communicate with its neighbors according to a communication topology given by the switching dynamic network $\mathcal{X}_{\sigma(t)}$. The $i-$th agent dynamics is given by $$\dot{x}_i=u_i$$ where $u_i$ is called the consensus protocol. The aim of the paper is to introduce new classes of consensus protocols for dynamic networks, as well as to provide the conditions under which, using only information from the neighbors $\mathcal{N}_i(\mathcal{X}_{\sigma(t)})$, the convergence is guaranteed in a predefined time.
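As an illustrative sketch of Lemma \[Lemma:TimeScale\] (not part of the formal development), take the particular choice $\Phi(z)=\frac{1}{\pi}z^{-1/2}(1+z)^{-1}$, which satisfies Assumption \[Assump:AsympSys\] because $\Phi(0)=+\infty$ and $\int_0^{+\infty}\Phi(z)dz=1$; the corresponding function is $\Psi(z)=\pi\sqrt{|z|}\,(1+|z|)\,\mbox{sign}(z)$. The Python script below (the numerical scheme and tolerances are assumptions made only for the example) integrates $\dot{x}=-\frac{1}{T_c}\Psi(x)$ from widely different initial conditions and shows that the observed settling time never exceeds $T_c$.

```python
# Sketch: scalar predefined-time system dx/dt = -(1/Tc)*Psi(x) with
# Psi(z) = pi*sqrt(|z|)*(1+|z|)*sign(z); settling times stay below Tc for any x0.
import numpy as np

Tc = 1.0

def psi(z):
    return np.pi * np.sqrt(abs(z)) * (1.0 + abs(z)) * np.sign(z)

dt = 1e-5
for x0 in (0.01, 1.0, 100.0, 1e6):
    x, t = x0, 0.0
    while abs(x) > 1e-9 and t < 2.0 * Tc:
        x -= (dt / Tc) * psi(x)
        t += dt
    print(f"x0 = {x0:>10}: settled at t = {t:.3f} (bound Tc = {Tc})")
```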
\[Def:ConsensusFunction\] Let $\Omega:\mathbb{R}\to\mathbb{R}$ be a monotonically increasing function satisfying - there exist a function $\hat{\Omega}:\mathbb{R}_+\to\mathbb{R}_+$, a non-increasing function $\beta:\mathbb{N}\to\mathbb{R}_+$ and $d\geq 1$, such that for all $x=(x_1,\dots,x_n)^T$, $x_i\in\mathbb{R}_+$, the inequality $$\hat{\Omega}\left(\beta(n) \|x\|_2\right)\leq\beta(n)^d\sum_{i=1}^n\Omega(x_i) \label{convex_degree}$$ holds, where $\|x\|_2 = \left(\sum_{i=1}^n|x_i|^2\right)^{1/2}$. - $\Psi(z) = z^{-1}\hat{\Omega}(|z|)$ satisfies Assumption \[Assump:AsympSys\]. then, $\Omega(\cdot)$ is called a predefined-time consensus function. \[lemma:convex\_subhomo\] Let $\Omega:\mathbb{R}\to\mathbb{R}$ be a monotonically increasing function, then if it satisfies either - *i*) $\Omega(x+y)\leq\Omega(x)+\Omega(y)$, i.e. $\Omega(z)$ sub-additive, and $\Omega(\mu x)\leq\mu^d\Omega(x)$ for $\mu\in[0,1]$ and $d\geq 1$, i.e. $\Omega(z)$ sub-homogeneous of degree $d$. - *ii*) $\Omega(z)$ convex. Then, $\Omega(z)$ complies with for $\beta(n)=n^{-1}$, $\hat{\Omega}(z)=\Omega(z)$ and $d$ the degree of sub-homogeneity for *i*) and $d=1$ for *ii*). Lemma \[Lemma:Hardy\] leads to $\Omega(n^{-1}\|x\|_2)\leq\Omega(n^{-1}\|x\|_1) = \Omega\left(n^{-1}\sum_{i=1}^{n}x_i\right)$. Moreover, for *i*) $\Omega\left(n^{-1}\sum_{i=1}^{n}x_i\right)\leq \sum_{i=1}^n\Omega(n^{-1}x_i)\leq n^{-d}\sum_{i=1}^n\Omega(x_i)$. For *ii*), $\Omega\left(n^{-1}\sum_{i=1}^{n}x_i\right)\leq n^{-1}\sum_{i=1}^n\Omega(x_i)$ due to Jensen’s inequality of convex functions [@jensen1906 Formula 5]. Hence, $\Omega(z) = \hat{\Omega}(z)$ complies with for either *i*) or *ii*). \[Lemma:ExamplesFunc\] The following functions are predefined-time consensus functions, satisfying with $d=1$, $\beta(n)=\frac{1 }{n}$ and $\hat{\Omega}(z)=\Omega(z)$: - [*i)*]{} $\Omega(z) = \frac{1}{p}\exp(z^p)z^{2-p}$ for $0<p\leq 1$ - [*ii)*]{} $\Omega(z) = \frac{\pi}{2}(\exp(2z) - 1)^{1/2}z$ - [*iii)*]{} $\Omega(z) = \gamma z(a z^p+b z^q)^k$ where $a,b,p,q,k>0$ satisfy $kp<1$ and $kq>1$, and $$\gamma=\frac{\Gamma \left(m_p\right) \Gamma \left(m_q\right)}{a^{k}\Gamma (k) (q-p)}\left(\frac{a}{b}\right)^{m_p},$$ with $m_p=\frac{1-kp}{q-p}$, $m_q=\frac{kq-1}{q-p}$ and $\Gamma(\cdot)$ is the Gamma function defined as $\Gamma(z)=\int_0^{+\infty} e^{-t}t^{z-1}dt$ [@Bateman1955 Chapter 1]. For item [*i)*]{}, note that $\frac{d^2}{dz^2}\Omega(z) = \exp(z^p)z^{-p}(p^2z^{2p}+(3p-p^2)z^p+2-(3p-p^2))$. Moreover, note that $0<(3p-p^2)\leq 2$. Henceforth, $\frac{d^2}{dz^2}\Omega(z)\geq0$ and therefore $\Omega(z)$ is convex. For item [*ii)*]{} note that $\frac{d^2}{dz^2}\Omega(z) = (2/\pi)\exp(2z)(\exp(2z)-1)^{-2/3}(\exp(2z)z-2z+2\exp(2z)-2)$. Recall that $\exp(z)\geq 1+z$ for $z\geq 0$. Hence, $\exp(2z)z-2z+2\exp(2z)-2\geq z + z^2\geq 0$ and therefore $\Omega(z)$ is convex. For [*iii)*]{} it was proved in [@AldanaConsensus2019] that $\Omega(z)$ is convex. Therefore, by Lemma \[lemma:convex\_subhomo\], $\Omega(z)$ and satisfy $d=1$ and $\hat{\Omega}(z)=\Omega(z)$ for items *i)-iii)*. Moreover, $\Psi(z) = p\exp(-z^p)z^{1-p}$, $\Psi(z) = \frac{2}{\pi}(\exp(z)-1)^{1/2}$ and $\Psi(z) = \gamma^{-1}(a x^p + bx^q)^{-k}$ satisfy the conditions of Assumption \[Assump:AsympSys\], as shown in [@aldana2019design]. Therefore, the functions in items [*i)*]{}–[*iii)*]{} are predefined-time consensus functions. 
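As a quick numerical illustration (not a proof, and not part of the original development), the sketch below spot-checks the inequality of Definition \[Def:ConsensusFunction\] for the function of item *i)* with $p=0.5$, $\beta(n)=\frac{1}{n}$, $d=1$ and $\hat{\Omega}=\Omega$, on a few random non-negative vectors; Python/NumPy and the chosen parameters are assumptions made only for this example.

```python
# Sketch: check Omega_hat(beta(n)*||x||_2) <= beta(n)^d * sum_i Omega(x_i)
# for Omega(z) = (1/p)*exp(z**p)*z**(2-p), beta(n) = 1/n, d = 1, Omega_hat = Omega.
import numpy as np

p = 0.5

def omega(z):
    return (1.0 / p) * np.exp(z ** p) * z ** (2.0 - p)

rng = np.random.default_rng(0)
for _ in range(5):
    n = int(rng.integers(2, 10))
    x = rng.uniform(0.1, 3.0, size=n)            # non-negative entries only
    lhs = omega(np.linalg.norm(x) / n)           # Omega_hat(beta(n) * ||x||_2)
    rhs = np.sum(omega(x)) / n                   # beta(n)^d * sum_i Omega(x_i)
    print(f"n = {n}: {lhs:.4f} <= {rhs:.4f} -> {lhs <= rhs}")
```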
\[Remark:ReduceSlack\] In the following, we derive the condition for fixed- and predefined-time consensus under dynamic networks, with protocols that extend those presented in the literature, for instance, [@Ning2017b; @Wang2017b]. In the interest of providing a general result we may obtain $\beta(\cdot)$ and $\hat{\Omega}(\cdot)$, resulting in satisfying in a conservative manner. However, in some scenarios, $\beta(\cdot)$ can be obtained such that is less conservative, resulting in protocols where the slack between the true convergence and the predefined one is reduced. The following lemma illustrates this case for $k=1$ in the predefined-time consensus function given in Lemma \[Lemma:ExamplesFunc\] item *iii)*. \[Lemma:RhoLarge\] The function $\Omega(z) = \gamma \left( a z^{p+1}+bn^{\frac{q-1}{2}} z^{q+1}\right)$ where $a,b,p,q>0$ satisfy $p<1$ and $q>1$, and $$\gamma=\frac{\Gamma \left(m_p\right) \Gamma \left(m_q\right)}{a (q-p)}\left(\frac{a}{b}\right)^{m_p},$$ with $m_p=\frac{1-p}{q-p}$, $m_q=\frac{q-1}{q-p}$ and $\Gamma(\cdot)$ the Gamma function, is a predefined-time consensus function, satisfying with $\beta(n)=1$ and $\hat{\Omega}(z)=\gamma(a z^{p+1} + b z^{q+1})$. Let $x_i\in\mathbb{R}_+$ and note that from Lemma \[Lemma:Hardy\] it follows that $\sum_{i=1}^n x_i^{1+p} = \sum_{i=1}^n(x_i^2)^\frac{1+p}{2}=\|(x_1^2,\dots,x_n^2)^T\|_{(1+p)/2}^{(1+p)/2}\geq\|(x_1^2,\dots,x_n^2)^T\|_{1}^{(1+p)/2}\geq\left(\sum_{i=1}^nx_i^2\right)^{\frac{1+p}{2}}=\|x\|_2^{1+p}$. Similarly, $\sum_{i=1}^n x_i^{1+q} = \sum_{i=1}^n(x_i^2)^\frac{1+q}{2}=\|(x_1^2,\dots,x_n^2)^T\|_{(1+q)/2}^{(1+q)/2}\geq n^{\frac{1-q}{2}}\|(x_1^2,\dots,x_n^2)^T\|_{1}^{(1+q)/2}\geq n^{\frac{1-q}{2}}\left(\sum_{i=1}^nx_i^2\right)^{\frac{1+q}{2}}=n^{\frac{1-q}{2}}\|x\|_2^{1+q}$. Hence, $\sum_{i=1}^n\Omega(x_i) = \gamma\left(a\sum_{i=1}^nx_i^{1+p}+bn^\frac{q-1}{2}\sum_{i=1}^nx_i^{1+q}\right)\geq \gamma\left(a\|x\|^{1+p}_2 + b\|x\|_2^{1+q}\right)=\hat{\Omega}(\|x\|_2)$. The proof that $\Psi(z)=z^{-1}\hat{\Omega}(|z|)$ satisfies Assumption \[Assump:AsympSys\] can be found in [@aldana2019design]. \[Remark:Symmetric\] In [@Zuo2014; @Ning2017b; @Wang2017b], fixed-time consensus protocols were proposed based on the function $\Omega(\cdot)$ given in Lemma \[Lemma:RhoLarge\], but restricted to the case where $p=1-s$ and $q=1+s$ with $0<s<1$. Notice that, in Lemma \[Lemma:RhoLarge\] such restriction is removed. Moreover, in this paper we show that predefined-time consensus can be obtained with a larger class of functions, such as those given in Lemma \[Lemma:ExamplesFunc\]. Based on predefined-time consensus functions $\Omega(\cdot)$ the following classes of consensus protocols for dynamic networks are proposed: 1. $$\label{Eq:ConsensusProtocolA} u_i=\kappa_i\sum_{j\in\mathcal{N}_i(\mathcal{X}_{\sigma(t)})}\sqrt{a_{ij}}e_{ij}^{-1}\Omega(|e_{ij}|), \ \ \ \ \ e_{ij}=\sqrt{a_{ij}}(x_j(t)-x_i(t)),$$ 2. $$\label{Eq:ConsensusProtocolB} u_i=\kappa_ie_i^{-1}\Omega(|e_i|), \ \ \ \ \ e_i=\sum_{j\in\mathcal{N}_i(\mathcal{X}_{\sigma(t)})}a_{ij}(x_j-x_i).$$ We show that, if the parameters $\kappa_i$ satisfy $\kappa_i>0$, then consensus is achieved with fixed-time convergence. Moreover, we derive the condition on $\kappa_i$ under which predefined-time convergence is obtained. Predefined-time average consensus for dynamic networks ------------------------------------------------------ In this subsection we focus on the analysis of the first class of consensus protocols, and derive the condition under which consensus on the average of the initial values of the agents is achieved.
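Before the formal analysis, the following forward-Euler sketch is included only as an illustration; the path graph, the particular choice $\Omega(z)=\pi z\left(|z|^{1/2}+|z|^{3/2}\right)$ (item *iii)* of Lemma \[Lemma:ExamplesFunc\] with $k=1$, $a=b=1$, $p=1/2$, $q=3/2$, $\gamma=\pi$) and the gain value are assumptions made for the example rather than the tight constants of the theorems below. The run shows the two qualitative features proved next: the average of the initial conditions is preserved, and the disagreement collapses well before $T_c$.

```python
# Sketch: Euler simulation of the first protocol on a fixed 5-node path graph with
# unit weights; the pairwise term e^{-1}*Omega(|e|) equals pi*sign(e)*(|e|^0.5+|e|^1.5).
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]        # path graph, all weights a_ij = 1
n, Tc = 5, 1.0

D = np.zeros((n, len(edges)))
for k, (i, j) in enumerate(edges):
    D[i, k], D[j, k] = 1.0, -1.0
Q = D @ D.T
lam2 = np.sort(np.linalg.eigvalsh(Q))[1]        # algebraic connectivity
kappa = 1.0 / (lam2 * Tc)                       # illustrative gain only

def F(v):                                       # elementwise kappa * v^{-1} * Omega(|v|)
    return kappa * np.pi * np.sign(v) * (np.abs(v) ** 0.5 + np.abs(v) ** 1.5)

x = np.array([4.0, -3.0, 0.5, 2.0, -5.0])
avg0, dt = x.mean(), 1e-4
for _ in range(int(1.5 * Tc / dt)):
    x = x - dt * (D @ F(D.T @ x))

print("average preserved:", np.isclose(x.mean(), avg0))
print("final disagreement:", np.max(x) - np.min(x))
```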
Notice that, the dynamics of the network under these protocols can be written as $$\label{ConsensusDynamicA} \dot{x}=-D(\mathcal{X}_{\sigma(t)})\mathcal{F}(D(\mathcal{X}_{\sigma(t)})^Tx),$$ where, for $z=[z_1 \ \cdots \ z_n]^T\in\mathbb{R}^n$, the function $\mathcal{F}:\mathbb{R}^n\rightarrow \mathbb{R}^n$ is defined as $$\label{Eq:Fp} \mathcal{F}(z)= \begin{bmatrix} \kappa_1z_1^{-1}\Omega(|z_1|) \\ \vdots \\ \kappa_nz_n^{-1}\Omega(|z_n|) \end{bmatrix}$$ To prove that is a predefined-time average consensus algorithm, the following result will be used. \[DeltaOrth\] Let the disagreement variable $\delta$ be such that $x = \alpha\mathbf{1} + \delta$, where $\alpha=\frac{1}{n}\mathbf{1}^Tx(t_0)$ is the average value of the nodes’ initial condition. Then, if the graph is connected, under the consensus protocol , $\delta^T \mathbf{1}=0$. Let $s_x=\mathbf{1}^Tx$ be the sum of the nodes’ values. Recall that $\mathbf{1}^TD(\mathcal{X}_{\sigma(t)})=0$, then $\dot{s}_x=-\mathbf{1}^T D(\mathcal{X}_{\sigma(t)})F_p(D(\mathcal{X}_{\sigma(t)})^Tx)=0$. Thus, $s_x$ is constant during the evolution of the system, i.e. $\forall t\geq 0$, $s_x(t)=\mathbf{1}^Tx(t_0)=s_x(t_0)$. Therefore, $$\mathbf{1}^T\delta=\mathbf{1}^Tx-\alpha n=s_x(t)-s_x(t_0)=0, \ \ \text{ for all } \ t\geq t_0.$$ (Predefined-time average consensus for fixed and dynamic networks) \[Th:ConsensusA\] Let $\mathcal{X}_{\sigma(t)}=\langle\mathcal{F},\sigma\rangle$ be a switched dynamic network formed by strongly connected graphs, and let $\Omega(\cdot)$ be a predefined-time consensus function with associated $d$, $\beta(\cdot)$ and $\hat{\Omega}(\cdot)$ such that holds. Then, if $\kappa_i>0$, $i=1,\ldots,n$, then is a consensus protocol with fixed-time convergence. Moreover, if $\kappa_i \geq \dfrac{\beta(\underline{m})^{d}}{\lambda\beta(\overline{m})^{2}T_c}$, $i=1,\ldots,n$, where $$\label{Eq:LambdaW} \lambda = \min_{\mathcal{X}_i\in\mathcal{F}} \lambda_2(\mathcal{Q}(\mathcal{X}_i)), \ \ \ \underline{m}=\min_{\mathcal{X}_i\in\mathcal{F}}|\mathcal{E}(\mathcal{X}_i)| \text{ and } \overline{m} = \max_{\mathcal{X}_i\in\mathcal{F}}|\mathcal{E}(\mathcal{X}_i)|.$$ then, is an average consensus algorithm for dynamic networks with predefined convergence time bounded by $T_c$, i.e. all trajectories of converge to the average of the initial conditions of the nodes in a time $T(x_0)\leq T_c$. Let $\delta = [\delta_1, \dots , \delta_n]^T$ be the disagreement variable $x(t) = \alpha\mathbf{1} + \delta(t)$, where $\alpha=\frac{1}{n}\mathbf{1}^Tx_0$, which by Lemma \[DeltaOrth\] satisfies $\mathbf{1}^T\delta = 0$. Note that $\dot{x} = \dot{\delta} = -D(\mathcal{X}_l)\mathcal{F}(D(\mathcal{X}_l)^T\delta)$. Consider the Lyapunov function candidate $$V(x) = \sqrt{\lambda}\beta(\overline{m}) \|\delta\|. \label{Eq:LyapunovProt1_0}$$ which is radially unbounded and satisfies $V(x)=0$ if and only if $\delta=0$. To show that consensus is achieved on dynamic networks under arbitrary switchings, we will prove that is a common Lyapunov function for each subsystem of the switched nonlinear system [@Liberzon2003 Theorem 2.1]. To this aim, assume that $\sigma(t)=l$ for $t\in[0,\infty)$. 
Then, it follows that $$\dot{V}(x) = \frac{\sqrt{\lambda}\beta(\overline{m})}{\|\delta\|}\delta^T\dot{\delta} = -\lambda\beta(\overline{m})^2V^{-1}\delta^TD(\mathcal{X}_l)\mathcal{F}(D(\mathcal{X}_l)^T\delta),$$ Let $v = D(\mathcal{X}_l)^T\delta = [v_1,\dots,v_m]^T$, therefore: $$\begin{aligned} \dot{V}(x) = -\lambda\beta(\overline{m})^2V^{-1} v^T\mathcal{F}(v) = -\lambda\beta(\overline{m})^2V^{-1}\sum_{i=1}^m\kappa_i\Omega\left(|v_i|\right).\label{prot1_firsteq}\end{aligned}$$ Using the fact that $\Omega(\cdot)$ is a predefined time consensus function, the right hand side of can be rewritten as: $$\begin{aligned} \lambda\beta(\overline{m})^{2}\beta(m)^{-d}V^{-1}\sum_{i=1}^m\beta(m)^ d\kappa_i\Omega\left(|v_i|\right) \geq & \kappa\lambda\beta(\overline{m})^{2}\beta(m)^{-d}V^{-1}\sum_{i=1}^m\beta(m)^d\Omega\left(|v_i|\right) \\ \geq &\kappa\lambda\beta(\overline{m})^{2}\beta(m)^{-d}V^{-1}\hat{\Omega}\left(\beta(m)\|v\|_2\right),\end{aligned}$$ where $\kappa = \min\{\kappa_1,\dots,\kappa_n\}$. Moreover, it follows from Lemma \[Lemma:Hardy\] and Lemma \[lemma:Lambda2\] that $$\|v\|_2 = \sqrt{v^Tv} = \sqrt{\delta^T \mathcal{Q}(\mathcal{X}_l)\delta} \geq \sqrt{\lambda}\|\delta\|=\beta(\overline{m})^{-1}V.$$ Therefore: $$\begin{aligned} \label{prot1_res1} V^{-1}\hat{\Omega}\left(\beta(m)\|v\|_2\right) \geq V^{-1}\hat{\Omega}\left(\frac{\beta(m)}{\beta(\overline{m})}V\right)\geq V^{-1}\hat{\Omega}\left(V\right) = \Psi(V)\end{aligned}$$ Moreover, the following inequality is obtained from , and : $$\dot{V}(x) \leq -\kappa\lambda\beta(\overline{m})^{2}\beta(m)^{-d}\Psi(V) \label{Eq:Predef1}$$ Then, according to Theorem \[thm:weak\_pt\], the disagreement variable $\delta$ converges to zero in a fixed-time upper bounded by $T_c$, and therefore protocol guarantees that the consensus is achieved in a fixed-time upper bounded by $$\sup_{x_0 \in \mathbb{R}^n} T(x_0)\leq\frac{\beta(m)^{d}}{\kappa\lambda\beta(\overline{m})^{2}}$$ Therefore, if $$\label{Eq:KappaProtA} \kappa_i\geq\kappa=\dfrac{\beta(\underline{m})^{d}}{\lambda\beta(\overline{m})^{2}T_c},$$ $i=1,\ldots,n$, then $$\dot{V}(x) \leq -\dfrac{\beta(\underline{m})^{d}}{\beta(m)^{d}T_c}\Psi(V) \leq -\frac{1}{T_c}\Psi(V)$$ and, since $\beta(\cdot)$ is a non-increasing function, then $V(x)$ converges to zero in a predefined-time upper bounded by $T_c$. Since the above argument holds for any connected $\mathcal{X}_l\in\mathcal{F}$, then protocol guarantees that the consensus is achieved, before a predefined-time $T_c$, on switching dynamic networks under arbitrary switching. Furthermore, it follows from Lemma \[DeltaOrth\] that the consensus state is the average of the initial values of the agents. \[Example1\] Consider a network composed of 10 agent and four different communication topologies, $\mathcal{F}=\{\mathcal{X}_1,\ldots,\mathcal{X}_4\}$ as shown in Figure \[fig:DFD\_net1\]-\[fig:DFD\_net4\] with algebraic connectivity $\lambda_2(\mathcal{Q}(\mathcal{X}_1))=0.2279$, $\lambda_2(\mathcal{Q}(\mathcal{X}_2))=0.6385$, $\lambda_2(\mathcal{Q}(\mathcal{X}_3))=0.2679$, and $\lambda_2(\mathcal{Q}(\mathcal{X}_4))=0.2603$ and the cardinality of the edge set given by $|\mathcal{E}(\mathcal{X}_1)|=12$, $|\mathcal{E}(\mathcal{X}_2)|=13$, $|\mathcal{E}(\mathcal{X}_3)|=10$ and $|\mathcal{E}(\mathcal{X}_4)|=10$. Thus, $\lambda=0.2279$, $\underline{m}=10$ and $\overline{m}=13$. The consensus protocol is selected as in with $\Omega(\cdot)$ given in Lemma \[Lemma:RhoLarge\], with $p = 0.2$, $q=1.1$, $a=1$ and $b=2$. 
According to Lemma \[Lemma:RhoLarge\], $\Omega(\cdot)$ is a predefined-time consensus function, satisfying with $\beta(n)=1$ and $\hat{\Omega}(z)=\gamma(a z^{p+1} + b z^{q+1})$. The gain $\kappa_i$ of the consensus protocol is set as $\kappa_i=\kappa$, $i=1,\ldots,n$, where $\kappa$ is given in with $T_c=1$. A simulation of the convergence of the consensus algorithm, under the switching dynamic network $\mathcal{X}_{\sigma(t)}$, with nodes’ initial conditions given by $x(t_0)=[23.13, 18.33, 8.01, 20.45, 7.57, -22.77, 12.40, -9.22, 22.02, -10.66]^T$ is given in Figure \[Fig:DFD\_plot\] (top), where the switching signal $\sigma(t)$ is shown in Figure \[Fig:DFD\_plot\] (bottom). Notice that the consensus state is the average of the nodes’ initial conditions, and that convergence is obtained before $T_c$. [3cm]{} ![The graphs forming the switching dynamic network of Example \[Ex:Fixed\]](Simulations/DFD_net0.pdf "fig:"){width="\linewidth"} [3cm]{} ![The graphs forming the switching dynamic network of Example \[Ex:Fixed\]](Simulations/DFD_net1.pdf "fig:"){width="\linewidth"} [3cm]{} ![The graphs forming the switching dynamic network of Example \[Ex:Fixed\]](Simulations/DFD_net2.pdf "fig:"){width="\linewidth"} [3cm]{} ![The graphs forming the switching dynamic network of Example \[Ex:Fixed\]](Simulations/DFD_net3.pdf "fig:"){width="\linewidth"} A simpler predefined-time consensus algorithm for static networks ----------------------------------------------------------------- In this subsection we will analyze the consensus protocol proposed in . We first show that is a consensus protocol with fixed-time convergence for static networks and we derive the conditions under which predefined-time convergence bounded is obtained. Afterwards, we show that for dynamic networks, is fixed-time convergence. Unlike protocol which requires, at each time instant, one evaluation of the nonlinear predefined-time consensus function (hereinafter introduced) per neighbors, the second protocol only requires a single evaluation and ensures predefined-time consensus for static topologies and fixed-time convergence for dynamic networks. \[Remark:Simpler\] \[Th:ConsensusB\] Let $\mathcal{X}$ be the connectivity graph for the static network, and let $\Omega(\cdot)$ be a predefined-time consensus function with associated $d$, $\beta(\cdot)$ and $\hat{\Omega}(\cdot)$ such that holds. Then, if $\mathcal{X}$ is a connected graph and $\kappa_i>0$, $i=1,\ldots, n$, then is a consensus algorithm with fixed-time convergence. Moreover, if $\mathcal{X}$ is a connected graph and $\kappa_i\geq\frac{1}{\lambda_2(\mathcal{Q})\beta(n)^{2-d}T_c}$, $i=1,\ldots, n$, then is a consensus algorithm with predefined-time convergence bounded by $T_c$, i.e. all trajectories of converges to the consensus state $x_1=\cdots=x_n$ in predefined-time bounded by $T_c$. First notice that the dynamic of the network under the consensus algorithm is given by $$\label{ConsensusDynamicB} \dot{x}=-\mathcal{F}(Q(\mathcal{X}_{\sigma(t)})x).$$ where $\mathcal{F}(\cdot)$ is defined as in . Thus, the equilibrium subspace is given by $\mathcal{Z}(x)=\{x:x_1=\cdots=x_n\}$, i.e. at the equilibrium, consensus is achieved. 
Consider the radially unbounded Lyapunov function candidate $$V(x) = \sqrt{\lambda_2(\mathcal{Q})}\beta(n)\sqrt{x^T \mathcal{Q} x}, \label{Eq:LyapunovStatic}$$ which satisfies that $V(x)=0$ if and only if $x\in\mathcal{Z}(x)$, and whose time-derivative along the trajectory of system yields $$\begin{aligned} \dot{V}(x) = -\frac{\sqrt{\lambda_2(\mathcal{Q})} \beta(n)}{\sqrt{x^T \mathcal{Q} x}}x^T\mathcal{Q}\mathcal{F}(\mathcal{Q}x) = -\lambda_2(\mathcal{Q})\beta(n)^2V^{-1}e^T \mathcal{F}(e) = -\lambda_2(\mathcal{Q})\beta(n)^{2-d}V^{-1}\sum_{i=1}^n\beta(n)^d\kappa_i\Omega(|e_i|). \label{Eq:LyapStatDot1}\end{aligned}$$ where $e = \mathcal{Q}x$. Using the fact that $\Omega(\cdot)$ is a predefined time consensus function, then it follows that $$\begin{aligned} \sum_{i=1}^n\beta(n)^d\kappa_i\Omega\left(|e_i|\right) \geq \kappa\sum_{i=1}^n\beta(n)^d\Omega\left(|e_i|\right) \geq \kappa\hat{\Omega}\left(\beta(n)\|e\|_2\right),\end{aligned}$$ where $\kappa = \min\{\kappa_1,\dots,\kappa_n\}$. Moreover, it follows from Lemma \[Lemma:Hardy\] and Lemma \[lemma:Lambda2\] that $$\|e\|_2 = \sqrt{e^Te} = \sqrt{x^T \mathcal{Q}^2x} \geq \sqrt{\lambda_2(\mathcal{Q})}\sqrt{x^T \mathcal{Q}x}=\beta(n)^{-1}V.$$ Therefore: $$\begin{aligned} \label{prot1_res12} \kappa\hat{\Omega}\left(\beta(n)\|e\|_2\right) \geq \kappa\hat{\Omega}\left(V\right) = \kappa V\Psi(V).\end{aligned}$$ Hence, it follows from , and that $$\dot{V}(x) \leq -\kappa\lambda_2(\mathcal{Q})\beta(n)^{2-d}\Psi(V).$$ and, according to Theorem \[thm:weak\_pt\], protocol guarantees that $x$ converges to $\mathcal{Z}(x)=\{x:x_1=\cdots=x_n\}$ in a fixed-time bounded by $$\sup_{x_0 \in \mathbb{R}^n} T(x_0)\leq\frac{1}{\kappa\lambda_2(\mathcal{Q})\beta(n)^{2-d}}$$ Thus, if $$\label{Eq:GainConsB} \kappa_i\geq\kappa=\frac{1}{\lambda_2(\mathcal{Q})\beta(n)^{2-d}T_c}$$ then $$\dot{V}(x) \leq -\frac{1}{T_c}\Psi(V),$$ and according to Theorem \[thm:weak\_pt\], protocol guarantees that the consensus is achieved before a predefined-time $T_c$. \[Example2\] Consider a network $\mathcal{X}$ composed of 10 agents with communication topology as shown in Figure \[Fig:FDD\_net\], which has an algebraic connectivity of $\lambda_2(\mathcal{Q}) = 0.27935$. Let the initial condition be $x(t_0)=[3.65,-8.99,-3.26,-0.03, 4.52,13.53,15.85,-0.53,-9.97,-13.91]^T$. Then, the convergence of algorithm under the graph topology $\mathcal{X}$ using $\Omega(z)=\frac{1}{p}\exp(z^p)z^{2-p}$, $T_c = 1$ and $p=0.5$ is shown in Figure \[Fig:FDD\_plot\] where $\kappa_i$ is selected as $\kappa_i=\kappa$, $i=1,\ldots,n$ with $\kappa$ as in with $\beta(n)=\frac{1}{n}$. ![Network $\mathcal{X}$ used for Example \[Example2\][]{data-label="Fig:FDD_net"}](Simulations/FDD_net.pdf){width="5cm"} (1,0.75) (0,0)[![Convergence of the consensus algorithm for Example \[Example2\] with $T_c=1$[]{data-label="Fig:FDD_plot"}](Simulations/FDD_plot.pdf "fig:"){width="\unitlength"}]{} (0.49054779,0.01822734)[(0,0)\[lt\]]{} (0.07238055,0.32870163) Notice that the convergence-time to the consensus state is a function of the algebraic connectivity of the network [@Olfati-Saber2007; @Ning2017b]. Hence, to compute the gain $\kappa$ to obtain predefined-time convergence, we assume knowledge of a lower-bound of the algebraic connectivity of the network. For static networks, there exist several algorithms for distributively estimating the algebraic connectivity [@Li2013b; @Aragues2014; @Montijano2017]. 
For instance, the algorithm proposed in [@Aragues2014] provides an asymptotic estimation, which is always a lower bound of the true algebraic connectivity. Another scenario is, given an estimate of the size of the network [@Shames2012; @You2017] to consider a worst-case $\lambda_2$ from information specific to the problem. \[Remark:Sym\] We have shown in Theorem \[Th:ConsensusB\], using the Lyapunov function , that is a predefined-time consensus algorithm for static networks. However, since the Lyapunov function is a function of the Laplacian matrix of the graph, then, the predefined-time convergence for switching dynamic networks cannot be justified as in the proof of Theorem \[Th:ConsensusA\]. To show that (at least) fixed-time stability is maintained under an arbitrary switching signal, non-smooth Lyapunov analysis [@Bacciotti2006] is used in the following theorem. Notice that in [@Zuo2014; @Ning2017b], the consensus protocol using the predefined-time consensus function given in Lemma \[Lemma:RhoLarge\] was proposed, but restricted only to the case where $p=1-s$ and $q=1+s$ with $0<s<1$, which was justified only as a fixed-time consensus protocol for static networks. \[Th:ConsensusB2\] If $\kappa_i>0$, $i=1,\ldots,n$, then is a consensus algorithm, with fixed-time convergence, for dynamic networks arbitrarily switching among connected graphs. Let $\mathcal{X}_\sigma(t)$ be a switching dynamic network, and consider the (Lipschitz continuous) Lyapunov function candidate $$V(x)=\max(x_1,\cdots,x_n)-\min(x_1,\cdots,x_n), \label{Eq:LyapMaxMin}$$ which is differentiable almost everywhere and positive definite. Notice that $V(x)=0$ if and only if $x\in\mathcal{Z}(x)=\{x:x_1=\cdots=x_n\}$. Let $\mathcal{X}_l$ be the current graph topology, then, if $x_j=\max(x_1,\cdots,x_n)$ and $x_i=\min(x_1,\cdots,x_n)$ for a nonzero interval $$\label{Eq:LyapMaxMinA} \mathcal{D}^{+}V(x)=\kappa_je_j^{-1}\Omega(|e_j|)-\kappa_ie_i^{-1}\Omega(|e_i|),$$ where $\mathcal{D}^{+}V(x)$ is the Dini derivative, and $e_i$ and $e_j$ are as in . However, since $x_j\geq x_k$, $\forall k\in\mathcal{V}(\mathcal{X})$, it follows that ${\mbox{sign}(()}e_j)=-1$ whenever $e_j\neq 0$, and thus $e_j^{-1}\Omega(|e_j|)\leq0$. By a similar argument it follows that $e_i^{-1}\Omega(|e_i|)\geq 0$. Thus, $\dot{V}(x)\leq 0$. Notice that the largest invariant set such that $\mathcal{D}^{+}V(x)=0$ is $\{x:\max(x_1,\cdots,x_n)=\min(x_1,\cdots,x_n)\}$, because otherwise, since the graph is connected, there is a path from $j$ to $i$, such that there exists a node $k$ that belongs to such path, such that $x_j=x_k=\max(x_1,\cdots,x_n)$ but $x_k\neq x_{k'}$ for some $k'\in\mathcal{N}_k(\mathcal{X}_l)$. Thus, $e_k\neq0$ and in turn makes $e_j<0$ and, therefore, $\mathcal{D}^{+}V(x)=0$ does not hold. Thus, using LaSalle’s invariance principle [@Khalil2002], converges asymptotically to $\{x:\max(x_1,\cdots,x_n)=\min(x_1,\cdots,x_n)\}$, which implies that $x_1=\cdots=x_n$, i.e. consensus is achieved asymptotically. Now, consider a switched dynamic network composed of connected graphs and driven by the arbitrary switching signal $\sigma(t)$. Since is a common Lyapunov function for the evolution of the system under each graph $\mathcal{X}_l$, the asymptotic convergence of the system is preserved in a dynamic network under arbitrary switching [@Liberzon2003]. 
Finally, since we have shown in the proof of Theorem \[Th:ConsensusB\] that, if the topology is static and connected, goes to zero in a fixed-time bounded by the constant $\frac{1}{\kappa\lambda_2(\mathcal{Q})\beta(n)^{2-d}}$ is the active topology, then, under this scenario, also goes to zero in a fixed-time bounded by the same constant, because if $x$ is such that is zero then also is zero. Since, in the switching case, is still decreasing and continuous, then it follows that goes to zero in a fixed-time lower or equal than the lowest time such that there exists a connected topology $\mathcal{X}_l$, such that the sum of time intervals in which $\mathcal{X}_l$ has been active is greater than $\frac{1}{\kappa\lambda_2(\mathcal{Q})\beta(n)^{2-d}}$. Since this upper bound is independent of the initial state of the agents, then fixed-time convergence is obtained under switching topologies. \[Ex:Fixed\] Consider a switching dynamic network $\mathcal{X}_{\sigma(t)}$ with $\mathcal{F}=\{\mathcal{X}_1,\mathcal{X}_2,\mathcal{X}_3,\mathcal{X}_4\}$ with graphs $\mathcal{X}_i$, $i=1,\ldots,4$ shown in Figure \[fig:FDD\_net1\]-\[fig:FDD\_net4\]. Figure \[Fig:FDD\_plot\] (top) show the evolution of the agents’ state, with the consensus protocol , under the switching dynamic network $\mathcal{X}_{\sigma(t)}$, with switching signal $\sigma(t)$ shown in Figure \[Fig:FDD\_plot\] (bottom) where $\kappa$ is selected as $\kappa=$. [3cm]{} ![The graphs forming the switching dynamic network of Example \[Ex:Fixed\]](Simulations/FDD_net0.pdf "fig:"){width="\linewidth"} [3cm]{} ![The graphs forming the switching dynamic network of Example \[Ex:Fixed\]](Simulations/FDD_net1.pdf "fig:"){width="\linewidth"} [3cm]{} ![The graphs forming the switching dynamic network of Example \[Ex:Fixed\]](Simulations/FDD_net2.pdf "fig:"){width="\linewidth"} [3cm]{} ![The graphs forming the switching dynamic network of Example \[Ex:Fixed\]](Simulations/FDD_net3.pdf "fig:"){width="\linewidth"} Application to predefined-time multi-agent formation control {#Sec:Formation} ============================================================ In this section, it is described how the proposed method can be applied to achieve a distributed formation with predefined-time convergence in a multi-agent system, where agents only have information on the relative displacement of their neighbors. Let $z_i$ be the $i$-th agent position and $d_{ji}^*$ be the displacement requirement between the $i-$th and the $j-$th agent in the desired formation. A displacement requirement $d_{ji}^*$ for all $i,j\in\mathcal{V}$ is said to be feasible if there exists a position $z^*$ such that $\forall i,j\in\mathcal{V}$, $z_j^*-z_i^*=d_{ji}^*$ where $z_i^*$ and $z_j^*$ are the $i$-th and $j$-th element of $z^*$, respectively. The aim of the multi-agent formation control problem is to guarantee that each agent converges to a position $z$ where the displacement requirement is fulfilled. Let $z$ represent the position of the agents and let $z^*=[z_1^*\ \cdots \ z_n^*]^T$ be a feasible displacement requirement for a desired formation. The following position update rules 1. $$\label{Eq:FormConsensusProtocolA} \dot{z}_i=\kappa\sum_{j\in\mathcal{N}_i(\mathcal{X}_{\sigma(t)})}\sqrt{a_{ij}}\hat{e}_{ij}^{-1}\Omega(\hat{e}_{ij}), \ \ \ \ \ \hat{e}_{ij}=\sqrt{a_{ij}}(z_j(t)-z_i(t)-d_{ji}^*),$$ 2. 
$$\label{Eq:FormConsensusProtocolB} \dot{z}_i=\kappa\hat{e}_i^{-1}\Omega(\hat{e}_i), \ \ \ \ \ \hat{e}_i=\sum_{j\in\mathcal{N}_i(\mathcal{X}_{\sigma(t)})}a_{ij}(z_j(t)-z_i(t)-d_{ji}^*).$$ solve the displacement formation control problem in predefined time if $\kappa$ is selected as in Theorem \[Th:ConsensusA\] for and as in Theorem \[Th:ConsensusB\] for . The proof follows by noticing that the dynamics of $x=z-z^*$, with $\dot{z}_i$ given in , coincide with the dynamics of , and the dynamics of $x=z-z^*$, with $\dot{z}_i$ given in , coincide with the dynamics of . Thus, the displacement formation control problem is a consensus problem over the variable $x=z-z^*$, and $x$ converges to $x=\alpha\mathbf{1}$ in a predefined time $T(x_0)\leq T_c$, where $\alpha$ is a constant value. Therefore, $z$ converges to $z=z^*+\alpha\mathbf{1}$. Notice that $z$ satisfies the displacement requirement, since $z_j-z_i=z_j^*-z_i^*$ is satisfied. \[ExampleFormation\] Consider a system composed of $20$ agents placed with uniformly distributed random initial conditions over $[1,3]^2$ in the $xy$-plane, shown with red dots in Figure \[Fig:formationTraj\]. The displacement conditions $d_{ij}^*$ are given such that the agents achieve a formation as given by the blue dots in Figure \[Fig:formationRef\]. Two agents are connected if the distance between them is less than or equal to a communication range of 0.5 m. Notice that, as the agents move, the connectivity graph changes. The formation control for each agent is designed as in , with predefined-time bound $T_c=1$ and $\lambda_2$ computed for the least connected case, which is a line graph. The convergence of the agents towards the formation is shown in green in Figure \[Fig:formationTraj\]. Notice that the agents converge to a formation where the displacement condition in Figure \[Fig:formationRef\] is satisfied but where a global position for the nodes is not predetermined. [7cm]{} ![Formation trajectories for the Example \[ExampleFormation\][]{data-label="Fig:formationTraj"}](Simulations/formation_goal.pdf "fig:"){width="7cm"} [7cm]{} ![Formation trajectories for the Example \[ExampleFormation\][]{data-label="Fig:formationTraj"}](Simulations/formation3.pdf "fig:"){width="7cm"} Conclusions and Future Work {#Sec.Conclu} =========================== A new class of consensus algorithms with predefined-time convergence has been introduced in this work. These results allow the design of a consensus protocol which solves the average consensus problem with predefined-time convergence, even under switching dynamic networks. A computationally simpler predefined-time consensus algorithm for fixed topologies was also proposed, with the trade-off that it does not converge to the average; moreover, an additional analysis proved that fixed-time convergence is also maintained under dynamic networks. These results were applied to the multi-agent formation control problem, guaranteeing predefined-time convergence. As future work, we consider the application of these results to provide predefined-time convergence to different consensus-based algorithms, such as distributed resource allocation [@Xu2017; @Xu2017b]. Some useful inequalities ======================== In this appendix, we recall the inequalities used throughout the manuscript and . An interested reader may review [@Mitrinovic1970; @Hardy1934; @Cvetkovski2012; @Hardy1988].
\[Lemma:Hardy\] [@Hardy1988] Let $x=[x_1 \ \cdots \ x_n]^T\in\mathbb{R}^n$ and $$\Vert x \Vert_p = \left(\sum_{i=1}^n|x_i|^p\right)^\frac{1}{p}.$$ If $s>r>0$, then the following inequality holds: $$\|x\|_s\leq\|x\|_r\leq n^{\frac{1}{r}-\frac{1}{s}}\|x\|_s.$$
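For instance, taking $r=1$ and $s=2$ gives the familiar relation between the taxicab and Euclidean norms, $$\|x\|_2\leq\|x\|_1\leq \sqrt{n}\,\|x\|_2,$$ and the vector $x=\mathbf{1}\in\mathbb{R}^n$ shows that the upper bound is tight, since $\|\mathbf{1}\|_1=n$ while $\sqrt{n}\,\|\mathbf{1}\|_2=\sqrt{n}\cdot\sqrt{n}=n$.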
0.63563829787234,
29.875,
17.125
]
|
It's a bitter feeling in your mouth especially since it was Posted Dec 18 2012 2:09am " Rovell wrote. "Adidas spokesman Michael Ehrlich says the company has sold out of the socks ” Spiller said. “To be honest,'' Foles said on Monday. “Very solid defense. They're going to move around a lot the Giants gained just 66 yards on three returns against the Falcons. A lot of that had to do with the fact that Falcons P/K Matt Bosher had four touchbacks and three other kicks that landed deep in the end zone. Of course, but to attempt a field goal on fourth-and-1 from the Bears' 24 (it missed).Also wholesale jerseys who began the season on the practice th the injuries piling up at linebacker, but the Steelers' head-to-head record against the Ravens (1-1) and Bengals (2-0) would be 3-1 combined. And that would be the best of the three teams."We are fortunate they must release a player. Mendenhall was suspended this past week for not attending the Dec. 9 home game against San Diego. Mendenhall was told that he would be deactivated that day and said he didn't believe he still had to be at the game.Tampa Bay is certainly out of playoff contention. fans demonstrated with relentless boos that they have lost respect for their fense: FTotal offense: A dismal 164 yards with one late garbage TD " the spokesperson said. "It was an unfortunate situation, who said he hated the Packers and their players change his teammates and he just keeps on kicking. Manning has 31 TD's against 10 interceptions and has led the Denver Broncos to nine-straight wins to become AFC West division champs and grab a solid second seed in the AFC playoff th nine wins in a row Manning is out there making John Elway look like the executive of the year and the entire team believes they can make it to the Super Bowl, every practice new nfl jerseys really. We've been playing and there's no time to say you're a rookie or anything. I've had the opportunity to play in several games and grow as a player," Allen ght end Richard Gordon injured a bicep allowing the Bears to recover inside the Packers 20.K Mason Crosby, but 5-foot-8 sparkplug DuJuan Harris continued to be an electric change-of-pace back have that kind of feedback yet.” Bradshaw wouldn't specify if the injury was to his MCL. rookie David Snow has stepped in at ats' season over: LB Arthur Moats is out for the rest of the season with an ankle injury with injuries crippling their offensive line and linebacking corps. The Broncos attacked both Sunday and never trailed, OL Jim Cordle then turned up the pass-rush heat on the overmatched Brady Quinn. They had a season-high four sacks,” Crocker said. “A lot of those guys have been there for a long time cheap jerseys from china my son Garrett battled addiction for many years. While there were some victories along the way, but it doesn't account for the failures in the passing game. Manning tossed another interception and finished the game having completed just 13 of 25 passes. RG Jim Cordle -- who replaced RG Chris Snee (hip) -- said the Falcons were able to focus on rushing the passer Palmer didn't need to do much or take many chances. The Raiders, and a Browns spokesperson said the team is still investigating the incident. "We are still in the process of collecting all of the information regarding the incident from the various security groups who work the games Allen kept him out for precautionary reasons."I think he'll continue to get better. the Giants can secure a wild card berth Coughlin saw other issues as well. 
“[KR David Wilson] shouldn't have brought the first one out, San Francisco's stock shot up once again. The rout-turned-choke-turned-win re-established the 49ers as a team with legit Super Bowl expectations. The win means San Francisco's playoff spot is in the bag and Sunday night's game at NFC West rival Seattle isn't as meaningful – the 49ers will need a win against the Seahwaks or at home against Arizona the following week to lock up their second straight division fense: A Prior to Sunday night's 41-34 win against New England new nfl jerseys Spiller will wait until January to reflect on his personal accomplishments this season.“It'd feel better if we would've got the win yesterday, including Cleveland Police and Tenable successes came in nearly every area. They've won big games, led the rushing attack with 42 yards on six carries. TE Antonio Gates' 8-yard TD reception at the start of the fourth quarter was his 81st career TD and placed him in a tie with WR Lance Alworth for the franchise record. A somber Gates had a hard time celebrating the milestone because of the team's overall performance. “It's a bitter feeling in your mouth especially since it was on the w has he changed since that shaky debut? “I think just the feel of the game. Just playing. but the Giants were playing from behind and it's difficult to continue handing the ball to RB David Wilson in that situation. Wilson isn't the best pass protector we were encouraged by his apparent progress but, according to Phil rian Peterson – “MVP Awards are biased to the quarterback " Allen said. "For us to be able to go out there and do that, but he hasn't done it a lot in game situations. I think he's got to get a better feel for judging the ball coming off the punter's foot and getting a better initial break on it so he can catch some of those punts. Especially the first one that was a little short but 5-foot-8 sparkplug DuJuan Harris continued to be an electric change-of-pace back," Allen andian Ross replaced Adams and had his most extensive playing time since the Raiders signed him off of Green Bay's practice squad on Sept. 19."I think that was good for him to get some playing experience out there ” Crocker said. “A lot of those guys have been there for a long time. | Mid | [
0.5591397849462361,
32.5,
25.625
]
|
Lithuanian Finance Minister Vilius Šapoka was elected vice-chairman of the Board of Governors of the European Bank for Reconstruction and Development (EBRD) at the board's annual meeting in Sarajevo on Wednesday. “Vilius Šapoka, as vice-chairman of the EBRD Board of Governors, will take part in addressing issues concerning the preparation of the new EBRD Strategy for 2021–2025 and will join the selection process of candidates in the EBRD Presidential elections to be held next year,” the Finance Ministry said in a press release. Representatives of more than 60 countries take part in the EBRD annual meeting in the capital of Bosnia and Herzegovina. The EBRD Board of Governors is the bank's highest governing body. Šapoka is the EBRD governor from Lithuania. The EBRD, an international financial institution founded in 1991, is owned by 67 countries from five continents, as well as the European Union and the European Investment Bank. | High | [
0.6650602409638551,
34.5,
17.375
]
|
Q: npx create-react-app command does not work, returns ES module error instead

Here is the command that I ran to try to create a React app and the resulting error log. I have been able to successfully run it three times before with the command $ npx create-react-app, but now every time that I run it, it does not work and instead returns an error related to ES modules. I have been experimenting with many ways to integrate React with Django, but I don't think that I edited any core files in doing so that would have caused this error. I am completely new to React and Node.js so any advice would be greatly appreciated.

npx: installed 99 in 7.591s
Must use import to load ES Module: /Users/(username)/.npm/_npx/27993/lib/node_modules/create-react-app/node_modules/is-promise/index.js
require() of ES modules is not supported.
require() of /Users/(username)/.npm/_npx/27993/lib/node_modules/create-react-app/node_modules/is-promise/index.js from /Users/(username)/.npm/_npx/27993/lib/node_modules/create-react-app/node_modules/run-async/index.js is an ES module file as it is a .js file whose nearest parent package.json contains "type": "module" which defines all .js files in that package scope as ES modules.
Instead rename /Users/(username)/.npm/_npx/27993/lib/node_modules/create-react-app/node_modules/is-promise/index.js to end in .cjs, change the requiring code to use import(), or remove "type": "module" from /Users/(username)/.npm/_npx/27993/lib/node_modules/create-react-app/node_modules/is-promise/package.json.

A: This seems to be a recent problem with npm. There is an issue open as of the last few hours and it seems like people are working on it. I don't think it's anything to do with your Django/React project. The issue ticket

While the issue is being fixed: try installing node version 12.12.0 as shown below and run create-react-app again.

nvm install 12.12.0
nvm use 12.12.0
0.628019323671497,
32.5,
19.25
]
|
2005 PDC World Darts Championship The 2005 Ladbrokes.com World Darts Championship was the 12th edition of the PDC World Darts Championship, and was held at the Circus Tavern, Purfleet, between 26 December 2004 and 3 January 2005. Phil Taylor went on to clinch his 12th World Championship (10 in the PDC, 2 in the BDO) with a 7–4 final victory over Mark Dudbridge. The tournament format remained the same as the previous year, with a preliminary round featuring eight international players against eight qualifiers from the Professional Dart Players Association (PDPA) qualifying tournament. The winners were then to meet the players ranked between 25 and 32 in the PDC world rankings. John Lowe, playing in his last and 28th consecutive world championship, suffered a defeat to Canadian John Verwey. The match went to a tie-break 11th leg in the deciding set. The final between Taylor and Dudbridge looked for a long time as though it would be as close as the previous year's classic between Taylor and Kevin Painter. Dudbridge led by 2 sets to 1, and having fallen 2–3 behind managed to level again. But Taylor then produced a surge to take the next three sets, and the 10th set provided a mere consolation for Dudbridge. Taylor prevailed 7–4 to claim the £60,000 first prize with his 12th title.
0.6640625,
31.875,
16.125
]
|
MANHATTAN, KAN. -- People who believe that fate and chance control their lives are more likely to be superstitious -- but when faced with death they are likely to abandon superstition altogether, according to a recent Kansas State University undergraduate research project. The project, led by Scott Fluke, a May 2010 K-State bachelor's graduate in psychology, Olathe, focuses on personality traits that lead to superstition. Fluke received a $500 Doreen Shanteau Undergraduate Research Fellowship in 2009 to work with the team of Russell Webster, graduate student in psychology, Shorewood, Ill., and Donald Saucier, K-State associate professor of psychology. For the project, "Re-Examining the Form and Function of Superstition," the team defined superstition as the belief in a causal relationship between an action, object, or ritual and an unrelated outcome. Such superstitious behavior can include actions like wearing a lucky jersey or using good luck charms. After performing two studies, the researchers developed three reasons for superstitious behavior: individuals use superstitions to gain control over uncertainty; to decrease feelings of helplessness; and because it is easier to rely on superstition instead of coping strategies. "People sometimes fall back on their superstitions as a handicap," Saucier said. "It's a parachute they think will help them out." In the first study, the researchers conducted questionnaires with 200 undergraduates, asking about how pessimistic they were, whether they believed in chance or fate, if they liked to be in control and other questions. One of the major discoveries was that people who believe that chance and fate control their lives are more likely to be superstitious. In the second study the researchers wanted to know how participants reacted to death, and asked them to write about how they felt about their own death. The team was surprised to find that participants' levels of superstition went down when they thought about their own death, which the researchers attributed to death being a situation of extreme uncertainty. "We theorized that when people thought about death, they would behave more superstitiously in an effort to gain a sense of control over it," Fluke said. "What we didn't expect was that thinking about death would make people feel helpless -- like they cannot control it -- and that this would actually reduce their superstitious belief." Fluke got the idea for his research in an undergraduate methods research course his first semester at K-State, when he realized there were many unanswered questions about psychology and superstition. He decided to pursue the topic further as a research project. "I was interested in superstition because it frustrates me when people do things that don't make sense," Fluke said. "It boggled me that people would use a good luck charm to do well on a test rather than studying for it. We wanted to know why people would go about almost actively hurting themselves." The research is part of Saucier's overall research program, and the team is now preparing results of their study for publication.
Saucier offers some tips to avoid superstitious behavior: Don't believe in bad luck and take some ownership over what control you do have in situations. Sometimes we use bad luck to let ourselves off the hook, Saucier said, but we should instead focus on what we can do to avoid difficult situations in the first place. Be decisive and proactive. People who are less decisive believe in superstition more, Saucier said, and those who are proactive are less superstitious. Don't be in a situation where you have to rely on bad luck. Bad luck would never occur if only good things happened. If something bad happens and you call it bad luck, do it as a coping mechanism after the fact rather than before the event, Saucier said. | Mid | [
0.592,
37,
25.5
]
|
"Hysteria" in clinical neurology. Hysteria is an ancient word for a common clinical condition. Although it no longer appears in official diagnostic classifications, "hysteria" is used here as a generic term to cover both "somatoform" and "dissociative" disorders as these are related psychopathological states. This paper reviews the clinical features of four hysterical syndromes known to occur in a neurologist's practice, viz conversion, somatization and pain disorders, and psychogenic amnesia. The presence in the clinical history of a multiplicity of symptoms, prodromal stress, a "model" for the symptom(s), and secondary reinforcement all suggest the diagnosis, and minimise the need for extensive investigations to rule out organic disease. Psychodynamic, behavioral, psychophysiologic and genetic factors have been proffered to explain etiology. Appropriate treatment involves psychotherapeutic, behavioral and pharmacological techniques. A basic requirement is to avoid errors of commission such as multiple specialist referrals and invasive diagnostic and treatment procedures. Hysteria is a remediable condition if identified early and managed appropriately. | High | [
0.6570048309178741,
34,
17.75
]
|
Q: What is the idiomatic way for an SBT project to publish 2 artifacts? I have a project that uses SBT as build system and that combines Scala/Java and native sources with JNI. To stay as flexible as possible, my current plan to publish this kind of project is to publish two different jars: one containing pure bytecode (the referencing of the native binary is left up to the end-user) and one fat jar that also contains the native libraries and extracts them automatically. To generate a fat jar, I created a task called packageFat that essentially copies the task packageBin with additional mappings to the native libraries and the suffix '-fat' appended to the name. The relevant part of the build configuration can be viewed here: https://github.com/jodersky/flow/blob/master/project/nativefat.scala However, with this kind of configuration, any project that depends on mine and wishes to include the fat jar has to declare a dependency in this form: libraryDependencies += "<organization>" %% "<name>" % "<version>" artifacts Artifact("<name>-fat", "jar", "jar") I know that distributing projects using JNI is kind of clumsy, but the part after the last '%', makes the dependency really cumbersome. So my question is: what is the idiomatic way in SBT to publish one normal jar and one fat jar from one project? A: I would create a multi project build file, with a core sub project that will be published "plain", and a fat sub project which will publish with JNI, and then you could use two different artifact names, like foo-core and foo-fat. In fact, foo-fat could depend on foo-core, and its own artifact would only consist of the JNI stuff. | High | [
0.684807256235827,
37.75,
17.375
]
|
'use strict';
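// angularjs-crypto: an AngularJS module that registers an $http interceptor which encrypts
// request fields and decrypts response fields whose names end with a configurable pattern
// (default "_enc"), with optional whole-body and query-string encoding, using a pluggable
// CryptoJS-based cipher configured through the cfCryptoHttpInterceptor provider.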
function missingCryptoJs(shouldCrypt, cfg, q) {
if (!shouldCrypt) return false;
if (cfg.key().length <= 0) return false;
    if (typeof(CryptoJS) === 'undefined') return true;
    return false;
}
var cryptoModule = angular.module('angularjs-crypto', []);
cryptoModule.config(['$httpProvider', function ($httpProvider) {
var interceptor = ['$q', 'cfCryptoHttpInterceptor', function ($q, cfg) {
return {
request: function (request) {
var shouldCrypt = (request.crypt || false);
var pattern = (request.pattern || cfg.pattern);
if (missingCryptoJs(shouldCrypt, cfg, $q)) {
                    return $q.reject('CryptoJS missing');
}
var data = request.data;
if (shouldCrypt === true) {
if (checkHeader(cfg, request.headers['Content-Type'])) {
log(cfg, "intercept request " + angular.toJson(data));
if (!data) return $q.reject(request);
encrypt(data, cfg, pattern);
} else if (( typeof( request.params ) !== "undefined")) {
encrypt(request.params, cfg, pattern);
}
} else if ((request.fullcryptbody || false)) {
if (!data) return $q.reject(request);
                    request.data = cfg.plugin.encode(JSON.stringify(data), cfg.key());
log(cfg, "encode full body " + request.data);
} else if ((request.fullcryptquery || false) && ( typeof( request.params ) !== "undefined")) {
log(cfg, "encode full query " + request.params);
                    request.params = {query: cfg.plugin.encode(JSON.stringify(request.params), cfg.key())};
log(cfg, "encode full query " + request.params);
}
return request;
},
response: function (response) {
var shouldCrypt = (response.config || false).crypt && defaultVal(response.config.decrypt, true);
var pattern = response.config && response.config.pattern;
pattern = (pattern || cfg.pattern);
if (missingCryptoJs(shouldCrypt, cfg, $q)) {
                    return $q.reject('CryptoJS missing');
}
if (shouldCrypt === true) {
if (checkHeader(cfg, response.headers()['content-type'])) {
var data = response.data;
log(cfg, "intercept response " + angular.toJson(data));
if (!data)
return $q.reject(response);
decrypt(data, cfg, pattern);
}
} else if ((response.config.decryptbody || false) &&
checkHeader(cfg, response.headers()['content-type'])) {
var data = response.data;
                    if (!data) return $q.reject(response);
response.data = JSON.parse(cfg.plugin.decode(data, cfg.key()));
log(cfg, "encode full body " + response.data);
}
return response;
}
};
}]
$httpProvider.interceptors.push(interceptor);
}]);
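// Configuration provider for the interceptor: a Base64 key (static value or supplier function),
// the field-name pattern, the cipher plugin, the Content-Type whitelist and a logging switch.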
cryptoModule.provider('cfCryptoHttpInterceptor', function () {
this.base64Key;
this.base64KeyFunc = function(){return ""};
this.pattern = "_enc";
this.logging = false;
this.plugin = new CryptoJSCipher(CryptoJS.mode.ECB, CryptoJS.pad.Pkcs7, CryptoJS.AES);
this.contentHeaderCheck = new ContentHeaderCheck(['application/json', 'application/json_enc']);
this.responseWithQueryParams = true;
this.$get = function () {
return {
base64Key: this.base64Key,
base64KeyFunc: this.base64KeyFunc,
key: function() {
return this.base64Key || this.base64KeyFunc()
},
            pattern: this.pattern,
            logging: this.logging,
plugin: this.plugin,
contentHeaderCheck: this.contentHeaderCheck,
responseWithQueryParams: this.responseWithQueryParams
};
};
});
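// Direction-selecting wrappers around crypt(): decode for responses, encode for requests.
// Both skip null or undefined payloads.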
function decrypt(data, cfg, pattern) {
if ( typeof(data) !== "undefined" && data !== null ) {
crypt(data, pattern, cfg.plugin.decode, cfg.key())
} else {
log("data for decryption was null!")
}
}
function encrypt(data, cfg, pattern) {
if ( typeof(data) !== "undefined" && data !== null ) {
crypt(data, pattern, cfg.plugin.encode, cfg.key())
} else {
log("data for encryption was null!")
}
}
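// Recursively walks the payload and applies `callback` (encode or decode) to every value whose
// key ends with `pattern`; an undefined or "*" pattern applies the callback to every value.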
function crypt(events, pattern, callback, base64Key) {
if(events === "undefined" || events === null)
return;
var keys = Object.keys(events);
for (var i in keys) {
if (pattern !== undefined && events[keys[i]] !== null && events[keys[i]] !== "undefined") {
if (keys[i].endsWith(pattern))
events[keys[i]] = callback(events[keys[i]], base64Key);
}
if (typeof events[keys[i]] === 'object')
crypt(events[keys[i]], pattern, callback, base64Key)
if (pattern === undefined || pattern === "*") {
events[keys[i]] = callback(events[keys[i]], base64Key);
}
}
}
function checkHeader(cfg, contentType) {
if(!contentType) { return false; }
return(cfg.contentHeaderCheck.check(contentType));
}
String.prototype.beginsWith = function (string) {
return(this.indexOf(string) === 0);
};
String.prototype.endsWith = function (str) {
var lastIndex = this.lastIndexOf(str);
return (lastIndex !== -1) && (lastIndex + str.length === this.length);
};
function defaultVal(val, defaultVal){
if(typeof val==='undefined'){
return defaultVal;
}else{
return val;
}
};
function log(cfg, message){
if (cfg.logging)
console.log(message);
};
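// Content-Type whitelist: check() returns true when the given header value begins with one of
// the configured media types (e.g. "application/json").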
function ContentHeaderCheck(headerToCrypt) {
var headerToCrypt = headerToCrypt;
return {
check: function(contentType) {
for (var index = 0; index < headerToCrypt.length; index++) {
var result = contentType.beginsWith(headerToCrypt[index]);
if(result) {return true};
}
return false;
}
}
} | Low | [
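// Usage sketch (illustrative only, not part of the library above). A minimal, hypothetical
// AngularJS app wiring up the interceptor: the provider is configured with a Base64 key in a
// config block, and a request opts in to field-level crypto through the custom `crypt` flag
// read by the interceptor. The module name `demoApp`, the endpoint URL and the key value are
// placeholders, not taken from the library.
angular.module('demoApp', ['angularjs-crypto'])
    .config(['cfCryptoHttpInterceptorProvider', function (cryptoProvider) {
        cryptoProvider.base64Key = 'REPLACE_WITH_BASE64_AES_KEY'; // placeholder key
        cryptoProvider.pattern = '_enc'; // only fields whose names end in "_enc" are processed
        cryptoProvider.logging = true; // log intercepted payloads to the console
    }])
    .run(['$http', function ($http) {
        // "secret_enc" is encrypted before the POST is sent and decrypted again in the response.
        $http.post('/api/messages', {id: 1, secret_enc: 'top secret'}, {crypt: true})
            .then(function (response) {
                console.log(response.data);
            });
    }]);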
0.5,
29,
29
]
|
North Country economic council gets $90.2M from state (2nd update) December 19, 2012 The state has awarded the North Country Regional Economic Development Council $90.2 million for 82 economic development projects. The award was announced during a ceremony in Albany Wednesday morning. This is the second year in a row the North Country council has been among the top prize winners; last year, it pulled in $103.2 million in grant funding. "We've seen a wonderful collaboration develop across the North Country under the regional economic council model," state Sen. Betty Little said in a prepared statement. "The shared vision, teamwork and implementation of a solid economic development strategy once again pays off today with another top award going to the North Country Regional Council. I want to congratulate our two co-chairs, Garry Douglas and Tony Collins, and every individual who has contributed to this plan and has made strengthening our local economy the top priority." Article Photos A model of the Wild Walk, an 850-foot-long elevated walkway into the treetops of the Wild Center museum property in Tupper Lake, is seen last week at the museum. The state on Wednesday announced a $1 million grant for the $4.5 million project through the North Country Regional Economic Development Corporation.(Enterprise photo — Jessica Collier) A total of $738 million in awards were announced Wednesday. The regional councils were established in 2011 as a way to reinvigorate the state's approach to economic development grants. Gov. Andrew Cuomo said in a press release that prior, top-down economic development policies didn't work. "The strategic plans created during this process have given each region a comprehensive, locally created plan for future economic growth," he said. The second round will fund North Country projects that support transportation, biotech, affordable housing, tourism and infrastructure. The state awarded more than $3 million to increase broadband Internet connections in Hamilton County: $1.7 million for the whole county and $1.37 million specifically for the Long Lake area. The state will also give $2.5 million for a Community Transformation Tourism Fund, a specialized loan fund for tourism-related ventures across the North Country that will be established and run by the Adirondack Economic Development Corporation, based in Saranac Lake. Another $2.5 million will go toward general municipal water, sewer, road and port enhancements. The Tupper Lake community was one of the big winners this year. The village was awarded $445,000 to implement projects in its Waterfront Revitalization Strategy, the Raquette River Blueway Trail Plan and the 90-Miler Blueway Trail Strategy. The village will also get $300,000 to complete the design and engineering for a new biomass heating system for buildings in the village. Another $36,000 was awarded to Tupper Lake Crossroads LLC to conduct studies to rebuild a local hotel and restaurant. But the biggest grant in town went to the Wild Center nature museum - $1 million to help build the Wild Walk, an 850-foot elevated walkway in the treetops with interactive exhibits. It's expected to cost $4.5 million, about $500,000 of which the museum has already invested in planning and development. The council said in a press release that it will be a "major added attraction at the Wild Center to support tourism development in the region." 
"We have worked on this Wild Walk project behind closed doors for five years," museum Executive Director Stephanie Ratcliffe said in a prepared statement. "We designed it so it could become an iconic symbol for this inventive and creative region. We're just thrilled that the North Country's council embraced it." The Lake Placid area did well, too. That village was awarded $1,012,006 for its Chubb River dam removal and restoration project, part of the village's sewer trunk line replacement project. The town of North Elba and village of Saranac Lake were awarded $463,200 to develop athletic fields on capped landfills in both communities, and to pay for a new Lake Placid-North Elba comprehensive plan. Just outside Lake Placid, the Adirondack Mountain Club will see $221,073 to upgrade its facilities at Heart Lake, including renovation on the High Peaks Information Center at the busy High Peaks trailhead there, and to build an additional campground loop and related infrastructure. Another $251,150 will go to the nearby town of Wilmington to improve three waterfront parks and to develop a hamlet expansion plan. U.S. Rep. Bill Owens, D-Plattsburgh, praised the council for its efforts to boost the region's economy. "Even as the economy continues to improve, we must do more to create opportunities for businesses and workers alike," Owens said in a press release. "I applaud the work that's being done through the Economic Development Councils to achieve these goals, and look forward to another year of successful efforts to support the local economy." In remarks delivered prior to the award announcements, Cuomo said the perception of New York as a bad place to do business is changing. He said the property tax cap has made a "significant difference" and that changes to the state's income tax rates will result in lower taxes for all New Yorkers. Cuomo said New Yorkers can't expect the economy to turn around in two years, but he cited recent polls that show consumer confidence is up. He said the national economy has been more stagnant than the state's and that when the nation's economic recovery starts picking up steam, New York will do even better. | High | [
0.6666666666666661,
33.5,
16.75
]
|