| text (string, lengths 8 to 5.74M) | label (string, 3 classes) | educational_prob (list, length 3) |
|---|---|---|
Category: Nutrition

The chemicals – often known as PFRs, or organophosphate flame retardants – can be used to make clothing or upholstery fire-resistant and may be found in nail polish, yoga mats and car seats.

I hardly ever give my opinion about a business, but I feel I should make an exception in this case. Natural Health Center (NHC) is one of the finest companies in the Kalamazoo area. Not only is the entire staff trained before they ever get out on the floor, but this family-owned, local business gives back in so many ways to everyone with whom they come in touch. I highly suggest you avail yourself of the excellent products, the variety of products, and the care with which they address allergies and food sensitivities. They even have open houses where you can "try before you buy" to eliminate buyer's remorse. Their aim is to preserve …

It's not easy to build confidence on camera, and there was a time when I was very afraid of doing such a thing. Now it's much easier than it used to be, thanks to an amazing person I met online. This woman, called Naimh Arthur, invited me to a 30-day challenge and I haven't looked back since. Now I can create a video, and there's room for growth. So, if you are a bit afraid of being on camera, whether for a personal reason or for your business, this is a great place to start. Just click on the picture and link below to join this FREE 10 Day Shine Video challenge.

How to Harvest: Mandarins must be harvested as soon as they turn orange in order to preserve their flavor. When the …

Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. By using this website, you agree to the Terms of Use and Privacy Policy. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization.

This is especially true for people with sensitive skin. Most aftershave lotions contain alcohol, which can really dry the skin when used, making razor burn worse than ever.

Lean Poultry Meat. Aiming for a flat stomach doesn't mean skipping meat. Nice, lean cuts of chicken or turkey breast are the best alternatives. These meats contain niacin, protein, vitamin B6 and much more. These nutrients complement your meal while easing your digestion.

Boil two tablespoons of flax seed in three quarts of water and let cool. Store in the fridge. Each morning add two ounces to orange juice and blend with the flax. Drink a …

Have you ever drunk lemon peel tea? It can be very tasty! It also has a medicinal effect, so it helps protect your health. It is slightly bitter, with a refreshing lemon aroma and a refreshing taste; once you drink it, it seems to be addictive. It becomes a strong ally for everyday health.

Integrative physician Dr. Steven Masley reviewed the research on the company's website to help you decide if Protandim is worth it.

I love this hubpage and the helpful information you're giving! Actually, I have no problem with my digestive system (as you may well know each Sunday), but I strongly agree that people must eat sensibly and take good care of their health.

Preheat oven to 350F. Prepare a mini muffin tin by oiling every cup. Combine almond meal, spelt flour, baking powder and salt. Whisk until combined. Separately, mix …

This raises the concern that people who might otherwise stop smoking could switch to American Spirit instead, believing it to be safer, the researchers write in the journal Tobacco Control.

Be ready for emergencies. Anaphylactic reactions caused by food allergies can be life threatening. People who have experienced anaphylaxis should strictly avoid the foods that triggered the reaction. Those with severe food allergies may need to carry, and know how to use, injectable epinephrine and antihistamines to treat reactions due to accidental ingestion. They should also wear an ID bracelet that describes the allergy. If you have an anaphylactic reaction after eating a food, it is essential to have somebody drive you to the nearest emergency room, even if symptoms improve. For the correct diagnosis and treatment, be sure to get follow-up care from an allergist.
| Low | [0.49681528662420305, 29.25, 29.625] |
If the letter is alright, I can have it printed formally. I never heard back from Gareth with respect to any comments. Just let me know what you would like to do. Sara

David Roland@EES 04/17/2000 01:41 PM
To: Sara Shackleton/HOU/ECT@ECT
cc: Gareth Bahlmann/HOU/ECT@ECT
Subject: Re: letter agreement between EES and Enron Corp.

Sara, One more reminder for you regarding the letter agreement. Thanks, David

---------------------- Forwarded by David Roland/HOU/EES on 04/17/2000 01:40 PM ---------------------------

David Roland 04/13/2000 10:24 AM
To: Sara Shackleton/HOU/ECT@ECT
cc: Gareth Bahlmann/HOU/ECT@ECT
Subject: Re: letter agreement between EES and Enron Corp.

Sara, We still need to get this letter agreement done for Hawaii. Please give me a call when you have some time to discuss it. David

Sara Shackleton@ECT 03/31/2000 04:47 PM
Sent by: Kaye Ellis@ECT
To: David Roland/HOU/EES@EES
cc: Gareth Bahlmann/HOU/ECT@ECT
Subject: Re: letter agreement between EES and Enron Corp.

Attached is the revised letter agreement between EES and Enron Corp.

To: David Roland/HOU/EES@EES
cc: Sara Shackleton/HOU/ECT@ECT, Gareth Bahlmann/HOU/ECT@ECT
Subject: Re: letter agreement between EES and Enron Corp.

Sara and Gareth, We still need to finish this agreement and have it signed. To my knowledge, it has not been completed and signed. David

Enron Energy Services
From: David Roland 03/30/2000 06:11 PM
To: Sara Shackleton/HOU/ECT@ECT
cc: Gareth Bahlmann/HOU/ECT@ECT
Subject: Re: letter agreement between EES and Enron Corp.

Sara, I'm on someone else's computer - I'm having all sorts of e-mail problems today. The business person on the EES side to be signing this letter would be Mark S. Muller. He is a Managing Director of EES LLC. I don't necessarily have a problem with the description of the back-to-back obligations, but I'd like to know Gareth's opinion as to whether the language is too broad. David

Sara Shackleton@ECT 03/30/2000 04:38 PM
To: David Roland/HOU/EES@EES, Gareth Bahlmann/HOU/ECT@ECT
cc:
Subject: letter agreement between EES and Enron Corp.

Attached is a first stab at the agreement. Please comment.
| Low | [0.518518518518518, 28, 26] |
The Moody's USA Downgrade

Can the yield on US Treasuries be considered the "risk free rate of return" if there are other securities which are lower-risk than US Treasuries?

Moody's now admits two things: firstly that triple-A doesn't mean risk-free (thanks, guys, I think we'd worked that out by now), and secondly — more interestingly — that the US is not the safest triple-A credit. There are now three levels of triple-A, when it comes to sovereign bonds. The weakest — which have been classed as "vulnerable" to a downgrade — are Spain and Ireland. The strongest — which have been classed as "resistant" to a downgrade — are Germany, France, Switzerland, Austria, Australia, Canada, Denmark, Finland, Luxembourg, Netherlands, Norway, Sweden, Singapore, and New Zealand. And in the middle — stronger than the "vulnerable" countries but weaker than the "resistant" countries — are the two "resilient" countries: the UK and the US. Which means that Moody's now considers the USA to be a weaker credit than Finland or Singapore: a handy datapoint for anybody who thinks the US empire is crumbling.
| Mid | [0.6000000000000001, 33.75, 22.5] |
★ TerytoriumK2.PL [FFA][128TR] - 137.74.0.202:27015
★ TerytoriumK2.PL [D2/MIRAGE][128TR] - 137.74.0.202:27016
★ TerytoriumK2.PL [AWP][128TR] - 137.74.0.202:27017
★ TerytoriumK2.PL [AIM/DM][128TR] - 137.74.0.202:27018

If you want to talk with me or play with me, go to the server.
| Mid | [0.5820105820105821, 27.5, 19.75] |
Wooded Bliss Two

Sweet 600 square foot cabin for up to three guests. Just 8 miles West of West Yellowstone & the Yellowstone National Park entrance. Stunning views of Lionhead Mountain and surrounding peaks. Easy access to forest service trails, to Hebgen Lake and to fishing on area rivers. Satellite TV and satellite internet access.

BRIEF DESCRIPTION: Wooded Bliss Two is a great base camp for exploring the Yellowstone area and what an awesome alternative to a hotel room! It's just 8 miles to the park entrance and is nestled in an aspen grove (offering stunning colors to fall visitors!) You're surrounded by national forest trails for hiking, mountain biking, and snowmobiling and it's just a short drive to a quiet cove on Hebgen Lake. For larger groups the other half of the duplex, Wooded Bliss One, can also be rented.

BEDROOMS AND BATHS: The cabin has one bedroom and one bath. The comfortable bedroom has a queen size bed. There is additional sleeping space on a fold out futon couch in the living room. There is a full bath with a stall shower.

KITCHEN: The bright kitchen has a gas range and oven, microwave, blender, electric griddle, and crock pot as well as a good selection of picnic and serving ware.

DINING AREA: There is counter dining for three as well as a picnic table in the yard.

LIVING ROOM: The cozy living room has a 32" flat screen 3D HD TV with satellite service, VCR and DVD player, family movies and a great selection of games and books for guests to enjoy. The couch folds out for additional sleeping space.

SETTING, ACREAGE, AND VIEWS: Wooded Bliss is in a woodsy vacation home neighborhood about 8 miles West of West Yellowstone, which is the West entrance to Yellowstone National Park. It sits on one half acre and enjoys incredible views of Lionhead Mountain and other peaks along the Continental Divide. You're likely to see elk, moose and maybe even a bear in this beautiful wild area!

OUTDOOR AMENITIES: Wooded Bliss Two has a West facing front porch with a porch swing for enjoying sunsets over the Continental Divide. There's also a gas barbecue grill and a picnic table for your outdoor meals.

CLOSEST TOWN AND AIRPORT: You are 8 miles from the town of West Yellowstone, 90 miles from Bozeman's Gallatin Field Airport and 110 miles from Idaho Falls, Idaho.

ACTIVITIES NEARBY: In winter you can snowmobile right from Wooded Bliss and get onto hundreds of miles of groomed snowmobile trails.
If you prefer Nordic skiing, West Yellowstone's Rendezvous Trails are a short drive away and you can explore Yellowstone Park by snowcoach. In summer there's hiking and mountain biking right out the door, fishing on blue ribbon streams, boating on Hebgen Lake and of course day trips into Yellowstone Park. There's so much to do!

Additional amenities: 15 min to Yellowstone Park, Close to Town, Full Kitchen, Less than 15 minutes to town, No Air Conditioning, No dishwasher, No washer & dryer, pets not allowed

Liked the situation, but more than a stone's throw from West Yellowstone. However, was still an easy drive into the park, and at the best end of Yellowstone for all the highlights and wildlife. There was a Bear spray at the property which was great, …

Rates are calculated based on the latest information provided. Please contact the manager/owner to confirm actual rates for your requested dates. Please contact the manager for complete rate information. Additional Rate and Availability Information: Rates shown are for stays of 7 nights. Please inquire for nightly rates if staying less than a week. This home has a 3-night stay minimum. Rates are subject to change. An accommodations tax of 7% applies to all rentals.
| Mid | [0.570048309178743, 29.5, 22.25] |
Q: How do I perform a jQuery ajax request in CakePHP?

I'm trying to use Ajax in CakePHP, and not really getting anywhere! I have a page with a series of buttons - clicking one of these should show specific content on the current page. It's important that the page doesn't reload, because it'll be displaying a movie, and I don't want the movie to reset. There are a few different buttons with different content for each; this content is potentially quite large, so I don't want to have to load it in until it's needed. Normally I would do this via jQuery, but I can't get it to work in CakePHP. So far I have:

In the view, the button control is like this:

    $this->Html->link($this->Html->image('FilmViewer/notes_link.png', array('alt' => __('LinkNotes', true), 'onclick' => 'showNotebook("filmNotebook");')), array(), array('escape' => false));

Below this there is a div called "filmNotebook" which is where I'd like the new content to show.

In my functions.js file (in webroot/scripts) I have this function:

    function showNotebook(divId) {
        // Find div to load content to
        var bookDiv = document.getElementById(divId);
        if (!bookDiv)
            return false;
        $.ajax({
            url: "ajax/getgrammar",
            type: "POST",
            success: function(data) {
                bookDiv.innerHTML = data;
            }
        });
        return true;
    }

In order to generate plain content which would get shown in the div, I set the following in routes.php:

    Router::connect('/ajax/getgrammar', array('controller' => 'films', 'action' => 'getgrammar'));

In films_controller.php, the function getgrammar is:

    function getgrammar() {
        $this->layout = 'ajax';
        $this->render('ajax');
    }

The layout file just has: and currently the view ajax.ctp is just:

    <div id="grammarBook">
        Here's the result
    </div>

The problem is that when I click the button, I get the default layout (so it's like a page appears within my page), with the films index page in it. It's as if it's not finding the correct action in films_controller.php. I've done everything suggested in the CakePHP manual (http://book.cakephp.org/view/1594/Using-a-specific-Javascript-engine). What am I doing wrong? I'm open to suggestions of better ways to do this, but I'd also like to know how the Ajax should work, for future reference.

A: Everything you show seems fine. Double check that the ajax layout is there, because if it's not there, the default layout will be used. Use Firebug and Cake's log function to check whether things go as you plan.

A few more suggestions: why do you need to POST to 'ajax/getgrammar', then route it to 'films/getgrammar', and then render the ajax.ctp view? It seems redundant to me. You can make the ajax call to 'films/getgrammar' directly, and you won't need the Router rule. You can rename ajax.ctp to getgrammar.ctp, and you won't need $this->render('ajax');
| Low | [0.5295508274231671, 28, 24.875] |
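To make the answer's suggestion concrete, here is a minimal, hypothetical rewrite of the question's showNotebook() helper along the lines proposed: post straight to the films controller's getgrammar action (so the Router rule becomes unnecessary) and let CakePHP render a getgrammar.ctp view with the bare ajax layout. It assumes jQuery is loaded and the app is served from the web root; the error handler is an illustrative addition, not part of the original question.

```javascript
// Hypothetical simplification of the question's helper, per the answer:
// call the controller action directly instead of going through a
// custom /ajax/getgrammar route.
function showNotebook(divId) {
    var bookDiv = document.getElementById(divId);
    if (!bookDiv) {
        return false;
    }
    $.ajax({
        url: "/films/getgrammar",  // controller/action directly; no Router::connect rule
        type: "POST",
        success: function (data) {
            // inject the fragment rendered by getgrammar.ctp (ajax layout)
            bookDiv.innerHTML = data;
        },
        error: function () {
            // surface failures instead of silently leaving the div empty
            bookDiv.innerHTML = "Could not load content.";
        }
    });
    return true;
}
```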
15" Die Cast Professional Woofer Perfect for musical instrument, PA and sound reinforcement, this driver is an excellent cost effective replacement in many Peavey, JBL and EV cabinets. Large magnet structure and 4" voice coil combine to provide high reliability and performance in demanding stage environments. See for yourself why this highly rated speaker has been an MCM best seller for nearly 15 years. Product Description Rigid paper cone Treated cloth accordion surround Die cast basket 4" voice coil 100oz. magnet Specifications:: Power Capacity: 400W/800W RMS/peak Sensitivity: 98dB (W/M) Impedance: 8ohm Re: 6.6ohm Le: 1.2mH Frequency response: 26Hz~3500Hz Fs: 28Hz Qts: 0.23 Qes: 0.27 Qms: 1.46 Vas: 463 (liters) Xmax: 4.4mm Dimensions: Overall frame diameter: 15.75" Required cutout: 14.63" Mounting depth: 5" Product Reviews I purchased two of these back in 1999 for a pair of 9 cubic foot ported DJ/PA speakers that I made. These woofers have served me well for many years. At times during large outdoor events, I ran these woofers with one Mackie 1400i amp bridged per woofer (850W RMS, HPF set to 38Hz.) Despite this abuse, the woofers have survived and continue to work well! Overall I am very satisfied with these woofers and they have proven to be very robust. My only complaint is their relatively short X-Max, which is why I only gave 4 stars for "features / quality." I wanted deep bass, so I built large cabinets with a low tuning freq. I'm sure the X-Max would not be an issue in smaller cabinets tuned to a higher frequency. At the time I designed these cabinets I used formulas from a speaker building book, none of which helped me determine cone excursion at a given power and frequency. Now there is software available like Bass Box that can predict cone excursion. If I had a program like this at the time, I'm sure I would have selected a different cabinet volume and tuning freq. oh well...live and learn :-) Overall nice woofers for a reasonable price. this woofer is impressive, the low end hits vary good even with a 500 watt professional amplifier.... this woofer is made heavy duty, and has a massive die cast frame... it would work great in a 3 way speaker system with proper crossover... i have this running on a low pass crossover for mid/sub bass.. it does the job, enough bass to shake the walls... not super low as a sub, but vary good clean solid bass.... you have to give this woofer a burn-in time to loosen up... this speaker can operate between 80 to 450 watt rms amplifier... why spend more money for a brand name, when could get vary close as sound quality this woofer produces..... I needed a 15" driver to fit in my JBL scoops. The depth was important, I only had 6 inches to work with. The JBL E140's were cost probitive on my budget and these have similar chracteristics. Rock solid cast frame, thick cone, 4' voice coil and huge magnet. A 400 watt RMS power rating is a plus. I pushed these hard many times over the past five years. I use a TOA P924 amp for each and it gets no where near clipping and it's realy loud. I'd recomend these for any low end application. Built a 2x15 cabinet for a customer with these. He wanted to play drop-D on a 5 string, and I wanted to build him a cabinet that would reproduce that fundamental note. 26hz. These woofers were a natural choice. In a 8 cubic foot enclosure slot ported tuned to 30hz, these woofers produce THUNDEROUS bass! Low low low lows I've never heard before out of pro woofers. 
Loud, clean, no distortion, my Peavey Mark IV runs out of steam long before these run out of xmax. My garage with cinder block walls rattles, you can feel the bass in the poured concrete floor! 30hz comes easy, 25hz is present, not sure where these eventually roll off, (or if they ever do!), but my god, these are outstanding value for money. Buy them if you can afford them, skip Eminence, skip Peavey, skip JBL, these exceed them at well under half the price. I designed a pair of transmission line loudspeakers using this woofer for West Kentucky Community & Technical College where I work as Engineer for their Television Department. They perform circles around the theatre's EV house system at frequencies below 40 Hz. They produce 22 Hz at a level that literally rattles the light fixtures in the theatre's ceiling. The TLs are of 10 cubic feet with a line length of 9 feet. I have 4 -2 15 sub cabinets loaded with these, and I push them with 2- 2500 watt power amps, and they will hurt your chest if I want them to, and yet they will be clean and clear as a good quality home stereo. I use these for outdoor gigs exclusively, because we use BOSE L1 Model 1 systems to play indoor gigs and the quality of these subs sound nearly as good . I am a bass player in a working band that did 54 shows in 2009. I bought this as a "cheap" replacement for my bottom cab and was shocked at how good it sounds. The basket is built like a TANK! It has a sweet and clear all-around sound and thumps when you need it. As a extreme sound reinforment user used to EV Kilomax drivers, These equal at 1/6th the price. Headeast, Jackyl, And many others have played though these while I did sound for them. Add Foghat, Whitesnake, Ozzy's Guitarist, Too many to name. The Eagles have even heard them and the sax player Greg's wifes younger sister usues thes for her band as the main drivers. I am in the karaoke/dj buisness on the weekends. I have tried many different brands of replacement drivers in the past, with better luck on some than others. What i can tell you is this thing pounds. It will work in a sub encloser, or as a fullrange driver. This thing will take all a mackie 2600 bridged will throw at it in a sub encloser! Its worth every penny!!!! I decided to try this as a replacement in a pair of Peavey SP1G's, since they were $60.00 cheaper (each) than the replacements from Peavey. Plus MCM is local and I was able to skip the shipping cost. I am absolutely thrilled with the results. I had to do a very slight mod on the cabinet, to allow for the slightly larger frame diameter, but the holes lined up perfectly, and the drivers sound fantastic. These are used on a PeeWee and Jr High football field, and are subject to a great deal of punishment, yet they hold up great. I highly recommend. Associated Products Soft, non-hardening caulking material for use in all types of seams, joints and openings. This caulk can easily be thumbed into place and smoothed with finger. Non-sag consistency makes it ideal for vertical and overhead applications.
| High | [0.69047619047619, 29, 13] |
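The first review's point about box design can be illustrated with the Thiele-Small parameters published above. The sketch below applies the classic rule-of-thumb vented alignment (Vb ≈ 20·Qts^3.3·Vas, Fb ≈ (Vas/Vb)^0.31·Fs, F3 ≈ (Vas/Vb)^0.44·Fs). These are rough textbook approximations for a quick first pass, not a substitute for simulation software such as the BassBox program the reviewer mentions.

```javascript
// Rough vented-box estimates from the listed Thiele-Small parameters.
// Quick-design approximations only; a simulator (e.g. BassBox, as the
// reviewer notes) is needed to model cone excursion properly.
const Fs  = 28;    // free-air resonance, Hz (from the spec sheet)
const Qts = 0.23;  // total Q
const Vas = 463;   // equivalent compliance volume, liters

const Vb = 20 * Math.pow(Qts, 3.3) * Vas;  // suggested box volume, liters
const Fb = Math.pow(Vas / Vb, 0.31) * Fs;  // port tuning frequency, Hz
const F3 = Math.pow(Vas / Vb, 0.44) * Fs;  // approximate -3 dB point, Hz

console.log(`Vb ≈ ${Vb.toFixed(0)} L (≈ ${(Vb / 28.3).toFixed(1)} cu ft)`);
console.log(`Fb ≈ ${Fb.toFixed(1)} Hz, F3 ≈ ${F3.toFixed(1)} Hz`);
// Prints roughly Vb ≈ 73 L (≈ 2.6 cu ft), Fb ≈ 50 Hz, F3 ≈ 63 Hz: the
// low Qts favors a small, high-tuned box, which is why the reviewers
// who wanted deep bass built much larger cabinets tuned near 30 Hz
// and accepted the driver's limited Xmax.
```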
Alterations in neural cardiovascular control mechanisms with ageing. AGEING AND MECHANISMS OF BLOOD PRESSURE CONTROL: Ageing is associated with functional and structural alterations to the cardiovascular system. Evidence is accumulating, however, that ageing also determines major changes in the effectiveness of mechanisms involved in blood pressure control and that this represents an important determinant of the cardiovascular changes that can be observed in the elderly. DIFFERENCES SEEN IN ELDERLY SUBJECTS: It has been observed that compared to young subjects, in the elderly (1) baroreceptor control of the heart rate and cardiac function is impaired; (2) baroreceptor modulation of the sympathetic drive to the peripheral circulation is impaired, particularly the speed of reflex adjustments to normal and abnormal stimuli; and (3) cardiopulmonary stretch receptors, which tonically inhibit sympathetic tone, the renal release of renin and vasopressin secretion, are impaired. These three factors may account, at least in part, for the raised blood pressure and sympathetic activity in the elderly. They certainly explain the reduced ability of elderly people to maintain blood pressure and blood volume homeostasis, and their increased blood pressure variability over 24 h. ASSOCIATION WITH HYPERTENSION: All these problems are exacerbated if ageing is associated with hypertension, and are highly relevant to antihypertensive treatment. Care should be taken that any antihypertensive drugs selected for treatment in the elderly do not aggravate these basic homeostatic problems.
| High | [0.700265251989389, 33, 14.125] |
Recombinant soluble trimeric CD40 ligand is biologically active. CD40 ligand (CD40L) is expressed on the surface of activated CD4+ T cells, basophils, and mast cells. Binding of CD40L to its receptor, CD40, on the surface of B cells stimulates B cell proliferation, adhesion and differentiation. A preparation of soluble, recombinant CD40L (Tyr-45 to Leu-261), containing the full-length 29-kDa protein and two smaller fragments of 18 and 14 kDa, has been shown to induce differentiation of B cells derived either from normal donors or from patients with X-linked hyper-IgM syndrome (Durandy, A., Schiff, C., Bonnefoy, J.-Y., Forveille, M., Rousset, F., Mazzei, G., Milili, M., and Fischer, A. (1993) Eur. J. Immunol. 23, 2294-2299). We have now purified each of these fragments to homogeneity and show that only the 18-kDa fragment (identified as Glu-108 to Leu-261) is biologically active. When expressed in recombinant form, the 18-kDa protein exhibited full activity in B cell proliferation and differentiation assays, was able to rescue B cells from apoptosis, and bound soluble CD40. Sucrose gradient sedimentation shows that the 18-kDa protein sediments as an apparent homotrimer, a result consistent with the proposed trimeric structure of CD40L. This demonstrates that a soluble CD40L can stimulate CD40 in a manner indistinguishable from the membrane-bound form of the protein.
| High | [0.658602150537634, 30.625, 15.875] |
436 F.3d 644 CHRYSLER CORPORATION, fka Chrysler Holding Corporation, as Successor by Merger to Chrysler Motors Corporation and its Consolidated Subsidiaries, Petitioner-Appellant,v.COMMISSIONER OF INTERNAL REVENUE, Respondent-Appellee. No. 03-1214. United States Court of Appeals, Sixth Circuit. Argued: September 14, 2005. Decided and Filed: February 8, 2006. COPYRIGHT MATERIAL OMITTED ARGUED: Jennifer L. Fuller, Kenneth B. Clark, Fenwick & West, Mountain View, California, for Appellant. Joan I. Oppenheimer, Bridget M. Rowan, United States Department of Justice, Washington, D.C., for Appellee. ON BRIEF: Jennifer L. Fuller, Kenneth B. Clark, William F. Colgin, Barton W.S. Bassett, James P. Fuller, Ronald B. Schrotenboer, Fenwick & West, Mountain View, California, for Appellant. Joan I. Oppenheimer, Gilbert S. Rothenberg, Charles Bricken, United States Department of Justice, Washington, D.C., for Appellee. Before: BOGGS, Chief Judge; NORRIS and COOK, Circuit Judges. OPINION ALAN E. NORRIS, Circuit Judge. 1 Chrysler Corporation appeals from three adverse Tax Court rulings that granted partial summary judgment to the Commissioner of Internal Revenue. The disputed tax computations stem from the early to mid-1980s and involve substantial sums of potential tax liability. These rulings present the following questions: 1) Under the accrual accounting method used by Chrysler, was the company permitted to deduct anticipated warranty expenses in the year that it sold warranted motor vehicles to its dealers even though warranty claims had not necessarily been made? 2) Was Chrysler barred by the ten-year statutory limitations period from altering certain foreign tax credit elections? 3) Did costs associated with the redemption of Chrysler's Employee Stock Option Plan ("ESOP") constitute deductible capital expenditures? 2 This case essentially involves three discrete appeals. For that reason, we will abandon our usual practice of beginning our opinion with a generalized background section in favor of treating each issue individually, providing the necessary factual context in conjunction with our legal analysis. I. 3 Deduction for Anticipated Warranty Expenses 4 In its opinion, the Tax Court framed the issue in these terms: 5 We must decide whether for Federal income tax purposes all events necessary to determine petitioner's liability for its warranty expenses have occurred when it sells its vehicles to its dealers; in other words, has petitioner satisfied the first prong of the all events test entitling it to deduct its estimated future warranty costs on the sale of such vehicles? 6 Chrysler Corp. v. Comm'r, No. 22148-97, 2000 WL 1231528, 80 T.C.M. (CCH) 334, T.C.M. (RIA) 2000-283 (Aug. 31, 2000). Although discussed in more detail shortly, the "all events test" alluded to by the Tax Court provides as follows: 7 Under an accrual method of accounting, a liability ... is incurred, and generally is taken into account for Federal income tax purposes, in the taxable year in which all the events have occurred that establish the fact of the liability, the amount of the liability can be determined with reasonable accuracy, and economic performance has occurred with respect to the liability. 8 Treas. Reg. § 1.461-1(a)(2)(i) (2001). In this appeal, only the first prong of the test — whether the "fact of the liability" has been established — is at issue. 9 The parties agree that this court reviews de novo a grant of summary judgment by the Tax Court.1 Roberts v. Comm'r, 329 F.3d 1224, 1227 (11th Cir.2003). 
10 In tax years 1984 and 1985, Chrysler included deductions of $567,943,243 and $297,292,155 on its federal income tax returns on the basis that it incurred those amounts as warranty expenses for motor vehicles sold in those years to its dealers. A sale generally occurred when a vehicle was delivered to the carrier for shipment to the dealer. 11 New vehicle warranties, which are at issue here, cover defects in material and manufacture. As Chrysler points out, state and federal laws regulate the entire warranty regime. Specifically, the Uniform Commercial Code, as adopted by virtually every state, state "lemon" laws, and the Magnuson-Moss Warranty-Federal Trade Commission Improvement Act, 15 U.S.C. §§ 2301-12 ("Magnuson-Moss"), impose warranty obligations on the seller. During the period at issue, every new vehicle sold by Chrysler was covered by a warranty. When selling a new vehicle, dealers would provide buyers with a warranty manual that explained its terms and limitations. 12 Chrysler offered two kinds of express warranty: a basic warranty that applied to the first 12 months or 12,000 miles, and an extended warranty that covered certain types of repairs after the basic warranty had expired. In turn, Chrysler contracted with its dealers to repair vehicles under warranty. Typically, dealers would make repairs and then seek reimbursement from the company. However, dealers were required to comply with certain agreed upon procedures to substantiate their reimbursement requests that, if not followed, could result in non-payment. By 1984 Chrysler had installed a computer system known as the Dealer Information Access Link ("DIAL"), which an increasing number of dealers used to report warranty repairs that were subject to reimbursement. DIAL made it easier for Chrysler to track and respond to warranty claims. 13 Chrysler engaged consultant Arthur D. Little, Inc., to calculate the amount of warranty expenses the company incurred for tax years 1984 and 1985. Chrysler uses the accrual method of accounting and a tax year based upon the calendar year. It is undisputed that the expenses incurred by Chrysler to fix conditions covered by warranty constitute "ordinary and necessary" business expenses under 26 U.S.C. § 162. During the period at issue, Chrysler accrued the entire estimated cost of its warranties in the year that it sold the vehicles to the dealers. Chrysler included this liability on its balance sheet and took it into account in the calculation of net (BOOK) income. 14 The Commissioner reduced Chrysler's warranty cost deduction for 1984 by $287,939,317, which had the ripple effect of increasing the company's 1985 deduction for such costs by $62,767,885. 15 The Tax Court framed the legal question in these terms: 16 Whether a business expense has been "incurred" so as to entitle an accrual-basis taxpayer to deduct it under section 162(a) is governed by the "all events" test as set out in United States v. Anderson, 269 U.S. 422, 441, 46 S.Ct. 131, 70 L.Ed. 347 (1926). In Anderson, the Supreme Court held that a taxpayer was entitled to deduct from its 1916 income a tax on profits from munitions sales that took place in 1916. Although the tax would not be assessed and therefore would not formally be due until 1917, all the events had occurred in 1916 to fix the amount of the tax and to determine the taxpayer's liability to pay it.... 17 [U]nder the regulations, the all events test has two prongs, each of which must be satisfied before accrual of an expense is proper. 
First, all the events must have occurred which establish the fact of the liability. Second, the amount must be capable of being determined "with reasonable accuracy." Sec. 1.461-1(a)(2), Income Tax Regs. (accrual of deductions); sec. 1.446-1(c)(1)(ii), Income Tax Regs. (accrual in general). For the purpose of deciding this motion, only the first prong of the test is relevant. For the purpose of the first prong of the test the Supreme Court has stated that the liability must be "final and definite in amount", Security Flour Mills Co. v. Commissioner, 321 U.S. 281, 287, 64 S.Ct. 596, 88 L.Ed. 725 (1944), "fixed and absolute", Brown v. Helvering, 291 U.S. 193, 201, 54 S.Ct. 356, 78 L.Ed. 725 (1934), in order to be deductible. 18 Chrysler, 2000 WL 1231528, 80 T.C.M. (CCH) 334, T.C.M. (RIA) 2000-283 (Aug. 31, 2000) (citations and footnote omitted). 19 Having set the legal stage, the court then examined and distinguished two Supreme Court opinions urged upon it by the parties: United States v. Hughes Prop. Inc., 476 U.S. 593, 106 S.Ct. 2092, 90 L.Ed.2d 569 (1986), and United States v. Gen. Dynamics Corp., 481 U.S. 239, 107 S.Ct. 1732, 95 L.Ed.2d 226 (1987). Because we agree that these cases are central to the resolution of this particular question, we find it useful to begin our discussion with reference to the construction given to them by the Tax Court: 20 [Chrysler] places reliance on United States v. Hughes Properties, Inc., 476 U.S. 593, 106 S.Ct. 2092, 90 L.Ed.2d 569 (1986), for the proposition that statutory liabilities satisfy the first prong of the all events test.... 21 In Hughes Properties, the taxpayer was a Nevada casino that was required by State statute to pay as a jackpot a certain percentage of the amounts gambled in progressive slot machines. The taxpayer was required to keep a cash reserve sufficient to pay the guaranteed jackpots when won. Hughes Properties at the conclusion of each fiscal year entered the total of the progressive jackpot amounts (shown on the payoff indicators) as an accrued liability on its books. From that total, it subtracted the corresponding figure for the preceding year to produce the current tax year's increase in accrued liability. On its Federal income tax return this net figure was asserted to be an ordinary and necessary business expense and deductible under section 162(a). The Court found that the all events test had been satisfied and the taxpayer was entitled to the deduction. The Court reasoned that the State statute made the amount shown on the payout indicators incapable of being reduced. Therefore the event creating liability was the last play of the machine before the end of the fiscal year, and that event occurred during the taxable year. 22 We conclude that the cases cited by [Chrysler] do not strictly stand for the proposition that if a liability is fixed by statute, that fact alone meets the first prong of the all events test. Rather we are of the opinion that the first prong of the all events test may be met when a statute has the effect of irrevocably setting aside a specific amount, as if it were to be put into an escrow account, by the close of the tax year and to be paid at a future date. In the instant case, the applicable statutes do not so provide. 23 [The Commissioner] relies on the analysis contained in the Supreme Court's opinion in United States v. General Dynamics Corp., 481 U.S. 239, 107 S.Ct. 1732, 95 L.Ed.2d 226 (1987). 
In General Dynamics, the taxpayer, who self-insured its employee medical plan, deducted estimated costs of medical care under the plan. The employer's liability was determinable. The employees' medical needs had manifested themselves, employees had determined to obtain treatment, and treatment had occurred. The only events that had not occurred were the employees' filing claims for reimbursement before the end of the taxable year. The Supreme Court found that the all events test was not met until the filing of properly documented claims. The filing of the claim was the last event needed to create the liability and therefore absolutely fix the taxpayer's liability under the first prong of the all events test. See id. at 244, 107 S.Ct. 1732. 24 [Chrysler] focuses on the fact that the liability in United States v. Hughes Properties, Inc., supra, was in part fixed by operation of statute and concludes from that the first prong of the all events test is satisfied if a statute in part works to fix the liability. We do not agree. In both Hughes Properties and General Dynamics the Supreme Court focused on the last event that created the liability. In Hughes Properties the event creating liability was the last play of the machine before the end of the fiscal year. Because the Nevada statute fixed the amount of the irrevocable payout, that play crystalized or fixed absolutely the taxpayer's liability, thus satisfying the first prong of the all events test. In General Dynamics, the last event that created the liability was the employee filing the claim for reimbursement. 25 We are unable to find sufficient differences between the facts in General Dynamics and those of the instant case to justify departing from the Supreme Court's analysis. Here, as in General Dynamics, the last event fixing liability does not occur before the presenting of a claim, either a claim for warranty service by the customer through one of petitioner's dealers or a claim for reimbursement made on petitioner by the dealer. 26 Id. (footnotes and citations omitted). 27 As the decision of the Tax Court makes clear, the central issue on appeal is precisely what a taxpayer must do in order to establish liability with sufficient certainty to satisfy the first prong of the "all events test."2 We would be less than candid if we did not acknowledge a degree of sympathy with Justice O'Connor's observation in General Dynamics that "[t]he circumstances of this case differ little from those in Hughes Properties." Gen. Dynamics, 481 U.S. at 248, 107 S.Ct. 1732 (dissenting). However, given that the Court reached the opposite result in successive terms when faced with similar sets of facts, we must do our best to distinguish the two cases. As did the Tax Court, we see no viable way of reconciling Hughes Properties with General Dynamics other than by reading the former to stand for the proposition that "[t]he first prong of the all events test may be met when a statute has the effect of irrevocably setting aside a specific amount ... by the close of the tax year and to be paid at a future date." Chrysler Corp. v. Comm'r, supra. The Court in General Dynamics held that the "last link in the chain of events creating liability for purposes of the `all events test'" was the actual filing of a medical claim. Gen. Dynamics, 481 U.S. at 245, 107 S.Ct. 1732. It based its reasoning on the fact that "General Dynamics was ... liable to pay for covered medical services only if properly documented claims forms were filed." Id. at 244, 107 S.Ct. 1732.
28 Like General Dynamics, Chrysler faces potential liability, which in its case is based upon the express and implied warranties that accompany the sale of its motor vehicles. However, that liability does not become firmly established until a valid warranty claim is submitted. As the Court explained, "Nor may a taxpayer deduct an estimated or an anticipated expense no matter how statistically certain, if it is based on events that have not occurred by the close of the taxable year." Gen. Dynamics at 243-44, 107 S.Ct. 1732 (citing Brown v. Helvering, 291 U.S. 193, 201, 54 S.Ct. 356, 78 L.Ed. 725 (1934)). The court distinguished the Code's treatment of insurance companies — allowed by 26 U.S.C. § 832 to deduct reserves for "incurred but not reported" claims — from its treatment of non-insurance companies restrained from deducting "claims that [are] actuarially likely but not yet reported." Id. at 246, 107 S.Ct. 1732. We assume that the DIAL software made it easier for Chrysler to track and process warranty claims; it may also have assisted Arthur D. Little in calculating the cost to the company of future claims. However, even if those claims were predictable with relative accuracy, they were not actually submitted during the taxable year and therefore cannot be deducted because they remain "anticipated expenses." See id. at 245, 107 S.Ct. 1732 ("Based on actuarial data, General Dynamics may have been able to make a reasonable estimate of how many claims would be filed ... [b]ut that alone does not justify a deduction."). 29 In reaching this conclusion, we readily acknowledge that Chrysler has raised a number of thoughtful points. First among them is the contention that the anticipated warranty claims at issue should be analyzed with reference to the second prong of the "all events test," that is, whether "the amount of the liability can be determined with reasonable accuracy." Despite its surface appeal, this argument fails to recognize that it is not the imprecise amount of the claims that renders them non-deductible but their contingent nature. While Chrysler relies in part upon the existence of statutes, such as Magnuson-Moss, to establish the "fact" of its liability, they impose legal duties with respect to warranties but do not necessarily fix liability. Among other things, Magnuson-Moss prescribes the contents of warranties, 15 U.S.C. § 2302, and remedies available to consumers who seek to enforce their terms, 15 U.S.C. § 2310. However, simply because a manufacturer has provided a warranty to a consumer, the scope of which is defined to some degree by statute, does not mean that liability has attached; until a claim has been filed invoking the terms of a warranty, liability remains contingent and, because of that fact, non-deductible. 30 The parties have cited numerous cases to us in their briefs. That we have declined to discuss them here should not be taken as a sign that they have not been fully considered. Rather, it means that, like the Tax Court, we conclude that this issue hinges upon our construction of Hughes Properties and, more importantly, General Dynamics. For the reasons just outlined, we detect no material distinction between General Dynamics and the case before us. Consequently, we affirm the judgment of the Tax Court. II. Foreign Tax Credits 31 Chrysler experienced dire financial difficulties in the late 1970s and early 1980s. In the tax years 1980 through 1982, it reported no taxable domestic income and paid no taxes to the United States.
However, it did pay foreign taxes during that period. Rather than elect to take a credit for foreign taxes paid, 26 U.S.C. § 901(a), Chrysler took a tax deduction, 26 U.S.C. § 164(a)(3), for those years. These deductions were substantial: $34,556,085 (1980); $7,020,844 (1981); and $3,631,958 (1982). 32 On July 6, 1992, the Commissioner informed Chrysler of tax deficiencies for the years 1984 and 1985. In an attempt to reduce that liability, the company filed amended tax returns on July 24, 1995 for the years 1980 through 1985. At the time that it filed those amended returns, only years 1983 through 1985 were open for purposes of assessment of tax by the Commissioner or for a claim of credit or refund by Chrysler. Nonetheless, Chrysler sought to amend its 1980 through 1982 returns to change the foreign tax deductions to foreign tax credits, which could then be carried forward to its tax liabilities for 1984 and 1985. 33 In his notice of deficiency, the Commissioner advised the company that "to the extent your claims are attributable to [foreign tax credit] carryforwards from the years ended December 31, 1980, December 31, 1981, and December 31, 1982, the ten-year statute of limitations for making a timely election to claim foreign tax credit has expired and those carryforwards are not allowable." In response, Chrysler filed a petition with the Tax Court. 34 Three provisions of the Internal Revenue Code come into play for purposes of this appeal. The first provides as follows: 35 (a) Allowance of credit. — If the taxpayer chooses to have the benefits of this subpart, the tax imposed by this chapter shall, subject to the limitation of section 904, be credited with the amounts provided in the applicable paragraph of subsection (b) plus, in the case of a corporation, the taxes deemed to have been paid under sections 902 and 960. Such choice for any taxable year may be made or changed at any time before the expiration of the period prescribed for making a claim for credit or refund of the tax imposed by this chapter for such taxable year. .... 36 26 U.S.C. § 901(a) (emphasis added). It is undisputed that the limitations period set forth in 26 U.S.C. § 6511(d)(3)(A) applies to § 901(a). This section provides as follows: 37 (3) Special rules relating to foreign tax credit.. — 38 (A) Special period of limitation with respect to foreign taxes paid or accrued — If the claim for credit or refund relates to an overpayment attributable to any taxes paid or accrued to any foreign country ... for which credit is allowed against the tax imposed by subtitle A in accordance with the provisions of section 901 ... in lieu of the 3-year period of limitation prescribed in subsection (a), the period shall be 10 years from the date prescribed by law for filing the return for the year with respect to which the claim is made. 39 26 U.S.C. § 6511(d)(3)(A) (emphasis added).3 The final statutory section that comes into play is 26 U.S.C. § 904(c), which was added to the Internal Revenue Code in 1958 in order to allow taxpayers to carry forward or carry back portions of a foreign tax credit that could not be used in the tax year for which they were originally claimed: 40 Carryback and Carryover of Excess Tax Paid. 
— Any amount by which all taxes paid or accrued to foreign countries or possessions of the United States for any taxable year for which the taxpayer chooses to have the benefits of this subpart exceed the limitations under subsection (a) shall be deemed taxes paid or accrued to foreign countries or possessions of the United States in the second preceding taxable year, in the first preceding taxable year, and in the first, second, third, fourth, or fifth succeeding taxable years, in that order .... Such amount deemed paid or accrued in any year may be availed of only as a tax credit and not as a deduction and only if the taxpayer for such year chooses to have the benefits of this subpart as to taxes paid or accrued for that year to foreign countries or possessions of the United States. 41 26 U.S.C. § 904(c). 42 The United States Court of Claims addressed the same question that faces us some years ago, which it framed in these terms: "[W]hether in cases where there has been a carryover of foreign taxes under section 904(c), the limitations period of section 6511(d)(3)(A) commences with the date prescribed by law for filing the return for the year from which the excess foreign taxes are carried, or with the date prescribed by law for filing the return for the year to which the excess foreign taxes are carried." Ampex Corp. v. United States, 223 Ct.Cl. 428, 620 F.2d 853, 857 (1980) (emphasis original). In other words, do the phrases "such taxable year" as used in § 901(a) and "year with respect to which the claim is made" as used in § 6511(d)(3)(A) mean the year in which the foreign taxes are paid and the right to claim a credit accrues? Or, do they refer to the year in which the foreign tax credit is to be applied? If the latter interpretation is correct, then Chrysler meets the 10-year limitations period because it sought in 1995 to apply the foreign tax credits to its 1985 return. If the former construction holds, then Chrysler is precluded from amending because the years in which the tax credits accrued (1980 through 1982) fall outside the limitations period. 43 The Tax Court reached the following conclusion: 44 Section 901(a) allows a taxpayer such as [Chrysler] to elect to credit income taxes owed to a foreign country in lieu of deducting them under section 164(a)(3). [The Commissioner] argues that [Chrysler's] election was untimely. [The Commissioner] asserts that the phrases "for any taxable year" and "for such taxable year" that appear in section 901(a) refer to [Chrysler's] 1980, 1981, and 1982 taxable years rather than [Chrysler's] 1985 taxable year. [Chrysler] argues that its election was timely. Because section 904(c) allows a taxpayer to carry over a foreign tax credit for up to 5 years, [Chrysler] asserts, section 901(a), when read in the light of section 6511(d)(3)(A), generally allows a taxpayer up to 15 years to elect or change its election under section 901(a). [Chrysler] concludes that the relevant phrases refer to the year for which the overpayment is claimed on account of the foreign taxes; here, 1985. [Chrysler] asserts that its conclusion comports with Congress' intent for section 901(a), i.e., to avoid subjecting a taxpayer's foreign earnings to taxation by both the foreign country and the United States, and that its conclusion is consistent with the application of section 6511(d)(3)(A). 
45 We agree with [the Commissioner] that the 10-year period under section 901(a) is measured from the years for which [Chrysler] elected the foreign tax credits; i.e., 1980, 1981, and 1982. We read the phrase "for such taxable year" to refer to the "any taxable year" specified at the beginning of the same sentence, or, in other words, to the year for which the election of the foreign tax credit is made. The only other time that Congress used the word "such" in section 901(a) it did so to refer to the "choice" made by the taxpayer described in the first sentence of section 901(a). We believe it logical to conclude that Congress' use of the second "such", i.e., the one at issue, refers to the only "taxable year" described in section 901(a); namely, the year for which the election of the foreign tax credit is made. 46 Our reading comports with the Commissioner's regulations prescribed under section 901(a). Section 1.901-1(d), Income Tax Regs., provides that "The taxpayer may, for a particular taxable year, claim the benefits of section 901 (or claim a deduction in lieu of a foreign tax credit) at any time before the expiration of the period prescribed by section 6511(d)(3)(A)". Here, [Chrysler] aims to "claim the benefits of section 901" for 1980, 1981, and 1982 and not for 1985. The benefits which [Chrysler] is attempting to avail itself of in 1985 are the benefits of section 904(c). 47 . . . . 48 We hold that [Chrysler's] elections for 1980, 1981, and 1982 were untimely. Accordingly, we will grant [the Commissioner's] motion for partial summary judgment. 49 Chrysler Corp. v. Comm'r, 116 T.C. 465, 469-70, 2001 WL 739231 (2001) (footnotes omitted). 50 Where, as here, the facts are undisputed and the issue is one of statutory construction, we review the judgment of the Tax Court de novo. Intermet Corp. & Subsidiaries v. Comm'r, 209 F.3d 901, 903 (6th Cir.2000) (citing Estate of Mueller v. Comm'r, 153 F.3d 302, 304 (6th Cir.1998)); accord Hospital Corp. of America & Subsidiaries v. Comm'r, 348 F.3d 136, 140 (6th Cir.2003) (Tax Court interpretation of statutory provisions reviewed de novo); Limited, Inc. v. Comm'r, 286 F.3d 324, 331 (6th Cir.2002) (legal conclusions subject to de novo review). 51 Chrysler reminds us that the underlying purpose of the statutory scheme governing foreign tax credits, which had its genesis in the Revenue Act of 1918, 40 Stat. 1057, is to "mitigate the evil of double taxation." Burnet v. Chicago Portrait Co., 285 U.S. 1, 7, 52 S.Ct. 275, 76 L.Ed. 587 (1932); see also Ampex Corp., 620 F.2d at 859-60 (discussing history of foreign tax credit); Hart v. United States, 218 Ct.Cl. 212, 585 F.2d 1025, 1029-32 (Ct.Cl.1978) (noting inconsistencies between legislative history and language of statutes at issue). When construing a legislative enactment, we must give effect to the intent of the legislature adopting the statute in question. Broadcast Music, Inc. v. Roger Miller Music, Inc., 396 F.3d 762, 769 (6th Cir.2005), cert. denied, ___ U.S. ___, 126 S.Ct. 374 (U.S. Oct. 3, 2005) (No. 05-31). While statutes imposing a tax are generally construed liberally in favor of the taxpayer, Limited, 286 F.3d at 332 (citing Weingarden v. Comm'r, 825 F.2d 1027, 1029 (6th Cir.1987)), those granting a deduction are matters of "legislative grace" and are strictly construed in favor of the government. Intermet Corp., 209 F.3d at 904 (also citing Weingarden at 1029); Burroughs Adding Mach. Co. v. 
Terwilliger, 135 F.2d 608, 610 (6th Cir.1943) ("The right to the [foreign tax] credit claimed is a privilege granted by the Government, and hence the statute is to be strictly construed in favor of the Government."); but see Gentsch v. Goodyear Tire & Rubber Co., 151 F.2d 997, 1000 (6th Cir.1945) (rejecting strict construction because "[r]elief from double taxation is not so much an act of grace as one of justice"). 52 Turning to the task at hand, we note that legislative intent should be divined first and foremost from the plain language of the statute. Broadcast Music, 396 F.3d at 769; Limited, 286 F.3d at 332. If the "text of the statute may be read unambiguously and reasonably," our inquiry is at an end. Limited at 332; accord United States v. Boucha, 236 F.3d 768, 774 (6th Cir.2001) ("[t]he language of the statute is the starting point for interpretation, and it should also be the ending point if the plain meaning of that language is clear") (quoting United States v. Choice, 201 F.3d 837, 840 (6th Cir.2000)). Only when our reading results in ambiguity or leads to an unreasonable result, may we look to the legislative history. Limited at 332. Finally, we must construe a statute as a whole and, in so doing, we must strive to "interpret provisions so that other provisions in the statute are not rendered inconsistent, superfluous, or meaningless." Broadcast Music at 769. 53 With these precepts in mind, we turn to the statutes in question, beginning with 26 U.S.C. § 901(a), the provision generally authorizing foreign tax credits. The critical sentence reads, "Such choice for any taxable year may be made or changed at any time before the expiration of the period prescribed for making a claim for credit or refund of the tax imposed by this chapter for such taxable year." Chrysler concedes that the initial reference to "any taxable" year means the year in which the initial election was made. Like the Tax Court, we agree that the subsequent modifier "such" must refer to something already alluded to — in this case the year in which the taxpayer elected to claim a foreign tax credit. 54 If § 901(a) existed in isolation, that would end the matter. However, as mentioned earlier, it is undisputed that the "period prescribed for making a claim for credit or refund" incorporates the limitations period set forth in 26 U.S.C. § 6511(d)(3)(A). That provision establishes a ten-year window for the taxpayer to claim a credit "for the year with respect to which the claim is made." In our view, the most reasonable reading of these two statutes is that the "such taxable year" of § 901(a) and the "year with respect to which the claim is made" of § 6511(d)(3)(A) refer to the same year: the year in which the taxpayer first made its election whether to claim a foreign tax credit. We recognize that the Court of Claims has reached a different conclusion. Ampex, 620 F.2d at 858 ("[I]f the time of the commencement of the limitations period of section 6511(d)(3)(A) is to be determined solely on the basis of the language of that section, the limitations period begins to run on the date prescribed by law for filing the return for the year for which a refund is sought."). In our view, however, nothing in the language of § 6511(d)(3)(A) alters our initial reading of § 901(a), which fixes the year in question as the year of election. 
All that § 6511(d)(3)(A) provides to the taxpayer is a longer limitations period for altering its election of a foreign tax credit than the three-year limitations period prescribed by § 6511(a). The touchstone for triggering the statute of limitations remains the original year of election. 55 Lastly, we must consider whether § 904(c), which authorizes a taxpayer to either carry over or carry back its application of a foreign tax credit, affects our analysis. We conclude that it does not. Section 904(c) simply enhances the ability of taxpayers to take full advantage of their foreign tax credits by allowing them to apply credits that cannot be used in the year of election to other years within a prescribed range. If, as in Chrysler's case, the taxpayer did not pay enough taxes to the United States to make full use of its foreign tax credit in the year of election, § 904(c) provides an additional seven-year period to do so. Under those conditions, the Internal Revenue Code allows the taxpayer to "deem" the foreign taxes paid in a year other than the one in which they were actually paid. We see nothing in the language of § 904(c), however, to suggest that the ten-year limitations period of § 6511(d)(3)(A) refers to the year that the foreign tax is "deemed" paid. To the contrary, when read together, the three statutory provisions set forth an unambiguous scheme for claiming and taking foreign tax credits. Section 901(a) requires the taxpayer to elect to take — or decline to take — a foreign tax credit in the year the foreign tax is paid. Section 6511(d)(3)(A) then allows the taxpayer a ten-year period from this initial election to amend its election. Finally, § 904(c) provides a way to increase the likelihood that a taxpayer can take full use of its foreign tax credits by either carrying back or carrying forward unused credits to other tax years. However, it has no effect on the ten-year limitations period, which runs from the tax year that the foreign tax was actually paid and the credit accrued. 56 Because the statutes at issue may be read "unambiguously and reasonably," Limited, 286 F.3d at 332, we need not resort to legislative history. As the Court of Claims has noted, "the normal understanding of the bare language that is entitled to prevail does not necessarily exclude all possibility of an alternative reading that refined and subtle legal analysis might invent." Hart, 585 F.2d at 1028 (punctuation altered). Moreover, as Hart, Ampex, and the parties themselves concede, the legislative history accompanying these statutes is often inconsistent and not overly enlightening. See, e.g., Hart, 585 F.2d at 1030. 57 Before concluding, we will take a moment to mention four considerations that lend additional support to our reading of the statutes. First, we agree with Chrysler that Congress permits the use of foreign tax credits in order to allow taxpayers to avoid double taxation. However, Chrysler did not come away empty-handed because it initially elected not to take foreign tax credits for the years in question; it claimed deductions based upon those foreign taxes and, presumably, there were valid accounting reasons for its choice. That it was dilatory in seeking to amend its returns in order to claim foreign tax credits for the years in dispute does not alter the fact that it received some tax benefit for foreign taxes paid. 
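The competing readings of the limitations period reduce to simple date arithmetic, which the following toy sketch makes concrete. It is an illustration only, not tax software: it simplifies the trigger to whole calendar years (the statute actually runs from the date prescribed by law for filing the return), and the function names are invented for the example.

```javascript
// Toy illustration of the two readings of § 6511(d)(3)(A)'s ten-year
// window, simplified to whole years.

// Court's reading: the window runs from the year of election,
// i.e., the year the foreign tax was actually paid.
function timelyUnderCourtsReading(yearTaxPaid, yearAmended) {
    return yearAmended <= yearTaxPaid + 10;
}

// Chrysler's reading: the window runs from the carryover year,
// the year to which the credit is ultimately applied.
function timelyUnderChryslersReading(carryoverYear, yearAmended) {
    return yearAmended <= carryoverYear + 10;
}

// Chrysler paid the foreign taxes in 1980-1982, and in 1995 sought to
// convert its deductions to credits and carry them to 1985.
[1980, 1981, 1982].forEach((y) => {
    console.log(`${y}: court's reading -> ` +
        (timelyUnderCourtsReading(y, 1995) ? "timely" : "untimely"));
});
// All untimely: even 1982 + 10 = 1992 < 1995.
console.log("Chrysler's reading -> " +
    (timelyUnderChryslersReading(1985, 1995) ? "timely" : "untimely"));
// Timely: 1985 + 10 = 1995, which is why the choice of reading mattered.
```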
Second, that Congress passed "clarifying" language to the phrase of § 6511 in dispute — prospectively amending "for the year with respect to which the claim is made" to "for the year in which such taxes were actually paid" — suggests that our reading is correct. See footnote 3, supra. Third, our reading avoids the uncertainty that would attend Chrysler's interpretation, which could lead to a shorter or longer limitations period depending on the unique fiscal circumstances of the taxpayer.4 And, fourth, we are obliged to strictly construe statutes that grant a deduction to a taxpayer in favor of the government. Weingarden, 825 F.2d at 1029.

58 For the foregoing reasons, we affirm the judgment of the Tax Court, and hold that the ten-year statute of limitations period of § 6511(d)(3)(A) "begins with the date prescribed by law for filing the return for the year from which the excess foreign taxes are carried." Ampex, 620 F.2d at 857.

III. Employee Stock Ownership Plan

59 The Tax Court framed the third and final issue as follows:

60 [W]hether Chrysler Corporation may deduct for 1985 amounts it paid to redeem its common stock held in the employee stock ownership trust (ESOT) underlying the Chrysler Employee Stock Ownership Plan (ESOP).

61 Chrysler Corp. v. Comm'r, No. 22148-97, 2001 WL 1090239, T.C.M.(RIA) 2001-244 (Sept. 18, 2001).

62 The facts giving rise to this dispute are relatively straightforward and were summarized by the Tax Court as follows:

63 Chrysler was faced with an economic crisis in 1979 that resulted in Congress' enacting the [Chrysler Corporation Loan Guarantee Act of 1979 (LGA), Pub.L. 96-185, 93 Stat. 1324 (1980)] on Chrysler's behalf.... By way of the LGA, Congress provided Chrysler with up to $1.5 billion in loan guarantees in return for Chrysler's satisfaction of certain conditions.

64 Two of the conditions required that employees of Chrysler and its subsidiaries and affiliates make at least $587.5 million in wage and benefit concessions and that Chrysler set up an employee stock ownership plan meeting the requirements of both sections 401(a) (qualified deferred compensation plans) and 4975(e)(7) (employee stock ownership plans). Two other conditions required that Chrysler establish the ESOT within the rules of section 401(a) and that Chrysler contribute shares of its common stock to the ESOT over a 4-year period from 1981 through 1984. In each of those 4 years, Chrysler was required to contribute to the ESOT Chrysler common stock with a value of at least $40.625 million; during that 4-year period, Chrysler was required to contribute to the ESOT a total of at least $162.5 million of its common stock.

65 ....

66 Pursuant to the LGA, Chrysler established the ESOP effective July 1, 1980, and funded the ESOT by issuing to it new shares of Chrysler common stock during each of the ESOT's fiscal years ended June 30, 1981 through 1984. Pursuant to the terms of the ESOP, employees could participate in the plan if they had: (1) Worked for Chrysler or any of its subsidiaries or affiliates for 9 continuous months at the beginning of the plan year and (2) been affected by the wage and benefit concessions required by the LGA. Chrysler established the ESOP to: (1) Satisfy the LGA's requirement for obtaining the Federal Government's loan guarantees, (2) compensate employees for wage and benefit concessions, and (3) contribute to Chrysler's financial recovery and long-term viability by enhancing employee motivation and increasing productivity.
67 Chrysler contributed $162.5 million (15,251,891 shares) of its common stock to the ESOT from 1981 through 1984. Chrysler contributed approximately one-fourth of that dollar amount in each of the 4 years and claimed a deduction for the market value of the contributed shares for the years in which the contributions were made. The contributed shares amounted to approximately 22 percent of Chrysler's outstanding shares at the end of 1980, and the ESOT held the largest single block of Chrysler common stock.

68 The ESOT's trustee was a commercial bank named Manufacturer's National Bank of Detroit (MNB), and MNB's nominee was Calhoun & Co. Pursuant to the LGA, MNB allocated the stock contributed by Chrysler to the individual accounts of the ESOP participants in equal amounts, provided that the participant had worked 650 hours or more during the plan year. MNB also invested any dividends received on the stock allocated to a participant's account in additional shares of Chrysler common stock. The LGA authorized the participants to vote the shares in their accounts. MNB had to vote the stock for which no directions had been received in the same proportion as the stock as to which directions had been received. The ESOP authorized distributions to employees only in the event of: (1) Death, in which case the proceeds were forwarded to the designated beneficiary, (2) termination of employment, or (3) the ESOP's termination. Chrysler's board of directors had the discretion to terminate the ESOP at any time after June 30, 1984.

69 In September 1983, while the ESOP was in place, Chrysler renegotiated its collective bargaining contracts with its employees who were members of the United Automobile, Aerospace and Agricultural Implement Workers of America (UAW). The renegotiation resulted in a contract extending through October 1985. In 1985, when the collective bargaining contracts were again renegotiated, Chrysler agreed as part of those contracts to terminate the ESOP and to allow the participants either to keep the Chrysler common stock in the ESOT allocated to them or to allow Chrysler to redeem that stock at a per-share price equal to the applicable closing price on the New York Stock Exchange. In December 1985, Chrysler redeemed just over 9.58 million shares of its common stock from the ESOT for a total cost to Chrysler of $426,969,582. The ESOP participants who opted not to sell their stock received over 3.2 million shares of Chrysler common stock from the ESOT.

70 On its 1985 Federal income tax return, Chrysler claimed a deduction of $327,595,421 associated with its redemption of its common stock from the ESOT. According to Chrysler's computation, the deduction was less than the redemption price so as not to duplicate the tax benefits Chrysler had previously received by way of the tax deductions claimed for the same shares when contributed to the ESOT. The approximate $328 million deduction was not taken for financial accounting purposes. For those purposes, Chrysler reported the redemption as a purchase of treasury stock.

71 Chrysler, 2001 WL 1090239 (citations and footnote omitted).

72 Chrysler does not dispute any of the statements made by the Tax Court. However, referring to a declaration of its assistant treasurer (and later vice-president) Robert S. Miller, the company avers that the termination of the ESOP was union-driven. Specifically, as part of the collective bargaining negotiations of 1985, the two sides agreed that the ESOP would be liquidated.
The company takes the position that the deduction of $327,595,421 that it made in 1985 in relation to the termination of the ESOP was properly characterized as "compensation" and therefore was deductible under 26 U.S.C. § 162(a). In its view, the LGA obliged Chrysler to establish the ESOP as a means of compensating its employees for the wages and benefits that they relinquished in order to satisfy the terms of the LGA. This position is strengthened by the fact that the union requested the liquidation of the ESOP.

73 Although the Tax Court's rationale with respect to the details of Chrysler's claim will be discussed in conjunction with the arguments of the parties, it provided the following statement of the core issue, which is worth quoting at the outset:

74 We must decide whether Chrysler may deduct the costs (redemption price and related expenses) which it incurred to redeem its common stock upon termination of the ESOP. Respondent moves the Court to decide this issue by way of partial summary judgment, arguing that a firmly established body of law holds that a corporation may not deduct the costs which it incurs to redeem its stock. Petitioner objects to respondent's motion. Petitioner asserts that it may deduct its costs as personal service compensation or, alternatively, as a financing expense. Petitioner argues as to its primary assertion that material facts are still in dispute which will establish that Chrysler redeemed its common stock from the ESOT intending to compensate the employees for their personal services. Petitioner argues as to its alternative assertion that material facts are still in dispute which will establish that it redeemed its common stock from the ESOT as a financing expense.

75 ....

76 An accrual method taxpayer such as Chrysler may deduct an expenditure under section 162(a) only if the expenditure is: (1) An expense, (2) an ordinary expense, (3) a necessary expense, (4) incurred during the taxable year, and (5) made to carry on a trade or business. A necessary expense is an expense that is appropriate or helpful to the development of the taxpayer's business. An ordinary expense is an expense that is "normal, usual, or customary" in the type of business involved. The need for an expenditure to be ordinary serves, in part, to "clarify the distinction, often difficult, between those expenses that are currently deductible and those that are in the nature of capital expenditures, which, if deductible at all, must be amortized over the useful life of the asset."

77 ... Whether a corporation's redemption of its stock may constitute an ordinary and necessary business expense under section 162 has been considered frequently before. The relevant cases generally begin their analysis with the oft-quoted principle of United States v. Gilmore, 372 U.S. 39, 83 S.Ct. 623, 9 L.Ed.2d 570 (1963). There, the Supreme Court held that the expense of defending a divorce suit was a nondeductible personal expense, even though the outcome of the divorce would affect the taxpayer's holdings of income-producing property and might affect his business reputation. The Court explained:

78 the origin and character of the claim with respect to which an expense was incurred, rather than its potential consequences upon the fortunes of the taxpayer, is the controlling basic test of whether the expense was "business" or "personal" and hence whether it is deductible or not * * *

79 ....

80 Thereafter, the Supreme Court applied the origin of the claim test of United States v.
Gilmore, supra, to two companion cases in which the issue was whether expenses were ordinary or capital. See United States v. Hilton Hotels Corp., 397 U.S. 580, 90 S.Ct. 1307, 25 L.Ed.2d 585 (1970); Woodward v. Commissioner, 397 U.S. 572, 90 S.Ct. 1302, 25 L.Ed.2d 577 (1970). Both cases involved the deductibility of a corporation's costs incurred incident to the appraisal and acquisition of dissenters' stock. The Court rejected the corporations' claims that the costs were deductible because their "primary purpose" did not directly involve the acquisition of stock. In the Woodward case, the Court explained that "A test based upon the taxpayer's `purpose' in undertaking or defending a particular piece of litigation would encourage resort to formalisms and artificial distinctions." The Court rejected the primary purpose test as "uncertain and difficult" and directed that the issue of whether an expense is ordinary or capital be controlled by the "simpler inquiry whether the origin of the claim litigated is in the process of acquisition itself." Woodward v. Commissioner, supra at 577, 90 S.Ct. 1302.

81 ....

82 Nor are we persuaded by petitioner's endeavor to avoid application of the well-settled law on redemptions by characterizing the full amount of the redemption payments solely ... as the payment of personal service compensation. The redemption payments at hand were not ... a substitute for wages.... The fact that the redemption payments were not attributable to the personal services of the employees is seen quickly from the fact that Chrysler merely paid the employees for the appreciated value of their stock. The employees did not receive anything of value from the redemption on account of personal services that would entitle Chrysler to deduct a compensation expense with respect thereto. The employees have simply surrendered their Chrysler stock for its value in cash.

83 Chrysler, 2001 WL 1090239 (citations and footnotes omitted).

84 As with the issues previously discussed, we review the Tax Court's partial grant of summary judgment to the Commissioner de novo. See Tele-Commc'ns, Inc. v. Comm'r, 104 F.3d 1229, 1232 (10th Cir.1997).

85 It is undisputed that, as a general rule, "ordinary and necessary" expenses are deductible while capital expenditures are not. 26 U.S.C. §§ 162(a) (ordinary expenses), 263 (capital expenses). As the Court has put it, "If an expense is capital, it cannot be deducted as `ordinary and necessary,' either as a business expense under § 162 of the Code or as an expense of `management, conservation, or maintenance' under § 212." Woodward, 397 U.S. at 575, 90 S.Ct. 1302.

86 The Tax Court quoted the origin of the claim test adopted by Gilmore, supra, and the parties agree that it applies to the issue before us. Chrysler argues that the Tax Court failed to apply the test correctly. The company then refers us to a Ninth Circuit decision that explains the proper approach as follows:

87 Characterization of a transaction for taxation is a two step process. The initial step is to discover the origin of the claim from which the tax dispute arose. This attribution determination is critical to proper tax characterization because of the inherently factual nature of taxation. Once a transaction is placed in its proper context, the nature of that transaction becomes discernible, and its tax character may be identified. Thus, the second step, the actual tax characterization, is dependent upon the proper resolution of the preliminary attribution question.

88 Keller Street Dev. Co.
v. Comm'r, 688 F.2d 675, 678 (9th Cir.1982). Chrysler argues that its "claim" is the demand by the union in 1985 that the ESOP be terminated in consideration of wage and benefit concessions and that Chrysler do so by repurchasing its members' shares. See Stipulation of Facts #1 — ESOP Issue, August 8, 2000 at ¶ 169. As the company sees it, the origin of the claim in this case cannot be divorced from the clearly compensatory contribution of shares by Chrysler as required by the LGA to offset wage concessions made by its employees.

89 Chrysler also relies upon facts, especially those contained in the Miller declaration referenced earlier, that the company believes support its view — or at the very least require a trial. These contentions, which it argues the Tax Court overlooked, include testimony that 1) it was foreseeable that the union would terminate the ESOP if Chrysler regained its financial strength; 2) the union preferred cash compensation to ESOP shares; 3) termination of the ESOP put the company closer to its position prior to the enactment of the LGA; and, 4) Chrysler did not financially benefit from the ESOP.

90 Just as it contends the Tax Court misconstrued the origin of the claim, so, too, Chrysler argues that the court mistook the character of the claim, which it states was "compensation, pure and simple." In support it cites Wells Fargo & Co. v. Comm'r, 224 F.3d 874 (8th Cir.2000), which involved amounts paid to cancel employee stock options in connection with an acquisition. Despite the fact that this would appear to be a capital expense, part of the deduction was allowed on the theory that the origin of the payments was not the acquisition but the employment relationship between the company and the option holders. Id. at 887 (explaining distinction between benefit directly related to acquisition, which is typically capitalized, and an indirect relationship between salaries originating from employment relationship and acquisition).

91 As the Commissioner points out in response, however, the employees here did not receive a premium upon redemption because Chrysler paid the fair market value of its stock. In other words, the ESOP would have received the same amount had it simply sold the shares on the open market. Furthermore, not all employees chose to receive cash in lieu of stock upon the termination of the ESOP. That some employees remained shareholders in essentially the same financial position as before the liquidation of the ESOP highlights the non-compensatory nature of the redemption. Those employees who elected to receive stock distributions had made the same wage and benefit concessions as those receiving cash distributions, but received no part of the amounts that taxpayer now seeks to deduct. If, instead of compensating its employees with ESOP stock, Chrysler had compensated them in cash that they subsequently invested in Chrysler's stock, the company could not deduct the appreciated value of that stock as compensation if it ultimately redeemed the stock.

92 In Harder Servs., Inc. v. Comm'r, 67 T.C. 585, 1976 WL 3601 (1976), a corporation redeemed an employee's stock when it terminated him and attempted to deduct the $100,677 redemption cost.
The Tax Court refused to allow the amount to be deducted as compensation, rather than as a capital expense, in part because "there is nothing in the record to indicate that [the employee] would have been paid anything with respect to the termination of his employment had he wished to retain his 22 shares of Harder Tree thereafter." Id. at 597.

93 The Commissioner also distinguishes Wells Fargo, supra. He cites with approval the statement from that case, "the ultimate question is whether the expense is directly related to the transaction which provides the long term benefit." Wells Fargo, 224 F.3d at 886. According to the Commissioner, the flaw in Chrysler's argument is its attempt to portray its contribution of stock to the ESOP and its later redemption of the stock as one claim. While Chrysler's stock contributions to the ESOP were compensatory in nature, its agreement with the UAW to terminate the ESOP and to redeem the shares of those participating employees who chose to take cash in lieu of shares represented a distinct, non-compensatory transaction that was not compelled by either the LGA or the terms of the ESOP. Thus, the costs incurred were "directly related" to the stock redemption, but only tangentially related to the LGA and to the establishment and funding of the ESOP.

94 We conclude that the Tax Court correctly characterized the nature of the stock redemption as non-compensatory. Although the redemption may have been "in consideration of wage and benefit concessions," the company deducted the cost of those benefits as they were incurred in the years leading up to the liquidation of the ESOP. At the time of redemption, it simply paid the employees fair market value for their shares. Under these circumstances, the Tax Court correctly held that Chrysler could not deduct its redemption-related expenses as compensation.

IV.

95 The decision of the Tax Court entered on November 18, 2002, is affirmed.

Notes:

1 Because this court must view the facts in a light most favorable to Chrysler, id. at 1227, we rely on the parties' Stipulation of Facts dated June 5, 2000, their Joint Stipulation of Agreed Upon Facts dated July 11, 2000, and the representations made by Chrysler in its briefs to this court.

2 "The taxpayer has the burden of proving its entitlement to a deduction." Gen. Dynamics, 481 U.S. at 245, 107 S.Ct. 1732 (citing Helvering v. Taylor, 293 U.S. 507, 514, 55 S.Ct. 287, 79 L.Ed. 623 (1935)).

3 We note that, while this version of the statute applies to the instant appeal, it was amended by Congress in 1997 so that the highlighted language now provides, "for filing the return for the year in which such taxes were actually paid or accrued." Taxpayer Relief Act of 1997, Pub.L. No. 105-34, 111 Stat. 788, 945.

4 As the Court of Claims observed, however, even under our interpretation, the limitations period could be increased to as much as twelve years in cases in which a taxpayer sought to carry back foreign tax credits as provided in § 904(c). Ampex, 620 F.2d at 860 n. 10.
Q: Error when running applet with Slick2D

I have been trying to get a Slick application to work on a website for a while now, and I have the HTML code correct, I know that for sure; however, I am getting an error from the applet saying this:

    Initializing real applet
    Mon May 20 17:07:24 EDT 2013
    ERROR:Game.GameBoard
    java.lang.ClassNotFoundException: Game.GameBoard
        at java.net.URLClassLoader$1.run(Unknown Source)
        at java.net.URLClassLoader$1.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Unknown Source)
        at org.newdawn.slick.AppletGameContainer.init(AppletGameContainer.java:123)
        at org.lwjgl.util.applet.AppletLoader.switchApplet(AppletLoader.java:1330)
        at org.lwjgl.util.applet.AppletLoader$2.run(AppletLoader.java:909)
        at java.awt.event.InvocationEvent.dispatch(Unknown Source)
        at java.awt.EventQueue.dispatchEventImpl(Unknown Source)
        at java.awt.EventQueue.access$200(Unknown Source)
        at java.awt.EventQueue$3.run(Unknown Source)
        at java.awt.EventQueue$3.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.security.ProtectionDomain$1.doIntersectionPrivilege(Unknown Source)
        at java.awt.EventQueue.dispatchEvent(Unknown Source)
        at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
        at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source)
        at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
        at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
        at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
        at java.awt.EventDispatchThread.run(Unknown Source)
    This occurred while 'Initializing real applet'
    Unable to create game container
    java.lang.RuntimeException: Unable to create game container
        at org.newdawn.slick.AppletGameContainer.init(AppletGameContainer.java:147)
        at org.lwjgl.util.applet.AppletLoader.switchApplet(AppletLoader.java:1330)
        at org.lwjgl.util.applet.AppletLoader$2.run(AppletLoader.java:909)
        at java.awt.event.InvocationEvent.dispatch(Unknown Source)
        at java.awt.EventQueue.dispatchEventImpl(Unknown Source)
        at java.awt.EventQueue.access$200(Unknown Source)
        at java.awt.EventQueue$3.run(Unknown Source)
        at java.awt.EventQueue$3.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.security.ProtectionDomain$1.doIntersectionPrivilege(Unknown Source)
        at java.awt.EventQueue.dispatchEvent(Unknown Source)
        at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
        at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source)
        at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
        at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
        at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
        at java.awt.EventDispatchThread.run(Unknown Source)
    Done loading

I understand that the applet won't start. I've done some extensive searching to find the answer, but have found none. I do not use a GameState, but use a BasicGame. Most tutorials I've read say to just write it like a normal application, so I'm confused as to how to get this to work properly. It doesn't seem like I should need to make many changes, but I can't figure out what exactly I need to do. Any help would be appreciated. Thank you!

A: It seems as if the error came down to a naming issue. The trace shows that org.newdawn.slick.AppletGameContainer loads the game class by name (via Class.forName) and cannot find Game.GameBoard. That means the class name supplied in the applet's parameters must be the exact fully qualified name of the BasicGame subclass, package included, and that class must actually be present in one of the jars the applet downloads. A mismatched package or class name, or a game jar built without the class, produces exactly this ClassNotFoundException.
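For reference, here is a minimal sketch of the applet HTML this setup usually relies on. The parameter names follow the standard LWJGL AppletLoader / Slick applet arrangement visible in the trace above; the jar file names and dimensions are placeholders, not taken from the question:

    <applet code="org.lwjgl.util.applet.AppletLoader"
            archive="lwjgl_util_applet.jar" codebase="." width="800" height="600">
      <!-- AppletLoader hands control to Slick's applet container -->
      <param name="al_main" value="org.newdawn.slick.AppletGameContainer">
      <!-- must be the exact fully qualified name of your BasicGame subclass -->
      <param name="game" value="Game.GameBoard">
      <!-- your compiled game classes must live in one of these archives -->
      <param name="al_jars" value="game.jar, slick.jar, lwjgl.jar">
      <param name="al_windows" value="windows_natives.jar">
      <param name="al_linux" value="linux_natives.jar">
      <param name="al_mac" value="macosx_natives.jar">
    </applet>

If the value of the game parameter does not match the package and class name of the class inside game.jar (here, a GameBoard class declared in package Game), the container fails with the ClassNotFoundException shown above.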
Afternoon tea and scones – Cameron Highlands, Pahang

Located on the Titiwangsa Range some 1,500m above sea level, Cameron Highlands is Malaysia's largest hill resort. The cool weather up here makes it the perfect environment for cultivating a variety of produce, including tea, flowers, cacti, vegetables and strawberries. Take in the picturesque view of flourishing tea plantations blanketing the undulating hills, a sight best savoured over a refreshing cup of tea.

Must Do
Visit the BOH Tea Plantation to learn about the tea-making process, followed by a steaming cup of tea in full view of the sprawling tea plantation.
Q: Conditionally assign multiple columns to another DataFrame (the condition determines which set of columns in that row are assigned)

I need to reformat some data. I've never used pandas before, and could use some help. I have two DataFrames:

    df1
    dfTarget

df1 is the un-formatted data, dfTarget is how I need the data to be formatted. Based on a condition, I need one group of columns in df1 to be copied to certain columns in dfTarget. If the condition is false, I need another group of columns in df1 to be copied to certain columns in dfTarget.

Simplified df1:

      city state condition city2 state2
    0
    1
    2
    3
    4

Simplified dfTarget:

      mCity mState
    0
    1
    2
    3
    4

Basically, if the condition is true, I need to move 'city' and 'state' into 'mCity' and 'mState' respectively. If the condition is false, I need to move 'city2' and 'state2' into 'mCity' and 'mState'. dfTarget is starting off empty, and needs to be filled row by row based on a bunch of conditions in df1. I've never used pandas, and tried to research this myself, but got lost quickly in all the different methods. Please, what's the best way to do this?

A: It should be simple enough to conditionally assign the columns, assuming the indices and/or number of rows is the same. If the condition comes from a column, you can try np.where:

    dfTarget[['mCity', 'mState']] = np.where(
        df1[['condition']],
        df1[['city', 'state']],
        df1[['city2', 'state2']])

Minimal Example

    df1 = pd.DataFrame({
        'city': list('abc'),
        'state': list('def'),
        'condition': [True, False, True],
        'city2': list('hij'),
        'state2': list('klm')})

    dfTarget = pd.DataFrame(index=df1.index, columns=['mCity', 'mState'])

    dfTarget[['mCity', 'mState']] = np.where(
        df1[['condition']],
        df1[['city', 'state']],
        df1[['city2', 'state2']])

      mCity mState
    0     a      d
    1     i      l  # comes from second group of columns
    2     c      f
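A usage note on the example above (a sketch of mine, not part of the original answer; it reuses the same toy df1 and assumes both frames share one index): the reason np.where fills both columns at once is NumPy broadcasting of the (n, 1) condition against the (n, 2) column blocks. Making that shape explicit also lets you build dfTarget in a single step:

    import numpy as np
    import pandas as pd

    df1 = pd.DataFrame({
        'city': list('abc'), 'state': list('def'),
        'condition': [True, False, True],
        'city2': list('hij'), 'state2': list('klm')})

    # (n, 1) boolean array broadcasts across both target columns
    cond = df1['condition'].to_numpy()[:, None]

    dfTarget = pd.DataFrame(
        np.where(cond, df1[['city', 'state']], df1[['city2', 'state2']]),
        index=df1.index, columns=['mCity', 'mState'])

    print(dfTarget)
    #   mCity mState
    # 0     a      d
    # 1     i      l
    # 2     c      f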
Indian cricket board lambasted over Kumble birthday post

AFP
17th October, 2017 06:27:36

India's powerful cricket board hastily deleted a birthday message for record Test wicket-taker Anil Kumble Tuesday after irate fans took offence at the spinning great being described simply as a "former bowler".

The row follows Kumble's acrimonious departure as national coach in June, when he quit saying his relationship with captain Virat Kohli was "untenable".

The Board of Control for Cricket in India joined in the well wishes for Kumble's 47th birthday but quickly backtracked after its low-key description sent fans into a spin.

"Here's wishing former #TeamIndia bowler @anilkumble1074 a very happy birthday," read the BCCI message on its official Twitter account, before the post was deleted.

Fans of Kumble were angered, saying the board was playing down the achievements of India's greatest bowler.

"Umm bowler? Wasnt he also Captain and Coach and is India's leading Wicket taker?" replied television journalist and author Digvijay Singh Deo on Twitter.

Other fans urged the board to give Kumble his "due respect".

The BCCI then tweeted another message calling him "former captain" and "legend", but the damage was done.

Kumble's relationship with the BCCI was the subject of much speculation after his resignation. It was reported in the Indian press that board officials tried to salvage ties between Kumble and Kohli, but that the relationship was beyond repair. Kumble, who played for India for 17 years, has never commented publicly on the matter. Ravi Shastri has since been appointed coach of the Indian side.

Kumble, or 'Jumbo' as he is fondly known, remains India's highest Test wicket-taker with his 619 scalps in 132 matches.
---
abstract: 'We study the $3 \times 3$ elliptic systems $\nabla\times (a(x) \nabla\times u)-\nabla (b(x) \nabla \cdot u)=f$, where the coefficients $a(x)$ and $b(x)$ are positive scalar functions that are measurable and bounded away from zero and infinity. We prove that weak solutions of the above system are Hölder continuous under some minimal conditions on the inhomogeneous term $f$. We also present some applications and discuss several related topics including estimates of the Green’s functions and the heat kernels of the above systems.'
address:
- 'Department of Mathematics, Sungkyunkwan University, Suwon 440-746, Republic of Korea'
- 'Department of Mathematics, Yonsei University, Seoul 120-749, Republic of Korea'
author:
- Kyungkeun Kang
- Seick Kim
title: 'Elliptic systems with measurable coefficients of the type of Lamé system in three dimensions.'
---

Introduction {#sec:intro}
============

In this article, we are concerned with the system of equations $$\label{eq1.5ee} \nabla\times (a(x)\nabla\times {\boldsymbol{u}})-\nabla(b(x)\nabla \cdot {\boldsymbol{u}})= {\boldsymbol{f}} \quad\text{in }\;\Omega,$$ where the unknown ${\boldsymbol{u}}=(u^1,u^2, u^3)$ and the inhomogeneous term ${\boldsymbol{f}}=(f^1,f^2,f^3)$ are vector valued functions defined on a (possibly unbounded) domain $\Omega\subseteq {\mathbb R}^3$, and the coefficients $a(x)$ and $b(x)$ are positive scalar functions on $\Omega$ that are measurable and bounded away from zero and infinity. It should be noted from the beginning that the above system is elliptic. As a matter of fact, the following vector identity $$\label{eq0.2ab} \nabla \times (\nabla \times {\boldsymbol{u}})- \nabla (\nabla \cdot {\boldsymbol{u}})= -\Delta {\boldsymbol{u}}$$ implies that in the case when $a$ and $b$ are constants, the above system reduces to $$-a\Delta {\boldsymbol{u}}+(a-b) \nabla (\nabla \cdot {\boldsymbol{u}})={\boldsymbol{f}}\quad\text{in }\;\Omega,$$ which (under the assumption that $a>0$ and $b>4a/3$) becomes the Lamé system of linearized elastostatics in dimension three; see e.g., Dahlberg et al. [@DKV]. A special case of the system is the following system $$\label{eq0.1aa} \nabla\times (a(x)\nabla\times {\boldsymbol{u}})=0,\quad \nabla \cdot {\boldsymbol{u}}=0\quad\text{in }\;\Omega,$$ which arises from Maxwell’s equations in a quasi-static electromagnetic field, where the displacement of the electric current is neglected; see e.g., Landau et al. [@LLP Ch. VII]. In [@KK02], the authors proved that weak solutions of the system are Hölder continuous in $\Omega$; see also Yin [@Yin02]. It is an interesting result because in general, weak solutions of elliptic systems with bounded measurable coefficients in dimension three or higher are not necessarily continuous; see De Giorgi [@DG68]. Another motivation for studying the system comes from an interesting article by Giaquinta and Hong [@GH], where they considered the following equations involving differential forms: $$\label{eq0.2hg} d^* (\sigma(x) d A)=0,\quad -d^* A=0\quad\text{in }\;\Omega,$$ where $\sigma(x) \in L^\infty(\Omega)$ is a function with $\sigma_1 \leq \sigma(x) \leq \sigma_2\,$, $\sigma_1$ and $\sigma_2$ being two positive constants, $A$ is a one-form, $dA$ is its exterior differential, and $d^*$ denotes the adjoint of $d$ (i.e., $d^*=\delta$, the codifferential). Related to the well-known result of De Giorgi [@DG57] on elliptic equations, they raised an interesting question of whether any weak solution $A$ of the equations is Hölder continuous in $\Omega$.
In the three dimensional setting, the equations become the system , and thus, in dimension three, a positive answer was given in [@KK02]. Conversely, in terms of differential forms, the system with ${\boldsymbol{f}}=0$ becomes $$d^* (a(x) dA)+d(b(x) d^* A)=0\quad\text{in }\;\Omega;\quad A=u^1 dx^1+ u^2 dx^2+ u^3 dx^3.$$ Similar to the question raised by Giaquinta and Hong [@GH], it is natural to ask whether weak solutions of the above equations are Hölder continuous in $\Omega$. We hereby thank Marius Mitrea for suggesting this question to us. In this article, we prove that weak solutions of the system are Hölder continuous in $\Omega$ assuming a minimal condition on ${\boldsymbol{f}}$, and thus give a positive answer to the above question in dimension three; see Theorem \[thm3.2a\] below for the precise statement. With this Hölder estimate at hand, we are able to show that there exists a unique Green’s function ${\boldsymbol{G}}(x,y)$ of the system in an arbitrary domain $\Omega\subseteq {\mathbb R}^3$, and it has the natural bound $${\lvert{\boldsymbol{G}}(x,y)\rvert} \leq N {\lvertx-y\rvert}^{-1}$$ for all $x, y\in\Omega$ such that $0<{\lvertx-y\rvert}<d_x \wedge d_y$, where $d_x:={\operatorname{dist}}(x,\partial\Omega)$, $a\wedge b:=\min(a,b)$, and $N$ is a constant independent of $\Omega$. In particular, when $\Omega={\mathbb R}^3$, the above estimate holds for all $x \neq y$; see Theorem \[thm5.6gr\] and Remark \[rmk6.7gr\] below. It also follows that the heat kernel ${\boldsymbol{K}}_t(x,y)$ of the system exists in any domain $\Omega$, and in the case when $\Omega={\mathbb R}^3$, we have the following usual Gaussian bound for ${\boldsymbol{K}}_t(x,y)$; see Theorem \[thm2hk\] below: $${\lvert{\boldsymbol{K}}_t(x,y)\rvert} \leq N t^{-3/2}\exp\{-\kappa|x-y|^2/ t \},\quad \forall t>0,\;\; x,y\in{\mathbb R}^3.$$ Another goal of this article is to establish a global Hölder estimate for weak solutions of the system in bounded Lipschitz domains. More precisely, we consider the following Dirichlet problem $$\label{eq0.3cc} \left\{ \begin{array}{c} \nabla\times (a(x)\nabla\times {\boldsymbol{u}})={\boldsymbol{f}} +\nabla\times {\boldsymbol{g}} \quad\text{in }\;\Omega,\\ \nabla \cdot {\boldsymbol{u}}=h\quad\text{in }\;\Omega,\\ {\boldsymbol{u}}=0\quad \text{on }\;\partial\Omega, \end{array} \right.$$ where $\Omega$ is a bounded, simply connected Lipschitz domain. We prove that the weak solution ${\boldsymbol{u}}$ of the above problem is uniformly Hölder continuous in $\overline\Omega$ under some suitable conditions on the inhomogeneous terms ${\boldsymbol{f}}$, ${\boldsymbol{g}}$, and $h$; see Theorem \[thm3.1t\] for the details. This question of global Hölder regularity for weak solutions of the system turned out to be a rather delicate problem and was not discussed at all in [@KK02]. Yin addressed this issue in [@Yin02], but it appears that there is a serious flaw in his proof; he also considered a similar problem with a more general boundary condition in [@Yin04], but it seems to us that his argument there regarding the estimate near the boundary has a gap too.
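As an aside on the differential-form formulation above, the stated correspondence can be checked directly (a sketch of ours, not from the original text; we identify the one-form $A$ with the vector field ${\boldsymbol{u}}$ via the Euclidean metric on ${\mathbb R}^3$ and use the sign convention in which $d^*A=-\nabla\cdot {\boldsymbol{u}}$): $$dA \;\leftrightarrow\; \nabla\times {\boldsymbol{u}}, \qquad d^* A = -\nabla\cdot {\boldsymbol{u}},$$ so that $$d^*(a(x)\,dA)+d(b(x)\,d^*A)=0 \quad\Longleftrightarrow\quad \nabla\times (a(x)\nabla\times {\boldsymbol{u}})-\nabla(b(x)\nabla\cdot {\boldsymbol{u}})=0.$$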
Utilizing the above-mentioned global Hölder estimate for weak solutions of the system , we show that the Green’s function ${\boldsymbol{G}}(x,y)$ of the system in $\Omega$ has the following global bound: $${\lvert{\boldsymbol{G}}(x,y)\rvert} \leq N {\bigl\{d_x\wedge {\lvertx-y\rvert}\bigr\}}^{\alpha} {\bigl\{d_y\wedge {\lvertx-y\rvert}\bigr\}}^{\alpha} {\lvertx-y\rvert}^{-1-2\alpha},\quad \forall x, y\in \Omega,\;\; x\neq y,$$ where $0<\alpha<1$; see Theorem \[thm5.8gr\] for the details. In that case, we also have the following global estimate for the heat kernel ${\boldsymbol{K}}_t(x,y)$ of the system in $\Omega$: For all $T>0$, there exists a constant $N$ such that for all $x,y \in \Omega$ and $0<t \leq T$, we have $${\lvert{\boldsymbol{K}}_t(x,y)\rvert} \leq N \left(1 \wedge \frac {d_x} {\sqrt {t} \vee {\lvertx-y\rvert}} \right)^{\alpha} \left(1 \wedge \frac{d_y} {\sqrt {t} \vee {\lvertx-y\rvert}}\right)^{\alpha}\,t^{-3/2}\exp {\{-\kappa {\lvertx-y\rvert}^2/t\}},$$ where $\kappa>0$ and $\alpha\in (0,1)$ are constants independent of $T$, and we used the notation $a\vee b=\max(a,b)$; see Theorem \[thm3hk\] below. At the moment, it is not clear to us whether or not any global Hölder estimate is available for weak solutions of the full system with zero Dirichlet boundary data.

The organization of the paper is as follows. In Section \[sec:nd\], we introduce some related notation and definitions. In Section \[sec:main\], we state our main theorems and give a few remarks concerning extensions of them. The proofs of our main results are given in Section \[sec:pf\] and some applications of them are presented in Section \[sec:app\]. We devote Section \[sec:green\] entirely to the study of the Green’s functions of the system , and Section \[sec:p\] to the investigation of the parabolic system and the heat kernels associated to the system .

Notation and Definitions {#sec:nd}
========================

Basic notation
--------------

The basic notation used in this article is that employed in Gilbarg and Trudinger [@GT]. A function in bold symbol such as ${\boldsymbol{u}}$ means that it is a three dimensional vector-valued function; $\nabla\cdot {\boldsymbol{u}}$ denotes ${\operatorname{div}}{\boldsymbol{u}}$, $\nabla \times {\boldsymbol{u}}$ denotes ${\operatorname{curl}}{\boldsymbol{u}}$, and $\nabla {\boldsymbol{u}}$ denotes the gradient matrix of ${\boldsymbol{u}}$. Throughout the article, $\Omega$ denotes a (possibly unbounded) domain in ${\mathbb R}^3$ (i.e., an open connected set in ${\mathbb R}^3$) and $\partial \Omega$ denotes its boundary. For a domain $\Omega$ with $C^1$ boundary $\partial\Omega$, we denote by ${\boldsymbol{n}}$ the unit outward normal to $\partial\Omega$. Let $L$ be the operator of the form $$L{\boldsymbol{u}}:=\nabla\times (a(x)\nabla\times {\boldsymbol{u}})-\nabla(b(x)\nabla \cdot {\boldsymbol{u}})$$ whose coefficients are measurable functions on $\Omega$ satisfying the following condition: $$\label{eq0.2bh} \nu \leq a(x),\; b(x) \leq \nu^{-1},\quad\forall x\in\Omega,\quad \text{for some }\;\nu\in (0,1].$$ For $x\in\Omega$ and $r>0$, we denote by $B_r(x)$ the open ball of radius $r$ centered at $x$ and $$\Omega_r(x):= \Omega\cap B_r(x); \quad (\partial\Omega)_r(x):=\partial\Omega\cap B_r(x).$$ We write $S' \subset\subset S$ if $S'$ has a compact closure in $S$; $S'$ is strictly contained in $S$.
Function spaces
---------------

The Hölder spaces $C^{k,\alpha}(\overline \Omega)$ ($C^{k,\alpha}(\Omega)$) are defined as the subspaces of $C^k(\overline \Omega)$ ($C^k(\Omega)$) consisting of functions whose $k$-th order partial derivatives are uniformly Hölder continuous (locally Hölder continuous) with exponent $\alpha$ in $\Omega$. For simplicity we write $$C^{0,\alpha}(\Omega)=C^\alpha(\Omega),\quad C^{0,\alpha}(\overline\Omega)=C^\alpha(\overline \Omega),$$ with the understanding $0<\alpha<1$ whenever this notation is used. We set $${\lVertu\rVert}_{C^\alpha(\overline\Omega)}={\lvertu\rvert}_{0,\alpha;\Omega}=[u]_{\alpha;\Omega}+{\lvertu\rvert}_{0;\Omega} := \sup_{\substack{x, y \in \Omega\\ x\neq y}} \frac{{\lvertu(x)-u(y)\rvert}}{{\lvertx-y\rvert}^\alpha}+\sup_{\Omega}\,{\lvertu\rvert}.$$ For $p\geq 1$, we let $L^p(\Omega)$ denote the classical Banach space consisting of measurable functions on $\Omega$ that are $p$-integrable. The norm in $L^p(\Omega)$ is defined by $${\lVertu\rVert}_{p; \Omega}={\lVertu\rVert}_{L^p(\Omega)}=\left(\int_\Omega {\lvertu\rvert}^p\,dx\right)^{1/p}.$$ For $p \geq 1$ and $k$ a non-negative integer, we let $W^{k,p}(\Omega)$ denote the usual Sobolev space; i.e. $$W^{k,p}(\Omega)={\{ u\in L^p(\Omega): D^\alpha u \in L^p(\Omega)\;\,\text{for all}\;\, {\lvert\alpha\rvert} \leq k\}}.$$ We denote by $C^\infty_0(\Omega)$ the set of all functions in $C^\infty(\Omega)$ with compact support in $\Omega$. Some other notations are borrowed from Galdi [@Galdi] and Malý and Ziemer [@MZ]. Setting $${\mathcal{D}}={\mathcal{D}}(\Omega)={\bigl\{{\boldsymbol{u}} \in C^\infty_0(\Omega): \nabla\cdot {\boldsymbol{u}}=0\;\text{ in }\;\Omega\bigr\}},$$ for $q\in[1,\infty)$ we denote by $H_q(\Omega)$ the completion of ${\mathcal{D}}(\Omega)$ in the norm of $L^q$. The space $Y^{1,2}(\Omega)$ is defined as the family of all weakly differentiable functions $u\in L^{6}(\Omega)$, whose weak derivatives are functions in $L^2(\Omega)$. The space $Y^{1,2}(\Omega)$ is endowed with the norm $${\lVertu\rVert}_{Y^{1,2}(\Omega)}:={\lVertu\rVert}_{L^{6}(\Omega)}+{\lVert\nabla u\rVert}_{L^2(\Omega)}.$$ If ${\lvert\Omega\rvert}<\infty$, then Hölder’s inequality implies that $Y^{1,2}(\Omega)\subset W^{1,2}(\Omega)$. We define $Y^{1,2}_0(\Omega)$ as the closure of $C^\infty_0(\Omega)$ in $Y^{1,2}(\Omega)$. In the case $\Omega = {\mathbb R}^3$, we have $Y^{1,2}({\mathbb R}^3)=Y^{1,2}_0({\mathbb R}^3)$. Notice that by the Sobolev inequality, it follows that $$\label{eqP-14} {\lVertu\rVert}_{L^{6}(\Omega)} \leq N {\lVert\nabla u\rVert}_{L^2(\Omega)},\quad \forall u\in Y^{1,2}_0(\Omega).$$ Therefore, we have $W^{1,2}_0(\Omega)\subset Y^{1,2}_0(\Omega)$ and $W^{1,2}_0(\Omega)=Y^{1,2}_0(\Omega)$ if ${\lvert\Omega\rvert}<\infty$; see [@MZ §1.3.4]. In particular, if $\Omega$ is a bounded domain, then we have $Y^{1,2}_0(\Omega)=W^{1,2}_0(\Omega)$.

Lipschitz domain
----------------

We say that $\Omega\subset {\mathbb R}^3$ is a (bounded) Lipschitz domain if i) $\Omega$ is a bounded domain; i.e.
$${\operatorname{diam}}\Omega:=\sup{\{{\lvertx-y\rvert}: x, y\in \Omega\}} <\infty,$$ ii) There are constants $M$ and $r_0>0$, called Lipschitz character of $\partial\Omega$, such that for each $P\in \partial\Omega$, there exists a rigid transformation of coordinates such that $P=0$ and $$\Omega \cap B_{r_0}={\{x=(x',x_3)\in {\mathbb R}^3: x_3> \varphi (x')\}}\cap B_{r_0};\quad B_{r_0}=B_{r_0}(0),$$ where $\varphi:{\mathbb R}^2 \to {\mathbb R}$ is a Lipschitz function such that $\varphi(0)=0$, with Lipschitz constant less than or equal to $M$; i.e. $${\lvert\varphi(x')-\varphi(y')\rvert} \leq M {\lvertx'-y'\rvert},\quad \forall x',y'\in {\mathbb R}^2.$$

Weak solutions {#sec2.2ws}
--------------

We say that ${\boldsymbol{u}}$ is a weak solution in $Y^{1,2}(\Omega)$ of the system if $$\label{eq2.1ws} \int_\Omega a (\nabla \times {\boldsymbol{u}}) \cdot (\nabla \times {\boldsymbol{\phi}}) + b (\nabla \cdot {\boldsymbol{u}}) (\nabla \cdot {\boldsymbol{\phi}}) = \int_\Omega {\boldsymbol{f}} \cdot {\boldsymbol{\phi}},\quad \forall {\boldsymbol{\phi}} \in C^\infty_0(\Omega).$$ We say that a function ${\boldsymbol{u}}$ is a weak solution in $Y^{1,2}_0(\Omega)$ of the problem $$\label{eq2.3ws} \left\{ \begin{array}{c} \nabla\times (a(x)\nabla\times {\boldsymbol{u}})-\nabla(b(x)\nabla \cdot {\boldsymbol{u}})= {\boldsymbol{f}} \quad\text{in }\;\Omega,\\ {\boldsymbol{u}}=0\quad \text{on }\;\partial\Omega, \end{array} \right.$$ if ${\boldsymbol{u}}$ belongs to $Y^{1,2}_0(\Omega)$ and satisfies the identity . By a weak solution in $Y^{1,2}_0(\Omega)$ of the problem , we mean a function ${\boldsymbol{u}}\in Y^{1,2}_0(\Omega)$ satisfying $$\begin{aligned} \label{eq2.4ws} \int_\Omega a (\nabla \times {\boldsymbol{u}}) \cdot (\nabla \times {\boldsymbol{\phi}}) &= \int_\Omega {\boldsymbol{f}} \cdot {\boldsymbol{\phi}} + {\boldsymbol{g}} \cdot (\nabla \times {\boldsymbol{\phi}}) ,\quad \forall {\boldsymbol{\phi}} \in C^\infty_0(\Omega)\\ \label{eq2.5ws} \int_\Omega {\boldsymbol{u}} \cdot \nabla \psi&=-\int_\Omega h \psi,\quad \forall \psi\in C^\infty_0(\Omega).\end{aligned}$$ By using the standard elliptic theory, one can easily prove the existence and uniqueness of a weak solution of the problem in $Y^{1,2}_0(\Omega)$ provided ${\boldsymbol{f}} \in L^{6/5}(\Omega)$. Similarly, if ${\boldsymbol{f}}\in H_{6/5} (\Omega)$ and ${\boldsymbol{g}}\in L^2(\Omega)$, one can show that there exists a weak solution in $Y^{1,2}_0(\Omega)$ of the problem when $h=0$; in the more general case when $h \in L^{6/5}(\Omega)$ and $\int_\Omega h = 0$, one can show that there exists a unique weak solution in $Y^{1,2}_0(\Omega)$ of the problem provided that $\Omega$ is a bounded Lipschitz domain; see Appendix for the proofs.

Main Results {#sec:main}
============

Our first theorem says that if ${\boldsymbol{f}} \in L^{q}(\Omega)$ with $q>3/2$, then weak solutions of the system are locally Hölder continuous in $\Omega$.

\[thm3.2a\] Let $\Omega$ be a (possibly unbounded) domain in ${\mathbb R}^3$. Assume that $a(x)$ and $b(x)$ are measurable functions on $\Omega$ satisfying the condition , and that ${\boldsymbol{u}}\in Y^{1,2}(\Omega)$ is a weak solution of the system , where ${\boldsymbol{f}} \in L^q(\Omega)$ with $q>3/2$.
Then ${\boldsymbol{u}}$ is Hölder continuous in $\Omega$, and for all $B_R=B_R(x_0) \subset\subset \Omega$, we have the following estimate for ${\boldsymbol{u}}$: $$\label{eq3.3cc} R^\alpha [{\boldsymbol{u}}]_{\alpha; B_{R/2}} + {\lvert{\boldsymbol{u}}\rvert}_{0;B_{R/2}} \leq N \left( R^{-3/2} {\lVert{\boldsymbol{u}}\rVert}_{L^2(B_R)}+ R^{2-3/q} {\lVert{\boldsymbol{f}}\rVert}_{L^q(B_R)}\right),$$ where $\alpha=\alpha(\nu, q) \in (0,1)$ and $N=N(\nu,q)>0$. In order to establish a global Hölder estimate for weak solutions of the problem , we need to impose some conditions on $\Omega$. We shall assume that $\Omega$ is a bounded Lipschitz domain whose first homology group $H_1(\Omega;{\mathbb R})$ is trivial; i.e., $$\label{eq2.5ef} H_1(\Omega; {\mathbb R})=0.$$ For example, if $\Omega$ is simply connected, then it satisfies the above condition. As mentioned in §\[sec2.2ws\], the existence and uniqueness of a weak solution in $W^{1,2}_0(\Omega)$ of the problem is established by a standard argument; see Appendix. \[thm3.1t\] Let $\Omega \subset {\mathbb R}^3$ be a bounded Lipschitz domain satisfying the condition . Let $a(x)$ be a measurable function on $\Omega$ satisfying the condition and ${\boldsymbol{u}}\in W^{1,2}_0(\Omega)$ be the weak solution of the problem , where ${\boldsymbol{f}}\in H_{q/2}(\Omega)$, ${\boldsymbol{g}}, h \in L^q(\Omega)$ for some $q>3$, and $\int_\Omega h=0$. Then, ${\boldsymbol{u}}$ is uniformly Hölder continuous in $\Omega$ and satisfies the following estimate: $$\label{eq3.2bx} {\lVert{\boldsymbol{u}}\rVert}_{C^{\alpha}(\overline \Omega)}\le N \left({\lVert{\boldsymbol{f}}\rVert}_{L^{q/2}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)} +{\lVerth\rVert}_{L^q(\Omega)} \right),$$ where $\alpha=\alpha(\nu, q, \Omega) \in (0,1)$ and $N=N(\nu,q,\Omega)>0$. Related to the above theorems, several remarks are in order. \[rmk2.5rr\] In Theorem \[thm3.2a\], one may assume that $a(x)$ is not a scalar function but a $3\times 3$ symmetric matrix valued function satisfying $$\nu {\lvert{\boldsymbol{\xi}}\rvert}^2 \leq {\boldsymbol{\xi}}^T a(x){\boldsymbol{\xi}} \leq \nu^{-1} {\lvert{\boldsymbol{\xi}}\rvert}^2,\quad \forall {\boldsymbol{\xi}} \in {\mathbb R}^3,\;\;\forall x\in\Omega,\;\; \text{for some}\;\nu\in (0,1].$$ There is no essential change in the proof; see [@KKM]. As a matter of fact, one may drop the symmetry assumption on $a(x)$ if one assume further that $a\in L^\infty(\Omega)$. In Theorem \[thm3.2a\], instead of assuming ${\boldsymbol{f}} \in L^q(\Omega)$, one may assume that ${\boldsymbol{f}}$ belongs to the Morrey space $L^{p,\lambda}$ with $p=6/5$ and $\lambda=6(1+2\delta)/5$ for some $\delta\in(0,1)$; see the proof of Theorem \[thm4.2b\] and Remark \[rmk4.5ff\] in Section \[sec:p\]. 
The “interior” Morrey space $L^{p,\lambda}$ is defined to be the set of all functions $f\in L^p(\Omega)$ with finite norm $${\lVertu\rVert}_{L^{p,\lambda}}=\sup_{B_r(x_0)\subset \Omega} \left(r^{-\lambda}\int_{B_r(x_0)} {\lvertu\rvert}^p\,\right)^{1/p}.$$ Moreover, instead of the system , one may consider the following system: $$\nabla\times (a(x)\nabla\times {\boldsymbol{u}})-\nabla(b(x)\nabla \cdot {\boldsymbol{u}}) = {\boldsymbol{f}} + \nabla \times {\boldsymbol{F}} + \nabla g \quad \text{in }\;\Omega.$$ One can show that weak solutions ${\boldsymbol{u}}$ of the above system are Hölder continuous in $\Omega$ if $${\boldsymbol{f}} \in L^{6/5,6(1+2\delta)/5},\;\; {\boldsymbol{F}}\in L^{2,(1+2\delta)/2},\;\;\text{and }\; g\in L^{2,(1+2\delta)/2}\quad \text{for some }\;\delta \in(0,1).$$ In particular, if ${\boldsymbol{f}} \in L^{q/2}(\Omega)$, ${\boldsymbol{F}} \in L^q(\Omega)$, and $g\in L^q(\Omega)$ for $q>3$, then weak solutions ${\boldsymbol{u}}\in Y^{1,2}(\Omega)$ of the above system are Hölder continuous in $\Omega$. Moreover, in that case, we have the estimate $$r^\alpha [{\boldsymbol{u}}]_{\alpha; B_{r/2}} + {\lvert{\boldsymbol{u}}\rvert}_{0; B_{r/2}} \leq N \left( r^{-3/2} {\lVert{\boldsymbol{u}}\rVert}_{2;B_r}+ r^{2-6/q} {\lVert{\boldsymbol{f}}\rVert}_{q/2;B_r}+ r^{1-3/q} {\lVert{\boldsymbol{F}}\rVert}_{q; B_r}+r^{1-3/q} {\lVertg\rVert}_{q;B_r}\right),$$ whenever $B_r=B_r(x_0)\subset\subset \Omega$, where $\alpha=\alpha(\nu, q)\in (0,1)$ and $N=N(\nu, q)$. \[rmk2.10\] In Theorem \[thm3.1t\], one may wish to consider the following problem with non-zero Dirichlet boundary data, instead of the problem : $$\label{eq2.13cc} \left\{ \begin{array}{c} \nabla\times (a(x)\nabla\times {\boldsymbol{u}})={\boldsymbol{f}} +\nabla\times {\boldsymbol{g}} \quad\text{in }\;\Omega,\\ \nabla \cdot {\boldsymbol{u}}=h\quad\text{in }\;\Omega,\\ {\boldsymbol{u}}= {\boldsymbol{\psi}} \quad \text{on }\;\partial\Omega, \end{array} \right.$$ where one needs to assume the compatibility condition $\int_\Omega h = \int_{\partial\Omega} {\boldsymbol{\psi}}\cdot n$ instead of the condition $\int_\Omega h =0$ in Theorem \[thm3.1t\]. If ${\boldsymbol{\psi}}$ is the trace of a Sobolev function ${\boldsymbol{w}} \in W^{1,q}(\Omega)$ with $q>3$, then ${\boldsymbol{v}}:={\boldsymbol{u}}-{\boldsymbol{w}}$ is a solution of the problem with ${\boldsymbol{g}}$ and $h$ replaced respectively by $\tilde{{\boldsymbol{g}}}$ and $\tilde h$, where $$\tilde{{\boldsymbol{g}}}:={\boldsymbol{g}} - a \nabla\times {\boldsymbol{w}},\quad \tilde h:=h-\nabla \cdot {\boldsymbol{w}} \in L^q(\Omega).$$ Notice that $\int_\Omega \tilde h =0$. Therefore, by the estimate and Morrey’s inequality, we have the following estimate the weak solution ${\boldsymbol{u}}$ of the problem : $${\lVert{\boldsymbol{u}}\rVert}_{C^{\alpha}(\overline \Omega)} \leq N \left({\lVert{\boldsymbol{f}}\rVert}_{L^{q/2}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)} +{\lVerth\rVert}_{L^q(\Omega)} + {\lVert{\boldsymbol{w}}\rVert}_{W^{1,q}(\Omega)} \right),$$ where $\alpha=\alpha(\nu, q, \Omega) \in (0,1)$ and $N=N(\nu,q,\Omega)>0$. Recall that $\Omega\subset {\mathbb R}^3$ is assumed to be a bounded Lipschitz domain. 
It is known that if ${\boldsymbol{\psi}}$ belongs to the Besov space $B^q_{1-1/q}(\partial\Omega)$, then it can be extended to a function ${\boldsymbol{w}}$ in the Sobolev space $W^{1,q}(\Omega)$ in such a way that the following estimate holds: $${\lVert{\boldsymbol{w}}\rVert}_{W^{1,q}(\Omega)} \leq N {\lVert{\boldsymbol{\psi}}\rVert}_{B^q_{1-1/q}(\partial\Omega)},$$ where $N=N(\Omega,q)$; see e.g., Jerison and Kenig [@JK95 Theorem 3.1]. Therefore, the following estimate is available for the weak solution ${\boldsymbol{u}}$ of the problem : $${\lVert{\boldsymbol{u}}\rVert}_{C^{\alpha}(\overline \Omega)}\le N \left({\lVert{\boldsymbol{f}}\rVert}_{L^{q/2}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)} +{\lVerth\rVert}_{L^q(\Omega)} + {\lVert{\boldsymbol{\psi}}\rVert}_{B^q_{1-1/q}(\partial\Omega)} \right),$$ where $\alpha=\alpha(\nu, q, \Omega) \in (0,1)$ and $N=N(\nu,q,\Omega)>0$. The above estimate provides, in particular, the global bounds for the weak solution ${\boldsymbol{u}}$ of the problem in $\Omega$. It seems to us that Theorem \[thm3.1t\] is the first result establishing the global boundedness of weak solutions of the Dirichlet problem in Lipschitz domains. Proofs of Main theorems {#sec:pf} ======================= Proof of Theorem \[thm3.2a\] ---------------------------- We shall make the qualitative assumption that the weak solution ${\boldsymbol{u}}$ is smooth in $\Omega$. This can be achieved by assuming coefficient $a(x)$ and the inhomogeneous term ${\boldsymbol{f}}$ are smooth in $\Omega$ and adopting the standard approximation argument. It should be clear from the proof that the constant $\alpha$ and $N$ will not depend on these extra smoothness assumption. By a standard computation (see e.g., [@KK02 Lemma 4.4]), we can derive the following Caccioppoli’s inequality for ${\boldsymbol{u}}$: \[lem4.2tt\] With ${\boldsymbol{u}}$, ${\boldsymbol{f}}$, and $R$ as in the theorem, we have $$\int_{B_{2r}} {\lvert\nabla \times {\boldsymbol{u}}\rvert}^2+ {\lvert\nabla \cdot {\boldsymbol{u}}\rvert}^2 \leq N \left(r^{-2} \int_{B_{3r}} {\lvert{\boldsymbol{u}}\rvert}^2+ {\lVert{\boldsymbol{f}} \rVert}_{L^{6/5}(B_{3r})}^2\right);\quad r=R/3.$$ We take the divergence in the system to get $$-\Delta \psi=0\quad\text{in }\;\Omega;\quad \psi:=b \nabla \cdot {\boldsymbol{u}}.$$ Denote $B(x)= 1/b(x)$ and observe that ${\boldsymbol{u}}$ satisfies $$\label{eq4.27bb} \nabla\cdot {\boldsymbol{u}} = B \psi \quad \text{in }\;\Omega.$$ Next, we split ${\boldsymbol{u}}={\boldsymbol{v}}+{\boldsymbol{w}}$ in $B_r=B_r(x_0)$, where $r=R/3$ and ${\boldsymbol{v}}$ is a solution of the problem $$\left\{ \begin{array}{c} \nabla \cdot {\boldsymbol{v}} = B \psi- (B \psi)_{x_0,r} \quad \text{in }\;B_r,\\ {\boldsymbol{v}}= 0 \quad \text{on }\;\partial B_r, \end{array} \right.$$ where we used the notation $$(B \psi)_{x_0,r}:=\fint_{B_r(x_0)} B \psi.$$ We assume that the function ${\boldsymbol{v}}$ is chosen so that following estimate, which is originally due to Bogovskiǐ [@Bog], holds for ${\boldsymbol{v}}$ (see Galdi [@Galdi §III.3]): $$\label{eq4.30sh} {\lVert\nabla {\boldsymbol{v}}\rVert}_{L^p(B_r)} \leq N {\lVertB \psi-(B \psi)_{x_0,r}\rVert}_{L^p(B_r)} \leq N {\lVertB \psi\rVert}_{L^p(B_r)},\;\;\forall p \in (1,\infty);\quad N=N(p).$$ Since $\psi$ is a harmonic function, the mean value property of $\psi$ yields $$\label{eq4.31se} {\lVert\psi\rVert}_{L^p(B_r)}\leq N r^{3/p-3/2} {\lVert\psi\rVert}_{L^2(B_{2r})},\quad \forall p\in (0,\infty];\quad N=N(p).$$ Combining the estimates and , and then using followed 
by Lemma \[lem4.2tt\] and Hölder’s inequality, we get $$\label{eq4.29sk} {\lVert\nabla {\boldsymbol{v}}\rVert}_{L^q(B_r )} \leq N r^{3/q-5/2} {\lVert{\boldsymbol{u}}\rVert}_{L^2(B_{3r})}+N r {\lVert{\boldsymbol{f}}\rVert}_{L^{q}(B_{3r})} ;\quad N=N(\nu,q).$$ By Sobolev inequality, , , Lemma \[lem4.2tt\], and Hölder’s inequality, we also estimate $$\label{eq4.33rr} {\lVert{\boldsymbol{v}}\rVert}_{L^2(B_r)} \leq N r {\lVert\nabla {\boldsymbol{v}}\rVert}_{L^2(B_r)} \leq N \left( {\lVert{\boldsymbol{u}}\rVert}_{L^2(B_{3r})}+ r^{7/2-3/q} {\lVert{\boldsymbol{f}}\rVert}_{L^{q}(B_{3r})} \right).$$ On the other hand, note that ${\boldsymbol{w}}={\boldsymbol{u}}-{\boldsymbol{v}}$ is a weak solution of the problem $$\left\{ \begin{array}{c} \nabla\times (a(x)\nabla\times {\boldsymbol{w}})=\nabla \psi - \nabla\times (a(x)\nabla\times {\boldsymbol{v}}) \quad\text{in }\; B_r,\\ \nabla \cdot {\boldsymbol{w}}= (B \psi)_{x_0,r} \quad\text{in }\;B_r. \end{array} \right.$$ We remark that in the proof of [@KK02 Theorem 2.1], we used the condition $\nabla\cdot {\boldsymbol{u}}=0$ only to establish the following equality (recall the identity above), $$\nabla\times (\nabla \times {\boldsymbol{u}})=-\Delta {\boldsymbol{u}},$$ which can be also obtained by merely assuming that $\nabla \cdot {\boldsymbol{u}}$ is constant. Therefore, by [@KK02 Theorem 2.1 and Remark 2.10], we have (via a standard scaling argument) $$\label{eq4.35mm} r^\alpha[{\boldsymbol{w}}]_{\alpha; B_{r/2}} \leq N \left(r^{-3/2} {\lVert{\boldsymbol{w}}\rVert}_{L^2(B_r)} + r^{2-6/q}{\lVert\nabla \psi\rVert}_{L^{q/2}(B_r)}+ r^{1-3/q}{\lVert\nabla {\boldsymbol{v}}\rVert}_{L^q(B_r)} \right),$$ where $\alpha=\alpha(\nu, q)\in (0,1-3/q]$ and $N=N(\nu, q)$. We estimate the RHS of as follows. By the estimate , we have $${\lVert{\boldsymbol{w}}\rVert}_{L^2(B_r)} \leq {\lVert{\boldsymbol{u}}\rVert}_{L^2(B_r)}+{\lVert{\boldsymbol{v}}\rVert}_{L^2(B_r)} \leq N \left( {\lVert{\boldsymbol{u}}\rVert}_{L^2(B_{3r})} + r^{7/2-3/q} {\lVert{\boldsymbol{f}}\rVert}_{L^{q}(B_{3r})}\right).$$ By a gradient estimate for harmonic functions followed by , Lemma \[lem4.2tt\], and Hölder’s inequality, we get $$\label{eq4.37ch} {\lVert\nabla \psi\rVert}_{L^{q/2}(B_r)} \leq N r^{6/q-5/2}{\lVert\psi\rVert}_{L^2(B_{2r})} \leq N \left (r^{6/q-7/2} {\lVert{\boldsymbol{u}}\rVert}_{L^2(B_{3r})} + r^{3/q}{\lVert{\boldsymbol{f}}\rVert}_{L^{q}(B_{3r})}\right).$$ By combining – , and , we obtain $$r^\alpha [{\boldsymbol{w}}]_{\alpha; B_{r/2}} \leq N \left( r^{-3/2} {\lVert{\boldsymbol{u}}\rVert}_{L^2(B_{3r})} + r^{2-3/q} {\lVert{\boldsymbol{f}}\rVert}_{L^{q}(B_{3r})}\right).$$ By Morrey’s inequality followed by , we also get $$[{\boldsymbol{v}}]_{\mu; B_r} \leq N \left( r^{-3/2-\mu} {\lVert{\boldsymbol{u}}\rVert}_{L^2(B_{3r})} + r {\lVert{\boldsymbol{f}}\rVert}_{L^{q}(B_{3r})}\right);\quad \mu=1-3/q.$$ By combining the above two estimates and noting that $\alpha \leq \mu= 1-3/q$, we conclude $$\label{eq4.40zz} r^{\alpha} [{\boldsymbol{u}}]_{\alpha; B_{r/2}} \leq N \left( r^{-3/2} {\lVert{\boldsymbol{u}}\rVert}_{L^2(B_{3r})} + r^{2-3/q} {\lVert{\boldsymbol{f}}\rVert}_{L^{q}(B_{3r})}\right).$$ From the above estimate , we can estimate ${\lvert{\boldsymbol{u}}\rvert}_{0;B_{r/4}}$ as follows. 
For all $y\in B_{r/4}$, the triangle inequality yields $${\lvert{\boldsymbol{u}}(y)\rvert}\leq {\lvert{\boldsymbol{u}}(x)\rvert}+ [{\boldsymbol{u}}]_{\alpha; B_{r/2}} (r/2)^\alpha, \quad \forall x\in B_{r/4}.$$ Taking the average over $B_{r/4}$ in $x$, and then using Hölder's inequality and , we get $${\lvert{\boldsymbol{u}}(y)\rvert} \leq \left(\fint_{B_{r/4}} {\lvert{\boldsymbol{u}}\rvert}^2\right)^{1/2}+N \left( r^{-3/2} {\lVert{\boldsymbol{u}}\rVert}_{L^2(B_{3r})} + r^{2-3/q} {\lVert{\boldsymbol{f}}\rVert}_{L^{q}(B_{3r})}\right).$$ Since the above estimate is uniform in $y\in B_{r/4}$, we thus have $$\label{eq4.41yx} {\lvert{\boldsymbol{u}}\rvert}_{0; B_{r/4}} \leq N \left( r^{-3/2} {\lVert{\boldsymbol{u}}\rVert}_{L^2(B_{3r})} + r^{2-3/q} {\lVert{\boldsymbol{f}}\rVert}_{L^{q}(B_{3r})}\right).$$ Recall that $r=R/3$. Therefore, the desired estimate follows from and and the standard covering argument. The theorem is proved. [$\blacksquare$]{}

Proof of Theorem \[thm3.1t\]
----------------------------

We shall again make the qualitative assumption that the coefficient $a(x)$, the inhomogeneous terms ${\boldsymbol{f}}$, ${\boldsymbol{g}}$, $h$, and the domain $\Omega$ are smooth. By standard elliptic regularity theory, we may then assume that ${\boldsymbol{u}}$ is also smooth in $\overline\Omega$. In this proof, we denote by $N$ a constant that depends only on $\nu$, $q$, and $\Omega$, unless explicitly otherwise stated. It should be emphasized that the constants $N$ employed in the various estimates below do not inherit any information from the extra smoothness assumptions imposed above; their dependence on $\Omega$ is only through the Lipschitz character $M$, $r_0$ of $\partial\Omega$ and ${\operatorname{diam}}\Omega$. Let us recall the following lemma, the proof of which can be found in [@KK05].

\[lem:1\] Let ${\boldsymbol{f}} \in {\mathcal{D}}(\Omega)$, where $\Omega$ is a domain in ${\mathbb R}^3$. Then, there exists ${\boldsymbol{F}}\in C^\infty(\Omega)$ such that $\nabla\times {\boldsymbol{F}}={\boldsymbol{f}}$ in $\Omega$. Moreover, for any $p\in (1,\infty)$, we have $${\lVert\nabla {\boldsymbol{F}}\rVert}_{L^p(\Omega)}\le N {\lVert{\boldsymbol{f}}\rVert}_{L^p(\Omega)};\quad N=N(p).$$

By using the above lemma, we may write ${\boldsymbol{f}}=\nabla \times {\boldsymbol{F}}$, where ${\boldsymbol{F}}\in C^\infty(\Omega)$ satisfies the following estimate: $$\label{eq4.1aa} {\lVert\nabla {\boldsymbol{F}}\rVert}_{L^{q/2}(\Omega)}\le N {\lVert{\boldsymbol{f}}\rVert}_{L^{q/2}(\Omega)};\quad N=N(q).$$ Notice that ${\boldsymbol{u}}$ then satisfies $$\label{eq4.3cc} \nabla\times(a(x) \nabla\times {\boldsymbol{u}} -{\boldsymbol{F}} -{\boldsymbol{g}}) =0 \quad \text{in }\;\Omega.$$ Let $\varphi$ be a solution of the Neumann problem $$\label{eq4.4dd} \left\{ \begin{array}{c} \Delta \varphi = \nabla\cdot (a(x) \nabla\times {\boldsymbol{u}} -{\boldsymbol{F}} -{\boldsymbol{g}}) \quad \text{in }\;\Omega,\\ \partial \varphi/\partial n=-({\boldsymbol{F}}+{\boldsymbol{g}})\cdot {\boldsymbol{n}} \quad \text{on }\;\partial\Omega, \end{array} \right.$$ where ${\boldsymbol{n}}$ denotes the outward unit normal vector of $\partial\Omega$. Recall that $\varphi$ is unique up to an additive constant. We shall hereafter fix $\varphi$ by assuming $\fint_\Omega \varphi=0$.
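We remark that the Neumann problem is indeed solvable: by the divergence theorem, its data satisfy the compatibility condition $$\int_\Omega \nabla\cdot (a(x) \nabla\times {\boldsymbol{u}} -{\boldsymbol{F}} -{\boldsymbol{g}}) = \int_{\partial\Omega} (a \nabla\times {\boldsymbol{u}} -{\boldsymbol{F}} -{\boldsymbol{g}})\cdot {\boldsymbol{n}} = -\int_{\partial\Omega} ({\boldsymbol{F}}+{\boldsymbol{g}})\cdot {\boldsymbol{n}},$$ where the last equality uses the identity $(\nabla\times {\boldsymbol{u}})\cdot {\boldsymbol{n}}=0$ on $\partial\Omega$, which is established in the proof of Lemma \[lem:1-1\] below.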
\[lem:1-1\] With ${\boldsymbol{u}}$ and $\varphi$ given as above, we have $$\label{eq4.5ee} \nabla \varphi = a(x) \nabla\times {\boldsymbol{u}} -{\boldsymbol{F}} -{\boldsymbol{g}} \quad \text{in }\;\Omega.$$ First we claim that the boundary condition ${\boldsymbol{u}}=0$ on $\partial\Omega$ implies that $$\label{eq4.6cj} (\nabla \times {\boldsymbol{u}}) \cdot {\boldsymbol{n}} =0\quad \text{on }\;\partial\Omega.$$ To see this, take any surface ${\mathcal{S}}\subset \partial\Omega$ with a smooth boundary $\partial {\mathcal{S}}\subset \partial\Omega$. By Stokes’ theorem, we then have $$\iint_{\mathcal{S}}(\nabla \times {\boldsymbol{u}})\cdot {\boldsymbol{n}}\, dS = \int_{\partial {\mathcal{S}}} {\boldsymbol{u}} \cdot d{\boldsymbol{r}}=0.$$ Since ${\mathcal{S}}$ is arbitrary and $(\nabla \times {\boldsymbol{u}})\cdot {\boldsymbol{n}}$ is continuous, we have $(\nabla \times {\boldsymbol{u}}) \cdot {\boldsymbol{n}} =0$ on $\partial\Omega$ as claimed. Next, we set $${\boldsymbol{G}}=\nabla \varphi - a(x) \nabla\times {\boldsymbol{u}} +{\boldsymbol{F}} +{\boldsymbol{g}}.$$ The lemma will follow if we prove that ${\boldsymbol{G}} \equiv 0$ in $\Omega$. By we have $\nabla\times {\boldsymbol{G}} =0$ in $\Omega$, and thus by the condition , there exists a potential $\psi$ such that ${\boldsymbol{G}}=\nabla \psi$ in $\Omega$. Then by and , we find that $\psi$ satisfies $\Delta \psi=0$ in $\Omega$ and $\partial\psi/\partial n =0$ on $\partial\Omega$. Therefore, we must have ${\boldsymbol{G}}= \nabla \psi =0$ in $\Omega$. The lemma is proved. Hereafter, we shall denote $A(x)= 1/a(x)$. It follows from that $$\nu \leq A(x) \leq \nu^{-1},\quad\forall x\in\Omega.$$ Observe that from we have $$0=\nabla\cdot (\nabla \times {\boldsymbol{u}}) = \nabla\cdot \bigl[A(x) \bigr(\nabla \varphi +{\boldsymbol{F}} +{\boldsymbol{g}}\bigr)\bigr],$$ and thus by Lemma \[lem:1-1\] we find that $\varphi$ satisfies the following conormal problem: $$\label{eq4.7aa} \left\{ \begin{array}{c} {\operatorname{div}}(A(x) \nabla \varphi) = -{\operatorname{div}}\left(A {\boldsymbol{F}} + A {\boldsymbol{g}} \right) \quad \text{in }\;\Omega,\\ (A(x)\nabla \varphi) \cdot {\boldsymbol{n}}=-(A{\boldsymbol{F}}+A{\boldsymbol{g}})\cdot {\boldsymbol{n}} \quad \text{on }\;\partial\Omega. \end{array} \right.$$ In the variational formulation, means that we have the identity $$\label{eq4.10bv} \int_\Omega A\nabla \varphi \cdot \nabla \zeta = -\int_\Omega (A {\boldsymbol{F}} + A {\boldsymbol{g}})\cdot \nabla \zeta,\quad \forall \zeta\in W^{1,2}(\Omega).$$ In particular, by using $\varphi$ itself as a test function, we get $${\lVert\nabla \varphi\rVert}_{L^2(\Omega)} \leq N \left( {\lVert{\boldsymbol{F}}\rVert}_{L^2(\Omega)}+ {\lVert{\boldsymbol{g}}\rVert}_{L^2(\Omega)}\right);\quad N=N(\nu).$$ By Poincaré’s inequality (recall $\fint_\Omega \varphi=0$) and Hölder’s inequality, we then have $${\lVert\varphi\rVert}_{W^{1,2}(\Omega)} \leq N \left( {\lVert{\boldsymbol{F}}\rVert}_{L^q(\Omega)}+ {\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}\right).$$ Moreover, one can obtain the following estimate by utilizing and adjusting, for example, the proof of [@GT Theorem 8.29] (see [@LU] and also [@Lieberman §VI.10]): $$\label{eq4.11ew} [\varphi]_{\mu; \Omega}\leq N\left( {\lVert{\boldsymbol{F}}\rVert}_{L^{q}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}\right);\quad \mu=\mu(\nu,q,\Omega)\in (0,1).$$ Then, by Campanato’s integral characterization of Hölder continuous functions (see e.g., [@Gi83 Theorem 1.2, p. 
70]), we derive from that $$\label{eq4.16kt} \int_{\Omega_r(x_0)} {\left\lvert\varphi-\varphi_{x_0,r}\right\rvert}^2 \leq N r^{3+2\mu} \left({\lVert{\boldsymbol{F}}\rVert}_{L^{q}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}\right)^2; \quad \varphi_{x_0,r}:= \fint_{\Omega_r(x_0)} \varphi.$$ From the identity , we also obtain the following Caccioppoli's inequality: $$\label{eq4.17hh} \int_{\Omega_{r/2}(x_0)} {\left\lvert\nabla \varphi\right\rvert}^2 \leq N r^{-2}\int_{\Omega_r(x_0)} {\left\lvert\varphi-\varphi_{x_0,r}\right\rvert}^2+ N r^{3-6/q} \left({\lVert{\boldsymbol{F}}\rVert}_{L^q(\Omega)}^2 + {\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}^2\right).$$ Setting $\gamma=\min(\mu, 1-3/q)$, and combining and , we get the following Morrey-Campanato type estimate for $\nabla \varphi$: $$\label{eq4.9rr} \int_{\Omega_r(x_0)} {\lvert\nabla \varphi\rvert}^2 \leq N r^{1+2\gamma} \left( {\lVert{\boldsymbol{F}}\rVert}_{L^{q}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}\right)^2, \quad \forall x_0\in\Omega,\;\; \forall r \in (0, {\operatorname{diam}}\Omega).$$ Having the estimate together with the boundary condition ${\boldsymbol{u}}=0$ on $\partial\Omega$, which is assumed to be locally Lipschitz, we now derive a global Hölder estimate for ${\boldsymbol{u}}$ as follows. Since $\nabla\cdot {\boldsymbol{u}}=h$, by and we see that ${\boldsymbol{u}}$ satisfies $$-\Delta {\boldsymbol{u}}=\nabla\times(A \nabla\varphi)+ \nabla\times(A {\boldsymbol{F}} + A {\boldsymbol{g}})-\nabla h\quad\text{in }\;\Omega.$$ By Hölder's inequality, we find that (recall $\gamma \leq 1-3/q$) $$\int_{\Omega_r(x_0)} {\lvert{\boldsymbol{F}}+ {\boldsymbol{g}}\rvert}^2 \leq N r^{1+2\gamma} \left( {\lVert{\boldsymbol{F}}\rVert}_{L^{q}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}\right)^2,\quad \forall x_0\in\Omega,\;\; \forall r \in (0, {\operatorname{diam}}\Omega),$$ where we used the assumption that ${\operatorname{diam}}\Omega<\infty$. Similarly, Hölder's inequality yields $$\int_{\Omega_r(x_0)} {\lvert h\rvert}^2 \leq N r^{1+2\gamma}{\lVert h\rVert}_{L^{q}(\Omega)}^2,\quad \forall x_0\in\Omega,\;\; \forall r \in (0, {\operatorname{diam}}\Omega).$$ Setting ${\boldsymbol{G}}:=A(\nabla \varphi+ {\boldsymbol{F}} +{\boldsymbol{g}})$, we find that ${\boldsymbol{u}}$ satisfies $$\label{eq4.17de} \left\{ \begin{array}{c} -\Delta {\boldsymbol{u}} = \nabla\times {\boldsymbol{G}} -\nabla h \quad \text{in }\;\Omega,\\ {\boldsymbol{u}}=0 \quad \text{on }\;\partial\Omega, \end{array} \right.$$ where ${\boldsymbol{G}}$ and $h$ satisfy the following estimate for all $x_0\in\Omega$ and $0< r <{\operatorname{diam}}\Omega$: $$\label{eq4.18kk} \int_{\Omega_r(x_0)} {\lvert{\boldsymbol{G}}\rvert}^2 +{\lvert h\rvert}^2 \leq N r^{1+2\gamma} \left( {\lVert{\boldsymbol{F}}\rVert}_{L^{q}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}+{\lVert h\rVert}_{L^q(\Omega)}\right)^2.$$ Observe that the identity implies that $\nabla \times {\boldsymbol{u}}$ enjoys the Morrey-Campanato type estimate . The following lemma asserts that, in fact, the “full gradient” $\nabla {\boldsymbol{u}}$ satisfies a similar estimate.
With ${\boldsymbol{u}}$ given as above, there exists $\alpha=\alpha(\nu,q,\Omega)\in(0,1)$ such that for all $x_0\in\Omega$ and $0<r < {\operatorname{diam}}\Omega$, we have $$\label{eq4.20ys} \int_{\Omega_r(x_0)} {\lvert\nabla {\boldsymbol{u}}\rvert}^2 \leq N r^{1+2\alpha} \left( {\lVert{\boldsymbol{F}}\rVert}_{L^{q}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}+{\lVert h\rVert}_{L^q(\Omega)}\right)^2.$$ We decompose ${\boldsymbol{u}}={\boldsymbol{v}}+{\boldsymbol{w}}$ in $\Omega_r(x_0)$, where ${\boldsymbol{v}}$ is the solution of $$\left\{ \begin{array}{c} -\Delta {\boldsymbol{v}} = 0 \quad \text{in }\;\Omega_r(x_0),\\ {\boldsymbol{v}}={\boldsymbol{u}} \quad \text{on }\;\partial\Omega_r(x_0). \end{array} \right.$$ Notice that each $v^i$ ($i=1,2,3$) is a harmonic function vanishing on $(\partial\Omega)_r(x_0)\subset \partial\Omega$. By well-known boundary Hölder regularity theory for harmonic functions in Lipschitz domains, there exist $\beta=\beta(\Omega)\in (0,1)$ and $N=N(\Omega)$ such that $$\label{eq4.21np} \int_{\Omega_\rho(x_0)} {\lvert\nabla {\boldsymbol{v}}\rvert}^2 \leq N\left(\frac{\rho}{r}\right)^{1+2\beta} \int_{\Omega_r(x_0)} {\lvert\nabla {\boldsymbol{v}}\rvert}^2,\quad \forall \rho \in (0,r].$$ On the other hand, observe that ${\boldsymbol{w}}={\boldsymbol{u}}-{\boldsymbol{v}}$ is a weak solution of the problem $$\left\{ \begin{array}{c} -\Delta {\boldsymbol{w}} = \nabla\times{\boldsymbol{G}} - \nabla h \quad \text{in }\;\Omega_r(x_0),\\ {\boldsymbol{w}}= 0 \quad \text{on }\;\partial\Omega_r(x_0). \end{array} \right.$$ By using ${\boldsymbol{w}}$ itself as a test function in the above equations and utilizing , we derive $$\label{eq4.22xm} \int_{\Omega_r(x_0)} {\lvert\nabla {\boldsymbol{w}}\rvert}^2 \leq N \int_{\Omega_r(x_0)} {\lvert{\boldsymbol{G}}\rvert}^2+{\lvert h\rvert}^2 \leq N r^{1+2\gamma} \left( {\lVert{\boldsymbol{F}}\rVert}_{L^{q}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}+{\lVert h\rVert}_{L^q(\Omega)}\right)^2.$$ By combining and , we get for any $\rho\leq r$, $$\int_{\Omega_\rho(x_0)} {\lvert\nabla {\boldsymbol{u}}\rvert}^2 \leq N\left(\frac{\rho}{r}\right)^{1+2\beta} \int_{\Omega_r(x_0)} {\lvert\nabla {\boldsymbol{u}}\rvert}^2 + N r^{1+2\gamma} \left( {\lVert{\boldsymbol{F}}\rVert}_{L^{q}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}+{\lVert h\rVert}_{L^q(\Omega)}\right)^2.$$ Taking any $\alpha \in (0, \min(\beta,\gamma))$ and applying a well-known iteration argument (see e.g., [@Gi83 Lemma 2.1, p. 86]), we have, for all $x_0\in\Omega$ and $0<r <R \leq {\operatorname{diam}}\Omega$, $$\int_{\Omega_r(x_0)} {\lvert\nabla {\boldsymbol{u}}\rvert}^2 \leq N\left(\frac{r}{R}\right)^{1+2\alpha} \int_{\Omega_R(x_0)} {\lvert\nabla {\boldsymbol{u}}\rvert}^2 + N r^{1+2\alpha} \left( {\lVert{\boldsymbol{F}}\rVert}_{L^{q}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}+{\lVert h\rVert}_{L^q(\Omega)}\right)^2.$$ The lemma follows from the above estimate (take $R={\operatorname{diam}}\Omega$) and the estimate $$\label{eq4.23nx} \int_{\Omega} {\lvert\nabla {\boldsymbol{u}}\rvert}^2 \leq N \int_{\Omega} {\lvert{\boldsymbol{G}}\rvert}^2 + {\lvert\nabla h\rvert}^2 \leq N\left( {\lVert{\boldsymbol{F}}\rVert}_{L^{q}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}+{\lVert h\rVert}_{L^q(\Omega)}\right)^2,$$ which is obtained by using ${\boldsymbol{u}}$ itself as a test function in and then applying with $r={\operatorname{diam}}\Omega$. The lemma is proved. We now estimate $[{\boldsymbol{u}}]_{\alpha;\Omega}$ as follows.
Denote by $\tilde{{\boldsymbol{u}}}$ the extension of ${\boldsymbol{u}}$ by zero on ${\mathbb R}^3\setminus \Omega$. Notice that $\tilde{{\boldsymbol{u}}} \in W^{1,2}({\mathbb R}^3)$ and $\nabla \tilde{{\boldsymbol{u}}}=\chi_{\Omega} \nabla {\boldsymbol{u}}$. Then by Poincaré's inequality and , we find that for all $x\in\Omega$ and $0<r<{\operatorname{diam}}\Omega$, we have $$\int_{B_r(x)} {\left\lvert\tilde{{\boldsymbol{u}}}-\tilde{{\boldsymbol{u}}}_{x,r}\right\rvert}^2\leq N r^{3+2\alpha}\left( {\lVert{\boldsymbol{F}}\rVert}_{L^{q}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}+{\lVert h\rVert}_{L^q(\Omega)}\right)^2.$$ By a standard argument in boundary regularity theory, it is readily seen that the above estimate is valid for all $x\in B_R(x_0)$ and $r<2R$, where $x_0\in\Omega$ and $R={\operatorname{diam}}\Omega$. Therefore, by Campanato's integral characterization of Hölder continuous functions, we find that $\tilde{{\boldsymbol{u}}}$ is uniformly Hölder continuous in $B_R(x_0)\supset \overline \Omega$ with the estimate $$\label{eq4.22vm} [\tilde{{\boldsymbol{u}}}]_{\alpha; B_R(x_0)} \leq N \left({\lVert{\boldsymbol{F}}\rVert}_{L^{q}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}+{\lVert h\rVert}_{L^q(\Omega)}\right).$$ The above estimate clearly implies that $$\label{eq4.23qt} [{\boldsymbol{u}}]_{\alpha; \Omega}\le N\left( {\lVert{\boldsymbol{F}}\rVert}_{L^{q}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}+{\lVert h\rVert}_{L^q(\Omega)}\right).$$ Finally, we estimate ${\lvert{\boldsymbol{u}}\rvert}_{0; \Omega}$ similarly to . For $x_0\in \Omega$, the triangle inequality yields $${\lvert{\boldsymbol{u}}(x_0)\rvert}\leq {\lvert{\boldsymbol{u}}(x)\rvert}+ [\tilde{{\boldsymbol{u}}}]_{\alpha; \overline B_R(x_0)} R^\alpha, \quad \forall x\in\Omega;\quad R={\operatorname{diam}}\Omega.$$ Taking the average over $\Omega$ in $x$, and then using Hölder's inequality and , we have $${\lvert{\boldsymbol{u}}(x_0)\rvert} \leq \left(\fint_\Omega {\lvert{\boldsymbol{u}}\rvert}^2\right)^{1/2}+N({\operatorname{diam}}\Omega)^\alpha \left({\lVert{\boldsymbol{F}}\rVert}_{L^{q}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}+{\lVert h\rVert}_{L^q(\Omega)}\right).$$ On the other hand, by and Poincaré's inequality, we have $$\int_\Omega {\lvert{\boldsymbol{u}}\rvert}^2 \leq N \int_\Omega {\lvert\nabla {\boldsymbol{u}}\rvert}^2 \leq N\left( {\lVert{\boldsymbol{F}}\rVert}_{L^{q}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}+{\lVert h\rVert}_{L^q(\Omega)}\right)^2.$$ Therefore, by combining the above two inequalities, we obtain $$\label{eq4.26dj} {\lvert{\boldsymbol{u}}\rvert}_{0;\Omega}\leq N\left( {\lVert{\boldsymbol{F}}\rVert}_{L^{q}(\Omega)}+{\lVert{\boldsymbol{g}}\rVert}_{L^q(\Omega)}+{\lVert h\rVert}_{L^q(\Omega)}\right).$$ The desired estimate now follows from , , , and the Sobolev inequality. The proof is complete. [$\blacksquare$]{}

Applications {#sec:app}
============

Quasilinear system
------------------

As a first application, we consider the quasilinear system $$\label{eq5.1an} \nabla\times (\mathcal A(x,{\boldsymbol{u}})\nabla\times {\boldsymbol{u}})-\nabla(\mathcal B(x,{\boldsymbol{u}})\nabla \cdot {\boldsymbol{u}})= {\boldsymbol{f}} \quad\text{in }\;\Omega.$$ Here we assume that $\mathcal A, \mathcal B:\Omega\times{\mathbb R}^3\to{\mathbb R}$ satisfy the following conditions: i) $\nu\leq \mathcal A, \mathcal B \leq \nu^{-1}$ for some constant $\nu \in (0,1]$. ii) $\mathcal A$ and $\mathcal B$ are Hölder continuous in $\Omega\times{\mathbb R}^3$; i.e.
$\mathcal A, \mathcal B \in C^\mu(\Omega\times{\mathbb R}^3)$ for some $\mu\in (0,1)$.

\[thm:app1\] Let $\mathcal A$ and $\mathcal B$ satisfy the above conditions and let ${\boldsymbol{u}}\in Y^{1,2}(\Omega)$ be a weak solution of the system with ${\boldsymbol{f}} \in L^q(\Omega)$ for $q>3$. Then, we have ${\boldsymbol{u}} \in C^{1,\alpha}(\Omega)$, where $\alpha=\min(\mu, 1-3/q)$.

By Theorem \[thm3.2a\], we know ${\boldsymbol{u}} \in C^{\beta}(\Omega)$ for some $\beta \in (0,1)$. Then the coefficients $a(x):=\mathcal A(x,{\boldsymbol{u}}(x))$ and $b(x):=\mathcal B(x,{\boldsymbol{u}}(x))$ are Hölder continuous with some exponent $\gamma \in (0,1)$. The rest of the proof relies on the well-known method of “freezing the coefficients” in Schauder theory and is omitted; cf. [@KK02 Theorem 2.2]. In Theorem \[thm:app1\], if one assumes instead that $\mathcal A, \mathcal B \in C^{k,\mu}(\Omega\times {\mathbb R}^3)$ and ${\boldsymbol{f}} \in C^{k-1,\mu}(\Omega)$ with $k \in {\mathbb Z}_{+}$ and $\mu\in(0,1)$, then one can show that ${\boldsymbol{u}}\in C^{k+1,\mu}(\Omega)$; in particular, ${\boldsymbol{u}}$ becomes a classical solution of the system .

Maxwell's system in quasi-static electromagnetic fields with temperature effect
-------------------------------------------------------------------------------

As mentioned in the introduction, the problem arises from Maxwell's system in a quasi-static electromagnetic field. In particular, if the electric conductivity depends strongly on the temperature, then taking the temperature effect into consideration reduces the classical Maxwell system in a quasi-static electromagnetic field to the following mathematical model (see Yin [@Yin97]): $$\left\{\begin{array}{c} {\boldsymbol{H}}_t+\nabla \times(\rho(u)\nabla \times {\boldsymbol{H}})=0,\\ \nabla\cdot {\boldsymbol{H}}=0,\\ u_t-\Delta u=\rho(u)\, {\lvert\nabla\times {\boldsymbol{H}}\rvert}^2, \end{array}\right.$$ where ${\boldsymbol{H}}$ and $u$ represent, respectively, the strength of the magnetic field and the temperature, while $\rho(u)$ denotes the electrical resistivity of the material, which is assumed to be bounded below and above by some positive constants; i.e., $$\label{eq6.3ap} \nu \leq \rho \leq \nu^{-1}\;\text{ for some }\;\nu\in (0,1].$$ We are thus led to consider the following Dirichlet problem in the steady-state case: $$\label{eq6.4ii} \left\{\begin{array}{c} \nabla\times(\rho(u)\nabla\times {\boldsymbol{H}})=0\quad\text{in }\;\Omega,\\ \nabla\cdot {\boldsymbol{H}}=0\quad\text{in }\;\Omega,\\ {\boldsymbol{H}} = {\boldsymbol{\Psi}}\quad\text{on }\;\partial\Omega,\\ -\Delta u=\rho(u)\,{\lvert\nabla\times {\boldsymbol{H}}\rvert}^2\quad\text{in }\;\Omega,\\ u = \phi\quad\text{on }\;\partial\Omega, \end{array}\right.$$ where we assume that ${\boldsymbol{\Psi}}$ and $\phi$ are functions in $W^{1,q}(\Omega)$ for $q>3$. Existence of a pair of weak solutions $({\boldsymbol{H}}, u)$ was proved in Yin [@Yin97], and local Hölder continuity of the pair $({\boldsymbol{H}}, u)$ in $\Omega$ was proved by the authors in [@KK02]. Here, we prove that the pair $({\boldsymbol{H}}, u)$ is indeed uniformly Hölder continuous in $\overline \Omega$.

\[thm:2-1\] Let $\Omega$ satisfy the hypothesis of Theorem \[thm3.1t\] and $\rho$ satisfy the condition . Let $({\boldsymbol{H}},u)$ be the weak solution of the problem . Then we have $({\boldsymbol{H}},u)\in C^\alpha(\overline \Omega)$ for some $\alpha \in (0,1)$. In particular, ${\boldsymbol{H}}$ and $u$ are bounded in $\Omega$.
By Theorem \[thm3.1t\] and Remark \[rmk2.10\], we find that ${\boldsymbol{H}} \in C^\alpha(\overline \Omega)$ for some $\alpha \in (0,1)$ and satisfies the estimate $$\label{eq6.9nn} {\lVert{\boldsymbol{H}}\rVert}_{C^{\alpha}(\overline \Omega)}\le N {\lVert{\boldsymbol{\Psi}}\rVert}_{W^{1,q}(\Omega)}.$$ Also, notice from and Remark \[rmk2.10\] that for all $x_0\in\Omega$ and $0<r < {\operatorname{diam}}\Omega$, we have $$\label{eq6.10qv} \int_{\Omega_r(x_0)} {\lvert\nabla {\boldsymbol{H}}\rvert}^2 \leq N r^{1+2\alpha} {\lVert{\boldsymbol{\Psi}}\rVert}_{W^{1,q}(\Omega)}^2.$$ On the other hand, using the vector calculus identity $$\nabla \cdot({\boldsymbol{F}}\times {\boldsymbol{G}})=(\nabla \times {\boldsymbol{F}})\cdot {\boldsymbol{G}}-{\boldsymbol{F}}\cdot(\nabla \times {\boldsymbol{G}}),$$ together with the first equation $\nabla \times(\rho(u)\nabla \times {\boldsymbol{H}})=0$ in , we find that $u$ satisfies $$-\Delta u=\nabla \cdot({\boldsymbol{H}}\times(\rho(u)\nabla \times {\boldsymbol{H}}))\quad\text{in }\;\Omega.$$ By and , we see that ${\boldsymbol{\Phi}}:={\boldsymbol{H}}\times(\rho(u)\nabla \times {\boldsymbol{H}})$ satisfies the following estimate: $$\label{eq6.11ub} \int _{\Omega_r(x_0)}{\lvert{\boldsymbol{\Phi}}\rvert}^2\leq N r^{1+2\alpha} {\lVert{\boldsymbol{\Psi}}\rVert}_{W^{1,q}(\Omega)}^4,\quad \forall x_0\in\Omega,\;\; \forall r \in (0, {\operatorname{diam}}\Omega).$$ Therefore, $u$ is a solution of the Dirichlet problem $$\left\{\begin{array}{c} -\Delta u = \nabla\cdot {\boldsymbol{\Phi}}\quad\text{in }\;\Omega,\\ u = \phi\quad\text{on }\;\partial\Omega, \end{array}\right.$$ where ${\boldsymbol{\Phi}}$ satisfies the Morrey-Campanato type estimate and $\phi\in W^{1,q}(\Omega)$, and thus, by well-known elliptic regularity theory, we have $${\lVert u\rVert}_{C^\alpha(\overline\Omega)} \leq N \left( {\lVert{\boldsymbol{\Psi}}\rVert}_{W^{1,q}(\Omega)}^2 + {\lVert\phi\rVert}_{W^{1,q}(\Omega)}\right).$$ In particular, we see that ${\boldsymbol{H}}$ and $u$ are bounded in $\Omega$. The proof is complete.

\[rmk:2-1\] In Theorem \[thm:2-1\], if one assumes further that $\rho\in C^k({\mathbb R})$, where $k\in {\mathbb Z}_{+}$, then by Theorem \[thm:app1\] and the bootstrapping method, one finds that ${\boldsymbol{H}} \in C^{k,\alpha}(\Omega)\cap C^\alpha(\overline\Omega)$ and $u \in C^{k+1,\alpha}(\Omega)\cap C^{\alpha}(\overline\Omega)$; see [@KK02 Theorem 3.2 and Remark 3.3]. In particular, if $\rho\in C^2({\mathbb R})$, then the pair $({\boldsymbol{H}},u)$ becomes a classical solution of the problem .

Green's function {#sec:green}
================

In this section, we discuss the Green's functions (more appropriately, they should be called Green's matrices) of the operator $L$ in arbitrary domains. Let $\Sigma$ be any subset of $\overline{\Omega}$ and $u$ be a $Y^{1,2}(\Omega)$ function. Then we shall say $u$ vanishes on $\Sigma$ (in the sense of $Y^{1,2}(\Omega)$) if $u$ is a limit in $Y^{1,2}(\Omega)$ of a sequence of functions in $C^\infty_0(\overline \Omega\setminus\Sigma)$.
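We remark that this notion is consistent with the usual one: for instance, if $\Omega$ is a bounded Lipschitz domain and $\Sigma=\partial\Omega$, then a function $u \in W^{1,2}(\Omega)$ vanishes on $\partial\Omega$ in the above sense if and only if $u \in W^{1,2}_0(\Omega)$, since in that case $Y^{1,2}_0(\Omega)=W^{1,2}_0(\Omega)$; see Section \[sec:appendix\].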
\[def2\] We say that a $3\times 3$ matrix valued function ${\boldsymbol{G}}(x,y)$, with entries $G_{ij}(x,y)$ defined on the set ${\bigl\{(x,y)\in\Omega\times\Omega: x\neq y\bigr\}}$, is a Green's function of $L$ in $\Omega$ if it satisfies the following properties:

i) ${\boldsymbol{G}}(\cdot,y) \in W^{1,1}_{loc}(\Omega)$ and $L{\boldsymbol{G}}(\cdot,y)=\delta_y I$ for all $y\in\Omega$, in the sense that for $k=1,2, 3$, $$\int_{\Omega} a (\nabla\times {\boldsymbol{G}}(\cdot,y){\boldsymbol{e}}_k) \cdot (\nabla \times {\boldsymbol{\phi}}) + b (\nabla \cdot {\boldsymbol{G}}(\cdot,y) {\boldsymbol{e}}_k)(\nabla\cdot {\boldsymbol{\phi}})= \phi^k(y),\quad \forall {\boldsymbol{\phi}} \in C^\infty_0(\Omega),$$ where ${\boldsymbol{e}}_k$ denotes the $k$-th unit column vector; i.e., ${\boldsymbol{e}}_1=(1,0,0)^T$, etc.

ii) ${\boldsymbol{G}}(\cdot,y) \in Y^{1,2}(\Omega\setminus B_r(y))$ for all $y\in\Omega$ and $r>0$ and ${\boldsymbol{G}}(\cdot,y)$ vanishes on $\partial\Omega$.

iii) For any ${\boldsymbol{f}} \in C^\infty_0(\Omega)$, the function ${\boldsymbol{u}}$ given by $${\boldsymbol{u}}(x):=\int_\Omega {\boldsymbol{G}}(y,x) {\boldsymbol{f}}(y)\,dy$$ is a weak solution in $Y^{1,2}_0(\Omega)$ of the problem ; i.e., ${\boldsymbol{u}}$ belongs to $Y^{1,2}_0(\Omega)$ and satisfies $L {\boldsymbol{u}}={\boldsymbol{f}}$ in the sense of the identity .

We note that part iii) of the above definition gives the uniqueness of a Green's matrix; see Hofmann and Kim [@HK07]. We shall hereafter say that ${\boldsymbol{G}}(x,y)$ is the Green's matrix of $L$ in $\Omega$ if it satisfies all the above properties. Then, by using Theorem \[thm3.2a\] and following the proof of [@HK07 Theorem 4.1], we obtain the following theorem, where we use the notation $$a\wedge b:=\min(a,b),\quad a \vee b:=\max(a,b),\quad \text{for }\;a,b \in {\mathbb R}.$$

\[thm5.6gr\] Let $\Omega$ be a (possibly unbounded) domain in ${\mathbb R}^3$. Denote $d_x:={\operatorname{dist}}(x,\partial\Omega)$ for $x\in\Omega$; we set $d_x=\infty$ if $\Omega={\mathbb R}^3$. Then, there exists a unique Green's function ${\boldsymbol{G}}(x,y)$ of the operator $L$ in $\Omega$, and for all $x, y\in\Omega$ satisfying $0<{\lvert x-y\rvert}<d_x \wedge d_y$, we have $$\label{eq5.7gr} {\lvert{\boldsymbol{G}}(x,y)\rvert} \leq N {\lvert x-y\rvert}^{-1},\quad \text{where }\;N=N(\nu)>0.$$ Also, we have ${\boldsymbol{G}}(x,y)={\boldsymbol{G}}(y,x)^T$ for all $x, y\in \Omega$ with $x\neq y$. Moreover, ${\boldsymbol{G}}(\cdot,y)\in C^\alpha(\Omega\setminus{\{y\}})$ for some $\alpha=\alpha(\nu) \in(0,1)$ and satisfies the following estimate: $$\label{eq5.8gr} {\lvert{\boldsymbol{G}}(x,y)-{\boldsymbol{G}}(x',y)\rvert} \leq N {\lvert x-x'\rvert}^{\alpha} {\lvert x-y\rvert}^{-1-\alpha},\quad \text{where }\;N=N(\nu)>0,$$ provided that ${\lvert x-x'\rvert}<{\lvert x-y\rvert}/2\,$ and ${\lvert x-y\rvert}<d_x\wedge d_y$.

Next, we consider the Green's functions of the system .
\[def3g\] We say that a $3\times 3$ matrix valued function ${\boldsymbol{G}}(x,y)$, which is defined on the set ${\bigl\{(x,y)\in\Omega\times\Omega: x\neq y\bigr\}}$, is a Green's function of the system in $\Omega$ if it satisfies the following properties:

i) ${\boldsymbol{G}}(\cdot,y) \in W^{1,1}_{loc}(\Omega)$ for all $y\in\Omega$ and for $k=1,2, 3$, we have $$\begin{aligned} \int_{\Omega} a (\nabla\times {\boldsymbol{G}}(\cdot,y){\boldsymbol{e}}_k)\cdot (\nabla \times {\boldsymbol{\phi}}) &= \phi^k(y),\quad \forall {\boldsymbol{\phi}} \in C^\infty_0(\Omega),\\ \int_\Omega {\boldsymbol{G}}(\cdot,y){\boldsymbol{e}}_k \cdot \nabla \psi &= 0, \quad \forall \psi \in C^\infty_0(\Omega),\end{aligned}$$ where ${\boldsymbol{e}}_k$ denotes the $k$-th unit column vector; i.e., ${\boldsymbol{e}}_1=(1,0,0)^T$, etc.

ii) ${\boldsymbol{G}}(\cdot,y) \in Y^{1,2}(\Omega\setminus B_r(y))$ for all $y\in\Omega$ and $r>0$ and ${\boldsymbol{G}}(\cdot,y)$ vanishes on $\partial\Omega$.

iii) For any ${\boldsymbol{f}} \in \mathcal D(\Omega)$, the function ${\boldsymbol{u}}$ given by $${\boldsymbol{u}}(x):=\int_\Omega {\boldsymbol{G}}(y,x) {\boldsymbol{f}}(y)\,dy$$ is a weak solution in $Y^{1,2}_0(\Omega)$ of the problem $$\left\{ \begin{array}{c} \nabla\times (a(x)\nabla\times {\boldsymbol{u}})={\boldsymbol{f}} \quad\text{in }\;\Omega,\\ \nabla \cdot {\boldsymbol{u}}=0\quad\text{in }\;\Omega,\\ {\boldsymbol{u}}=0\quad \text{on }\;\partial\Omega, \end{array} \right.$$ that is, ${\boldsymbol{u}}$ belongs to $Y^{1,2}_0(\Omega)$ and satisfies the above system in the sense of the identities and with ${\boldsymbol{g}}=0$ and $h=0$.

Then by the same reasoning as above, Theorem \[thm5.6gr\] also applies to the Green's functions of the system . Moreover, in the case when $\Omega$ is a bounded Lipschitz domain satisfying the condition , a global version of estimate is available thanks to Theorem \[thm3.1t\] and [@KK10 Theorem 3.13].

\[thm5.8gr\] The statement of Theorem \[thm5.6gr\] remains valid for the Green's functions of the system . Moreover, if we assume that $\Omega$ is a bounded Lipschitz domain satisfying the condition , then for all $x, y\in\Omega$ with $x\neq y$, we have $${\lvert{\boldsymbol{G}}(x,y)\rvert} \leq N {\bigl\{d_x\wedge {\lvert x-y\rvert}\bigr\}}^{\alpha} {\bigl\{d_y\wedge {\lvert x-y\rvert}\bigr\}}^{\alpha} {\lvert x-y\rvert}^{-1-2\alpha},$$ where $\alpha=\alpha(\nu,\Omega) \in (0,1)$ and $N=N(\nu, \Omega)$.

\[rmk6.7gr\] Theorem \[thm5.6gr\] in particular establishes the existence of the Green's function of the operator $L$ in ${\mathbb R}^3$, which is usually referred to as the fundamental solution of the operator $L$. Notice that in that case, we have the pointwise estimate available for all $x, y\in {\mathbb R}^3$ with $x\neq y$, and estimate for all $x, x'$ satisfying ${\lvert x-x'\rvert}<{\lvert x-y\rvert}/2$. The various estimates for the Green's function that appear in [@HK07 Theorem 4.1] are also available in Theorem \[thm5.6gr\].
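As a simple consistency check, consider the constant coefficient case $a\equiv b\equiv 1$. Then, by the identity $\nabla\times(\nabla\times {\boldsymbol{u}})-\nabla(\nabla\cdot {\boldsymbol{u}})=-\Delta {\boldsymbol{u}}$, the operator $L$ reduces to $-\Delta$ acting componentwise, and its fundamental solution is the classical matrix $${\boldsymbol{G}}(x,y)=\frac{1}{4\pi{\lvert x-y\rvert}}\,I,$$ which exhibits both the bound and the symmetry ${\boldsymbol{G}}(x,y)={\boldsymbol{G}}(y,x)^T$ asserted in Theorem \[thm5.6gr\].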
Associated parabolic system {#sec:p}
===========================

In this section, which may be read independently of the previous ones, we consider the system of equations $$\label{eq4.1ax} {\boldsymbol{u}}_t+\nabla\times (a(x)\nabla\times {\boldsymbol{u}}) - \nabla (b(x) \nabla\cdot {\boldsymbol{u}})={\boldsymbol{f}} \quad\text{in }\;\Omega\times (0,T),$$ and prove that weak solutions of the system are Hölder continuous in $\Omega\times (0,T)$ provided that ${\boldsymbol{f}}$ satisfies a suitable condition. This extends [@KKM Theorem 3.1], where it is shown that weak solutions of the following system are Hölder continuous: $$\label{eq7.2ps} {\boldsymbol{u}}_t+\nabla\times (a(x)\nabla\times {\boldsymbol{u}})=0,\quad \nabla \cdot {\boldsymbol{u}}=0 \quad\text{in }\;\Omega\times (0,T).$$ As mentioned in the introduction, the above system arises naturally from Maxwell's equations in a quasi-static electromagnetic field. More precisely, let $\sigma(x)$ denote the electrical conductivity of a material and the vector ${\boldsymbol{H}}(x,t)$ represent the magnetic field. It is shown in Landau et al. [@LLP Ch. VII] that in quasi-static electromagnetic fields, ${\boldsymbol{H}}$ satisfies the equations $${\boldsymbol{H}}_t+\nabla \times \left(\tfrac{1}{\sigma}\nabla\times {\boldsymbol{H}}\right)=0,\quad \nabla \cdot{\boldsymbol{H}}=0\quad\text{in }\;\Omega\times(0,T),$$ which is a special case of the system . Also, in this section we study the Green's functions of the system and the system , by using recent results from [@CDK; @CDK10].

Notation and definitions {#notation-and-definitions}
------------------------

In this section, we abandon some of the notation introduced in Section \[sec:main\]. Instead, we follow the notation of Ladyzhenskaya et al. [@LSU] with a slight variation. We denote by $Q_T$ the cylindrical domain $\Omega\times (0,T)$, where $T>0$ is a fixed but arbitrary number, and by $S_T$ the lateral surface of $Q_T$; i.e., $S_T=\partial\Omega\times [0,T]$. Parabolic function spaces such as $L_{q,r}(Q_T)$, $L_q(Q_T)$, $W^{1,0}_2(Q_T)$, $W^{1,1}_2(Q_T)$, $V_2(Q_T)$, and $V^{1,0}_2(Q_T)$ are exactly those defined in Ladyzhenskaya et al. [@LSU]. We define the parabolic distance between the points $X=(x,t)$ and $Y=(y,s)$ by $${\lvert X-Y\rvert}_p:=\max({\lvert x-y\rvert}, \sqrt{{\lvert t-s\rvert}})$$ and define the parabolic Hölder norm as follows: $${\lvert u\rvert}_{\alpha,\alpha/2;Q}=[u]_{\alpha,\alpha/2;Q}+{\lvert u\rvert}_{0;Q}:= \sup_{\substack{X, Y \in Q\\ X\neq Y}} \frac{{\lvert u(X)-u(Y)\rvert}}{{\lvert X-Y\rvert}_p^{\alpha}} + \sup_{X\in Q}\,{\lvert u(X)\rvert}.$$ We write $\nabla u$ for the spatial gradient of $u$ and $u_t$ for its time derivative. We define $$Q^{-}_r(X)= B_r(x)\times (t-r^2,t),\quad Q_r(X)=B_r(x)\times (t-r^2,t+r^2).$$ We denote by ${\mathscr{L}}$ the operator $\partial_t+L$; i.e., $${\mathscr{L}}{\boldsymbol{u}}:={\boldsymbol{u}}_t+ L {\boldsymbol{u}}= {\boldsymbol{u}}_t+\nabla\times (a(x)\nabla\times {\boldsymbol{u}}) - \nabla (b(x) \nabla\cdot {\boldsymbol{u}}),$$ and by ${{}^t\!\mathscr{L}}$ the adjoint operator $-\partial_t+L$.
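We note that the parabolic distance is homogeneous with respect to the parabolic scaling $(x,t)\mapsto (rx,r^2t)$: writing $X_r=(rx,r^2t)$ and $Y_r=(ry,r^2s)$, we have $${\lvert X_r-Y_r\rvert}_p=\max\bigl({\lvert rx-ry\rvert},\sqrt{{\lvert r^2t-r^2s\rvert}}\bigr)=r\,{\lvert X-Y\rvert}_p,$$ which is the scaling reflected in the definition of the cylinders $Q^{-}_r(X)$ and $Q_r(X)$.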
For a cylinder $Q$ of the form $\Omega\times (a,b)$, where $-\infty\leq a<b\leq \infty$, we say that ${\boldsymbol{u}}$ is a weak solution in $V_2(Q)$ ($V^{1,0}_2(Q)$) of ${\mathscr{L}}{\boldsymbol{u}} ={\boldsymbol{f}}$ if ${\boldsymbol{u}} \in V_2(Q)$ ($V^{1,0}_2(Q)$) and satisfies the identity $$-\int_{Q} {\boldsymbol{u}} \cdot {\boldsymbol{\phi}}_t+ \int_{Q} a (\nabla \times {\boldsymbol{u}}) \cdot (\nabla\times {\boldsymbol{\phi}}) + b (\nabla \cdot {\boldsymbol{u}})(\nabla \cdot {\boldsymbol{\phi}})= \int_{Q} {\boldsymbol{f}} \cdot {\boldsymbol{\phi}}, \quad \forall {\boldsymbol{\phi}} \in C^\infty_0(Q).$$ Similarly, we say that ${\boldsymbol{u}}$ is a weak solution in $V_2(Q)$ ($V^{1,0}_2(Q)$) of ${{}^t\!\mathscr{L}}{\boldsymbol{u}} ={\boldsymbol{f}}$ if ${\boldsymbol{u}} \in V_2(Q)$ ($V^{1,0}_2(Q)$) and satisfies the identity $$\int_{Q} {\boldsymbol{u}} \cdot {\boldsymbol{\phi}}_t+ \int_{Q} a (\nabla \times {\boldsymbol{u}}) \cdot (\nabla\times {\boldsymbol{\phi}}) + b (\nabla \cdot {\boldsymbol{u}})(\nabla \cdot {\boldsymbol{\phi}})= \int_{Q} {\boldsymbol{f}} \cdot {\boldsymbol{\phi}}, \quad \forall {\boldsymbol{\phi}} \in C^\infty_0(Q).$$

Hölder continuity estimates
---------------------------

The following theorem is a parabolic analogue of Theorem \[thm3.2a\]. However, it should be clearly understood that in the theorem below, the coefficients $a$ and $b$ of the system are assumed to be time-independent.

\[thm4.2b\] Let $Q_T=\Omega\times(0,T)$, where $\Omega$ is a domain in ${\mathbb R}^3$. Assume that $a(x)$ and $b(x)$ are measurable functions on $\Omega$ satisfying . Let ${\boldsymbol{u}}$ be a weak solution in $V_2(Q_T)$ of the system with ${\boldsymbol{f}} \in L_q(Q_T)$ for some $q>5/2$. Then ${\boldsymbol{u}}$ is Hölder continuous in $Q_T$, and for any $Q^{-}_R=Q^{-}_R(X_0) \subset\subset Q_T$, we have the following estimate for ${\boldsymbol{u}}$ in $Q^{-}_{R/2}$: $$\label{eq4.3hi} R^\alpha [{\boldsymbol{u}}]_{\alpha, \alpha/2;Q^{-}_{R/2}} + {\lvert{\boldsymbol{u}}\rvert}_{0;Q^{-}_{R/2}} \leq N \left( R^{-5/2} {\lVert{\boldsymbol{u}}\rVert}_{L_2(Q^{-}_R)}+ R^{2-5/q} {\lVert{\boldsymbol{f}}\rVert}_{L_q(Q^{-}_R)}\right),$$ where $\alpha=\alpha(\nu, q) \in (0,1)$ and $N=N(\nu,q)>0$. The proof of the above theorem will be given in §\[sec7.3p\] below.

As in [@KKM Theorem 3.2], one can consider the case when the coefficients $a$ and $b$ of the system are time-dependent but still have some regularity in the $t$-variable. For a measurable function $f=f(X)=f(x,t)$, we set $$\omega_\delta(f):=\sup_{X=(x,t)\in{\mathbb R}^4} \sup_{r\le \delta} \frac{1}{{\lvert Q_r(X)\rvert}} \int_{t-r^2}^{t+r^2} \!\int_{B_r(x)} {\lvert f(y,s)-\bar f_{t,r}(y)\rvert}\,dy\,ds, \quad\forall \delta>0,$$ where $\bar f_{t,r}(y)=\fint_{t-r^2}^{t+r^2} f(y,s)\,ds$. We say that $f$ belongs to ${\mathrm{VMO}}_t$ if $\lim_{\delta\to 0} \omega_\delta(f)=0$. Assume that the coefficients $a(x,t)$ and $b(x,t)$ are defined in the entire space ${\mathbb R}^4$ and belong to ${\mathrm{VMO}}_t$. Let ${\boldsymbol{u}}\in V_2(Q_T)$ be a weak solution of the system $${\boldsymbol{u}}_t+\nabla\times (a(x,t)\nabla\times {\boldsymbol{u}}) - \nabla (b(x,t) \nabla\cdot {\boldsymbol{u}})={\boldsymbol{f}} \quad\text{in }\;Q_T,$$ where ${\boldsymbol{f}} \in L_q(Q_T)$ with $q>5/2$. Then one can show that ${\boldsymbol{u}}$ is Hölder continuous in $Q_T$. The proof is very similar to that of [@KKM Theorem 3.2].
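We remark that any time-independent coefficient belongs to ${\mathrm{VMO}}_t$ trivially: if $f=f(x)$, then $\bar f_{t,r}(y)=f(y)$, so $\omega_\delta(f)=0$ for every $\delta>0$. The ${\mathrm{VMO}}_t$ condition thus imposes regularity in the $t$-variable only, while mere measurability in $x$ suffices, in accordance with Theorem \[thm4.2b\].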
Also, as is mentioned in Remark \[rmk2.5rr\], one may assume that $a$ is a $3\times 3$ (possibly non-symmetric) matrix valued function satisfying the uniform ellipticity and boundedness condition; see [@KKM] and also consult [@Kim] for the treatment of non-symmetric coefficients.

\[rmk4.5ff\] In Theorem \[thm4.2b\], instead of assuming that ${\boldsymbol{f}}\in L_q(Q_T)$, one may assume that ${\boldsymbol{f}}$ belongs to the mixed norm space $L_{q,r}(Q_T)$ with suitable $q$ and $r$. In fact, one may assume that ${\boldsymbol{f}}$ belongs to the Morrey space $M^{10/7,10(3+2\delta)/7}$ with $\delta\in(0,1)$, where $M^{p,q}$ is the set of all functions $f\in L_p(Q_T)$ with finite norm (cf. Lieberman [@Lieberman §VI.7]) $${\lVert u\rVert}_{M^{p,q}}=\sup_{Q^{-}_r(X_0)\subset Q_T} \left(r^{-q}\int_{Q^{-}_r(X_0)} {\lvert u\rvert}^p\,\right)^{1/p}.$$ Then, instead of the estimate in the proof of Theorem \[thm4.2b\], we would have $$\int_{Q^{-}_r(X)}{\lvert\nabla{\boldsymbol{w}}\rvert}^2 \leq N{\lVert{\boldsymbol{f}}\rVert}^2_{L_{10/7}(Q^{-}_r(X))} \leq N r^{3+2\delta} {\lVert{\boldsymbol{f}}\rVert}^2_{M^{10/7,10(3+2\delta)/7}}.$$ The rest of the proof remains essentially the same.

Green's function {#greens-function}
----------------

Let $U=\Omega\times {\mathbb R}$ be an infinite cylinder whose base $\Omega$ is a (possibly unbounded) domain in ${\mathbb R}^3$, and let $\partial U$ be its (parabolic) boundary $\partial\Omega\times{\mathbb R}$. Let ${\mathcal{S}}\subset \overline Q$ and $u$ be a $W^{1,0}_2(Q)$ function. We say that $u$ vanishes (or write $u=0$) on ${\mathcal{S}}$ if $u$ is a limit in $W^{1,0}_2(Q)$ of a sequence of functions in $C^\infty_0(\overline Q\setminus {\mathcal{S}})$. We say that a $3\times 3$ matrix valued function ${\boldsymbol{G}}(X,Y)={\boldsymbol{G}}(x,t,y,s)$, with entries $G_{ij} (X,Y)$ defined on the set ${\bigl\{(X,Y)\in U\times U: X\neq Y\bigr\}}$, is a Green's function of the operator ${\mathscr{L}}$ in $U$ if it satisfies the following properties:

i) ${\boldsymbol{G}}(\cdot,Y)\in W^{1,0}_{1,loc}(U)$ and ${\mathscr{L}}{\boldsymbol{G}}(\cdot,Y) = \delta_Y I$ for all $Y\in U$, in the sense that for $k=1,2,3$, the following identity holds for all ${\boldsymbol{\phi}} \in C^\infty_0(U)$: $$\int_{U} -{\boldsymbol{G}}(\cdot,Y) {\boldsymbol{e}}_k \cdot {\boldsymbol{\phi}}_t+ a (\nabla\times {\boldsymbol{G}}(\cdot,Y){\boldsymbol{e}}_k)\cdot (\nabla \times {\boldsymbol{\phi}}) + b (\nabla \cdot {\boldsymbol{G}}(\cdot,Y) {\boldsymbol{e}}_k)(\nabla\cdot {\boldsymbol{\phi}})= \phi^k(Y),$$ where ${\boldsymbol{e}}_k$ denotes the $k$-th unit column vector; i.e., ${\boldsymbol{e}}_1=(1,0,0)^T$, etc.

ii) ${\boldsymbol{G}}(\cdot,Y) \in V_2^{1,0}(U\setminus Q_r(Y))$ for all $Y\in U$ and $r>0$ and ${\boldsymbol{G}}(\cdot,Y)$ vanishes on $\partial U$.

iii) For any ${\boldsymbol{f}}\in C^\infty_0(U)$, the function ${\boldsymbol{u}}$ given by $${\boldsymbol{u}}(X):=\int_U {\boldsymbol{G}}(Y,X) {\boldsymbol{f}}(Y)\,dY$$ is a weak solution in $V^{1,0}_2(U)$ of ${{}^t\!\mathscr{L}}{\boldsymbol{u}}={\boldsymbol{f}}$ and vanishes on $\partial U$.

We note that part iii) of the above definition gives the uniqueness of a Green's function; see [@CDK]. We shall thus say that ${\boldsymbol{G}}(X,Y)$ is the Green's function of ${\mathscr{L}}$ in $U$ if it satisfies the above properties.
By Theorem \[thm4.2b\] and [@CDK Theorem 2.7], we have the following theorem:

\[thm1hk\] Let $U=\Omega\times {\mathbb R}$ be an infinite cylinder, where the base $\Omega$ is a (possibly unbounded) domain in ${\mathbb R}^3$. Then the Green's function ${\boldsymbol{G}}(X,Y)$ of ${\mathscr{L}}$ exists in $U$ and satisfies $$\label{eq7.9cc} {\boldsymbol{G}}(x,t,y,s)={\boldsymbol{G}}(x,t-s,y,0);\quad {\boldsymbol{G}}(x,t,y,0)\equiv 0\;\;\text{for}\; t<0.$$ For all ${\boldsymbol{f}}\in C^\infty_0(U)$, the function ${\boldsymbol{u}}$ given by $$\label{eqn:E-70} {\boldsymbol{u}}(X):=\int_{U} {\boldsymbol{G}}(X,Y){\boldsymbol{f}}(Y)\,dY$$ is a weak solution in $V^{1,0}_2(U)$ of ${\mathscr{L}}{\boldsymbol{u}}={\boldsymbol{f}}$ and vanishes on $\partial U$. Moreover, for all ${\boldsymbol{g}}\in L^2(\Omega)$, the function ${\boldsymbol{u}}(x,t)$ defined by $${\boldsymbol{u}}(x,t):=\int_{\Omega} {\boldsymbol{G}}(x,t,y,0){\boldsymbol{g}}(y)\,dy$$ is the unique weak solution in $V^{1,0}_2(Q_T)$ of the problem[^1] $${\mathscr{L}}{\boldsymbol{u}} =0,\quad {\boldsymbol{u}} \big|_{S_T}=0,\quad {\boldsymbol{u}} \big|_{t=0}={\boldsymbol{g}}(x),$$ and if, in addition, ${\boldsymbol{g}}$ is continuous at $x_0\in\Omega$, then we have $$\lim_{\substack{(x,t)\to (x_0,0)\\ x\in\Omega,\,t>0}} {\boldsymbol{u}}(x,t)={\boldsymbol{g}}(x_0).$$

\[rmk7.11hk\] The identity ${\boldsymbol{G}}(x,t,y,s)={\boldsymbol{G}}(x,t-s,y,0)$ in Theorem \[thm1hk\] comes from the fact that ${\mathscr{L}}$ has time-independent coefficients; see [@DK09]. The function ${\boldsymbol{K}}_t(x,y)$ defined by $$\label{eq7.12qm} {\boldsymbol{K}}_t(x,y)={\boldsymbol{G}}(x,t,y,0),\quad x,y\in\Omega,\;\; t>0$$ is usually called *the (Dirichlet) heat kernel* of the elliptic operator $L$ in $\Omega$. It is known that ${\boldsymbol{K}}_t$ satisfies the semi-group property $${\boldsymbol{K}}_{t+s}(x,y)=\int_\Omega {\boldsymbol{K}}_t(x,z) {\boldsymbol{K}}_s(z,y)\,dz, \quad\forall x,y\in\Omega,\;\;\forall t,s>0,$$ and in particular, if $\Omega={\mathbb R}^3$, then we also have the following identity: $$\int_{{\mathbb R}^3} {\boldsymbol{K}}_t(x,y)\,dy=I, \quad\forall x\in{\mathbb R}^3,\;\;\forall t>0,$$ where $I$ denotes the $3\times 3$ identity matrix; see [@CDK Theorem 2.11 and Remark 2.12]. The following theorem is another consequence of Theorem \[thm4.2b\]; see [@CDK Theorem 2.11].

\[thm2hk\] Let ${\boldsymbol{K}}_t(x,y)$ be the heat kernel for the operator $L$ in ${\mathbb R}^3$ as constructed in Theorem \[thm1hk\]. Then we have the following Gaussian bound for the heat kernel: $${\lvert{\boldsymbol{K}}_t(x,y)\rvert} \leq N t^{-3/2}\exp\{-\kappa{\lvert x-y\rvert}^2/t\},\quad \forall t>0,\;\; x,y\in{\mathbb R}^3,$$ where $N=N(\nu)>0$ and $\kappa=\kappa(\nu)>0$.

Next, we consider the Green's functions of the system .
\[def3ks\] We say that a $3\times 3$ matrix valued function ${\boldsymbol{G}}(X,Y)={\boldsymbol{G}}(x,t,y,s)$, with entries $G_{ij} (X,Y)$ defined on the set ${\bigl\{(X,Y)\in U\times U: X\neq Y\bigr\}}$, is a Green's function of the system in $U$ if it satisfies the following properties:

i) ${\boldsymbol{G}}(\cdot,Y)\in W^{1,0}_{1,loc}(U)$ for all $Y\in U$ and for $k=1,2, 3$, we have $$\begin{aligned} \int_{U} -{\boldsymbol{G}}(\cdot,Y) {\boldsymbol{e}}_k \cdot {\boldsymbol{\phi}}_t+ a (\nabla\times {\boldsymbol{G}}(\cdot,Y){\boldsymbol{e}}_k)\cdot (\nabla \times {\boldsymbol{\phi}}) &= \phi^k(Y), \quad \forall {\boldsymbol{\phi}} \in C^\infty_0(U),\\ \int_U {\boldsymbol{G}}(\cdot,Y){\boldsymbol{e}}_k \cdot \nabla \psi &= 0, \quad \forall \psi \in C^\infty_0(U),\end{aligned}$$ where ${\boldsymbol{e}}_k$ denotes the $k$-th unit column vector; i.e., ${\boldsymbol{e}}_1=(1,0,0)^T$, etc.

ii) ${\boldsymbol{G}}(\cdot,Y) \in V_2^{1,0}(U\setminus Q_r(Y))$ for all $Y\in U$ and $r>0$ and ${\boldsymbol{G}}(\cdot,Y)$ vanishes on $\partial U$.

iii) For any ${\boldsymbol{f}}\in C^\infty_0(U)$ satisfying $\nabla\cdot {\boldsymbol{f}} =0$ in $U$, the function ${\boldsymbol{u}}$ defined by $${\boldsymbol{u}}(X):=\int_U {\boldsymbol{G}}(Y,X) {\boldsymbol{f}}(Y)\,dY$$ is a weak solution in $V^{1,0}_2(U)$ of the problem $$-{\boldsymbol{u}}_t+\nabla\times (a(x)\nabla\times {\boldsymbol{u}})={\boldsymbol{f}},\quad \nabla \cdot {\boldsymbol{u}}=0,\quad {\boldsymbol{u}}\big|_{\partial U}=0,$$ that is, ${\boldsymbol{u}}$ belongs to $V^{1,0}_2(U)$, vanishes on $\partial U$, and satisfies the above system in the sense of the following identities: $$\begin{aligned} \int_{U} {\boldsymbol{u}} \cdot {\boldsymbol{\phi}}_t+ a (\nabla \times {\boldsymbol{u}}) \cdot (\nabla\times {\boldsymbol{\phi}})& = \int_{U} {\boldsymbol{f}} \cdot {\boldsymbol{\phi}}, \quad \forall {\boldsymbol{\phi}} \in C^\infty_0(U),\\ \int_U {\boldsymbol{u}} \cdot \nabla \psi &=0,\quad \forall \psi\in C^\infty_0(U).\end{aligned}$$

It can be easily seen that the existence of the Green's function of the system in $U$ follows from [@KKM Theorem 3.1] and [@CDK Theorem 2.7], and that it satisfies the relations in Theorem \[thm1hk\]. We shall say that ${\boldsymbol{K}}_t$ defined by the formula is *the (Dirichlet) heat kernel* of the elliptic system in $\Omega$. Then it satisfies the statement in Remark \[rmk7.11hk\] as well as that in Theorem \[thm2hk\]. If we assume further that $\Omega$ is a domain satisfying the hypothesis of Theorem \[thm3.1t\], then we have the following result, which is an easy consequence of [@CDK10 Theorem 3.6] combined with Theorem \[thm3.1t\] and [@DK09 Lemma 4.4] (see also [@CDK10 Remark 3.10]):

\[thm3hk\] Let $U=\Omega\times {\mathbb R}$ with $\Omega$ satisfying the hypothesis of Theorem \[thm3.1t\]. Then the heat kernel ${\boldsymbol{K}}_t(x,y)$ of the system exists in $\Omega$. Moreover, for all $T>0$ there exists a constant $N=N(\nu,\Omega,T)$ such that for all $x,y \in \Omega$ and $0<t \leq T$, we have $${\lvert{\boldsymbol{K}}_t(x,y)\rvert} \leq N \left(1 \wedge \frac {d_x} {\sqrt {t} \vee {\lvert x-y\rvert}} \right)^{\alpha} \left(1 \wedge \frac{d_y} {\sqrt {t} \vee {\lvert x-y\rvert}}\right)^{\alpha}\,t^{-3/2}\exp {\{-\kappa {\lvert x-y\rvert}^2/t\}},$$ where $\kappa=\kappa(\nu,\Omega)>0$ and $\alpha=\alpha(\nu,\Omega) \in (0,1)$ are constants independent of $T$, and we used the notation $a\wedge b=\min(a,b)$, $a\vee b=\max(a,b)$, and $d_x={\operatorname{dist}}(x,\partial\Omega)$.
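We remark that the above estimate reduces to the Gaussian bound of Theorem \[thm2hk\] in the interior: if $d_x \wedge d_y \geq \sqrt{t} \vee {\lvert x-y\rvert}$, then both prefactors are equal to one and we recover $${\lvert{\boldsymbol{K}}_t(x,y)\rvert} \leq N t^{-3/2}\exp\{-\kappa {\lvert x-y\rvert}^2/t\};$$ the two prefactors thus quantify the additional decay of the heat kernel near $\partial\Omega$.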
Proof of Theorem \[thm4.2b\] {#sec7.3p}
----------------------------

We follow the strategy used in [@KKM]. As before, we shall make the qualitative assumption that the weak solution ${\boldsymbol{u}}$ is smooth in $Q_T$. Let us first assume that ${\boldsymbol{f}} =0$ and consider the homogeneous system $$\label{eq4.4dp} {\boldsymbol{u}}_t+L {\boldsymbol{u}} :={\boldsymbol{u}}_t+\nabla\times (a(x)\nabla\times {\boldsymbol{u}}) - \nabla (b(x) \nabla\cdot {\boldsymbol{u}})=0 \quad\text{in }\; Q_T.$$ The proof of the following lemma is very similar to those of [@KKM Lemmas 3.1–3.3], where the assumption that the coefficients of the operator are time-independent is used in an essential way.

\[lem4.5\] Let ${\boldsymbol{v}}\in V_2(Q^{-}_{\lambda r})$, where $Q^{-}_{\lambda r}=Q^{-}_{\lambda r}(X_0)$ with $\lambda>1$, be a weak solution of ${\boldsymbol{v}}_t+L{\boldsymbol{v}} =0$ in $Q^{-}_{\lambda r}$. Then we have the following estimates: $$\begin{aligned} \sup_{t_0-r^2\leq t \leq t_0}\int_{B_r}{\lvert{\boldsymbol{v}}(\cdot,t)\rvert}^2 + \int_{Q^{-}_r} {\lvert\nabla {\boldsymbol{v}}\rvert}^2 &\leq N r^{-2} \int_{Q^{-}_{\lambda r}}{\lvert{\boldsymbol{v}}\rvert}^2,\\ \sup_{t_0-r^2\leq t \leq t_0}\int_{B_r}{\lvert\nabla {\boldsymbol{v}}(\cdot,t)\rvert}^2 + \int_{Q^{-}_r} {\lvert{\boldsymbol{v}}_t\rvert}^2 & \leq N r^{-4} \int_{Q^{-}_{\lambda r}}{\lvert{\boldsymbol{v}}\rvert}^2,\\ \sup_{t_0-r^2\leq t \leq t_0}\int_{B_r}{\lvert{\boldsymbol{v}}_t(\cdot,t)\rvert}^2 + \int_{Q^{-}_r} {\lvert\nabla {\boldsymbol{v}}_t\rvert}^2 & \leq N r^{-6} \int_{Q^{-}_{\lambda r}}{\lvert{\boldsymbol{v}}\rvert}^2,\end{aligned}$$ where $N=N(\nu, \lambda)>0$.

The proofs of the following lemmas are also standard in parabolic theory and shall be omitted; see e.g., [@CDK Lemmas 2.4 and 3.1] and also [@CDK10 Lemma 8.6].

\[lem4.6\] Let ${\boldsymbol{u}}\in V_2(Q^{-}_r)$, where $Q^{-}_r=Q^{-}_r(X_0)$, be a weak solution of ${\boldsymbol{u}}_t+L {\boldsymbol{u}}= {\boldsymbol{f}}$ in $Q^{-}_r$. Then we have the estimate $$\int_{Q^{-}_r} {\lvert{\boldsymbol{u}} - {\boldsymbol{u}}_{X_0,r}\rvert}^2 \leq N \left(r^2 \int_{Q^{-}_r} {\lvert\nabla {\boldsymbol{u}}\rvert}^2 + r^{-1}{\lVert{\boldsymbol{f}}\rVert}_{L_1(Q^{-}_r)}^2\right);\quad {\boldsymbol{u}}_{X_0,r} = \fint_{Q^{-}_r(X_0)} {\boldsymbol{u}},$$ where $N=N(\nu)>0$.

\[lem4.7\] Let ${\boldsymbol{u}}\in V_2(Q^{-}_{\lambda r})$, where $Q^{-}_{\lambda r}=Q^{-}_{\lambda r}(X_0)$ with $\lambda>1$, be a weak solution of ${\boldsymbol{u}}_t+L{\boldsymbol{u}} ={\boldsymbol{f}}$ in $Q^{-}_{\lambda r}$. Then we have $$\sup_{t_0-r^2\leq t \leq t_0}\int_{B_r}{\lvert{\boldsymbol{u}}(\cdot,t)\rvert}^2 + \int_{Q^{-}_r} {\lvert\nabla {\boldsymbol{u}}\rvert}^2 \leq N \left(r^{-2} \int_{Q^{-}_{\lambda r}} {\lvert{\boldsymbol{u}}\rvert}^2+ {\lVert{\boldsymbol{f}} \rVert}_{L_{10/7}(Q^{-}_{\lambda r})}^2\right),$$ where $N=N(\nu,\lambda)>0$.

With the above lemmas and Theorem \[thm3.2a\] at hand, we now proceed as in the proof of [@KKM Theorem 3.1] (see also the proof of [@Kim Theorem 3.3]) to conclude that any weak solution ${\boldsymbol{v}} \in V_2(Q_T)$ of the system is Hölder continuous in $Q_T$ and satisfies the estimate $$\label{eq4.10mb} [{\boldsymbol{v}}]_{\mu,\mu/2;Q^{-}_{R/2}} \leq N R^{-5/2-\mu} {\lVert{\boldsymbol{v}}\rVert}_{L_2(Q^{-}_R)};\quad Q^{-}_R=Q^{-}_R(X_0),$$ where $\mu=\mu(\nu) \in (0,1)$ and $N=N(\nu)>0$.
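We remark that the estimate is consistent with parabolic scaling: if ${\boldsymbol{v}}$ is a weak solution of the system in $Q^{-}_R(X_0)$, then ${\boldsymbol{v}}_R(x,t):={\boldsymbol{v}}(x_0+Rx, t_0+R^2t)$ is a weak solution of a system of the same type in $Q^{-}_1(0)$, with coefficients $a(x_0+Rx)$ and $b(x_0+Rx)$ satisfying the same ellipticity bounds with constant $\nu$, and a direct computation shows that both sides of the estimate transform by the same power of $R$; this is in accordance with the fact that $\mu$ and $N$ depend only on $\nu$.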
There is a well-known procedure for obtaining Hölder estimates for weak solutions of the inhomogeneous system ${\boldsymbol{u}}_t+L{\boldsymbol{u}} = {\boldsymbol{f}}$ from the above estimate for weak solutions of the corresponding homogeneous system ${\boldsymbol{u}}_t+L{\boldsymbol{u}} = 0$, which we shall demonstrate below for completeness. For $X\in Q^{-}_{R/4}(X_0)$ and $r\in (0,R/4]$, we split ${\boldsymbol{u}}={\boldsymbol{v}} + {\boldsymbol{w}}$ in $Q^{-}_r(X)$, where ${\boldsymbol{w}}$ is the unique weak solution in $V^{1,0}_2(Q^{-}_r(X))$ of ${\boldsymbol{w}}_t+L {\boldsymbol{w}}={\boldsymbol{f}}$ in $Q^{-}_r(X)$ with zero boundary condition on the parabolic boundary $\partial_p Q^{-}_r(X)$. Then, ${\boldsymbol{v}}={\boldsymbol{u}}-{\boldsymbol{w}}$ satisfies ${\boldsymbol{v}}_t+L {\boldsymbol{v}}=0$ in $Q^{-}_r(X)$, and thus, for $0<\rho \leq r$ (cf. [@CDK Eq. (3.9)]), we have $$\begin{aligned} \label{eq7.32} \int_{Q^{-}_\rho(X)}{\lvert\nabla {\boldsymbol{u}}\rvert}^2 &\leq 2\int_{Q^{-}_\rho(X)}{\lvert\nabla {\boldsymbol{v}}\rvert}^2+2 \int_{Q^{-}_\rho(X)}{\lvert\nabla {\boldsymbol{w}}\rvert}^2\\ \nonumber &\leq N(\rho/r)^{3+2\mu}\int_{Q^{-}_r(X)}{\lvert\nabla {\boldsymbol{v}}\rvert}^2+2 \int_{Q^{-}_r(X)}{\lvert\nabla {\boldsymbol{w}}\rvert}^2\\ \nonumber &\leq N(\rho/r)^{3+2\mu}\int_{Q^{-}_r(X)}{\lvert\nabla {\boldsymbol{u}}\rvert}^2+N \int_{Q^{-}_r(X)}{\lvert\nabla {\boldsymbol{w}}\rvert}^2.\end{aligned}$$ Choose $p\in (5/2,q)$ such that $\alpha:=2-5/p<\mu$. By the energy inequality and a parabolic embedding theorem (see [@LSU §II.3]), we get (cf. [@CDK Eq. (3.10)]) $$\label{eq7.33} \int_{Q^{-}_r(X)}{\lvert\nabla{\boldsymbol{w}}\rvert}^2 \leq N{\lVert{\boldsymbol{f}}\rVert}^2_{L_{10/7}(Q^{-}_r(X))} \leq N r^{3+2\alpha} {\lVert{\boldsymbol{f}}\rVert}^2_{L_{p}(Q^{-}_{R/2})}.$$ Combining with , we get for all $\rho<r \leq R/4$, $$\int_{Q^{-}_\rho(X)}{\lvert\nabla {\boldsymbol{u}}\rvert}^2\leq N(\rho/r)^{3+2\mu}\int_{Q^{-}_r(X)}{\lvert\nabla {\boldsymbol{u}}\rvert}^2+ Nr^{3+2\alpha}{\lVert{\boldsymbol{f}}\rVert}^2_{L_p(Q^{-}_{R/2})}.$$ Then, by a well-known iteration argument (see e.g., [@Gi83 Lemma 2.1, p.
86]), we have $$\int_{Q^{-}_r(X)}{\lvert\nabla {\boldsymbol{u}}\rvert}^2 \leq N(r/R)^{3+2\alpha}\int_{Q^{-}_{R/4}(X)}{\lvert\nabla {\boldsymbol{u}}\rvert}^2+ N r^{3+2\alpha}{\lVert{\boldsymbol{f}}\rVert}^2_{L_p(Q^{-}_{R/2})}.$$ By Lemma \[lem4.6\], the above estimate, and Hölder's inequality, we get $$\int_{Q^{-}_r(X)}{\lvert{\boldsymbol{u}}- {\boldsymbol{u}}_{X,r}\rvert}^2\leq N r^{5+2\alpha}\left(R^{-3-2\alpha}{\lVert\nabla {\boldsymbol{u}}\rVert}^2_{L_2(Q^{-}_{R/4}(X))}+{\lVert{\boldsymbol{f}}\rVert}^2_{L_p(Q^{-}_{R/2})}\right).$$ Then, by Campanato's characterization of Hölder continuous functions, we have $$[{\boldsymbol{u}}]_{\alpha,\alpha/2; Q^{-}_{R/4}} \leq N\left(R^{-3/2-\alpha}{\lVert\nabla {\boldsymbol{u}}\rVert}_{L_2(Q^{-}_{R/2})}+{\lVert{\boldsymbol{f}}\rVert}_{L_{p}(Q^{-}_{R/2})}\right).$$ By Lemma \[lem4.7\] and Hölder's inequality (recall $\alpha=2-5/p$), we then obtain $$\label{eq10.28} R^\alpha [{\boldsymbol{u}}]_{\alpha,\alpha/2; Q^{-}_{R/4}} \leq N\left(R^{-5/2}{\lVert{\boldsymbol{u}}\rVert}_{L_2(Q^{-}_R)}+R^{2-5/q} {\lVert{\boldsymbol{f}}\rVert}_{L_q(Q^{-}_{R})}\right).$$ Similarly to , we then also obtain $$\label{eq07zz} {\lvert{\boldsymbol{u}}\rvert}_{0; Q^{-}_{R/8}} \leq N\left(R^{-5/2}{\lVert{\boldsymbol{u}}\rVert}_{L_2(Q^{-}_R)}+R^{2-5/q} {\lVert{\boldsymbol{f}}\rVert}_{L_q(Q^{-}_{R})}\right).$$ Finally, the desired estimate follows from , , and the standard covering argument. The theorem is proved. [$\blacksquare$]{}

Appendix {#sec:appendix}
========

Existence of a unique weak solution of the problem
---------------------------------------------------

We prove the existence of a unique weak solution in $Y^{1,2}_0(\Omega)$ of the more general problem $$\label{eq9.1ax} \left\{ \begin{array}{c} \nabla\times (a(x)\nabla\times {\boldsymbol{u}})-\nabla(b(x)\nabla \cdot {\boldsymbol{u}})= {\boldsymbol{f}} + \nabla\times {\boldsymbol{F}} + \nabla g\quad\text{in }\;\Omega,\\ {\boldsymbol{u}}=0\quad \text{on }\;\partial\Omega, \end{array} \right.$$ where ${\boldsymbol{f}} \in L^{6/5}(\Omega)$ and ${\boldsymbol{F}}, g \in L^2(\Omega)$. We say that a function ${\boldsymbol{u}}$ is a weak solution in $Y^{1,2}_0(\Omega)$ of the problem if ${\boldsymbol{u}}$ belongs to $Y^{1,2}_0(\Omega)$ and satisfies the identity $$\int_\Omega a (\nabla \times {\boldsymbol{u}}) \cdot (\nabla \times {\boldsymbol{v}}) + b (\nabla \cdot {\boldsymbol{u}}) (\nabla \cdot {\boldsymbol{v}}) = \int_\Omega {\boldsymbol{f}} \cdot {\boldsymbol{v}} +{\boldsymbol{F}} \cdot \nabla \times {\boldsymbol{v}} + g \nabla \cdot {\boldsymbol{v}}, \quad \forall {\boldsymbol{v}} \in C^\infty_0(\Omega).$$ Notice that the inequality implies that the bilinear form $$\label{eq9.2ax} {\left\langle{\boldsymbol{u}},{\boldsymbol{v}}\right\rangle}={\left\langle{\boldsymbol{u}},{\boldsymbol{v}}\right\rangle}_H= \sum_{i=1}^3 \int_\Omega \nabla u^i\cdot \nabla v^i$$ defines an inner product on $H:=Y^{1,2}_0(\Omega)^3$ and that $H$ equipped with the above inner product is a Hilbert space.
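Let us also indicate, for the reader's convenience, why the right-hand side of the above identity defines a bounded linear functional on $H$: for ${\boldsymbol{v}} \in H$, the Sobolev inequality ${\lVert{\boldsymbol{v}}\rVert}_{L^6(\Omega)} \leq N {\lVert\nabla {\boldsymbol{v}}\rVert}_{L^2(\Omega)}$ and Hölder's inequality give $$\Bigl\lvert\int_\Omega {\boldsymbol{f}} \cdot {\boldsymbol{v}}\Bigr\rvert \leq {\lVert{\boldsymbol{f}}\rVert}_{L^{6/5}(\Omega)} {\lVert{\boldsymbol{v}}\rVert}_{L^6(\Omega)} \leq N {\lVert{\boldsymbol{f}}\rVert}_{L^{6/5}(\Omega)} {\left\langle{\boldsymbol{v}},{\boldsymbol{v}}\right\rangle}^{1/2},$$ while the terms involving ${\boldsymbol{F}}$ and $g$ are estimated directly by the Cauchy-Schwarz inequality; this explains the assumptions ${\boldsymbol{f}} \in L^{6/5}(\Omega)$ and ${\boldsymbol{F}}, g \in L^2(\Omega)$.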
We define the bilinear form associated to the operator $L$ as $$B[{\boldsymbol{u}},{\boldsymbol{v}}]:=\int_{\Omega} a (\nabla \times {\boldsymbol{u}})\cdot (\nabla \times {\boldsymbol{v}})+ b (\nabla \cdot {\boldsymbol{u}})(\nabla \cdot {\boldsymbol{v}}).$$ Then, in light of the identity , we find that $$\int_\Omega {\lvert\nabla \times {\boldsymbol{u}}\rvert}^2 + {\lvert\nabla \cdot {\boldsymbol{u}}\rvert}^2 = \int_\Omega {\lvert\nabla {\boldsymbol{u}}\rvert}^2,\quad \forall {\boldsymbol{u}} \in H.$$ It is routine to check that the bilinear form $B$ satisfies the hypothesis of the Lax-Milgram Theorem; indeed, by and the above identity, we have $\nu {\left\langle{\boldsymbol{u}},{\boldsymbol{u}}\right\rangle} \leq B[{\boldsymbol{u}},{\boldsymbol{u}}]$ and ${\lvert B[{\boldsymbol{u}},{\boldsymbol{v}}]\rvert} \leq \nu^{-1} {\left\langle{\boldsymbol{u}},{\boldsymbol{u}}\right\rangle}^{1/2} {\left\langle{\boldsymbol{v}},{\boldsymbol{v}}\right\rangle}^{1/2}$ for all ${\boldsymbol{u}}, {\boldsymbol{v}} \in H$. On the other hand, by the inequality , the linear functional $$F({\boldsymbol{v}}):= \int_\Omega {\boldsymbol{f}} \cdot {\boldsymbol{v}} + {\boldsymbol{F}} \cdot \nabla\times {\boldsymbol{v}} + g \nabla \cdot {\boldsymbol{v}}$$ is bounded on $H$. Therefore, by the Lax-Milgram Theorem, there exists a unique element ${\boldsymbol{u}}\in H$ such that $B[{\boldsymbol{u}}, {\boldsymbol{v}}]=F({\boldsymbol{v}})$ for all ${\boldsymbol{v}} \in H$, which shows that ${\boldsymbol{u}}$ is the unique weak solution in $Y^{1,2}_0(\Omega)$ of the problem . [$\blacksquare$]{}

Existence of a unique weak solution of the problem
---------------------------------------------------

We shall assume that ${\boldsymbol{f}} \in H_{6/5}(\Omega)$ and ${\boldsymbol{g}} \in L^2(\Omega)$. First, we consider the case when $h=0$ and construct a weak solution in $Y^{1,2}_0(\Omega)$ of the problem as follows. Let $H$ be the completion of $\mathcal D(\Omega)$ (see Section \[sec:nd\] for its definition) in the norm of $Y^{1,2}(\Omega)$. Then $H \subset Y^{1,2}_0(\Omega)^3$ and, as above, equipped with the inner product it becomes a Hilbert space. We define the bilinear form $B$ on $H$ by $$B[{\boldsymbol{u}},{\boldsymbol{v}}]:=\int_\Omega a(\nabla \times {\boldsymbol{u}})\cdot (\nabla \times {\boldsymbol{v}}).$$ Then the bilinear form $B$ satisfies the hypothesis of the Lax-Milgram Theorem. We also define the linear functional $F$ on $H$ as $$F({\boldsymbol{v}}):= \int_\Omega {\boldsymbol{f}} \cdot {\boldsymbol{v}} + {\boldsymbol{g}} \cdot (\nabla \times {\boldsymbol{v}}).$$ One can easily check that $F$ is bounded on $H$. Therefore, by the Lax-Milgram Theorem, there exists a unique element ${\boldsymbol{u}}\in H$ such that $B[{\boldsymbol{u}}, {\boldsymbol{v}}]=F({\boldsymbol{v}})$ for all ${\boldsymbol{v}} \in H$. In particular, ${\boldsymbol{u}}$ satisfies identities and with $h=0$. Therefore, ${\boldsymbol{u}}$ is a weak solution in $Y^{1,2}_0(\Omega)$ of the problem in the case when $h=0$.

Next, we consider the case when $h\neq 0$. In this case, we assume further that $\Omega$ is a bounded Lipschitz domain so that, in particular, we have $Y^{1,2}_0(\Omega)=W^{1,2}_0(\Omega)$. For $h \in L^2(\Omega)$ such that $\int_\Omega h=0$, let ${\boldsymbol{v}} \in W^{1,2}_0(\Omega)$ be a solution of the divergence problem $$\left\{ \begin{array}{c} \nabla \cdot {\boldsymbol{v}} = h \quad \text{in }\;\Omega,\\ {\boldsymbol{v}}= 0 \quad \text{on }\;\partial \Omega, \end{array} \right.$$ that satisfies the following estimate (see e.g., Galdi [@Galdi §III.3]): $${\lVert\nabla {\boldsymbol{v}}\rVert}_{L^2(\Omega)} \leq N {\lVert h\rVert}_{L^2(\Omega)};\quad N=N(\Omega).$$ Let ${\boldsymbol{w}}$ be a solution in $Y^{1,2}_0(\Omega)$ of the problem with ${\boldsymbol{g}}-a \nabla\times {\boldsymbol{v}}$ in place of ${\boldsymbol{g}}$ and $h=0$, which can be constructed as above.
Then, it is easy to check that ${\boldsymbol{u}}:={\boldsymbol{v}} + {\boldsymbol{w}}$ is a solution in $Y^{1,2}_0(\Omega)=W^{1,2}_0(\Omega)$ of the original problem . Finally, we prove the uniqueness of weak solutions in $Y^{1,2}_0(\Omega)$ of the problem under the assumption that $\Omega$ is a bounded Lipschitz domain. Notice that in that case we have $Y^{1,2}_0(\Omega)=W^{1,2}_0(\Omega)$. Suppose ${\boldsymbol{u}}$ and ${\boldsymbol{v}}$ are two weak solutions in $W^{1,2}_0(\Omega)$ of the problem . Then the difference ${\boldsymbol{w}}={\boldsymbol{u}} -{\boldsymbol{v}}$ is a weak solution in $W^{1,2}_0(\Omega)$ of the problem with ${\boldsymbol{f}}={\boldsymbol{g}}=0$ and $h=0$. By the identity , we find that ${\boldsymbol{w}} \in H$; see, e.g., Galdi [@Galdi §III.4]. Then by the identity , we conclude that ${\boldsymbol{w}}=0$, which proves the uniqueness of weak solutions in $Y^{1,2}_0(\Omega)$ of the problem. [$\blacksquare$]{}

This work was supported by the WCU (World Class University) program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (R31-2008-000-10049-0). Kyungkeun Kang was supported by the Korean Research Foundation Grant (MOEHRD, Basic Research Promotion Fund, KRF-2008-331-C00024) and the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2009-0088692). Seick Kim was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0008224).

Bogovskiǐ, M. E. *Solution of the first boundary value problem for an equation of continuity of an incompressible medium*. (Russian) Dokl. Akad. Nauk SSSR **248** (1979), no. 5, 1037–1040; English translation: Soviet Math. Dokl. **20** (1979), no. 5, 1094–1098 (1980).

Cho, S.; Dong, H.; Kim, S. *On the Green's matrices of strongly parabolic systems of second order*. Indiana Univ. Math. J. **57** (2008), no. 4, 1633–1677.

Cho, S.; Dong, H.; Kim, S. *Global estimates for Green's matrix of second order parabolic systems with application to elliptic systems in two dimensional domains*. arXiv:1007.5429v1 \[math.AP\]

Dahlberg, B. E. J.; Kenig, C. E.; Verchota, G. C. *Boundary value problems for the systems of elastostatics in Lipschitz domains*. Duke Math. J. **57** (1988), no. 3, 795–818.

De Giorgi, E. *Sulla differenziabilità e l'analiticità delle estremali degli integrali multipli regolari*. Mem. Accad. Sci. Torino. Cl. Sci. Fis. Mat. Nat. (3) **3** (1957), 25–43.

De Giorgi, E. *Un esempio di estremali discontinue per un problema variazionale di tipo ellittico*. Boll. Un. Mat. Ital. (4) **1** (1968), 135–137.

Dong, H.; Kim, S. *Green's matrices of second order elliptic systems with measurable coefficients in two dimensional domains*. Trans. Amer. Math. Soc. **361** (2009), no. 6, 3303–3323.

Galdi, G. P. *An introduction to the mathematical theory of the Navier-Stokes equations*. Vol. I. Linearized steady problems. Springer-Verlag, New York, 1994.

Giaquinta, M. *Multiple integrals in the calculus of variations and nonlinear elliptic systems*. Princeton University Press, Princeton, NJ, 1983.

Giaquinta, M.; Hong, M.-C. *Partial regularity of minimizers of a functional involving forms and maps*. NoDEA Nonlinear Differential Equations Appl. **11** (2004), no. 4, 469–490.

Gilbarg, D.; Trudinger, N. S. *Elliptic partial differential equations of second order*. Reprint of the 1998 ed. Springer-Verlag, Berlin, 2001.
Hofmann, S.; Kim, S. *The Green function estimates for strongly elliptic systems of second order*. Manuscripta Math. **124** (2007), no. 2, 139–172.

Jerison, D.; Kenig, C. E. *The inhomogeneous Dirichlet problem in Lipschitz domains*. J. Funct. Anal. **130** (1995), no. 1, 161–219.

Kang, K.; Kim, S. *On the Hölder continuity of solutions of a certain system related to Maxwell's equations*. SIAM J. Math. Anal. **34** (2002), no. 1, 87–100 (electronic).

Kang, K.; Kim, S. *Erratum: On the Hölder Continuity of Solutions of a Certain System Related to Maxwell's Equations*. SIAM J. Math. Anal. **36** (2005), no. 5, 1704–1705 (electronic).

Kang, K.; Kim, S. *Global pointwise estimates for Green's matrix of second order elliptic systems*. J. Differential Equations **249** (2010), no. 11, 2643–2662.

Kang, K.; Kim, S.; Minut, A. *On the regularity of solutions to a parabolic system related to Maxwell's equations*. J. Math. Anal. Appl. **299** (2004), no. 1, 89–99.

Kim, S. *Gaussian estimates for fundamental solutions of second order parabolic systems with time-independent coefficients*. Trans. Amer. Math. Soc. **360** (2008), no. 11, 6031–6043.

Ladyzhenskaya, O. A.; Solonnikov, V. A.; Ural'tseva, N. N. *Linear and quasilinear equations of parabolic type*. American Mathematical Society, Providence, RI, 1967.

Ladyzhenskaya, O. A.; Ural'tseva, N. N. *Linear and quasilinear elliptic equations*. Academic Press, New York-London, 1968.

Landau, L. D.; Lifshitz, E. M.; Pitaevskii, L. P. *Electrodynamics of Continuous Media*, 2nd Ed. Butterworth-Heinemann, Oxford, U.K., 1984.

Lieberman, G. M. *Second order parabolic differential equations*. World Scientific Publishing Co., Inc., River Edge, NJ, 1996.

Malý, J.; Ziemer, W. P. *Fine regularity of solutions of elliptic partial differential equations*. American Mathematical Society, Providence, RI, 1997.

Yin, H.-M. *Regularity of solutions to Maxwell's system in quasi-stationary electromagnetic fields and applications*. Comm. Partial Differential Equations **22** (1997), no. 7-8, 1029–1053.

Yin, H.-M. *Optimal regularity of solution to a degenerate elliptic system arising in electromagnetic fields*. Commun. Pure Appl. Anal. **1** (2002), no. 1, 127–134.

Yin, H.-M. *Regularity of weak solution to Maxwell's equations and applications to microwave heating*. J. Differential Equations **200** (2004), no. 1, 137–161.

[^1]: See Ladyzhenskaya et al. [@LSU §III.1].
The Leishmania spp. protozoa have a profound effect on the host cell that they invade. These parasites reside intracellularly, usually in macrophages, a cell type with the capacity to kill intracellular microbes. Rather than succumb, Leishmania paralyze the microbicidal pathways of the host cell, changing the macrophage from a lethal cell to an intracellular safe haven which allows the parasite to survive, replicate and ultimately spread to neighboring cells. The mechanism(s) through which the parasite paralyzes the host microbicidal function is incompletely understood. It has recently come to light that most eukaryotic organisms utilize short noncoding RNA sequences to globally regulate expression of a wide variety of genes. Indeed, short RNA sequences of 18–30 bp in length, called microRNAs, are critical for regulating expression of an estimated 30% of the human genome. The hypothesis underlying this proposal is that microRNAs are the upstream trigger(s) determining which pattern of macrophage activation will occur after Leishmania infection. We will address the following questions. (1) What microRNAs does the parasite usually induce or suppress during infection of macrophages? (2) Which microRNAs are induced in response to stimuli that activate macrophages toward different polar phenotypes? (3) Is there overlap between the microRNAs discovered in Aims 1 and 2, and what is the effect of experimental manipulation of these microRNAs on Leishmania-infected macrophages? Specific aims of the proposal are:

1. To perform a global profiling of changes in microRNA expression induced in response to phagocytosis of L. chagasi by human macrophages.

2. To use a similar profiling approach to determine which microRNAs are induced in response to macrophage activation toward different polarized phenotypes. Of primary interest will be the microRNA changes that occur during M1 (classical) activation, with reciprocal changes in other forms of macrophage activation, since classically activated macrophages can kill intracellular Leishmania.

3. To correlate the results of Aims 1 and 2, and to selectively either overexpress or suppress macrophage expression of selected microRNAs that may change the pattern of macrophage activation or intracellular parasite growth.
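To give a concrete flavor of the global profiling analysis proposed in Aim 1, the following is a minimal, purely illustrative Python sketch of a per-microRNA differential expression comparison between infected and uninfected macrophages. It runs on synthetic data; every identifier, dimension, and threshold here is a hypothetical placeholder rather than part of the proposal, and a real analysis would start from normalized microarray or small-RNA sequencing measurements.

```python
# Purely illustrative sketch: synthetic data stand in for real microRNA
# expression profiles of Leishmania-infected vs. uninfected macrophages.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_mirnas, n_samples = 200, 6                 # hypothetical dimensions
mirna_ids = [f"miR-{i:03d}" for i in range(n_mirnas)]

# Synthetic log2 expression matrices: rows = microRNAs, columns = samples.
uninfected = rng.normal(8.0, 1.0, size=(n_mirnas, n_samples))
infected = rng.normal(8.0, 1.0, size=(n_mirnas, n_samples))
infected[:10] += 2.0                         # pretend the first 10 are induced

# Per-microRNA Welch's t-test between conditions.
t_stat, p_val = stats.ttest_ind(infected, uninfected, axis=1, equal_var=False)

# Benjamini-Hochberg adjustment to control the false discovery rate.
m = len(p_val)
order = np.argsort(p_val)
bh = p_val[order] * m / np.arange(1, m + 1)
q_sorted = np.minimum.accumulate(bh[::-1])[::-1]
q_val = np.empty_like(p_val)
q_val[order] = q_sorted

# Report microRNAs that look differentially expressed.
log2_fc = infected.mean(axis=1) - uninfected.mean(axis=1)
hits = np.where((q_val < 0.05) & (np.abs(log2_fc) > 1.0))[0]
for i in hits:
    print(f"{mirna_ids[i]}: log2FC = {log2_fc[i]:+.2f}, q = {q_val[i]:.2e}")
```

The false discovery rate step reflects the usual need to correct for multiple comparisons when hundreds of microRNAs are tested at once.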
Castle Hill Electrician Castle Hill electrician If you are looking for a licensed and renowned Castle Hill electrician, we have the right team for you. We look after lighting installation, replacement of an appliance or just any other electrical service in your home or workplace. We are an established electrician Castle Hill team that is highly qualified and reliable in all its services. Sydney Local Electricians have a Castle Hill electrician team for all your electrical needs, regardless of whether it's a simple repair or a full wiring project. Sometimes it's hard to find the right electrician Castle Hill team to get your work done. Some electricians aren't reliable or experienced, and they may make mistakes in your house or office. Electricity and electrical faults are extremely dangerous if not handled properly. A simple electrical fault can cause a lot of damage to your home or business. Get the best service available to ensure the safety of you, your loved ones or your staff. Emergency Castle Hill electrician – We prioritise emergencies We give priority to all your electrical emergencies. Sydney Local Electricians have a specialised emergency Castle Hill electrician team trained to handle emergency situations with ease. We believe that the quick arrival of your electrician Castle Hill team at an electrical emergency could save you money, because it reduces the likelihood of further damage. That is why our emergency Castle Hill electrician team has vehicles fully equipped with all necessary equipment and tools, ready to dispatch at any time. Our emergency Castle Hill electrician team is available 24 hours a day and is ready to tackle any problem that comes its way during an emergency. Don't hesitate to call us on 02 9746 2435. Our team will arrive at your home or office in no time because we want to ensure your safety. Residential electrician Castle Hill It is always better to avoid emergency situations than to run for help during one. That's why we advise you to choose Sydney Local Electricians for all your home wiring and installation services. Unlike many other electricians, we guarantee the work of our electrician Castle Hill team and can help you avoid electrical risks in the future. Our residential Castle Hill electrician team takes all safety precautions during residential electrical projects and is well organised. We provide a full range of residential electrical services. This includes installation of fans, lighting, hot water systems and smoke alarms. We can also do the bigger jobs such as switchboard replacement and complete rewiring. Commercial Castle Hill electrician If you are a business owner, you understand how important it is to have a reliable and qualified commercial electrician, because it's your responsibility to ensure the safety of your staff. Our commercial Castle Hill electrician team are available for wiring, rewiring, light or appliance installations and repairs. We also do routine electrical maintenance to provide maximum safety. If you haven't yet called Sydney Local Electricians, call today on 02 9746 2435 to experience the best Castle Hill electrician.
Introduction {#sec1}
============

Imagine getting stuck in a massive traffic jam while driving to work. Anticipating being late, you feel anger rising. Wanting to regulate your anger, you reappraise the situation by thinking that traffic jams often look worse than they are. While monitoring how anger regulation is faring, you realize that current reappraisal attempts are not working well. This poses a dilemma of whether to continue reappraising or try a different course of action. You decide to switch to a different option and attempt to distract yourself by listening to the radio. Although still feeling some anger, you notice a sense of relief. In the situation described above, several factors can determine monitoring regulatory decisions to maintain a particular strategy or to switch from it to a different one, as well as their affective consequences. These factors may include elements of the emotional situation, such as the intensity of anger that is activated by the degree of traffic congestion, and the regulatory strategies one is monitoring, such as distraction *vs* reappraisal. Although monitoring appears fundamental in emotion regulation, existing studies remain scarce and indirect with regard to factors that determine monitoring decisions and their affective consequences. Recent conceptual accounts suggest that emotion regulation is composed of several interacting and iterating valuation systems of key regulatory stages, in which emotion regulation-related decisions are made (e.g. [@ref5]; [@ref10]; [@ref34]). Identification involves making the initial decision whether to regulate an emotion or not. If a decision to regulate is reached, a selection regulatory stage involves choosing one of several available regulatory strategies. Following the selection of a particular strategy, it is executed during an implementation stage. Only then does monitoring begin, involving the decision whether and how to adjust an active implemented strategy across subsequent iterations, in order to maximize adaptive outcomes. The conceptual extended process model of emotion regulation ([@ref10]) specifies three monitoring decision options: (i) Maintenance: a decision to continue implementing a currently active strategy, which conceptually involves subsequent iterations where regulation is positively valued during identification, and the specific active strategy is positively valued during selection. (ii) Switching: a decision to alter a currently active implemented strategy, which conceptually involves subsequent iterations where regulation is positively valued during identification, but an alternative regulatory strategy is positively valued during selection. (iii) Stopping: a decision to cease regulation altogether, which conceptually involves a subsequent iteration where regulation is negatively valued during identification. This model further argues that adequate monitoring decisions lead to adaptive outcomes. Moving beyond conceptual grounds, existing empirical findings on monitoring remain indirect. One study ([@ref23]) empirically supported a conceptual model ([@ref14]) by providing neural correlates of regulatory maintenance across time. While important, these studies do not describe the switching and stopping monitoring decision options or how individuals decide between options. Extending the scope beyond maintenance, one type of indirect study (e.g. [@ref46]; [@ref25]) examined the affective consequences of forced switching (i.e.
instructing participants to implement two different strategies consecutively) and maintenance (i.e. instructing participants to implement the same strategy twice). While important, because participants could not freely decide between maintaining and switching, factors that influence monitoring decisions remain unexplored. A second type of correlational study indirectly examined individuals' decisions to maintain or switch from an initial implemented strategy. One study ([@ref16]) found that individuals who reported more switching from an inefficient implemented strategy also reported decreased levels of psychopathology. Recently, utilizing experience sampling, another study ([@ref15]) demonstrated that switching from regulatory strategies that were inefficient in reducing negative affect subsequently led to improved affect. Although important, these correlational studies did not manipulate factors influencing monitoring decisions and therefore cannot support causal conclusions. Conceptual advances suggest potential factors that influence monitoring decisions ([@ref5]). This framework focuses on individual differences in the sensitivity to the internal (e.g. subjective and physiological emotional states) and external (e.g. contextual or social cues) environment, which may influence the decision between the three core monitoring decision options (maintain, switch, stop). Empirical support for this framework comes from a single study that examined the influence of the internal environment on the decision to maintain *vs* switch from an implemented strategy and its consequences for general well-being ([@ref3]). Specifically, participants were asked to implement distraction or reappraisal while being physiologically monitored and were then given a choice between maintenance and switching. Results showed that increased internal physiological intensity while implementing reappraisal, which denotes inefficient regulation, was associated with increased switching to distraction. Furthermore, participants who showed high correspondence between increased internal physiological intensity during reappraisal implementation and switching from reappraisal reported higher well-being. While valuable, that study ([@ref3]) did not manipulate participants' internal intensity and therefore could not evaluate the causal influence of this fundamental factor on monitoring decisions. Furthermore, focusing on the role of the internal environment leaves the important role of the external environment unexplored. Last, the evaluation of adaptive consequences of monitoring decisions was based on participants' self-reported well-being rather than on immediate behavioral or neural affective consequences. Despite many advantages, self-reports represent the endpoint, rather than the online underlying mechanisms, of emotional modulation and are susceptible to reporting biases. Overcoming these limitations, the present study provided two important contributions to the scant literature on regulatory monitoring. The first goal was to provide causal (rather than correlational) evidence for the influence of two core interconnected factors on monitoring regulatory decisions. The second goal was to provide evidence for neuro-affective (rather than self-reported) consequences of monitoring decisions. We utilized our conceptual framework ([@ref34]) that focuses on the combination of: (i) externally generated emotional intensity (high, low) and (ii) regulatory strategy (distraction, reappraisal).
Our conceptual framework has successfully explained the role of these two core factors in two other regulatory stages that precede the post-implementation/monitoring stage, namely, the implementation and selection stages. In the implementation regulatory stage, our framework ([@ref35]) and supporting findings (e.g. [@ref37]; [@ref31]) indicated that in high-intensity situations, early attentional disengagement via distraction leads to stronger emotional modulation, relative to reappraisal, which involves engaging with emotional information prior to a late semantic meaning reinterpretation. By contrast, in low-intensity situations, distraction and reappraisal are equally effective, but only reappraisal, which involves making sense of emotional events, may provide long-term benefits ([@ref45]; [@ref43]). In the selection regulatory stage, our model ([@ref36]) and supporting evidence (e.g. [@ref39]; [@ref32]) repeatedly found that in high intensity, most individuals prefer distraction, which results in enhanced short-term emotional modulation, over reappraisal, and in low intensity, most individuals prefer reappraisal, which may provide long-term benefits, over distraction. These regulatory preferences appear very robust (Cohen's *d* = \~2, with 90% of individuals showing these patterns; see [@ref34] for review). Drawing from these lines of research, our first research question was whether external emotional intensity (high, low) and regulatory strategies (distraction, reappraisal) influence regulatory decisions in a monitoring stage. To that end, we created a novel experimental paradigm that manipulates these two independent variables. In each trial, participants were initially instructed to implement distraction or reappraisal when facing images of low or high intensity (henceforth 'initial implementation') and were then asked to choose whether they wished to maintain the initial implemented strategy or switch from it (henceforth 'monitoring choice'). Participants then implemented their chosen option (henceforth 'post-choice implementation'). Consistent with our conceptual framework and previous findings ([@ref34]), our first hypothesis was that initial implementation that is incongruent with averaged regulatory preferences obtained in prior studies (i.e. reappraisal in high intensity, distraction in low intensity) would result in increased switching frequency, relative to initial implementation that is congruent with previously established averaged regulatory preferences (i.e. distraction in high intensity, reappraisal in low intensity). Our second research question concerned the neuro-affective consequences of monitoring decisions. To answer it, we utilized event-related potentials (ERPs), which have been extensively used in the study of emotion regulation (e.g. [@ref12]). We focused on the late positive potential (LPP), a centro-parietal electro-cortical component that reflects enhanced processing of emotionally arousing information. Attenuation of this component reflects downregulation success ([@ref8]) with good internal consistency ([@ref21]). We first wished to replicate prior neural findings ([@ref31]) by demonstrating that in high (but not low) intensity, initial distraction implementation would result in greater LPP modulation, relative to reappraisal. Importantly, this study was the first to examine the neuro-affective consequences (LPP modulation) of monitoring regulatory decisions.
We hypothesized that exclusively in high intensity, where neural differences between distraction and reappraisal are evident ([@ref31]), maintaining distraction (relative to switching to reappraisal) and switching to distraction (relative to maintaining reappraisal) would each be associated with stronger LPP modulation.

Method {#sec2}
======

Below we report how we determined our sample size, all data exclusions, manipulations and measures in the study.

Ethical approval {#sec3}
----------------

This study was approved by the institutional review board of Tel Aviv University, and participants provided informed consent prior to inclusion in the study.

Participants {#sec4}
------------

Thirty native Hebrew-speaking[^1] subjects participated. Sample size was pre-determined based on an a priori rule of collecting data from 30 participants for ERP studies conducted in our lab (e.g. [@ref31]; [@ref30]; [@ref33]). Two participants were excluded prior to data analyses. One participant was not a native Hebrew speaker, and another participant did not comply with experimental instructions (see below). The main results reported below remain unchanged when including these two participants (all *P*s \< 0.02). The final sample consisted of 28 participants (8 men, mean age = 23.27 years, s.d. = 2.08).

Stimuli {#sec5}
-------

One hundred eighty negative pictures were chosen from previously validated pictorial datasets[^2] (IAPS: [@ref17]; EmoPicS: [@ref44]). High-intensity pictures (*n* = 90, *M*~arousal~ = 6.45, *M*~valence~ = 2.04) significantly differed in valence and arousal normative ratings from low-intensity pictures (*n* = 90, *M*~arousal~ = 4.73, *M*~valence~ = 3.38) (both *F*s \> 423, *P*s \< 0.001; c.f. [@ref38]). Picture contents were matched for high- and low-intensity categories when possible. Importantly, analyses that decomposed interactions involving emotional intensity compared different regulation instructions within each intensity separately. Therefore, possible content differences between intensities have no bearing on the results.

Procedure {#sec6}
---------

Following initial EEG setup, participants learned (four trials) and practiced (eight trials) distraction and reappraisal implementation (c.f., [@ref39]). Adherence to regulatory instructions involved asking participants to talk out loud throughout implementation. Distraction instructions involved disengaging attention from emotional pictures by producing unrelated neutral thoughts (e.g. visualizing geometric shapes or daily chores). Reappraisal instructions involved engaging with the processing of emotional pictures, but reinterpreting their negative meaning (e.g. by thinking about less negative aspects of situations or that situations will improve over time) (c.f. [@ref39]). Participants were asked not to form reality challenge reappraisals (i.e. interpret emotional events as fake; [@ref26]). The task consisted of 180 trials (divided into six equally long blocks, separated by breaks). Pictures of low and high intensity were presented in a random order, with no more than two consecutive trials of the same intensity, and were randomly assigned to reappraisal or distraction. To ensure adherence to regulatory instructions, participants provided five oral examples of each strategy during experimental breaks. Based on a priori exclusion criteria in our lab, participants who made more than 50% errors (*n* = 1) were excluded from data analyses.
Average percentage of errors was minimal (*M* = 0.05%, s.d. = 0.08). Each trial (see [Figure 1](#f1){ref-type="fig"}) began with a fixation cross (jittered between 2100 and 2900 ms) followed by a 2500 ms cue screen that signaled the intensity of the upcoming picture ('Intense' or 'Mild') and the initial strategy implementation ('Distraction' or 'Reappraisal') (c.f. [@ref31]), followed by a jittered 400--800 ms black screen. The picture was then presented (3000 ms), during which participants implemented the required strategy ('initial implementation'). Then, a choice screen was presented, where participants were asked to choose whether they wished to maintain the initial implemented strategy (i.e. choosing distraction following initial distraction implementation or choosing reappraisal following initial reappraisal implementation) or switch to the other regulatory option (i.e. choosing distraction following initial reappraisal implementation or choosing reappraisal following initial distraction implementation) ('monitoring choice'). Then, a 2000 ms cue screen presented both the chosen regulatory strategy and the intensity of the picture that was previously shown, followed by a jittered 400--800 ms black screen. The same picture was then presented again for 2000 ms, during which participants implemented their chosen strategy ('post-choice implementation'). The post-implementation window was 1000 ms shorter than the initial implementation window to balance adequate duration to observe LPP effects with maintaining a 5-s picture presentation per trial (c.f., [@ref18]).

[Figure 1: schematic of the trial structure; image not included in this version.]

To remind participants that monitoring decisions were aimed at reducing negative experience, the offset of 10% of pictures was followed by a Likert rating scale in which participants reported their negative experience (1 = 'not negative at all', 9 = 'extremely negative'). For a complete explanation and analysis of the partial self-report data, see [Supplementary Materials](#sup1){ref-type="supplementary-material"}, page 1.

Electrophysiological recordings and data reduction {#sec7}
--------------------------------------------------

EEG recordings used a Biosemi ActiveTwo recording system (Biosemi B. V., Amsterdam, The Netherlands), from 32 electrode sites,[^3] and one electrode on each of the left and right mastoids. The horizontal electrooculogram (EOG) was recorded from two electrodes placed 1 cm to the left and right of the external canthi, and vertical EOG was recorded from an electrode placed beneath the left eye. The voltage from each electrode site was referenced online with respect to Common Mode Sense/Driven Right Leg electrodes. EEG data were sampled at 256 Hz. Offline signal processing was performed with the EEGLAB and ERPLAB Toolboxes ([@ref7]; [@ref19]). Data from all electrodes were re-referenced to the average activity of the left and right mastoids. Continuous EEG data were then band-pass filtered (cutoffs: 0.05--20 Hz; 12 dB/oct rolloff). Eye movement artifacts were removed using independent component analysis ([@ref7]; [@ref20]). For the initial implementation LPP analysis, EEG was epoched into 3200 ms segments, starting 200 ms (baseline) before the picture appeared on the screen and lasting 3000 ms (end of the initial implementation). Similarly, for the post-choice implementation LPP analysis, EEG was epoched into 2200 ms segments, starting 200 ms (baseline) before the picture re-appeared on the screen and lasting 2000 ms (end of post-choice implementation).
All trials containing activity exceeding 80 μV within 200 ms were excluded. The initial implementation LPP was defined as the mean amplitude between 300 ms (when the LPP becomes evident; [@ref11]) and 3000 ms (end of the initial implementation stage). Similarly, the post-choice implementation LPP was defined as the mean amplitude between 300 and 2000 ms (end of the post-choice implementation stage). The LPP was measured as the average activity over centro-parietal electrode sites, where the LPP is typically maximal (CPz--CP1--CP2; c.f., [@ref24]; [@ref43]).

Statistical analyses {#sec8}
--------------------

Preliminary initial implementation analyses include data that precede monitoring decisions, where all analyzed factors are experimentally manipulated, resulting in the total number of trials (*n* = 180) equally divided across four conditions (*n* = 45 per condition). Slight variation in trial numbers across conditions is possible due to differential ERP trial rejection. However, rejections were minimal \[valid trials: low intensity/distraction: *M* = 43.96, s.d. = 2.07; low intensity/reappraisal: *M* = 44.21, s.d. = 1.52; high intensity/distraction: *M* = 44.10, s.d. = 1.73; high intensity/reappraisal: *M* = 44.18, s.d. = 1.48\]. Trial number did not significantly differ between conditions (all *F*s \< 1). Accordingly, to replicate prior implementation findings, preliminary initial implementation analyses employed a 2 × 2 analysis of variance (ANOVA) with emotional intensity (high, low) and initial implementation (distraction, reappraisal) as repeated-measures factors and LPP as a dependent variable.[^4] Trial numbers across conditions that constitute the first research question (behavioral regulatory monitoring decisions) are matched by experimental design. Accordingly, we employed a 2 × 2 ANOVA with emotional intensity (high, low) and initial implementation (distraction, reappraisal) as repeated-measures factors and switching frequency as a dependent variable. To examine the second research question (short-term neural consequences of monitoring decisions), we first created a neural consequence LPP outcome variable. For each trial, we subtracted the post-choice implementation LPP amplitude from the respective initial implementation/pre-choice LPP amplitude in that trial, with higher scores indicating stronger LPP attenuation (i.e. higher regulatory success). Note that each trial consists of a pre-choice implementation phase and a post-choice implementation phase, and thus the subtraction that constitutes the dependent variable occurs within each individual trial. The neural consequence LPP variable was created to adjust for initial implementation LPP differences between distraction and reappraisal (see results below and c.f. [@ref32]). Notably, the main results reported below remain unchanged when re-conducting the analysis on the post-choice implementation LPP, without performing these subtractions (i.e. the predicted Emotional Intensity × Initial Implementation × Monitoring Choice interaction remains significant \[*b* = 3.82, SE = 1.72, 95% CI (0.45, 7.20), *F*(1, 4254) = 4.94, *P* = 0.026\]). Because the neural consequence measure takes into account LPPs that are measured following monitoring decisions, trial numbers across conditions cannot be experimentally controlled (see [Table 1](#TB1){ref-type="table"} for all values). This element potentially biases conventional ANOVAs.
Accordingly, the analyses of the second research question were performed on individual trials (rather than on condition averages across trials) of the neural consequence LPP, using linear mixed models (LMMs, using the PROC MIXED procedure in SAS version 9.4 for Windows; [@ref9]; [@ref4]). LMM is a widely accepted method that accounts for unequal trial numbers in experimental designs, by treating between-subject variance in the outcome measure as a random effect, in addition to modeling the within-subjects effects (e.g. [@ref6]; [@ref27]; [@ref41]). LMMs also make use of all available data, which protects against reduced power and unreliable averaged estimates in cells with a considerable amount of discarded observations.

###### Table 1. Trial means, standard deviations, max and min values for each experimental condition in the post-choice implementation analyses {#TB1}

| Statistic | High: Dist→Dist | High: Reap→Dist | High: Dist→Reap | High: Reap→Reap | Low: Dist→Dist | Low: Reap→Dist | Low: Dist→Reap | Low: Reap→Reap |
|-----------|-----------------|-----------------|-----------------|-----------------|----------------|----------------|----------------|----------------|
| Average   | 34.57           | 20.43           | 10.43           | 24.54           | 27.07          | 8.50           | 17.89          | 36.46          |
| SD        | 7.12            | 8.95            | 7.12            | 8.92            | 9.53           | 6.07           | 9.49           | 6.10           |
| Max       | 45              | 42              | 25              | 41              | 44             | 29             | 36             | 44             |
| Min       | 20              | 4               | 0               | 3               | 9              | 1              | 1              | 16             |

Note: Repeated-measures factors include emotional intensity (high, low), initial regulatory strategy (distraction, reappraisal) and monitoring regulatory choice (maintain, switch), and the dependent variable is 'neural consequence' LPP.

Note: Dist→Dist: maintaining distraction following initial distraction implementation; Reap→Dist: switching to distraction following initial reappraisal implementation; Dist→Reap: switching to reappraisal following initial distraction implementation; Reap→Reap: maintaining reappraisal following initial reappraisal implementation.

Note: The LPP is considered a large robust component, and even in our smallest cell the average exceeds the recommended number of trials required to produce a reliable LPP ([@ref21]).

Our LMM modeling approach involved balancing accuracy with parsimony by starting with a maximum random effect structure, followed by separate steps that decrease in complexity (first examining the random three-way interaction, then random two-way interactions, then random main effects), involving the removal of random effects not supported by the data. Each step applies multiple iterations and likelihood evaluations to achieve convergence of the final model estimates. At the end of this convergence process, there are cases when the final model estimates a random effect as one of its boundary constraints, such as exactly zero. We adopted a conservative approach of only removing random effects that explained exactly zero variance (c.f. [@ref1]; see [Supplementary Table S3](#sup1){ref-type="supplementary-material"} for all non-zero random effects). The Kenward--Roger approximation for degrees of freedom, which entails a Satterthwaite approximation ([@ref28]), was computed as recommended for unbalanced designs ([@ref29]). We removed correlations between random effects to achieve model convergence. The initial maximum random effect structure converged (*−2\*LL* = 42705.9, *AIC* = 42731.9). In the first model complexity reduction step, the random effect of the Emotional Intensity × Initial Implementation × Monitoring Choice three-way interaction was estimated to be zero. Therefore, in the next model complexity reduction step, this random interaction was removed.
This new model converged with unchanged fit statistics (*−2\*LL* = 42705.9, *AIC* = 42731.9) while also revealing that the two-way interaction of Emotional Intensity × Initial Implementation was zero. Therefore, in the next model complexity reduction step, this random interaction was removed. This new model converged with unchanged fit statistics as before (*−2\*LL* = 42705.9, *AIC* = 42731.9) and revealed that the main effect of monitoring choice was zero. Accordingly, the final model consisted of all fixed effects together with the following random effect structure: intercept, Emotional Intensity, Initial Implementation, Initial Implementation × Monitoring Choice and Emotional Intensity × Monitoring Choice. This final model converged (*−2\*LL* = 42705.9, *AIC* = 42731.9) and had comparable model fit to a similar model that allowed random effects to correlate (*−2\*LL* = 42693.5, *AIC* = 42735.5; *Δ*−*2\*LL* = 12.4, *df* = 10, *P* = 0.26).

Results {#sec9}
=======

Replicating prior neural findings during initial implementation {#sec10}
---------------------------------------------------------------

We first wished to replicate prior findings (e.g. [@ref31]) demonstrating that in high (but not low) emotional intensity, initial implementation of distraction would result in greater LPP modulation, relative to reappraisal. The ANOVA yielded a predicted Emotional Intensity × Initial Implementation interaction that was marginally significant \[*F*(1, 27) = 3.36, *P* = 0.07, η~p~^2^ = 0.11; [Figure 2](#f2){ref-type="fig"}, [Supplementary Table S1](#sup1){ref-type="supplementary-material"} for all effects\]. Follow-up analyses supported predictions in showing that in high intensity \[*F*(1, 27) = 13.43, *P* = 0.001, η~p~^2^ = 0.33\], distraction implementation resulted in decreased LPPs (*M* = 4.07, SE = 1.08), relative to reappraisal implementation (*M* = 6.50, SE = 0.97). As expected, in low intensity, there were no differences \[*F*(1, 27) \< 1, *P* = 0.45, η~p~^2^ = 0.02\] in LPPs between distraction (*M* = 0.93, SE = 0.92) and reappraisal implementation (*M* = 1.58, SE = 0.86).

[Figure 2: initial implementation LPPs by emotional intensity and regulatory strategy; image not included in this version.]

Regulatory preferences predict regulatory choices to switch *vs* maintain an implemented strategy during post-implementation monitoring {#sec11}
---------------------------------------------------------------------------------------------------------------------------------------

Our first research question examined whether initial implementation that is incongruent with regulatory preferences (i.e. distraction in low intensity, reappraisal in high intensity) results in increased switching frequency, relative to initial implementation that is congruent with regulatory preferences (i.e. reappraisal in low intensity, distraction in high intensity). Prior to hypothesis testing, we confirmed previously established regulatory preferences using ANOVAs, finding that in a monitoring context, an intensity increase from low to high was associated with increased preference for distraction over reappraisal (i.e. increased preference to maintain distraction or switch to distraction from reappraisal) \[*t*(27) = −7.81, *P* \< 0.001, *d* = 1.47\]. Confirming our main prediction, we found a significant Emotional Intensity × Initial Implementation interaction \[*F*(1, 27) = 60.99, *P* \< 0.001, η~p~^2^ = 0.69; see [Figure 3](#f3){ref-type="fig"} and [Supplementary Table S2](#sup1){ref-type="supplementary-material"} for all effects\].
Follow-up analyses showed that in high intensity, initial reappraisal implementation (the non-preferred strategy) resulted in higher switching frequency (*M* = 45.41%, SE = 0.04), compared to initial distraction implementation (the preferred strategy, *M* = 23.22%, SE = 0.03) \[*F*(1, 27) = 16.14, *P* \< 0.001, η~p~^2^ = 0.37\]. A mirrored pattern emerged in low intensity, where initial distraction implementation (the non-preferred strategy) resulted in higher switching frequency (*M* = 39.62%, SE = 0.40), compared to initial reappraisal implementation (the preferred strategy, *M* = 18.87%, SE = 0.02) \[*F*(1, 27) = 19.68, *P* \< 0.001, η~p~^2^ = 0.42\].

[Figure 3: switching frequencies by emotional intensity and initial implemented strategy; image not included in this version.]

Neuro-affective consequences of monitoring regulatory decisions {#sec12}
---------------------------------------------------------------

Our second research question examined the short-term neural consequences (LPP modulation) of monitoring regulatory choices. We expected to show that in high intensity, where distraction is more effective than reappraisal ([@ref37]; [@ref31], [@ref32]), choosing to maintain (or switch to) distraction would result in greater LPP modulation, compared with choosing to maintain (or switch to) reappraisal. By contrast, in low intensity, where distraction and reappraisal are equally effective, choosing one strategy over the other was not expected to result in differential LPPs. Results using LMMs showed the expected Emotional Intensity × Initial Implementation × Monitoring Choice interaction \[*b* = 5.99, SE = 2.50, 95% CI (1.09, 10.89), *F*(1, 3653) = 5.75, *P* = 0.017; see [Figure 4](#f4){ref-type="fig"} for LPP waveforms and LPP topographical distribution and [Supplementary Table S3](#sup1){ref-type="supplementary-material"} for all effects\].[^5] Follow-up analyses explored lower-order effects in the context of a full model (using 'estimate' statements in the SAS syntax). Despite having clear a priori predictions, when decomposing this three-way interaction, we corrected for multiple comparisons by applying the well-established Benjamini--Hochberg procedure that adjusts the criterion for significance by controlling for the false discovery rate ([@ref2]).

[Figure 4: LPP waveforms and LPP topographical distribution; image not included in this version.]

Decomposing the three-way interaction revealed that, consistent with our prediction, in high intensity \[*b* = 8.22, SE = 1.84, 95% CI (4.60, 11.83), *t*(246) = 4.48, *P* \< 0.001, α~adjusted~ = 0.013\], but not in low intensity \[*b* = 2.23, SE = 1.88, 95% CI (−1.47, 5.92), *t*(306) = 1.19, *P* = 0.237, α~adjusted~ = 0.050\], there was an Initial Implementation × Monitoring Choice interaction, such that choosing to maintain (or switch to) distraction, relative to maintaining (or switching to) reappraisal, resulted in larger LPP modulation. Specifically, in high intensity, switching to distraction following initial reappraisal implementation resulted in substantially stronger LPP modulation (*M* = 7.03, SE = 1.10), relative to maintaining reappraisal (*M* = 2.62, SE = 0.94) \[*b* = 4.41, SE = 1.20, 95% CI (2.02, 6.80), *t*(93.3) = 3.66, *P* \< 0.001, α~adjusted~ = 0.025\]. Complementarily, in high intensity, maintaining distraction following initial distraction implementation resulted in stronger LPP modulation (*M* = 2.95, SE = 0.86), relative to switching to reappraisal (*M* = −0.85, SE = 1.49) \[*b* = 3.81, SE = 1.51, 95% CI (0.80, 6.81), *t*(74.9) = 2.52, *P* = 0.014, α~adjusted~ = 0.038\].
A similar pattern of findings emerged when using an alternative LMM with a more minimal random effect structure or a repeated-measures ANOVA ([Supplementary Material, pages 4--7](#sup1){ref-type="supplementary-material"}).

Discussion {#sec13}
==========

While a monitoring regulatory stage is considered an integral part of emotion regulation and successful functioning, empirical evidence remains scarce. Our first research question was whether previously established regulatory preferences (i.e. the combination of emotional intensity and initial regulatory strategy) influence the decision to maintain *vs* switch from the implemented strategy. Supporting predictions, initial implementation that is incongruent with regulatory preferences (i.e. distraction in low intensity, reappraisal in high intensity) resulted in increased switching frequency, relative to initial implementation that is congruent with regulatory preferences (i.e. reappraisal in low intensity, distraction in high intensity). Our second research question concerned the neuro-affective consequences (LPP modulation) of monitoring regulatory decisions. We predicted and found that in high (but not low) emotional intensity, where distraction is more effective than reappraisal, choosing distraction (either by maintaining or switching to distraction) resulted in adaptive neural consequences (i.e. greater LPP modulation). Considering our first research question, results extend our conceptual account and selection findings ([@ref34]) by elucidating the role of regulatory preferences for the unexplored monitoring regulatory stage. In each emotional intensity, initial implementation that is incongruent with regulatory preferences led to increased switching, relative to initial implementation that is congruent with regulatory preferences. Although monitoring decisions were strongly determined by regulatory preferences, we also observed considerable inertial effects. Even in cases where the initial implemented strategy was non-preferred and less effective (reappraisal in high intensity), it was nonetheless maintained in a notable number of trials. It seems that an initial implemented strategy may function as a strong default. This notion is further supported by studies demonstrating that presenting individuals with a 'default' option leads them to disproportionately stick with it (e.g. [@ref13]; [@ref42]). Considering our second research question, we evaluated adaptive consequences of monitoring decisions using neural measures, transcending prior self-report findings ([@ref3]). These results extend prior neural findings ([@ref31]) by showing that in the monitoring regulatory stage, choosing to maintain (or switch to) distraction in high intensity results in adaptive neural consequences. These results are consistent with the notion that in high intensity, early attentional disengagement via distraction is more effective in the short term, compared to late-operating reappraisal ([@ref35]). Accordingly, distraction may serve as a 'first aid' tool in highly intense situations, not only during initial implementation ([@ref31]) but also following monitoring decisions. Notably, however, distraction has significant shortcomings in the long term ([@ref45]). Thus, future studies should examine the benefits of maintaining reappraisal in the long term, where reappraisal is predicted to be more beneficial (e.g. [@ref43]). Our results have clinical implications.
Repeated failure to make flexible monitoring decisions that are sensitive to contextual demands (i.e. failing to switch from inefficient strategies or failing to maintain efficient strategies) constitutes a form of emotional dysregulation that may be associated with psychopathology ([@ref40]). However, studies investigating monitoring decisions and their consequences in clinical populations are crucially needed. The current study has several limitations. First, we focused on the combination of two elements---external negative emotional intensity (high, low) and two regulatory strategies (distraction, reappraisal). Based on our conceptual framework ([@ref36]) and prior findings ([@ref31]), we had the clearest predictions regarding these two elements. Nonetheless, future studies should explore additional factors, including positive emotional intensity and other strategies that may influence monitoring decisions. Second, we focused on two monitoring decision options (maintaining and switching), leaving the third option---stopping to regulate---unexplored. Future studies should examine factors that determine decisions to stop regulating and the neuro-affective consequences of stopping. Third, our a priori design decisions led to being able to evaluate neural, but not subjective-experience, consequences of regulatory decisions. To evaluate consequences of regulatory decisions, one has to obtain pre-choice and post-choice implementation indices (c.f., [@ref32]). The pre-choice index is crucial to account for well-established differences between reappraisal and distraction during initial implementation (e.g. [@ref31]; see also a replication above). While our design involved pre-choice and post-choice LPP measurements (collected continuously and unobtrusively), we did not collect a pre-choice measure of self-reported negative experience. This a priori decision (c.f., [all]{.ul} prior regulatory selection studies, e.g. [@ref38], [@ref39]) to refrain from asking participants to provide pre-choice ratings immediately prior to making regulatory decisions is based on a concern that this explicit reporting would bias naturally occurring choices. Conceptually, providing baseline self-reports shifts the focus from examining our causally manipulated external intensity to examining the influence of (measured) internal intensity ([@ref3]). Fourth, choosing to maintain (or switch to) reappraisal under low intensity may not necessarily reflect a clear preference for reappraisal. Participants may prefer distraction over reappraisal under high intensity, because distraction is more effective than reappraisal. However, given that in low intensity there are no short-term efficacy differences between the strategies, participants may prefer reappraisal because they strive to balance their overall preferences, or because they feel they are expected to use both strategies. More generally, providing participants with only two decision options yields choice preferences that are not fully independent of one another. Last, prior to initial implementation, participants received information regarding the emotional intensity and the instructed strategy. This may have influenced participants' later monitoring decisions. However, consistent with prior paradigms ([@ref31], [@ref33]), providing information on both variables equates the saliency of each prior to the upcoming monitoring decision.

Funding {#sec14}
=======

G. Sheppes is supported by the Israel Science Foundation (Grant No. 1130/16).
Conflict of interest {#sec15}
====================

None declared.

Supplementary Material
======================

Supplementary material is available as an additional data file.

[^1]: We set an a priori native Hebrew proficiency inclusion rule, because understanding and implementing complex cognitive emotion regulation strategies require high verbal proficiency (c.f., [@ref38], [@ref39]).

[^2]: Picture codes were as follows. Low intensity: IAPS: 224, 1111, 1270, 1271, 1274, 1275, 1280, 1301, 2120, 2278, 2312, 2399, 2456, 2457, 2490, 2590, 2682, 2691, 2692, 2700, 2718, 2722, 2753, 2795, 3216, 3280, 6010, 6190, 6561, 6562, 6825, 6836, 7092, 7135, 7359, 7360, 7361, 7520, 7521, 8231, 9001, 9002, 9008, 9010, 9031, 9041, 9045, 9046, 9090, 9101, 9102, 9110, 9120, 9145, 9160, 9171, 9180, 9182, 9168, 9190, 9270, 9280, 9290, 9291, 9330, 9331, 9341, 9342, 9360, 9373, 9390, 9404, 9415, 9417, 9726, 9427, 9440, 9445, 9452, 9469, 9470, 9471, 9530, 9584, 9590, 9592, 9594, 9610, 9912, 9926. High intensity: EmoPicS: 209, 210, 212, 231, 232, 233, 234, 235, 236, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252; IAPS: 2053, 2730, 2800, 3000, 3005.1, 3010, 3015, 3030, 3060, 3064, 3068, 3071, 3080, 3100, 3101, 3102, 3110, 3120, 3130, 3140, 3150, 3168, 3170, 3180, 3230, 3261, 3266, 3350, 3400, 3530, 3550, 6212, 6230, 6312, 6313, 6315, 6350, 6360, 6510, 6520, 6540, 6550, 6570, 6831, 6838, 9040, 9050, 9181, 9183, 9250, 9252, 9300, 9400, 9410, 9412, 9413, 9420, 9500, 9570, 9571, 9600, 9635.1, 9908, 9910, 9911, 9921.

[^3]: The 32 EEG scalp electrode sites were as follows: Fp1, Fp2, Fpz, Af3, Af4, Afz, F1, F3, Fz, Fc1, Fc2, Fcz, C1, C3, C2, C4, Cz, Cp1, Cp2, Cpz, P1, P3, P2, P4, Pz, O1, O2, Poz, T7, T8, F2, F4.

[^4]: A significance level of 5% (two-sided) was selected for the two analyses of variance (ANOVAs) used to replicate prior implementation findings and to examine the first research question (behavioral regulatory monitoring decisions). In both ANOVAs, we used only 2 × 2 (or 2 × 2 × 2) experimental designs, which do not require any tests of sphericity ([@ref22]).

[^5]: To mitigate concerns that our findings depend on a large DF value, we also tested the same model using a between-within (i.e. ddfm = BETWITHIN) minimal value of degrees of freedom. Results in this alternative model remained identical, including the three-way interaction of interest \[*b* = 5.99, SE = 2.48, 95% CI (0.88, 11.10), *F*(1, 25) = 5.83, *P* = 0.023\].
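For readers who want to see the shape of the single-trial analysis in code, here is a minimal, purely illustrative Python sketch of an analogous linear mixed model. It is not the authors' analysis (they fit their models in SAS PROC MIXED); the data below are simulated placeholders, and the random-effect structure is deliberately simplified to a random intercept per participant rather than the stepwise-selected structure described in the Statistical analyses section.

```python
# Illustrative sketch only: simulated stand-in for single-trial LPP data,
# fit with a random-intercept mixed model analogous in spirit to the
# reported SAS PROC MIXED analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for subj in range(28):                       # sample size from the paper
    subj_offset = rng.normal(0.0, 1.0)       # between-subject variability
    for _ in range(120):                     # hypothetical trial count
        intensity = rng.choice(["high", "low"])
        strategy = rng.choice(["distraction", "reappraisal"])
        choice = rng.choice(["maintain", "switch"])
        # lpp_diff: pre-choice LPP minus post-choice LPP for this trial
        # (higher = stronger attenuation); pure noise in this sketch.
        lpp_diff = subj_offset + rng.normal(0.0, 5.0)
        rows.append((subj, intensity, strategy, choice, lpp_diff))

df = pd.DataFrame(rows, columns=["subject", "intensity", "strategy",
                                 "choice", "lpp_diff"])

# Full fixed-effect factorial with a random intercept per participant.
model = smf.mixedlm("lpp_diff ~ intensity * strategy * choice",
                    data=df, groups=df["subject"])
result = model.fit(reml=True)
print(result.summary())   # the three-way interaction term is of interest
```

With real trial-level data, `lpp_diff` would be the within-trial pre-minus-post LPP difference described above, and the coefficient on the three-way interaction term is the quantity of interest.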
Welcome Thank you for your interest in NCQA Patient-Centered Medical Home (PCMH) Recognition. This brief questionnaire will help you determine if you are eligible and ready to begin the PCMH Recognition process. This questionnaire is for practices looking to come through recognition using the 2017 concepts and criteria. Before we get started, please select your current recognition status for the practice sites you want to bring through recognition:

- We are currently recognized by NCQA.
- We are not currently recognized by NCQA.
- We were recognized in the past but let the recognition lapse; we are interested in seeking recognition again.
A friend asked me to look at his daughter's computer, because she was having some problems with it. There were a lot of software issues with it when I got it, so I asked the friend if it would be okay to just wipe the computer clean (format and re-install Windows XP), which he said was fine with him and his daughter. Attempting to format it using the Microsoft XP Professional install disc, I ran into a major problem. The monitor just blanked out, about halfway through the format. The keyboard was also obviously frozen--the NumLock was stuck on, and I could not toggle it, the Caps Lock, or the Scroll Lock. I tried the process again a few times, with the same result. I then tried formatting to NTFS using a GPartEd CD I had. On the first try, it froze, but the monitor did not "blank out" as it had done. On the second try, it seemed to have worked. So, I go back to try and install Windows XP Pro again without formatting, and it acts up again in the same manner, this time while copying over files. Having this happen to me many times over the course of a few days, I somehow had enough luck to get it running long enough to install XP. After doing so, I download some software for monitoring voltages (I had a hunch it was the crappy Chiefmax 450W PSU that was causing the problems, after googling the symptoms for a while). It reports somewhat low voltages on the 5V and 3.3V rails, and extremely low voltages for the 12V rails. (Googling more said that monitoring software often reports low 12V voltages, so I still wasn't sure it was the PSU.) The temperatures seemed normal (can't remember what exactly they were at the present, but they weren't high). The computer, after the install, could stay on anywhere from a single minute to two or three hours before it would crap out--again, the monitor blanks out and the keyboard freezes. Racking my brain and googling the symptoms over and over, I was pretty sure the problem lies with the PSU. I told my friend, "I'm fairly sure it's your PSU, because it really is a POS PSU." He tells me that the guy who built his computer is back in town, so he'll take it over there for him to install a new PSU. (I was going to offer to order and install a nice Corsair PSU, but it would have set my friend back $75 plus whatever I charged him for screwing with his computer, and he's a cheap mofo.) Anyways, a few days go by and now he's brought the computer back to me, so that I can finish the software end of things. I took a peek inside and the new power supply is another "no name" POS, a "VIOTEK 550W." I am again running into the same problem with the monitor blanking out and keyboard freezing. I have been running Memtest86+ on it for the past 30 minutes or so. It just got through its second pass and reports no errors with the memory. (It also is not freezing and locking up.) The drive does format okay (when it completes the format process without freezing). I'm assuming therefore that the HDD and the memory are both okay. I really have no idea how to go about testing the processor to see if it's "bad," but I've always assumed that a "bad" processor would mean a completely dead computer. I'm having my brother bring over his multimeter tomorrow in order to directly test this new PSU's voltages, to see if it's delivering as much power as it should be... Anyways. It's obviously a hardware problem, since the computer started fritzing up before I even got Windows on it. X3 Question is, how should I proceed in narrowing things down, from here?
MOBO: ECS 755-A2 Socket754 ATX Motherboard HT 1600 GPU: GeForce MX 440 8x AGP CPU: AMD Sempron 1600Mhz RAM: 200MHz DDR400 512MB (one stick) PSU: VIOTEK... No model #? (Can't even find a website for these guys.) 500W, 28A to the 12V rails (supposedly), 22A to 3.3V, 1A to -12V, 28A to 5V.

I've removed and reseated the memory and the GPU, as well as the IDE cables to/from the Mobo and the hard drive (and the Mobo and the DVD Drive), hoping that would be a quick fix. XD Anyways, Memtest is now ending its 2nd pass, been running for 56 minutes, no errors, no freezing. Advice, anyone?

David Hartsock Admin Forum Posts: 1115 Member Since: August 7, 2011 Offline 2 May 4, 2011 - 8:53 am

Welcome to the forum!

[quote:2lyrzgtj]...because she was having some problems with it.[/quote:2lyrzgtj] What problems were they having with it?

[quote:2lyrzgtj]The computer, after the install, could stay on anywhere from a single minute to two or three hours before it would crap out--again, the monitor blanks out and the keyboard freezes. Racking my brain and googling the symptoms over and over, I was pretty sure the problem lies with the PSU.[/quote:2lyrzgtj] It's possible.

[quote:2lyrzgtj]Anyways, a few days go by and now he's brought the computer back to me, so that I can finish the software end of things. I took a peek inside and the new power supply is another "no name" POS, a "VIOTEK 550W." I am again running into the same problem with the monitor blanking out and keyboard freezing.[/quote:2lyrzgtj]

[quote:2lyrzgtj]I have been running Memtest86+ on it for the past 30 minutes or so. It just got through its second pass and reports no errors with the memory. (It also is not freezing and locking up.) The drive does format okay (when it completes the format process without freezing).[/quote:2lyrzgtj] If it were memory I'd think it would blue screen with XP running.

[quote:2lyrzgtj]I'm assuming therefore that the HDD and the memory are both okay.[/quote:2lyrzgtj] My best guess would be the HD. I'd run chkdsk /r /f on the drive.

[quote:2lyrzgtj]I really have no idea how to go about testing the processor to see if it's "bad," but I've always assumed that a "bad" processor would mean a completely dead computer.[/quote:2lyrzgtj] Yes.

[quote:2lyrzgtj]I've removed and reseated the memory and the GPU, as well as the IDE cables to/from the Mobo and the hard drive (and the Mobo and the DVD Drive), hoping that would be a quick fix.[/quote:2lyrzgtj] Does the MoBo have onboard video? If so you can remove the GPU to make troubleshooting easier.

Sarteck Member Forum Posts: 4 Member Since: May 3, 2011 Offline 3 May 4, 2011 - 11:52 am

[quote:15kgfhe8]What problems were they having with it?[/quote:15kgfhe8] They didn't really say, at first, but after I discussed with them the problems I was having with it (before I asked if I could just format the thing), they told me it kept turning off on them. (I assume they meant the monitor was going black and the keyboard was freezing up, like I was experiencing.)

[quote:15kgfhe8]If it were memory I'd think it would blue screen with XP running.[/quote:15kgfhe8] I've actually never encountered bad memory (or at least that I've identified), so I wouldn't know.

[quote:15kgfhe8]My best guess would be the HD. I'd run chkdsk /r /f on the drive.[/quote:15kgfhe8] I'll try that today and give the results.

[quote:15kgfhe8]Does the MoBo have onboard video? If so you can remove the GPU to make troubleshooting easier.[/quote:15kgfhe8] Unfortunately, no. DX I was hoping to do the same thing.
In other news, I tried installing Kubuntu 11.04 last night on it after MemTest86+ ran about two hours, and this time actually got an error! The video did "blank out" on the monitor like when trying to install Windows, but after it crashed, the console came up in the background. There was a "Machine Check Exception 4, bank 4: b200000000070f0f" that I am still trying to understand through Google. I'll run that chkdsk today from the recovery console, if I can, and post the results later. Will also try to find out more about this MCE. Sarteck Member Forum Posts: 4 Member Since: May 3, 2011 Offline 4 May 4, 2011 - 3:45 pm I feel like an idiot, now. Going through the BIOS options, I noticed that the RAM was set to 800MHz. The RAM itself is 200MHz RAM. I set it from 800 to 400 (just to see if that would do anything--I planned on lowering it to 200 later on if it worked out). Windows XP Pro is installed, and I'm going through the update process right now. I've not experienced any of the freezing issues I was experiencing before, so I'm assuming that fixed it. I don't know how or why it was set to 800 MHz in the first place. My friend's daughter is not the type of person to try messing around in BIOS (they're the type of people that would call me up to ask me what to do when they get a message saying "Press any key" to find out which key is the "any key"), and I am sure that I would have remembered setting that myself. I'm ASSUMING the friend of theirs that installed the new power supply would not do something like that for no reason, too. Anyways, if I don't post in this thread again, you can be sure that fixed it. What I don't get is, if that was indeed the problem, why is it that Memtest86+ wouldn't pick it up? It went through 4 or 5 passes before I stopped it, each pass showing no errors. I would assume that Memtest86+ would use the values in BIOS (the 800MHz) when running its tests.... Anyone got an answer for that? Well, thanks for putting up with my stupidity. X3 Jim Hillier Admin Forum Posts: 2506 Member Since: August 9, 2011 Offline 5 May 4, 2011 - 5:39 pm [quote:2kldsoje]What I don't get is, if that was indeed the problem, why is it that Memtest86+ wouldn't pick it up? [/quote:2kldsoje] Agreed. If my understanding is correct, if the timing set in BIOS does not match the RAM, Memtest should throw up persistent errors caused by the mismatch. Checking the setting in BIOS is one of the first recommendations after errors are reported by Memtest, to eliminate that as a possible cause........weird!!!! [quote:2kldsoje]thanks for putting up with my stupidity[/quote:2kldsoje] Stupidity??? No, I don't think so. You sound particularly bright to me. Errors and mismatches in BIOS are very often not the first port of call when a machine is misbehaving. Had one myself recently, similar symptoms involving the PC randomly shutting down. Spent hours on the bl**dy thing checking all the usual suspects. Finally went through BIOS settings as a last resort thing. Turned out USB mouse and keyboard support was 'Disabled' in BIOS. So simple yet was the last thing I would have expected. Then again, maybe we are a pair of dummies!!! LOL Cheers (and welcome)......Jim Sarteck Member Forum Posts: 4 Member Since: May 3, 2011 Offline 6 May 5, 2011 - 11:27 am Thanks, Oz. Heh. If there's one thing I've figured out from decades of working with computers, it's that there's so much I [b:1sjqawr2]don't[/b:1sjqawr2] know, so of course I'm a dummy. XD I just take comfort in that I'm not the only dummy out there.
As for the Memtest86+, I still can't figure it out... I let it run for quite a while (about two hours, in which time it made 4 or 5 passes) and there wasn't a single error. I am pretty certain that it WAS indeed the BIOS setting, though--at 800MHz, it seems that maybe it was trying to pump info too fast into the RAM (just guessing here; I don't really KNOW, but it seems logical), and at 400MHz (and later 200MHz) it seemed as stable as Windows can be. They've been using it over there for a day now without having to call me. (I do have to go back and install correct sound drivers on it, though--Windows Update seems to think it has a different sound card than it actually has, heh. I'll check on it again to make sure everything is going smoothly.) So, my advice to anyone that encounters a similar freezing problem out there: check the speed your RAM can handle, and make sure that the BIOS setting (if there is one) matches it, before you try to explore possible hardware problems. You might save a lot of time and headache. David Hartsock Admin Forum Posts: 1115 Member Since: August 7, 2011 Offline 7 May 5, 2011 - 12:03 pm [quote:30ewbwka](I do have to go back and install correct sound drivers on it, though--Windows Update seems to think it has a different sound card than it actually has, heh. I'll check on it again to make sure everything is going smoothly.)[/quote:30ewbwka] Walk them through downloading/running (you don't even have to install it) a program called TeamViewer. They run the program, give you their access code/password and you won't even have to go over to their house!
|
Low
|
[
0.5269978401727861,
30.5,
27.375
] |
Usually, when I get an email from an atheist about a group of Christians, it’s rarely positive… but reader Ryan sent along this story that makes you appreciate how tolerant some people can be. It’s about his Catholic friend “Sarah” (emphasis is mine): Sarah’s birthday celebration was at a restaurant with ten other people in attendance. I guessed that her other friends at the table were also Catholics or Christians, because most of her close associations come through the church and Christian acting groups. But the group at the party was loose and fun, and I didn’t get any sense of an overwhelming “Christian” attitude there; these were young people and good friends having an enjoyable evening out celebrating. As the food began to arrive, I noticed that the woman seated across from me, one of Sarah’s friends “Ashley” was trying to pull Sarah into saying grace over their food. She was doing it in a small, quiet, and fast way, making it a private moment between her and Sarah. Sarah stopped her and asked, “Why don’t we wait for everyone to get their food before prayer?” Ashley answered, “I don’t want to exclude anyone who isn’t religious. It would make them uncomfortable.” Sarah pointed toward me and said, “Don’t worry, he’s the only one at the table who isn’t Christian.” At this point I spoke up, saying to Ashley, “That’s extremely polite of you to consider others that way. Please go ahead and pray together, I don’t mind.” “Well, in the city I’m used to being among mixed groups,” Ashley said. “I don’t want anybody to feel excluded.” “That’s very considerate of you,” I said. “Really, I don’t get that a lot. Thank you.” Later, everyone except me bowed their heads and said a prayer together. I sat quietly in my chair, my eyes open, until they were done. Why was this such an important moment for me? Because in my life I have never experienced a Christian checking to make sure that an open prayer among a mixed group won’t make non-Christians uneasy or outcast; in fact, she preferred a tiny private prayer that drew no attention from others. My family, which is mixed with atheists, the vaguely spiritual, and life-long Adventists, makes “saying grace” a central aspect of any gathering, with no consideration for people who might not wish to participate. Someone who wishes to “opt out” becomes a spectacle. I’ve even got ridicule and pressure for it, when it shouldn’t have even been an issue. No one in my family has ever considered behaving in the way that Ashley did at the birthday party. My mother now warns me when “saying grace” is coming up so I can walk away. But still, I am excluded. Instead of the Christians going to their own space on their own to say a prayer, they make it a centerpiece of serving dinner, and those who don’t want to participate are still forced to quietly acquiesce simply so they can avoid drawing attention. I’ve never seen a Christian show such a kind attitude as Ashley did in that one moment. It reminds me of how much prejudice I’ve faced, how many unprovoked attacks I’ve received for simply mentioning that I am not religious, and how many times I’ve been pushed from social situations because others do not understand the diversity in beliefs in the world. I want to thank Ashley for her consideration, and I hope that other theists may also learn to follow her example and understand that they are not the only ones at the table, both the dinner table and the global table. If more people followed her example, we would live in a happier world.
|
Mid
|
[
0.611374407582938,
32.25,
20.5
] |
[The role of the Consultative and Diagnostic Centre "Healthy Nutrition" in the diagnosis and nutritional prevention of non-communicable diseases]. In the consultative and diagnostic center "Healthy Nutrition" of the Institute of Nutrition, the nutritional status of 3500 patients (mean age 48.4 ± 0.3 years) living in the Moscow region has been examined using the system Nutritest IP-3, including genomic analysis. In the analysis of dietary intake by an averaged review, increased energy intake due to excess intake of total fat (44.2% of energy) and saturated fat (13.6%) has been shown. 30.0% of patients were overweight and 34.1% were obese. Osteopenia was detected in 31.0% of men and 25.0% of women, osteoporosis--in 20.9% and 30.3%, respectively. Analysis of the results of biochemical studies revealed increased cholesterol in 68.7% of patients, LDL cholesterol--in 63.9%, triglycerides--in 22.5%, glucose--in 29.4%. The frequency of occurrence of risk alleles of genes associated with the development of obesity and type 2 diabetes mellitus was: 47.8% for the polymorphism rs9939609 (FTO gene), 8.3% for the polymorphism rs4994 (gene ADRB3), 60.2% for the polymorphism rs659366 (gene UCP2), and 36.6% for the rs5219 polymorphism in the gene of the ATP-dependent potassium channel.
|
High
|
[
0.673575129533678,
32.5,
15.75
] |
module Carto
  module OauthProvider
    module Scopes
      class Scope
        attr_reader :name, :category, :description

        def initialize(name, category, description)
          @name = name
          @category = category
          @description = description
        end

        # Base scopes grant nothing; subclasses override this hook.
        def add_to_api_key_grants(grants, user); end

        # Replace any existing section of the same type, then append this one.
        def ensure_grant_section(grants, section)
          grants.reject! { |i| i[:type] == section[:type] }
          grants << section
        end

        # Merge the given APIs into the 'apis' section without duplicates.
        def ensure_includes_apis(grants, apis)
          return if apis.blank?

          apis_section = grants.find { |i| i[:type] == 'apis' }
          apis_section[:apis] = (apis_section[:apis] + apis).uniq
        end
      end
    end
  end
end
|
Low
|
[
0.532008830022075,
30.125,
26.5
] |
Q: Function analytic on annulus bounded by $|z|^2$ This problem comes from an old prelim: "Let $f$ be analytic on an open neighborhood of the annulus $1\le |z|\le 2$. Assume that $|f(z)|\le 1$ when $|z|=1$ and $|f(z)|\le 4$ whenever $|z|=2$. Show that $|f(z)|\le |z|^2$." Here's what I have so far: since $f$ is analytic on a neighborhood of the annulus, it is also analytic on the interior of the annulus. Therefore, the restriction $f_{A}$ cannot attain its $\sup$ or $\inf$ on the interior. So it must obtain its $\sup$ and $\inf$ on the boundary of the annulus. Also, $g(|z|)=|z|^2$ is monotone in $|z|$ so that $\sup_{z\in A}g(z)=4$ and $\inf_{z\in A}g(z)=1$ and these values are obtained on the boundary. So, IF I knew that $|f(z)|$ were monotone in $|z|$, then I think I see why the statement follows, but I can't quite figure out if this assumption is necessary or how to prove the result if it isn't necessary. Any help would be appreciated. Thanks! A: Let $g(z)=\frac{f(z)}{z^2}$. Then $g$ is holomorphic on a neighborhood of the annulus $1\leq |z|\leq 2$, and $|g(z)|\leq 1$ on the boundary of the annulus: on $|z|=1$ we have $|g(z)|=|f(z)|\le 1$, and on $|z|=2$ we have $|g(z)|=|f(z)|/4\le 1$. The maximum modulus principle then implies that $|g(z)|\leq 1$ on the whole annulus, i.e. $|f(z)|\leq |z|^2$.
|
High
|
[
0.680100755667506,
33.75,
15.875
] |
UNITED STATES DISTRICT COURT FOR THE DISTRICT OF COLUMBIA

VIRGINIA JAMES, Plaintiff, v. MICHE BAG CORP., Defendant. Civil Action No. 11-0963 (RBW)

MEMORANDUM OPINION

This matter is before the Court on Miche Bag, LLC’s Renewed Motion to Dismiss Pursuant to Rule 12(b)(6) [ECF No. 29]. For the reasons discussed below, the motion will be granted. [Footnote 1: According to the defendant, there is no such corporate entity as “Miche Bag Corp.,” and that “Miche Bag LLC is not Miche Bag Corp.” Statement of Points and Authorities in Support of Miche Bag LLC’s Motion to Dismiss Pursuant to Rule 12(b)(6) at 5 n.1 (emphasis in original). As it must, the Court construes this pro se plaintiff’s pleadings liberally, and proceeds as if the plaintiff has named the proper corporate entity -- Miche Bag LLC -- as the defendant in this action.]

I. BACKGROUND

According to the plaintiff, in 1983 she “went to an invention office at Wisconsin Ave., N.W., and . . . gave them [an] idea . . . to put a different cover on a pair of shoes” so that “each time you change the cover you change your shoes.” Complaint [ECF No. 1-3] (“Compl.”) at 6-7 (page numbers designated by the Court). [Footnote 2: The defendant attaches to its Notice of Removal [ECF No. 1] documents that had been filed in the Superior Court of the District of Columbia. Among those documents is the plaintiff’s four-page complaint and unnumbered exhibits. See Notice of Removal, Ex. A [ECF No. 1-3]. In this Memorandum Opinion, references to the complaint and its exhibits adopt the page numbers designated by the Court’s electronic case filing system.] She represents that she paid that office $395.00 “for a patent” and for “a market search.” Id. at 7. The plaintiff attached to her complaint an “Inventors Record” purporting to show that on January 8, 1983, she “conceived the invention entitled shoe covering” and that she first disclosed her idea to Robert R. Bourdeau and made her first sketches on January 11, 1983. Id. at 11, Exhibit (“Ex.”) (Inventors Record, File No. DC 5343). Her product was to be marketed in Connecticut, and she was allegedly told that someone “would get in touch with [her] to discuss the money [she] would be getting and other things.” Id. at 7. No one contacted her, and when she called the office the telephone number was no longer in service. Id. The plaintiff “didn’t know how to locate them,” id., though she did send Mr. Bourdeau a letter on or about May 17, 1983. See id. at 15, Ex. (Letter from the plaintiff to Mr. Bourdeau dated May 17, 1983). She contends that she “didn’t have the thousand[s] of dollars to give attorneys needed to find them.” Id. at 7. “Now some twenty-five (25) years latter [sic],” the plaintiff alleges that “they have marketed and are selling [her] idea, product, and concept on T.V.” Id. at 8. The plaintiff asserts that the defendant “advertised on T.V. a product called the Miche Bag,” id. at 6, having altered her product “from changing the cover from a pair of shoes to changing the cover on a pocketbook to change your pocketbook every day.” Id. at 8. Through this marketing, she alleges, “Miche Bag Corp. has neglected to give [her] money owed to [her],” and thus has “stolen [her] product idea[] and concept . . . for [its] own financial . . . gain.” Id. at 9. She demands a declaratory judgment and compensation of $20 million. Id.

II. DISCUSSION

The plaintiff’s complaint is construed as bringing a single claim – breach of contract – with the Inventors Record as the purported written contract. See Compl. at 6; More Detailed Statement of My Claim vs. Miche Bag [ECF No. 23] at 2. The defendant moves to dismiss the complaint on the grounds that (1) the Inventors Record is not an enforceable contract to which it is a party and (2) the plaintiff’s claim is time-barred. See generally Statement of Points and Authorities in Support of Miche Bag LLC’s Motion to Dismiss Pursuant to Rule 12(b)(6) [ECF No. 29] (“Def.’s Mem.”) at 8-16.

A. Dismissal Under Rule 12(b)(6)

A complaint is subject to dismissal if it fails to state a claim upon which relief can be granted. See Fed. R. Civ. P. 12(b)(6); Sodexo Operations, LLC v. Not-For-Profit Hosp. Corp., __ F. Supp. 2d __, __, No. 12-108, 2013 U.S. Dist. LEXIS 37456, at *2-3 (D.D.C. Mar. 19, 2013). However, a plaintiff need only provide a “short and plain statement of [her] claim showing that [she] is entitled to relief,” Fed. R. Civ. P. 8(a)(2), that ‘“give[s] the defendant fair notice of what the . . . claim is and the grounds upon which it rests,’” Erickson v. Pardus, 551 U.S. 89, 93 (2007) (per curiam) (quoting Bell Atl. Corp. v. Twombly, 550 U.S. 544, 555 (2007)). To survive a motion to dismiss under Rule 12(b)(6), “a complaint must contain sufficient factual matter, accepted as true, to ‘state a claim to relief that is plausible on its face.’” Ashcroft v. Iqbal, 556 U.S. 662, 678 (2009) (quoting Twombly, 550 U.S. at 570). In other words, it must “plead factual content that allows the court to draw the reasonable inference that the defendant is liable for the misconduct alleged.” Patton Boggs LLP v. Chevron Corp., 683 F.3d 397, 403 (D.C. Cir. 2012) (internal quotation omitted). Although a complaint filed by a pro se plaintiff is “to be liberally construed,” Erickson, 551 U.S. at 94 (internal citation omitted), it, too, must set forth factual allegations that “raise a right to relief above the speculative level.” Twombly, 550 U.S. at 555.

B. The Inventors Record is Not an Enforceable Contract

In the District of Columbia, a contract cannot be enforced by a court “unless it can determine what the contract is.” Bond v. U.S. Dep’t of Justice, 828 F. Supp. 2d 60, 79 (D.D.C. 2011). An enforceable contract exists where there is ‘“(1) an agreement to all material terms, and (2) intention of the parties to be bound.’” EastBanc, Inc. v. Georgetown Park Assocs. II, L.P., 940 A.2d 996, 1002 (D.C. 2008) (quoting Duffy v. Duffy, 881 A.2d 630, 634 (D.C. 2005)). “[T]he contract must ‘be sufficiently definite as to its material terms (which include, for example, subject matter, price, payment terms, quantity, quality, and duration) that the promises and performances to be rendered by each party are reasonably certain.’” Mero v. City Segway Tours of Washington DC, LLC, 826 F. Supp. 2d 100, 105 (D.D.C. 2011) (quoting Virtual Dev. & Def. Int’l, Inc. v. Republic of Moldova, 133 F. Supp. 2d 9, 17 (D.D.C. 2001)). Nevertheless, “‘all of the terms contemplated by the agreement need not be fixed with complete and perfect certainty for a contract to [be enforceable].’” Eastbanc, 940 A.2d at 1002 (quoting Rosenthal v. Nat’l Produce Co., 573 A.2d 365, 369 (D.C. 1990)). Rather, “‘the terms of the contract [must be] clear enough for the court to determine whether a breach has occurred and to identify an appropriate remedy.’” Id. (quoting Affordable Elegance Travel, Inc. v. Worldspan, L.P., 774 A.2d 320, 327 (D.C. 2001)). The Inventors Record is a preprinted form which indicates that the plaintiff “conceived the invention entitled shoe covering” on or about January 8, 1983, and that she “first disclosed [her invention] to others” and made her “[f]irst sketches” on or about January 11, 1983. Compl. at 11, Ex. (Inventors Record). It further indicates that the plaintiff “disclosed to [Robert R. Bourdeau] the invention illustrated and described” therein on January 11, 1983. Id. The Inventors Record includes no terms, however. Namely it does not set forth any details concerning price, payment terms, quantity, quality, or duration of the contract. Nor does it obligate Mr. Bourdeau or any other party to patent or market the plaintiff’s invention or to take any other action. Accordingly, the Inventors Record is so vague and devoid of content that it is impossible for the Court to determine its terms or any breach of its terms. [Footnote 3: If, alternatively, the plaintiff proceeds on a theory that she entered into an oral contract with Mr. Bourdeau, the Court still is left with no basis from which to determine the oral contract’s terms, the parties’ obligations, the existence of a breach, or an appropriate remedy.] Furthermore, there is no basis from which to conclude that the defendant intended to bind itself to the purported terms of the Inventors Record. The “signatories were only [the plaintiff] and Robert Bourdeau . . . as the representing individual for the Marketing Corp.” Plaintiff’s Reply to Defendant’s Motion [ECF No. 33] at 2. Neither Mr. Bourdeau nor the Marketing Corp. is a defendant to this action, however. The sole defendant, Miche Bag, represents that “[n]o individual by the name of Robert R. Bourdeau has ever been associated or affiliated, in any way, with Miche Bag, let alone as an agent capable of binding Miche Bag.” Def.’s Mem. at 16 n.7. Wholly absent from the complaint are any factual allegations linking the plaintiff and the Inventors Record to Miche Bag, or asserting that Mr. Bourdeau either acted on behalf of Miche Bag or was otherwise so closely connected with Miche Bag that this defendant is liable for his actions. The Inventors Record, therefore, is not an enforceable contract.

C. The Plaintiff’s Breach of Contract Claim is Time-Barred

Even if the complaint adequately had alleged the existence of an enforceable contract between the plaintiff and Miche Bag, the filing of a breach of contract claim is untimely. Under District of Columbia law, a party must bring an action “on a simple contract, express or implied,” within three years “from the time the right to maintain the action accrues.” D.C. Code § 12-301. “A cause of action for breach of contract accrues, and the statute of limitations begins to run, at the time of the breach.” EastBanc, 940 A.2d at 1004 (citation omitted); see also Bembery v. District of Columbia, 758 A.2d 518, 520 (D.C. 2000). The purported contract here, the Inventors Record, was signed by the plaintiff and Mr. Bourdeau on January 11, 1983. The breach occurred “shortly thereafter,” when allegedly the plaintiff “was scammed and [her] invention stolen.” Motion to Not Dismiss My Claim [ECF No. 31] (“Pl.’s Opp’n”) at 2. By May 17, 1983, the date of the plaintiff’s letter to Mr. Bourdeau, the plaintiff knew that the invention office had closed, that the office had no working telephone number, and that Mr. Bourdeau had taken no action with respect to the plaintiff’s invention, notwithstanding her payment of $395. See Compl. at 15, Ex. (Letter from the plaintiff to Mr. Bourdeau dated May 17, 1983). Moreover, the plaintiff acknowledges that she knew “[b]eyond a shadow of a doubt that [her] invention was stolen in 1983.” Pl.’s Opp’n at 5. [Footnote 4: Because the plaintiff was aware of the breach in 1983, she cannot escape the three-year limitations period by arguing that she did not become aware of the breach until she saw the Miche Bag advertised on television, making the filing of her complaint “within a couple of weeks” thereafter, Pl.’s Opp’n at 2, timely.] The plaintiff’s lawsuit, filed “some twenty-five (25) years lat[]er,” Compl. at 8, therefore is untimely.

III. CONCLUSION

Because the plaintiff’s complaint fails to state a breach of contract claim against Miche Bag upon which relief can be granted, the defendant’s motion to dismiss will be granted. An Order is issued separately.

DATE: September 25, 2013
/s/ REGGIE B. WALTON
United States District Judge
|
Low
|
[
0.45370370370370305,
24.5,
29.5
] |
WHEN making his rounds as a traveling salesman for a Chicago printing company, Duncan Hines would occasionally pull off the Dixie Highway in Corbin, Ky., and eat at Sanders Cafe. In the 1939 edition of “Adventures in Good Eating,” his pioneering restaurant guide, he recommended the cafe and its adjoining motor court as a “very good place to stop en route to Cumberland Falls and the Great Smokies,” highlighting its “sizzling steaks, fried chicken, country ham, hot biscuits.” The cafe is still there, only now it incorporates a museum and holds down a spot on the National Register of Historic Places, for one huge, unignorable reason. The owner, chef and resident genius of the place was none other than Colonel Harland Sanders, who, on this hallowed ground, cooked the first batch of Kentucky Fried Chicken. Cumberland Falls does not work the magic it once did, and Corbin itself is not high on anyone’s list of tourist destinations. But the Colonel Harland Sanders Cafe and Museum is a modest must. In addition to capturing a pivotal moment in the mass-marketing of American vernacular food, it evokes a dreamlike time, before the arrival of the Interstate System and its proliferation of fast-food restaurants and chain hotels, when traveling the American highway was a thrilling, high-risk proposition, with marvelous discoveries and ghastly disappointments waiting at every turn. In its present form, the Sanders Cafe and Museum was born in 1990, the 100th anniversary of Colonel Sanders’s birth. JRN, a Tennessee-based company that operates nearly 200 KFC franchises in the Southeast, was about to open a modern KFC restaurant next to the old cafe. To mark the great birthday, it put out a call for artifacts and memorabilia that would allow it to celebrate the Colonel, his cafe and his fried chicken.
|
High
|
[
0.7336956521739131,
33.75,
12.25
] |
Q: Unable to call gnome-terminal command in my C++ code

    char *mycmd = "gnome-terminal --profile 'me' -e '/usr/bin/programA --file/usr/bin/config/myconfig.ini --name="programA" --loggingLevel=1'";
    popen(mycmd, "r");

Error on 1st line: error: expected ';' before 'Node'

I know this is because of the "" for --name. Is there any way to get this command to work?

A: Escape the double quotes:

    char *mycmd = "gnome-terminal --profile 'me' -e '/usr/bin/programA --file/usr/bin/config/myconfig.ini --name=\"programA\" --loggingLevel=1'";
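For completeness, here is a minimal compilable sketch of the accepted fix. The command string is the one from the question with the inner double quotes escaped; `const char*` is used because string literals should not bind to a plain `char*` in modern C++. `popen` is the POSIX function declared via stdio; `programA` and the 'me' terminal profile are assumed to exist on the target system.

    #include <cstdio>

    int main() {
        // Escaping the inner double quotes with \" keeps the C++ string
        // literal intact; the single quotes are interpreted by the shell.
        const char* mycmd =
            "gnome-terminal --profile 'me' -e '/usr/bin/programA"
            " --file/usr/bin/config/myconfig.ini"
            " --name=\"programA\" --loggingLevel=1'";

        FILE* pipe = popen(mycmd, "r");  // launch the command, capture stdout
        if (pipe == nullptr)
            return 1;                    // popen itself failed
        return pclose(pipe) == -1 ? 1 : 0;
    }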
|
Mid
|
[
0.650717703349282,
25.5,
13.6875
] |
1. Field of the Invention This invention relates to semiconductor switches, and in particular to semiconductor switches for microwave as well as millimeter wave bands using a transmission line comprising a dielectric substance substrate and metal conductors, and diodes or field effect transistors (FETs) showing distributed parameter effect. 2. Description of the Prior Art As a semiconductor switching circuit which is contemplated for use in microwave as well as millimeter wave bands, in particular with high frequencies not less than 60 GHz, various kinds of circuits have been proposed and manufactured for trial. Single-pole 3-throw (SP3T) switches for the 77 GHz band (hereinafter to be referred to as Conventional example 1) were reported by M. Case et al. in “1997 MTT-S IMS Digest pp. 1047–1050” and can be nominated as an example of conventional switches. An SP3T switch of Conventional example 1 comprises configuration as shown in FIG. 12. An input terminal 20 is connected with a signal junction N via a transmission line 21. One end of each transmission line 22–24 having length of a quarter of propagating wave length (a quarter wave length transmission line) is connected via capacitance C1, C2, and C3 for DC cutting respectively to each signal junction. The other end of each of a quarter wave length transmission lines 22–24 is connected respectively to one end of PIN diode D1, D2, or D3 as well as to the first, the second, or third output terminal 25–27. The other end of each PIN diode D1, D2, or D3 is connected with the earth. Capacitance C1, C2, and C3 for DC cutting, a quarter wave length transmission lines 22–24, diodes D1, D2, and D3, and the first, the second, and the third output terminals 25–27 form three output signal passes. A diode can be expressed as a resistance for equivalent circuit thereof when the diode is biased forward, and can be expressed as a capacitance for equivalent circuit thereof when the diode is biased in the reverse direction. Accordingly, when a diode is biased forward, there exists little impedance, and the anode and cathode thereof may be regarded to be short-circuited. In addition, the impedance for frequencies in correspondence with propagating wave length when this diode is seen via a quarter wave length transmission line is close to infinite, and thus may be regarded as almost open. That is, a signal pass where a diode is biased forward will be seen as almost open from the signal junction, and as a consequence, an RF signal having propagated the signal pass will be almost totally reflected. On the other hand, since a diode which is biased in the reverse direction functions as a capacitance, the impedance will get high for low frequencies, and accordingly a signal pass where a diode is biased in the reverse direction is transparent. As the frequency gets higher, the impedance of a capacitance gets lower, and therefore, signal reflection at a signal junction will increase. As a result, a signal pass where a diode is biased in the reverse direction allows signals to travel transparently, but on the other hand, an increase in frequency will result in an increase in loss due to reflection. 
Thus, in switches of Conventional example 1, among the three output signal passes, the signal pass that transmits signals transparently comprises a diode which is biased in the reverse direction, while the other remaining signal passes comprise diodes which are biased forward so as to cut off signals on those passes, which enables switching of the signal passes. Insertion loss as well as isolation of a single-pole single-throw (SPST) switch of Conventional example 1 as described above can, supposing for the purpose of simplicity that the characteristic impedance of the transmission line equals the impedance of the input-output terminals, be expressed as equations (1) and (2):

$$IL = \frac{4}{4 + \omega^2 C^2 Z_0^2} \qquad (1)$$

$$ISO = \frac{4}{\left(2 + \frac{Z_0}{R}\right)^2} \qquad (2)$$

As apparent from equation (2), isolation is expressed with the resistance $R$ and the impedance $Z_0$ of the input-output terminals, but does not depend on frequency. In switches of Conventional example 1, however, when isolation of, for example, not less than 40 dB is to be attained, the resistance value of the diode will have to be not more than 0.13 Ω. Here, in the disclosed document of Conventional example 1, the resistance value of the diode is described as 3 Ω. Accordingly, in switches of Conventional example 1, for the purpose of realizing a resistance value of 0.13 Ω, multiplying the anode electrode area by approximately 23 will do. However, making the anode electrode area 23 times as large means that the capacitance value will simultaneously be 23 times as large as well. As a result, since the capacitance value of the diode disclosed in the document is 33 fF, the capacitance needed to attain isolation of 40 dB will be 759 fF, which is 23 times as much. Based on this, with reference to equation (1), insertion loss for a capacitance of 33 fF ($=33\times 10^{-15}$ F) is 0.6 dB, while insertion loss reaches as much as 19 dB when the anode electrode area is made 23 times as large. That is, in the switching circuit of the above-described Conventional example 1, insertion loss and isolation are in a trade-off relationship, and high isolation characteristics such as 40 dB were not attainable.

In addition, single-pole single-throw (SPST) switches for the 94 GHz band (hereinafter to be referred to as Conventional example 2) were reported by H. Takasu et al. in "IEEE MICROWAVE AND GUIDED WAVE LETTERS, Vol. 6, pp. 315–316" and can be nominated, conventionally, as an example of another switch. This switch of Conventional example 2 is also one of the possible switching circuits for high frequency bands not less than 60 GHz. An SPST switch of Conventional example 2 comprises the configuration as shown in FIG. 13. It comprises a field effect transistor (FET), an inductor, and a resistance. The input-output terminals 31, 32 are respectively connected with the source and drain of the FET, between which an inductor L configured with a microstrip line path is connected in parallel. To the gate of the FET, a resistance R of 2.5 kΩ is connected, and via this resistance a direct current bias is applied to the gate. In the state that the channel of the FET is closed, the FET can be treated equivalently as a capacitance C, which, therefore, as shown in FIG. 14, together with the inductance L connected with the FET in parallel, resonates at the frequency obtainable from equation (3); as a consequence, the impedance becomes high, so that signal propagation between the input-output terminals is cut off. That is, the switch enters the off state.

$$f = \frac{1}{2\pi\sqrt{LC}} \qquad (3)$$

FIG. 15 shows frequency characteristics of insertion loss as well as isolation in the switch of Conventional example 2. As obvious from FIG. 15, in the switching circuit of Conventional example 2, isolation characteristics around 30 dB are attainable with comparatively low insertion loss. However, since, as described before, the switching circuit of Conventional example 2 makes use of resonance, its frequency characteristics fall in a narrow band width. Moreover, for the purpose of making the resonance circuit resonate at a desired frequency, it is necessary to accurately know LC, the constant of the circuit. Accordingly, for the purpose of using a switch of Conventional example 2, not only will the capacitance C appearing at closure of the FET channel have to be accurately estimated, but accurate modeling of the inductor L will become necessary as well. On the contrary, FETs as well as PIN diodes, etc., normally have variation of the forming process to a certain extent; due to this variation, the value of capacitance C could deviate from the design, and as a result the resonance frequency will deviate from the design as well, and resonance will not be available at the desired frequency, which, as a consequence, will give rise to a reduction of yield.

Switching circuits (hereinafter to be referred to as Conventional example 3) were conventionally proposed by H. Mizutani and Y. Takayama in "1997 MTT-S IMS Digest pp. 439–442" and can be nominated as technology to solve the problems with the aforementioned Conventional example 1 as well as Conventional example 2. The switching circuit of Conventional example 3 is a switching circuit utilizing an FET showing distributed parameter effect, and its wide band width characteristics were proved in the document. Incidentally, the contents of the document have been disclosed in Japanese Patent Laid-Open No. 10-41404 specification as well. A switching circuit of Conventional example 3 comprises the configuration as shown in FIG. 16. As understandable with reference to FIG. 16, the switching circuit of Conventional example 3 comprises plural transmission lines and plural FETs. In the switching circuit of Conventional example 3 in detail, each transmission line as well as each FET is respectively defined per micro unit length, the transmission lines are connected in series, and the drain of each FET is connected to the respective junction of them. Incidentally, the source of each FET is connected with the earth. The configuration is made as an infinite connection of these transmission lines as well as FETs per micro unit length. Such a switching circuit of Conventional example 3 is implemented as a plane surface pattern, where each FET (hereinafter to be referred to as distributed parameter FET) comprises a source connected with the earth, a gate finger with a length of 400 μm, and a drain electrode, both longitudinal ends of which have been connected with the input-output terminals. A switching circuit of Conventional example 3 comprising such a configuration acts equivalently as a transmission line without any loss, as shown in FIG. 17, in the state that the channel of the FET is closed. As apparent from FIG. 17, the switch enters the ON state, and insertion loss is expressed by equations (4) through (6):

$$S_{21}^{ON} = \frac{2 Z Z_0}{2 Z Z_0 \cos\beta l + j\left(Z^2 + Z_0^2\right)\sin\beta l} \qquad (4)$$

$$\beta = \omega\sqrt{L\left(C_{IL} + C_{FET}\right)} \qquad (5)$$

$$Z = \sqrt{\frac{L}{C_{IL} + C_{FET}}} \qquad (6)$$

Here, $Z$ represents the impedance of the switch, $l$ represents the length of a finger of the FET, and $Z_0$ represents the impedance of the input-output terminals. In addition, $\omega$ represents angular frequency, and $L$, $R$, $C$, and $G$ respectively represent the inductance, resistance, parallel capacitance, and parallel conductance per unit length of the switch. On the other hand, an FET is equivalently expressed as a mere resistance in the state where its channel is open; thus, the equivalent circuit of the switch at that time will be as shown in FIG. 18. As understandable with reference to FIG. 18, a switching circuit of Conventional example 3 acts equivalently as a transmission line with loss in the state that the channel of the FET is open, that is, the switch enters the OFF state, and its isolation can be expressed by equations (7) through (9):

$$S_{21}^{OFF} = \frac{2 Z Z_0}{2 Z Z_0 \cosh\gamma l + \left(Z^2 + Z_0^2\right)\sinh\gamma l} \qquad (7)$$

$$\gamma \equiv \alpha + j\beta \equiv \sqrt{j\omega L\left(j\omega C_{IL} + G\right)} \qquad (8)$$

$$Z = \sqrt{\frac{j\omega L}{j\omega C_{IL} + G}} \qquad (9)$$

From these equations, in a wide band as shown in FIG. 19, low insertion loss and high isolation are obtainable. As understandable from FIG. 19, the frequency characteristics of isolation in the switching circuit of Conventional example 3 increase gradually. However, not only in the switching circuits of the above-described Conventional example 1 as well as Conventional example 2, but also in the switching circuit of Conventional example 3, it was practically difficult to maintain low insertion loss and realize high isolation in a wide band with a comparatively compact configuration. This point is explained in detail as follows. In a switch according to Conventional example 3, the zeroth-order term in frequency of the isolation is expressed by equation (10):

$$IL_{DC} = \left(\frac{2}{2 + \frac{Z_0}{r}}\right)^2 \qquad (10)$$

As understandable from equation (10), as the resistance $r$ of the distributed parameter FET gets smaller, isolation gets greater. Incidentally, in the switching circuit using a distributed parameter FET, the zeroth-order approximation in frequency of the isolation corresponds with the isolation of the switching circuit with shunt configuration using a lumped constant FET expressed in the aforementioned equation (2). Accordingly, for the purpose of attaining high isolation in the switching circuit of Conventional example 3, the gate finger length must be lengthened so that the resistance $r$ of the distributed parameter FET be reduced. In particular, for the purpose of attaining high isolation of not less than 80 dB in the switching circuit of Conventional example 3, the gate finger length must be lengthened to, for example, 1 mm so that the resistance $r$ of the distributed parameter FET be reduced. To extend the gate finger length like this means the chip size of the microwave or millimeter wave monolithic integrated circuit (MMIC) will get bigger. As understandable from these features, in microwave or millimeter wave band switching circuits there was a problem that it was difficult for the prior art to realize high isolation of not less than 80 dB covering a wide band width with a comparatively small configuration, while maintaining low insertion loss. This originated in the circuit configurations of the respective prior arts: the trade-off relationship between insertion loss and isolation, the narrow band width characteristics due to the usage of resonance, and the trade-off relationship between the resistance of the distributed parameter FET and the chip size.
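As a quick numeric cross-check of equation (1), the two insertion-loss figures quoted above (about 0.6 dB for 33 fF and about 19 dB for 759 fF) can be reproduced with a few lines of code, assuming a characteristic impedance of 50 Ω and a 77 GHz operating frequency; neither value is stated explicitly in the text, so both are assumptions.

    #include <cmath>
    #include <cstdio>

    // Insertion loss of the shunt-diode switch, equation (1), in decibels:
    // IL_dB = -10 * log10( 4 / (4 + (w*C*Z0)^2) ).
    double insertionLossDb(double freqHz, double capF, double z0) {
        const double pi = 3.14159265358979323846;
        double x = 2.0 * pi * freqHz * capF * z0;
        return -10.0 * std::log10(4.0 / (4.0 + x * x));
    }

    int main() {
        const double f = 77e9;   // 77 GHz band (assumed operating point)
        const double z0 = 50.0;  // assumed input-output impedance, ohms
        std::printf("C = 33 fF : IL = %.1f dB\n", insertionLossDb(f, 33e-15, z0));
        std::printf("C = 759 fF: IL = %.1f dB\n", insertionLossDb(f, 759e-15, z0));
        // Prints roughly 0.6 dB and 19 dB, matching the values in the text.
    }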
|
Mid
|
[
0.6356589147286821,
30.75,
17.625
] |
Endorsement: Novak best choice in GOP primary for Macomb County executive July 13, 2014 David Novak By The Detroit Free Press Editorial Board The Free Press is endorsing candidate DAVID NOVAK in the Republican primary for Macomb County executive. But this endorsement isn't so much a testament to Novak's abilities as a recognition that he's the best candidate in a limited field. Novak faces fellow Republicans Randell Shafer and Erin Stahl in the Aug. 5 primary; the top vote-getter in that election will face incumbent Mark Hackel, the county's first executive, in the November general election. It's unlikely that any of the GOP contenders will best Hackel, who has proved a capable leader during his tenure in office. But here's the problem: There's no certainty in elections. Anything could happen -- and any of the three ill-prepared GOP candidates could land at the helm of the state's third-largest county. Macomb County voters approved a new charter in 2009, endorsing a switch to a county executive and a smaller board of commissioners, rather than a larger board headed by a chairperson. An executive at the helm, the thinking went, would boost economic development efforts, offering companies a single deal-maker to approach. Electing an executive would also put Macomb County leadership on an equal footing with Wayne and Oakland counties, allowing the smaller -- but growing -- county an equal seat at the regional table. That's all true, as long as the executive is qualified for the job. It's true that Hackel will be tough to beat, and that it's difficult to enlist good candidates in a race that is likely a lost cause. But it's also true that the fractious Macomb County Republican Party is ill-equipped to do the work of recruiting and grooming viable countywide candidates. That has to change if the county wants to maintain or increase its political clout. Novak is a veteran with a wide range of business experience, and his concern for Macomb County's future is sincere. But during an interview with the Free Press Editorial Board, Novak lacked basic information about key county and regional issues, such as a bond sale pitched by Hackel to generate money for the county's retiree health care plan, which is underfunded by about $270 million. Novak said he didn't know enough about the proposal to say whether it's a good idea. But the proposed deal has been widely reported on by the news media, and there's no shortage of information on underfunded retiree health care -- a major liability in Detroit's municipal bankruptcy case, and a problem for local governments across the state. Likewise, Novak said he didn't know enough to offer an opinion on a proposed plan to create a tri-county water authority to assume the responsibilities of the Detroit Water and Sewerage Department, a contentious and well-publicized negotiation between Detroit emergency manager Kevyn Orr, Wayne County Executive Robert Ficano and Oakland County Executive L. Brooks Patterson. Should Novak win the seat he aspires to, he'll have to remedy these deficits posthaste -- and hiring strong support staff will be absolutely essential. Despite these shortcomings, Novak is clearly superior to the other candidates. Shafer became known to folks outside Macomb County politics after a truly bizarre Facebook conversation earlier this year supporting anti-gay and anti-Muslim statements made by former lawmaker and current Republican National Committee member Dave Agema.
Shafer sparred with other Macomb County Republicans, most of whom disavowed Agema's extremist messaging, in a rambling and sometimes nonsensical thread. Stahl is unqualified for public office, lacking a grasp of even the basic structure of county government, tax policy or county operations. Macomb County's momentum is strong, with a growing population and a stronger voice at the regional table. It's crucial that the county's next executive advance, not hinder, the county's progress.
|
Mid
|
[
0.610526315789473,
36.25,
23.125
] |
Bamboo Searching for bamboo flooring in Perth, WA? Don't look any further than Carpets and Floors at Yours; we have the best range of bamboo flooring deals online. Bamboo flooring is a great alternative to a hardwood timber floor, as strand woven bamboo flooring is harder than solid timber, has an unlimited lifespan and is environmentally friendly. Bamboo is unique in this way because it is a form of grass that contains cellulose fibres which don't absorb moisture. Bamboo flooring boards are 99% termite safe and are manufactured and pre-finished ready to be installed into your home or office, making them an ideal choice for almost every area in your home or office. Buying bamboo flooring from Carpets and Floors at Yours is the perfect answer for a no-fuss flooring choice.
|
Mid
|
[
0.6533333333333331,
36.75,
19.5
] |
Life would be a boring journey without a good dose of friendship to color your world. Think of the vivid memories they present us having been part of our lives. They are strength when we are weak, happiness when we are sad, companions when we are lonely, and most of all, they are love. Friends are the support system of life; what keeps the world going. Without them, life would become ever so boring and mundane. We probably learn more from the friends that come and go throughout our lives than we do while in the classroom. The interchange of cultures, traditions, beliefs, and family values helps us develop and strengthen our character. Friends make us strong by loving and caring about us. They challenge us to face the reality of who we really are. Next to family, friends are the most important people in our lives. In our most extreme situations, we think of them as an actual lifeline; a support system that we cannot live without! Think of all the endless hours we spend with them... together on the phone, at the mall, going to the movies and ballgames, spending nights, going on vacations and playing sports. Let's face it; we probably spend more time with our friends than we do with our families! We tend to dress alike, behave alike, sound alike, and participate in activities we wouldn't otherwise be a part of if it weren't for our friends. Therefore, they do take part in making us who we are as individuals. We feed off one another in turn, forming our personalities. Sometimes I think we even forget who we really are because we do become each other. I have found that there are times you hate your friends as much as you love them. I really feel out of whack when this happens. How could this person I depend on, respect, confide in and love, turn on me? How dare they! That is when you start thinking there is something seriously wrong with you for having them as a friend. How could you ever have trusted them? How could you have let down your guard to confide in them your most personal, deep, dark secrets? They told someone about the new guy you are currently crazy about and now somehow the whole school knows about it! How could they have done that to you after having been your best friend? That's when you wonder what friendship is really all about and is it really worth the trouble! Of course, we all know friends are worth the headache. Where would we be without them? [No matter how angry you become, no matter how frustratingly tiresome that story becomes as they repeat it a sixth time, they are still your friend. No matter how many times they wear that same pair of tired plaid pants you can't stand or wear their hair pulled straight back in a tight pony (which is so unattractive in your opinion), you know deep in your heart, in your inner soul that they are so important to you that life wouldn't be as meaningful without them.] You know you can pleasantly tell them, "Oh yeah, you've already told me about that," and smile. Or you can diplomatically tell them that plaid pants and tight ponies went out a long time ago... or better yet, you could not care what they wear or how they look and only be thankful they are in your life because they bring you warmth and sunshine. Yes, I may have to remind myself that those brilliant, crimson red, plaid trousers radiate an immense amount of embarrassment when we are out together, but then again, who really cares!
So overall, just like Bette Midler sings, "You got to have friends"! Can you imagine how lost we would be without them? I dare to think about it. I can't fathom the idea of an existence without several "lifelines"; buoys that sustain, moorings that steadfastly keep us tuned into reality, that anchor us to what we love and who we are. "A real friend walks in when the rest of the world walks out." How better could it have been said; because in my opinion, that says it all! We love Friends!
|
Mid
|
[
0.6416040100250621,
32,
17.875
] |
Q: How to solve equations with two logarithmic terms? Once again I return with questions about logarithms. This time I am having trouble with solving equations of the following form: $a\cdot \log(t)^{Q} - b\cdot \log(t)^{Z} = R$ I cannot figure out how to solve this equation for $t$. What I do know is the following: taking the exponential on both sides results in $\exp(a\cdot \log(t)^{Q}) = \exp(R+ b\cdot \log(t)^{Z})$ $\iff$ $\exp(a\cdot \log(t)^{Q}) = e^{R}\cdot e^{ b\cdot \log(t)^{Z}}.$ Thanks in advance. A: The "most" you can do is to define $x = \log(t)^Q$; then your equation is $x = \frac{b}{a} x^{\frac{Z}{Q}} + \frac{R}{a}$ That's it: you want the solution to $x = \alpha x^{\beta} + \gamma$ Sadly, the solution to this equation cannot be written in terms of the usual functions. But you can calculate its approximate value for specific values of $\alpha, \beta, \gamma$
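As an illustration of when a closed form does exist (a special case not assumed in the question), take $Q=Z$ with $a\neq b$. The two powers then coincide and the terms combine: $$(a-b)\log(t)^{Q}=R \quad\Longrightarrow\quad t=\exp\!\left(\left(\frac{R}{a-b}\right)^{1/Q}\right),$$ provided the $Q$-th root is defined. For $Q\neq Z$ one falls back to the numerical treatment of $x=\alpha x^{\beta}+\gamma$ described above.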
|
Mid
|
[
0.606924643584521,
37.25,
24.125
] |
Urinary magnesium excretion and risk of hypertension: the prevention of renal and vascular end-stage disease study. Observational studies on dietary or circulating magnesium and risk of hypertension have reported weak-to-modest inverse associations, but have lacked measures of actual dietary uptake. Urinary magnesium excretion, an indicator of intestinal magnesium absorption, may provide a better insight in this association. We examined 5511 participants aged 28 to 75 years free of hypertension in the Prevention of Renal and Vascular End-Stage Disease (PREVEND) study, a prospective population-based cohort study. Circulating magnesium was measured in plasma and urinary magnesium in two 24-hour urine collections, both at baseline. Incident hypertension was defined as blood pressure ≥140 mm Hg systolic or ≥90 mm Hg diastolic, or initiation of antihypertensive medication. During a median follow-up of 7.6 years (interquartile range, 5.0-9.3 years), 1172 participants developed hypertension. The median urinary magnesium excretion was 3.8 mmol/24 hour (interquartile range, 2.9-4.8 mmol/24 hour). Urinary magnesium excretion was associated with risk of hypertension in an inverse log-linear fashion, and this association remained after adjustment for age, sex, body mass index, smoking status, alcohol intake, parental history of hypertension, and urinary excretion of sodium, potassium, and calcium. Each 1-unit increment in ln-transformed urinary magnesium excretion was associated with a 21% lower risk of hypertension after multivariable adjustment (adjusted hazard ratio, 0.79; 95% confidence interval, 0.71-0.88). No associations were observed between circulating magnesium and risk of hypertension. In conclusion, in this cohort of men and women, urinary magnesium excretion was inversely associated with risk of hypertension across the entire range of habitual dietary intake.
|
High
|
[
0.7214854111405831,
34,
13.125
] |
Q: How can I prove this equation? If $a+b=c+d=e+f=\dfrac{\pi}{3}$ and $\dfrac{\sin{a}}{\sin{b}}\cdot\dfrac{\sin{c}}{\sin{d}}\cdot\dfrac{\sin{e}}{\sin{f}}=1$, prove that: $\dfrac{\sin{(2a+f)}}{\sin{(2f+a)}}\cdot\dfrac{\sin{(2e+d)}}{\sin{(2d+e)}}\cdot\dfrac{\sin{(2c+b)}}{\sin{(2b+c)}}=1$ A: Consider an equilateral triangle $ABC$, and let $D$ be on $BC$ so that $\angle{BAD}=a$, so $\angle{DAC}=\frac{\pi}{3}-a=b$. Let $E$ be on $AC$ so that $\angle{CBE}=c$, so $\angle{EBA}=\frac{\pi}{3}-c=d$. Let $F$ be on $AB$ so that $\angle{ACF}=e$, so $\angle{FCB}=\frac{\pi}{3}-e=f$. By the sine version of Ceva's theorem and the given condition $\frac{\sin{a}}{\sin{b}}\cdot \frac{\sin{c}}{\sin{d}}\cdot \frac{\sin{e}}{\sin{f}}=1$, the cevians $AD, BE, CF$ are concurrent at a point, which we shall call $P$. Extend $AD$ to points $A_1, A_2$ s.t. $\angle{A_1CB}=a+f, \angle{A_2BC}=b+c$. We have $\angle{CA_1A}=\pi-\angle{A_1CA}-\angle{A_1AC}=\pi-b-(e+f+a+f)=e$. Similarly $\angle{BA_2A}=d$. Thus triangle $APC$ is similar to triangle $ACA_1$ and triangle $APB$ is similar to triangle $ABA_2$. Therefore $\frac{AA_1}{AC}=\frac{AC}{AP}=\frac{AB}{AP}=\frac{AA_2}{AB}$, so $AA_1=AA_2$, so $A_1=A_2$. Now $$\frac{\sin{(b+2c)}}{\sin{(a+2f)}}=\frac{\frac{A_1P}{\sin{(a+2f)}}}{\frac{A_2P}{\sin{(b+2c)}}}=\frac{\frac{CP}{\sin{e}}}{\frac{BP}{\sin{d}}}=\frac{CP\sin{d}}{BP\sin{e}}$$ Similarly, we get $$\frac{\sin{(d+2e)}}{\sin{(c+2b)}}=\frac{AP\sin{f}}{CP\sin{a}}$$ $$\frac{\sin{(f+2a)}}{\sin{(e+2d)}}=\frac{BP\sin{b}}{AP\sin{c}}$$ so multiplying gives the desired equality $$\frac{\sin{(2a+f)}}{\sin{(2f+a)}}\cdot\frac{\sin{(2e+d)}}{\sin{(2d+e)}}\cdot\frac{\sin{(2c+b)}}{\sin{(2b+c)}}=1$$
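A quick consistency check, not part of the original answer: in the fully symmetric case $a=b=c=d=e=f=\dfrac{\pi}{6}$ the hypothesis $\dfrac{\sin a}{\sin b}\cdot\dfrac{\sin c}{\sin d}\cdot\dfrac{\sin e}{\sin f}=1$ holds trivially, and each factor of the conclusion becomes $$\frac{\sin(2a+f)}{\sin(2f+a)}=\frac{\sin(\pi/2)}{\sin(\pi/2)}=1,$$ so the product is $1$ as claimed.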
|
High
|
[
0.6666666666666661,
37.5,
18.75
] |
--- abstract: 'The Yakutsk array includes the surface scintillation detectors, detectors of the Vavilov-Cherenkov radiation and underground detectors of muons with energies above 1 GeV. All these detector readings are suggested to be used to study the chemical composition of the primary cosmic radiation at ultra-high energies in terms of some model of hadron interactions. The fluxes of electrons, positrons, gammas, Cherenkov photons and muons in individual extensive air showers induced by the primary protons and helium, oxygen and iron nuclei at the level of observation have been estimated with the help of the code CORSIKA 6.616. The thinning parameter ${10}^{-7}$ has been used. Calculations have been carried out in terms of the QGSJET-2 and Gheisha-2002 models. The responses of various detectors are estimated with the help of the code GEANT4. First, energies $E$ and coordinates $X$ and $Y$ of the core of individual extensive air showers with the observed zenith and azimuth angles have been estimated using all surface scintillation detector readings instead of the standard procedure with the parameter $s(600)$. These detector readings have been compared with the detector responses calculated for all particles which hit the scintillation detectors in each individual shower with the observed zenith and azimuth angles. This comparison shows that the values of the function ${\chi}^{2}$ per one degree of freedom change from 1.1 for iron nuclei to 0.9 for primary protons. As this difference is small, all readings of detectors of the Vavilov-Cherenkov radiation have been used. At last, readings of underground detectors of muons with energies above 1 GeV have been exploited to reach a definite conclusion about the chemical composition. The primary gammas are disfavoured due to their large contribution to the signal in the surface scintillation detectors.' author: - '\' title: 'About chemical composition of the primary cosmic radiation at ultra-high energies' --- chemical composition Introduction ============ The study of the chemical composition of the primary cosmic radiation at ultra high energies is of importance. A decrease of the flux of the primary protons at energies above $\sim 6\cdot {10}^{19}$ eV has been predicted by Greisen, Zatsepin and Kuzmin [@1; @2] (the GZK effect) due to interactions of these primary protons with the microwave background radiation. This suppression of the flux of cosmic radiation at the energy mentioned above would not be seen if heavier primaries such as iron nuclei dominated the composition of this cosmic radiation. One more point of great interest is the presence of the primary photons at such ultra high energies. Due to the GZK effect or due to some possible top-down scenarios of the origin of cosmic rays, such primary photons should give some contribution to the flux of the primary cosmic radiation. Searching for these primary photons has resulted in setting some upper limits on the fraction of these photons at various energies [@3] – [@8]. The only key to success in almost all attempts to study the chemical composition is the dependence of the muon number $N_m$ on the energy $E$ of the primary particle which induced an extensive air shower: $$N_m=a\cdot E^b, \label{eq:1}$$ where $a$ and $b$ are constant values and the exponent $b$ does not exceed 1.
It is commonly agreed that photon-induced showers would have a smaller fraction of muons relative to all secondary particles than showers induced by primary protons, due to the small cross sections of photonuclear interactions. On the contrary, showers induced by primary iron nuclei would have a larger fraction of muons. Of course, this is a very model-dependent point. Nevertheless, many attempts have been made to study the chemical composition by comparing distributions of the number of muons [@9] with the appropriate data [@10] – [@12]. Many other suggestions have been made to use time distributions of muons, their production-height distributions, the $X_{max}$ distribution and so on. At ultra-high energies the only variables which can be used to study the chemical composition are the depth $X_{max}$ of the shower maximum (or the curve of the longitudinal development of a shower measured by the fluorescence method), the values of signals in various detectors on the ground and the signals in muon detectors. Again, in this case any conclusions are severely model-dependent. So, it is of primary importance to also study the parameters of particle interactions at ultra-high energies. The energy $E$ of the primary particle which induced a shower should also be known. There are standard methods to estimate the energy $E$ of a shower, but alternative methods of energy estimation are also of interest. It was suggested that readings of all detectors should be compared with calculated signals for a shower with the given values of the zenith and azimuth angles [@13]. Calculations have been carried out for four showers observed at the Yakutsk array (YA) [@14; @15]. It should be mentioned that we use the results of simulations for a sample of individual showers to take into account fluctuations in the longitudinal and lateral development. In this paper we use all readings of the scintillation detectors placed on the ground, the underground muon detectors and the detectors of the Vavilov-Cherenkov radiation. First, the energy of a shower and the coordinates of its axis were estimated with the help of the total signals in the ground scintillation detectors. Then we repeated this procedure for the readings of the detectors of the Vavilov-Cherenkov radiation to check the energy estimate found at the previous step. At last, the muon detector readings have been used. At each step we try to draw some conclusions about the chemical composition of the primary cosmic radiation at ultra-high energies. Method of simulations ===================== Simulations of the individual shower development in the atmosphere have been carried out with the help of the code CORSIKA 6.616 [@16] in terms of the QGSJET2 [@17] and Gheisha 2002 [@18] models with the weight parameter $\epsilon={10}^{-7}$ (thinning). The program GEANT4 [@19] has been used to estimate the signals in the scintillation detectors from the shower electrons, positrons, gammas and muons at different distances from the shower axis. For the same shower, the signals in the detectors of the Vavilov-Cherenkov radiation and in the muon detectors have also been calculated at different distances from the shower axis. Simulations have been carried out for four or for two species of the primary particles (protons and nuclei of helium, oxygen and iron) with a statistics of four individual events for every species of primaries. The energy $E$ of every shower was assumed in the calculations to be equal to the value $E_{exp}$ estimated previously. 
A sample of individual simulated showers induced by the various primary particles has been constructed. These individual showers allow one to take into account fluctuations in the longitudinal development of a shower. Then the ${\chi}^{2}$ method has been used to find out which of the calculated individual showers agrees best with the data. It was assumed, in accordance with the experimental data, that the most energetic shower observed at the YA consists mainly of muons, and their deflections in the geomagnetic field have been taken into account. Readings of all scintillation detectors have been used to search for the minimum of the function ${\chi}^{2}$ in a square with a width of 400 m and a center determined by the data, with a step of 1 m. These readings have been compared with calculated responses which were multiplied by a coefficient $C$. This coefficient changed from 0.1 up to 4.5 with a step of 0.1. Thus, it was assumed that the energy of a shower and the signals in the scintillation detectors are proportional to each other in some small interval. New estimates of the energy $$E=C\cdot E_{exp},~eV \label{eq:2}$$ where the coefficient $C$ shows the difference from the experimental energy estimate $E_{exp}$, together with the coordinates of the axis and the values of the function ${\chi}^{2}$, have been obtained for each individual shower separately for the total signals in the scintillation detectors, the signals in the detectors of the Vavilov-Cherenkov radiation and the signals in the muon detectors. The four extensive air showers observed at the YA have been interpreted with the help of these calculations. Results of the study of the chemical composition ================================================ ![The values of the ${\chi}^{2}$ function per one degree of freedom vs the energy coefficient $C$ for various primaries: a – protons, b – helium nuclei, c – oxygen nuclei, d – iron nuclei[]{data-label="Fig. 1"}](Fig11){width="8cm"} ![Fraction $\alpha$ of the muon contribution to the signal in a scintillation detector for vertical extensive air showers. Points – [@20], stars – [@21]. Calculated curves: 1 – $E_{\mu}=0.3$ GeV, 2 – $E_{\mu}=1$ GeV, 3 – $E_{\mu}=2$ GeV[]{data-label="Fig. 2"}](Fig22){width="8cm"} First, the data of the most energetic shower have been interpreted. Sixteen different energy estimates for the 16 individual simulated showers, with different values of the function ${\chi}^{2}$, have been obtained for the same sample of the 31 experimental readings of the scintillation detectors. Fig. 1 illustrates the dependence of the values of the ${\chi}^{2}$ function per one degree of freedom on the energy coefficient $C$ for four species of the primary particles and four events for every species, as follows: (a) – for the primary protons, (b) – for helium nuclei, (c) – for oxygen nuclei and (d) – for iron nuclei. A systematic decrease of the coefficient $C$ with increasing atomic number of the primary particles can be seen (from $\sim 2$ for the primary protons to $\sim 1.7$ for the primary iron nuclei). As the giant shower is very inclined, muons give the main contribution to the signals in the scintillation detectors. For the iron primaries, which produce more muons than protons, the energy estimates are a factor of 1.3 – 1.5 less than for the proton primaries. The values of the ${\chi}^{2}$ function per one degree of freedom increase from $\sim 0.9$ for the primary protons to $\sim 1.1$ for the primary iron nuclei. Thus, all species of primary particles are possible for this particular shower. 
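The grid search described above is simple enough to summarise in code. The sketch below (Python; the array names and the predicted_signal() helper are illustrative assumptions, not code from this work) scans the core position over the 400 m square with a 1 m step and the energy coefficient $C$ from 0.1 to 4.5 with a step of 0.1, returning the combination that minimises ${\chi}^{2}$ per degree of freedom:

import numpy as np

def fit_core_and_energy(det_xy, measured, sigma, predicted_signal,
                        x0, y0, half_width=200.0, step=1.0):
    # measured, sigma: experimental detector signals and their errors;
    # predicted_signal(det_xy, x, y): simulated responses for a core at (x, y),
    # computed for one individual shower with the observed zenith and azimuth angles.
    C_grid = np.arange(0.1, 4.5 + 1e-9, 0.1)
    ndof = len(measured) - 3               # assuming three fitted parameters: C, x, y
    best = (np.inf, None, None, None)
    for x in np.arange(x0 - half_width, x0 + half_width + step, step):
        for y in np.arange(y0 - half_width, y0 + half_width + step, step):
            s = predicted_signal(det_xy, x, y)
            for C in C_grid:               # signal assumed proportional to energy
                chi2 = np.sum(((measured - C * s) / sigma) ** 2) / ndof
                if chi2 < best[0]:
                    best = (chi2, C, x, y)
    return best                            # the energy estimate is E = C * E_exp, Eq. (2)

The same loop applies unchanged to the readings of the Cherenkov and muon detectors; the three methods of this paper differ only in their inputs.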
It should be mentioned that muons contribute $\sim 80\%$ of the total signal in this inclined shower. Therefore, in this case the readings of the muon detectors do not provide additional information relative to the readings of the scintillation detectors on the ground. So, it is important to find out the contribution of muons to the total signals for vertical showers. Fig. 2 shows the fractions of the total signal which are contributed by muons at a distance of 600 m from the shower axis. The curves 1, 2 and 3 are calculated for muons with the threshold energies 0.3, 1 and 2 GeV, respectively. The experimental data [@20; @21], which were obtained for various zenith angles, are also shown. These data were corrected to vertical showers, but this correction is somewhat uncertain. The data show that the muon contribution to the total signal decreases from $20\%$ at the energy $E={10}^{18}$ eV to nearly $15\%$ at the energy $E=4\cdot {10}^{19}$ eV. The curve 2 calculated for the primary protons shows a change from $\sim 13\%$ to nearly $\sim 9\%$ over the same energy interval. The difference is very large: it exceeds a factor of 1.6, and it is of primary importance. If the data show correct values, then the QGSJET2 [@17] and Gheisha 2002 [@18] models are unable to reproduce the correct values of the muon contribution to the total signal for the primary protons; only the primary iron nuclei may fit the data. So, the conclusion about the energy estimates of the giant shower observed at the YA [@14] should also be made for the primary iron nuclei. Thus, the chemical composition at ultra-high energies can be studied in terms of these models. To make it more definite, we estimated the energies of three more showers with the help of three methods. First, the total signals in the scintillation detectors have been used. Secondly, the readings of the detectors of the Vavilov-Cherenkov radiation have been exploited. At last, the muon detector readings were compared with the appropriate calculated signals. For the first shower, with the energy $E_{exp}=6.5\cdot {10}^{19}$ eV, the coefficient $C$ obtained with the help of the first method happened to be $\sim$ 0.5 – 0.75 for the primary protons and $\sim 0.9$ for the primary iron nuclei, with values of the ${\chi}^{2}$ function per one degree of freedom of $\sim 2.5$ and $\sim 1.4$, respectively. So, the primary iron nuclei are somewhat favoured. With the help of the detectors of the Vavilov-Cherenkov radiation the appropriate coefficients $C$ were found to be $\sim 1.5$ for the primary protons and $\sim 1.1$ for the primary iron nuclei, again with some preference for iron nuclei. As for the signals in the muon detectors, these coefficients $C$ are equal to $\sim 2.2$ and $\sim 1.6$, respectively, with values of the ${\chi}^{2}$ function per one degree of freedom of $\sim 0.5$ and $\sim 0.8$. Similar results have been obtained for the second shower, with the energy $E_{exp}=2.5\cdot {10}^{19}$ eV. For the primary protons and iron nuclei the appropriate coefficients $C$ equal $\sim 0.65$ and $\sim 0.9$, with values of the ${\chi}^{2}$ function per one degree of freedom of $\sim 2.5$ and $\sim 1.7$, for the first method. The second method gave coefficients $C$ of $\sim 1.1$ and $\sim 0.9$ for protons and iron nuclei, respectively. Coefficients $C$ of $\sim 2.6$ and $\sim 1.6$ were obtained with the help of the muon detectors, with values of the ${\chi}^{2}$ function per one degree of freedom of $\sim 1.3$ and $\sim 0.9$, respectively. 
The third shower, with the energy $E_{exp}=5\cdot {10}^{19}$ eV, has coefficients $C$ of $\sim 0.75$ and $\sim 0.6$, with values of the ${\chi}^{2}$ function per one degree of freedom of 3.5 and 3.0, for the first method. The second method gave coefficients $C$ of $\sim 1.1$ and $\sim 0.9$. Unfortunately, for this shower there are no readings of the muon detectors. The new coordinates of the shower axes differ from the experimental ones by a few dozen meters. So we may conclude that in terms of the QGSJET2 [@17] and Gheisha 2002 [@18] models heavy primaries such as iron nuclei are somewhat favoured. Conclusion ========== Three methods have been used to estimate the energies of four extensive air showers by comparing different detector readings with signals calculated for individual events for various primary particles with the help of the code CORSIKA 6.616 [@16] in terms of the QGSJET2 [@17] and Gheisha 2002 [@18] models with the weight parameter $\epsilon={10}^{-7}$ (thinning). The program GEANT4 [@19] has been used to estimate the signals in the scintillation detectors from the shower electrons, positrons, gammas and muons at different distances from the shower axis. It was found that in terms of the QGSJET2 [@17] and Gheisha 2002 [@18] models heavy primaries such as iron nuclei fit the data for the four showers better than the primary protons. It was also stressed that any conclusions are very model-dependent. Thus, to be more confident in the results of the study of the chemical composition, the parameters of particle interactions at ultra-high energies and the energy estimates of showers should be known precisely. Acknowledgements ================ Moscow authors thank RFBR (grant 07-02-01212) and G.T. Zatsepin LSS (grant 959.2008.2) for support.\ Authors from Yakutsk thank RFBR (grant 08-02-00348) for support. [21]{} K. Greisen, Phys. Rev. Lett., [**16**]{}, 748 (1966). G.T. Zatsepin and V.A. Kuzmin, JETP Lett., [**4**]{}, 78 (1966). R.U. Abbasi et al., Astrophys. J., [**636**]{}, 680 (2006). G.I. Rubtsov et al., Phys. Rev., D [**73**]{}, 063009 (2006). A.V. Glushkov et al., Pis'ma v ZhETF, [**85**]{}, iss. 3, 163 (2007). M. Ave et al., Phys. Rev., D [**65**]{}, 063007 (2002). K. Shinozaki et al., Astrophys. J., [**571**]{}, L117 (2002). J. Abraham et al., arXiv:astro-ph/0606619 (2006). L.G. Dedenko, ZhETF, [**46**]{}, No. 5, 1859 (1964). G.B. Khristiansen et al., Proc. 9th ICRC, London, [**2**]{}, 774 (1965). V.A. Atrashkevich et al., Pis'ma v ZhETF, [**33**]{}, 236 (1981). J.N. Stamenov et al., Proc. 15th ICRC, Plovdiv, [**8**]{}, 102 (1977). L.G. Dedenko et al., Proc. 31st ICRC, Lodz, (2009). N. Efimov et al., Proc. Int. Workshop on Astrophysical Aspects of the Most Energetic Cosmic Rays, Kofu, 20 (1990). V.P. Egorova et al., Nucl. Phys. B (Proc. Suppl.), [**136**]{}, 3 (2004). D. Heck et al., Report FZKA 6019 (1998). Forschungszentrum Karlsruhe. http://www-ik.fzk.de/corsika/physics description/corsika phys.html. S.S. Ostapchenko, Nucl. Phys. B (Proc. Suppl.), [**151**]{}, 143 (2006). H. Fesefeldt, Report PITHA-85/02, RWTA, Aachen (1985). The GEANT4 Collaboration, http://www.info.cern.ch/asd/geant4.html. S.P. Knurenko et al., Nucl. Phys. B (Proc. Suppl.), [**151**]{}, 92 (2006). A.V. Glushkov et al., Proc. 28th ICRC, Tsukuba, [**1**]{}, 393 (2003).
|
Mid
|
[
0.56551724137931,
30.75,
23.625
] |
Q: 24" iMac Kernel Panics when booting anything OS X related I have an iMac which suddenly started giving me Kernel Panics every time it boots. I suspected a hardware issue so ran a hardware test, and that found no issues. I tried booting off another hard drive, and several OS X installer discs from Tiger all the way to Snow Leopard, and it's the exact same problem. But it boots fine in Windows, all drivers installed and everything. No issues at all! I can't work out why it always fails with OS X. If anyone can point to any ideas at all I'd really appreciate it, as this is mind boggling. Thanks. A: My first suspicion for symptoms like this is a subtle RAM problem -- since OS X and Windows use RAM somewhat differently, it's plausible that it might crash consistently in one and work (almost) perfectly in the other. I've also seen flaky RAM pass "full" memory tests several times... If you can boot OS X in single-user mode (hold command-S as it begins to boot, and it'll drop you into a very minimal command-line environment), I'd suggest running memtest on it, as I've seen it find memory problems that none of my other test utils (including the ones Apple gives service providers) would catch. Mind you, installing memtest will be a little tricky, since networking and even additional drives/partitions aren't available in single-user mode. Since it sounds like you have an external OS X boot disk, if you can mount that on another Mac, install memtest on it, and then boot the problem Mac from it, that should do it. Another possibility is to try changing the RAM config, to remove possibly flaky RAM -- if the Mac has multiple DIMMs installed, try removing one, then the other. Try moving DIMMs between slots. If you have any spare DIMMs, try those instead of what's in the computer now. Other things to try include booting in safe mode (hold shift as it begins to boot) -- this runs a stripped-down config with (among other things) noncritical kernel extensions disabled, so if it boots that way it may give you some idea where the problem's coming from.
|
High
|
[
0.714681440443213,
32.25,
12.875
] |
Q: How to workaround 'FB is not defined'? Sometimes I'm getting the "FB is not defined" issue when loading http://connect.facebook.net/en_US/all.js I've realized that the problem is that sometimes my website just doesn't load that file. So it gets nothing, and the object FB literally doesn't exist. My solution is to alert my users when this happens, so I've tried the following in JavaScript, but none of it seems to work: if (FB) {/*run the app*/} else {/*alert the user*/} if (FB!==false) {/*run the app*/} else {/*alert the user*/} if (FB!='undefined') {/*run the app*/} else {/*alert the user*/} thanks for the answer! A: I think you should solve the main issue instead, for which a solution is provided by Facebook (Loading the SDK Asynchronously): You should insert it directly after the opening <body> tag on each page where you want to load it: <script> window.fbAsyncInit = function() { FB.init({ appId : 'your-app-id', xfbml : true, version : 'v2.1' }); }; (function(d, s, id){ var js, fjs = d.getElementsByTagName(s)[0]; if (d.getElementById(id)) {return;} js = d.createElement(s); js.id = id; js.src = "//connect.facebook.net/en_US/sdk.js"; fjs.parentNode.insertBefore(js, fjs); }(document, 'script', 'facebook-jssdk')); </script> From the documentation: The Facebook SDK for JavaScript doesn't have any standalone files that need to be downloaded or installed; instead you simply need to include a short piece of regular JavaScript in your HTML that will asynchronously load the SDK into your pages. The async load means that it does not block loading other elements of your page. UPDATE: using the latest code from the documentation. A: Assuming FB is a variable containing the Facebook object, I'd try something like this: if (typeof(FB) != 'undefined' && FB != null ) { // run the app } else { // alert the user } In order to test that something is undefined in plain old JavaScript, you should use the "typeof" operator. The sample you show where you just compare it to the string 'undefined' will evaluate to false unless your FB object really does contain the string 'undefined'! As an aside, you may wish to use various tools like Firebug (in Firefox) to see if you can work out why the Facebook file is not loading. A: I guess you missed putting a semicolon ; at the closing curly brace } of window.fbAsyncInit = function() { ... };
|
Low
|
[
0.516431924882629,
27.5,
25.75
] |
February and March of this year, I experienced a couple of personal events that basically, slapped me in the face and said, What are you waiting for??? Go! Go! So, I did with many blessings of love in the form of financial contributions. I thank you who love the vision of Purple Paradise Resort and contributed in whatever manner you felt would benefit me most. I have also been supported energetically, emotionally and spiritually by so many, and I could feel your strength on the plane as I flew from North Carolina to Belize City. It felt as if the world was moving with me and at 38,000 feet, I transmuted as much dark energy from the world as I could, bringing the darkness up through my feet, using my body as a filter and sent it out into the universe as light to be recycled through the Universe and back.NOW, I wake up in the morning stunned I'm actually here, considering all the times in the last three years, I felt I was standing still and not making any progress toward my vision at all. Three years ago last month, I declared I was going to move to Belize without knowing anything about the country other than many people liked to vacation or honeymoon here. Since then, I've been researching the Government, it's actually a real one, not incorporated, though, not much different than the States, but I prefer to leave that information to others to disperse. At the time I made a very short declaration, I had no idea that my original vision for a community would be expanded to what it is today, and when I get some help with designing, I will certainly publish my visions. I have yet to experiment with AutoCad Architecture and Revit more than I have, so that will be up and coming. I have shared my vision with many people here already and I am amazed at the positive reception. Some want to help build to earn money until we don't need it; others love the concept of clean, organic, off-the-grid living in a loving environment and want to participate. I've taken some pictures of downtown San Ignacio where I'm living near, but my camera won't cooperate--seems well-charged batteries are hard to find here, among other things, but I have a plan or two to remedy the situation. I love going to the market to purchase fresh, locally grown, organic food, which is not the word that is used, the farmers will tell you their produce is "local", which works just fine for me. I understand Belize is fighting the GMO's and I am delighted. Monsanto is a very bad word here, again, much to my delight. The energy is so calm and peaceful, yet intense, possibly due to the high frequency of the rainforest. I feel very energized, but relaxed and calm amidst a couple of situations that normally would have rocked me. I'll share a couple pictures I took of the caseta I'm renting for $450 per month, which includes all utilities, completely furnished (except air-conditioning), has unlimited internet, very nice furniture and swimming pool access. My landlady has become a great friend and we go into town regularly together. Angela Carmen Sanchez, who I met while volunteering for the Fix The World Project, met me as I exited the airport, immigrations, to be exact. Though we'd Skyped many times in the last couple years, it was truly a family reunion. Angela is a sweet and vibrant spirit and I'm proud to call her my siStar--who also coined the word. 
Leaving Angela's with Hector Mar, Owner of Airport Shuttles, taking me from the airport to Angela's and onto San Ignacio, where I will be close to my friends and Realtors, Macarena Rose and Ginny Ophof. The view of Spanish Lookout from my landlady's porch, early evening View of the casetas, one of which I'm renting. The building in the background is the new hospital. The front of my caseta. I have yet to get the hammock hooked up, which appears to be a staple here. Celebrating at JJ's with Barmaid Ariana who created "Purple Paradise" at my request. Fabulous Drink! I wonder if Jerry, one of the owners, will let me have the recipe for the Resort? I'm still acclimatizing and have help coming this week to get a crowdfunding campaign going until the real funding manifests, so that I can keep this project moving along smoothly and help with others. I've experienced many synchronistic events since arriving and it continues to amaze me. I had two such events the other day shortly after arriving in town. I had to get my phone switched from my current servicer to BTL Digicell, and the man who could do this came down the street right then on his motorcycle as we were standing by. I called him an hour or two later to make sure all my data would still be intact, which he said was all taken care of and he was about to drop my phone off at another location, but when I told him where I was, he said he'd be there momentarily; he was just around the corner. I will close for now and promise to get more pictures of San Ignacio, the market and anything else I find you may like seeing. Many changes are taking place whether you can see them or not, all around the world. I feel a huge wave is coming to remove all these constrictions we have felt for so long. I feel that being in Belize now is a sign of all the progress that is taking place, and it will ramp up in speed. Are you ready??? Please remember to visit the Open-Source Technology page and if you'd like to get involved with Purple Paradise Resort, use the form on the Contact Us page to get on the mailing list. I'll be working on a Contributions page, but you can use the form on the Contact Us page and leave a note. Do you have a Vision for a Better World? I invite you to join the Visionary Circle. I'm sending you Lots of Love and Violet Light! See you soon! Nicki Love Linda Livesay 4/6/2014 10:06:49 am Nicki, So happy to see that you made it to Belize. Will be fun to see what unfolds from here. Best to you, Linda from Sharing Springs Community in SC Nicki 4/6/2014 12:04:26 pm Wish we could have met before I left, Linda, but perhaps on Belize! Love you much, Nicki Brenda Diller 4/6/2014 10:35:30 am Congrats, Nikki! I'm so glad to hear you're making your dreams happen one step at a time. Moving to Belize is one of the biggest steps. Nicki 4/6/2014 12:06:02 pm Hi Brenda, I appreciate your love and support! Hope you're enjoying your new place. Send pictures when you can and give Sadie hugs and kisses for me. I love you, Nicki Thomas Maddox 4/6/2014 10:47:42 am Nicki....yowza!...Looks like you landed on your feet!!! Yay for you...some time I would like to come down to Belize and take a break... Good on ya, girl!!! Nicki 4/6/2014 12:07:43 pm Come on down, Thomas, before the building begins or you may not get a much needed rest for awhile. Thank you for your support. A little goes a long way here! Love always, Nicki Thomas 4/8/2014 03:32:38 am Definitely need to come down there...my feet are itching now...!! 
Ajack 4/6/2014 10:53:55 am There are MILLIONS of LOST Souls.........adrift...... It is a pleasure to connect with a [Purple] Soul that can connect with purpose. Thank You; for our paths crossing. Learn of this place Belize... ......that I might see and experience through Thee; til I arrive. Congrads ! ! ! Nicki 4/6/2014 12:09:14 pm Ajack, I wouldn't be here without a purpose and the love of many. Can't wait to see you and I intend to be looking at properties this week, even though, I have one picked out. So very encouraging to hear about your progress and the practical hopes you continue to share with the rest of us for a chance to spread the light and love, the prospect for enjoying our true potentials. Nicki 4/6/2014 12:43:12 pm Rob, Of course I'm going to keep you updated, especially now that I'm here to manifest the resort. Keep sending that great energy!! Love always, Nicki Jodi 4/6/2014 12:25:21 pm Hi Nicki!! I love the pictures, it looks beautiful. You look well, rested and happy!! I am so very happy for you! I love you siStar.... Nicki 4/6/2014 12:44:27 pm Hi SiStar!! So good to hear from you. Can't wait till you come and visit/stay! I love you much, Nicki Wendy S. 4/6/2014 02:27:22 pm Nikki, I am so glad you made it to Belize! And it seems your energy and passion will manifest all you want and need. You are a blessing. Can't wait to see more pictures! -Wendy Nicki 4/8/2014 02:42:45 pm Wendy, thank you for your sweet comments. I look forward to you visiting and in the meantime, I'll be taking more pictures. Thank you so much, Johan. I hope you and Renee will come down and visit when things are rolling along here. I'm sending you lots of love! Please say hello to James for me. Nicki Eve 4/7/2014 01:39:34 am Thanks sooo much for keeping us up to date Nikki. I am so happy for you and you are such an inspiration to all of us who have not made that leap into the new world!! Keep up the good work, keep being an inspiration to all of us and hope to see you soon!!! LOTSA LUV!!! Nicki 4/8/2014 02:50:00 pm Eve, you inspire me to continue through thick and thin. I appreciate your support more than you know. I have many hugs stored up for you dearest siStar. See you on Belize one day, Nicki Vicky 4/7/2014 03:28:46 am Dear Nicki, I'm so happy to see you're happy! Best wish! Love you! Nicki 4/8/2014 02:54:50 pm Hi Vicky, I really appreciate your comment. It's difficult not to be happy here on Belize. I hope to see you one day here for a visit. Much love to you and your family, Nicki Lois frutiger 4/21/2014 03:53:29 am Nicki, missing Belize already, and I have only been home two days! Some set backs, but such is life. Hope to be back within a month! Keep positive! Lois Nicki 4/21/2014 04:06:35 am Hey Lois, I'm missing you too!!! Was hoping you'd be back already so I can have my room with the pool!!! Make sure you get my number from Ginny. See you soon!!!
|
Mid
|
[
0.6096997690531171,
33,
21.125
] |
Subcellular localization of cadmium in the root cells of Allium cepa by electron energy loss spectroscopy and cytochemistry. An ultrastructural investigation of the root cells of Allium cepa L. exposed to 1 mM and 10 mM cadmium (Cd) for 48 and 72 h was carried out. The results indicated that Cd induced several obvious ultrastructural changes, such as increased vacuolation, condensed cytoplasm with increased density of the matrix, reduction of mitochondrial cristae, severe plasmolysis and highly condensed nuclear chromatin. Electron-dense granules appeared between the cell wall and the plasmalemma. In vacuoles, membrane-bound electron-dense granules aggregated and formed larger precipitates, which increased in number and volume as a consequence of excessive Cd exposure. Data from electron energy loss spectroscopy (EELS) confirmed that these granules contained Cd and showed that a significantly higher level of Cd existed in the vacuolar precipitates of meristematic or cortical parenchyma cells of the differentiating and mature roots treated with 1 mM and 10 mM Cd. High levels of Cd were also observed in the crowded electron-dense granules of nucleoli. However, no Cd was found in cell walls or in cells of the vascular cylinder. A positive Gomori-Swift reaction showed that small metallic silver
|
Mid
|
[
0.608490566037735,
32.25,
20.75
] |
USC will get the funds over the next couple of years, said business school dean Joel Smith III, and will use the money to create a capitalism ethics class, a capitalism-focused professorship, a lecture series and a room in the business library dedicated to the works of authors who support free enterprise, such as Ayn Rand. John Allison, chairman and CEO of BB&T, said USC and the bank jointly developed the focus of the endowment. “If you look at a lot of business education programs, they do a good job of teaching people the technical part of business,” Allison said. “But they don’t often explain the philosophical foundations for capitalism, and anybody can make better decisions if they understand the context.” I saw John Allison speak at last summer’s Objectivist Conference, and I think it’s wonderful (and rare) to see a successful CEO defend capitalism. You can see how he applies Objectivism to the corporate philosophy of BB&T at their philosophy page. It has become obvious to any honest individual that the UN is essentially a pulpit for dictators, communists, and looters of all sorts to attack and demand welfare from the few free countries of the world. Lest we forget that UN representatives act on the policies of the nations they represent, here are the nations that voted to support this mass-murdering “spiritual leader:” China, Russia, France, Angola, Chile, Pakistan, Spain, Algeria, Benin, Brazil, and the Philippines. I don’t need to mention what I think of the EU ruling to fine Microsoft €497.2 million ($605 mil U.S.) for having the audacity to make superior software. However, I was curious how the loot would be split up. Turns out that it will be a trickle into the €100bn EU budget, which is allocated as follows: Almost half of this is spent on agricultural aid, for subsidising farmers and their produce, and for improving rural development. The second biggest portion – about one-third – goes on EU funding, which supports the poorer countries in the union. Currently Ireland, Spain, Portugal and Greece benefit most from this fund. Money has also been allocated for the 10 countries set to join the union – some 40bn euros in the first three years of enlargement, in which time these countries will pay 15bn euros into the EU budget. The remainder goes on research and educational programmes, aid to regions outside the EU such as Africa and the Balkans, and administration costs… Yes, nearly 100% of it is welfare. The linked article also mentions that the DOJ is complaining – such a fat cash cow should not be shared that easily. Europeans enslaved other Europeans for centuries before the drying up of that supply led them to turn to Africa as a source of slaves for the Western Hemisphere. Julius Caesar marched in triumph through Rome in a procession that included British slaves he had captured. There were white slaves still being sold in Egypt two decades after blacks were freed in the United States. It was the same story in Asia, Africa, and among the Polynesians and the indigenous peoples of the Western Hemisphere. No race, country, or civilization had clean hands. What makes the current reparations movement a fraud, whether at Brown University or in the country at large, is the attempt to depict slavery as something uniquely done to blacks by whites. Reparations advocates are doing this for the same reason that Willie Sutton robbed banks: That’s where the money is. 
No one expects Qaddafi to pay reparations to the descendants of Europeans whom his ancestors captured on the Mediterranean coast or Western Europeans to pay reparations to Slavs who were enslaved on such a scale that the very word slave derived from their name. Still less does anyone expect Africans to pay reparations to black Americans whose ancestors they sold to white men who took them across the Atlantic. Only in America can guilt be turned into cash. Your leaders, chaired by Sharon, will only bring you destruction. Blood begets blood. The Palestinian people can endure a long struggle and if you think that the confrontation will exhaust it then you are deluded. You will lose. Rantisi said similar things just last year, after he himself survived an Israeli targeted strike: “By G-d we will not leave one Jew alive in Palestine.” … “Sheikh Ahmad Yassin rest in peace. They will never enjoy rest. We will send death to every house, every city, every street in Israel!” What form of “death” did he send to Israel? A 16 [edit:14]-year-old Palestinian with a suicide bomb vest strapped to his body was caught at a crowded West Bank checkpoint Wednesday, setting off a tense encounter with Israeli soldiers whom the army said he was sent to kill. The family of the teenager, identified as Hussam Abdo, said he was gullible and easily manipulated. “He doesn’t know anything, and he has the intelligence of a 12 year old,” said his brother, Hosni. “He told us he didn’t want to die. He didn’t want to blow up,” Milrad said. The military said Abdo’s mission was to kill soldiers at the crowded checkpoint. “In addition to the fact that he would have harmed my soldiers, he would have also harmed the Palestinians waiting at the checkpoint, and there were 200 to 300 innocent Palestinians there,” said the commander of the checkpoint, who identified himself only as Lt. Col. Guy. Several teenagers have carried out suicide bombings over the past 3 1/2 years, and there has been recent concern that militant groups were turning to younger attackers to elude Israeli security checks. Abdo, though 16, looked far younger, and the Israeli military initially said it believed he was 10 years old. But what about all this talk about “cycles of violence?” The number of suicide bombings and the number of victims has dropped, with 142 Israelis killed in 22 bombings in 2003, compared to 214 killed in 53 bombings in 2002. Analysts attributed the drop to Israel’s partially built West Bank barrier, beefed-up intelligence and Hamas leaders’ fear of assassination. I suspect that the terrorist leadership is not nearly as death-happy as they fervently claim – they prefer to send scared little boys to carry out their threats of martyrdom. John McManus of the Scottish Miscarriage of Justice Organisation put his finger on the issue in saying that the government seems “to want to punish people for having the audacity to be innocent.” Well, perhaps that’s no surprise, given that they also want to punish people for the audacity of defending themselves against criminals. Sheik Ahmed Yassin, the founder and spiritual leader of Hamas was just killed in an Israeli airstrike. 
MSNBC was quick to point out that he was a quadriplegic, but not that he had planned countless terrorist attacks, openly assassinated IDF soldiers, and murdered many Palestinian “spies.” Earlier this year, he made his stance clear: “Muslims should threaten Western interests and strike them everywhere.” We should celebrate the fact that Israel has made its anti-terrorist stance clear – and demand that our leaders do likewise.
|
Low
|
[
0.470454545454545,
25.875,
29.125
] |
Q: How could Bernie Sanders stall the $2tn economic rescue package? I'm finding it hard to grasp the current status of the $2 trillion stimulus bill because news reports about the approval process frequently seem to contradict each other. As far as I'm aware, Bernie Sanders threatened to stop the bill after the Senate had already approved it. How would this be possible? What is the bill's current status, and which further steps are necessary to begin the payouts? A: That story is from 2 days ago, before the bill passed, and is no longer relevant. The issue was that 4 Republican senators stalled the bill to try to weaken unemployment benefits. Sanders made a counter-threat to stall the bill if they didn't withdraw their objections, to prevent the Senate from giving in to them. Sanders objected to an amendment proposed on Wednesday afternoon by Senators Ben Sasse (R., Neb.), Lindsey Graham (R., S.C.), and Tim Scott (R., S.C.) that would cap unemployment benefits at a worker’s previous salary level. Sanders, AOC Threaten Delays on $2 Trillion Economic Stimulus - National Review "Unless Republican Senators drop their objections to the coronavirus legislation, I am prepared to put a hold on this bill until stronger conditions are imposed on the $500 billion corporate welfare fund," Sanders declared, shortly after Sens. Lindsey Graham (SC), Tim Scott (SC), and Ben Sasse (NE) threatened to delay the Senate bill. Sanders Threatens to Demand Stronger Conditions on $500 Billion 'Corporate Welfare Fund' If GOP Moves to Reduce Benefits for Laid Off Workers - Common Dreams In the end, a compromise was reached: the Republicans were allowed to propose an amendment to weaken unemployment benefits, it was voted down, and then the bill was passed unanimously. They were given a vote on an amendment to pare back the unemployment benefits but the measure failed on the Senate floor Wednesday shortly before the bill's final passage. Senate Passes $2 Trillion Coronavirus Relief Package - NPR At this point, the bill has passed the Senate and, assuming the House approves it as is (which it is expected to), the Senate has completed its role in its passage.
|
Low
|
[
0.529058116232464,
33,
29.375
] |
Fashion Companies Re-up at 530 Seventh Avenue [Updated] Light Inc., a fashion firm that oversees a number of high-end women’s lines, has signed a seven-year renewal for its 6,700-square-foot space at 530 Seventh Avenue, landlord Savitt Partners announced today. The fashion company, which is in charge of the new line Alice Hope, will continue to occupy its space on part of the 10th floor of the 490,000-square-foot Art Deco building between West 38th and West 39th Streets in the Garment District. The asking rent in the deal was $59 per square foot. Savitt Partners was represented in-house by Brian Neugeboren and Nicole Goetz in the deal. Marc Schoen and Michael Schoen, also of Savitt Partners, represented the tenant. The Savitt Partners agents did not immediately return requests for comment via a spokeswoman. In addition to the deal with Light Inc., Morgan Miller USA, a contemporary luxury brand known for designing women’s dresses, renewed its 4,139-square-foot lease on the 19th floor of the building for five years, Savitt Partners announced. The asking rent in that transaction was $65 per square foot. The building at 530 Seventh Avenue is known to house numerous fashion tenants.
|
Low
|
[
0.48689138576779006,
32.5,
34.25
] |
Localization and dynamics of Mel(1a) melatonin receptor in the ovary of carp Catla catla in relation to serum melatonin levels. We studied the localization, sub-cellular distribution and daily rhythms of a 37 kDa melatonin receptor (Mel(1a)R) in the ovary to assess its temporal relationship with serum melatonin levels in four different reproductive phases of the carp Catla catla. Our immunocytochemical study, accompanied by Western blot analysis of Mel(1a)R in the ovary, revealed that the expression of this 37-kDa protein was greater in the membrane fraction than in the cytosol. Ovarian Mel(1a)R protein peaked at midnight and fell at midday in each reproductive phase. Conversely, serum melatonin levels in the same fish demonstrated a minimum diurnal value at midday in all seasons, but a peak at midnight (during the pre-spawning, spawning, and post-spawning phases) or in the late dark phase (during the preparatory phase). In an annual cycle, the band intensity of Mel(1a)R protein showed a maximum at night in the spawning phase and a minimum in the post-spawning phase, demonstrating an inverse relationship with the levels of serum melatonin. Our data provide the first evidence of the presence of the Mel(1a) melatonin receptor in the carp ovary and offer interesting perspectives, especially for the study of the mechanisms of the control of its rhythmicity and its response to external factors.
|
High
|
[
0.668341708542713,
33.25,
16.5
] |
Gustavus Adolphus Chunn Gustavus Adolphus Chunn was born of English descent in Falkville, Blount County, Alabama, on July 23, 1838, to Lancelot Chunn IV and Lucenda Ann H. Yeager. Attending Blount College in Blountsville, Alabama, G. A. was a very distinguished-looking young man, short in stature. In Cullman County, Alabama, on December 1, 1878, he married Martha Ann Oaks, daughter of Joshua and Mary E. A. J. Holmes of the Morgan and Cullman County areas of Alabama. Gustavus and Martha had a large family of children, including James Lawson, Alvis Walker, Charles Walter, Dora, Benjamin, Mary R., Frederick, George Washington, Mary Alice, Claude E., Hattie, Crawford Alex, Floyd Hebron, Allen, and Horace E. Rev. Chunn was an ordained Baptist minister whose ministry included several area churches. He ministered in Trion, Georgia, in 1905. Then in 1908, he took the part-time pastorate of the newly organized East Lake Baptist Church. G. A.'s picture hung in the library at East Lake Baptist for many years, along with those of the various other pastors who served over the years. In 1910 he added a second part-time pastorate, Ridgedale Baptist Church, which was located further down Dodds Avenue near the McCallie campus. During this time Ridgedale had 52 members, and Rev. Chunn's salary there was $360 a year. In the following year, the well-respected pastor left East Lake to lead Ridgedale full-time. In 1911 the church had 82 members, and his annual salary of $760 was partially paid by the Home Mission Board. In 1912 Rev. Chunn took a pastorate in Rockwood, Tennessee. Two years later, he pastored in Monterey, Tennessee, and in 1916 he retired from active church work. The Rev. Chunn died on December 29, 1928, and he and Martha are buried in Chattanooga Memorial Park on Memorial Drive in White Oak. Several of their descendants remain in the Chattanooga area.
|
Mid
|
[
0.564655172413793,
32.75,
25.25
] |
/* * =================== esrc_uniformBlockObj.h ========================== * -- tpr -- * CREATE -- 2019.09.23 * MODIFY -- * ---------------------------------------------------------- */ #include "pch.h" #include "esrc_uniformBlockObj.h" //-------------------- Engine --------------------// #include "uniformBlockObjs.h" #include "ColorTable.h" #include "esrc_state.h" #include "ubo_all.h" //-------------------- Script --------------------// #include "Script/gameObjs/bioSoup/BioSoupColorTable.h" namespace esrc {//------------------ namespace: esrc -------------------------// namespace ubo_inn {//-------- namespace: ubo_inn --------------// std::unordered_map<ubo::UBOType, std::unique_ptr<ubo::UniformBlockObj>> uboUPtrs {}; }//------------- namespace: ubo_inn end --------------// void init_uniformBlockObjs()noexcept{ {//---------- Seeds ------------// auto uboType = ubo::UBOType::Seeds; GLuint bindPoint = ubo::get_bindPoint(uboType); GLsizeiptr dataSize = static_cast<GLsizeiptr>( sizeof(ubo::UBO_Seeds) ); std::string uboName {"Seeds"}; auto [insertIt, insertBool] = ubo_inn::uboUPtrs.emplace( uboType, std::make_unique<ubo::UniformBlockObj>(bindPoint, dataSize, uboName) ); tprAssert( insertBool ); } {//---------- Camera ------------// auto uboType = ubo::UBOType::Camera; GLuint bindPoint = ubo::get_bindPoint(uboType); GLsizeiptr dataSize = static_cast<GLsizeiptr>( ubo::UBO_Camera_size ); std::string uboName {"Camera"}; auto [insertIt, insertBool] = ubo_inn::uboUPtrs.emplace( uboType, std::make_unique<ubo::UniformBlockObj>(bindPoint, dataSize, uboName) ); tprAssert( insertBool ); } {//---------- Window ------------// auto uboType = ubo::UBOType::Window; GLuint bindPoint = ubo::get_bindPoint(uboType); GLsizeiptr dataSize = static_cast<GLsizeiptr>( sizeof(ubo::UBO_Window) ); std::string uboName {"Window"}; auto [insertIt, insertBool] = ubo_inn::uboUPtrs.emplace( uboType, std::make_unique<ubo::UniformBlockObj>(bindPoint, dataSize, uboName) ); tprAssert( insertBool ); } {//---------- Time ------------// auto uboType = ubo::UBOType::Time; GLuint bindPoint = ubo::get_bindPoint(uboType); GLsizeiptr dataSize = static_cast<GLsizeiptr>( sizeof(ubo::UBO_Time) ); std::string uboName {"Time"}; auto [insertIt, insertBool] = ubo_inn::uboUPtrs.emplace( uboType, std::make_unique<ubo::UniformBlockObj>(bindPoint, dataSize, uboName) ); tprAssert( insertBool ); } {//---------- WorldCoord ------------// auto uboType = ubo::UBOType::WorldCoord; GLuint bindPoint = ubo::get_bindPoint(uboType); GLsizeiptr dataSize = static_cast<GLsizeiptr>( sizeof(ubo::UBO_WorldCoord) ); std::string uboName {"WorldCoord"}; auto [insertIt, insertBool] = ubo_inn::uboUPtrs.emplace( uboType, std::make_unique<ubo::UniformBlockObj>(bindPoint, dataSize, uboName) ); tprAssert( insertBool ); } {//---------- OriginColorTable ------------// auto uboType = ubo::UBOType::OriginColorTable; GLuint bindPoint = ubo::get_bindPoint(uboType); GLsizeiptr dataSize = static_cast<GLsizeiptr>(ColorTable::get_dataSize()); std::string uboName {"OriginColorTable"}; auto [insertIt, insertBool] = ubo_inn::uboUPtrs.emplace( uboType, std::make_unique<ubo::UniformBlockObj>(bindPoint, dataSize, uboName) ); tprAssert( insertBool ); } {//---------- UnifiedColorTable ------------// auto uboType = ubo::UBOType::UnifiedColorTable; GLuint bindPoint = ubo::get_bindPoint(uboType); GLsizeiptr dataSize = static_cast<GLsizeiptr>(ColorTable::get_dataSize()); std::string uboName {"UnifiedColorTable"}; auto [insertIt, insertBool] = ubo_inn::uboUPtrs.emplace( uboType, 
std::make_unique<ubo::UniformBlockObj>(bindPoint, dataSize, uboName) ); tprAssert( insertBool ); } {//---------- GroundColorTable ------------// auto uboType = ubo::UBOType::GroundColorTable; GLuint bindPoint = ubo::get_bindPoint(uboType); GLsizeiptr dataSize = static_cast<GLsizeiptr>( sizeof(FloatVec4) * 20 );// the shader hard-codes a [20] array; an ugly workaround... std::string uboName {"GroundColorTable"}; auto [insertIt, insertBool] = ubo_inn::uboUPtrs.emplace( uboType, std::make_unique<ubo::UniformBlockObj>(bindPoint, dataSize, uboName) ); tprAssert( insertBool ); } {//---------- ColorTableId ------------// auto uboType = ubo::UBOType::ColorTableId; GLuint bindPoint = ubo::get_bindPoint(uboType); GLsizeiptr dataSize = sizeof(colorTableId_t); std::string uboName {"ColorTableId"}; auto [insertIt, insertBool] = ubo_inn::uboUPtrs.emplace( uboType, std::make_unique<ubo::UniformBlockObj>(bindPoint, dataSize, uboName) ); tprAssert( insertBool ); } {//---------- BioSoupColorTable ------------// auto uboType = ubo::UBOType::BioSoupColorTable; GLuint bindPoint = ubo::get_bindPoint(uboType); GLsizeiptr dataSize = sizeof( gameObjs::bioSoup::BioSoupColorTable ); std::string uboName {"BioSoupColorTable"}; auto [insertIt, insertBool] = ubo_inn::uboUPtrs.emplace( uboType, std::make_unique<ubo::UniformBlockObj>(bindPoint, dataSize, uboName) ); tprAssert( insertBool ); } //... esrc::insertState("ubo"); } ubo::UniformBlockObj &get_uniformBlockObjRef( ubo::UBOType type_ )noexcept{ tprAssert( ubo_inn::uboUPtrs.find(type_) != ubo_inn::uboUPtrs.end() ); return *(ubo_inn::uboUPtrs.at(type_)); } }//---------------------- namespace: esrc -------------------------//
|
Mid
|
[
0.545064377682403,
31.75,
26.5
] |
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT. package number import "unicode/utf8" // A system identifies a CLDR numbering system. type system byte type systemData struct { id system digitSize byte // number of UTF-8 bytes per digit zero [utf8.UTFMax]byte // UTF-8 sequence of zero digit. } // A SymbolType identifies a symbol of a specific kind. type SymbolType int const ( SymDecimal SymbolType = iota SymGroup SymList SymPercentSign SymPlusSign SymMinusSign SymExponential SymSuperscriptingExponent SymPerMille SymInfinity SymNan SymTimeSeparator NumSymbolTypes ) type altSymData struct { compactTag uint16 system system symIndex byte }
|
Low
|
[
0.494464944649446,
33.5,
34.25
] |
JACKSON, Michigan – Did Mitt Romney win the Michigan primary? Or did he merely survive it? That really depends on your perspective. As recently as a few days ago, Romney was trailing in the polls. And as recently as Tuesday afternoon, Romney staffers were talking down expectations. But Romney won a clean victory on Tuesday night. He won handily in the Detroit metro area, his home turf, but he also ran strong in more contested counties, like Livingston and Jackson, to the west. But why was it ever this close? Romney had superior money, organization, and, for a long time, name recognition. This state ought to be friendly to him – not because of his family ties, which were never as important as pundits assumed, but because the economy is the biggest issue in Michigan and Romney bills himself as the candidate best positioned to deal with it. Instead, Romney had to fight off an insurgency from Rick Santorum, who courted economically strapped voters by appealing to their cultural values. Romney succeeded, but the exit polls suggested a familiar class divide. Romney won among voters who attended at least some college and those making more than $100,000 a year. But he lost among voters who attended no college and among those making less than $100,000 a year. As New York Times economics guru and Washington bureau chief David Leonhardt tweeted, if you genetically engineered the typical Romney voter, it would be a single Catholic woman older than 65 with a household income of more than $100,000.
|
Mid
|
[
0.583025830258302,
39.5,
28.25
] |
Hégésippe Légitimus Hégésippe Jean Légitimus was born in Pointe-à-Pitre, Guadeloupe on 8th April 1868 and died in Angles-sur-l'Anglin, France, on 29th November 1944, shortly before the end of World War II. He was a socialist politician from Guadeloupe who served in the French National Assembly from 1898–1902 and 1906–1914. In 1793, Jean-Baptiste Belley was the first black man elected to the French Parliament. It would be 105 years before another black man, Hégésippe Légitimus, was elected. Up until 1898 the colonies and territoires d'Outre-Mer had only been represented by white, mixed-race or "béké" deputies. Légitimus was followed shortly afterwards by other black deputies: Gratien Candace, Blaise Diagne, Ngalandou Diouf, Achille René-Boisneuf and Maurice Satineau. He sat in the parliamentary assembly alongside Guesde, Jean Jaurès and Léon Blum, becoming good friends with them. Légitimus was one of the founders of the Parti Ouvrier, the socialist party of Guadeloupe. It was politically aligned with that of mainland France. Légitimus, councillor and mayor of Pointe-à-Pitre, founder of the socialist movement in Guadeloupe, Member of Parliament in Paris, made an indelible mark on French political life at the beginning of the twentieth century. The price of sugar went through the roof during the US Civil War (1861-1865), but it began to fall again in 1870, creating a crisis that set the rationalisation of capital, wealth and production on one hand against the abolition of slave labour, unreliable production, social upheaval and war on the other. The crisis, and the consequent world-wide disruption, continued until 1914, by which time many families had migrated from Guadeloupe to live in mainland France. Socialism rose up in the thoughts and actions of workers, including the former slaves and workers on the sugar plantations, and amongst the intelligentsia. In 1914, the war to end all wars began. Socialist parties quickly grew in number, strength and influence throughout the world. Hégésippe Légitimus was the founder and driving force of the Socialist Party in Guadeloupe. Légitimus also founded the Republican Youth Committee and the Workers Party of Guadeloupe. He established a newspaper called "The People" in 1891. The Workers Party was politically aligned with the socialist left and quickly became popular amongst the Guadeloupeans. It became very popular because it was the first party to defend workers' rights and give a united voice to the black population. Légitimus entered the House of Deputies as the member for Guadeloupe in 1898. He became President of the Council in 1899 and was elected Mayor of Pointe-à-Pitre in 1904. The new order of politics, aligned with that of mainland France and exemplified by Légitimus's socialist credo, attacked the virtual monopoly held by mulattoes in Guadeloupean business and politics. Mulattoes were accused by many people of acting against the interests of the black population. But Légitimus also earned his share of critics, because he was accused of collaborating with "the big end of town" over bank start-up finances needed by small businesses and support capital needed by ongoing, large projects. Economic necessities in the circumstances, as always, might have required a cautious, rather than radical, approach. For a quarter of a century Légitimus was considered the voice of the black movement. Some called him the black Jaurès. 
Jean Jaurès was the famous French socialist, pacifist and intellectual assassinated by a young nationalist war-monger in Montmartre at the beginning of WW1. Légitimus helped open the doors of tertiary education to everyone. He supported the political careers of Gaston Monnerville, the grandson of a slave who had a brilliant legal and political career in France, and Felix Eboue, who was appointed governor of Guadeloupe in 1936. Hégésippe Jean Légitimus was made a Chevalier of the Legion of Honour in 1937. He was obliged to stay in France because of the declaration of war and died in Angles-sur-l'Anglin on 29th November 1944. Following a proposal by General de Gaulle, his remains were returned to Guadeloupe, where he was given a state funeral. Several boulevards in Guadeloupe are named after Hégésippe, and the main one displays his bust, perpetuating the memory of this great black leader and politician. During the commemoration of the sesquicentennial of the abolition of slavery in May 1998 several plaques were unveiled in his memory in front of more than fifty of his descendants. The commemoration was chaired by Gésip Légitimus, a grandson of this exceptional man. Hégésippe's son, Victor-Etienne Légitimus, journalist and husband of the actress Darling Légitimus, created La Solidarite Antillaise (Caribbean Solidarity) to defend the interests of his compatriots. He actively participated in the creation of the Movement Against Racism and For Friendship Amongst Peoples (MRAP) and the International League Against Racism and Anti-Semitism (LICRA). Légitimus wrote in an article in "The People", 4th February 1894: "The free man is made for speaking, as is the bird for singing. Woe be to him if, intelligent, able to be useful to his people, to humanity thanks to his moral and intellectual faculties, he satisfies himself with vegetating miserably between fear and lazy pleasures! We are made for the struggle, and whichever way we choose to direct our faculties, it is an imperious law that impels us to implement them. (...) I want mankind happy and smiling, I want a proclaimed and recognized equality between all and by all. I want the light to be diffused in torrents, profusely; no more ignorant people and no more proletarians! All men reunited as one huge family sharing the air, the sun, the water and the bread, with a kiss." References Category:1868 births Category:1944 deaths Category:People from Pointe-à-Pitre Category:Guadeloupean politicians Category:Republican-Socialist Party politicians Category:Members of the 7th Chamber of Deputies of the French Third Republic Category:Members of the 9th Chamber of Deputies of the French Third Republic Category:Members of the 10th Chamber of Deputies of the French Third Republic Category:Chevaliers of the Légion d'honneur
|
Mid
|
[
0.649237472766884,
37.25,
20.125
] |
Expansion tube An expansion tube is a type of impulse facility that is conceptually similar to a shock tube, with a secondary diaphragm, an expansion section, a test section, and a dump tank where the endwall would be located in a shock tube. It is typically used to produce high-enthalpy flows for testing high-speed aerodynamics, aerodynamic heating and atmospheric reentry, generating short-duration, high-velocity gas flows. The device commonly consists of three sections of tubing aligned in tandem, separated from one another by thin plastic or metal diaphragms. As in an ordinary shock tube, the driver section is initially filled to high pressure with a light gas. The driven section is filled to a lower pressure with the test gas of interest. The third section of tubing, the expansion section, contains a light gas at very low pressure. When the primary diaphragm between the driver and driven sections ruptures, the driver gas expands into the driven section. A shock wave forms and propagates into the test gas, raising the temperature and pressure behind it. The shock travels down the tube, breaks the secondary diaphragm between the driven and expansion sections, and accelerates on entering the expansion section. The shocked test gas is then further accelerated, and cooled, by an unsteady, constant-area expansion from the driven section into the lower-pressure expansion section. References Category:Engineering equipment
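The conditions behind the incident shock can be estimated from the ideal-gas normal-shock relations. The short Python sketch below is illustrative only: the ratio of specific heats and the shock Mach number are assumed example values, not figures from any particular facility.

# Minimal sketch: ideal-gas normal-shock jump conditions for the
# incident shock in the driven section of a shock or expansion tube.
# gamma and Ms are assumed example values.
gamma = 1.4  # ratio of specific heats (air assumed as test gas)
Ms = 6.0     # incident shock Mach number (assumed)

p_ratio = (2.0 * gamma * Ms**2 - (gamma - 1.0)) / (gamma + 1.0)
T_ratio = ((2.0 * gamma * Ms**2 - (gamma - 1.0))
           * ((gamma - 1.0) * Ms**2 + 2.0)) / ((gamma + 1.0) ** 2 * Ms**2)

print(f"p2/p1 = {p_ratio:.1f}")  # pressure ratio across the shock
print(f"T2/T1 = {T_ratio:.1f}")  # temperature ratio across the shock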
|
High
|
[
0.68733153638814,
31.875,
14.5
] |
Medical student illness and impairment: a vignette-based survey study involving 955 students at 9 medical schools. Physician impairment is defined by the presence of a physical, mental, or substance-related disorder that interferes with the ability to practice medicine competently and safely. The seeds of impairment may be sown early in adulthood, and medical students experience health issues that may receive insufficient attention in the context of a rigorous training period. Few empirical studies have examined the attitudes of medical students toward recognizing and acting upon signs of potentially impairing illnesses in peers. Medical students at 9 medical schools were invited to participate in a written survey exploring personal health care issues during training. As part of this larger project, students were asked to imagine their response in 3 situations to a medical student who is discovered to have serious symptoms and potential impairment secondary to mental illness, substance abuse, or diabetes. Responses were gathered from 955 students (52% overall response rate). For all of the vignettes, "tell no one but encourage him/her to seek professional help" was the most prevalent reaction (45%, 53%, and 49%, respectively), as opposed to seeking advice (37%, 35%, and 42%) or notifying the Dean's office (18%, 12%, and 9%). Willingness to report varied by school, and women were somewhat less likely to formally report medical student illness. This study suggests that medical students attach great importance to preserving the confidentiality of fellow medical students who may experience even very severe symptoms. This pattern may have important implications for the early recognition and treatment of potentially impairing disorders. Greater attention to these issues may help assure the health of early career physicians as well as the many patients whose safety and well-being are entrusted to their care.
|
High
|
[
0.6748166259168701,
34.5,
16.625
] |
Abstract We report finding Rickettsia parkeri in Brazil in 9.7% of Amblyomma triste ticks examined. An R. parkeri isolate was successfully established in Vero cell culture. Molecular characterization of the agent was performed by DNA sequencing of portions of the rickettsial genes gltA, htrA, ompA, and ompB. The first reported infection with Rickettsia parkeri was in Amblyomma maculatum ticks in Texas >65 years ago (1). Although its pathogenicity for humans was suspected or speculated upon during the following decades (2), R. parkeri was only recently recognized as a human tickborne pathogen (3). Extensive cross-reactivity exists among spotted fever group rickettsiae, especially between R. rickettsii (the etiologic agent of Rocky Mountain spotted fever [RMSF] and Brazilian spotted fever [BSF]) and R. parkeri. Most of the time, R. rickettsii antigen is the only antigen used in serologic analysis for routine diagnosis of RMSF and BSF. Thus, many human cases of R. parkeri infection may be routinely misidentified as RMSF (2). During the 1990s in Uruguay, several human cases of a tickborne rickettsiosis were diagnosed on the basis of serologic analyses; the spotted fever group organism R. conorii was used as antigen (4). Because R. conorii has never been found in the Western Hemisphere, another spotted fever group rickettsia may have been responsible for the reported cases (5). Because of recent reports of R. parkeri infection among A. triste ticks in Uruguay (where A. triste is the most common human-biting tick), this rickettsia has been suggested as the most probable agent of the Uruguayan spotted fever rickettsiosis (5,6). These data are corroborated by similar clinical findings for both the American spotted fever caused by R. parkeri and Uruguayan spotted fever (2,4). R. parkeri has been reported only in the United States and Uruguay. We report R. parkeri infection of A. triste ticks in Brazil. The Study A. triste ticks were collected in a marsh area (21°07′06.7′′S, 51°46′06.5′′W) in Paulicéia County, state of São Paulo, Brazil. This area harbors a natural population of A. triste, mostly in the natural marsh environment along the Paraná River (7). Marsh deer (Blastocerus dichotomus) have been implicated as primary hosts for the adult stage of A. triste in the area, but the hosts for the immature stages of the tick remain unknown (7). In January 2005, free-living adult A. triste ticks were collected by use of dry ice traps. Collected ticks were taken alive to the laboratory, where they were screened for rickettsial infection by using the hemolymph test with Gimenez staining (8). Immediately after hemolymph was collected, the ticks were stored at –80°C until used for further testing. Ticks with hemolymph test results positive for infection with a Rickettsia-like organism were processed for isolation of Rickettsia in cell culture by using the shell vial technique (9). In brief, Vero cells were inoculated with tick body homogenate and incubated at 28°C. The level of cell infection was monitored by Gimenez staining of scraped cells from the inoculated monolayer; a rickettsial isolate was considered established after 3 passages, each reaching >90% of infected cells (9). 
For molecular characterization, a sample of 100%-infected cells from the fourth Vero cell passage was subjected to DNA extraction and thereafter tested by a battery of PCRs using previously described primer pairs that targeted fragments of the rickettsial genes gltA, htrA, ompA, and ompB (10). Amplified products were purified and sequenced (9) and then compared with National Center for Biotechnology Information (NCBI) nucleotide BLAST searches (www.ncbi.nlm.nih.gov/blast). Tick specimens with hemolymph test results negative for Rickettsia-like organisms were thawed and individually processed for DNA extraction by the guanidine isothiocyanate–phenol technique (11). PCR amplification of a 398-nt fragment of the rickettsial citrate synthase gene (gltA) was attempted on DNA from each tick by using the primers CS-78 and CS-323, which were designed to amplify DNA from all known Rickettsia spp. (9). Tick samples shown by PCR to be positive were tested further by a second PCR, which used the primers Rr190.70p and Rr190.602n, which amplify a 530-nt fragment of most of the spotted fever group Rickettsia (12). PCR products of the expected sizes were purified and sequenced (9) and then compared with NCBI nucleotide BLAST searches. A total of 31 adult specimens of A. triste ticks were collected in January 2005. Specimens from 3 of the 31 ticks contained Rickettsia-like organisms, as determined by the hemolymph test. PCR amplification of the remaining 28 tick specimens was negative for Rickettsia spp. A Rickettsia organism was successfully isolated from only 1 of the 3 ticks with positive hemolymph test results. The isolate, designated as At24, was successfully established in Vero cell culture. PCR performed on DNA extracted from infected cells yielded the expected PCR products for all reactions. After DNA sequencing, the generated sequences of 1093, 489, 479, and 775 nt for the gltA, htrA, ompA, and ompB genes, respectively, showed 100%, 99.8%, 100%, and 100% identity to corresponding sequences of R. parkeri Maculatum strain from the United States (GenBank accession nos. U59732, U17008, U43802, AF123717, respectively). Isolation attempts for the other 2 ticks with positive hemolymph test results failed because of bacterial or fungal contamination. Nevertheless, remnants of ticks used to inoculate Vero cells were subjected to DNA extraction and tested by PCR for the gltA and ompA genes, as described above for ticks. Expected products were obtained from these PCR studies, and the generated sequences were 100% identical to the corresponding sequences of R. parkeri Maculatum strain (GenBank accession nos. U59732 and U43802, respectively). The frequency of R. parkeri infection among ticks examined in this study was 9.7% (3/31). Partial sequences (gltA, htrA, ompA, ompB) from R. parkeri strain At24 generated in this study were deposited into GenBank and assigned nucleotide accession nos. EF102236–EF102239, respectively. Conclusions Our report of R. parkeri infection of ≈10% of A. triste ticks from 1 area in the state of São Paulo highlights the possibility of R. parkeri causing human cases of spotted fever rickettsiosis in Brazil. However, in contrast to Uruguay, Brazil appears to have rare occurrences of A. triste and has never had a report of an A. triste bite in humans. In addition, no human case of spotted fever has been reported from sites within the known distribution area of A. triste in Brazil. On the other hand, an R. parkeri–like agent (strain Cooperi) was recently reported to have infected A. 
dubitatum ticks from a BSF-endemic area in São Paulo (9). Since A. dubitatum is a human-biting tick that is highly prevalent in many BSF-endemic areas (13), it is a potential candidate for transmission of R. parkeri to humans. Spotted fevers caused by R. parkeri and by R. rickettsii differ in 2 ways: in R. parkeri infection, an eschar frequently occurs at the tick bite site, and lymphadenopathy is also seen. Because clinical descriptions of BSF (diagnosed solely by serologic testing that uses R. rickettsii antigen) with these specific clinical signs have been described recently in Brazil (14,15), human infections with R. parkeri may be occurring in this country. These clinical descriptions were from areas with large populations of A. dubitatum but no known occurrence of A. triste. Moreover, because R. rickettsii antigen has been the only antigen regularly used for diagnosis of BSF, human spotted fever cases due to R. parkeri or other spotted fever group rickettsiae may be misidentified as BSF in Brazil. Our study demonstrated an exact concordance between ticks that were positive for Rickettsia-like organisms by the hemolymph test and those that were positive for rickettsial DNA by PCR. Previous studies in our laboratory (9–11) have demonstrated the same results or a slightly higher sensitivity of PCR for detection of rickettsiae in ticks. Dr Silveira is a PhD student at the University of São Paulo. Her research interests have focused on the ecology of tickborne diseases. Acknowledgments We thank David H. Walker for reviewing the manuscript, Pastor Wellington for logistic support during field work, and Márcio B. Castro, Marcos V. Garcia, Viviane A. Veronez, Nancy Prette, Cássio Peterka, and Lucas F. Pereira for their valuable help during field collection of ticks. This work was supported by the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Conselho Nacional de Desenvolvimento Científico e Tecnológico, Brazil. The conclusions, findings, and opinions expressed by authors contributing to this journal do not necessarily reflect the official position of the U.S. Department of Health and Human Services, the Public Health Service, the Centers for Disease Control and Prevention, or the authors' affiliated institutions. Use of trade names is for identification only and does not imply endorsement by any of the groups named above.
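As a supplement to the frequency reported above (3/31 = 9.7%), the sketch below computes the point estimate together with an exact (Clopper-Pearson) 95% confidence interval. The interval is an illustration only; it does not appear in the study.

# Minimal sketch: exact binomial (Clopper-Pearson) confidence interval
# for the observed tick infection frequency of 3 positives out of 31.
from scipy.stats import beta

k, n, alpha = 3, 31, 0.05  # positives, sample size, significance level

point = k / n
lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0

print(f"prevalence = {point:.1%} (95% CI {lower:.1%} to {upper:.1%})")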
|
Low
|
[
0.5284552845528451,
32.5,
29
] |
To Be Six Again A man was sitting on the edge of the bed, observing his wife, looking at herself in the mirror. Since her birthday was not far off he asked what she'd like to have for her birthday. "I'd like to be six again", she replied, still looking in the mirror. On the morning of her birthday, he arose early, made her a nice big bowl of Lucky Charms, and then took her to Six Flags theme park. What a day! He put her on every ride in the park: the Death Slide, the Wall of Fear, the Screaming Monster Roller Coaster, everything there was. Five hours later they staggered out of the theme park. Her head was reeling and her stomach felt upside down. He then took her to a McDonald's where he ordered her a Happy Meal with extra fries and a chocolate shake. Then it was off to a movie, popcorn, a soda pop, and her favorite candy, M&M's. What a fabulous adventure! Finally she wobbled home with her husband and collapsed into bed exhausted. He leaned over his wife with a big smile and lovingly asked, "Well Dear, what was it like being six again?" Her eyes slowly opened and her expression suddenly changed. "I meant my dress size, you retard!!!" The moral of the story: Even when a man is listening, ....he is going to get it wrong.
|
Mid
|
[
0.544090056285178,
36.25,
30.375
] |
Intensified production of microalgae and removal of nutrient using a microalgae membrane bioreactor (MMBR). In the present research, a microalgae membrane bioreactor (MMBR) was constructed by combining an optical panel photobioreactor (OPPBR) and a membrane bioreactor (MBR). Experiments were conducted in a pilot-plant MMBR configuration for 150 days. A biomass productivity of 2.53 g/l/day was achieved, with a light transmittance of 94% at a 300-mm depth in the OPPBR. The total reductions of chemical oxygen demand (COD) and biochemical oxygen demand (BOD) in the MMBR were found to be 96.99 and 97.09%, respectively. Additionally, the removals of total nitrogen (TN), NH4-N, NO3-N, total phosphorus (TP), and PO4-P in the MMBR were 96.38, 99.80, 97.62, 92.75, and 90.84%, respectively. These results indicated that the MMBR process was highly effective for COD, BOD, and nutrient removal when compared to the OPPBR or MBR process alone.
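Removal efficiencies of this kind are calculated from influent and effluent concentrations as (C_in - C_out) / C_in × 100. A minimal sketch with hypothetical concentrations follows; the abstract reports only the resulting percentages, so the input values below are invented.

# Minimal sketch: percent removal across the reactor. The influent and
# effluent COD concentrations used here are hypothetical example values.
def removal_efficiency(c_in: float, c_out: float) -> float:
    """Percent removal: (C_in - C_out) / C_in * 100."""
    return (c_in - c_out) / c_in * 100.0

print(f"COD removal = {removal_efficiency(500.0, 15.0):.2f} %")  # 97.00 %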
|
High
|
[
0.6578947368421051,
31.25,
16.25
] |
The comfortable 3-star Sotetsu Fresa Inn Tokyo-Toyocho provides quick access to a nearby tower, museum and temple. It features a 24-hour reception, 24-hour security and an ironing service, as well as a storage room, a beauty salon and a barber shop. Rooms The 144 modern rooms at Sotetsu Fresa Inn Tokyo-Toyocho feature complimentary Wi-Fi, individual climate control, a personal safe, a dressing room and a writing desk. Each is equipped with an electric kettle, a dishwasher and a fridge. Eat & Drink Guests can dine at Sutekihausu Kamaro and Toyocho La Festa Italy House, both located within a 5-minute walk. Services The property offers conference equipment and a photocopier for business needs.
|
Mid
|
[
0.6000000000000001,
36.75,
24.5
] |
CHEESE LOVERS UNITE ...in your PJs. Did you know that the best way to enjoy cheese is when you are wearing the stretchiest pants you own? You've gotta have somewhere for all that cheesy goodness to go. Do you love Gouda? Oooh, Brie cheese is really good. Or how about some fresh Mozzarella? Swiss, Feta, Gruyere, Creamy Havarti, they are all so good. We could continue but we're pretty sure you get the idea. And... we're also getting pretty hungry and really want some cheese now. So, we got that you love cheese. But do you have those extra stretchy pants we were talking about for that cheese party you are going to? You know, the extra stretchy pants, like these ones. You are missing out without them and you'll be the hit of the party! FUN DETAILS These cheese adult lounge pants are made of 100% polyester. They have an elastic waistband with a fabric drawstring so they will fit before the cheese party and after. To truly make these pants the hit of the party, there is an all-over sublimated cheese print. They almost look good enough to eat! CHEESE PLEASE Cheese party? Check. Got your favorite cheese to share at the party? Check. Do you have the cheese adult lounge pants ordered from FUN.com to arrive in time for your party? We sure hope so. And if you haven't ordered them yet, we are a-okay with waiting for you! No, really, order them right now. We will be here when you get back. Whew. Got them ordered? Good, that's great! Have fun at your cheese party! And we hope you have an equal amount of fun eating that extra fresh mozzarella you got all for yourself. (Hey, we do it too.)
|
Low
|
[
0.39192399049881205,
20.625,
32
] |
The MDGs have provided common goals for the global community to rally around and spurred partners into action. No doubt, there is a lot of work left to do and we need to do better in many areas, but today’s report is a sign that when we act, progress is possible. Here are five facts from the report to give you hope in a better world. FACT: In 1990, an estimated 12.6 million children died before the age of 5. By 2012, that number had been nearly cut in half to an estimated 6.6 million children. FACT: Global actions to prevent and treat malaria averted an estimated 3.3 million deaths, mostly of children under 5, from 2000-2012. FACT: The proportion of people living in extreme poverty was cut in half between 1990 and 2010. FACT: All developing regions in the world have achieved, or are close to achieving, gender parity in primary education. FACT: From 1990-2012, more than 2.3 billion people gained access to an improved source of drinking water. Want to learn more about the MDGs? Visit the UN’s website, un.org/millenniumgoals, and visit our website every Monday between now and Monday, August 18 for our “MDG Mondays” blog series.
|
High
|
[
0.6683544303797461,
33,
16.375
] |
The 2,278 fans at Gurski Stadium witnessed a modern-day classic Saturday night. There were six goals, two red cards, extra-time and the lead changed hands three times. “Eight months we’ve prepared for this match and I’m so excited to celebrate this with the boys.” – Dominick Zator, Foothills FC captain This was Foothills FC’s second trip to the PDL championship game in three years. They fell at the final hurdle in 2016, losing to the Michigan Bucks in the final. But Foothills FC head coach Tommy Wheeldon Jr., who was also the coach in 2016, was overjoyed to see his side cross the finish line this time. “It feels great to be champion,” said Wheeldon following the match. “Couldn’t be prouder of the boys. Showed their character. It was a tough test, being at Reading who were undefeated in their home ground and very humid.” “Ali Musse was exceptional again. Two goals – clutch. Dominick Zator, he scores big goals. He scored the winner in the Western Conference Final in 2016. And Nico Pasquotti’s goal is outrageous. The boys were good. They controlled the game after the first half.” Foothills FC start on the front foot They were rewarded early for their positive play when Ali Musse was fouled deep in the Reading half. Musse gave the visitors the lead when he stepped up himself to crash the resulting free kick into the top corner of Bennet Strutz’s goal from 30 yards. Foothills FC continued to pressure their hosts but were unable to find a second goal. And Reading eventually pulled themselves level when Aaron Molloy scored a free kick on the last kick of the first half. The goal was the first conceded by Calgary in six matches. The first half ended 1-1. Reading United, PDL Eastern Conference and Mid Atlantic Division champions, took the lead in the 78th minute on a header by Kieran Roberts. But the hosts were reduced to 10 men two minutes later when Kamal Miller was sent off after receiving a second yellow card. Dominick Zator quickly capitalized for the visitors with an equalizer from outside the box in the 84th minute. Match enters extra-time Foothills FC were themselves reduced to 10 men when Chris Serban was shown a straight red card for a sliding tackle from behind in the 96th minute. The referee soon after blew the full-time whistle, sending the match to extra-time with the score 2-2. The first half of extra-time was goalless. Foothills FC did have a decent shout for a penalty late in the first half when Dean Northover was possibly sent tumbling by a Reading United tackle in the box. But the referee waved play on. Foothills FC carried most of the play in the second half of extra-time and were worthy of Pasquotti’s and Musse’s late goals in the 119th and 123rd minutes. “Eight months we’ve prepared for this match and I’m so excited to celebrate this with the boys,” said Foothills FC captain Dominick Zator. “Everyone fought so hard in today’s game. To come back after (being) 2-1 down in the 80th minute is just phenomenal.” “I only score big goals I guess,” joked Zator, when asked about his late equalizer that sent the game to extra-time. “I had to do something to make sure we stayed in the game so I did anything I could. And fortunately it went in the net. And the team just carried us forward. 
I’m just extremely proud of the boys.” Cavalry FC and CPL up next for Wheeldon Once the dust settles and the celebrating is over, coach Wheeldon will take up coaching and general manager duties for Cavalry FC, Calgary’s new professional soccer team in the Canadian Premier League, in 2019. He hopes to take some of Foothills FC’s current squad along with him but also expects to see a few of them dotted around the new league in other teams. “It’s a pivotal moment for me, this is my final PDL game with the club,” said Wheeldon. “It’s made a lot of special memories. But we’ve done it to build a pathway. We’ve shown Canadians can play and we’ve shown we can do it with local players.” “And the boys deserve all the credit they get because they’re exceptional. And this bodes well for the future of Canadian soccer.” About Author Editor of Total Soccer Project | Photographer and Writer | Twitter: @StuartGradon | Stuart Gradon is a multimedia professional, having worked at the 2010 FIFA World Cup in South Africa, the 2014 FIFA World Cup in Brazil and the 2015 FIFA Women's World Cup in Canada.
|
Mid
|
[
0.614718614718614,
35.5,
22.25
] |
The year 2015 has been a tough one for Sixers fans, and not simply because the team has won just 17 of the 86 games it’s played over the past 365 days. For the second straight year, Lady Luck did not side with the team on lottery night, as Philadelphia again ended up with the #3 pick in the draft. D’Angelo Russell, the dynamic prospect out of Ohio State whose shooting and playmaking ability made him a perfect fit for the point guard-deprived Sixers, was unexpectedly snatched up by the Lakers with the second-overall pick. And the player who bore the weight of our unrealistically high hopes, Joel Embiid, was forced to undergo another offseason foot surgery after his bone failed to fully heal following his first procedure. But despite all that, there remains a lot to be hopeful about – The potential for the team to hold four first-round picks in this year’s draft, including two that are likely to land in the top five. The fact that the Kings, with whom the Sixers own the right to swap picks in each of the next two years and from whom Philadelphia is owed an unprotected first-round pick in 2018, are a tire fire. Ish Smith is the GOAT. Nerlens is back to being fun Nerlens. Dario Saric is making kids faint and telling reporters he wants to be in Philly ASAP. And Embiid, while still a massive question mark moving forward, now stands 7’2", has a body that resembles that of Dwight Howard, and effortlessly bangs practice treys at a rate that would make Summer League Furkan Aldemir blush. We all hope that 2016 will be less tumultuous than the year that preceded it and that the Sixers will use their stockpile of draft picks, more than $60 million in cap space, and the good old-fashioned, basketball-guy pedigree of Jerry Colangelo to take the next step in their rebuild. But no matter what happens, good or bad, there are some things I want to personally try to stay mindful of over the next year. Below are my 2016 Sixers Resolutions. If you’ve got any to add, post them in the comments section. I resolve… … to do my best not to let this season become all about #TeamJah vs. #TeamNerlens … to do my best not to let this season become all about #TeamSam vs. #TeamJerry … to occasionally take a break from the Sixers when I find myself getting too upset about them … to remember that this team, as currently constructed, is destined to lose roughly seven times as many games as it wins … to not engage @HoopsCritic on Twitter, even when his takes are bad and I want him to feel bad ... to celebrate and do some mild gloating if the Sixers do actually make a significant jump in 2016-17 or make a move to acquire a superstar, but not become an insufferable jerk … to not get irrationally upset about the Sixers not offering [insert D-Leaguer putting up unsustainable numbers or recently waived former lottery pick who is garbage] a 10-day contract … to not buy a jersey unless it's an Embiid All-Star jersey … to remember that any lineup Brett Brown puts on the floor is still going to be a lineup comprised entirely of players on the Sixers … to remain level-headed about Joel Embiid, no matter how chiseled his physique or automatic his corner three become … to not get too attached to any one draft prospect … to not try and convert friends and family who are not trusters of the process … to stop yelling at Isaiah Canaan through my TV set
|
Mid
|
[
0.6036036036036031,
33.5,
22
] |
/**********************************************************\
|                                                          |
|                          hprose                          |
|                                                          |
| Official WebSite: http://www.hprose.com/                 |
|                   http://www.hprose.org/                 |
|                                                          |
\**********************************************************/
/**********************************************************\
 *                                                         *
 * HproseException.java                                    *
 *                                                         *
 * hprose exception for Java.                              *
 *                                                         *
 * LastModified: Oct 28, 2017                              *
 * Author: Ma Bingyao <[email protected]>                  *
 *                                                         *
\**********************************************************/
package hprose.common;

import java.io.IOException;

public class HproseException extends IOException {
    private final static long serialVersionUID = -6146544906159301857L;

    public HproseException() {
        super();
    }

    public HproseException(String msg) {
        super(msg);
    }

    public HproseException(Throwable e) {
        super(e.getMessage());
        initStackTrace(e);
    }

    public HproseException(String msg, Throwable e) {
        super(msg);
        initStackTrace(e);
    }

    private void initStackTrace(Throwable e) {
        setStackTrace(e.getStackTrace());
    }
}
|
Low
|
[
0.5247311827956991,
30.5,
27.625
] |
Naheland The Naheland is the landscape on either side of the river Nahe in the German state of Rhineland-Palatinate. Geography The southern foothills of the Hunsrück and the northern North Palatine Uplands on either side of the Nahe are together described as the "Naheland". The Naheland extends about 80 km from west to east, from the river's source in the Saarland to its mouth on the Rhine at the town of Bingen. While the narrow strip of land in the west is covered by woods and agricultural land, vineyards of the Nahe wine region dominate the wider eastern section. Counties The Naheland lies in the two counties of Birkenfeld and Bad Kreuznach. Culture The Naheland has a rich musical culture consisting of many choirs, wind orchestras, big bands and specialised music groups. Many professional musicians come from this part of the world or work here. Transport The main transport axes of the region run parallel to the Nahe. The B 41 federal road and the non-electrified rail service on the Nahe Valley Railway are of statewide significance. Tourism The term "Naheland" (formerly: "Nahegau") is now increasingly used in marketing the region for tourism; the Naheland tourist office is based in Kirn. For ramblers there are nine circular walks, the so-called "vital tours". Category:Landscapes of Rhineland-Palatinate
|
High
|
[
0.704301075268817,
32.75,
13.75
] |
Gulnara Vygovskaya Gulnara Vygovskaya (born 6 September 1980) is a Russian long-distance runner who specializes in marathon races. She finished twelfth at the 2006 World Road Running Championships, helping the Russian team take fifth place in the team competition. Her personal best time in the half marathon is 1:12:06 hours, achieved in September 2006 in Saransk. Her personal best in the marathon is 2:32:51 hours, achieved in October 2006 in Frankfurt. External links Category:1980 births Category:Living people Category:Russian female long-distance runners
|
Mid
|
[
0.6398891966759,
28.875,
16.25
] |
/*
 * Copyright (c) 2002-2003, Intel Corporation. All rights reserved.
 * Created by: rusty.lynch REMOVE-THIS AT intel DOT com
 * This file is licensed under the GPL license. For the full content
 * of this license, see the COPYING file at the top level of this
 * source tree.

 Test case for assertion #8 of the sigaction system call that verifies
 that if signals in the sa_mask (passed in the sigaction struct of the
 sigaction function call) are added to the process signal mask during
 execution of the signal-catching function.
*/

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>
#include "posixtest.h"

int SIGCHLD_count = 0;

void SIGCHLD_handler(int signo LTP_ATTRIBUTE_UNUSED)
{
	SIGCHLD_count++;
	printf("Caught SIGCHLD\n");
}

void SIGCONT_handler(int signo LTP_ATTRIBUTE_UNUSED)
{
	printf("Caught SIGCONT\n");
	raise(SIGCHLD);
	if (SIGCHLD_count) {
		printf("Test FAILED\n");
		exit(-1);
	}
}

int main(void)
{
	struct sigaction act;

	act.sa_handler = SIGCONT_handler;
	act.sa_flags = 0;
	sigemptyset(&act.sa_mask);
	sigaddset(&act.sa_mask, SIGCHLD);
	if (sigaction(SIGCONT, &act, 0) == -1) {
		perror("Unexpected error while attempting to "
		       "setup test pre-conditions");
		return PTS_UNRESOLVED;
	}

	act.sa_handler = SIGCHLD_handler;
	act.sa_flags = 0;
	sigemptyset(&act.sa_mask);
	if (sigaction(SIGCHLD, &act, 0) == -1) {
		perror("Unexpected error while attempting to "
		       "setup test pre-conditions");
		return PTS_UNRESOLVED;
	}

	if (raise(SIGCONT) == -1) {
		perror("Unexpected error while attempting to "
		       "setup test pre-conditions");
		return PTS_UNRESOLVED;
	}

	printf("Test PASSED\n");
	return PTS_PASS;
}
|
Mid
|
[
0.559241706161137,
29.5,
23.25
] |
1. Field of the Invention This invention is directed generally to magnetic memory devices for storing digital information and, more particularly, to methods and structures for confining magnetic fields produced by these devices. 2. Description of the Related Art The digital memory most commonly used in computers and computer system components is the dynamic random access memory (DRAM), wherein voltage stored in capacitors represents digital bits of information. Electric power must be supplied to these memories to maintain the information because, without frequent refresh cycles, the stored charge in the capacitors dissipates, and the information is lost. Memories that require constant power are known as volatile memories. Non-volatile memories do not need refresh cycles to preserve their stored information, so they consume less power than volatile memories. There are many applications where non-volatile memories are preferred or required, such as in cell phones or in control systems of automobiles. Magnetic random access memories (MRAMs) are non-volatile memories. Digital bits of information are stored as alternative directions of magnetization in a magnetic storage element or cell. The storage elements may be simple, thin ferromagnetic films or more complex layered magnetic thin-film structures, such as tunneling magnetoresistance (TMR) or giant magnetoresistance (GMR) elements. Memory array structures are formed generally of a first set of parallel conductive lines covered by an insulating layer, over which lies a second set of parallel conductive lines, perpendicular to the first lines. Either of these sets of conductive lines can be the bit lines and the other the word lines. In the simplest configuration the magnetic storage cells are sandwiched between the bit lines and the word lines at their intersections. More complicated structures with transistor or diode configurations can also be used. When current flows through a bit line or a word line, it generates a magnetic field around the line. The arrays are designed so that each conductive line supplies only part of the field needed to reverse the magnetization of the storage cells. Switching occurs only at those intersections where both word and bit lines are carrying current. Neither line by itself can switch a bit; only those cells addressed by both bit and word lines can be switched. Magnetic memory arrays can be fabricated as part of integrated circuits (ICs) using thin film technology. As for any IC device, it is important to use as little space as possible. But as packing density is increased, there are tradeoffs to be considered. When the memory cell size is reduced, the magnetic field required to write to the cell is increased, making it more difficult for the bit to be written. When the width and thickness of bit lines and word lines are reduced, there is higher current density, which can cause electromigration problems in the conductors. Additionally, as conducting lines are made closer together, the possibility of cross talk between a conducting line and a cell adjacent to the addressed cell is increased. If this happens repeatedly, the stored magnetic field of the adjacent cell is eroded through magnetic domain creep, and the information in the cell can be rendered unreadable. In order to avoid affecting cells adjacent to the ones addressed, the fields associated with the bit and word lines must be strongly localized. Some schemes to localize magnetic fields arising from conducting lines have been taught in the prior art. 
In U.S. Pat. No. 5,039,655, Pisharody taught a method of magnetically shielding conductive lines in a thin-film magnetic array memory on three sides with a superconducting film. At or near liquid nitrogen temperatures (i.e., below the superconducting transition temperature), superconducting materials exhibit the Meissner effect, in which the material cannot be permeated by an applied magnetic field. While this is effective in preventing the magnetic flux of the conductive line from reaching adjacent cells, its usefulness is limited to those applications where very low temperatures can be maintained. In U.S. Pat. No. 5,956,267, herein referred to as the '267 patent, Hurst et al. taught a method of localizing the magnetic flux of a bottom electrode for a magnetoresistive memory by use of a magnetic keeper. A layered stack comprising barrier layer/soft magnetic material layer/barrier layer was deposited as a partial or full lining along a damascene trench in an insulating layer. Conductive material was deposited over the lining to fill the trench. Excess conductive material and lining layers that were on or extended above the insulating layer were removed by polishing. Thus, the keeper material lined bottom and side surfaces of the bottom conductor, leaving the top surface of the conductor, facing the bit, free of the keeper material. The process of the '267 patent aids in confining the magnetic field of the cell and avoiding cross-talk among bits. A need exists, however, for further improvements in lowering the writing current for a given magnetic field. By lowering the current required to write to a given cell, reliability of the cell is improved. In accordance with one aspect of the invention, a magnetic memory array is provided. The array includes a series of top electrodes in damascene trenches wherein each top electrode is in contact with a top magnetic keeper on at least one outer surface, a series of bottom electrodes arranged perpendicular to the top electrodes and bit regions sensitive to magnetic fields and located between the top electrodes and the bottom electrodes at the intersections of the top electrodes and the bottom electrodes. The bit regions may comprise multi-layer tunneling magnetoresistance (TMR) or giant magnetoresistance (GMR) structures. In accordance with another aspect of the invention, a magnetic memory device is provided in an integrated circuit. The device comprises a bottom electrode over a semiconductor substrate, a bit region sensitive to magnetic fields over the bottom electrode and an upper electrode in a damascene trench in an insulating layer. The upper electrode has a bottom surface facing toward the bit region, a top surface facing away from the bit region and two side surfaces facing away from the bit region. The device also includes a magnetic keeper in contact with at least one surface of the upper electrode. In accordance with another aspect of the invention, a magnetic keeper for a top conductor of a magnetic random access memory (MRAM) device is provided. The magnetic keeper comprises a magnetic layer extending along the sidewalls of the top conductor. There is a barrier layer between the magnetic layer and the surrounding insulating layer. The barrier layer also intervenes between a bottom edge of the magnetic layer and the underlying magnetic storage element. In some embodiments, the top conductor is a conductive word line in a damascene trench and is made of copper. 
The barrier layer may comprise tantalum, and the magnetic layer may comprise cobalt-iron. In accordance with yet another aspect of the invention, a top conductor is provided in a trench in an insulating layer over a magnetic memory device. As part of the top conductor, a magnetic material lining layer is provided along each sidewall of the trench between the conducting material and the insulating layer. The top surface of the lining layer slopes downward from where it meets the insulating layer to where it meets the conducting material. In one embodiment, the top conductor also includes a first barrier layer between the magnetic material lining layer and each sidewall of the trench. The top surface of the first barrier layer slopes downward from where it meets the insulating layer to where it meets the magnetic lining layer. In another aspect, the top conductor also includes a second barrier layer between the magnetic material lining layer and the conducting material. The top surface of the second barrier layer slopes downward from where it meets the magnetic lining layer to where it meets the conducting material. In yet another aspect of the invention, the top conductor also includes a magnetic material top layer across the top surface of the conducting material and in contact with the magnetic material lining layers along the sidewalls of the trench. Additionally, there may be a top barrier layer over at least a central portion of the magnetic material top layer.
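A toy threshold model can illustrate the coincident-current addressing described earlier, in which each conductive line supplies only part of the switching field and only the cell at the intersection of two energized lines switches. The field values below are arbitrary placeholders, not figures from the patent.

# Minimal sketch: half-select addressing. Each energized line contributes
# roughly half the switching field; both must be on for a cell to flip.
# H_LINE and H_SWITCH are arbitrary placeholder values.
H_LINE = 0.6    # field contributed by one energized line
H_SWITCH = 1.0  # field needed to reverse a cell's magnetization

def cell_switches(word_on: bool, bit_on: bool) -> bool:
    """True if the combined field at the cell reaches the threshold."""
    h = H_LINE * word_on + H_LINE * bit_on
    return h >= H_SWITCH

print(cell_switches(True, False))  # False: half-selected cell keeps its bit
print(cell_switches(True, True))   # True: fully selected cell switches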
|
Low
|
[
0.5247933884297521,
31.75,
28.75
] |
A prank group is apparently looking to send an aerial message ahead of Wednesday’s Democratic debate in Las Vegas. Known as P.U.T.I.N., or Pigeons United To Interfere Now, the group shared footage of various pigeons wearing tiny red “Make America Great Again” hats, which are a hallmark of President Donald Trump’s campaign. According to a press release from the group, the pigeons are trained and were released in Downtown Las Vegas, just north of the Paris Theater where the debate is being held. “One of the pigeons, the leader of the flock, was adorned with a small, orange hairpiece, to commemorate that of their leader, President Donald J. Trump,” according to the press release. “The project was the result of months of exhaustive research, logistical hurdles and pigeon care taking.” While the MAGA hat-adorned birds may have taken an early lead as the most notable aerial prank of the 2020 election, others believe that putting small hats on pigeons can be harmful to the birds. In December, an animal rescue organization sought to remove cowboy hats that had been put on the heads of pigeons, also in Las Vegas, over concerns about how the hats may impact the birds, their flight, or their ability to avoid predators. RELATED: Someone is putting tiny cowboy hats on pigeons in Las Vegas as animal rescue works to remove them
|
Low
|
[
0.44421906693711904,
27.375,
34.25
] |
Introduction ============ The Objective Structured Clinical Examination (OSCE) is an established assessment format at most medical schools and is especially suited for evaluating practical clinical skills \[[@R1]\], \[[@R2]\], \[[@R3]\], \[[@R4]\], \[[@R5]\], \[[@R6]\], \[[@R7]\], \[[@R8]\], \[[@R9]\], \[[@R10]\], \[[@R11]\], \[[@R12]\], \[[@R13]\], \[[@R14]\], \[[@R15]\]. An AMEE guideline defines binding standards and metrics for ensuring the quality of OSCEs \[[@R9]\]. The creation of blueprints for both the exam content and the exam format is recommended for all required assessments. A blueprint mapping out exam content and the corresponding stations for the respective subject areas should also form the basis of an OSCE. Based on the blueprint, checklists are created and critically reviewed, and standards are set for performance expectations. A good reliability and inter-rater reliability can be achieved through a sufficient number of OSCE stations, regular standard setting, adaption of the checklists, and regular examiner training. Test statistical analysis of the results should be used to detect problems with the checklists or examiners and to minimize problems by regularly repeating the process described above \[[@R9]\], \[[@R15]\], \[[@R16]\], \[[@R17]\], \[[@R18]\]. Many studies analyze potential factors that influence OSCE scores. These factors take on particular importance when the assessment format is used for a high-stakes exam, as is currently being discussed in Germany in regard to the state medical examinations \[[@R19]\]. Harasym et al. were able to show that stringency or leniency on the part of the examiners can lead to scores that are systematically too high or too low \[[@R13]\]. The student's performance level also appears to influence the reliability of the scores given by examiners. Byrne et al. describe that a good student performance was evaluated with more precision than a borderline performance \[[@R4]\]. Yeates et al. determined in several studies that a good performance was graded higher if the performance immediately prior to it was a poor one \[[@R7]\], \[[@R20]\]. At the same time, a borderline performance was assessed lower if the examiner had observed a good performance immediately before. In addition, the effects on grading as a result of halo effects and a lack of differentiation on the entire grading scale have also been described \[[@R21]\]. Schleicher et al. were able to show in a study encompassing multiple medical schools that student performances were assessed differently by local and central examiners. Simultaneously, a trend was seen toward different grading behavior depending on the genders of the examiners and examinees \[[@R22]\]. All previous studies on potential influencing factors and on quality assurance of the test format are based on analyses of results from live observations or videos of OSCEs. Although these analyses are based on OSCEs that, in general, were preceded by a standardized briefing of the examiners, they were, however, subject to potential influences stemming from the examinees and were not standardized, so that, ultimately, examiner characteristics could not be fully isolated for analysis. A suitable tool does not yet exist to simulate potential influences stemming from the examinee for direct analysis of such influences on examiner behavior and exam results. 
At the same time, no suitable tool has been available to train examiners in a targeted manner regarding the potential limitations concerning the reliability of grading OSCE performance. Simulated patients are now an integral part of medical education and medical assessments. They offer an opportunity to practice physician-patient interactions in a safe environment, and these patients can play an assigned role in a standardized manner. At the same time, it is possible to vary individual parameters, e.g. the simulated patient's reaction or the extent of the disease, to simulate different situations for students \[[@R23]\], \[[@R24]\], \[[@R25]\]. Based on the concept of simulated patients, it was our aim to transfer this concept of standardization to student performance on an OSCE. In the first part of this study, we investigated the possibility of training students to reproduce a defined performance on an OSCE. In the second part, we used the video recordings from the first part to analyze the influence of examiner experience on the grades they assigned for the performances and to evaluate the basic acceptance of standardized examinees by examiners. As a result, there is a new tool for OSCE quality assurance that also enables the identification of individual factors influencing assessment and the targeted training of examiners in the future. Material, methods and students ============================== Twelve students were trained to perform in a standardized manner at three different stations of the OSCE on surgery at the Medical Faculty of the University of Heidelberg. Per station, two students were taught to give a standardized excellent performance and two students to give a standardized borderline performance; there was one female and one male student for each performance level. A student who had been prepared to give an excellent performance at the OSCE abdominal examination station was unable to participate on short notice for health reasons. The score for an excellent performance was defined as the maximum number of possible points on the checklist, minus no more than two points; a borderline performance was the required minimum number of points to pass, plus or minus one point (minimal competency). The lowest passing score for the entire OSCE is the sum total of all minimal competencies on Heidelberg's surgical OSCE. Figure 1 [(Fig. 1)](#F1){ref-type="fig"} illustrates the study design; figure 1A [(Fig. 1)](#F1){ref-type="fig"} describes the first part of the study and figure 1B [(Fig. 1)](#F1){ref-type="fig"} the second. OSCE checklists --------------- Three checklists were selected whose use was already well established in the surgical OSCE and which had undergone repeated internal review. These checklists were for the following OSCE stations:

* Management of a patient with sigmoid diverticulitis;
* Management of a patient with suspected rectal carcinoma;
* Abdominal examination.

All of the checklists had a minimum of 0 and a maximum of 25 points. Each checklist consisted of five items, for which a maximum of five points each could be given. Each item covered a different number of required answers. Minimal competency was defined as the number of points on a checklist necessary to pass. This also defines the minimum expectancy for each station based on the checklists and is routinely reviewed and defined by way of internal standard setting. The minimal competency for the checklists used was 17 points. 
The maximum length of time for the exam was nine minutes per checklist; one minute was given to move between stations. The checklists also listed the grading subcategories (e.g. anamnesis, clinical exam, etc.) and the relevant individual items for assigning points:

* 5 points: all items completed without assistance;
* 3 points: all items completed in full with assistance from the examiner;
* 1 point: items were not fully completed despite assistance from the examiner.

It is clear for each graded category whether points should be given globally for overall impression or on the basis of answers to individual items. Each checklist contains a brief case vignette, a task for each individual item, and the expected answers. Possible questions asked along the way by the examiner are not predefined. The station checklists for *sigmoid diverticulitis and rectal carcinoma* cover the taking of the standardized patient's history (item 1), the determination of differential diagnoses based on the case history details (item 2), the decision which suitable diagnostics should be done in the actual situation (item 3), and for the sigmoid diverticulitis station, the description of a CT image from the patient case. Item 4 on both checklists covers the interaction with the standard patient regarding further diagnostic/therapeutic measures. Item 5 evaluates social competence. This also includes the extent to which the students adequately introduce themselves to the patients, how they behave toward the patients, for instance, if they are able to keep eye contact. The checklist for the abdominal examination station covers the sequential steps to examine a patient with lower abdominal pain on the right side (item 1), checking for signs of peritonitis (item 2), explaining the performance of a digital rectal exam and the findings (item 3), examining the liver (item 4), and examining the spleen (item 5). 
The examiners received the standardized assessment instructions for the surgical OSCE. However, they were instructed not to give any points for the individual items, but rather to tick each possible answer and indicate whether or not it had been given. The examiners were informed only after the OSCE that a standardization of student performance had been undertaken. ### Standardized students All 12 students had already completed the Surgery Block and taken the OSCE on surgery. The Surgical Block lasts for one semester and covers the subjects of visceral, vascular, thoracic and heart surgery, urology, orthopedics and trauma surgery, hand and plastic surgery, along with anesthesiology and emergency medicine. Lectures and seminars on pathology and radiology are integrated into the individual subject disciplines. The students were given the checklists for training. The roles and the expected answers on the modified checklists were discussed in detail with each student. After two weeks to learn the checklists and roles, the test situation was simulated between the students and the study coordinator and corrections were made. As this was done, general challenges were discussed at first and then simulated in real-time as a test situation. Feedback was then given on the necessary changes. ### First part of the study #### Process of standardization In the first part of this study (see figure 1 [(Fig. 1)](#F1){ref-type="fig"}, left A), standardization was carried out in a simulated OSCE that was held under real test conditions (time, time to change stations, etc.). The standardized examinees played their roles three times for three different examiners (one male examiner and two female examiners) and were recorded on video. In a second step, the videos were analyzed quantitatively and qualitatively by the study coordinator using the modified checklists so that there were six evaluations for each student. When carrying out the quantitative analysis, the deviations were counted based on the prescribed answers that were supposed to have been given. The instances in which too many or too few answers were given were counted in relation to the correct number of expected answers. The mean percentages of the deviations were calculated for all OSCE run-throughs (3 test situations) and for the quantitative analysis from the subsequent video analysis. When carrying out the qualitative analysis, the overall impression was evaluated first: The examinee appeared to be authentic (yes/no) and stayed in the standardized role. The following aspects were also evaluated:

* Conduct of the examinee when giving answers (appears confident, unconfident, tends to recite lists);
* Reaction of the examinee to the examiner's behavior/questions (stays in the role, deviates from the prescribed answers, lets him or herself be forced to give answers);
* Reaction of the examinee to the standard patient's behavior/questions (stays in the role, deviates from the prescribed answers, lets him or herself be forced to give answers);
* Conduct of the examiners;
* Conduct of the standard patients.

The study coordinator shared responsibility for the organization of the Surgery Block and had acted as an examiner more than 20 times in surgical OSCEs. In addition, she was experienced in the writing of OSCE checklists and exam questions. This study was carried out within the scope of her master's thesis to attain a Master of Medical Education in Germany (MME-D). 
### Second part of the study #### Analysis of the influence of examiner experience on performance assessment In the second part of the study (see figure 1 [(Fig. 1)](#F1){ref-type="fig"}, right B), the videos were used to investigate the influence of examiner experience on performance assessment and their acceptance of the standardized examinees. Ten experienced and ten inexperienced examiners watched the video recording of the OSCE station on sigmoid diverticulitis. Experienced examiners had participated at least three times in an OSCE and/or had more than five years of clinical experience. Inexperienced examiners were those who had served a maximum of two times as an OSCE examiner and/or had less than five years of clinical experience. The original checklists from the surgical OSCE administered by the Medical Faculty of Heidelberg University were used to grade performance and required the assignment of one to five points for each item. A briefing was held to impart general information on administering the test. The following instructions were given:

* The students perform a specific task which must be evaluated. No detailed information was given regarding the performance levels.
* The evaluation must be made based on what is contained in the checklists.
* Five points may only be assigned for a task if all items were accomplished without assistance.
* Three points may only be assigned for a task if all items were fully completed with the assistance of the examiner.
* One point may be given for a task if it was done incompletely despite the assistance of the examiner.
* Stopping and rewinding the video to view it again was not permitted.
* All four test situations must be viewed in sequence and without interruption.

The examiners were only informed after evaluating the videos that the students had been standardized to perform at a defined level. #### Acceptance of standardized examinees After evaluating all of the test situations, all of the examiners were surveyed to evaluate the acceptance of standardized examinees and their possible uses. The following was asked directly:

* Assessing the performance was easy for me.
* I would find it easier to assess in a real test situation.
* The assessment of the performance was difficult for me.
* The assessment of performance by good examinees was easy for me.
* The assessment of performance by poor examinees was easy for me.
* I find it makes sense to use standardized examinees to prepare inexperienced examiners.
* Training with video recordings (as opposed to training in a simulated OSCE) is sufficient to prepare examiners.
* Inexperienced examiners should be trained using standardized examinees before conducting real assessments.
* Experienced examiners should simulate test situations using standardized examinees.
* Targeted training of examiners using standardized examinees can make the OSCE objective.
* The performance of the standardized examinees was authentic.

The evaluation was done using a five-point Likert scale with 1=*completely disagree* to 5=*completely agree*. #### Statistical analysis Only a purely descriptive and qualitative analysis was carried out for the first part of the study due to the small cohorts and the individual approaches. Further statistical tests were not applied. The OSCE answer sheets were analyzed as to whether too many or too few answers had been given. Later, the study coordinator used the video recordings to analyze which difficulties arose when answering the questions. 
All of the quantitative analyses based on the OSCE checklists and the secondary video analysis were compiled, and the percentages of deviations from the prescribed answers were calculated for all of the evaluations (see table 1 [(Tab. 1)](#T1){ref-type="fig"}). For the second part of the study, the results of the comparison between experienced and inexperienced examiners are presented as mean values with standard deviation, unless otherwise indicated. The quantitative parameters were analyzed using the two-sided t-test. Categorical variables are given as absolute values. Statistical significance was assumed when the p-value was \<0.05. Statistical analysis was carried out using IBM SPSS Statistics 25 software.

Results
=======

First part of study: development of the standardized examinees
--------------------------------------------------------------

### Verification of the standardization -- descriptive analysis

An individual evaluation was carried out at the item level for each examinee. The percentage of deviations in the answers given from the expected number of responses was analyzed based on the standardization. In doing this, all of the evaluations, checklists from the OSCE, and the secondary quantitative video analysis by the study coordinator were compiled. The detailed results can be found in table 1 [(Tab. 1)](#T1){ref-type="fig"}. Only three examinees were analyzed for the *abdominal examination* checklist, since one student was unable to participate in the OSCE for health reasons.

It became clear that students giving a borderline performance, in particular, had problems giving the answers correctly. Their deviations were more distinct than those of the excellent students. On the checklists covering *sigmoid diverticulitis and rectal carcinoma*, the difficulties were few for the excellent students: they gave a low percentage of too few answers. Larger deviations were seen for the borderline students. The largest deviation occurred for items 3 and 4. These items covered the determination of additional diagnostic and therapeutic measures.

At the station on *abdominal examination*, the largest deviation was seen for item 4 among the students giving a borderline performance, in the form of a high percentage of missing answers or incorrectly performed medical examination procedures. For this item, the examinees' examination of the liver was assessed. On this checklist, borderline students showed overall heterogeneous performances with too many and too few answers. Standardized examinees who gave an excellent performance, on the other hand, had a tendency to give too few answers or not to perform individual steps of the medical examination procedure.

#### Assessment of performance by the examiners

All of the examiners, with one exception, had the impression that these were real examinees and indicated they had perceived the standardized examinees as authentic. The excellent performance was recognized as such in all cases. The borderline performance was assessed as borderline six times; in all other run-throughs, however, it was deemed to be a poor performance.

#### Qualitative analysis via video analysis

The qualitative analysis of the OSCE videos revealed a series of aspects that had a limiting effect on the standardization. The examinees showed a certain tendency to recite the expected answers as if they were memorized lists. This applied more to the excellent examinees than to the borderline ones.
Borderline examinees had difficulties staying in their roles, particularly for complex items that required drawing on a diagnostic or therapeutic algorithm, and in not allowing the examiner to push them into giving more than the standardized answers. On the whole, the standardized examinees were able to do this well. At the same time, it was noticed that occasionally the role was exaggerated and, for instance, an intentionally hesitant behavior was acted out in a very pronounced manner. As a result, time became tight in individual test situations.

The conduct of the examiners also influenced the students' acting of their roles and the results of the standardization. As in real assessments, the examiners showed a tendency to repeat questions or give advice on doing individual tasks. Among other things, this increased the difficulty the students faced in consciously not giving answers. Based on the video analysis, it also became clear that one examiner did not award points for answers that were given or examination steps that were performed. In another situation, an examiner evaluated the response of a simulated patient as the answer given by the examinee. Likewise, it was observed that simulated patients actively influenced the assessment by asking their own questions and preventing the students from giving an answer.

### Second part of the study: influence of examiner experience on performance assessment and acceptance of standardized examinees

#### Influence of examiner experience on performance assessment

Ten experienced and ten inexperienced examiners were included in the study, with one female and nine male examiners forming the experienced group and three female and seven male examiners forming the inexperienced group. All of the examiners assessed all of the standardized examinees in one test situation during the OSCE in the first half of the study. The details regarding examiner experience are presented in table 2 [(Tab. 2)](#T2){ref-type="fig"}.

In the assessment of the examinees with excellent performance there was no significant difference between experienced and inexperienced examiners (see table 3 [(Tab. 3)](#T3){ref-type="fig"} and figure 4 [(Fig. 4)](#F4){ref-type="fig"}). In contrast, there was a significant difference between the two groups in their assessments of the borderline examinees. Inexperienced examiners tended to assess the performance lower than their experienced counterparts. Both groups of examiners graded the social competence (item 5), despite identical standardization, lower for the borderline examinees than for the excellent ones (see table 3 [(Tab. 3)](#T3){ref-type="fig"} and figure 5 [(Fig. 5)](#F5){ref-type="fig"}). This difference was statistically significant (4.80 vs. 4.13, p\<0.001).

#### Acceptance of standardized examinees

Both groups of examiners perceived the standardized examinees to be authentic and viewed this new tool as an opportunity to make the OSCE even more objective. Both groups found it easier to assess the performance of good students than of borderline students, but still reported no overall difficulties in assessing student performance. The regular use of standardized examinees to train experienced examiners was favored more by the group of inexperienced examiners than by the experienced group (2.9 vs. 2.0). The detailed results are presented in figure 6 [(Fig. 6)](#F6){ref-type="fig"}.
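The group comparisons above (e.g., social competence rated 4.80 vs. 4.13, p\<0.001) were computed in SPSS. Purely as an illustration, an equivalent two-sided t-test can be run as follows; the two rating vectors are invented placeholders, not the study's data.

```python
# Illustration of the two-sided t-test used for the examiner group comparisons.
# The study used IBM SPSS Statistics 25; scipy provides an equivalent test.
# The two rating vectors below are invented placeholders, NOT study data.
from scipy import stats

experienced =   [4, 5, 4, 4, 5, 4, 4, 5, 4, 4]   # hypothetical item scores
inexperienced = [3, 4, 3, 3, 4, 3, 4, 3, 3, 4]

t, p = stats.ttest_ind(experienced, inexperienced)  # two-sided by default
print(f"t = {t:.2f}, p = {p:.4f}")  # significant if p < 0.05
```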
Discussion
==========

Detailed instructions on how to design, implement and quality-assure an OSCE, together with the resulting good, statistically measurable results, justify the use of this test format to assess and grade practical clinical skills at medical schools \[[@R9]\], \[[@R15]\], \[[@R16]\], \[[@R17]\], \[[@R18]\]. While OSCEs and OSPEs have, to date, been used primarily as internal university-specific assessments, the current discussion on including them in state medical examinations makes the need for widespread standardization very clear \[[@R19]\]. Despite established quality assurance measures, a variety of studies have shown that examiner- and examinee-related factors can influence OSCE scores. Such studies often require extensive staff resources, e.g. independent co-examiners, video analyses, etc. At the same time, it is impossible to eliminate individual influences stemming from examinees and examiners or to standardize these factors satisfactorily.

Our aim was to develop a new tool for OSCE quality assurance by applying the concept of standardization to student performance, an approach that enables the identification of individual factors influencing the grading of student performance. Simultaneously, this new tool is also meant to serve as a strategy for training OSCE examiners in the future.

As part of verifying the standardized examinees, it was demonstrated that it is possible to successfully standardize students to meet a previously defined performance level. The verification of the standardization revealed that deviations occurred in both groups of examinees. Excellent examinees tended more toward giving too few answers and had difficulties not appearing to recite previously memorized lists, while the borderline examinees gave both too few and too many answers. The deviations were overall more distinct for the borderline examinees, indicating that standardization for this performance level is more difficult to achieve. The answers given by borderline examinees deviated in particular for items in which the description of a diagnostic or therapeutic algorithm was required (see table 1 [(Tab. 1)](#T1){ref-type="fig"}). This suggests that increased complexity of the task makes standardization more difficult. Similar observations were made regarding the complex examination procedures on the abdominal examination checklist, where the borderline examinees also deviated from the expected procedural steps (see table 1 [(Tab. 1)](#T1){ref-type="fig"}). In addition to the purely content-based deviations, individual students tended to exaggerate their roles.

Both the content-based deviations by the standardized examinees and the different interpretations of the roles they played suggest that the process of standardization itself and specific training for the roles are essential. In the approach followed here, the students were trained using modified checklists on which, depending upon performance level, each possible answer was predefined and rehearsed, whether it was meant to be given or not. These results indicate that the standardization should be rehearsed in even greater detail. As is the case when training simulated patients \[[@R26]\], it appears sensible to define a larger role in which the performance level or the characteristic being assessed can be embedded. Since the examiners tended to repeat questions precisely for borderline examinees, the students must be specifically trained for such situations.
In particular, attention must be paid to complex tasks and medical examination procedures. Based on the experiences described here, it is wise to let students repeatedly rehearse their roles for verification and to simulate different ways in which examiners intervene in the assessment process, so that the standardized examinees can practice conformity with their assigned roles. Verifying standardization in a nearly real OSCE is another option to check whether standardization has been satisfactorily achieved. Video recording with subsequent analysis by the trainers and standardized examinees represents an additional training strategy.

An obvious limitation of this study is the small number of cases. The study is a pilot project on par with a feasibility study. Future standardization of examinees should take place with more students and in a larger number of test situations than the number analyzed here.

In the second part of the study, the video recordings of the OSCE station addressing the *management of a patient with sigmoid diverticulitis* were used for both standardization levels. The extent to which examiner experience affected the evaluation of examinee performance was investigated. This station was used because its standardization was the most successful. The results of this part of the study show that the two groups of examiners assessed the performance of borderline examinees differently. Inexperienced examiners graded the performance significantly lower and also applied a larger point range in doing so. There are several conceivable explanations for this. Experienced examiners may recognize the performance for what it is and correctly classify it as such. On the other hand, this observation could also indicate that experienced examiners do not use the full grading scale for recognizable performance levels and, as described by Iramaneerat, only apply a restricted range of points \[[@R19]\]. At the same time, this result could also be construed as indicating that inexperienced examiners are, under certain circumstances, less confident in classifying poor performances and thus rate them in a potentially exaggerated manner.

A study by Yeates et al. demonstrated that different examiners focus on different aspects when assessing a performance \[[@R27]\]. The results here could therefore be a sign that, with increasing clinical or assessment experience, the main focus for assigning points is selected unconsciously. It cannot be fully ruled out that the examiners here were subject to a leniency error, which is characterized by a general tendency to rate performances in an extreme manner as poorer or better than they actually are \[[@R13]\]. At the same time, it is possible that the effect described by Yeates et al. was present, in that a borderline performance is rated especially poorly if it is observed directly after an excellent performance \[[@R7]\]. In the design selected here, the first and last performances in the video sequence were borderline performances, leaving only one instance where the constellation identified by Yeates et al. could have occurred.

The lower score assigned to social competence for borderline examinees (4.80 vs. 4.13, p\<0.001), despite identical standardization and identical performance in the verification of standardization, leaves room to presume a halo effect for both examiner groups.
The results of this study suggest that, in terms of a halo effect as described by Iramaneerat et al., the poorer content-based performance leads to a misperception of communication skills \[[@R21]\]. Experienced and inexperienced examiners were affected in equal measure by this, which indicates that even extensive experience as an OSCE assessor cannot negate this effect. The detected differences in the assessment of borderline examinees depending on the examiner's experience suggest that this effect could potentially be decisive for passing or failing an OSCE station. The latter makes it clear that targeted examiner preparation is essential, especially if OSCEs are to be used in future state medical exams.

Another question that should be considered and explored in further studies is whether a difference exists in the grading behavior of experienced examiners depending on whether they have experience as an OSCE assessor, only have extensive clinical experience, or both. The experienced examiners in this study all had more than five years of clinical experience, but their experience as OSCE assessors ranged from two to more than five OSCEs. This aspect was not pursued further since this study is a pilot project with a small number of cases.

The use of videos to carry out such an analysis is not, by itself, a novel approach. Rather, it is the standardized examinees who offer the future possibility of conducting very similar analyses directly in an OSCE, without recourse to video analyses. It is conceivable that standardized examinees could be included as a "quality standard" in an OSCE. The type of training for standardization must be explored and developed further to minimize deviations. Whether it is possible to standardize a student for several checklists remains open.

Conclusions
===========

Standardizing simulated examinees to meet defined performance levels represents a future possibility for directly analyzing influences on the grading behavior of OSCE examiners. Within the scope of high-stakes assessments, especially in regard to the future use of OSCEs in state medical exams, standardized examinees represent, alongside quality assurance, a potential tool to train and prepare OSCE examiners \[[@R19]\].

Competing interests
===================

The authors declare that they have no competing interests.
[The protection of renal function in the ACEI treatment of renal hypertension]. To explore the influence of angiotensin-converting enzyme inhibitors (ACEI) on plasma endothelin (ET-1), nitric oxide (NO) and renal function in patients with renal hypertension. Sixty renal-hypertension patients (Group II) were treated with ACEI (lotensin) for 10 weeks; their blood pressure (BP), plasma ET-1, NO and renal function (BUN, Scr and proteinuria) were measured before and after the treatment. Thirty healthy persons (Group I) acted as controls. The level of plasma ET-1 was higher, and plasma NO lower, in Group II than in Group I. After ACEI treatment, plasma ET-1 and proteinuria decreased (P < 0.01) and NO increased significantly in Group II (P < 0.01), while BUN and Scr decreased in the patients with abnormal renal function (Group II2) (P < 0.05, P < 0.01). The study indicates that ACEI is effective for renal hypertension; it decreases plasma ET-1 and increases NO in renal-hypertension patients; and ACEI may play an important role in protecting renal function and delaying the progression of chronic renal failure.
Warts and All (Scream Queens)

"Warts and All" is the second episode of the second season, and the fifteenth overall, of the horror black comedy series Scream Queens. It was directed by Bradley Buecker and written by series co-creator Brad Falchuk. It premiered on September 27, 2016 on Fox Broadcasting Company. The episode centers on Chad's attempt to win Chanel back while facing competition from Dr. Brock. Meanwhile, Chanel #5 finds love with a patient who has severe warts all over his body. The episode was watched by 1.70 million viewers and received positive reviews from critics.

Plot

Chanel #5 (Abigail Breslin) is being interrogated by a detective regarding Catherine Hobart's decapitation. She is desperate because nobody believes her, especially Chanel (Emma Roberts) and Chanel #3 (Billie Lourd). Meanwhile, a new patient with Neurofibromatosis type I, Tyler (Colton Haynes), is admitted to the hospital, where Dr. Brock (John Stamos) admits that they have a problem: there is a device that could help remove his warts, but it is expensive and the hospital does not have it yet, which makes Tyler desperate.

Zayday (Keke Palmer) grows suspicious that Dean Cathy Munsch (Jamie Lee Curtis) is hiring the Chanels at the hospital to get rid of them one by one, so she enlists Chamberlain (James Earl) to investigate why Cathy built the hospital in the first place. Later, Chanel and Dr. Brock go on a movie date, where all of a sudden his hand uncontrollably grabs Chanel's breast and grips another man's popcorn. Unaffected, Chanel is smitten, and they proceed to kiss.

Chanel is on a night shift when she is chased by the Red Devil. It turns out to be Chad Radwell (Glen Powell) in disguise, and it is revealed that it was he who scared Chanel in the asylum at the end of the first season. He is there to accompany his friend Randall, who has a severe trauma-caused screaming problem, while also planning to win Chanel back. But he soon learns that Chanel is less impressed with his new antics, as she is truly bewitched by Dr. Brock's charm. Chad challenges Dr. Brock to a squash game to fight over Chanel. He is easily beaten by Brock, but after noticing how Brock's right hand behaves strangely, he confronts him, and Brock warns him to back off. Chad hires a private investigator and discovers that Brock's right hand was donated by a squash player who happened to be a serial killer. He goes to confront him.

Zayday and Chamberlain continue their investigation and find out that in 1986 the entire staff of the old hospital (including Dr. Mike (Jerry O'Connell) and Nurse Thomas (Laura Bell Bundy)) were murdered during a Halloween party by someone in the Green Meanie costume, and the killer's identity was never discovered. In the hallway, they bump into Nurse Ingrid Hoffel (Kirstie Alley), who asks Zayday about the Chanels' schedule in order to secretly keep track of them. An unconvinced Zayday is even more repulsed by Ingrid's obvious lies in response to her question. Ingrid angrily tells her off. Later, Zayday confronts Munsch about her plans, but Munsch reveals her true intention in opening the hospital: she is trying to find a cure for herself, as she has an incurable disease and might not have much time left, and she bursts into tears. Zayday tries to find a cure for her, but after some trials deduces that there is no cure and Munsch has only a year to live. Cathy tells her this must be kept a secret, but unknown to them, Ingrid is secretly listening to their conversation.
One night, Cathy is ambushed by the Green Meanie in the hospital hallways and is able to fight it off. Just as she is about to unmask the killer, Dr. Cassidy Cascade (Taylor Lautner) and Chanel #3 arrive and distract her. She is furious because the killer escaped due to their sudden arrival. Realizing Chanel #5's claims are true, she enlists CIA agent Denise Hemphill (Niecy Nash) to help them investigate, and Denise suggests asking Hester Ulrich (Lea Michele), as she was a killer too. Hester demands to be transferred from the highly secure prison to the C.U.R.E. institute and given some discontinued beauty products, and threatens that more killings will occur if they refuse.

Tyler and Chanel #5 grow closer as he comforts her; she tells him she is depressed by the Chanels' treatment of her and feels incapable of getting a boyfriend. When a sympathetic Tyler shows her a photo of himself from before he had the warts, Chanel #5 is stunned to see how handsome he was and decides to raise funds for his surgery. Chanel and Chanel #3 mock #5's fundraising video and tell her that as soon as Tyler is healed, he will only feel the need to date her out of pity, while #5 is unsure. While the two of them are at dinner, Chanel #5 admits that she likes Tyler's personality despite his warts. She suddenly goes into a rampage when two guys insult Tyler. After calming down, she apologizes to him for being uncontrollable, but Tyler is impressed. As they grow closer, Chanel seemingly congratulates #5 for being able to look into someone's soul rather than at their ugly appearance (she was actually congratulating Tyler instead) and announces that she manipulated Chad into donating for Tyler's surgery. The new couple are video-chatting before his supposed surgery, but after the call ends, #5 discovers that his surgery is not scheduled for that day and tells the Chanels about it. In the surgery room, Tyler is shocked to see the Green Meanie instead of the surgeons, and the Green Meanie burns him with the laser that was supposed to be used for the surgery. The Chanels arrive too late, finding him dead. As Chanel #5 mourns his death, Chanel states that they have another serial killer on their hands.

Reception

Ratings

"Warts and All" was watched by 1.70 million viewers and received a 0.7/3 rating in the 18-49 demographic. This was down significantly from the previous episode, "Scream Again." Viewers affected by NF pointed out the many flaws in this episode: they believed that neurofibromatosis was misrepresented, with the tumors it causes referred to as "warts" and the name "Neurofibromatosis" pronounced incorrectly by the actor who played Tyler. The Children's Tumor Foundation made an official statement asking the cast and crew of Scream Queens to educate themselves on NF. Scream Queens did not respond.

Critical reception

Joel Leaver from SpoilerTV gave the episode a positive review, writing, "This episode gave me the laughs, chills and hotness that last week perhaps could've given me more of. Hopefully it'll only get better," while also praising Glen Powell's return. Blaise Hopkins from TVOvermind wrote, "Ultimately, Scream Queens provides a great balance from week to week, giving viewers the humor and lightheartedness they want while also providing a compelling mystery to complement everything else."

References

Category:2016 American television episodes
Category:Scream Queens (2015 TV series) episodes
Category:Television episodes written by Brad Falchuk
Israel successfully tests Arrow 3 missile interceptor

Published 25 February 2013

Image caption: Arrow 3 will extend Israel's multi-layered missile defence shield

Israel's defence ministry says it has carried out a successful test of its new missile interceptor system. Arrow 3 detects an incoming missile, intercepts it and destroys it with a second missile above the earth's atmosphere. Officials said the cutting-edge system had been designed to defend Israel from the threat of a strike from Iran. Monday's test was conducted alongside US forces and was the first time the system had been tested.

Israel already has the Iron Dome missile defence system, which, officials say, intercepted up to 85% of missiles fired from Gaza towards populated areas during the conflict in November 2012. Defence officials say that because the system will be able to shoot down missiles in space, it could cause nuclear and chemical warheads to disintegrate safely. An official said it was "the first time the interceptor with all of its equipment took off and flew, achieved the velocities, and did the manoeuvres in space". He said a full interception would be tested in future but would not say when. The official also declined to say when the system would be fully implemented. Arrow 3 will form part of Israel's multi-layered defence shield.

Iron Dome

During the recent Gaza conflict, the Israeli defence ministry claimed the first layer of that shield, the Iron Dome, knocked down 421 of 1,354 short-range missiles fired from Gaza. Of those that landed, 58 hit urban areas while the rest fell in open fields, causing no damage. Monday's test at an Israeli test range over the Mediterranean had nothing to do with growing regional tension, a defence official said. "In terms of long-term strength and the whole threat that we see of the ballistic Shahab missiles and other types of missiles from Iran, this is a main factor why we developed it and deployed it," he said. "But the date of the test is nothing to do with what's going on."
Brookes, L., & Walsh, G. (2010). When can the limitation period for childbirth claims be extended? Precedent, (96), 44-45.

Abstract

Whether a court can, under the Limitation Act 2005 (WA) (the 2005 Act), permit an infant plaintiff to commence an action under a childbirth claim, where the applicable limitation period under the Limitation Act 1935 (WA) (the 1935 Act) has expired, was recently argued before Stevenson DCJ in the District Court of Western Australia.
226 U.S. 452 (1913)

UBEDA v. ZIALCITA.

No. 77.

Supreme Court of United States.

Submitted December 6, 1912. Decided January 6, 1913.

APPEAL FROM THE SUPREME COURT OF THE PHILIPPINE ISLANDS.

Mr. A.B. Browne, Mr. Alexander Britton, Mr. Evans Browne and Mr. W.A. Kincaid for appellant. No appearance for appellee.

MR. JUSTICE HOLMES delivered the opinion of the court.

The plaintiff and appellant is a manufacturer of gin and sues to restrain the use of a trade-mark like his own *453 and to recover double damages. The trade-mark consists of two concentric circles having the words Ginebra de Tres Campanas and the plaintiff's name between them, and in the centre a device of three bells (Tres Campanas) connected at the top by a ribbon and some ears of grain, with the words Extra Superior under the mouth of the bells. The plaintiff's autograph is reproduced across the middle of the circular space and the bells. More detail is unnecessary; but it may be mentioned that the plaintiff claims title under a grant from the Governor General dated December 16, 1898, and that the mark covered by the alleged grant had underneath the circles the word Amberes (Antwerp), indicating imported gin, while that now used has Manila in the same place and is applied to gin made in the Philippines.

It may be assumed that the defendant's design has a deceptive resemblance to the plaintiff's notwithstanding a change from Tres Campanas to Dos Campanas and the substitution of the defendant's autograph for the plaintiff's. And whether the plaintiff has a title to the mark now used or not, it also may be assumed that he might recover under the Philippine act of March 6, 1903, No. 666, § 4; Compiled Acts, p. 180, § 58, but for the following facts, on which the defendant had judgment in both courts below.

The plaintiff's trade-mark in its turn closely imitates in most particulars a much earlier and widely known trade-mark of Van Den Bergh & Co., of Antwerp. It is true that in the latter there is but one bell, and that the title correspondingly is Ginebra de la Campana, but the intent to get the benefit of the Van Den Bergh device is too obvious to be doubted. We do not go into the particulars of the different registrations, &c., of this latter, beginning with a Spanish certificate to the Antwerp firm in 1873. For although the plaintiff elaborately argues that under the Spanish regime trade-mark rights could be *454 acquired only by statutory registered grant; that Van Den Bergh & Co. never acquired any such rights in the Philippines; that if they did they lost them by failing to register or lapse of time, and that he was free to get a registered title as against any certificate of theirs; those questions are immaterial in this case. With or without right the earlier trade-mark was in widespread use and well known, and the obvious intent and necessary effect of imitating it was to steal some of the good will attaching to it and to defraud the public. The courts below found the fraud and that both plaintiff's and defendant's marks were nothing more than variations upon the earlier mark. In such a case the Philippine act denies the plaintiff's right to recover. Act No. 666, § 9. See § 12, and No. 744, § 4. Compiled Acts, §§ 63, 66.
It is said that to apply the rule there laid down would be giving a retrospective effect to § 9 as against the alleged Spanish grant of December 16, 1898, to the plaintiff, contrary to general principles of interpretation and to Article 13 of the Treaty of Paris, April 11, 1899, providing that the rights of property secured by copyrights and patents shall continue to be respected. But the treaty, if applicable, cannot be supposed to have been intended to contravene the principle of § 9, which only codifies common morality and fairness. The section is not retrospective in any sense, for it introduces no new rule. See Manhattan Medicine Co. v. Wood, 108 U.S. 218. Imposition on the public is not a ground on which the plaintiff can come into court, but it is a very good ground for keeping him out of it. Even if Van Den Bergh & Co. had no registered title and no such other rights under Spanish colonial law as they have under Act No. 666, § 4, the imposition on the public was still there, and though not a matter of which the defendant could complain, it was a matter to which he could refer when the plaintiff sought to exclude him from doing just what the plaintiff had done himself. This *455 certainly would have been our law, and we should assume, if material, that the same doctrine would have prevailed in Spain, in the absence of the clearest proof to the contrary, which we do not find in the record or the brief.

What we have said with reference to the plaintiff's claim under the Treaty applies in substance to his argument that by § 14 of Act No. 666 the Spanish certificate is conclusive evidence of the plaintiff's title. That section must be taken to be subject to general principles of law embodied in other sections to which we have referred. If there was any claim intended to be put forward on the ground of unfair competition, the prayers of the complaint and the plaintiff's testimony show that such claim depended fundamentally on the alleged infringement of trade-mark. Any matters of fact in dispute were sufficiently disposed of by the concurrent findings of the courts below.

Judgment affirmed.
The Ultimate CBD Edibles Guide

The superstar cannabinoid that is CBD is quickly becoming just as popular as THC, and the selection of cannabis-infused edibles on the market is reflecting that shift. Below is a list of lab-tested, delicious CBD edibles that work. Most of the products can be purchased online.

The Kiva CBD chocolate bar is our only entry on the list that has both CBD and THC. This tasty dark chocolate bar has 100mg of CBD and 20mg of THC. I recommend taking only one serving (5mg of CBD and 1mg of THC), waiting about 45 minutes, and seeing how you feel. The servings are premeasured into breakable pieces. Overall, if you struggle with insomnia, the Kiva bar is amazing for sleep. If you are in California, you can Google "CBD chocolate near me" or use the Kiva website to find a location that sells the bars.

Microdosed to suit your needs, the 5mg gluten-free and vegan CBD mints by Mr. Moxey are great for anxiety. Enriched with ginger, basil, chamomile, and lemongrass, you'll definitely love the flavor and effect of the Mr. Moxey CBD mints. We recommend 1-2 mints per serving.

CBD + Pink Grapefruit Gourmet Marshmallows. Stylist magazine called this flavor "The millennial version of After Eights," and The Daily Mail stated, "These marshmallows are truly delicious and melt in the mouth." Handcrafted from fluff to cut, each marshmallow contains 10mg of CBD. At $15.00 per package, the 6-pack of mallows is a great gift.

Each sleep gummy contains 10mg of CBD and 3mg of melatonin. Charlotte's Web recommends taking 1-2 gummies one hour before sleep. I agree with this recommendation; the gummies take about 45 minutes to activate, and once they do, you'll feel it. Your body immediately relaxes after activation.

Enjoy 10mg of CBD and 125mg of Vitamin C per serving. Creating Better Days has formulated a tasty CBD gummy that has become part of my everyday morning routine.

The $14.99 60mg CBD chocolate bar is delicious. Check out the lab report and ingredients here.

The 100mg premium CBD chocolate bar by Rosebud is $22.99. The women of Calivolve and Rosebud CBD have come together to handcraft a decadent, vegan dark chocolate bar made with 100mg of full-spectrum CBD and 60% pure cacao.

Pot D'Huile's Hemp-Infused Olive Oil contains 25mg of CBD oil. The recommended serving size is 5ml, which is about 1 tsp; 1 tsp contains 5mg of CBD.

At 20mg per gummy, the popular Lord Jones gummies are another great gift option. The box contains 9 CBD gummies that are lab-tested, gluten-free and broad-spectrum.

Plus CBD Gummies are extra strength and extra tasty. At 50mg of CBD per gummy, if you need something strong, then I would get Plus. Choose from three flavors: Blueberry, Grapefruit and Blackberry Tea.

The 10-pack Cheeba Chews are $21.99 for 250mg, 10 servings. The sour apple chews are gluten-free, 11 calories per serving and really, really tasty. Recommended if you love sour candy.

Delicious vegan gummies offering 20mg of CBD per serving. The recommended serving size is 2-3 gummies when you feel anxious.

The Velobar CBD: Roasted Nuts, Dark Chocolate, and Sea Salt. $19.99 for 4 bars, 20mg of CBD and 7g of protein per bar.

Santa Cruz Medicinals CBD Coconut Oil. Ingredients: organic coconut oil and CBD. 1000mg per container; 1 tbsp is about 125mg of CBD. You can eat it raw or use it as a massage oil substitute; the possibilities are endless.

The Hakuna Confections mango slices come in a jar of 10 pieces. Each piece has 10mg of broad-spectrum CBD.
Vegan & gluten-free. Ingredients: dried mango and CBD oil.

One of the higher-end brands in the CBD space, To Whom It May is an artistic brand highlighting the lifestyle CBD offers. Packages come in three box sizes of 4, 8 or 24 chocolates, and the chocolates are offered in four different dosages ranging from 2.5mg to 15mg. Experience CBD in a new way.

Wake up! It's time for your CBD! Strava CBD-Infused Coffee comes in a 12 oz. bag. Each bag contains 500mg of CBD, about 20mg per cup. The coffee bean is from Colombia with tasting notes of milk chocolate and black cherry. Brew a pot and see if you like CBD with your coffee.

15mg CBD per serving, 6g of sugar, 6.7 oz (200ml) glass bottle. We recommend buying the 3-pack. The 3-pack makes a great holiday gift and is only $19.95. You get three delicious flavors: Lavender Spice, Rosemary Grapefruit, and Cayenne Citrus.

Wyld CBD gummies come in four flavors; our favorite is the Huckleberry. Each gummy contains 25mg of CBD. The gummies are made with real fruit and are vegan and gluten-free.

Luce Farm's 6 oz. CBD Honey contains 360mg of CBD: 10mg of CBD and 24 calories per tsp. Ingredients: honey, organic coconut oil, and CBD.

Three flavors offered; each can contains 10mg of CBD, natural ingredients, 25 calories and 6g of sugar. Choose between Blackberry Chai, Peach Ginger, and Pomegranate Hibiscus.

Sprig CBD comes in four flavors: Citrus Original (our favorite), Citrus Zero Sugar, Lemon Tea Zero Sugar, and Melon Zero Sugar. The original Citrus contains 110 calories, 25 grams of sugar and 20mg of CBD. The zero-sugar Citrus contains 5 calories, 0 grams of sugar, and 20mg of CBD. $50.00 for a 12-pack.

Not Pot CBD gummies are great for stress or anxiety. The strawberry-flavored gummies contain 10mg of CBD each. 100% vegan.

Fan of cold brews? Balanced Chill's CBD cold brew contains 200mg of caffeine and 20mg of CBD per 8 oz. bottle. We love this product!

10mg of CBD per piece, all-natural organic ingredients, 30-day money-back guarantee.

25mg of CBD per Bite, 4 pieces per bag. Crispy on the outside, soft on the inside. Four flavors to choose from: Chocolate Chip Therapy, Peanut Butter Chocolate Therapy, Pecan Shortbread Therapy, Snickerdoodle Therapy.

15mg of CBD per cookie.

30mg of CBD per bag. Ingredients: sugar, popcorn, peanut oil, maltodextrin, natural flavors, CBD hemp oil, and salt.

Sweet Reason CBD sparkling water comes in three flavors: Cucumber Mint, Grapefruit, and Strawberry Lavender. Ingredients per bottle: carbonated water, natural flavors, and 10mg CBD. 5 calories per bottle and zero grams of sugar.

Welleryou Bites are $50.00 for the 10-pack; each pack contains 5 bites. 140 calories, 9 grams of fat and 25mg of CBD.

S'mores CBD protein bar by JustCBD. Per bar: 25mg of CBD, along with 180 calories, 7g of sugar, 2g of fiber, and 14g of protein. $4/bar.

Super Seed Life CBD Cookies are grain-free, egg-free, dairy-free and non-GMO. 90 calories per cookie, 3g of sugar, 2g of protein and 20mg of CBD each.

Super Seed Life CBD Brownies are grain-free, egg-free, dairy-free and non-GMO. 80 calories per brownie, 3g of sugar, 2g of protein and 20mg of CBD each.

Each jar contains a total of 200mg CBD. Ingredients: sugar, palm oil, hazelnuts, skim milk, cocoa, soy lecithin, vanillin.

CBDifiori Swiss milk chocolate bar. Each bar contains 100mg CBD (20mg CBD per serving).

King Karl 90mg CBD is $20.00. 70% bittersweet, organic, vegan. Ingredients: cacao, cane sugar, pure cacao butter, cacao beans, GMO-free soy lecithin, and broad-spectrum CBD.
Choco Nugs look like real cannabis buds; they're not. Just chocolate and CBD oil. Cool idea.

The JustCBD dried fruit medley contains pineapple, papaya, mango, apricot, and raisins. 3000mg of CBD for $120. This is the perfect trail mix addition to take to work. Stay calm all day.

Each pack contains 250mg of CBD, 7g of carbs and 11g of protein. Choose between two flavors: Original and Teriyaki. Main ingredients: beef, brown sugar, water, salt, corn protein.

2006 Tour de France winner Floyd Landis brings you his take on CBD protein powder. 2 scoops = 1 serving; per serving you get 27g of protein and 25mg of CBD. The 1lb bag is $239.95 for approximately 10 servings.

The SoKo website says it best: try the vinaigrette on a salad, with French bread, or to complement your favorite dish. Expect a full-bodied tangy balsamic with herbal undertones. 100% organic and made in the USA; each bottle contains 250mg CBD.

Rich in antioxidants and healthy fats, use the avocado oil as a substitute for your favorite cooking oil, or simply enjoy it with bread. 100% organic and made in the USA; each bottle contains 250mg CBD.

The 4 oz. bag contains 200mg of CBD and a blend of vegan, gluten-free nuts, seeds, and spices.

25mg of CBD, 12g+ of protein, omega-3 and omega-6, dairy-free, gluten-free, no GMOs. Loaded with a combination of cashews, flavorful lemon zest, and delicious ginger. 25mg of CBD per bar.

CBD Edibles FAQs:

Why Should I Ingest CBD in Edible Form? For many people wishing to consume CBD, edibles are an easy alternative to other routes of administration, such as vaping and smoking. In addition, some patients cannot or simply choose not to consume their marijuana or CBD in whole-leaf form. Some people don't like the taste or have respiratory issues preventing them from smoking or vaping, etc. On average, the effects of CBD consumed through edibles last hours longer than those from smoking or vaporizing, and as CBD is used for medicinal purposes and not recreational ones, the duration of the relief matters a lot. Two thumbs up to CBD edibles for lasting a long time and setting in rather quickly. Plus, edibles provide a fun, new way to consume CBD.

Will CBD Edibles Get You High? No, CBD will not make you high, but it can cause you to fail a drug test. If you are taking CBD from a cannabis plant and not a hemp plant, check the THC percentage. CBD may be a cannabis compound, but many are surprised to find that it does not cause a "high." Instead, CBD offers consumers a mellow, uplifting mood with a sense of positive, alert energy. With its long-lasting nature, CBD can provide prolonged relief from mild pains and daily stresses. CBD is a great option for anyone hoping to enjoy the benefits of cannabis without an over-the-top psychoactive experience.

Are There Different Types of CBD Edibles? Yes, there are three common types of cannabinoid edibles: gastrointestinal, oral, and hybrid. The CBD in gastrointestinal edibles, such as food products or pills, is absorbed through the stomach. Although gastrointestinal edibles may take up to two hours to fully kick in, the effects typically last much longer than those of the other types of edibles and can provide as much as eight hours of relief. Baked edibles like cookies and brownies can last up to 16 hours; oil-based edibles like gummies are immediately absorbed in your mouth and wear off in about 2-4 hours. Check out recommended dosages for a variety of CBD products.

Why is it important to have shorter effects? Edibles are all about place and time.
Brennan Kilbane wrote in GQ that "gummies are the drug you can do at work" because they don't get you high. "And the secret to my happiness is a cannabinoid cousin of THC called CBD."

Brennan continues: "Here's how it works: Every morning when I get into my office, I pop a frosted CBD gummy into my mouth for breakfast. Then, I proceed to have a wonderful day."

"My CBD gummies, meanwhile, live in the top drawer of my desk. When I take one, I feel slightly but markedly better. My chair feels like it is mangling my body less. It's harder to make a fist. It's easier to navigate an hour or two of bullshit, which means it's easier to do my job. It doesn't matter if anybody notices that I am 10% more pleasant, because I feel 10% more pleasant, anyway."

Simply: every day is a better day with CBD edibles.

Are There any Side Effects of CBD? In June 2017, the National Center for Biotechnology Information (NCBI) published "The safety and side effects of cannabidiol: a review of clinical data and relevant animal studies." Here is a summary of what NCBI published:

In general, the often described favorable safety profile of CBD in humans was confirmed and extended by the reviewed research. The majority of studies were performed for treatment of epilepsy and psychotic disorders. Here, the most commonly reported side effects were tiredness, diarrhea, and changes in appetite/weight. In comparison with other drugs used for the treatment of these medical conditions, CBD has a better side effect profile. This could improve patients' compliance and adherence to treatment.

What's the difference between CBD edibles from hemp plants and edibles from cannabis plants? By the simplest definition, there is no difference between CBD obtained from either type of plant. However, there are still differences in the final product of hemp-derived CBD oil vs. marijuana-derived CBD oil. Different methods of processing can also significantly alter the final CBD content and chemical makeup. The difference between THC levels in a hemp Cannabis sativa plant and a marijuana Cannabis sativa plant also plays a large role in determining its legality in the United States. Plants with high levels of THC remain illegal at the federal level, although state laws may vary. This means that marijuana-derived CBD is not legal in all 50 states. On the other hand, hemp-derived CBD remains in a legal grey zone in all 50 states. It contains very low levels of THC, and some believe it falls under the passage of the 2014 Farm Bill. There is still some discrepancy between federal departments on the final legal status of hemp-derived CBD oil.

To start, both [male] hemp and [female] marijuana come from the same "mother" cannabis plant, per the United States Department of Agriculture (USDA). Cannabis sativa can yield a [brother] hemp or a [sister] marijuana. Brother hemp naturally contains a meager amount of THC that warrants a very small legal limit. Hemp can legally have only trace amounts of THC. The highest amount of THC that [brother] hemp can have is 0.3%, less than one-half of one percent. With such low amounts of THC in hemp, you can get away with saying hemp has ZERO psychoactive THC, making this statement closer to the truth than not. If hemp has more than 0.3% THC, it's legally not hemp and would require stricter regulations, like sister marijuana.

If you are looking for a THC-free edible, go with a hemp-based CBD product. And if you want to feel minor effects from THC + CBD, then you need a cannabis-derived CBD product.
How Long Does it Take for CBD Edibles to Kick In? CBDinstead made a nice infographic detailing when CBD kicks in. If you want to learn more about how long each type of CBD edible takes to kick in, then you should check out a recent article I wrote about CBD gummies after I tasted over 15 brands, and why I feel they are the ideal CBD edible. If you are looking for an instant edible with a shorter lifespan, I'd take a gummy; if you want an edible that lasts all day and is mild, I would go with a baked good or a CBD drink.

How Long do CBD Edibles Last? Baked CBD edibles like cookies and brownies can last up to 10 hours; oil-based CBD edibles like gummies are immediately absorbed in your mouth and wear off in about 2-4 hours. My friends and I typically choose gummies or hard candies when we are at work. Why? They kick in fast and peak for about 2.5 hours max. It's a nice way to soothe a tough moment. Baked edibles, on the other hand, are perfect for a day off. They offer longer effects, not necessarily stronger than gummies, but definitely longer and harder to just snap out of. Edibles are perfect for a long "me day."

In my article on why I believe gummies are the perfect edible, I wrote: Being able to control the duration and serving size of your edible is key to when you can take it. Gummies give you the flexibility and freedom to consume CBD anywhere, anytime. Feeling confident that you know when your CBD will wear off changes your entire mindset going into whether or not you should consume a CBD edible.

What Else Do I Need to Know Before I Buy a CBD Edible? After reading this article you should feel confident in your CBD edible knowledge. The key pointers to take away from here are:

1. Some edibles contain both CBD and THC.
2. Make sure your CBD edible is all-natural, not synthetic cannabis.
3. CBD has little to no side effects.
4. CBD edibles kick in fast and can last up to 10 hours.
5. CBD edibles like gummies come pre-measured, making them easy to dose.
6. CBD edibles are popular, safe and effective.
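Since the guide repeatedly converts a package's total CBD content into per-serving amounts (500mg per bag of coffee at roughly 20mg per cup, 1000mg of coconut oil at about 125mg per tablespoon, and so on), here is a minimal sketch of that arithmetic. The serving counts are back-calculated from the listings above and are assumptions, not label data.

```python
# Minimal sketch of the per-serving arithmetic used throughout the guide:
# total CBD in a package divided by the number of servings it contains.

def mg_per_serving(total_mg: float, servings: float) -> float:
    return total_mg / servings

# Figures from the listings above; serving counts are back-calculated assumptions:
print(mg_per_serving(500, 25))   # Strava coffee: 500mg/bag, ~25 cups -> 20.0 mg/cup
print(mg_per_serving(1000, 8))   # CBD coconut oil: 1000mg/jar, ~8 tbsp -> 125.0 mg/tbsp
print(mg_per_serving(250, 10))   # Cheeba Chews: 250mg per 10-pack -> 25.0 mg/chew
```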
t term in 47, 71, 87, 89, 71? 27 What comes next: 551, 1101, 1651, 2201, 2751, 3301? 3851 What is the next term in -82, -319, -724, -1303, -2062, -3007, -4144? -5479 What comes next: 16, 15, 14, 13, 12? 11 What is the next term in 184, 182, 180, 178, 176? 174 What comes next: -1009, -1026, -1043? -1060 What is next in 101, 200, 291, 374? 449 What comes next: -4039, -4048, -4069, -4108, -4171, -4264? -4393 What is the next term in 105, 199, 303, 423, 565? 735 What is next in -24, -92, -210, -384, -620, -924, -1302? -1760 What is next in 10, 25, 50, 85, 130? 185 What is the next term in 14401, 14409, 14431, 14473, 14541, 14641? 14779 What is the next term in 22, 29, 36? 43 What comes next: 62, 259, 588, 1049, 1642? 2367 What comes next: 81, 154, 225, 294, 361? 426 What is the next term in -334, -337, -340, -343? -346 What is next in -452, -902, -1352? -1802 What is next in -61, -58, -41, -4, 59, 154, 287, 464? 691 What is the next term in -28, -46, -40, 2, 92, 242, 464, 770? 1172 What comes next: 5, 1, -3, -13, -35, -75, -139? -233 What comes next: -29, -38, -31, -8, 31? 86 What is next in -8, -25, -42, -59, -76, -93? -110 What is the next term in 120, 260, 424, 612, 824, 1060, 1320? 1604 What comes next: -107, -113, -123, -137, -155, -177, -203? -233 What is next in 13, 76, 137, 196, 253, 308, 361? 412 What is next in 3106, 6213, 9320, 12427, 15534? 18641 What is next in -135, -168, -201? -234 What comes next: 153, 152, 151, 150, 149? 148 What comes next: -5888, -5889, -5890? -5891 What is the next term in -31, -25, -15, -1, 17, 39? 65 What comes next: 405, 808, 1213, 1620, 2029? 2440 What is next in 1446, 1447, 1448, 1449, 1450? 1451 What is next in -3, -31, -125, -321, -655? -1163 What comes next: -43, -146, -325, -586, -935, -1378, -1921, -2570? -3331 What comes next: 150, 327, 506, 687, 870, 1055, 1242? 1431 What comes next: -33, -99, -207, -357, -549? -783 What is the next term in -935, -1857, -2767, -3659, -4527, -5365, -6167, -6927? -7639 What comes next: 412, 294, 98, -176, -528? -958 What is the next term in -36, -158, -364, -654, -1028, -1486? -2028 What is next in 91, 112, 135, 160, 187? 216 What is the next term in -420, -417, -412, -405, -396, -385, -372? -357 What is next in -88, -82, -76, -70, -64? -58 What comes next: -160, -364, -566, -766, -964, -1160? -1354 What comes next: -672, -1341, -2010, -2679? -3348 What is the next term in 44, 41, 38, 35, 32? 29 What is next in 836, 1685, 2534, 3383, 4232? 5081 What is next in -1, 2, 3, 2, -1, -6, -13? -22 What is next in -236, -238, -240, -242? -244 What is next in 5, 28, 65, 122, 205, 320, 473? 670 What is the next term in 0, 67, 186, 363, 604, 915? 1302 What is the next term in -7, 6, 25, 50, 81, 118? 161 What is the next term in 8, 12, -12, -88, -240? -492 What is next in -764, -6102, -20594, -48818, -95352, -164774? -261662 What is next in -334, -672, -1012, -1354, -1698, -2044? -2392 What is the next term in -367, -732, -1111, -1510, -1935? -2392 What is next in -97, -169, -361, -733, -1345, -2257, -3529, -5221? -7393 What is the next term in -96, -205, -388, -645, -976, -1381, -1860? -2413 What comes next: -799, -847, -895? -943 What is next in -67, -61, -43, -7, 53, 143? 269 What is next in 17, 173, 595, 1415, 2765, 4777, 7583? 11315 What is next in 188, 858, 1968, 3512, 5484? 7878 What is next in -6127, -6128, -6129, -6130, -6131? -6132 What is the next term in -2, 18, 54, 112, 198, 318, 478? 684 What comes next: -2, -38, -76, -116? -158 What is the next term in -50, -61, -72, -83? 
-94 What comes next: 131, 143, 167, 209, 275, 371? 503 What is next in 141, 111, 81, 51? 21 What comes next: -136, -174, -212, -250? -288 What is next in -4278, -4298, -4354, -4464, -4646? -4918 What comes next: -123, -125, -137, -165, -215, -293, -405, -557? -755 What is the next term in 1570, 1567, 1554, 1525, 1474? 1395 What is the next term in 239, 476, 711, 944, 1175, 1404? 1631 What is the next term in -25, -70, -125, -190, -265, -350, -445? -550 What comes next: -870, -871, -872, -873? -874 What is the next term in 47, 78, 109, 140, 171, 202? 233 What is next in -14, -4, 30, 100, 218, 396? 646 What comes next: -617, -607, -593, -575, -553, -527, -497? -463 What is the next term in -10, -5, -4, -7? -14 What is the next term in -637, -629, -607, -565, -497? -397 What comes next: -638, -625, -588, -515, -394, -213, 40? 377 What is next in -15761, -15762, -15763? -15764 What is the next term in 211, 233, 265, 313, 383? 481 What is next in -188, -384, -566, -728, -864, -968, -1034? -1056 What is next in 1266, 2541, 3828, 5133, 6462, 7821? 9216 What comes next: -2365, -2361, -2355, -2347, -2337, -2325, -2311? -2295 What comes next: 4196, 8393, 12592, 16799, 21020, 25261, 29528, 33827? 38164 What comes next: 216, 166, 76, -60, -248, -494, -804, -1184? -1640 What comes next: 48, -67, -258, -525, -868? -1287 What comes next: -11, 9, 29, 49? 69 What is next in -24, -57, -104, -165? -240 What comes next: 116, 153, 174, 173, 144? 81 What comes next: -29, -53, -93, -149, -221? -309 What is the next term in -10458, -10459, -10460? -10461 What is the next term in -973, -1945, -2917, -3889? -4861 What is next in -279, -546, -811, -1074? -1335 What is the next term in -60, -133, -206? -279 What comes next: -3, -21, -53, -99, -159, -233, -321? -423 What is the next term in -59, -55, -51, -47, -43, -39? -35 What comes next: -31, -34, -47, -76, -127, -206? -319 What is next in -173, -139, -109, -83, -61, -43, -29? -19 What is the next term in -9, 98, 205, 312, 419? 526 What comes next: -780, -1560, -2340, -3120, -3900? -4680 What is next in -743, -742, -741? -740 What comes next: 699, 1398, 2095, 2790? 3483 What is next in -50, -92, -134, -176, -218? -260 What is the next term in 358, 331, 302, 271, 238? 203 What is next in 196, 182, 172, 166, 164, 166, 172? 182 What is next in -199, -166, -133? -100 What is the next term in 260, 262, 264, 266, 268? 270 What is next in 3959, 3962, 3965? 3968 What is next in 87, 148, 207, 264, 319, 372? 423 What is next in 5170, 10339, 15506, 20671, 25834, 30995, 36154? 41311 What is next in 2530, 5061, 7592? 10123 What is the next term in -3980, -3985, -3992, -4001, -4012, -4025, -4040? -4057 What is next in -54, -60, -70, -84, -102, -124, -150? -180 What is next in -1371, -1374, -1377, -1380, -1383? -1386 What is next in 75, 149, 223, 297, 371, 445? 519 What is next in 9, -9, -27, -39, -39? -21 What is next in 18, 65, 142, 249, 386? 553 What is next in -75, -291, -651, -1155, -1803, -2595, -3531? -4611 What is the next term in 997, 999, 1001? 1003 What comes next: 8501, 8509, 8531, 8573, 8641? 8741 What is the next term in 8, -107, -342, -691, -1148, -1707, -2362? -3107 What is next in -278, -525, -710, -803, -774, -593, -230, 345? 1162 What is next in -16, -131, -322, -589, -932, -1351, -1846? -2417 What is next in -32, 15, 92, 193, 312, 443, 580? 717 What is the next term in -25, -43, -65, -91, -121, -155, -193? -235 What is next in -260, -347, -432, -515, -596? -675 What is next in 15, 22, 33, 48, 67, 90, 117? 148 What comes next: -310, -621, -932, -1243, -1554? 
-1865 What is the next term in 754, 1537, 2308, 3061, 3790, 4489, 5152, 5773? 6346 What is the next term in 25, 45, 91, 175, 309, 505, 775? 1131 What comes next: -233, -232, -231, -230, -229, -228? -227 What is next in -1046, -1044, -1042, -1040, -1038? -1036 What is next in 24189, 48379, 72569, 96759? 120949 What is next in 42, 46, 48, 48, 46, 42? 36 What is the next term in -2, 21, 82, 199, 390, 673, 1066, 1587? 2254 What is the next term in 15, 18, 19, 18, 15, 10? 3 What is next in 41, 238, 757, 1760, 3409, 5866, 9293? 13852 What is next in -22, -68, -144, -250, -386, -552, -748? -974 What is the next term in -779, -3133, -7059, -12557, -19627? -28269 What is the next term in -49, -216, -487, -856, -1317, -1864? -2491 What is the next term in 21, 38, 67, 108, 161? 226 What is next in 1101, 2204, 3307, 4410, 5513, 6616? 7719 What is the next term in 1, -18, -51, -104, -183, -294, -443, -636? -879 What is next in 427, 867, 1307, 1747, 2187, 2627? 3067 What is next in 49, 115, 197, 295? 409 What comes next: 172, 157, 118, 43, -80, -263, -518? -857 What is the next term in 52, 62, 74, 88, 104? 122 What is the ne
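Each of these questions asks for the next term of a sequence generated by a low-degree polynomial, so every answer above can be reproduced mechanically with finite differences: take successive differences until they become constant, then sum back up to extend the sequence by one term. Below is a minimal sketch of that method (my own helper, not part of the original material).

```python
# Next-term solver for the polynomial sequences above: repeatedly take
# differences until they are constant, then sum the last entries of each
# difference row to extend the original sequence by one term.

def next_term(seq):
    rows = [list(seq)]
    while len(set(rows[-1])) > 1:            # difference until a constant row
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    nxt = 0
    for row in reversed(rows):               # extend each row by one term
        nxt += row[-1]
    return nxt

print(next_term([551, 1101, 1651, 2201, 2751, 3301]))  # 3851, as above
print(next_term([105, 199, 303, 423, 565]))            # 735, as above
```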
|
Low
|
[
0.506738544474393,
23.5,
22.875
] |
Q: CICA Life a Citizen's company I'm not a US citizen and I have no idea about life insurance companies. The CICA Life company (part of Citizen's Inc) offered me life insurance in the US. I just want to know if the company can be trusted. What other options do I have to get life insurance in the US while living elsewhere? A: I would not recommend it. I know people outside of the US who have dealt with Citizen's Inc (not to be confused with other companies with similar names), and I have serious concerns about their investment practices. First off, CICA Life is probably not selling you just regular life insurance, but a complex retirement income product that includes life insurance as a side benefit. This is already problematic, because bundled products tend to be more expensive than if you shopped around for each product on its own. My main concern, however, is that Citizen's Inc reportedly invests their clients' premiums into their own company stock. This is not what any independent investment adviser would recommend if you asked them what to do with your retirement savings. It also artificially inflates their stock and creates big conflicts of interest. While Citizen's Inc is based in Austin, Texas, CICA Life itself is incorporated in Bermuda. Take that as you will, but the limited legal options that would leave you make me question how secure the income being promised 30-40 years in the future really is. If you want retirement savings, save on your own, or shop around for a product like an immediate or deferred annuity. If you want regular life insurance, there are also options in your country or the US that offer services to non-US citizens. I wouldn't normally mix the two, and definitely not with that company. Here's a source that talks about Citizen's practices, analyzing it as a potential stock investment: https://seekingalpha.com/article/4053091-citizens-egregious-stock-scheme
|
Mid
|
[
0.654939106901217,
30.25,
15.9375
] |
Thursday, January 15, 2015

New Mexicans are very particular about their chile and no New Mexican worth their salsa would ever call our wonderful chile pepper chili. Chili is that Texas-style bean and meat (or lack of beans, or lack of meat) thick stew sort of concoction that you pour into a Fritos corn chip bag. Nothing wrong with chili, but don't get it confused with chile. As much as I love my traditional style of making chile and all the various chile sauces, there are times when a big ol' bowl of hot steaming chili is just the ticket; add cornbread, avocado and sour cream and this is a bowl of winter heaven.

Christo's Big Cowboy Chili
1 lb ground beef (or chicken or turkey)
1 large onion, diced
3 cloves garlic, minced
3 Tbs cumin powder
4 Tbs red chile powder (use as much or as little as you want for the level of heat desired; get it from New Mexico)
1 small can tomato paste
2 cups water
salt and pepper to taste
1 lb dry kidney beans, cooked (I cooked a pound of dry beans in the pressure cooker; you could use canned and it would probably work out to 2 quarts of beans). Make sure you have at least 3 cups of liquid with your beans
Avocado, sour cream and green onion for garnish

Brown the meat in a large skillet. When the meat is brown, add the onion and garlic and saute until soft. When the onion and garlic have softened, add the cumin and the chile powder and stir till everything is coated (if it's not spicy enough, add more red chile powder). When all of this is a nice deep red color, add the tomato paste and water. Simmer gently for 15 minutes. Add this mixture to your cooked beans and bean broth and simmer for 5 more minutes. Serve with your garnish and cornbread if you have it. This is by far the easiest and tastiest chili you will ever have. No need to add any sugar as the onions add all the sweetness you need, but if you do find that your particular can of tomato paste was extra acidic then add a squeeze of honey. Enjoy. A bowl of chili is a soothing, warming and inexpensive way to bring lots of flavor and nutrition to you and your family, so make some.

Saturday, January 10, 2015

Many of you have been following my blog from the very beginning and you know firsthand the evolution I have gone through. Some of you are newer and have seen me as I am now. I am going to try to fill in the blanks, but first let's talk about knives. I love knives. I have knives I never use because they are way too special and I don't want anything to happen to them, and this custom knife by Middleton Made Knives is a perfect example; I treated myself to this knife for Christmas in 2013, so it's pretty new. I have knives I have customized with special handles that I use when I want people to ooh and ahh over how cool the knife is, and this razor-sharp Japanese beauty I have had for probably 20 years now is the one that shines in that spotlight. Then I have the knives I use at home, gobs of them. You name it, I probably have two, and all sorts of other little doodads and whatnots. I already mentioned the knives I never use except on rare occasions, and here they are all together. And then there are the knives I use for work. I have this knife bag and I also have a smaller one (not shown) with utility knives that I loan out to people who help me on occasion. This particular kit is now a year old, as my other cherished set that I had collected over the years was stolen, very heartbreaking.
I also have a bag of zesters and other assorted tools and yet one more bag of specialty plating spoons, but that's a post for another time, as well as my oddball knives and hunting knives and machetes and ghurkas and you name it, plus my huge pocket knife collection. I get all frazzled just thinking about it. This is where we start the story: with the knives I use for work.

Fortunate to have met Liza Minelli

Fourteen years ago I decided that I was going to live my dream and work as a chef. This was not an easy path to take at 40 years old, but I had to do it. Ten years ago I started my blog; that was an easy step to take and it introduced me to all of you. I was happy to find a community of like-minded people who all shared their lovely food and inspirational words, and blogging was exciting and new. I was able to learn a lot from the blogging community and at the same time I was able to hone my skills as a self-taught chef. I worked hard at learning everything there was to learn. I had an adolescent boy who demanded a lot of my attention, as I was a single dad at this time (he is 16 now), so I wasn't able to dedicate myself to any job, and blogging became my biggest effort. I blogged and blogged and blogged and blogged. I also started to take foods to the playground for other moms and dads and child care givers to taste. I spent a lot of time in playgrounds with my son. Before you know it (3 years into it or so) I started a little soup subscription service where I made a couple of kinds of soup each week and would deliver containers of them for a small fee to whomever wanted them. I did this for a couple of years until one day one of my subscribers mentioned to me that they had a friend of a friend who was looking for chefs to help with a catering company and that I should give them a call. I called them and started working right away. While working for this one company on occasion (the catering work wasn't regular, which was perfect for my schedule, and my adolescent son was now a teenager), a party planner for another catering company asked me to assemble a tasting, as they were looking for a new chef. I jumped at that opportunity and gave it all I had. I got the job. I was the new chef. It took me 6 years from when I first made the jump to when I was doing it as a pro at the level I am now. I could not be more grateful, happy and proud. I worked hard then and I still do now. I set a goal and I achieved it even at my later age. Don't let anyone ever stop you or discourage you from your goals; with persistence and consistency, success will follow. I won't ever be able to thank all the individual bloggers who were and still are a huge inspiration to me, but I think you know who you are. Thank You.

Friday, January 09, 2015

Comfort food comes in many forms. For some it's as simple as a cheeseburger with fries, and for others it's a warm plate of scrambled eggs and buttered toast or a big bowl of steaming pasta, so to try to say that one dish is more comforting than another is like trying to get everyone to have the same favorite color (mine is purple, just in case). Memories play a big role in how we experience comfort. Sure, a warm blanket and a hot cup of cocoa is comforting to many, and certainly it stirs up fond memories, but I am talking about a deeper connection. I grew up in rural New Mexico, often staying with my grandparents on a small farm with a couple of goats, a milk cow, a few sheep and some chickens, as well as a very large sprawling garden and some acres of alfalfa, and I had chores to do.
Every day started at 4:30 or 5:00 with a cup of warm, chocolate-flavored blue corn porridge called ATOLE. It was thicker than cocoa but thinner than oatmeal, and I would have this with a cup of milky and sweet coffee before going out to do the morning chores before school. Comfort for me is a dish my grandmother made almost every single day for lunch. For many in New Mexico with a heritage similar to mine this is a common dish that almost everyone made in some form or another. Caldo, or broth, or soup, or whatever the definition of caldo is, was and still is a popular and quick meal. Lunch would roll around pretty much exactly at noon and I would come in from whatever I was doing, whether it was tending to the sheep as they grazed along the ditch bank or watering the alfalfa fields, and my grandmother would have a skillet full of this for her and my grandfather and me to have with some fresh-made and warm tortillas. This is Papas con Caldo, or Potato Soup, and it's simple and fast to make. It's a little onion and garlic, some browned meat and diced potatoes, a little water and salt and pepper, and it all simmers together to make a pot of heaven. What's your favorite comfort food?

Thursday, January 01, 2015

Perfectly poached eggs topped with parsley aioli and salmon caviar, along with smoked trout on a garlic crostini with grilled plantains and mushrooms and a small salad, are a New Year's Day brunch anyone would love. Don't start the love till after you have had your "good luck" bowl of black eyed peas and collard greens; it just wouldn't be proper. Two quick tips: 1) black eyed peas cook fast; 2) collard greens are best when simmered for a good long while to soften up and mellow out. Recipes. I am often asked about recipes, and on that same note I am often told how I should write a cookbook. This whole cookbook idea is nothing new to me here on the blog, as I have mentioned it often. As much as I want to write a cookbook, I simply don't feel strongly enough that there is a market for one more cookbook. This year I will give it the most thought I have ever given it and we shall see. Innovations. Innovative cooking is all the rage. The curious thing about the word innovative is how loosely it can be construed. If you have never poached an egg before, then you are being innovative. I am going to make it my mission to try to bring innovation to each and every one of you, and to do it in a way that is functional and feasible. Anything can be innovative, and teaching is also a way of learning. Let's grow together in the coming year. I have written this blog for a very long time now, 10 years to be exact. I have had waves of readers from the many to the few to the very many down to the very few. I have learned a thing or two along the way and I want to pick up a few more things to round it out. I am looking forward to your feedback and your help in 2015.

Wednesday, October 22, 2014

Thanksgiving is right around the corner and nothing is going to treat your bird better. How often have you wanted to make big-league BBQ but you didn't have the big-league equipment to do it? You don't have a 30-foot rig with decals and a pit master to go with it, nor do you have the space to park something like this. Drum roll please: in walks The Big Easy, your answer, your ticket, your golden egg. Living in New York City I don't have room for a big rig, but if I did you know I would get one, so instead I get all my smoking and low-and-slow BBQ action done in my Big Easy.
Big turkey day will be here before you know it, so whether it's a big bird, a small bird or several little birds, you want to be ready now. Hey, maybe you don't want to make turkey this year. Then don't; no one said you have to, so instead make some ribs. I get my ribs ready by boldly rubbing them down with an exotic mix of chiles and spices. The tall rack is ideal for ribs; I am also using the optional rib hooks that attach to the basket. I have the electrical unit because I already have a large Charbroil grill with a propane tank and I don't want to have too many propane tanks up on my roof, for safety reasons. This unit gets nice and hot. I usually fill the smoker drawer with leftover rosemary sticks that I save up. You know I just had to show you how perfect the ribs are. I like to switch things up when I make a BBQ sauce; this one is a cactus fruit and habanero sauce that is sweet, tangy and tropically spicy. The Big Easy is awesome. You get a nice smoke ring, you get depth of flavor, you get it cooked perfectly because of the built-in thermometer and heat settings, and you get to show off. I have made turkey, porchetta, leg of lamb and lots and lots of ribs, and every single thing has come out perfectly cooked. If you are short on space but you want to be long on flavor, I wholeheartedly recommend you get your hands on a Char-Broil Big Easy. I actually don't know what I would do without it; grilling is one thing, but low-and-slow smoking and making BBQ is something else entirely. You will love it. So whatever you choose, be it a big turkey, a few racks of ribs, a leg of lamb or a crispy-skinned porchetta, you can bet The Big Easy is gonna make it easy. Don't these fine BBQ items look awesome? Good BBQ takes time. For something special this Thanksgiving, go with The Big Easy. 2012 and 2014 WINNER OF THE CHAR-BROIL ALL-STARS COOK OFF
|
Mid
|
[
0.5450346420323321,
29.5,
24.625
] |
1-1/4" Executive Green Thermal Binding Covers with Windows - 100pk ProductDescription These 1-1/4" executive green plain windowed front thermal covers are made from the highest quality materials and are some of the best thermal covers available on the market. Plus the exclusive U channel groove on our Thermal Binding Covers provides a strong longer lasting bind than most other Thermal Binding Covers on the market. Our Thermal Binding covers are available in more than 28 different stocks, 13 different sizes, with plain fronts, frost fronts, clear fronts or with plain windowed fronts. These 1-1/4" executive green thermal binding covers are incredibly simple to use... Just place your document in the binder, place the binder in your Thermal Binding Machine and in just seconds you have a finished, bound book. These 1-1/4" executive green plain windowed front thermal covers can handle 180-250 sheets of 8.5" x 11" size documents. Part Number BI114EXFGW. Please Note that these covers are made to order and take approximately 3 weeks to ship. Because these covers are made specifically for you they cannot be canceled once they have been submitted for production and are not returnable. If you are looking for a faster solution you might want to consider our Thermal Binding Utility Covers
|
Mid
|
[
0.604118993135011,
33,
21.625
] |
Highly sensitive and robust peroxidase-like activity of porous nanorods of ceria and their application for breast cancer detection. Porous nanorods of ceria (PN-Ceria), a novel ceria nanostructure with a large surface area and a high surface Ce³⁺ fraction, exhibited strong intrinsic peroxidase activity toward a classical peroxidase substrate in the presence of H₂O₂. Peroxidase-like activity of ceria originated from surface Ce³⁺ species as the catalytic center, thereby explaining the high performance of PN-Ceria as an artificial enzyme mimicking peroxidase. Compared with the natural enzyme horseradish peroxidase (HRP), PN-Ceria showed several advantages such as low cost, easy storage, high sensitivity, and, prominently, chemical and catalytic stability under harsh conditions. Importantly, the enzymatic activity of PN-Ceria remained nearly constant and stable over a wide range of temperature and pH values, ensuring the accuracy and reliability of measurements of its peroxidase-like activity. A PN-Ceria based novel diagnostic system was developed for breast cancer detection with a higher sensitivity than the standard HRP detection system. Our work has laid a solid foundation for the development of PN-Ceria as a novel diagnostic tool for clinical use.
|
High
|
[
0.6787709497206701,
30.375,
14.375
] |
How to easily host your own sites and access your files from anywhere. - codemechanic http://www.weboffspring.com/?p=308 ====== corin_ The fact that you have to pay an extra $50 to get LAMP on it is pretty shocking if you ask me ~~~ codemechanic The device comes with ubuntu 9.04. You can install LAMP from the repository directly if you want to for no cost. The image is meant for people who don't have time to do that. ~~~ corin_ Slightly less depressing then :) Although, how many people know what "you can buy a LAMP stack" means AND would choose to pay $50 if they have ssh access to the server?
|
Mid
|
[
0.538641686182669,
28.75,
24.625
] |
<?php

class msResourceGetListProcessor extends modObjectGetListProcessor
{
    public $classKey = 'modResource';
    public $languageTopics = array('resource');
    public $defaultSortField = 'pagetitle';

    /**
     * @param xPDOQuery $c
     *
     * @return xPDOQuery
     */
    public function prepareQueryBeforeCount(xPDOQuery $c)
    {
        if ($this->getProperty('combo')) {
            $c->select('id,pagetitle');
        }
        if ($id = (int)$this->getProperty('id')) {
            $c->where(array('id' => $id));
        }
        if ($query = trim($this->getProperty('query'))) {
            $c->where(array('pagetitle:LIKE' => "%{$query}%"));
        }

        return $c;
    }

    /**
     * @param xPDOObject $object
     *
     * @return array
     */
    public function prepareRow(xPDOObject $object)
    {
        if ($this->getProperty('combo')) {
            $array = array(
                'id' => $object->get('id'),
                'pagetitle' => '(' . $object->get('id') . ') ' . $object->get('pagetitle'),
            );
        } else {
            $array = $object->toArray();
        }

        return $array;
    }
}

return 'msResourceGetListProcessor';
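For context, a processor like this is normally invoked through MODX's runProcessor API rather than called directly. A minimal sketch follows; the processor path is hypothetical and depends on where the file lives in the package:

<?php
// Hypothetical processor path; adjust to the package's actual layout.
$response = $modx->runProcessor('mgr/resource/getlist', array(
    'query' => 'home', // matches pagetitle via LIKE "%home%"
    'combo' => true,   // compact id/pagetitle rows for a combo box
));
if ($response && !$response->isError()) {
    $result = $response->getResponse(); // JSON with total count and rows
}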
|
Mid
|
[
0.591792656587473,
34.25,
23.625
] |
(AP Photo/Alex Brandon) At some point President Trump is going to forget about the 2016 election and Hillary Clinton’s emails and turn his attention to the nation’s business. But today is not that day. President Trump is awake, angry, and tweeting. And you can probably guess the topic. Yes, President Trump has been in office for over six months and is still fixated on his former opponent Hillary Clinton. He started the morning with a tweet about Ukraine, a country that he accused of sabotaging his campaign and “working to boost Clinton.” He implored the Attorney General to look into this supposed Ukrainian influence, while tagging Fox News host and conspiracy theorist Sean Hannity in the tweet. Then he moved on to Hillary Clinton’s “crimes” involving her emails. What alleged crimes he’d like investigated weren’t specified. That shot at Attorney General Sessions as “weak” is meant to either get him to quit, or to get Sessions to start some kind of investigation into Clinton. Either way, multiple news outlets have reported that Trump wants to fire Sessions but simply doesn’t have the guts. The president then tweeted about acting FBI Director Andrew McCabe. President Trump said that McCabe’s wife received $700,000 from Hillary Clinton, which isn’t exactly true. In reality, McCabe’s wife received $500,000 from a political action committee tied to Virginia Governor Terry McAuliffe (who, yes, is an ally of Clinton’s by the nature of being a Democrat himself) and she reportedly received another $207,788 from the Democratic Party of Virginia. It’s certainly fair to argue that political donations to McCabe’s wife present a conflict in some way I suppose, but it’s still a bit bizarre to watch. But you’ll never guess what aired on Fox News shortly before Trump sent out his tweets. That’s right. More shit about Clinton and her destruction of phones with hammers, something that anyone practicing good opsec does when they’re disposing of old electronics. Trump then moved on to tweets about today’s vote in the Senate about health care. After saying that Obamacare had been around for “17 years” yesterday, he finally got the number right: But it still remains unclear whether Trump even understands the bill he’s promoting. Trump continued his morning rage-tweetstorm with a kind message for John McCain, a man who he previously denigrated for getting captured in Vietnam. McCain is racing back to Washington to help pass a bill that will strip health insurance from millions of Americans. And if he does that will certainly be his legacy. I’m not sure that we can call that the act of a hero though. Again, Trump seems to be just livetweeting Fox & Friends this morning, like he does practically every morning. And just as Fox News pivoted to more talk about Jared Kushner, Trump tweeted about it right on cue. He even suggested that the investigation into collusion would turn to include his 11-year-old son Barron. It can be so easy to forget that the president is a man with the knowledge of 17 intelligence agencies at his fingertips. Yet he seems to spend hours of his time each morning doing nothing but watching TV news. It’d be funny if it weren’t so dangerous. Because as the Trump regime implodes (there’s open speculation about whether everyone from Reince Priebus to Rex Tillerson will quit or be fired) the nation suffers at home and abroad. Correction: This story originally identified McCabe as the one who recommended that James Comey be fired. That was obviously Rod Rosenstein. I regret my own idiocy.
|
Low
|
[
0.5337690631808271,
30.625,
26.75
] |
### [UnrealEngine.Framework](./UnrealEngine-Framework.md 'UnrealEngine.Framework').[SplineComponent](./SplineComponent.md 'UnrealEngine.Framework.SplineComponent')
## SplineComponent.GetUpVectorAtSplinePoint(int, UnrealEngine.Framework.SplineCoordinateSpace) Method
Returns the spline's up vector at the spline point
```csharp
public System.Numerics.Vector3 GetUpVectorAtSplinePoint(int pointIndex, UnrealEngine.Framework.SplineCoordinateSpace coordinateSpace);
```
#### Parameters
<a name='UnrealEngine-Framework-SplineComponent-GetUpVectorAtSplinePoint(int_UnrealEngine-Framework-SplineCoordinateSpace)-pointIndex'></a>
`pointIndex` [System.Int32](https://docs.microsoft.com/en-us/dotnet/api/System.Int32 'System.Int32')
<a name='UnrealEngine-Framework-SplineComponent-GetUpVectorAtSplinePoint(int_UnrealEngine-Framework-SplineCoordinateSpace)-coordinateSpace'></a>
`coordinateSpace` [SplineCoordinateSpace](./SplineCoordinateSpace.md 'UnrealEngine.Framework.SplineCoordinateSpace')
#### Returns
[System.Numerics.Vector3](https://docs.microsoft.com/en-us/dotnet/api/System.Numerics.Vector3 'System.Numerics.Vector3')
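As a usage sketch (not part of the generated reference; the `actor` variable and the `GetComponent` call are assumptions based on the rest of the framework):
```csharp
// Query the up vector of the first spline point in world space
SplineComponent spline = actor.GetComponent<SplineComponent>();
System.Numerics.Vector3 up = spline.GetUpVectorAtSplinePoint(0, SplineCoordinateSpace.World);
```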
|
Mid
|
[
0.613698630136986,
28,
17.625
] |
Simple preparative gas chromatographic method for isolation of menthol and menthone from peppermint oil, with quantitative GC-MS and ¹H NMR assay. The quantitative performance of a simple home-built preparative gas chromatography (prep-GC) arrangement was tested, incorporating a micro-fluidic Deans switch, with collection of the target compound in a deactivated uncoated capillary tube. Repeat injections of a standard solution and peppermint sample were made into the prep-GC instrument. Individual compounds were eluted from the trapping capillary, and made up to constant volume. Chloronaphthalene internal standard was added in some cases. Recovered samples were quantitatively assayed by using GC-MS. Calibration linearity of GC-MS for menthol standard area response against number of injections (2-20 repeat injections) was excellent, giving an R² of 0.996. For peppermint, menthol correlation over 2-20 repeated injections was 0.998 for menthol area ratio (versus IS) data. Menthone calibration for peppermint gave an R² of 0.972. ¹H NMR spectroscopy was conducted on both menthol and menthone. Good correspondence with reference spectra was obtained. About 80 μg of isolated menthol and menthone solute was collected over a sequence of 80 repeat injections from the peppermint sample, as assayed by 600 MHz ¹H NMR analysis (∼100% recovery for menthol from peppermint). A procedure is proposed for prediction of the number of injections required to acquire sufficient material for NMR detection.
|
High
|
[
0.680306905370844,
33.25,
15.625
] |
Q: How to use trekking clothes? I will be going to Peru in July and will do the Salkantay Trail to Machu Picchu, but I have no experience with layered clothing (I live in Brazil; sub-10°C is dead cold here). From what I understand, I will need base layers, then non-cotton t-shirts, then fleece sweaters, then an "anorak" (water and wind protection), and finally a rain poncho to help in case of rain. Is that right? For the pants it would be base layer, fleece pants and trekking pants/shorts, right? Reading a bit more, it seems that I will need 2 fleece sweaters (200 and 300 something) and combine them accordingly. Is that right or overkill? The base layer will be needed mostly at night for sleeping (during the day it is about 20°C and at night it goes down to -5°C). Do I need more than one set of base layers (considering a 4-night trek), or is just one enough? As for socks, I need trekking socks (again, non-cotton, able to breathe). Can I reuse pairs every other day? A: That's the kind of gear I'd use when going to hike well into subzero temperatures. Attempting to hike in such gear at +20°C, especially in humid air, you'll not be comfortable at all. It's total overkill, especially the pants. What I'd use for such a hike:

t-shirt, preferably the non-cotton stay-dry kind;
soft shell fleece jacket (Windstopper or equivalent);
waterproof, breathable jacket (Gore-Tex or equivalent);
breathable hiking trousers (not thermo-insulated, either waterproof or quick-drying);
waterproof, breathable hiking boots (tall ones).

For sleeping at temperatures of -5°C, what you really need is not a huge amount of clothes but a decent sleeping bag, with filling made either of goose down or, even better, synthetic. You can find sleeping bags rated to as low as -30°C. As far as I know, in Cuzco you shouldn't expect torrential rains at that time of year (or much any rain at all), so a rain poncho doesn't seem necessary. It doesn't weigh much, though, so you might take it just in case, preferably the kind that also covers your backpack, especially if your backpack is not waterproof.
|
Mid
|
[
0.5656324582338901,
29.625,
22.75
] |
A semiconductor device that includes a current sensing portion to detect a current value of a principal current of a device is heretofore known, and semiconductor devices of, for example, Patent Documents 1 and 2 have been proposed.
|
Low
|
[
0.527227722772277,
26.625,
23.875
] |
0.000000 2.200000 3.400000 6.733333 8.666667 9.866667 11.000000 12.866667 14.200000 15.266667 16.466667 17.200000 19.266667
|
Low
|
[
0.43672456575682306,
22,
28.375
] |
I don't know that any of that's true, but it certainly seems like a smart plan to me. As conservatives generally point out whenever the context isn't military spending, it's very damaging to human welfare to have the government tax productive labor in order to spend money on something useless. So given that population aging is certain to lead to growing pressures on the federal budget, it's important to make up as much of the financing gap as possible by cutting spending elsewhere rather than with new taxes. And per the great Peterson Foundation chart above, the U.S. military budget is really large. Obviously, you don't want to cut the military all the way to the bone lest you invite an invasion from Mexico or Canada. But we're not even close to being overwhelmed by Canadian arms. And it's striking that if you look at non-U.S. defense spending, a majority of it appears to be by U.S. treaty allies—NATO members, Japan, Australia, South Korea, etc.—so we really do seem very safe. Obviously, this involves some considerations outside the Moneybox framework. Maybe Mexico's military is unusually cost effective and we have no choice but to spend 50 times what Mexico spends in order to defend ourselves adequately. But from a pure budgetary point of view, this really is a strikingly large pool of money.
|
Mid
|
[
0.6078886310904871,
32.75,
21.125
] |
Auditors Block Productivity

If you talk with staff members in relatively new projects at large organizations, especially governmental organizations, you may discover an interesting pattern. When the group is formed, it is given a vision of what it is supposed to do. The team jumps on board and does everything they can to achieve that vision, based on their skills, knowledge and budget. However, at some point the auditors turn up. They evaluate the "proper" use of budgetary funds and issue guidelines on what can and cannot be done, based on their view of the financial constraints. That is normally the beginning of the end of true efficiency for that group. Here is an example: a contractor was hired to remove graffiti reported by the people in town. They did an awesome job! Everyone was amazed at how quickly graffiti disappeared from view. However, the auditors showed up and slapped them down. Evil people that they were, they were spending city money to clean up private spaces and property belonging to other government organizations. City money cannot be used that way. Now, when their staff shows up at the location of some reported graffiti, they must first evaluate who can remove it. If the graffiti is on the wrong wall or building or property, they are only allowed to document it and report it to the correct organization. Even though they are there, with the tools needed to clean it up, they cannot remove it. However, they do get the pleasure of telling the person who originally reported the graffiti that it has now been reported to another organization, and the ticket has been closed. The other side of the coin can be just as bad: I had a friend who was assigned to a three-year stint in an auditing group of a large, private corporation. He was a computer technology researcher. On his first audit, he ran a scan of the network to verify what computers were actually hooked up in the site's datacenter. Unfortunately, there was an old, fragile Windows NT system that crashed when his software sweep found it. It turned out that the undocumented system was critical to the factory's operations, and the production line was down for three hours. Management's response was simple: attack the messenger for running an unregistered software tool on their local network. Thereafter, he restricted his exploration of the networks to reading the documentation and running software that had been approved by local managers in advance. Software scanning tools were never approved. I really don't know how to fix these problems, given that auditing spending is what auditors do, and restricting access to the networks is what data managers do. Somehow, these organizations need to come up with ways to authorize actions outside of the "normal", in the name of efficiency, security or productivity. Good luck with that!
|
Low
|
[
0.518962075848303,
32.5,
30.125
] |
In the manufacture of integrated circuits (chips) it is well known that it is desirable to encapsulate the chip in order to protect the chip from mechanical damage and contamination. Encapsulation techniques are also known to passivate the chips and enhance their long-term performance. There are a variety of well known techniques available for encapsulating chips. These techniques include mounting chips within a cavity of a substrate or a die structure, wire bonding chips to a lead frame and then enclosing the package with a lid. Another technique includes mounting chips to a lead frame, wire bonding the chips to the lead frame and then passivating the chips and a portion of the lead frame in a molded plastic or plastic epoxy body. Yet another technique for packaging and passivating chips includes "flip-chip" bonding to a printed circuit board and then covering the chips with a plastic resin. There are several applications where the above mentioned packaging and passivation techniques are inadequate because the materials used to form the packaging are opaque and/or do not provide an optical window of suitable quality for optical applications. For example, such packaging is unsuitable for EPROM devices. An EPROM device is a read-only memory device. The program or data which is stored in an EPROM can only be erased through optical radiation (ultraviolet and/or visible) impinging on the surface of the EPROM. Conventional opaque chip packaging does not allow for such a device to be erased optically and, therefore, is unsuitable for packaging these devices. To solve this problem, makers of EPROM devices mount EPROM chips within the cavity of a ceramic package and hermetically seal the assembly with an optically transparent lid. Micro-electro-mechanical devices (MEM devices) are another class of silicon semiconductor devices. MEM devices are useful for a variety of applications including strain gauges, accelerometers, electronic levels, and also for display light valves or other optical applications. Because of their extremely small moving parts, MEM devices are particularly susceptible to ambient conditions. Accordingly, MEM devices are traditionally sealed within the cavity of a hermetic package to control the operating environment to which the MEM is subjected. When the MEM device is an optical MEM device, as for example in the case of display applications, the MEM device is required to be accessed optically through the packaging, wherein optical energy penetrates the package, impinges on a surface of the MEM device, and where the optical energy is reflected and/or modulated and then escapes from the package, forming the optical image or signal. Though conventional ceramic packages can be hermetic, they also tend to be opaque and are, therefore, unsuitable for use with a variety of optical MEM devices. A package which includes an optically transparent window can represent a considerable portion of the manufacturing cost for making an optical MEM device. Under certain circumstances it is important to provide a package which has an optical window of suitable optical quality which has a controlled physical relationship relative to another portion of the MEM device, such as a mechanically active portion of the MEM device or the substrate of the MEM device.
Specifically, in some applications it is important to position a transparent lid at an angle relative to an optical element or elements of the MEM device to reduce surface reflections from the optically transparent window, where reflections can interfere with the intended image and/or signal. Conventional silicon semiconductor chip packaging technology does not provide for the ability to control the physical relationship of a transparent window/lid with respect to other portions of a MEM device. Therefore, there is a need for a MEM device with an optical window that can be controllably positioned at an angle relative to other portions of the MEM device, and in particular at an angle relative to the reflective surface(s) of one or more encapsulated optical elements of the MEM device, and a method for making the same.
|
High
|
[
0.66,
33,
17
] |
474 F.Supp. 735 (1979) Tom W. RYAN et al., Plaintiffs, v. DEPARTMENT OF JUSTICE, Defendant. Charles R. HALPERN et al., Plaintiffs, v. DEPARTMENT OF JUSTICE, Defendant. Civ. A. Nos. 79-1042, 79-1043. United States District Court, District of Columbia. July 11, 1979. *736 Girardeau A. Spann, David C. Vladeck, Washington, D.C., for plaintiffs. Thomas W. Hussey, Atty., Dept. of Justice, Washington, D.C., for the Department. MEMORANDUM OPINION BARRINGTON D. PARKER, District Judge: The question presented in these consolidated proceedings is whether the Freedom of Information Act (FOIA), 5 U.S.C. § 552, requires disclosure to the plaintiffs of responses by United States Senators and their judicial nominating commissions to a questionnaire on judicial nominations prepared and sent to them by the Attorney General. The Court rules that disclosure is not required. Pursuant to the Omnibus Judgeship Act of 1978,[1] President Jimmy Carter issued Executive Order 12097[2] which established judicial merit-selection guidelines. The Attorney General's questionnaire sought from the Senators a description of the efforts extended to comply with the guidelines. The defendant Department of Justice denied the plaintiffs' FOIA requests for the material, viewing the questionnaire responses as communications between the President and Senators, closely linked to his constitutional appointment powers, and therefore not "agency records" subject to FOIA. Even assuming the responses are FOIA records, the Department argues that they are exempt from mandatory disclosure as pre-decisional advisory material under § 552(b)(5). The parties have filed cross-motions for summary judgment.[3] The Court has considered the several memoranda of points and authorities, the argument of counsel and other relevant case law. It has also had the benefit of an in camera review of a random sample of five questionnaire responses. Based on the undisputed material facts, the Court concludes as a matter of law that the questionnaire responses are not Department of Justice records for purposes of FOIA. For the reasons outlined below, the complaints are dismissed and summary judgment will be entered for the Department of Justice. BACKGROUND The material facts in this litigation are not disputed. The Omnibus Judgeship Act requires the President to provide for merit-selection of federal judges and thereby improve upon traditional bases of appointment. Facing the prospect of filling 117 new federal district court judgeships, President Carter issued merit-selection guidelines in EO 12097. Nominees are to have the requisite "character, experience and ability, and commitment to equal justice;" be "even-tempered and free of biases against any class of citizens or any religious or racial group;" and possess "outstanding legal ability and competence." Section 1-1 of the Executive Order charges the Attorney General with receiving, evaluating and making recommendations of potential nominees to the President. *737 As part of this process, he is to consider whether "an affirmative effort has been made . . . to identify qualified candidates, including women and members of minority groups," and whether the guidelines have been followed. To this end, on November 8, 1978, the Attorney General submitted the following questionnaire to all Senators:[4] 1. Describe the effort which was made to identify qualified candidates. 2. Describe the process by which all persons identified and interested were considered. 3. How many persons were considered? 4. 
With respect to each person recommended, does he or she meet each of the standards set forth in Section 2 of the Executive Order? 5. With respect to each person recommended, submit a copy of any questionnaire or resume of biographical information furnished by that person. 6. If a nominating commission was used: (a) how was the commission appointed? (b) how many persons were on the commission? (c) how many of the members were female? (d) how many of the members were of a minority race? (e) how many of the members were nonlawyers? The plaintiffs are individuals and groups who monitor federal judicial appointments with the goals of opening the selection process to public scrutiny and broadening the membership of the federal judiciary to include a wider representation of women, minorities and public interest lawyers.[5] In early 1979, the plaintiffs sought FOIA disclosure of the questionnaire responses received by the Department of Justice. On May 9, 1979, the Department of Justice denied plaintiffs' requests, asserting that the questionnaire responses are not agency records under FOIA and would, in any event, be exempt from mandatory disclosure as pre-decisional advisory material under § 552(b)(5). Government counsel represented at oral argument on June 27, 1979, that more than 50 responses had been received, representing all but four states. The responses vary in format and scope, in some instances including the names of individuals. Responses received as of May 9, 1979, are indexed in the affidavit of Quinlan J. Shea, Jr. ANALYSIS The threshold legal question is whether the questionnaire responses are "agency records" covered by the Freedom of Information Act. The position of the parties may be summarized. The plaintiffs contend that the responses are unquestionably Department of Justice records, since the questionnaire was drafted and issued by the Department for return, retention and evaluation by that agency to assist the Attorney General in fulfilling his duties under Executive Order 12097. The Department of Justice, on the other hand, argues that any connection it has with the questionnaire responses is coincidental and that no other agency is involved; the documents are written by Senators or state nominating commissions and contain information for President Carter, who receives the information through the Attorney General acting as his legal advisor. The law in this Circuit is clear that physical possession of records by a government agency is not the sole criterion for determining whether they are agency records. Goland v. CIA, No. 76-1800, slip op. at 11 (D.C.Cir. May 23, 1978); Forsham v. Califano, 190 U.S.App.D.C. 231, 239 n. 19, 587 F.2d 1128, 1136 n. 19 (1978). *738 The governing principle is that only if a federal agency has created or obtained a record . . . in the course of doing its work, is there an agency record that can be demanded under FOIA (footnotes omitted). Forsham, 190 U.S.App.D.C. at 239, 587 F.2d at 1136. The status of a document in the possession of an agency depends on "the circumstances attending the document's generation," the nature of the information, and the relationship of the agency to the documents and other parties involved. Goland, slip op. at 11-14. For example, in Goland, the court held that a copy of the transcript of a secret Executive Session Congressional Committee hearing on intelligence methodology, marked secret, retained by the CIA for internal reference purposes, was not a CIA document for FOIA purposes. 
The circumstances demonstrated Congress' intent to exert continued control over the transcript and the decision to make a copy public could only be made by Congress.[6] In this case, the questionnaire responses were returned to the Attorney General and are physically located at the Department of Justice. Personnel other than the Attorney General obviously are involved in processing the responses, most notably Assistant Attorney General Michael J. Egan, identified in Attorney General Bell's cover letter to the Senators as the appropriate official for them to contact. However, given the history of these responses, even these links to the Department of Justice do not make them agency records subject to disclosure under FOIA. The questionnaire responses ultimately owe their existence to President Carter's exercise of his Article II constitutional authority to nominate and, by and with the advice and consent of the Senate, appoint federal judges. Even before the Omnibus Judgeship Act of 1978, the President depended upon information and counsel from government and outside sources to determine nominees. The need for information and assistance has been magnified by President Carter's issuance of merit-selection guidelines in Executive Order 12097. For this reason, he specifically directed the Attorney General to gather and assemble material on candidates and on the nomination process itself from Senators and state nominating commissions. The Attorney General chose to solicit information by questionnaire. This particular approach to the judicial appointment process makes the status of the questionnaire responses unique. Unlike the congressionally-controlled transcript in Goland, they are not documents belonging to the Senators or the state nominating commissions. They are not solely presidential documents or records of the Attorney General. Nor are they, as defendant represents, strictly communications between United States Senators and the President. They are best described as the collective product and property of the President, the Attorney General, the Senators, and the state commissions, none of which are agencies for FOIA purposes.[7] The responses certainly are not under the control of the Department of Justice. Given the presidential appointment context from which the responses originated, they cannot be compared to compliance or other required reports routinely filed with and left in the custody of an executive agency. The Attorney General understandably has utilized his staff and facilities at the *739 Department of Justice to send, receive and digest the questionnaires and responses. This does not make the information gathering and analysis tasks any less his own, as specifically assigned by the President in the merit-selection guidelines of Executive Order 12097. Egan affidavit ¶¶ 6-7. The Attorney General is not conducting routine Department of Justice business but is acting as counsel and advisor to the President, who is exercising his Article II powers to nominate 117 new federal district judges. See Stassi, supra. Cf. Soucie v. David, 145 U.S.App.D.C. 144, 448 F.2d 1067 (1971). 
There has not been, and there could not be, any suggestion that the Attorney General is nominally presiding over what are in reality Department of Justice functions in order to protect the questionnaire responses from FOIA release.[8] The Court does not place undue emphasis on the relationship between the questionnaire responses and the President's constitutional nomination power; this is a Freedom of Information Act case and nothing more. There is no blanket privilege against disclosure of documents touching upon the President's appointment power. Information Acquisition Corp. v. United States Department of Justice, 444 F.Supp. 458, C.A. No. 77-840 at 3 (D.D.C. June 6, 1978); Information Acquisition Corp. v. United States Department of Justice, C.A. No. 77-839 at 4 (D.D.C. May 23, 1979).[9] However, the interplay between the FOIA and executive privilege comes into play only if documents are agency records. As discussed above, the questionnaire responses are not agency records because they do not fall out of the sphere of the appointment process into Department of Justice business. President Carter has made the nomination and appointment process far more open by publishing merit-selection guidelines and by enlisting the Attorney General's assistance in ensuring compliance with the guidelines by Senators and state nominating commissions. These efforts and developments are salutary. However, by making consideration of potential nominees less secretive and political than in the past, the President has not made all aspects of the process open to public scrutiny under FOIA. The Court is not unsympathetic to the plaintiffs' beliefs that the public interest favors Department of Justice disclosure of the questionnaire responses. However, a showing of need or public interest does not alone transform the requested documents into agency records subject to FOIA. Sterling Drug, Inc. v. FTC, 146 U.S.App.D.C. 237, 244, 450 F.2d 698, 705 (1971); Forsham, 190 U.S.App.D.C. at 237, 587 F.2d at 1134. The Court has also considered the argument that the Senatorial responses are the end product of the process of monitoring compliance with the guidelines, a process separate from the ongoing judicial appointment process. This argument has an initial appeal, but the fact is that the Attorney General generated the questionnaires to secure information for the President's use as an integral part of the Article II appointment power. The plaintiffs may of course attempt to obtain copies of the questionnaire responses from the Senators and state nominating commissions authoring them. The Department of Justice represented that it has no objections to such a course of action.[10] Indeed, government counsel represented at oral argument that the Department is more concerned with safeguarding this channel *740 of information in the appointment process than in preventing disclosure of the factual material in the responses filed. In light of this disposition, the Court need not address the second legal issue of whether the questionnaire responses, as agency records, would be exempt from mandatory disclosure as pre-decisional advisory memoranda to the President. Ordered accordingly. APPENDIX A *741 APPENDIX B *742 NOTES [1] 28 U.S.C. § 133 note. [2] 43 Fed.Reg. 52455 (Nov. 13, 1978). The Executive Order is attached as Appendix A. [3] Defendant filed a motion to dismiss for failure to state a claim or for summary judgment. 
Because the Court has considered affidavits and documents outside the pleadings, the Court treats this solely as a motion for summary judgment. Fed.R.Civ.P. 12(b). [4] Attorney General Griffin Bell's covering letter is attached as Appendix B. [5] Plaintiffs in C.A. No. 79-1043 seek broad access to all the responses. Plaintiffs Tom W. Ryan, Jr. and Missouri Public Interest Research Group in C.A. No. 79-1042 seek only the response of Senator Eagleton. [6] In Forsham, the court ruled that raw data in the hands of investigators and university groups who conducted research with federal funds, though available to a federal agency, were not agency records. See also Cook v. Willingham, 400 F.2d 885 (10th Cir. 1968) (presentence report is judicial document, not FOIA agency record). [7] S.Rep. No. 1200, 93rd Cong., 2d Sess. 15 (1974); H.Rep. No. 1380, 93rd Cong., 2d Sess. 14 (1974), U.S.Code Cong. & Admin.News 1974, p. 6267; Stassi v. United States Department of Justice, C.A. No. 78-0532 at 2 (D.D.C. March 30, 1979); Ciccone v. Waterfront Commission of New York Harbor, 438 F.Supp. 55, 58 (S.D.N.Y.1977). [8] If there was any intention of achieving this result, the Court notes that the Attorney General would have had the questionnaire responses sent directly to the President. [9] In the Information Acquisition cases, this Court ordered partial release of Department of Justice personnel files on Chief Justice Warren E. Burger and Justice William H. Rehnquist. FBI background check material that invaded personal privacy was withheld, while certain comments to the President or his designee concerning their appointments to government service were released. As compared to this case, the Information Acquisition cases clearly concerned Department records. The Court finds this distinction crucial. [10] Defendant's Motion to Dismiss at 31-32.
|
Low
|
[
0.48085106382978704,
28.25,
30.5
] |
Q: Are there any diagrammers implemented in JavaScript? I need to develop a javascript based diagrammer for designing node-and-connector diagrams for things like process flow, activity diagram etc. I am planning to use jQuery's drag-and-drop and templates to do this. But, is there any similar solution already out there that I could reuse? A: Take a look at JointJS. It uses Raphaël for rendering the diagrams, so it's pretty smooth. It already has the most common diagram types built in (UML, Flowchart, ERD, Petri net, ...) and can easily be extended.
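As a hedged illustration of what the recommended approach looks like in practice, here is a minimal sketch based on JointJS's current (3.x) API; the joint.shapes.standard types and the <div id="diagram"> element are assumptions, and names may differ in older releases:

const graph = new joint.dia.Graph();
const paper = new joint.dia.Paper({
    el: document.getElementById('diagram'), // assumes a <div id="diagram"> on the page
    model: graph,
    width: 600,
    height: 400,
    gridSize: 10
});

// Two nodes and a connector, as in a simple process-flow diagram.
const start = new joint.shapes.standard.Rectangle();
start.position(40, 40);
start.resize(120, 40);
start.attr('label/text', 'Start');
start.addTo(graph);

const step = new joint.shapes.standard.Rectangle();
step.position(240, 140);
step.resize(120, 40);
step.attr('label/text', 'Step 1');
step.addTo(graph);

const link = new joint.shapes.standard.Link();
link.source(start);
link.target(step);
link.addTo(graph);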
|
High
|
[
0.707462686567164,
29.625,
12.25
] |
tst/yacc.c:345: warning: missing return value tst/yacc.c:349: warning: missing return value tst/yacc.c:360: warning: missing return value
|
Low
|
[
0.46962616822429903,
25.125,
28.375
] |
Q: Difference between &#32; and &nbsp; Can anyone explain the difference between &#32; and &nbsp;? I have HTML data stored in a database in binary form, and a space in it can be either &#32; or &nbsp; or sometimes &#160;. The issue is that when I convert this HTML to plain text using the JSoup lib it converts properly, but if I use the String.contains(my string) method of Java, the HTML data containing &#32; looks different from the data containing &nbsp;. The string is not found in either, and vice versa. Example: HTML1: This is my&#32;test string HTML2: This is my&nbsp;test string If I convert these to plain text using JSoup, it returns HTML 1: This is my test string HTML 2: This is my test string But still both strings are not the same. Why is it so? A: &#32; is the classic space, the one you get when you hit your spacebar, represented by its HTML entity equivalent. &nbsp; and &#160; represent the non-breaking space, often used to prevent collapse of multiple spaces together by the browser: "    " (four plain spaces) collapses into only one space, while "&nbsp;&nbsp;&nbsp;&nbsp;" is not collapsed. If you are parsing a string containing both classic and non-breaking spaces, you can safely replace one by the other. A: &#32; is just a space character, nothing more. Regular occurrences of this character will collapse to one space character in the end, whereas &nbsp; and &#160; both represent the non-breaking space character and, if they occur continuously one after another, they will not collapse to one space character. The only difference between them is that &#160; is the HTML number and &nbsp; is the HTML name. Basically all of these are HTML entities. You can learn more about them from the following links. Link 1 Link 2 A: &#32; is the character for the space key. &nbsp; and &#160; are both characters for the non-breaking space. If your data has come from different sources it may be possible that the space symbols have been encoded differently. In direct comparison they will likely be shown as being different.
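A short Java sketch of the failure mode described above (class name and sample strings are illustrative, not from the question); Jsoup's text() keeps the non-breaking space as U+00A0, so normalizing it to a plain space makes contains() behave:

import org.jsoup.Jsoup;

public class NbspDemo {
    public static void main(String[] args) {
        // &#32; is a plain space (U+0020); &nbsp; decodes to U+00A0.
        String text1 = Jsoup.parse("This is my&#32;test string").text();
        String text2 = Jsoup.parse("This is my&nbsp;test string").text();

        System.out.println(text1.contains("my test")); // true
        System.out.println(text2.contains("my test")); // false: U+00A0 is not ' '

        // Normalizing non-breaking spaces makes the comparison succeed.
        System.out.println(text2.replace('\u00A0', ' ').contains("my test")); // true
    }
}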
|
Low
|
[
0.533477321814254,
30.875,
27
] |
{-# LANGUAGE Trustworthy #-}
{-# LANGUAGE CPP, NoImplicitPrelude, ScopedTypeVariables, MagicHash, BangPatterns #-}

-----------------------------------------------------------------------------
-- |
-- Module      :  Data.List
-- Copyright   :  (c) The University of Glasgow 2001
-- License     :  BSD-style (see the file libraries/base/LICENSE)
--
-- Maintainer  :  libraries@haskell.org
-- Stability   :  stable
-- Portability :  portable
--
-- Operations on lists.
--
-----------------------------------------------------------------------------

module Data.OldList
   (
   -- * Basic functions
     (++)
   , head
   , last
   , tail
   , init
   , uncons
   , singleton
   , null
   , length

   -- * List transformations
   , map
   , reverse
   , intersperse
   , intercalate
   , transpose
   , subsequences
   , permutations

   -- * Reducing lists (folds)
   , foldl
   , foldl'
   , foldl1
   , foldl1'
   , foldr
   , foldr1

   -- ** Special folds
   , concat
   , concatMap
   , and
   , or
   , any
   , all
   , sum
   , product
   , maximum
   , minimum

   -- * Building lists

   -- ** Scans
   , scanl
   , scanl'
   , scanl1
   , scanr
   , scanr1

   -- ** Accumulating maps
   , mapAccumL
   , mapAccumR

   -- ** Infinite lists
   , iterate
   , iterate'
   , repeat
   , replicate
   , cycle

   -- ** Unfolding
   , unfoldr

   -- * Sublists

   -- ** Extracting sublists
   , take
   , drop
   , splitAt
   , takeWhile
   , dropWhile
   , dropWhileEnd
   , span
   , break
   , stripPrefix
   , group
   , inits
   , tails

   -- ** Predicates
   , isPrefixOf
   , isSuffixOf
   , isInfixOf

   -- * Searching lists

   -- ** Searching by equality
   , elem
   , notElem
   , lookup

   -- ** Searching with a predicate
   , find
   , filter
   , partition

   -- * Indexing lists
   -- | These functions treat a list @xs@ as a indexed collection,
   -- with indices ranging from 0 to @'length' xs - 1@.
   , (!!)
   , elemIndex
   , elemIndices
   , findIndex
   , findIndices

   -- * Zipping and unzipping lists
   , zip
   , zip3
   , zip4, zip5, zip6, zip7
   , zipWith
   , zipWith3
   , zipWith4, zipWith5, zipWith6, zipWith7
   , unzip
   , unzip3
   , unzip4, unzip5, unzip6, unzip7

   -- * Special lists

   -- ** Functions on strings
   , lines
   , words
   , unlines
   , unwords

   -- ** \"Set\" operations
   , nub
   , delete
   , (\\)
   , union
   , intersect

   -- ** Ordered lists
   , sort
   , sortOn
   , insert

   -- * Generalized functions

   -- ** The \"@By@\" operations
   -- | By convention, overloaded functions have a non-overloaded
   -- counterpart whose name is suffixed with \`@By@\'.
   --
   -- It is often convenient to use these functions together with
   -- 'Data.Function.on', for instance @'sortBy' ('compare'
   -- \`on\` 'fst')@.

   -- *** User-supplied equality (replacing an @Eq@ context)
   -- | The predicate is assumed to define an equivalence.
   , nubBy
   , deleteBy
   , deleteFirstsBy
   , unionBy
   , intersectBy
   , groupBy

   -- *** User-supplied comparison (replacing an @Ord@ context)
   -- | The function is assumed to define a total ordering.
   , sortBy
   , insertBy
   , maximumBy
   , minimumBy

   -- ** The \"@generic@\" operations
   -- | The prefix \`@generic@\' indicates an overloaded function that
   -- is a generalized version of a "Prelude" function.
   , genericLength
   , genericTake
   , genericDrop
   , genericSplitAt
   , genericIndex
   , genericReplicate

   ) where

import Data.Maybe
import Data.Bits        ( (.&.) )
import Data.Char        ( isSpace )
import Data.Ord         ( comparing )
import Data.Tuple       ( fst, snd )

import GHC.Num
import GHC.Real
import GHC.List
import GHC.Base

infix 5 \\ -- comment to fool cpp: https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/phases.html#cpp-and-string-gaps

-- -----------------------------------------------------------------------------
-- List functions

-- | The 'dropWhileEnd' function drops the largest suffix of a list
-- in which the given predicate holds for all elements.  For example:
--
-- >>> dropWhileEnd isSpace "foo\n"
-- "foo"
--
-- >>> dropWhileEnd isSpace "foo bar"
-- "foo bar"
--
-- > dropWhileEnd isSpace ("foo\n" ++ undefined) == "foo" ++ undefined
--
-- @since 4.5.0.0
dropWhileEnd :: (a -> Bool) -> [a] -> [a]
dropWhileEnd p = foldr (\x xs -> if p x && null xs then [] else x : xs) []

-- | \(\mathcal{O}(\min(m,n))\). The 'stripPrefix' function drops the given
-- prefix from a list. It returns 'Nothing' if the list did not start with the
-- prefix given, or 'Just' the list after the prefix, if it does.
--
-- >>> stripPrefix "foo" "foobar"
-- Just "bar"
--
-- >>> stripPrefix "foo" "foo"
-- Just ""
--
-- >>> stripPrefix "foo" "barfoo"
-- Nothing
--
-- >>> stripPrefix "foo" "barfoobaz"
-- Nothing
stripPrefix :: Eq a => [a] -> [a] -> Maybe [a]
stripPrefix [] ys = Just ys
stripPrefix (x:xs) (y:ys)
    | x == y = stripPrefix xs ys
stripPrefix _ _ = Nothing

-- | The 'elemIndex' function returns the index of the first element
-- in the given list which is equal (by '==') to the query element,
-- or 'Nothing' if there is no such element.
--
-- >>> elemIndex 4 [0..]
-- Just 4
elemIndex :: Eq a => a -> [a] -> Maybe Int
elemIndex x = findIndex (x==)

-- | The 'elemIndices' function extends 'elemIndex', by returning the
-- indices of all elements equal to the query element, in ascending order.
--
-- >>> elemIndices 'o' "Hello World"
-- [4,7]
elemIndices :: Eq a => a -> [a] -> [Int]
elemIndices x = findIndices (x==)

-- | The 'find' function takes a predicate and a list and returns the
-- first element in the list matching the predicate, or 'Nothing' if
-- there is no such element.
--
-- >>> find (> 4) [1..]
-- Just 5
--
-- >>> find (< 0) [1..10]
-- Nothing
find :: (a -> Bool) -> [a] -> Maybe a
find p = listToMaybe . filter p

-- | The 'findIndex' function takes a predicate and a list and returns
-- the index of the first element in the list satisfying the predicate,
-- or 'Nothing' if there is no such element.
--
-- >>> findIndex isSpace "Hello World!"
-- Just 5
findIndex :: (a -> Bool) -> [a] -> Maybe Int
findIndex p = listToMaybe . findIndices p

-- | The 'findIndices' function extends 'findIndex', by returning the
-- indices of all elements satisfying the predicate, in ascending order.
--
-- >>> findIndices (`elem` "aeiou") "Hello World!"
-- [1,4,7]
findIndices :: (a -> Bool) -> [a] -> [Int]
#if defined(USE_REPORT_PRELUDE)
findIndices p xs = [ i | (x,i) <- zip xs [0..], p x]
#else
-- Efficient definition, adapted from Data.Sequence
-- (Note that making this INLINABLE instead of INLINE allows
-- 'findIndex' to fuse, fixing #15426.)
{-# INLINABLE findIndices #-}
findIndices p ls = build $ \c n ->
  let go x r k | p x       = I# k `c` r (k +# 1#)
               | otherwise = r (k +# 1#)
  in foldr go (\_ -> n) ls 0#
#endif /* USE_REPORT_PRELUDE */

-- | \(\mathcal{O}(\min(m,n))\). The 'isPrefixOf' function takes two lists and
-- returns 'True' iff the first list is a prefix of the second.
--
-- >>> "Hello" `isPrefixOf` "Hello World!"
-- True
--
-- >>> "Hello" `isPrefixOf` "Wello Horld!"
-- False
isPrefixOf :: (Eq a) => [a] -> [a] -> Bool
isPrefixOf [] _          = True
isPrefixOf _  []         = False
isPrefixOf (x:xs) (y:ys) = x == y && isPrefixOf xs ys

-- | The 'isSuffixOf' function takes two lists and returns 'True' iff
-- the first list is a suffix of the second. The second list must be
-- finite.
--
-- >>> "ld!" `isSuffixOf` "Hello World!"
-- True
--
-- >>> "World" `isSuffixOf` "Hello World!"
-- False
isSuffixOf :: (Eq a) => [a] -> [a] -> Bool
ns `isSuffixOf` hs = maybe False id $ do
    delta <- dropLengthMaybe ns hs
    return $ ns == dropLength delta hs
-- Since dropLengthMaybe ns hs succeeded, we know that (if hs is finite)
-- length ns + length delta = length hs
-- so dropping the length of delta from hs will yield a suffix exactly
-- the length of ns.

-- A version of drop that drops the length of the first argument from the
-- second argument. If xs is longer than ys, xs will not be traversed in its
-- entirety. dropLength is also generally faster than (drop . length)
-- Both this and dropLengthMaybe could be written as folds over their first
-- arguments, but this reduces clarity with no benefit to isSuffixOf.
--
-- >>> dropLength "Hello" "Holla world"
-- " world"
--
-- >>> dropLength [1..] [1,2,3]
-- []
dropLength :: [a] -> [b] -> [b]
dropLength [] y = y
dropLength _ [] = []
dropLength (_:x') (_:y') = dropLength x' y'

-- A version of dropLength that returns Nothing if the second list runs out of
-- elements before the first.
--
-- >>> dropLengthMaybe [1..] [1,2,3]
-- Nothing
dropLengthMaybe :: [a] -> [b] -> Maybe [b]
dropLengthMaybe [] y = Just y
dropLengthMaybe _ [] = Nothing
dropLengthMaybe (_:x') (_:y') = dropLengthMaybe x' y'

-- | The 'isInfixOf' function takes two lists and returns 'True'
-- iff the first list is contained, wholly and intact,
-- anywhere within the second.
--
-- >>> isInfixOf "Haskell" "I really like Haskell."
-- True
--
-- >>> isInfixOf "Ial" "I really like Haskell."
-- False
isInfixOf :: (Eq a) => [a] -> [a] -> Bool
isInfixOf needle haystack = any (isPrefixOf needle) (tails haystack)

-- | \(\mathcal{O}(n^2)\). The 'nub' function removes duplicate elements from a
-- list. In particular, it keeps only the first occurrence of each element. (The
-- name 'nub' means \`essence\'.) It is a special case of 'nubBy', which allows
-- the programmer to supply their own equality test.
--
-- >>> nub [1,2,3,4,3,2,1,2,4,3,5]
-- [1,2,3,4,5]
nub :: (Eq a) => [a] -> [a]
nub = nubBy (==)

-- | The 'nubBy' function behaves just like 'nub', except it uses a
-- user-supplied equality predicate instead of the overloaded '=='
-- function.
--
-- >>> nubBy (\x y -> mod x 3 == mod y 3) [1,2,4,5,6]
-- [1,2,6]
nubBy :: (a -> a -> Bool) -> [a] -> [a]
#if defined(USE_REPORT_PRELUDE)
nubBy eq []     = []
nubBy eq (x:xs) = x : nubBy eq (filter (\ y -> not (eq x y)) xs)
#else
-- stolen from HBC
nubBy eq l = nubBy' l []
  where
    nubBy' [] _ = []
    nubBy' (y:ys) xs
       | elem_by eq y xs = nubBy' ys xs
       | otherwise       = y : nubBy' ys (y:xs)

-- Not exported:
-- Note that we keep the call to `eq` with arguments in the
-- same order as in the reference (prelude) implementation,
-- and that this order is different from how `elem` calls (==).
-- See #2528, #3280 and #7913.
-- 'xs' is the list of things we've seen so far,
-- 'y' is the potential new element
elem_by :: (a -> a -> Bool) -> a -> [a] -> Bool
elem_by _  _ []     = False
elem_by eq y (x:xs) = x `eq` y || elem_by eq y xs
#endif

-- | \(\mathcal{O}(n)\). 'delete' @x@ removes the first occurrence of @x@ from
-- its list argument. For example,
--
-- >>> delete 'a' "banana"
-- "bnana"
--
-- It is a special case of 'deleteBy', which allows the programmer to
-- supply their own equality test.
delete :: (Eq a) => a -> [a] -> [a]
delete = deleteBy (==)

-- | \(\mathcal{O}(n)\). The 'deleteBy' function behaves like 'delete', but
-- takes a user-supplied equality predicate.
-- -- >>> deleteBy (<=) 4 [1..10] -- [1,2,3,5,6,7,8,9,10] deleteBy :: (a -> a -> Bool) -> a -> [a] -> [a] deleteBy _ _ [] = [] deleteBy eq x (y:ys) = if x `eq` y then ys else y : deleteBy eq x ys -- | The '\\' function is list difference (non-associative). -- In the result of @xs@ '\\' @ys@, the first occurrence of each element of -- @ys@ in turn (if any) has been removed from @xs@. Thus -- -- > (xs ++ ys) \\ xs == ys. -- -- >>> "Hello World!" \\ "ell W" -- "Hoorld!" -- -- It is a special case of 'deleteFirstsBy', which allows the programmer -- to supply their own equality test. (\\) :: (Eq a) => [a] -> [a] -> [a] (\\) = foldl (flip delete) -- | The 'union' function returns the list union of the two lists. -- For example, -- -- >>> "dog" `union` "cow" -- "dogcw" -- -- Duplicates, and elements of the first list, are removed from the -- second list, but if the first list contains duplicates, so will -- the result. -- It is a special case of 'unionBy', which allows the programmer to supply -- their own equality test. union :: (Eq a) => [a] -> [a] -> [a] union = unionBy (==) -- | The 'unionBy' function is the non-overloaded version of 'union'. unionBy :: (a -> a -> Bool) -> [a] -> [a] -> [a] unionBy eq xs ys = xs ++ foldl (flip (deleteBy eq)) (nubBy eq ys) xs -- | The 'intersect' function takes the list intersection of two lists. -- For example, -- -- >>> [1,2,3,4] `intersect` [2,4,6,8] -- [2,4] -- -- If the first list contains duplicates, so will the result. -- -- >>> [1,2,2,3,4] `intersect` [6,4,4,2] -- [2,2,4] -- -- It is a special case of 'intersectBy', which allows the programmer to -- supply their own equality test. If the element is found in both the first -- and the second list, the element from the first list will be used. intersect :: (Eq a) => [a] -> [a] -> [a] intersect = intersectBy (==) -- | The 'intersectBy' function is the non-overloaded version of 'intersect'. intersectBy :: (a -> a -> Bool) -> [a] -> [a] -> [a] intersectBy _ [] _ = [] intersectBy _ _ [] = [] intersectBy eq xs ys = [x | x <- xs, any (eq x) ys] -- | \(\mathcal{O}(n)\). The 'intersperse' function takes an element and a list -- and \`intersperses\' that element between the elements of the list. For -- example, -- -- >>> intersperse ',' "abcde" -- "a,b,c,d,e" intersperse :: a -> [a] -> [a] intersperse _ [] = [] intersperse sep (x:xs) = x : prependToAll sep xs -- Not exported: -- We want to make every element in the 'intersperse'd list available -- as soon as possible to avoid space leaks. Experiments suggested that -- a separate top-level helper is more efficient than a local worker. prependToAll :: a -> [a] -> [a] prependToAll _ [] = [] prependToAll sep (x:xs) = sep : x : prependToAll sep xs -- | 'intercalate' @xs xss@ is equivalent to @('concat' ('intersperse' xs xss))@. -- It inserts the list @xs@ in between the lists in @xss@ and concatenates the -- result. -- -- >>> intercalate ", " ["Lorem", "ipsum", "dolor"] -- "Lorem, ipsum, dolor" intercalate :: [a] -> [[a]] -> [a] intercalate xs xss = concat (intersperse xs xss) -- | The 'transpose' function transposes the rows and columns of its argument.
-- For example, -- -- >>> transpose [[1,2,3],[4,5,6]] -- [[1,4],[2,5],[3,6]] -- -- If some of the rows are shorter than the following rows, their elements are skipped: -- -- >>> transpose [[10,11],[20],[],[30,31,32]] -- [[10,20,30],[11,31],[32]] transpose :: [[a]] -> [[a]] transpose [] = [] transpose ([] : xss) = transpose xss transpose ((x:xs) : xss) = (x : hds) : transpose (xs : tls) where -- We tie the calculations of heads and tails together -- to prevent heads from leaking into tails and vice versa. -- unzip makes the selector thunk arrangements we need to -- ensure everything gets cleaned up properly. (hds, tls) = unzip [(hd, tl) | (hd:tl) <- xss] -- | The 'partition' function takes a predicate and a list and returns -- the pair of lists of elements which do and do not satisfy the -- predicate, respectively; i.e., -- -- > partition p xs == (filter p xs, filter (not . p) xs) -- -- >>> partition (`elem` "aeiou") "Hello World!" -- ("eoo","Hll Wrld!") partition :: (a -> Bool) -> [a] -> ([a],[a]) {-# INLINE partition #-} partition p xs = foldr (select p) ([],[]) xs select :: (a -> Bool) -> a -> ([a], [a]) -> ([a], [a]) select p x ~(ts,fs) | p x = (x:ts,fs) | otherwise = (ts, x:fs) -- | The 'mapAccumL' function behaves like a combination of 'map' and -- 'foldl'; it applies a function to each element of a list, passing -- an accumulating parameter from left to right, and returning a final -- value of this accumulator together with the new list. mapAccumL :: (acc -> x -> (acc, y)) -- Function of elt of input list -- and accumulator, returning new -- accumulator and elt of result list -> acc -- Initial accumulator -> [x] -- Input list -> (acc, [y]) -- Final accumulator and result list {-# NOINLINE [1] mapAccumL #-} mapAccumL _ s [] = (s, []) mapAccumL f s (x:xs) = (s'',y:ys) where (s', y ) = f s x (s'',ys) = mapAccumL f s' xs {-# RULES "mapAccumL" [~1] forall f s xs . mapAccumL f s xs = foldr (mapAccumLF f) pairWithNil xs s "mapAccumLList" [1] forall f s xs . foldr (mapAccumLF f) pairWithNil xs s = mapAccumL f s xs #-} pairWithNil :: acc -> (acc, [y]) {-# INLINE [0] pairWithNil #-} pairWithNil x = (x, []) mapAccumLF :: (acc -> x -> (acc, y)) -> x -> (acc -> (acc, [y])) -> acc -> (acc, [y]) {-# INLINE [0] mapAccumLF #-} mapAccumLF f = \x r -> oneShot (\s -> let (s', y) = f s x (s'', ys) = r s' in (s'', y:ys)) -- See Note [Left folds via right fold] -- | The 'mapAccumR' function behaves like a combination of 'map' and -- 'foldr'; it applies a function to each element of a list, passing -- an accumulating parameter from right to left, and returning a final -- value of this accumulator together with the new list. mapAccumR :: (acc -> x -> (acc, y)) -- Function of elt of input list -- and accumulator, returning new -- accumulator and elt of result list -> acc -- Initial accumulator -> [x] -- Input list -> (acc, [y]) -- Final accumulator and result list mapAccumR _ s [] = (s, []) mapAccumR f s (x:xs) = (s'', y:ys) where (s'',y ) = f s' x (s', ys) = mapAccumR f s xs -- | \(\mathcal{O}(n)\). The 'insert' function takes an element and a list and -- inserts the element into the list at the first position where it is less than -- or equal to the next element. In particular, if the list is sorted before the -- call, the result will also be sorted. It is a special case of 'insertBy', -- which allows the programmer to supply their own comparison function. -- -- >>> insert 4 [1,2,3,5,6,7] -- [1,2,3,4,5,6,7] insert :: Ord a => a -> [a] -> [a] insert e ls = insertBy (compare) e ls -- | \(\mathcal{O}(n)\).
The non-overloaded version of 'insert'. insertBy :: (a -> a -> Ordering) -> a -> [a] -> [a] insertBy _ x [] = [x] insertBy cmp x ys@(y:ys') = case cmp x y of GT -> y : insertBy cmp x ys' _ -> x : ys -- | The 'maximumBy' function takes a comparison function and a list -- and returns the greatest element of the list by the comparison function. -- The list must be finite and non-empty. -- -- We can use this to find the longest entry of a list: -- -- >>> maximumBy (\x y -> compare (length x) (length y)) ["Hello", "World", "!", "Longest", "bar"] -- "Longest" maximumBy :: (a -> a -> Ordering) -> [a] -> a maximumBy _ [] = errorWithoutStackTrace "List.maximumBy: empty list" maximumBy cmp xs = foldl1 maxBy xs where maxBy x y = case cmp x y of GT -> x _ -> y -- | The 'minimumBy' function takes a comparison function and a list -- and returns the least element of the list by the comparison function. -- The list must be finite and non-empty. -- -- We can use this to find the shortest entry of a list: -- -- >>> minimumBy (\x y -> compare (length x) (length y)) ["Hello", "World", "!", "Longest", "bar"] -- "!" minimumBy :: (a -> a -> Ordering) -> [a] -> a minimumBy _ [] = errorWithoutStackTrace "List.minimumBy: empty list" minimumBy cmp xs = foldl1 minBy xs where minBy x y = case cmp x y of GT -> y _ -> x -- | \(\mathcal{O}(n)\). The 'genericLength' function is an overloaded version -- of 'length'. In particular, instead of returning an 'Int', it returns any -- type which is an instance of 'Num'. It is, however, less efficient than -- 'length'. -- -- >>> genericLength [1, 2, 3] :: Int -- 3 -- >>> genericLength [1, 2, 3] :: Float -- 3.0 genericLength :: (Num i) => [a] -> i {-# NOINLINE [1] genericLength #-} genericLength [] = 0 genericLength (_:l) = 1 + genericLength l {-# RULES "genericLengthInt" genericLength = (strictGenericLength :: [a] -> Int); "genericLengthInteger" genericLength = (strictGenericLength :: [a] -> Integer); #-} strictGenericLength :: (Num i) => [b] -> i strictGenericLength l = gl l 0 where gl [] a = a gl (_:xs) a = let a' = a + 1 in a' `seq` gl xs a' -- | The 'genericTake' function is an overloaded version of 'take', which -- accepts any 'Integral' value as the number of elements to take. genericTake :: (Integral i) => i -> [a] -> [a] genericTake n _ | n <= 0 = [] genericTake _ [] = [] genericTake n (x:xs) = x : genericTake (n-1) xs -- | The 'genericDrop' function is an overloaded version of 'drop', which -- accepts any 'Integral' value as the number of elements to drop. genericDrop :: (Integral i) => i -> [a] -> [a] genericDrop n xs | n <= 0 = xs genericDrop _ [] = [] genericDrop n (_:xs) = genericDrop (n-1) xs -- | The 'genericSplitAt' function is an overloaded version of 'splitAt', which -- accepts any 'Integral' value as the position at which to split. genericSplitAt :: (Integral i) => i -> [a] -> ([a], [a]) genericSplitAt n xs | n <= 0 = ([],xs) genericSplitAt _ [] = ([],[]) genericSplitAt n (x:xs) = (x:xs',xs'') where (xs',xs'') = genericSplitAt (n-1) xs -- | The 'genericIndex' function is an overloaded version of '!!', which -- accepts any 'Integral' value as the index. genericIndex :: (Integral i) => [a] -> i -> a genericIndex (x:_) 0 = x genericIndex (_:xs) n | n > 0 = genericIndex xs (n-1) | otherwise = errorWithoutStackTrace "List.genericIndex: negative argument." genericIndex _ _ = errorWithoutStackTrace "List.genericIndex: index too large." 
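-- A quick illustration of the @generic@ functions above, which ship without usage examples; this is an editorial sketch assuming a standard GHCi session and only the definitions in this module: -- -- >>> genericTake (2 :: Integer) "hello" -- "he" -- -- >>> genericSplitAt (3 :: Integer) [1..5] -- ([1,2,3],[4,5]) -- -- >>> genericIndex "abc" (1 :: Integer) -- 'b'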
-- | The 'genericReplicate' function is an overloaded version of 'replicate', -- which accepts any 'Integral' value as the number of repetitions to make. genericReplicate :: (Integral i) => i -> a -> [a] genericReplicate n x = genericTake n (repeat x) -- | The 'zip4' function takes four lists and returns a list of -- quadruples, analogous to 'zip'. -- It is capable of list fusion, but it is restricted to its -- first list argument and its resulting list. {-# INLINE zip4 #-} zip4 :: [a] -> [b] -> [c] -> [d] -> [(a,b,c,d)] zip4 = zipWith4 (,,,) -- | The 'zip5' function takes five lists and returns a list of -- five-tuples, analogous to 'zip'. -- It is capable of list fusion, but it is restricted to its -- first list argument and its resulting list. {-# INLINE zip5 #-} zip5 :: [a] -> [b] -> [c] -> [d] -> [e] -> [(a,b,c,d,e)] zip5 = zipWith5 (,,,,) -- | The 'zip6' function takes six lists and returns a list of six-tuples, -- analogous to 'zip'. -- It is capable of list fusion, but it is restricted to its -- first list argument and its resulting list. {-# INLINE zip6 #-} zip6 :: [a] -> [b] -> [c] -> [d] -> [e] -> [f] -> [(a,b,c,d,e,f)] zip6 = zipWith6 (,,,,,) -- | The 'zip7' function takes seven lists and returns a list of -- seven-tuples, analogous to 'zip'. -- It is capable of list fusion, but it is restricted to its -- first list argument and its resulting list. {-# INLINE zip7 #-} zip7 :: [a] -> [b] -> [c] -> [d] -> [e] -> [f] -> [g] -> [(a,b,c,d,e,f,g)] zip7 = zipWith7 (,,,,,,) -- | The 'zipWith4' function takes a function which combines four -- elements, as well as four lists and returns a list of their point-wise -- combination, analogous to 'zipWith'. -- It is capable of list fusion, but it is restricted to its -- first list argument and its resulting list. {-# NOINLINE [1] zipWith4 #-} zipWith4 :: (a->b->c->d->e) -> [a]->[b]->[c]->[d]->[e] zipWith4 z (a:as) (b:bs) (c:cs) (d:ds) = z a b c d : zipWith4 z as bs cs ds zipWith4 _ _ _ _ _ = [] -- | The 'zipWith5' function takes a function which combines five -- elements, as well as five lists and returns a list of their point-wise -- combination, analogous to 'zipWith'. -- It is capable of list fusion, but it is restricted to its -- first list argument and its resulting list. {-# NOINLINE [1] zipWith5 #-} zipWith5 :: (a->b->c->d->e->f) -> [a]->[b]->[c]->[d]->[e]->[f] zipWith5 z (a:as) (b:bs) (c:cs) (d:ds) (e:es) = z a b c d e : zipWith5 z as bs cs ds es zipWith5 _ _ _ _ _ _ = [] -- | The 'zipWith6' function takes a function which combines six -- elements, as well as six lists and returns a list of their point-wise -- combination, analogous to 'zipWith'. -- It is capable of list fusion, but it is restricted to its -- first list argument and its resulting list. {-# NOINLINE [1] zipWith6 #-} zipWith6 :: (a->b->c->d->e->f->g) -> [a]->[b]->[c]->[d]->[e]->[f]->[g] zipWith6 z (a:as) (b:bs) (c:cs) (d:ds) (e:es) (f:fs) = z a b c d e f : zipWith6 z as bs cs ds es fs zipWith6 _ _ _ _ _ _ _ = [] -- | The 'zipWith7' function takes a function which combines seven -- elements, as well as seven lists and returns a list of their point-wise -- combination, analogous to 'zipWith'. -- It is capable of list fusion, but it is restricted to its -- first list argument and its resulting list. 
{-# NOINLINE [1] zipWith7 #-} zipWith7 :: (a->b->c->d->e->f->g->h) -> [a]->[b]->[c]->[d]->[e]->[f]->[g]->[h] zipWith7 z (a:as) (b:bs) (c:cs) (d:ds) (e:es) (f:fs) (g:gs) = z a b c d e f g : zipWith7 z as bs cs ds es fs gs zipWith7 _ _ _ _ _ _ _ _ = [] {- Functions and rules for fusion of zipWith4, zipWith5, zipWith6 and zipWith7. The principle is the same as for zip and zipWith in GHC.List: Turn zipWithX into a version in which the first argument and the result can be fused. Turn it back into the original function if no fusion happens. -} {-# INLINE [0] zipWith4FB #-} -- See Note [Inline FB functions] zipWith4FB :: (e->xs->xs') -> (a->b->c->d->e) -> a->b->c->d->xs->xs' zipWith4FB cons func = \a b c d r -> (func a b c d) `cons` r {-# INLINE [0] zipWith5FB #-} -- See Note [Inline FB functions] zipWith5FB :: (f->xs->xs') -> (a->b->c->d->e->f) -> a->b->c->d->e->xs->xs' zipWith5FB cons func = \a b c d e r -> (func a b c d e) `cons` r {-# INLINE [0] zipWith6FB #-} -- See Note [Inline FB functions] zipWith6FB :: (g->xs->xs') -> (a->b->c->d->e->f->g) -> a->b->c->d->e->f->xs->xs' zipWith6FB cons func = \a b c d e f r -> (func a b c d e f) `cons` r {-# INLINE [0] zipWith7FB #-} -- See Note [Inline FB functions] zipWith7FB :: (h->xs->xs') -> (a->b->c->d->e->f->g->h) -> a->b->c->d->e->f->g->xs->xs' zipWith7FB cons func = \a b c d e f g r -> (func a b c d e f g) `cons` r {-# INLINE [0] foldr4 #-} foldr4 :: (a->b->c->d->e->e) -> e->[a]->[b]->[c]->[d]->e foldr4 k z = go where go (a:as) (b:bs) (c:cs) (d:ds) = k a b c d (go as bs cs ds) go _ _ _ _ = z {-# INLINE [0] foldr5 #-} foldr5 :: (a->b->c->d->e->f->f) -> f->[a]->[b]->[c]->[d]->[e]->f foldr5 k z = go where go (a:as) (b:bs) (c:cs) (d:ds) (e:es) = k a b c d e (go as bs cs ds es) go _ _ _ _ _ = z {-# INLINE [0] foldr6 #-} foldr6 :: (a->b->c->d->e->f->g->g) -> g->[a]->[b]->[c]->[d]->[e]->[f]->g foldr6 k z = go where go (a:as) (b:bs) (c:cs) (d:ds) (e:es) (f:fs) = k a b c d e f ( go as bs cs ds es fs) go _ _ _ _ _ _ = z {-# INLINE [0] foldr7 #-} foldr7 :: (a->b->c->d->e->f->g->h->h) -> h->[a]->[b]->[c]->[d]->[e]->[f]->[g]->h foldr7 k z = go where go (a:as) (b:bs) (c:cs) (d:ds) (e:es) (f:fs) (g:gs) = k a b c d e f g ( go as bs cs ds es fs gs) go _ _ _ _ _ _ _ = z foldr4_left :: (a->b->c->d->e->f)-> f->a->([b]->[c]->[d]->e)-> [b]->[c]->[d]->f foldr4_left k _z a r (b:bs) (c:cs) (d:ds) = k a b c d (r bs cs ds) foldr4_left _ z _ _ _ _ _ = z foldr5_left :: (a->b->c->d->e->f->g)-> g->a->([b]->[c]->[d]->[e]->f)-> [b]->[c]->[d]->[e]->g foldr5_left k _z a r (b:bs) (c:cs) (d:ds) (e:es) = k a b c d e (r bs cs ds es) foldr5_left _ z _ _ _ _ _ _ = z foldr6_left :: (a->b->c->d->e->f->g->h)-> h->a->([b]->[c]->[d]->[e]->[f]->g)-> [b]->[c]->[d]->[e]->[f]->h foldr6_left k _z a r (b:bs) (c:cs) (d:ds) (e:es) (f:fs) = k a b c d e f (r bs cs ds es fs) foldr6_left _ z _ _ _ _ _ _ _ = z foldr7_left :: (a->b->c->d->e->f->g->h->i)-> i->a->([b]->[c]->[d]->[e]->[f]->[g]->h)-> [b]->[c]->[d]->[e]->[f]->[g]->i foldr7_left k _z a r (b:bs) (c:cs) (d:ds) (e:es) (f:fs) (g:gs) = k a b c d e f g (r bs cs ds es fs gs) foldr7_left _ z _ _ _ _ _ _ _ _ = z {-# RULES "foldr4/left" forall k z (g::forall b.(a->b->b)->b->b). foldr4 k z (build g) = g (foldr4_left k z) (\_ _ _ -> z) "foldr5/left" forall k z (g::forall b.(a->b->b)->b->b). foldr5 k z (build g) = g (foldr5_left k z) (\_ _ _ _ -> z) "foldr6/left" forall k z (g::forall b.(a->b->b)->b->b). foldr6 k z (build g) = g (foldr6_left k z) (\_ _ _ _ _ -> z) "foldr7/left" forall k z (g::forall b.(a->b->b)->b->b). 
foldr7 k z (build g) = g (foldr7_left k z) (\_ _ _ _ _ _ -> z) "zipWith4" [~1] forall f as bs cs ds. zipWith4 f as bs cs ds = build (\c n -> foldr4 (zipWith4FB c f) n as bs cs ds) "zipWith5" [~1] forall f as bs cs ds es. zipWith5 f as bs cs ds es = build (\c n -> foldr5 (zipWith5FB c f) n as bs cs ds es) "zipWith6" [~1] forall f as bs cs ds es fs. zipWith6 f as bs cs ds es fs = build (\c n -> foldr6 (zipWith6FB c f) n as bs cs ds es fs) "zipWith7" [~1] forall f as bs cs ds es fs gs. zipWith7 f as bs cs ds es fs gs = build (\c n -> foldr7 (zipWith7FB c f) n as bs cs ds es fs gs) "zipWith4List" [1] forall f. foldr4 (zipWith4FB (:) f) [] = zipWith4 f "zipWith5List" [1] forall f. foldr5 (zipWith5FB (:) f) [] = zipWith5 f "zipWith6List" [1] forall f. foldr6 (zipWith6FB (:) f) [] = zipWith6 f "zipWith7List" [1] forall f. foldr7 (zipWith7FB (:) f) [] = zipWith7 f #-} {- Note [Inline @unzipN@ functions] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The inline principle for @unzip{4,5,6,7}@ is the same as 'unzip'/'unzip3' in "GHC.List". The 'unzip'/'unzip3' functions are inlined so that the `foldr` with which they are defined has an opportunity to fuse. As such, since there are not any differences between 2/3-ary 'unzip' and its n-ary counterparts below aside from the number of arguments, the `INLINE` pragma should be replicated in the @unzipN@ functions below as well. -} -- | The 'unzip4' function takes a list of quadruples and returns four -- lists, analogous to 'unzip'. {-# INLINE unzip4 #-} -- Inline so that fusion with `foldr` has an opportunity to fire. -- See Note [Inline @unzipN@ functions] above. unzip4 :: [(a,b,c,d)] -> ([a],[b],[c],[d]) unzip4 = foldr (\(a,b,c,d) ~(as,bs,cs,ds) -> (a:as,b:bs,c:cs,d:ds)) ([],[],[],[]) -- | The 'unzip5' function takes a list of five-tuples and returns five -- lists, analogous to 'unzip'. {-# INLINE unzip5 #-} -- Inline so that fusion with `foldr` has an opportunity to fire. -- See Note [Inline @unzipN@ functions] above. unzip5 :: [(a,b,c,d,e)] -> ([a],[b],[c],[d],[e]) unzip5 = foldr (\(a,b,c,d,e) ~(as,bs,cs,ds,es) -> (a:as,b:bs,c:cs,d:ds,e:es)) ([],[],[],[],[]) -- | The 'unzip6' function takes a list of six-tuples and returns six -- lists, analogous to 'unzip'. {-# INLINE unzip6 #-} -- Inline so that fusion with `foldr` has an opportunity to fire. -- See Note [Inline @unzipN@ functions] above. unzip6 :: [(a,b,c,d,e,f)] -> ([a],[b],[c],[d],[e],[f]) unzip6 = foldr (\(a,b,c,d,e,f) ~(as,bs,cs,ds,es,fs) -> (a:as,b:bs,c:cs,d:ds,e:es,f:fs)) ([],[],[],[],[],[]) -- | The 'unzip7' function takes a list of seven-tuples and returns -- seven lists, analogous to 'unzip'. {-# INLINE unzip7 #-} -- Inline so that fusion with `foldr` has an opportunity to fire. -- See Note [Inline @unzipN@ functions] above. unzip7 :: [(a,b,c,d,e,f,g)] -> ([a],[b],[c],[d],[e],[f],[g]) unzip7 = foldr (\(a,b,c,d,e,f,g) ~(as,bs,cs,ds,es,fs,gs) -> (a:as,b:bs,c:cs,d:ds,e:es,f:fs,g:gs)) ([],[],[],[],[],[],[]) -- | The 'deleteFirstsBy' function takes a predicate and two lists and -- returns the first list with the first occurrence of each element of -- the second list removed. deleteFirstsBy :: (a -> a -> Bool) -> [a] -> [a] -> [a] deleteFirstsBy eq = foldl (flip (deleteBy eq)) -- | The 'group' function takes a list and returns a list of lists such -- that the concatenation of the result is equal to the argument. Moreover, -- each sublist in the result contains only equal elements. 
For example, -- -- >>> group "Mississippi" -- ["M","i","ss","i","ss","i","pp","i"] -- -- It is a special case of 'groupBy', which allows the programmer to supply -- their own equality test. group :: Eq a => [a] -> [[a]] group = groupBy (==) -- | The 'groupBy' function is the non-overloaded version of 'group'. groupBy :: (a -> a -> Bool) -> [a] -> [[a]] groupBy _ [] = [] groupBy eq (x:xs) = (x:ys) : groupBy eq zs where (ys,zs) = span (eq x) xs -- | The 'inits' function returns all initial segments of the argument, -- shortest first. For example, -- -- >>> inits "abc" -- ["","a","ab","abc"] -- -- Note that 'inits' has the following strictness property: -- @inits (xs ++ _|_) = inits xs ++ _|_@ -- -- In particular, -- @inits _|_ = [] : _|_@ inits :: [a] -> [[a]] inits = map toListSB . scanl' snocSB emptySB {-# NOINLINE inits #-} -- We do not allow inits to inline, because it plays havoc with Call Arity -- if it fuses with a consumer, and it would generally lead to serious -- loss of sharing if allowed to fuse with a producer. -- | \(\mathcal{O}(n)\). The 'tails' function returns all final segments of the -- argument, longest first. For example, -- -- >>> tails "abc" -- ["abc","bc","c",""] -- -- Note that 'tails' has the following strictness property: -- @tails _|_ = _|_ : _|_@ tails :: [a] -> [[a]] {-# INLINABLE tails #-} tails lst = build (\c n -> let tailsGo xs = xs `c` case xs of [] -> n _ : xs' -> tailsGo xs' in tailsGo lst) -- | The 'subsequences' function returns the list of all subsequences of the argument. -- -- >>> subsequences "abc" -- ["","a","b","ab","c","ac","bc","abc"] subsequences :: [a] -> [[a]] subsequences xs = [] : nonEmptySubsequences xs -- | The 'nonEmptySubsequences' function returns the list of all subsequences of the argument, -- except for the empty list. -- -- >>> nonEmptySubsequences "abc" -- ["a","b","ab","c","ac","bc","abc"] nonEmptySubsequences :: [a] -> [[a]] nonEmptySubsequences [] = [] nonEmptySubsequences (x:xs) = [x] : foldr f [] (nonEmptySubsequences xs) where f ys r = ys : (x : ys) : r -- | The 'permutations' function returns the list of all permutations of the argument. -- -- >>> permutations "abc" -- ["abc","bac","cba","bca","cab","acb"] permutations :: [a] -> [[a]] permutations xs0 = xs0 : perms xs0 [] where perms [] _ = [] perms (t:ts) is = foldr interleave (perms ts (t:is)) (permutations is) where interleave xs r = let (_,zs) = interleave' id xs r in zs interleave' _ [] r = (ts, r) interleave' f (y:ys) r = let (us,zs) = interleave' (f . (y:)) ys r in (y:us, f (t:y:us) : zs) ------------------------------------------------------------------------------ -- Quick Sort algorithm taken from HBC's QSort library. -- | The 'sort' function implements a stable sorting algorithm. -- It is a special case of 'sortBy', which allows the programmer to supply -- their own comparison function. -- -- Elements are arranged from lowest to highest, keeping duplicates in -- the order they appeared in the input. -- -- >>> sort [1,6,4,3,2,5] -- [1,2,3,4,5,6] sort :: (Ord a) => [a] -> [a] -- | The 'sortBy' function is the non-overloaded version of 'sort'. -- -- >>> sortBy (\(a,_) (b,_) -> compare a b) [(2, "world"), (4, "!"), (1, "Hello")] -- [(1,"Hello"),(2,"world"),(4,"!")] sortBy :: (a -> a -> Ordering) -> [a] -> [a] #if defined(USE_REPORT_PRELUDE) sort = sortBy compare sortBy cmp = foldr (insertBy cmp) [] #else {- GHC's mergesort replaced by a better implementation, 24/12/2009. This code originally contributed to the nhc12 compiler by Thomas Nordin in 2002. 
Rumoured to have been based on code by Lennart Augustsson, e.g. http://www.mail-archive.com/[email protected]/msg01822.html and possibly to bear similarities to a 1982 paper by Richard O'Keefe: "A smooth applicative merge sort". Benchmarks show it to be often 2x the speed of the previous implementation. Fixes ticket https://gitlab.haskell.org/ghc/ghc/issues/2143 -} sort = sortBy compare sortBy cmp = mergeAll . sequences where sequences (a:b:xs) | a `cmp` b == GT = descending b [a] xs | otherwise = ascending b (a:) xs sequences xs = [xs] descending a as (b:bs) | a `cmp` b == GT = descending b (a:as) bs descending a as bs = (a:as): sequences bs ascending a as (b:bs) | a `cmp` b /= GT = ascending b (\ys -> as (a:ys)) bs ascending a as bs = let !x = as [a] in x : sequences bs mergeAll [x] = x mergeAll xs = mergeAll (mergePairs xs) mergePairs (a:b:xs) = let !x = merge a b in x : mergePairs xs mergePairs xs = xs merge as@(a:as') bs@(b:bs') | a `cmp` b == GT = b:merge as bs' | otherwise = a:merge as' bs merge [] bs = bs merge as [] = as {- sortBy cmp l = mergesort cmp l sort l = mergesort compare l Quicksort replaced by mergesort, 14/5/2002. From: Ian Lynagh <[email protected]> I am curious as to why the List.sort implementation in GHC is a quicksort algorithm rather than an algorithm that guarantees n log n time in the worst case? I have attached a mergesort implementation along with a few scripts to time it's performance, the results of which are shown below (* means it didn't finish successfully - in all cases this was due to a stack overflow). If I heap profile the random_list case with only 10000 then I see random_list peaks at using about 2.5M of memory, whereas in the same program using List.sort it uses only 100k. Input style Input length Sort data Sort alg User time stdin 10000 random_list sort 2.82 stdin 10000 random_list mergesort 2.96 stdin 10000 sorted sort 31.37 stdin 10000 sorted mergesort 1.90 stdin 10000 revsorted sort 31.21 stdin 10000 revsorted mergesort 1.88 stdin 100000 random_list sort * stdin 100000 random_list mergesort * stdin 100000 sorted sort * stdin 100000 sorted mergesort * stdin 100000 revsorted sort * stdin 100000 revsorted mergesort * func 10000 random_list sort 0.31 func 10000 random_list mergesort 0.91 func 10000 sorted sort 19.09 func 10000 sorted mergesort 0.15 func 10000 revsorted sort 19.17 func 10000 revsorted mergesort 0.16 func 100000 random_list sort 3.85 func 100000 random_list mergesort * func 100000 sorted sort 5831.47 func 100000 sorted mergesort 2.23 func 100000 revsorted sort 5872.34 func 100000 revsorted mergesort 2.24 mergesort :: (a -> a -> Ordering) -> [a] -> [a] mergesort cmp = mergesort' cmp . map wrap mergesort' :: (a -> a -> Ordering) -> [[a]] -> [a] mergesort' _ [] = [] mergesort' _ [xs] = xs mergesort' cmp xss = mergesort' cmp (merge_pairs cmp xss) merge_pairs :: (a -> a -> Ordering) -> [[a]] -> [[a]] merge_pairs _ [] = [] merge_pairs _ [xs] = [xs] merge_pairs cmp (xs:ys:xss) = merge cmp xs ys : merge_pairs cmp xss merge :: (a -> a -> Ordering) -> [a] -> [a] -> [a] merge _ [] ys = ys merge _ xs [] = xs merge cmp (x:xs) (y:ys) = case x `cmp` y of GT -> y : merge cmp (x:xs) ys _ -> x : merge cmp xs (y:ys) wrap :: a -> [a] wrap x = [x] OLDER: qsort version -- qsort is stable and does not concatenate. 
qsort :: (a -> a -> Ordering) -> [a] -> [a] -> [a] qsort _ [] r = r qsort _ [x] r = x:r qsort cmp (x:xs) r = qpart cmp x xs [] [] r -- qpart partitions and sorts the sublists qpart :: (a -> a -> Ordering) -> a -> [a] -> [a] -> [a] -> [a] -> [a] qpart cmp x [] rlt rge r = -- rlt and rge are in reverse order and must be sorted with an -- anti-stable sorting rqsort cmp rlt (x:rqsort cmp rge r) qpart cmp x (y:ys) rlt rge r = case cmp x y of GT -> qpart cmp x ys (y:rlt) rge r _ -> qpart cmp x ys rlt (y:rge) r -- rqsort is as qsort but anti-stable, i.e. reverses equal elements rqsort :: (a -> a -> Ordering) -> [a] -> [a] -> [a] rqsort _ [] r = r rqsort _ [x] r = x:r rqsort cmp (x:xs) r = rqpart cmp x xs [] [] r rqpart :: (a -> a -> Ordering) -> a -> [a] -> [a] -> [a] -> [a] -> [a] rqpart cmp x [] rle rgt r = qsort cmp rle (x:qsort cmp rgt r) rqpart cmp x (y:ys) rle rgt r = case cmp y x of GT -> rqpart cmp x ys rle (y:rgt) r _ -> rqpart cmp x ys (y:rle) rgt r -} #endif /* USE_REPORT_PRELUDE */ -- | Sort a list by comparing the results of a key function applied to each -- element. @sortOn f@ is equivalent to @sortBy (comparing f)@, but has the -- performance advantage of only evaluating @f@ once for each element in the -- input list. This is called the decorate-sort-undecorate paradigm, or -- Schwartzian transform. -- -- Elements are arranged from lowest to highest, keeping duplicates in -- the order they appeared in the input. -- -- >>> sortOn fst [(2, "world"), (4, "!"), (1, "Hello")] -- [(1,"Hello"),(2,"world"),(4,"!")] -- -- @since 4.8.0.0 sortOn :: Ord b => (a -> b) -> [a] -> [a] sortOn f = map snd . sortBy (comparing fst) . map (\x -> let y = f x in y `seq` (y, x)) -- | Produce singleton list. -- -- >>> singleton True -- [True] -- -- @since 4.14.0.0 -- singleton :: a -> [a] singleton x = [x] -- | The 'unfoldr' function is a \`dual\' to 'foldr': while 'foldr' -- reduces a list to a summary value, 'unfoldr' builds a list from -- a seed value. The function takes the element and returns 'Nothing' -- if it is done producing the list or returns 'Just' @(a,b)@, in which -- case, @a@ is prepended to the list and @b@ is used as the next -- element in a recursive call. For example, -- -- > iterate f == unfoldr (\x -> Just (x, f x)) -- -- In some cases, 'unfoldr' can undo a 'foldr' operation: -- -- > unfoldr f' (foldr f z xs) == xs -- -- if the following holds: -- -- > f' (f x y) = Just (x,y) -- > f' z = Nothing -- -- A simple use of unfoldr: -- -- >>> unfoldr (\b -> if b == 0 then Nothing else Just (b, b-1)) 10 -- [10,9,8,7,6,5,4,3,2,1] -- -- Note [INLINE unfoldr] -- We treat unfoldr a little differently from some other forms for list fusion -- for two reasons: -- -- 1. We don't want to use a rule to rewrite a basic form to a fusible -- form because this would inline before constant floating. As Simon Peyton -- Jones and others have pointed out, this could reduce sharing in some cases -- where sharing is beneficial. Thus we simply INLINE it, which is, for -- example, how enumFromTo::Int becomes eftInt. Unfortunately, we don't seem -- to get enough of an inlining discount to get a version of eftInt based on -- unfoldr to inline as readily as the usual one. We know that all the Maybe -- nonsense will go away, but the compiler does not. -- -- 2. The benefit of inlining unfoldr is likely to be huge in many common cases, -- even apart from list fusion. In particular, inlining unfoldr often -- allows GHC to erase all the Maybes.
This appears to be critical if unfoldr -- is to be used in high-performance code. A small increase in code size -- in the relatively rare cases when this does not happen looks like a very -- small price to pay. -- -- Doing a back-and-forth dance doesn't seem to accomplish anything if the -- final form has to be inlined in any case. unfoldr :: (b -> Maybe (a, b)) -> b -> [a] {-# INLINE unfoldr #-} -- See Note [INLINE unfoldr] unfoldr f b0 = build (\c n -> let go b = case f b of Just (a, new_b) -> a `c` go new_b Nothing -> n in go b0) -- ----------------------------------------------------------------------------- -- Functions on strings -- | 'lines' breaks a string up into a list of strings at newline -- characters. The resulting strings do not contain newlines. -- -- Note that after splitting the string at newline characters, the -- last part of the string is considered a line even if it doesn't end -- with a newline. For example, -- -- >>> lines "" -- [] -- -- >>> lines "\n" -- [""] -- -- >>> lines "one" -- ["one"] -- -- >>> lines "one\n" -- ["one"] -- -- >>> lines "one\n\n" -- ["one",""] -- -- >>> lines "one\ntwo" -- ["one","two"] -- -- >>> lines "one\ntwo\n" -- ["one","two"] -- -- Thus @'lines' s@ contains at least as many elements as newlines in @s@. lines :: String -> [String] lines "" = [] -- Somehow GHC doesn't detect the selector thunks in the below code, -- so s' keeps a reference to the first line via the pair and we have -- a space leak (cf. #4334). -- So we need to make GHC see the selector thunks with a trick. lines s = cons (case break (== '\n') s of (l, s') -> (l, case s' of [] -> [] _:s'' -> lines s'')) where cons ~(h, t) = h : t -- | 'unlines' is an inverse operation to 'lines'. -- It joins lines, after appending a terminating newline to each. -- -- >>> unlines ["Hello", "World", "!"] -- "Hello\nWorld\n!\n" unlines :: [String] -> String #if defined(USE_REPORT_PRELUDE) unlines = concatMap (++ "\n") #else -- HBC version (stolen) -- here's a more efficient version unlines [] = [] unlines (l:ls) = l ++ '\n' : unlines ls #endif -- | 'words' breaks a string up into a list of words, which were delimited -- by white space. -- -- >>> words "Lorem ipsum\ndolor" -- ["Lorem","ipsum","dolor"] words :: String -> [String] {-# NOINLINE [1] words #-} words s = case dropWhile {-partain:Char.-}isSpace s of "" -> [] s' -> w : words s'' where (w, s'') = break {-partain:Char.-}isSpace s' {-# RULES "words" [~1] forall s . words s = build (\c n -> wordsFB c n s) "wordsList" [1] wordsFB (:) [] = words #-} wordsFB :: ([Char] -> b -> b) -> b -> String -> b {-# INLINE [0] wordsFB #-} -- See Note [Inline FB functions] in GHC.List wordsFB c n = go where go s = case dropWhile isSpace s of "" -> n s' -> w `c` go s'' where (w, s'') = break isSpace s' -- | 'unwords' is an inverse operation to 'words'. -- It joins words with separating spaces. -- -- >>> unwords ["Lorem", "ipsum", "dolor"] -- "Lorem ipsum dolor" unwords :: [String] -> String #if defined(USE_REPORT_PRELUDE) unwords [] = "" unwords ws = foldr1 (\w s -> w ++ ' ':s) ws #else -- Here's a lazier version that can get the last element of a -- _|_-terminated list. {-# NOINLINE [1] unwords #-} unwords [] = "" unwords (w:ws) = w ++ go ws where go [] = "" go (v:vs) = ' ' : (v ++ go vs) -- In general, the foldr-based version is probably slightly worse -- than the HBC version, because it adds an extra space and then takes -- it back off again. But when it fuses, it reduces allocation. 
How much -- depends entirely on the average word length--it's most effective when -- the words are on the short side. {-# RULES "unwords" [~1] forall ws . unwords ws = tailUnwords (foldr unwordsFB "" ws) "unwordsList" [1] forall ws . tailUnwords (foldr unwordsFB "" ws) = unwords ws #-} {-# INLINE [0] tailUnwords #-} tailUnwords :: String -> String tailUnwords [] = [] tailUnwords (_:xs) = xs {-# INLINE [0] unwordsFB #-} unwordsFB :: String -> String -> String unwordsFB w r = ' ' : w ++ r #endif {- A "SnocBuilder" is a version of Chris Okasaki's banker's queue that supports toListSB instead of uncons. In single-threaded use, its performance characteristics are similar to John Hughes's functional difference lists, but likely somewhat worse. In heavily persistent settings, however, it does much better, because it takes advantage of sharing. The banker's queue guarantees (amortized) O(1) snoc and O(1) uncons, meaning that we can think of toListSB as an O(1) conversion to a list-like structure a constant factor slower than normal lists--we pay the O(n) cost incrementally as we consume the list. Using functional difference lists, on the other hand, we would have to pay the whole cost up front for each output list. -} {- We store a front list, a rear list, and the length of the queue. Because we only snoc onto the queue and never uncons, we know it's time to rotate when the length of the queue plus 1 is a power of 2. Note that we rely on the value of the length field only for performance. In the unlikely event of overflow, the performance will suffer but the semantics will remain correct. -} data SnocBuilder a = SnocBuilder {-# UNPACK #-} !Word [a] [a] {- Smart constructor that rotates the builder when lp is one minus a power of 2. Does not rotate very small builders because doing so is not worth the trouble. The lp < 255 test goes first because the power-of-2 test gives awful branch prediction for very small n (there are 5 powers of 2 between 1 and 16). Putting the well-predicted lp < 255 test first avoids branching on the power-of-2 test until powers of 2 have become sufficiently rare to be predicted well. -} {-# INLINE sb #-} sb :: Word -> [a] -> [a] -> SnocBuilder a sb lp f r | lp < 255 || (lp .&. (lp + 1)) /= 0 = SnocBuilder lp f r | otherwise = SnocBuilder lp (f ++ reverse r) [] -- The empty builder emptySB :: SnocBuilder a emptySB = SnocBuilder 0 [] [] -- Add an element to the end of a queue. snocSB :: SnocBuilder a -> a -> SnocBuilder a snocSB (SnocBuilder lp f r) x = sb (lp + 1) f (x:r) -- Convert a builder to a list toListSB :: SnocBuilder a -> [a] toListSB (SnocBuilder _ f r) = f ++ reverse r
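-- A minimal illustration of the builder above, assuming only the definitions in this module (SnocBuilder is internal and not exported): -- -- >>> toListSB (snocSB (snocSB (snocSB emptySB 'a') 'b') 'c') -- "abc" -- -- Hence 'inits', defined as @map toListSB . scanl' snocSB emptySB@, produces exactly the initial segments while paying the cost of each reversal incrementally as the prefixes are consumed.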
|
Low
|
[
0.533477321814254,
30.875,
27
] |
################################################################################ # This test validates the functioning of the version handshake algorithm on # group replication. # The test makes the group go through several changes involving different member # versions, validating the expected outcome in each case. # The test outline is: # *) The test requires four servers: S1, S2, S3, S4 # # Server Step Version (to base) Outcome #------------------------------------------------------------------------------- # (S1) join member with a higher patch version* (patch version + 1) OK # (S2) join member with a higher minor version (minor version + 1) OK # (S3) join member with the base version (base version) Failure # (S3) join member with a higher major version (major version + 1) OK # (S4) join member with a higher minor version (minor version + 1) OK # (S1) member leaves # (S1) join member with the base version (base version) Failure # # *) The group start happens with Server 1 joining # # The base version is the version of the plugin associated with this test: # Version = MAJOR.MINOR.PATCH # ################################################################################ --source include/not_valgrind.inc --source include/big_test.inc --source include/have_debug.inc --let $group_replication_group_name= 8a1da670-05fa-11e5-b939-0800200c9a66 --source include/have_group_replication_plugin.inc --let $rpl_skip_group_replication_start= 1 --let $rpl_server_count= 4 --source include/group_replication.inc --echo # --echo # Check that the version of member 1 is fully visible in the plugin table --echo # --let $assert_text= The plugin major and minor versions are visible in the version column --let $assert_cond= [SELECT COUNT(*) FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME= "group_replication" and PLUGIN_VERSION= "1.1" ] = 1; --source include/assert.inc --let $assert_text= The plugin total version can be seen in the description column --let $assert_cond= [SELECT COUNT(*) FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME= "group_replication" and PLUGIN_DESCRIPTION= "Group Replication (1.1.0)" ] = 1; --source include/assert.inc --echo # --echo # Set up a new member with a higher patch version --echo # Version = Base version + 1 patch version --echo # --connection server1 --echo server1 SET @debug_save_s1= @@GLOBAL.DEBUG; SET @@GLOBAL.DEBUG= '+d,group_replication_compatibility_higher_patch_version'; --source include/start_and_bootstrap_group_replication.inc # Add some data for recovery CREATE TABLE t1 (c1 INT NOT NULL PRIMARY KEY) ENGINE=InnoDB; INSERT INTO t1 VALUES (1); --echo # --echo # Try to add a new member with a higher minor version --echo # Version = Base version + 1 minor version --echo # The member will join the group --echo # --connection server2 --echo server2 SET @debug_save_s2= @@GLOBAL.DEBUG; # Cause the member to broadcast and compare itself using a higher version SET @@GLOBAL.DEBUG= '+d,group_replication_compatibility_higher_minor_version'; SET session sql_log_bin=0; call mtr.add_suppression("Member version is read compatible with the group."); SET session sql_log_bin=1; --source include/start_group_replication.inc # Check the data is there --let $assert_text= On the recovered member, the table should contain 1 element --let $assert_cond= [SELECT COUNT(*) FROM t1] = 1; --source include/assert.inc --echo # --echo # Try to add a new member with the base version. --echo # Version = Base version --echo # It will fail since the group's lowest version is (patch + 1) --echo # Try to add server 3 again with a higher major version. --echo # Version = Base version + 1 major version --echo # It will succeed and join the group in read-only mode. --echo # --connection server3 --echo server3 SET @debug_save_s3= @@GLOBAL.DEBUG; SET session sql_log_bin=0; call mtr.add_suppression("Member version is incompatible with the group"); call mtr.add_suppression("Member version is read compatible with the group."); SET session sql_log_bin=1; --eval SET GLOBAL group_replication_group_name= "$group_replication_group_name" --error ER_GROUP_REPLICATION_CONFIGURATION START GROUP_REPLICATION; # Cause the member to broadcast and compare itself using a higher version SET @@GLOBAL.DEBUG= '+d,group_replication_compatibility_higher_major_version'; --source include/start_group_replication.inc # Check the data is there --let $assert_text= On the recovered member, the table should contain 1 element --let $assert_cond= [SELECT COUNT(*) FROM t1] = 1; --source include/assert.inc --echo # --echo # Try to add a new member with a major version equal to the base version, --echo # but a higher minor version. --echo # Version = Base version + 1 minor version --echo # --connection server4 --echo server4 SET @debug_save_s4= @@GLOBAL.DEBUG; SET session sql_log_bin=0; call mtr.add_suppression("Member version is read compatible with the group."); SET session sql_log_bin=1; # Cause the member to broadcast and compare itself using a higher version SET @@GLOBAL.DEBUG= '+d,group_replication_compatibility_higher_minor_version'; --eval SET GLOBAL group_replication_group_name= "$group_replication_group_name" # Before 8.0.16 this join used to fail because a member with a higher major version (S3) was present # Post 8.0.16 it succeeds, since the comparison is done only against the lowest version in the group (S1 in this scenario) # S4 is compatible since its version is greater than the patch version present in the group (S1) --source include/start_group_replication.inc # Check the data is there --let $assert_text= On the recovered member, the table should contain 1 element --let $assert_cond= [SELECT COUNT(*) FROM t1] = 1; --source include/assert.inc --echo # --echo # Stop GR on server 1 and start server 1 with the base version. --echo # Version = Base version --echo # It will fail since the group's lowest version is (minor + 1) --echo # --connection server1 --echo server1 # DROP the table now, else we would have to start all servers again for cleanup DROP TABLE t1; --source include/rpl_sync.inc --source include/stop_group_replication.inc SET session sql_log_bin=0; call mtr.add_suppression("Member version is incompatible with the group"); SET session sql_log_bin=1; SET @@GLOBAL.DEBUG= @debug_save_s1; --eval SET GLOBAL group_replication_group_name= "$group_replication_group_name" --error ER_GROUP_REPLICATION_CONFIGURATION START GROUP_REPLICATION; --echo # --echo # Clean up --echo # --connection server2 --echo server2 SET @@GLOBAL.DEBUG= @debug_save_s2; --source include/stop_group_replication.inc --connection server3 --echo server3 SET @@GLOBAL.DEBUG= @debug_save_s3; --source include/stop_group_replication.inc --connection server4 --echo server4 SET @@GLOBAL.DEBUG= @debug_save_s4; --source include/stop_group_replication.inc --source include/group_replication_end.inc
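# Note: each @debug_save_sX user variable above captured @@GLOBAL.DEBUG before a group_replication_compatibility_* debug point was enabled; the cleanup section restores those saved values so the debug points do not leak into later tests.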
|
Mid
|
[
0.638095238095238,
33.5,
19
] |
(CNN) A woman who ran a "birth tourism" operation was released from jail Monday after a federal judge gave her the lightest sentence possible. Now she faces deportation. Dongyuan Li's operation, You Win USA Vacation Services Corp., helped Chinese customers -- including doctors, lawyers and government officials -- travel to the United States to give birth so their children would receive US citizenship. The operation coached families on what to say in their visa interviews, created ways for them to bypass immigration controls and housed them in upscale apartments in California for up to three months. Li pleaded guilty to conspiracy to commit immigration fraud and visa fraud. She was arrested in January and has been in jail since then. Judge James Selna said very little except that Li would have received more time if the US attorney had proved she was connected to more cases of lying on visa applications. Tom O'Brien, Li's defense attorney, told reporters he was happy for his client. "My client admitted her role in visa fraud. She didn't want to go to trial, she accepted an offer from the government and took it, accepted responsibility," O'Brien said. "Today the government was asking for additional years in custody and we're pleased and honored that Judge Selna listened to our arguments and agreed with us that 10 months is enough time for the offense she committed." US Attorney Charles Pell said he was very disappointed in the sentence. Prosecutors argued for 33 months in prison. "This is not the sentence we were expecting. It's disappointing," he said. In two years, Li had received $3 million in international wire transfers from China, US Immigration and Customs Enforcement said. O'Brien said a lot of the money the government cited wasn't from birth tourism, but from proceeds from the other businesses the family had back in China. "She didn't make $3 million," he said. "There was certain money that was certainly made in this business. That money and anything that was purchased with that money has either been forfeited or is in the process of being forfeited to the government." Li will be released by the end of the day Monday and faces deportation. "We will now pursue deportation since we can show that the current visa she has was derived from her husband who committed fraud to gain his visa," said Daniel Showalter, a special agent with Homeland Security Investigations. Li is one of three people arrested earlier this year on charges of running Chinese "birth tourism" schemes and among a total of 19 people indicted who were tied to similar businesses, ICE said. Those charges stem from a 2015 raid of dozens of apartments that hosted mothers-to-be, the agency said. The indictments are the "first-ever" federal criminal charges the American government has brought against birth tourism businesses and customers, ICE said. 
'Strategies to Maximize the Chance of Entry' 
"Birth tourists" travel to foreign countries to give birth so that their children receive that country's citizenship. The American legal principle of jus soli means that babies born on US soil automatically gain citizenship. That's not the case in many other countries, including Switzerland and Japan, which do not grant citizenship unless one or more parents are also citizens.
Li and others advertised their businesses online, touting that the US had the "most attractive nationality," military, political, technological and cultural strength, 13 years of free education, less pollution, retirement benefits and high quality healthcare services, the federal indictment says. A cartoon used by You Win USA Vacation Services Corp. to advertise birth tourism. Citizenship would also give children "priority for jobs in US government, public companies, and large corporations" and would make it easier for the parents to eventually immigrate to the US, the indictment says. "America's way of life is not for sale," said Joseph Macias, Special Agent in Charge of Homeland Security Investigations in Los Angeles, in a statement earlier this year. "Anyone who would exploit our nation's generosity and our legal immigration system should be on notice -- they may end up being the ones to pay a very steep price." To make it inside, the company offered customers tips on how to navigate the loopholes of the customs and visa processes. That was all listed in a document titled "Strategies to Maximize the Chance of Entry," the indictment says. One of its recommendations was that, "for best results, Chinese birth tourists should list on their visa applications that they intended to stay at the '5-star' hotel of 'TRUMP INTERNATIONAL HOTEL WAIKIKI BEACH (Trump Hawaii Hotel),' in Honolulu, Hawaii," the indictment says. It also suggested customers book a flight to Hawaii first and a second one to California, as it would be easier to make it through the customs check that way, the indictment says. Li also suggested women come during the early stages of their pregnancy so they could hide that they were pregnant. They were instructed to lie on their visa applications, claiming they'd be staying in Hawaii, New York or Los Angeles for two weeks. Instead, customers spent up to three months in Irvine, California. 
She lived a lavish lifestyle 
Li claimed she served more than 500 customers and that her business was working with a 100-person team. She would use "agents and employees in China to recruit pregnant Chinese nationals" who wanted to give birth in the US, the indictment says. And when they made their way to the US, they were housed in one of about 20 upscale apartments that Li leased throughout Orange County. Meanwhile, Li was also living a lavish lifestyle in California, Showalter said. Li's home in Irvine, California. "She was living in an exclusive area of Irvine in a house with housekeepers, her numerous Mercedes (sedans)," he said. Government agents have seized two of Li's properties, six vehicles, more than $1 million from bank accounts and "10 gold bars, 10 gold coins and various gold jewelry," Showalter said. Her Irvine home was worth $2.1 million, ICE said. "We do need to send the message that defrauding our immigration system for profit will not be tolerated and if you do so, you do so at your own peril," Showalter told CNN. "We will prosecute you and seize your proceeds, bank accounts, vehicles, houses."
| Low | [0.42857142857142805, 25.125, 33.5] |
Q: Difficulties with Cypress scrollTo method

I am having a few issues testing a virtual scroll component with Cypress. I have a test that checks the li elements present in the DOM after scrolling to the bottom of a container. When written like this, the test passes:

    cy.get('.virtual-scroll').scrollTo('bottom')
    cy.wait(0)
    cy.get('li').last().children('h4').contains('1999')

When written like this, it doesn't:

    cy.get('.virtual-scroll').scrollTo('bottom')
    cy.get('li').last().children('h4').contains('1999')

This also fails:

    cy.get('.virtual-scroll').scrollTo('bottom').get('li').last().children('h4').contains('1999')

In the second and third examples, get('li') returns the li elements present before the scroll has completed, so the test fails. I can fix this by adding .wait, but I don't fully understand the behaviour and wonder if this is a bug. Any ideas?

A: Make an assertion that will always pass once the DOM is rendered, such as using .get() for an element that gets added to the DOM. For example, if you had a <ul class="myloadedlist">:

    cy.get('.virtual-scroll').scrollTo('bottom')
    cy.get('ul.myloadedlist')
    cy.get('li').last().children('h4').contains('1999')

That way, Cypress will continue with the test as soon as that element becomes visible.

Why? I'm assuming the elements get added to the DOM in some sort of scroll eventListener. In that case this is correct behavior. Essentially what you've tested is the race condition of a user scrolling very quickly to the bottom of the page and seeing that the DOM has not yet finished rendering -- a valid scenario. Since you targeted the last() li element, Cypress finds the last element of the page before the DOM gets updated and expects it to contain 1999, which it does not, even after Cypress retries for 4 seconds. This is actually a great feature of Cypress, because you can test the state of the DOM at times that the user might only see for a split second.
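If no such sentinel element exists, a retrying assertion on the list itself works too. This is a minimal sketch, assuming the fully loaded list ends with an h4 containing '1999' (as in the question); .should() re-runs the preceding query until the assertion passes or the command timeout elapses, so no fixed wait is needed:

    // Sketch: rely on Cypress's built-in retries instead of cy.wait().
    cy.get('.virtual-scroll').scrollTo('bottom')
    // cy.get(...).should(...) keeps re-querying the DOM until the
    // assertion passes (or times out), absorbing the render delay.
    cy.get('li h4').should('contain', '1999')

The design point is the same as in the answer above: anchor the test on a condition that only becomes true after rendering finishes, rather than on elapsed time.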
| High | [0.665710186513629, 29, 14.5625] |
Prostate cancer metastasis will claim the lives of over 30,000 Americans this year. Boring et al., Cancer Statistics 1991, 19. The mode of dissemination, however, remains very poorly understood. An almost dogmatic view of metastasis holds that prostate cancer cells first spread through the prostatic capsule, then into the lymphatics, and eventually travel hematogenously to bone. Byar et al., Cancer 1972, 30, 5; Winter, C. C., Surg. Gynecol. Obstet. 1957, 105, 136; Hilaris et al., Am. J. Roentgenol. 1974, 121, 832; McLaughlin et al., J. Urol. 1976, 115, 89; Jacobs, S. C., Urology 1983, 21, 337; Batson, O. V., Ann. Surg. 1940, 112, 138; Saitoh et al., Cancer 1984, 54, 3078-3084; Whitmore, W. F., Jr., Cancer 1973, 32, 1104. However, this model is based on histopathologic studies that have significant limitations, and in actuality the sequence of metastatic events remains unknown.

Solid tumor animal experiments suggest that only 0.01% of circulating cancer cells eventually create a single metastatic deposit. Fidler et al., Science 1982, 217, 998-1001; Liotta et al., Cancer Res. 1974, 34, 997; Schirrmacher, B., Adv. Cancer Res. 1985, 43, 1-32. Ostensibly, a single bone metastasis from human prostatic adenocarcinoma (PAC) could be generated by 10,000 circulating cancer cells (2 cells/1 ml blood). In the past, detection of such a low concentration of cells has been difficult or impossible. Recently, however, Wu et al. used keratin-19 (K-19) mRNA PCR to detect breast cancer micrometastasis in patient lymph nodes and bone marrow. Wu et al., Lab. Inv. 1990, 62, 109A. Miyomura et al. also reported the detection of minimal residual acute lymphoblastic leukemia by PCR in patients harboring the Philadelphia chromosome. Miyomura et al., Blood 1992, 79, 1366-1370. A method of detecting the micrometastasis of prostate cancer would be greatly desirable.
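As a rough arithmetic check of the figures above (a sketch assuming an average adult blood volume of about 5 L, a value not stated in the source):

$$10{,}000 \text{ cells} \times 0.01\% = 1 \text{ metastatic deposit}, \qquad \frac{10{,}000 \text{ cells}}{5{,}000 \text{ ml blood}} = 2 \text{ cells/ml}$$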
| High | [0.6792452830188671, 29.25, 13.8125] |
using System;
using System.Collections.Generic;
using OpenRasta.Client;
using OpenWrap.Configuration;
using OpenWrap.Configuration.Core;
using OpenWrap.Services;
namespace OpenWrap.Commands.Core
{
[Command(Verb = "set", Noun = "configuration")]
public class SetConfigurationCommand : AbstractCommand
{
readonly IConfigurationManager _configurationManager;
public SetConfigurationCommand() : this(ServiceLocator.GetService<IConfigurationManager>())
{
}
public SetConfigurationCommand(IConfigurationManager configurationManager)
{
_configurationManager = configurationManager;
}
[CommandInput]
public string Proxy { get; set; }
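        // Loads the persisted core configuration, applies any proxy value
        // supplied on the command line, and reports which settings changed.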
protected override IEnumerable<ICommandOutput> ExecuteCore()
{
var core = _configurationManager.Load<CoreConfiguration>() ?? new CoreConfiguration();
var changes = new List<string>();
if (Proxy != null)
{
var proxyUri = Proxy.ToUri();
if (proxyUri == null || !proxyUri.IsAbsoluteUri)
{
yield return new InvalidProxy(Proxy);
yield break;
}
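                // Credentials embedded in the proxy URI are moved into separate
                // configuration entries and stripped from the stored href.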
if (!string.IsNullOrEmpty(proxyUri.UserInfo))
{
var builder = new UriBuilder(proxyUri);
changes.AddRange(SetUsernamePassword(core, builder));
builder.UserName = string.Empty;
builder.Password = string.Empty;
proxyUri = builder.Uri;
}
core.ProxyHref = proxyUri.ToString();
changes.Add("proxy-href");
}
if (changes.Count == 0)
{
yield return new Error("No configuration has been provided.");
yield break;
}
_configurationManager.Save(core);
yield return new ConfigurationUpdated(changes);
}
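        // Clears any previously stored proxy credentials, then stores the
        // username/password from the URI, yielding the names of changed keys.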
static IEnumerable<string> SetUsernamePassword(CoreConfiguration core, UriBuilder builder)
{
core.ProxyUsername = null;
core.ProxyPassword = null;
if (!string.IsNullOrEmpty(builder.UserName))
{
core.ProxyUsername = builder.UserName;
yield return "proxy-username";
}
if (!string.IsNullOrEmpty(builder.Password))
{
core.ProxyPassword = Uri.UnescapeDataString(builder.Password);
yield return "proxy-password";
}
}
}
}
| Mid | [0.6000000000000001, 31.875, 21.25] |
Henriette Kjær

Henriette Kjær (born 3 May 1966) is a retired Danish politician and former member of the Danish parliament (Folketinget) for the Conservative People's Party, elected in Aarhus' fourth constituency.

Henriette Kjær was Social Minister and Minister for Gender Equality from 27 November 2001 to 2 August 2004 and Minister for Family and Consumer Affairs from 2 August 2004 to 18 February 2005, both posts in the Cabinet of Anders Fogh Rasmussen I. On 17 January 2005, while Minister for Family and Consumer Affairs, she announced that there would be no initiatives for families with children in the next two months. However, the next day, when the 2005 Danish parliamentary election was announced, the coalition leaders Anders Fogh Rasmussen and Bendt Bendtsen announced lower institution child care costs and a higher børnecheck (direct financial aid for all families with children).

In February 2005, just before the 2005 Danish parliamentary election, her domestic partner, Erik Skov Pedersen, became the subject of media attention due to disorder in the couple's private finances, which had forced them to default on their payments. On 16 February 2005, a week after the election had taken place and two days before Prime Minister Anders Fogh Rasmussen was to announce his new cabinet, Henriette Kjær resigned as minister.

Henriette Kjær was later appointed political spokesperson and group leader of the Conservative party, but resigned from those posts on 25 January 2011 due to renewed media attention concerning the state of her private finances and her ability to fulfil her political tasks. Furthermore, she announced her intention to leave politics altogether after the parliamentary election held on 15 September 2011. Karina Boldsen succeeded her as parliamentary candidate on 14 April 2011 but was not elected, receiving 2,432 direct votes against Henriette Kjær's 10,195 on 13 November 2007.

References
News story about the reasons for Henriette Kjær's losing her minister post - From TV2.
Danmarks Statistik. Folketingsvalget den 13. November 2007 Danmark Færøerne Grønland, Indenrigs- og Socialministeriet.

Categories: 1966 births; Government ministers of Denmark; Living people; Members of the Folketing; Ministers for children, young people and families; Conservative People's Party (Denmark) politicians; People from Aarhus; 21st-century Danish politicians; 21st-century Danish women politicians; Women members of the Folketing; Women government ministers of Denmark
| Mid | [0.620218579234972, 28.375, 17.375] |
# This file is automatically created by Recurly's OpenAPI generation process
# and thus any edits you make by hand will be lost. If you wish to make a
# change to this file, please create a Github issue explaining the changes you
# need and we will usher them to the appropriate places.
module Recurly
  module Requests
    class AccountAcquisitionCost < Request

      # @!attribute amount
      #   @return [Float] The amount of the corresponding currency used to acquire the account.
      define_attribute :amount, Float

      # @!attribute currency
      #   @return [String] 3-letter ISO 4217 currency code.
      define_attribute :currency, String
    end
  end
end
| Low | [0.41990291262135904, 21.625, 29.875] |
The fate of human malignant melanoma cells transplanted into zebrafish embryos: assessment of migration and cell division in the absence of tumor formation. Certain aggressive melanoma cell lines exhibit a dedifferentiated phenotype, expressing genes that are characteristic of various cell types including endothelial, neural, and stem cells. Moreover, we have shown that aggressive melanoma cells can participate in neovascularization in vivo and vasculogenic mimicry in vitro, demonstrating that these cells respond to microenvironmental cues and manifest developmental plasticity. To explore this plasticity further, we transplanted human metastatic melanoma cells into zebrafish blastula-stage embryos and monitored their behavior post-transplantation. The data show that human metastatic melanoma cells placed in the zebrafish embryo survive, exhibit motility, and divide. The melanoma cells do not form tumors nor integrate into host organs, but instead become scattered throughout the embryo in interstitial spaces, reflecting the dedifferentiated state of the cancer cells. In contrast to the fate of melanoma cells, human melanocytes transplanted into zebrafish embryos most frequently become distributed to their normal microenvironment of the skin, revealing that the zebrafish embryo contains possible homing cues that can be interpreted by normal human cells. Finally, we show that within the zebrafish embryo, metastatic melanoma cells retain their dedifferentiated phenotype. These results demonstrate the utility of the zebrafish embryonic model for the study of tumor cell plasticity and suggest that this experimental paradigm can be a powerful one in which to investigate tumor-microenvironment interactions.
| High | [0.686419753086419, 34.75, 15.875] |