proba (float64, range 0.5–1) | text (string, length 16–174k)
---|---
0.999913 |
The first time I heard those words from an airline representative, telling me I had to pay simply to put my two bags on the plane, I was stunned. You mean on top of all the fees you charge us with our airline ticket, I now have to pay to take my bags with me too?
Nowadays, paying for your luggage is commonplace unless you are a frequent flyer who has joined an airline’s “club,” or you find airlines that don’t charge fees (Southwest, JetBlue, Frontier, etc.). But let’s face it, not all of us have the luxury of flying these airlines to the destinations we are going to.
I never used to think I could pack for 3 days in a carry-on, much less an extended vacation. But I do not like to pay for my bags to go on vacation too (yes, I’m a cheapskate!). So I have been working on how to pack my carry-on better, so that I can pocket that $25-30 and always have my items with me. Here are a few tips that are helping me learn how to pack a carry-on.
Make a list: The act of writing down each outfit you need for your trip will allow you to see what you NEED to bring versus what you WANT to bring. Think about taking clothing items that you can mix & match, such as one pair of jeans or a t-shirt that could work with more than one outfit.
Lay it out: Lay everything you want to pack out on your bed. This is when you can pare down to only pack what you NEED to pack. Do you REALLY need 4 pairs of shoes? I’m guessing not.
Roll it up: Lay your shirts one on top of the other and roll them up tightly. Soft, wrinkle-resistant materials like cotton & knits can all be rolled up tightly, since loose rolling will result in wrinkles.
Stuff your shoes: Stuff as many socks & undergarments into your shoes as you can and lay them on the bottom of the suitcase, as the bottom layer.
Wear the Heavy: Depending on the trip, wear your heavier items while traveling. I always wear my coat or sweater on the plane, as I get cold easily. Also try to wear the heaviest shoes, so there is more room in the suitcase for other items.
Utilize the Pockets: Many suitcases have great pockets to stash other items you may need (bathing suits, books, etc.). Use them to their fullest potential!
LIGHT: This is the lightest suitcase I have ever carried in my life. Seriously – it was packed to the brim and even my 5-year-old could pull it along in the airport with no problem. It’s also easy to lift into an overhead compartment on the plane.
MULTIPLE COMPARTMENTS: I love the 2 sides of the suitcase to separate my shoes & toiletries from my clothes. Plus there are many different zipper compartments to fill.
WHEELS: The wheels swivel in every possible direction, making it super easy to pull or push.
HARDCASE: I didn’t think I would like a hard case suitcase, but I did. I feel like everything is held together better and I am expecting the suitcase to have a longer life than most others.
EXPANDS: The case expands with a zip, which came in handy when I needed more room to bring home souvenirs.
SECURE: There is a built-in TSA-compliant lock to secure your suitcase if you are checking it.
Other than a sticky zipper on one occasion, I have found nothing to complain about with this luggage. Seriously, it surprised me with how much I love this piece. In the past, I used whatever suitcase my husband had bought, but after seeing how easy the Ricardo is to lift & maneuver by comparison, I can’t see myself using much else. Plus, I like the 10 year warranty it comes with, in case something were to happen to it.
Friends, do you travel with a carry-on suitcase? What are some of your best packing tips?
I’m very impressed! The suitcase likely pays for itself after saving the checked baggage fee for just a couple of flights. That’s so smart of you to pack your own blowdryer too. It takes me about an hour to straighten my hair with a hotel blowdryer.
|
0.999997 |
Do you have a trading system that works well during declining markets? You can add the moving average short system to your toolbox.
A successful trader must have a variety of tools in his or her trading toolbox to match the various conditions he or she will encounter. The market has periods when it is trending up, trending down, and times when it is just basing. It is very difficult to find a system that works well in all three types of market conditions. A more fruitful approach is to use a different tool for each type of market. Using a system specifically designed for each type of market environment generally produces better results than trying to use one generic tool for all market conditions.
The overall market direction is a powerful force that pushes most stocks in one direction. Just as it is difficult to swim against the tide, it is hard to make money in a declining market if your only tool was designed to find good long setups. When the market is declining, it is usually best to focus on shorts. One of the systems I use during market declines involves shorting pullbacks to a declining moving average. This technique is based on the observation that trends continue, and a pullback or retracement in the trend represents a low-risk (and well-defined) entry point.
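As a rough illustration, here is a minimal Ruby sketch of the signal logic; the 20-day window, the 1% pullback tolerance, and the toy price data are illustrative assumptions, not the author's exact parameters.

```ruby
# Sketch of shorting pullbacks to a declining moving average.
# Assumptions: `closes` holds daily closing prices; the 20-day window
# and 1% pullback tolerance are illustrative, not prescriptive.

WINDOW    = 20
TOLERANCE = 0.01 # price within 1% of the MA counts as a pullback

def moving_average(closes, i, window)
  return nil if i + 1 < window
  closes[(i - window + 1)..i].sum / window.to_f
end

def short_entry_signals(closes)
  signals = []
  closes.each_index do |i|
    ma      = moving_average(closes, i, WINDOW)
    prev_ma = moving_average(closes, i - 1, WINDOW)
    next if ma.nil? || prev_ma.nil?

    declining = ma < prev_ma                            # trend filter: MA sloping down
    pullback  = (closes[i] - ma).abs <= ma * TOLERANCE  # price has retraced to the MA
    signals << i if declining && pullback               # candidate short entry
  end
  signals
end

closes = (1..40).map { |d| 100.0 - d * 0.4 + (d % 5) } # toy downtrend with wiggles
puts short_entry_signals(closes).inspect
```

A stop just above the pullback's swing high is the natural companion to this entry, which is what keeps the risk well defined.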
|
0.951686 |
Lancelot was the son of King Ban of Benwick and Queen Elaine. He was the First Knight of the Round Table, and he never failed in gentleness, courtesy, or courage. Lancelot was also a knight who was very willing to serve others.
It has been said that Lancelot was the greatest fighter and swordsman of all the knights of the Round Table. Legend tells us that as a child, Lancelot was left by the shore of the lake, where he was found by Vivien, the Lady of the Lake. She fostered and raised him, and in time Lancelot became one of history's greatest knights.
Legend also says that Lancelot was the father of Galahad by Elaine. It was another Elaine, Elaine of Astolat, who died of a broken heart because Lancelot did not return her love and affection.
Many sources tell us of the love Lancelot and Queen Guinevere shared for each other. There may be some truth to this, since Lancelot was a favorite of the Queen's, and he rescued her from the stake on two different occasions. It was at one of these rescues that Lancelot mistakenly killed Sir Gareth, which led to the disbandment of the Round Table. After the Queen retired to an abbey as a nun, Lancelot lived the rest of his life as a hermit in penitence.
Did Lancelot originate in Celtic mythology, was he a continental invention, or did he really live as a famous knight and hero? We may never know... but Lancelot will always live in our imaginations as one of the greatest knights in history.
|
0.932374 |
The Arabian Desert is twice as large as the Gobi, and four times as large as the Great Victoria and the Kalahari. It occupies most of the Arabian Peninsula.
How many countries use Arabic as the official language?
Arabic is the official language in 14 countries in Africa (including Egypt, Morocco, Tunisia, Somalia, Chad) and 13 countries in Asia (including Saudi Arabia, Lebanon, Bahrain, Kuwait, Yemen).
Samsung C&T Corporation (Construction and Trade), established in 1938, is the origin of Samsung Group. Samsung C&T was the primary contractor for three of the world's tallest skyscrapers: the Burj Khalifa, the Petronas Twin Towers, and Taipei 101.
The English name Egypt is derived from the Ancient Greek Aígyptos, but it has nothing in common with the local name of the country (except maybe with the Late Egyptian name of Memphis - Hikuptah). Ancient Egyptians called their land Kemet, meaning "black soil" (as opposed to the desert). Today, Misr is the official name of Egypt, while Masr is the local pronunciation in Egyptian Arabic.
Noted for her charismatic authority, Benazir Bhutto was nicknamed "Iron Lady" because of her hard line against the trade unions and tough rhetorical opposition to her rivals. In 1988, she became the first woman elected to lead a Muslim state.
Which river flows through Baghdad?
The Tigris flows through Baghdad. Today, the Tigris joins the Euphrates to form one big river, the Shatt al-Arab. In ancient times, both rivers had separate outlets to the sea.
Which country had a single-coloured flag?
The flag of the Libyan Arab Jamahiriya was adopted on 11 November 1977 and consisted of a green field. It was the only national flag in the world with just one color and no design, insignia, or other details. It was changed in 2011.
The name of the fez comes from the city of Fes, Morocco, where it became fashionable among Andalusian Arabs in the 17th century. The fez became widely popular in the Ottoman Empire after 1829, when Sultan Mahmud II ordered his civil officials to wear the plain fez and banned the wearing of turbans.
|
0.951131 |
WHAT IS INCOME TAX? Income tax refers to the tax levied by the government to finance its various operations. It is computed on an annual basis, on the income earned in the previous year, and is assessed in the following year, known as the Assessment Year.
|
0.996781 |
How much do you know about prescription drug abuse in Wisconsin?
In the past, many support and advocacy organizations were developed with a singular focus (i.e., substance abuse, mental health, prevention, treatment or recovery). The Wisconsin Behavioral Health Association feels it is important to represent both the substance use and mental health service sectors along the continuum of service delivery. The phrase “behavioral health” is used to describe service systems that encompass prevention and promotion of emotional health; prevention of mental and substance use disorders, substance use, and related problems; treatments and services for mental and substance use disorders; and recovery support.
|
0.999996 |
Many people understand that socialism is an alternative to capitalism, but few know what socialism really means. The nature of an economic system depends upon which social class controls the means of production. Power over the means of production enables the controlling class to govern the entire economic system.
Three basic economic systems (each with many variations) are possible in a modern technologically advanced society: capitalism, state collectivism and socialism. Under capitalism, the owners of productive property (i.e. capitalists) control the means of production. Capitalism is the economic system that currently exists in most parts of the world. Under state collectivism, the government bureaucracy controls the means of production. State collectivism was the economic system of Communist countries like the Soviet Union and is often mistaken for socialism. Under socialism, working people collectively control the means of production. Although some societies have adopted a few socialist institutions (e.g. economic planning, free health care, cooperative banks) there has never been a full-fledged socialist society in the modern world.
Socialism has five principal goals:
1. Sustainability: The economic system must be organized to sustain human life on our planet for the indefinite future.
2. Equality: The economic system must move toward complete economic equality. All forms of work are equally valued. Complete equality is the long-term goal, but limited inequality based upon differential contributions to the economy exists initially.
3. Comprehensive Democracy: All major economic and political decisions are made through genuine democratic processes.
4. Personal Security: All fundamental personal needs are guaranteed by society. This guarantee includes food, clothing, shelter, health care, education, child care, elder care, etc. The levels at which personal needs are guaranteed increase as the socialist economic system matures.
5. Solidarity: A spirit of mutual support, cooperation and friendship is created among all people. Socialist solidarity contrasts with the egoism and competitiveness fostered by capitalism.
What social institutions can achieve these five socialist goals? Socialists have different views on this subject, particularly on the issue of whether socialism should use markets. Here are some of the institutions proposed by socialists: (a) a democratic state that invites maximum participation and frequent circulation of political officials; (b) democratic and self-governing councils of workers and consumers; (c) jobs balanced for difficulty and desirability by workers councils (hazardous and unpleasant work being divided among all competent adults); (d) compensation according to effort as determined by fellow workers; (e) democratic and participatory economic planning in which workers councils have a major part; (f) use of computers and extensive feedback to reach a feasible and sustainable economic plan.
Building socialism in the context of a capitalist society involves a three-prong strategy: (i) consciousness raising — developing socialist consciousness within the capitalist public; (ii) institution building — creating socialist institutions based upon cooperation, equality and rational planning within capitalist society (e.g. workers cooperatives, strong labor unions, environmental regulation); (iii) political organizing — establishing an effective political party committed to socialism that contests for power within the capitalist political system.
|
0.962673 |
This article is about the company known as Allergan, Plc. For the company which was acquired by Actavis, Plc, see Allergan, Inc.
Founders: Allen Chao, Ph.D., and David Hsia, Ph.D.
Headquarters: Dublin, Ireland, and Parsippany-Troy Hills, New Jersey, United States.
Facilities: 40 manufacturing facilities, 27 global R&D centers, and marketing/sales facilities worldwide.
On February 18, 2015, the company formerly known as Actavis, Plc announced its intention to change its name to Allergan, Plc. The change was completed as of June 15, 2015. Actavis, Plc then became Actavis, which now forms the American Generics division of the company.
After the acquisition of Allergan, Inc by Actavis, Plc, the new company made its first acquisition on July 6, when it acquired the start-up Oculeve for $125 million. On July 7 the company announced it would acquire Merck & Co.'s late-stage CGRP migraine portfolio, as well as two experimental drugs (MK-1602 and MK-8031), for $250 million. In July, Allergan agreed to sell off its small molecule generic drug business, Actavis, to Teva Pharmaceutical Industries for $40.5 billion ($33.75 billion in cash and $6.75 billion worth of shares), a transaction to be completed in Q1 2016. A day later, the company announced it would acquire Naurex Inc for $560 million, with more tied to regulatory milestones. In September the company announced it would acquire the ophthalmic device start-up AqueSys for $300 million plus future sums tied to approval/sales milestones. In November the company acquired the aesthetic device company Northwood Medical Innovation. Two days after announcing the record-breaking deal with Pfizer, the company announced it would partner with Rugen Therapeutics to develop new therapies for autism spectrum disorder, rabies and obsessive compulsive disorder.
In late October 2015, The Wall Street Journal reported that merger talks between Allergan and Pfizer were in early phases, with Pfizer approaching Allergan due to an industry-wide drop in share prices. Any merger with Allergan would also give Pfizer the ability to re-domicile to Ireland, taking advantage of its lower tax rates. On 23 November 2015, the two companies announced their intention to merge for an approximate sum of $160 billion, making it the largest pharmaceutical deal ever and the third largest merger in history. As part of the deal, Pfizer CEO Ian Read would have remained CEO and Chairman of the combined company (to be called Pfizer Plc), with Allergan CEO Brent Saunders becoming President and Chief Operating Officer. Allergan shareholders would have received 11.3 shares of the combined company for each of their shares, with Pfizer shareholders receiving one. Pfizer discontinued the acquisition on 5 April 2016, after the Obama administration announced its plan to move ahead with a resolution banning this form of tax avoidance, known as a tax inversion. Pfizer agreed to pay Allergan a breakup fee of $150 million.
In April, the company announced it would join Heptares Therapeutics in a deal valued at up to $3.3 billion, collaborating on the development of subtype-selective muscarinic receptor agonists for Alzheimer's disease and other major neurological disorders. Later in the same month the company announced it would acquire Topokine Therapeutics for $85 million (plus undisclosed milestone payments), gaining the phase IIb/III compound XAF5, a potential first-in-class treatment for steatoblepharon, or bags under the eyes. In August 2016, Teva, after completing the $39 billion acquisition of Actavis Generics, announced another smaller deal with Allergan, agreeing to acquire its generic distribution business Anda for $500 million. In August the company acquired ForSight VISION5 for more than $95 million, expanding Allergan's offering in eye care. In September, the company announced it would acquire RetroSense Therapeutics for more than $60 million, gaining the positive photosensitivity gene therapy treatment RST-001. RST-001 is intended for retinas in which rod and cone photoreceptors have degenerated over time; it causes an increase in sensitivity to light hitting the retina. Later in the same month the company announced it would acquire Vitae Pharmaceuticals, Inc. for $21 per share ($639 million in total), boosting the company's dermatology pipeline; Tobira Therapeutics for $1.695 billion; and, a day later, Akarna Therapeutics for $50 million. The two latter acquisitions were aimed at boosting Allergan's liver disease portfolio. In October, the company announced it would acquire Motus Therapeutics for $200 million, further expanding its presence in the gastrointestinal market. In November 2016 the company acquired Chase Pharmaceuticals.
In 2016, the company restructured into four divisions: US Specialised Therapeutics (eye care, medical aesthetics, dermatology and Botox therapeutics), US General Medicine (CNS, cardiovascular, GI, women's health, anti-infectives and urology), and International. The fourth division consisted solely of the Anda distribution company, which has since been sold.
In November 2014, Actavis, plc announced its intention to acquire Allergan, inc, the manufacturer of Botox. Completion of the deal would increase its market capitalization to $147 billion. On March 17, 2015, Actavis, plc completed the acquisition of Allergan, inc in a cash and equity transaction valued at approximately $70.5 billion. The combination created a $23 billion diversified global pharmaceutical company with commercial reach across 100 countries. In June 2015, Actavis, plc officially changed its name to Allergan, plc.
Tazorac (tazarotene) for acne and psoriasis.
Zenpep (pancrelipase) for the treatment of exocrine pancreatic insufficiency due to cystic fibrosis, or other conditions.
|
0.961628 |
Short intermediate municipal no load funds are funds which hold municipal debt securities for periods of between one and five years. These funds may also be called municipal debt funds, municipal bond debt mutual funds, or simply no load municipal bond funds. These funds use their investment pool to invest in municipal debt: bonds and other debt securities issued by municipalities, which can include local governments like cities, counties, and states, as well as other public entities. Municipal debt securities are issued to raise money needed by the municipality for projects that benefit the public, such as airports, schools, colleges, roads, and other improvements. Municipal bonds and debt are used to better the communities of the local population, whether it is a town, state, or other municipality. Municipal debt funds allow you to invest in many types of municipal debt by making one investment, because these funds usually have diversified portfolios and are professionally managed.
Short intermediate no load municipal bond funds may work well for a wide variety of investors, but these funds are not for everyone. All investments carry some risks, whether these risks are large or small, and you should never invest money that you cannot afford to lose. Having said that, municipal debt mutual funds are considered one of the safer investment methods. Because this debt is backed by the municipality, it is less likely that there will be a default or missed payments. These bonds can also be purchased insured, which means that, for a higher price, a third party will guarantee the payments if a default does occur. Municipal bonds, like all other types of bonds, are rated according to their creditworthiness and the risks involved. These bonds can be rated from the highest quality all the way down to junk bonds. This rating can help you determine how much risk a bond actually carries. No load municipal bond funds usually hold a number of municipal bonds that may have different ratings, making for a very diverse investment. Using no load funds also helps improve the quality and value of your investment, because these funds do not charge high load fees. These fees are deducted from the value of your investment, leaving it worth less than a comparable no load investment from the very start. This is especially true of front end load charges.
Short intermediate municipal funds that do not carry a load charge are usually the best investment choices, as long as you are comfortable making your own investment decisions or are willing to learn what you do not know or understand. These funds offer tax exempt status, which means federal and possibly state taxes are waived on any income resulting from the investment. The tax exempt status makes no load municipal bond funds extremely attractive to many investors in higher tax brackets. One of the short intermediate municipal funds available is the Short Intermediate Muni Income Fund, which trades under the symbol FSTFX. This is one of the no load municipal bond funds which offers federal tax exemption, and the fund holds around one hundred investments across many types and categories of municipal bonds. This fund does have a redemption fee if the shares are not held for a specific time period, though: the fee equals one half of one percent of the shares redeemed, but only if the shares are held less than thirty days. This redemption fee is not a load fee; rather, it covers the costs of frequent transactions, so that all fund members do not have to cover the costs for investors who frequently buy and sell shares. It is more of a management or administrative cost, and does not apply if you hold your shares in this short intermediate municipal bond fund for at least thirty days.
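To make the fee arithmetic concrete, here is a small Ruby sketch; the dollar amounts and the 5% front-end load are invented for illustration, and only the half-percent-under-thirty-days redemption fee comes from the fund description above.

```ruby
# Illustrative fee arithmetic for the fund described above.
# The $10,000 investment and 5% front-end load are invented examples;
# the 0.5% redemption fee on shares held under 30 days is from the text.

investment = 10_000.0

# A front-end load is deducted before the money is invested.
front_end_load = 0.05
invested_with_load    = investment * (1 - front_end_load) # only $9,500 goes to work
invested_with_no_load = investment                        # the full $10,000 goes to work

puts format('Invested after 5%% front-end load: $%.2f', invested_with_load)
puts format('Invested in a no load fund:        $%.2f', invested_with_no_load)

# FSTFX-style redemption fee: 0.5% of the amount redeemed within 30 days.
def redemption_fee(amount_redeemed, days_held)
  days_held < 30 ? amount_redeemed * 0.005 : 0.0
end

puts format('Fee redeeming $5,000 after 10 days: $%.2f', redemption_fee(5_000, 10))
puts format('Fee redeeming $5,000 after 45 days: $%.2f', redemption_fee(5_000, 45))
```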
|
0.940129 |
Biography of the Victorian Romantic painter of The Lady of Shalott.
Best known as the creator of The Lady of Shalott, one of England's great masterpieces of Romanticism, John William Waterhouse started painting in a manner close to Lawrence Alma-Tadema (1836-1912) and Frederic Leighton (1830-96), depicting classical and historical scenes, but then turned to the depiction of literary themes executed in a dreamy, romantic manner. Although influenced by the Pre-Raphaelites, his sensuous handling of paint gave his works a unique identity. As a youth, Waterhouse worked in his father's studio, developing a talent for sculpture and painting, before he later attended the Royal Academy schools. His early classical themes were exhibited at the Royal Academy and the Society of British Artists. His later works can be classified as either Pre-Raphaelite or Neoclassical. He became financially successful and renowned during his lifetime, and prints of his works were widely popular among the middle classes. Famous examples of his Pre-Raphaelite style of realist painting include The Lady of Shalott (1888, Tate, London) and Hylas and the Nymphs (1896, Manchester City Art Gallery).
Towneley Hall Art Gallery, Burnley.
Very little is known of Waterhouse's private life, as few of his letters have survived. He was born in Rome, where his father was working as an artist; within a few years the family returned to England, where his father set up a studio. Waterhouse assisted him, and thus absorbed the basic techniques of watercolour, oil painting and sculpture at an early age. In 1870 he entered the Royal Academy school to train as an artist. In 1872 he exhibited with the British Artists and from 1874 with the Royal Academy. His first paintings were mainly of classical and historical themes, revealing the influence of Alma-Tadema, one of the most eminent classical painters of late 19th century England. Waterhouse was also influenced by Frederic Leighton, a sculptor and painter who painted classical, historical and biblical subjects.
Often associated with the Pre-Raphaelites, Waterhouse was born the year after they first exhibited at the Royal Academy. It wasn't until the 1880s that he came under the influence of the movement, which had revived literary themes in paintings. He inherited their taste for the myths of enchantresses, as well as the works of John Keats, William Shakespeare and Alfred Tennyson. The Pre-Raphaelite Brotherhood originally came together in opposition to the Royal Academy's promotion of Renaissance master Raphael (1483-1520) as the ideal artist of all time. They also rebelled against what they considered the triviality of genre painting, which was immensely popular in the mid-1800s. The themes they preferred were initially religious, then later literary: notably themes of love and death. The founding members of the group were William Holman Hunt (1827-1910), John Everett Millais (1829-96) and Dante Gabriel Rossetti (1828-82). Later they were joined by Ford Madox Brown (1821-93), Frederic George Stephens (1828-1907), James Collinson (1825-81), Thomas Woolner (1825-92) and Edward Burne-Jones (1833-98). In fact, they represented one of the first movements of avant-garde art, although they have been denied this status because of their support for traditional concepts of mimesis (imitation of nature) and history painting. The movement was mainly inspired by Romanticism, although its members were later divided over the argument of Realism versus Idealism. The Pre-Raphaelites were to influence many other artists, including Gustave Moreau (1826-98).
Waterhouse found Pre-Raphaelite subject matter quite agreeable, being especially fond of the femme fatale genre within Romantic settings. However, his painting technique differed considerably from that of the rest of the group. His fondness for blocks of colour and broad, chunky brushstrokes was derived from the French Realist painter Jules Bastien-Lepage (1848-84). It was a style introduced to him by Stanhope Forbes (1857-1947) and other painters from the Newlyn School in Cornwall. Waterhouse's famous masterpiece The Lady of Shalott (1888) illustrates Alfred Tennyson's poem "The Lady of Shalott," in which the Lady floats downstream toward Camelot. She sits in a boat, staring at three candles which symbolize life. Two have gone out, indicating her life will soon end. Alongside John Everett Millais' painting of Ophelia (1851, Tate), The Lady of Shalott is one of the most commonly reproduced posters of Pre-Raphaelite art, and is a strong candidate for the title of England's favourite painting. Waterhouse's mythological painting Hylas and the Nymphs (1896) is also a wonderful piece of art: a mythical scene showing Hylas being tempted to his death by river nymphs.
Another English painter whose works - like those of Waterhouse - caught the mood of Victorian England was the popular animal artist Sir Edwin Landseer (1802-73), best-known for his anthropomorphic paintings and prints of dogs.
Waterhouse was elected an associate of the Royal Academy of Arts in London in 1885 and became a full member in 1895. By the mid-1880s he was exhibiting at several galleries throughout the country, including London's Grosvenor Gallery and New Gallery. This brought a certain amount of financial success. By the 1880s he had begun exhibiting portrait paintings, as commissions increased due to his rising reputation. In 1901 he joined the St John's Wood Arts Club, which included the highly versatile artist George Clausen (1852-1944) and Alma-Tadema. He also served on an advisory council, advising up-and-coming artists such as the Indian-born British painter and designer Byam Shaw (1872-1919). Despite suffering from illness in the last decade of his life, Waterhouse continued painting until his death in 1917.
Today, his paintings are housed in some of the best art museums around the world, including those in England, America, Canada and Australia. In 2009 the Royal Academy of Arts hosted the largest ever retrospective of his works, entitled J.W. Waterhouse: Garden of Enchantment. This was the first exhibition ever to feature works from his entire career. The exhibition moved to the Montreal Museum of Fine Arts in 2010.
For more biographies of important modern artists, see: Famous Painters.
|
0.999997 |
Exceptions happen. There’s no way around that. But not all exceptions are created equally. For instance, a 404 “Not found” error can (and should) be handled correctly in your application. Let me give you an example of how to handle an ActiveRecord::RecordNotFound exception. Let’s assume you have an application that could show a user profile:

```ruby
# GET /p/:name
def show
  @profile = Profile.find(params[:name])
end
```

Now, it may happen that the :name parameter contains a value that cannot be found in our database, most likely because someone made a typo in the URL.
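A minimal sketch of one way to finish the example, using Rails' built-in `rescue_from`; the controller name and the use of the default `public/404.html` page are assumptions for illustration, not details given above.

```ruby
class ProfilesController < ApplicationController
  # Turn the uncaught exception (which would render as a 500)
  # into a proper 404 response.
  rescue_from ActiveRecord::RecordNotFound, with: :profile_not_found

  # GET /p/:name
  def show
    @profile = Profile.find(params[:name])
  end

  private

  # Serves the stock 404 page that ships with a new Rails app;
  # swap in a custom template if the app has one.
  def profile_not_found
    render file: Rails.root.join('public', '404.html'),
           status: :not_found, layout: false
  end
end
```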
|
0.999964 |
What kinds of jobs require foreign language skills?
There are worlds of opportunities that will open up for you should you decide to dedicate your time and attention to the study of a foreign language.
If you aspire to secure a government position, over 80 federal agencies rely on professionals with high-level competence in foreign languages. In addition, an increasing number of U.S. businesses are working internationally and need employees who can both communicate in foreign languages and understand other cultures. Senior executives have identified the lack of language skills as an enormous barrier to increasing American participation in overseas markets and have recognized language acquisition and cultural competence as critical assets for businesses.
|
0.968771 |
A cooperative, deck building game in a zombie infested world featuring dice rolling, card movement, and a single currency - combat!
Search and Survive is a cooperative, deck building game set in a zombie-infested world called The Wasteland where survivors cling to life and scavenge what they need from the ruins of the world. Players take on unique roles, grab their dice, and try to complete their objective, whether it be finding food, completing research, or gathering weapons.
The game has several features unusual to the popular deck building genre, namely dice rolling for attacks, line of sight, the use of only 1 currency (combat), and different abilities for each player through individual player mats. For example, the Medic can sustain more damage while the Scrapper excels at melee combat.
Players must plan ahead for future turns, work together to rescue survivors, keep one another alive long enough to survive the zombie advancements, and achieve the group's goals. Speaking of goals, the game has multiple objectives to choose from at the start of a session. Some involve finding certain equipment or supplies, and others directly involve achieving combat oriented accomplishments.
Search and Survive uses a combination of card values and dice rolls in combat in contrast to the standard deck building game. The luck factor of the dice makes attacking exciting, but risky. Players take damage card(s) if they fail an attack, and damage cards are a finite resource. Each player begins the game with 10 damage cards and if any player takes their last damage card the game ends in defeat.
The dangers of attacking can be mitigated by rescuing survivors and finding locations. Locations and survivors are permanents that enhance the players' ability to survive in The Wasteland. These powerful cards can allow players to reroll dice, draw additional cards, or even sacrifice their turn to heal themselves.
When players fail an attack they take damage cards from their personal damage supply and add them to their discard pile. Damage cards not only serve as life counters, but are also incorporated into players' decks when their decks are shuffled. They normally serve as dead weight in a player's hand, but when a player has 3 at a time they can be discarded to gain a Surge Token. Surge Tokens can be used to prevent a player's final point of damage, to reroll an attack, or to save a survivor/location from a negative event.
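As a rough illustration of the flow just described, here is a Ruby sketch; the class and method names are invented, and only the numbers (10 damage cards, 3 cards per Surge Token, defeat on the last card) come from the description above.

```ruby
# Sketch of the damage card / Surge Token flow. Names are invented;
# the rules (10 cards, 3-for-a-token, defeat on the last card) are
# from the game description.

class Survivor
  attr_reader :damage_supply, :discard_pile, :hand, :surge_tokens

  def initialize
    @damage_supply = Array.new(10) { :damage } # each player starts with 10
    @discard_pile  = []
    @hand          = []
    @surge_tokens  = 0
  end

  # A failed attack moves a damage card from the personal supply to the
  # discard pile, where it will later be shuffled into the deck.
  def take_damage!
    @discard_pile << @damage_supply.pop
    raise 'Defeat: a player took their last damage card!' if @damage_supply.empty?
  end

  # Holding 3 damage cards at once lets a player trade them for a Surge
  # Token (where the traded cards go is a rules detail not covered here).
  def convert_damage_to_surge!
    return unless @hand.count(:damage) >= 3
    3.times { @hand.delete_at(@hand.index(:damage)) }
    @surge_tokens += 1
  end
end
```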
Unfortunately the most prevalent inhabitants of The Wasteland are the zombies. Players must make attacks against them, suffer their abilities, and weather the harsh effects of special Event Walkers. These powerful, and sometimes helpful, zombies spawn randomly and are either resolved when they first spawn or when they are killed. Defeating the zombie horde will be no easy feat.
The dice! Dice are used primarily for attacking, but are also used to resolve events, resolve card effects, and determine which zombies will advance.
Advancement! Instead of having multiple currencies, Search and Survive only has one, and that's combat. Every new card has to be found and fought for before being added to a player's deck.
Player boards! Players begin the game with character boards that give them unique abilities.
The Wasteland! The play area is populated with zombies and survivors. The cards interact with one another and zombies are always trying to advance to damage the players.
Objectives! At the beginning of a game of Search and Survive players must choose an objective. The different objectives, combined with random player boards and the millions of possible card combinations in The Wasteland, add a lot of replayability.
The Cards! Not only are the cards beautiful in a creepy sort of way, but their colored backgrounds also serve a purpose. Each type of card has a unique color to make cards in The Wasteland and in hand easily distinguishable.
*This video is an introduction of game play showing examples of attacking, taking damage, the different types of cards in the game, zombie movement, and zombie abilities.
*This video is a game play example showcasing The Wasteland, ranged combat, acquiring resources, and the Horde Phase.
The game is setup with 3 rows of cards in a set number of columns for the number of players. For example, in a 2 player game there are 5 columns. There's a Resource Row where items, weapons, and locations spawn in the rear of the play area. In front of the Resource Row are the 2 rows for zombies and survivors, called the Back Row and the Front Row.
In Search and Survive there are 4 player turns divided amongst the players. In a 1 player game the lone player takes all four turns, while in a 4 player game each player gets a single turn. No matter how many players, there are always 4 player turns.
Successful attacks against zombies in the back row will reveal one or more resources on the Resource Row. When a player exposes a resource through combat, that resource card is added to the player's discard pile. This is yet another way that Search and Survive distinguishes itself from other deck builders. There is no currency in the game except for combat. New resources must be fought for.
After the player turns is the Horde Phase. During the Horde Phase the zombies in The Wasteland activate their abilities and have a chance to advance. If the zombies advance past the Front Row of The Wasteland then they pass off of the play area, go to the zombie discard pile, and deal damage to the players.
I asked playtesters for Search and Survive to send me their thoughts on the game. Here are their unedited responses.
1. Search and Survive - With this level you will get 1 production copy of Search and Survive with all unlocked Stretch Goals! You can have this tier shipped to you or it can be picked up locally in Grand Junction, CO!
2. Search and Survive - Producer Level - The Producer Level tier will get you 1 production copy of Search and Survive with all unlocked Stretch Goals. This limited package will also get your name in the rule book for Search and Survive as a producer. As the campaign unfolds some more goodies might even be added to this level. This is for those who want to support the production of the game in a big way and get some bragging rights out of the deal!
3. Search and Survive - FLGS Level - Local game stores are near and dear to my heart, and as such I have worked closely with my wonderful local board game store, The Jester's Court, to come up with a commercially viable package for store owners. Kickstarter, while wonderful in assisting game developers, can take away some of the income of FLGSs. Local stores are a vital part of the board game community and as such should be passionately supported. I wanted to attempt to alleviate the problem by having this option open for them. While anyone can purchase from this limited tier, it was designed specifically for FLGSs. This package includes 5 copies of Search and Survive and all unlocked Stretch Goals!
If you live somewhere that is not currently supported by this campaign, please let me know and I will get a shipping quote and get that fixed ASAP!
Due to the dimensions of the box for Search and Survive, certain shipping rates were astronomical. I tried to make all shipping prices as fair as I could.
The game is finished but certain components might go through minor aesthetic changes between now and printing.
Shipping will be fulfilled using the USPS.
Stretch Goal components have been tested already, but will go through a final test period before being added to the game. The artwork for the stretch goals will have to be completed as well.
Once the campaign ends a final, all-inclusive testing phase will take place in order to ensure the highest quality gaming experience we can offer. This will include any unlocked Stretch Goal components.
Though we are anticipating fulfillment of the game in late Q1 of 2017, we are hoping for an earlier finish. We'd rather under-promise and over-deliver than vice versa.
The black borders around the cards in the above images are there simply for contrast and are not present on the finished products (see game play video). The cards are borderless.
The Stretch Goal components, if not successfully funded in this campaign, will be added to a future expansion for Search and Survive.
In the introduction video I said that every card has a secondary ability, but what I should have said is "every situational card (cards that can be sacrificed for a powerful effect) has a secondary ability". Cards that a player might not want to use right away have the ability to be discarded (and remain in the deck) to draw a new card, thus attempting to eliminate dead weight cards.
Search and Survive has 1 card with a drug reference. While neither the name nor the ability is overly explicit, parents might want to take this into consideration.
*This was the first version of the Kickstarter video. I liked it but I used a camera I wasn't familiar with and the sound quality is way off. If you're interested in more videos though, here's one!
*These are images from our fantastic FLGS here in Grand Junction, CO, The Jester's Court! And if anyone asks I totally didn't get red paint on their garage door and then accidentally take the door's base paint off while trying to clean it. It was Zane.
As with any project, this one is subject to potential obstacles. Here's what I think the potential obstacles are and what I've done to mitigate them.
Challenge 1: Being a New Designer - This is my first game going into production. As I'm sure any first-time designer will attest (and they have to me, multiple times), it isn't an easy process. There's a lot to consider and many choices to make. Luckily, I have friends who have attempted the process (both successfully and unsuccessfully) and who were more than happy to lend guidance. From books to mentors who have completed the process themselves, I have sought out help at every step of the way in order to prepare myself for a successful campaign.
Challenge 2: Adequate Funding - I have worked with the wonderful people at QPC Games for over a year to come up with a realistic plan for producing Search and Survive. I have accurate figures for the printing of the game, but I also have a comfortable "unforeseen problems" cushion built into the budget.
If the game successfully funds at the level I have set for this Kickstarter campaign, we'll be able to do a large print run and have some to spare. So what happens to the "cushion money" if there are no problems, you might ask? Simple: if everything goes off without a hitch, then Search and Survive will simply have a larger print run ordered. However, the money is there to take care of any problems that could pop up should the opposite happen.
Challenge 3: Shipping Costs - Since shipping costs have to be set by country, calculating accurate shipping costs was quite difficult. No matter how much research I did, it seemed that the best answer anyone could provide was *shrug*. After extensive research I have set prices for shipping based on my multiple trips to the local USPS and hours on their website. Since shipping boxes will also have to be made for the game, I have a feeling that a portion of shipping is going to come out of pocket no matter what. I'm OK with that as long as everyone gets their game!
Thank you for showing your support for this project! You rock!
A copy of Search and Survive along with all unlocked Stretch Goals!
A copy of Search and Survive with all unlocked Stretch Goals to be picked up at The Jester's Court in Grand Junction, CO. There are no shipping costs associated with this tier. The game will be waiting for you at the store!
A copy of Search and Survive with all unlocked Stretch Goals and your name in the rule book as a producer for the game!
5 copies of Search and Survive with all unlocked Stretch Goals! Also, the shipping has already been included in this tier!
|
0.970219 |
Ten friends are going out to dinner to celebrate their graduation. Alice has two close friends, Brenda and Clarice, and the three of them wish to sit next to each other. However, David also wants to sit next to Alice. If all ten friends sit around one large circular table, how many possible unique seating arrangements are there that make all four people happy?
Note that the question says nothing about the chairs being numbered, or anything else that breaks the symmetry of the round table, so you may count all rotational symmetries as one arrangement.
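A quick way to verify the count is brute force. Below is a Ruby sketch that pins Alice to seat 0 to quotient out rotations and checks both constraints; it confirms the analytic count of 4 × 6! = 2880 (four orderings of the David-Alice-Brenda-Clarice block in which Alice is next to David, times the circular arrangements of seven units).

```ruby
# Brute-force count of the seating arrangements. Alice is pinned to
# seat 0 so each rotation class is counted exactly once.
others = %w[B C D E F G H I J] # Brenda, Clarice, David, and six others

# The 10 possible arcs of 3 consecutive seats around a 10-seat table.
ARCS = (0...10).map { |s| [s, (s + 1) % 10, (s + 2) % 10].sort }

def adjacent?(s, t)
  d = (s - t) % 10
  d == 1 || d == 9
end

count = 0
others.permutation do |perm|
  seat = { 'A' => 0 }
  perm.each_with_index { |p, i| seat[p] = i + 1 }
  next unless ARCS.include?([seat['A'], seat['B'], seat['C']].sort) # A, B, C consecutive
  next unless adjacent?(seat['A'], seat['D'])                       # D next to A
  count += 1
end
puts count # => 2880
```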
|
0.9248 |
What simple things can i do to make myself more presentable to females?
DON'T gel your hair. It comes off as tacky and immature.
However, smiling more and taking care of your personal hygiene are the best options. You'll not only be presentable to females but presentable to everyone else.
I'd advise using wax or a pomade. A little goes a very, very long way.
yeah i can relate to the whole shaving thing because i can not grow good facial hair. I'll be sure to keep it nice and clean, thanks. Is the deodorant just for sweating or smell too? Is wearing cologne for day to day activities like shopping or school generally good?
I see no problem with glasses. Button up shirts are always better than polos. Polos are better than t-shirts.
Next, be the type of person someone would want to talk to. Make sure you can talk (at least a bit) about art, movies, politics. Go to a museum, read non-fiction, take a cooking class. Make sure that you have something to say after "hello."
And maybe call them "people" and not "females." Your title makes it sound like you are hunting deer.
thanks and i know what you mean about calling them people, i just wanted to specify what i'm interested in.
dude - rather than having an objective of being more presentable to females, of which there are a wide variety with differing desires, recommend you be yourself and allow that to attract the type of woman who would like you.
i hear ya, i don't plan on an extreme makeover. Just looking for general advice is all.
|
0.999863 |
This article is about the geographic region. For the university, see Northern Illinois University. For that school's athletic program, see Northern Illinois Huskies.
Northern Illinois is a region generally covering the northern third of the U.S. state of Illinois.
Northern Illinois is dominated by the metropolitan areas of Chicago, the Quad Cities, and Rockford, which contain a majority (over 75%) of Illinois' population and economic activity, including numerous Fortune 500 companies and a heavy manufacturing, commercial, retail, service, and office based economy. Much of the economic activity of the region is centered in the Chicago Loop, the Illinois Technology and Research Corridor, and the Golden Corridor. However, rural sections of this region are highly productive agriculturally, and are part of the Corn Belt. The headquarters for John Deere farming equipment are located in Moline. Additional smaller cities in this area include Kankakee, LaSalle-Peru, Ottawa, Freeport, Dixon, and Sterling-Rock Falls, which still have predominantly manufacturing and agricultural economies. Northern Illinois is also one of the world's busiest freight railroad and truck traffic corridors.
Interstate 80 is sometimes referenced as the informal southern boundary of Northern Illinois, and is often used in weather reports as a reference point, as in "south of Interstate 80 will see sleet and rain, but north of Interstate 80 can expect mostly snow."
Interstate 88 (the Ronald Reagan Memorial Tollway) connects the region, east-west, stretching from the Quad Cities, eastward through Sterling-Rock Falls, Dixon, DeKalb, Aurora, Naperville, and into Chicago. Northern Illinois is also the only region of the state in which there are tollways, which are run by the Illinois State Toll Highway Authority, another trait separating this region from Central and Southern Illinois.
Northern Illinois University (NIU), in DeKalb, IL, is located at the heart of Northern Illinois and is the state's second largest institute of higher education. According to the Regional History Center at NIU, its area of service to the northern portion of Illinois includes the 18 northernmost counties, excluding Cook, Grundy, Kankakee, Mercer and Rock Island Counties, which are covered by Eastern Illinois University, Western Illinois University, and the University of Illinois at Chicago.
Several major colleges can be found in the Chicago area including Illinois' third largest state school, the University of Illinois at Chicago, as well as the University of Chicago and Northwestern University. Other notable schools include the Illinois Institute of Technology, Loyola University, DePaul University, Columbia College, Northeastern Illinois University, and Roosevelt University.
Several liberal arts schools such as Aurora University, Lewis University, North Central College, Elmhurst College, Wheaton College, Concordia University, and North Park University dot the Metropolitan Chicago landscape. Other institutions of higher education are found in Rockford, including Rockford University, Rock Valley College, Northern Illinois University-Rockford, University of Illinois College of Medicine-Rockford, a branch of Rasmussen College, and a branch of Judson University. Other colleges near the Quad Cities include Western Illinois University-Quad Cities and Augustana College.
These schools, along with several others, help to make Northern Illinois a vibrant research area. Significant developments in science, including the creation of the atomic bomb and the Fujita Scale, were rooted in Northern Illinois institutions.
Politically, the region is quite diverse, with Cook County and Rock Island County being long-time strongholds for Democrats and north central Illinois counties (Boone, Ogle, Lee, etc.) being reliable for Republicans. Suburban Chicago counties such as DuPage, Kane, Kendall and McHenry Counties were also very reliably Republican until recently. Some counties, such as Lake, Winnebago and DeKalb, were once Republican strongholds, but are now more evenly divided. Famous politicians native to the area include Ulysses S. Grant, Ronald Reagan, J. Dennis Hastert, Donald Rumsfeld, Hillary Clinton, and Mayors Richard J. Daley and Richard M. Daley.
Culturally, the area is tied heavily to Chicago. Most residents of Northern Illinois tend to root for Chicago teams and lean towards the Chicago media market. The major college football program in Northern Illinois is the NIU Huskies. Northern Illinois also has large fanbases for the Illinois Fighting Illini, Notre Dame Fighting Irish, Iowa Hawkeyes, and the Northwestern Wildcats. In Central and Southern Illinois, residents are tied primarily to St. Louis. Additionally, regional dialects in Northern Illinois vary from those in other parts of Illinois. Different areas within Northern Illinois even have their own independent cultures. Typically, areas west of Interstate 39 have more ties to Iowa and the Quad Cities area, as that is roughly the location of the westernmost terminus of the Chicago media area. Dialects within Northern Illinois differ as well, underscoring the point. Depending on location and ethnicity, a resident of the Chicago Metropolitan Area may have the stereotypical Chicago dialect, whereas those in more affluent areas, such as Lake County, may have a less easily pinpointed manner of speaking. Those west of Chicago have more stereotypical Midwestern dialects, and might not be distinguishable from people in Iowa or Nebraska.
Depending on how close to a specific metropolitan area a county is, their culture and media reflect that of the metro area. There are exceptions, however. McHenry County may sometimes be considered Chicago-influenced, and, at times, Rockford-influenced. Areas such as the Ottawa-Streator Micropolitan Statistical Area have a comfortable mix of culture from the Chicago area, Quad Cities area, and Peoria, perhaps being due to its location in the center of the region.
The Chicago metropolitan area, or Chicagoland, is the metropolitan area associated with the city of Chicago, Illinois, and its suburbs. It is the area that is closely linked to the city through geographic, social, economic, and cultural ties.
Chicago (/ʃɪˈkɑːɡoʊ/ or /ʃɪˈkɔːɡoʊ/) is the third most populous city in the United States, after New York City and Los Angeles. With 2.7 million residents, it is the most populous city in both the U.S. state of Illinois and the American Midwest. Its metropolitan area, sometimes called Chicagoland, is home to 9.5 million people and is the third-largest in the United States. Chicago is the seat of Cook County, although a small part of the city extends into DuPage County.
The collar counties are the five counties of Illinois that border on Chicago's Cook County. The collar counties (DuPage, Kane, Lake, McHenry, and Will) are tied to Chicago economically, but, like many suburban areas in the United States, have very different political leanings than does the core city. Chicago has long been a Democratic stronghold, and the collar counties are known for being historically Republican strongholds.
While the demographics of these suburban Chicago counties are fairly typical for American metropolitan areas, the term is apparently unique to this area. And because Chicago is so firmly entrenched in the Democratic column, and rural Downstate Illinois is so overwhelmingly Republican, the collar counties are routinely cited as being the key to any statewide election. However, that conventional wisdom was challenged by the fact that in 2010 Democrat Pat Quinn became governor while winning only Cook, St. Clair, Jackson and Alexander counties. All five collar counties went Republican, so the key to winning that gubernatorial election was simply winning Cook County, but by a wide enough margin to overwhelm the rest of the state.
While the term is perhaps most often employed in political discussions, that is not its exclusive use. Barack Obama used the term in his speech before the Democratic National Convention in 2004.
The Fox Valley—also commonly known as the Fox River Valley—is a rural, suburban, and exurban region within Illinois along the western edges of the Chicago metropolitan area. This region centers on the Fox River of Illinois and Wisconsin. Around 1 million people live in this area. Native American tribes that lived near the Fox River included the Potawatomi, Sac, and Fox tribes. Some of the cities in the Fox River Valley are part of the Rust Belt. Within this region is Aurora, the second largest city in the state, Elgin, and the nearby cities of Batavia, St. Charles, and Geneva, which have been known as "the Tri-City area" since the early 20th century.
Northwestern Illinois is generally considered to consist of the following area: Jo Daviess County, Carroll County, Whiteside County, Stephenson County, Winnebago County, Ogle County, and Lee County. Northwestern Illinois borders the states of Iowa to the west and Wisconsin to the north.
The Rockford Metropolitan Statistical Area, as defined by the United States Census Bureau, is an area consisting of four counties in north-central Illinois, anchored by the city of Rockford. As of the 2010 census, the MSA had a population of 349,431.
The Quad Cities is a group of five cities straddling the Mississippi River on the Iowa–Illinois boundary, in the United States. These cities, Davenport and Bettendorf (in Iowa) and Rock Island, Moline, and East Moline (in Illinois), are the center of the Quad Cities Metropolitan Area, which, as of 2012, had a population estimate of 382,630 and a CSA population of 474,226, making it the 90th largest CSA in the nation. The Quad Cities is midway between Minneapolis and St. Louis, north and south, and Chicago and Des Moines, east and west. The area is the largest 300-mile market west of Chicago.
^ "Regions of Illinois". Illinois Department of Natural Resources. Retrieved 4 October 2013.
^ "Illinois Regions". Illinois Environmental Protection Agency. Retrieved 4 October 2013.
^ "18 Northern Illinois Counties". Regional History Center. Northern Illinois University. Retrieved 4 October 2013.
^ "Annual Estimates of the Resident Population for Incorporated Places Over 50,000, Ranked by July 1, 2012 Population: April 1, 2010 to July 1, 2012". U.S. Census Bureau. May 2013. Retrieved 2013-12-05.
^ "Table 1. Annual Estimates of the Population of Metropolitan and Micropolitan Statistical Areas: April 1, 2010 to July 1, 2012". U.S. Census Bureau. March 2013. Archived from the original on April 1, 2013. Retrieved 2013-12-05.
^ "Collar Counties". Encyclopedia.chicagohistory.org. Retrieved 2013-12-18.
^ AC4508. "PSB: Progressive Illinois Politics:: The Collar County Shift". Prairiestateblue.com. Archived from the original on 2013-12-19. Retrieved 2013-12-18.
^ "Quinn-Brady race may be decided in collar counties". Chicago Sun-Times. Retrieved 2013-12-18.
^ "Why the Collar Counties are Trending GOP". NBC Chicago. Retrieved 13 September 2013.
^ "Ballots Cast". Illinois State Board of Elections. Retrieved 2013-12-18.
^ Mount, Charles (30 May 1989). "Collar Counties Cutting Court Backlogs". Chicago Tribune. Retrieved 13 September 2013.
^ "Collar County Homepage". Northern Illinois University. Retrieved 2013-12-18.
^ "Welcome to the Quad Cities". City Guide Post Inc. Retrieved February 2, 2008.
^ "Community Visitor Information". Illinois Quad Cites Chamber of Commerce. Archived from the original on December 14, 2007. Retrieved February 2, 2008.
^ Johnson, Dirk (October 20, 1987). "East Moline Journal; Friday Night High, in the Bleachers". The New York Times. Retrieved February 2, 2008.
^ "Cool Community". Quad Cities Chamber.
^ "Annual Estimates of the Population of Metropolitan and Micropolitan Statistical Areas: April 1, 2010 to July 1,". 2011 Population Estimates. United States Census Bureau, Population Division. June 2012. Archived from the original (CSV) on April 27, 2012. Retrieved 2012-08-01.
^ "Annual Estimates of the Population of Metropolitan and Micropolitan Statistical Areas: April 1, 2010 to July 1, 2011" (CSV). 2012 Population Estimates. United States Census Bureau, Population Division. April 2012. Retrieved 2013-03-16.
|
0.999993 |
The vector space model is a widely used model in computer science. Its wide use is due to the simplicity of the model and its very clear conceptual basis that corresponds to the human intuition in processing information and data. The idea behind the model is very simple, and it is an answer to the question, how can we compare objects in a formal way? It seems that the only way to describe the objects is to use a representation with features (characteristics) and their values. It is a universal idea, and it even seems to be the only possible way to work with formal objects.
|
0.975112 |
Han 漢, or Shu-Han 蜀漢 was one of the three empires during the period of the Three Kingdoms (220-280). It fought for supremacy over the lands of China and contested against the states Wei and Wu. Founded by Liu Bei 劉備, who claimed descent of the royal Liu-family of Han. In 220 when the Later Han dynasty came to an end and Wei was founded in its stead, Liu Bei founded his own dynasty in 221 and called it Han. It was meant as a continuation of the Han dynasty. He crowned himself emperor of Han, despite the fact that the deposed emperor Xian of Han, was still alive. Because of this, some historians consider Liu Bei an usurper. After taking the crown he no longer pursued the release of former emperor Xian.
Liu Bei's Han is often referred to as "Shu-Han", or simply "Shu", to distinguish it from the previous Han dynasties. Among enemies Liu Bei's Han was also called "Shu", instead of "Han", because they did not acknowledge the state as a continuation of Han. In the Records of the Three Kingdoms, the official history for the period, each of the three empires has a "book", and Han's book is called the Book of Shu, instead of the Book of Shu-Han or Book of Han.
Liu Bei was not a very famous figure throughout China during the Yellow Turban Rebellion being just a minor officer under the command of Zou Jing 鄒靖. Eventually Liu Bei was assigned a post in Xu Province and became an officer under Tao Qian 陶謙.
After Tao Qian's defeat at the hands of Cao Cao 曹操, Liu Bei fled and eventually joined up with Lü Bu 呂布 at Xiapi. Liu Bei left Lü Bu and joined up with Cao Cao at the Battle of Xiapi where Lü Bu was eventually defeated and when Lü Bu tried to persuade Cao Cao to keep him alive, Liu Bei made sure Cao Cao would execute him. Liu Bei then headed to Jing Province, which was at that time under the control of Liu Biao 劉表, also a kinsmen of the Han. However, Cao Cao marched south and took over Jing Province and much of the northern plains and became very powerful. Cao Cao feared Liu Bei might grow in power and decided he had to get rid of him. Liu Bei managed to escape Cao Cao at Changban and headed towards the lands of Wu, to seek help from their ruler, Sun Quan 孫權. The Wu forces, with troops from Liu Bei, managed to defeat Cao Cao at the decisive Battle of Chibi.
Following Chibi, it was Liu Bei who seized the most land in Jing Province. Wu thought this unfair because Liu Bei's role in the Battle of Chibi was much smaller than Wu's, yet Liu Bei reaped a much bigger reward in the form of almost an entire province, whereas Wu only obtained few commanderies. Liu Bei used Jing Province as a base until his strategist, Pang Tong 龐統, created a plan to defeat Liu Zhang 劉璋, a relative of Liu Bei, and take over Yi Province. Liu Bei succeeded in taking Yi Province, but at the cost of Pang Tong's life. Now holding Jing and Yi Province, Liu Bei was the second most powerful warlord, behind Cao Cao, but above Sun Quan. In 219 Guan Yu, one of Liu Bei's most talented generals, was stationed in Jing Province but was defeated by Wu's Lü Meng 呂蒙 and Jing subsequently fell in the hands of Wu, leaving only Yi Province under the rule of Liu Bei. Later in the year 221 Liu Bei declared himself Emperor of a new Han dynasty.
In 222 Liu Bei wanted to attack Wu to avenge the death of Guan Yu. Before the attack, another of his generals, Zhang Fei, was killed by his subordinates who fled to Wu afterwards. Guan Yu and Zhang Fei had been with Liu Bei since the beginning of his career and their deaths made a big impact on Liu Bei. When Liu Bei met the forces of Wu, led by Lu Xun 陸遜, he was ultimately swiftly and heavily beaten. He fled back to Yi Province where he died in 223. He was succeeded by his, often considered incapable, son Liu Shan 劉禪 who was to administer state affairs with Zhuge Liang 諸葛亮.
Zhuge Liang made peace with Wu and re-newed their alliance. He then headed south where barbarian rebels were causing havoc. He pacified the barbarian regions and gained their loyalty. He then embarked on a large scale offensive on Wei; five campaigns in total, of which all except one failed. On the contrary, despite losing many battles, Zhuge Liang did manage to lose only few men and resources, in contrast to Wei, who lost many men, fine generals and resources.
When Zhuge Liang died in 234 he was succeeded by Jiang Wei 姜維, who also embarked on a campaign against Wei; six in total. He was defeated each time and unlike Zhuge Liang, did lose many men and resources and was slowly draining the state.
In 263 Liu Shan submitted Shu-Han to the invading forces of Wei led by Deng Ai 鄧艾 and Zhong Hui 鍾會.
When Liu Bei founded his own dynasty, he called it Han 漢. It was meant as a continuation of the fallen Han dynasty. His rivals did not acknowledge Liu Bei's Han and therefore referred to it as just Shu 蜀, or Shu-Han 蜀漢, because Liu Bei's capital city, Chengdu, was located in Shu commandery.
Shu was a commandery Yi province, so Liu Bei's rivals could've called Bei's state Yi, but did not, perhaps because the Lius did not always completely rule Yi. Some southerner groups, like the Nanman, who occupied some commanderies south in Yi, did not always acknowledge the rule of the Lius.
The name of a commandery in Yi Province.
Part of the name Shu-Han.
The reign colours of Chinese Dynasties were in accordance with the theories and phases of the Five Powers (Wǔxíng 五行). The Han dynasty ruled through the power of Fire and its colour Red. When Cao Pi forced the abdication of Han, Liu Bei said he was forced to re-found the Han dynasty and make himself its emperor. Thus Liu Bei's Han must've ruled through the same power as the Han dynasty, which was Fire and its associated reign colour was Red.
↑ Liu Shan was posthumously honoured with the title Duke Si of Anle 安樂思公 during the Jin dynasty. In the 4th century he was honoured as Emperor Xiaohuai of Han 漢孝懷皇帝 by Liu Yuan 劉淵 of the Han Zhao state 漢趙. A state of which the founders claimed descent of the Han Imperial familyline.
…Shu-Han was probably Red. Not green like in Koei-Tecmo's Three Kingdoms themed videogames.
Wu, Jonathan, "Introduction to the Kingdom of Shu". Retrieved from Kongming's Archives: kongming.net.
|
0.96167 |
Im feeling like an old parent. Please help. Charles is one-third as old as his father.
Im feeling like an old parent. Please help.
Charles is one-third as old as his father. In six years, Charles will be three-sevenths as old as his father. If Charles is less than 18 years old, how old is Charles' father?
Which would not be a benefit for a child when their parent participates in their classroom? 1.the child feels special and important. 2.the child feels transitory distress by having to say good-bye to a parent twice in one morning.
|
0.999995 |
SAFIYAH TASNEEM : Naked Arab makeup look..
.. or should I rephrase: Arabic look with the Naked palette!
I chose to do another Arabic look as I'm getting in the mood for my holiday to Dubai on Sunday.. So I'll be away for a week but I'll be back soon, so bear with me!
After using Bulletproof tinted moisturiser all over the face and filling in the brows with the Brown Sugar box, I used the UDPP all over the eyes followed by NYX JEP in Milk.
For the inner corner I used Virgin, followed by Half Baked and Smog on the middle of the eyelid. For the outer corner I used Darkhorse and Creep to make Darkhorse darker. I used La Femme white eyeshadow for the browbone, but you can use any white eyeshadow for this.
To line the eyes I used MAC fluidline in Blacktrack, making sure to line the waterline all the way round, including the inner corner of the eye. Using a thin liner brush, I drew another flick with MAC fluidline creating a "V" shape from the outer corner of the eyes.
I added Eylure lashes in Nadine from the Girls Aloud range, although I think Kimberley lashes would have suited this look better!
After concealing under the eyes, I used UD baked bronzer on the cheeks forehead and chin, followed by Sleek's Pan-Tao blush on the cheeks.
The lips were filled in with UD Wicked lip liner (soft pink colour) and Sleek pout polish in Sugar May.
I hope you like this look, I loved doing it and am quite pleased with the results.. so here's some inspiration for the Naked Palette lovers out there!
Definitely a sultry arab look. I'm so going to try re-create this look!
Ooh I wanna try this!!! I love how you shape your shadows, and I love love love the blending! Your hijab color suits this look so well and also your skin tone, love it!
Those eyes are perfection! I love!
amazing eye makeup, I really want to try it but I'm not sure if it'll come out to be this good!!!
amazing eye!!! mashaalah love it!!
That is really gorgeous, you have very pretty eyes!
OHH WOOOW :) BETTER,IMPOSIBLE! YOU ARE GREAT!
MASHALLAH i love it!!! AHHHH i wanna be able to be that good! and i love neutral looks so yes im one of those people you mentioned who absolutely adore the naked palette! :-P lol but yeah AH-MAZING JOB!! s0o gorgeous!
@ halima, ailah, rakhshanda - please do try, would love to see your versions!
@sami - yep I have to admit i prefer UD to MAC if I'm honest with you.. for eyeshadow pigmentation anyway!
@kiranK.A - aww thank you, everybody loves this palette, so do play around with it more as the looks are endless!
@fbegumstar - glad you found my blog, hope you like it!
perfection has a new name: frootibeauty!
It looks gorgeous! Love the colours you chose for the look.
This is by far the best look that I've seen done with the Naked palette!
This look is amazing! You look stunning with those colours! You've really inspired me to recreate this look!
Wow, wow and WOW!! That looks amaaaazing! By far the BEST makeup tutorial I've ever seen!
I am loving this blog!!! I can't wait to learn some new skills from you :-) This colour scheme is amazing btw. You look beautiful with your hijab.
i'm loving this tutorial. every time i enter you blog i have to see it:)). It is AMAZING!!!!!
@liloo aww thank you, you're too kind!
@stacey yes please do, I'd love to see your version of this!
@ everyone - thank you soo much for your lovely comments, i hope some of you do try this look, I would love to see everyones variations!
|
0.950701 |
Marion Price Daniel, Sr. (October 10, 1910August 25, 1988), was a Democratic U.S. Senator and the 38th Governor of the state of Texas. He was appointed by President Lyndon B. Johnson to be a member of the National Security Council, Director of the Office of Emergency Preparedness, and Assistant to the President for Federal-State Relations. Daniel also served as Associate Justice of the Texas Supreme Court.
President Johnson later appointed Daniel to head the Office of Emergency Preparedness. In 1971, Governor Preston Smith named Daniel to the 9-member Texas Supreme Court, filling a vacancy left by the retirement of Clyde E. Smith. He was re-elected twice in 1972 and 1978, and retired at the end of his second term.
After retiring from the Texas Supreme Court, he served as pro-bono legal council for the Alabama-Coushatta Indians. As their counsel, he was instrumental in the 1965 creation of the Texas Commission for Indian Affairs (TCIA), 59th Legislature, House Bill 1096. On April 5, 1967, the Texas Legislature passed House Concurrent Resolution No. 83 recognizing Daniel for his contributions to the tribe and to the creation of the TCIA.
Marion Price Daniel Sr (properly Marion Price Daniel II) was born October 10, 1910 in Dayton, Texas, to Marion Price Daniel Sr (1882 – 1937) and Nannie Blanch Partlow (1886 –1955), in Liberty Texas. He was the eldest child. Sister Ellen Virginia Daniel was born in 1912, and brother William Partlow Daniel in 1915. As a teenager he was a reporter for the Fort Worth Star-Telegram. He put himself through law school at Baylor University by working as a janitor and dishwasher and by working at the Waco News Tribune. He received his degree from Baylor in 1932. After graduation he established his own practice in Liberty County and often accepted livestock and acreage for his fees.
The Jean and Price Daniel Home and Archives came under full ownership of the State of Texas in October 1998. Governor and Mrs. Daniel began construction on the Greek Revival style Liberty, Texas house in 1982, with an official opening in 1984. It was patterned after the Governor's Mansion in Austin designed by architect Abner Cook. The Daniels donated the home and of land, reserving a lifetime interest, to the Texas State Library Archives. The home is the repository of the library, archives, furniture, and mementos that document the Daniels' lives and years of public service.
The Price Daniel House, maintained and funded by the Atascosito Historical Society, is located on the grounds of the Sam Houston Regional Library and Research Center, a part of the Archives and Information Services Division of the Texas State Library and Archives Commission. Located north of Liberty on FM 1011, the Center is open Monday through Friday, 8 AM to 5 PM and Saturday 9 AM to 4 PM. Free admission.Tours are available by appointment; group tours must be arranged two weeks in advance.
Marion Price Daniel Sr. is also known as Marion Price Daniel Jr. and as Marion Price Daniel II, because his father Marion Price Daniel Sr (1882 – 1937) was the first generation with the name. Daniel II married Jean Houston Baldwin on June 28, 1940. Their son publicly known as Marion Price Daniel Jr. is properly Marion Price Daniel, III. The couple also had three other children: Jean Houston Murph, Houston Lee, and John Baldwin.
|
0.685147 |
What's making a petal-pretty showing after this dry winter run?
A SIGHT WORTH SEEING: After you've seen a bit of azure poking through the sand in an arid land, the want, and even need, to experience the rare moment again can be a lifelong thing. Which means that long about late winter your mind turns to the California deserts and whether they're getting some good wildflower action. The short answer, at least following the droughty winter of 2013-2014, is no, the petal scene is not prime, but this is not shocker; predictions, and the lack of rain, said this was to be so. But of course flowers do make a showing, because they always do, if not in blanket form, then a peep here and a peep there. Dr. Ian Malcolm, as played by Jeff Goldblum, said in "Jurassic Park": "...life finds a way." It always does, each spring, and wildflowerians can take heart: Beautiful buds are making a showing around the Anza-Borrego as of late February.
BUT... if you just want some wildflowers, and the setting does not need to be desert-y, consider heading into the Sierra, where wildflower walks grow popular once spring arrives. The meadows of Yosemite make for some fine flower-searching, even when things have been on the drier side. Fingers and stems crossed for a wetter next-winter.
|
0.981816 |
With the intensification of globalization, national boundaries no longer act as major barriers to cross-border economic activities. As such, firms increasingly have a propensity not just to limit their markets to their home countries, but also to try to achieve organizational growth by selling their products in foreign economies. Conversely, this means that before the emergence of globalization, firms’ food chains were relatively stable, and they could peacefully run their businesses within domestic markets. However, the increasingly intensified globalization has triggered a situation in which international markets have turned into ferocious battlefields for multinational corporations (MNCs). Hence, these companies face “struggle for existence” challenges and will eventually die out if they fail to appropriately evolve in such environments (Tian and Slocum, 2015). This highlights the important fact that not all MNCs are enjoying a period of prosperity or continual success. For instance, while some MNCs have accomplished extensive evolution and emerged as powerful organizations in the global arena (e.g., Samsung in Korea, Tata in India, Haier in China, and Cemex in Mexico), some other MNCs also have been losing their edge in the race representing the survival of the fittest and also falling behind their competition (i.e., natural selection of MNCs). Recent examples of such behavior could perhaps be found in some Japanese MNCs, as no one can deny that Japan was a quondam economic giant and that Japanese MNCs were dominant market leaders in both the domestic and international markets.
However, unlike other MNCs achieving successful evolution, certain Japanese firms and several organizations both from developed and emerging economies have autonomously evolved on an isolated island at a long distance from the continent, which can be denoted as the Galápagos syndrome triggering MNC decline in the global markets. This syndrome is used by the media to indicate the geographically isolated development of an otherwise globally available product. Some MNCs have developed a number of specialized products (e.g., Japanese 3G mobile phones and smartphones, NTT DoCoMo's i-mode, Nintendo video game consoles, etc.), but have been unsuccessful abroad (Akimoto, 2011; Isawa, 2016; Tabuchi, 2009). In addition, the previous difficulties experienced by Korean automakers (e.g., Daewoo and Ssangyong) were also caused by the fact that they kept producing outdated models or models that were only attractive to consumers in their home market. Thus, this phenomenon raises an interesting research question: why were these MNCs in danger of dying out?
The current theories and extensive literature in the international business field have been mainly focusing on answering research questions on such issues as (a) why MNCs choose foreign direct investment (Buckley and Casson, 1976; Fan et al., 2016; Park, Lee and Hong, 2011), (b) how MNCs maximize their earnings in host countries in spite of the presence of the liabilities of foreignness (Buckley and Casson, 1999; Miller, Lavie and Delios, 2016), and (c) which conditions influence MNCs’ choice of entry modes (Dunning, 1993, 2000; Majocchi, Mayrhofer and Camps, 2013; Williams, Lukoianova and Martinez, 2017) and, consequently, we know little about why some MNCs do not evolve successfully while others avoid retrogression by natural selection. In addition, discussions dealing with the heterogeneity of firms declining in the global markets are still in its infancy. There are different patterns of heterogeneity of firms' declines in the global markets according to their status of various development levels, though. Therefore, the aim of this Special Issue is to bring together the theoretical and empirical advancements by focusing on discussions on the evolution of MNCs versus their decline in international business territories. Thus, we welcome conceptual and empirical papers using quantitative, qualitative and mixed-method approaches on any level and across levels of analysis.
• Why do some MNCs become victims of intensified globalization and what factors influence this?
• Is there a particular relationship between non-innovative corporate behaviors and natural selection in competition?
• Why does a good corporate image die out over the course of time and what are the key factors affecting the maintenance of a MNC’s image?
• What are the primary conditions that enhance the survival of emerging market MNCs and minimize the negative organizational outcomes resulting from outward FDI from emerging countries?
• Do emerging market MNCs and developed country MNCs follow different paths of evolution?
• What internal and external factors enable MNCs to overcome the crisis of natural selection?
• How do MNCs from the least developed countries transform their environmental capability in order to internationally link it to organizational advantages under the institutional void?
• Do consumers, employees and investors respond differently to MNCs’ products and services in different countries?
• Does the country of origin affect the evolution of MNCs versus their decline?
We encourage scholars to use other disciplines and regard novel theoretical, methodological and empirical touches in order to understand, measure, and analyze the above topics. Intersectoral and interdisciplinary studies, associated with these topics, will enhance major theoretical and empirical contributions in various combinations of multiple sectors and disciplines. These research ideas are not exhaustive, and other topics within a main category of this special issue are welcome.
All authors who are invited to revise and resubmit their manuscripts are expected to present their papers at a MD Special Issue Workshop at Chongqing Technology & Business University (CTBU), China (July 26, 2019), but their participation is not compulsory. CTBU will provide individual accommodations for two nights to one author of every paper invited to revise and resubmit after the first round, and cover all local workshop expenses (food, drinks, etc.). During the workshop, the guest editors for the special issue and MD editorial board members will give constructive feedback to paper presentations to improve the quality of their papers to enlarge the effect of the special issue.
Akimoto, A. (2011), In the battle with smart phones is i-mode dead? The Japan Times, April 20, 2011. Available at https://www.japantimes.co.jp/life/2011/04/20/digital/in-the-battle-with-smart-phones-is-i-mode-dead/#.WkSQMFVl_3g [accessed on December 28, 2017].
Buckley, P.J. and Casson, M. (1999), “A theory of international operations”, In: Buckley, P. J. and Ghauri, P. N. (eds.), The internationalization of the firm. London: International Thomson Business Press, 55-60.
Buckley, P.J. and Casson, M. (1976), The future of the multinational enterprise. London: Macmillan.
Dunning, J.H. (2000), “The eclectic paradigm as an envelope for economic and business theories of MNE activity”, International Business Review, Vol. 9, pp. 163-190.
Dunning, J.H. (1993), Multinational enterprises and the global economy. Wokingham: Addison-Wesley.
Fan, D., Cui, L., Li, Y. and Zhu, C.J. (2016), “Localized learning by emerging multinational enterprises in developed host countries: A fuzzy-set analysis of Chinese foreign direct investment in Australia”, International Business Review, Vol. 25 No. 1, pp. 187-203.
Isawa, M. (2016), Nintendo keeps paying, despite less playing. Nikkei Asian Review, March 4, 2016. Available at https://asia.nikkei.com/Business/Companies/Nintendo-keeps-paying-despite-less-playing [accessed on December 28, 2017].
Miller, S. R., Lavie, D. and Delios, A. (2016), “International intensity, diversity, and distance: Unpacking the internationalization–performance relationship”, International Business Review, Vol. 25 No. 4, pp. 907-920.
Park, Y.-R., Lee, J.Y. and Hong, S. (2011), “Effects of international entry-order strategies on foreign subsidiary exit: The case of Korean chaebols”, Management Decision, Vol. 49 No. 9, pp. 1471-1488.
Tabuchi, H. (2009), Why Japan’s cellphones haven’t gone global. The New York Times, July 19, 2009. Available at http://www.nytimes.com/2009/07/20/technology/20cell.html [accessed on December 28, 2017].
Tian, X. and Slocum, J. W. (2015), “The decline of global market leaders”, Journal of World Business, Vol. 50, pp. 15-25.
Williams, C., Lukoianova, T. and Martinez, C.A. (2017), “The moderating effect of bilateral investment treaty stringency on the relationship between political instability and subsidiary ownership choice”, International Business Review, Vol. 26 No. 1, pp. 1-11.
|
0.999554 |
Looking for a new winter hobby to help pass the colder months? Knitting is a fun pastime that can be very fun and rewarding. Although projects such as sweaters and blankets can seem quite daunting at first, knitting is actually a fairly easy hobby to begin. Here are some suggestions for those interested in learning how to knit.
If you’re someone who needs more guidance, local craft stores often times offer very reasonably priced knitting classes for a variety of levels. Some community centers such as YMCAs even have beginner knitting groups to join for little or no cost. If you’re someone who does better with online instruction, check out craftsy.com or even YouTube for beginner tutorials.
Choose a small project to begin. Even if you feel confident enough to begin with a larger project, practicing on a smaller scale is always helpful. Consider testing out your new yarn by simply knitting a small square first to get the hang of the new yarn you’re using. Hold on to all these little squares and overtime, knit them all together to create a patchwork inspired blanket.
Do your research before beginning a project. Especially for items like scarves and blankets, be sure to research the proper width for the type of yarn and needles you’re using. You can always make a scarf or blanket longer, but the width has to be decided on during the first row, so make sure you get it right from the start!
If you’re worried about lumps and bumps in the stitching, choose a variegated yarn that has multiple different colors in it. Not only does this help to cover up a messed up stitch or two, but it’s also self-striping, making the stitches look more complicated than they are.
|
0.999063 |
Microkernel-based systems divide the operating system functionality into individual and isolated components. The system components are subject to applicationclass protection and isolation. This structuring method has a number of benefits, such as fault isolation between system components, safe extensibility, co-existence of different policies, and isolation between mutually distrusting components. However, such strict isolation limits the information flow between subsystems including information that is essential for performance and scalability in multiprocessor systems.
Semantically richer kernel abstractions scale at the cost of generality and minimality two desired properties of a microkernel. I propose an architecture that allows for dynamic adjustment of scalability-relevant parameters in a general, flexible, and safe manner. I introduce isolation boundaries for microkernel resources and the system processors. The boundaries are controlled at user-level. Operating system components and applications can transform their semantic information into three basic parameters relevant for scalability: the involved processors (depending on their relation and interconnect), degree of concurrency, and groups of resources.
5.) efficiently track and communicate resource usage in a component-based operating system.
Based on my architecture, it is possible to efficiently co-host multiple isolated, independent, and loosely coupled systems on larger multiprocessor systems, and also to fine-tune individual subsystems of a system that have different and potentially conflicting scalability and performance requirements.
I describe the application of my techniques to a real system: L4Ka::Pistachio, the latest variant of an L4 microkernel. L4Ka::Pistachio is used in a variety of research and industry projects. Introducing a new dimension to a system - parallelism of multiprocessors - naturally introduces new complexity and overheads. I evaluate my solutions by comparing with the most challenging competitor: the uniprocessor variant of the very same and highly optimized microkernel.
|
0.991981 |
Graduate from spaghetti and meatballs to this pasta dish.
1.) Sauté carrots and celery in a Dutch oven over high heat with 1 tablespoon butter and a drizzle of olive oil. After a minute, add onion. Stir. Cook down until vegetables are caramelized.
2.) Add beef to pot, breaking it up with your spoon. Stir. Cook for 5 minutes. Add white wine to deglaze.
3.) Lower heat, add milk and stir. Freshly grate nutmeg into pot. 4.) Add the tomato paste and stir. Add can of tomatoes, crushing the tomatoes as you stir. Add water and bring to simmer. Cover and slow cook for five hours.
5.) One hour before serving, remove top, bring heat up a bit to evaporate some of the liquids.
6.) When ready to serve, add in cooked pasta of your choice. Add salt to taste. Grate Parmigiano-Reggiano over the top, sprinkle with chopped parsley, and serve.
|
0.971293 |
Reduce Stress / Anxiety – £120 per session or £ 300 for 3 sessions.
Prices Quit smoking is £250 – Most people quit after one session, however if a follow up is needed this is included in the price.
Those suffering from medical conditions such as epilepsy or those with psychiatric conditions (for example, psychosis, personality disorder, schizophrenia, bipolar, hysteria) should not engage in hypnotherapy. People with life-threatening diseases should consult their GP/doctor before using hypnotherapy.
Can you give me some examples of how hypnotherapy works?
People who are overweight often consciously know that they need to control their weight and yet they still feel drawn to eating unhealthily or eating too much. This is because their unconscious mind thinks differently, has learned this behaviour and keeps them doing what they’ve always done. Hypnotherapy deals with the unconscious mind and makes suggestions for healthy, lasting change.
Similarly, smokers often consciously want to stop smoking because of the health risks etc, yet they keep on smoking and just can’t seem to stop. This is because their more powerful, unconscious mind is in charge and continuing their old habits. Hypnotherapy tackles this unconscious behaviour.
People with phobias can often consciously think that they should not be afraid, yet whenever they are faced with the object of their phobia their unconscious fear kicks in and they react. Hypnotherapy deals with the causes of the fear as well as the symptoms and gradually builds confidence.
Hypnotherapy can also help with medical conditions where the illness or pain often has a psychological or emotional aspect to it. The mind and body are inextricably linked and our thoughts, behaviours, memories and emotions can have a direct effect on our body. Dealing with stress and anxiety can effectively help with healing and can reduce pain. Hypnotherapy is complementary to medicine and all medical conditions should be checked by a qualified medical practitioner before coming to a hypnotherapist. Do not stop taking any medication without consultation with your GP and do not self-diagnose.
|
0.999998 |
How do I choose the best breast augmentation surgeon?
While breast augmentation surgery is one of the most popular cosmetic procedures worldwide, it is prudent to choose a doctor you truly connect with, as honest, straightforward communication is the best guarantee you will achieve the look you want. For almost a decade, Dr. Lahijani has consistently performed thousands of successful breast surgeries in Beverly Hills, Valencia and Palmdale. He is chosen for his considerate, approachable demeanor, listening ability and commitment to unique, patient-centric outcomes rather than a pedestrian one-implant-fits-all mentality.
Breast implants - come in a wide variety of shapes, textures and sizes and are customized to your frame, lifestyle and wishes. Choosing the right implant can be explored in depth during your consultation with the doctor.
Saline - Saline implants are typically filled after they are inserted in the chest capsules, affording the surgeon smaller incisions. Saline implants may be more appropriate in cases where breast symmetry is being corrected, and where an extra large breast size is requested. Although they tend to feel less authentic, saline implants are a more cost-effective option for many clients.
Silicone - Less rippling, wrinkling and migration are some advantages of the silicone implant, along with a more realistic feel and appearance. The negatives include their higher price tag, larger incisions, and a higher incidence of complications.
Gummy Bear - With their tear-shaped design and authentic feel, Gummy Bear implants are quite natural and are considered a leap forward in breast implant engineering. Their firm silicone gel-based interiors are "form-stable" and have a low incidence of rupturing.
Inframammary - In this case, the incision is placed in the crease where the breast meets the chest, also known as the inframammary fold. This is typically the best choice when inserting silicone gel breast implants.
Periareolar - Here the incision is hidden in the darkly pigmented border of the areola complex, resulting in less visible scarring. This technique may favor surgery that includes a breast lift. A periareolar incision affords the surgeon the most control and precision in placing the implant, but may present a risk to women who wish to breastfeed in the future.
Transaxillary - During this technique, the breast implants are inserted through inconspicuous incisions placed at the armpits. An endoscope may be used to increase the surgeon's accuracy.
Transumbilical (TUBA) - With TUBA, a small incision at the navel is used to create access to the breast area through a tunnel, often navigated with an endoscope. This process ensures almost no scarring; however, in a few instances it can present difficulties with breast symmetry.
Implants can be placed either beneath the pec muscles (submuscular), or above the pecs and beneath the breast glands (subglandular). Most doctors agree that a submuscular placement tends to offer certain benefits, including a reduction in capsular contracture rates, better support for the implant, and a more natural implant appearance. However, the placement chosen by the doctor will depend on a number of factors, all of which can be discussed in detail during your private consultation.
With any major surgery, minor discomfort, swelling, bruising and fatigue is to be expected. Dr. Lahijani will furnish you with pain medications, ointments, creams and a support bra, and will provide a comprehensive written recovery plan detailing your care. For the first few weeks, you are encouraged to abstain from household chores and activities that involve pressure on the chest, bending or lifting. Although it can be hard to slow down in our accomplishment-driven culture, it is best to take the appropriate amount of time to heal rather than compromise the results of your surgery.
|
0.978043 |
The greatest asset brought by the Zionists settling Palestine was their organizational acumen, which allowed for the institutionalization of the movement despite deep ideological cleavages. The WZO established an executive office in Palestine, thus implementing the language of the Mandate prescribing such an agency. In August 1929, the formalized Jewish Agency was established with a council, administrative committee, and executive. Each of these bodies consisted of an equal number of Zionist and nominally non-Zionist Jews. The president of the WZO was, however, ex officio president of the agency. Thereafter, the WZO continued to conduct external diplomatic, informational, and cultural activities, and the operational Jewish Agency took over fundraising, activities in Palestine, and local relations with the British Mandate Authority (administered by the colonial secretary). In time, the World Zionist Organization and the Jewish Agency became two different names for virtually the same organization.
Other landmark developments by the WZO and the Jewish Agency under the Mandate included creation of the Asefat Hanivharim (Elected Assembly) and the Vaad Leumi (National Council) in 1920 to promote religious, educational, and welfare services; establishment of the chief rabbinate in 1921; centralized Zionist control of the Hebrew school system in 1919, opening of the Technion (Israel Institute of Technology) in Haifa in 1924, and dedication of the Hebrew University of Jerusalem in 1925; and continued acquisition of land--largely via purchases by the Jewish National Fund--increasing from 60,120 hectares in 1922 to about 155,140 hectares in 1939, and the concurrent growth of Jewish urban and village centers.
The architect of the centralized organizational structure that dominated the Yishuv throughout the Mandate and afterward was Ben- Gurion. To achieve a centralized Jewish economic infrastructure in Palestine, he set out to form a large-scale organized Jewish labor movement including both urban and agricultural laborers. In 1919 he founded the first united Labor Zionist party, Ahdut HaAvodah (Unity of Labor), which included Poalei Tziyyon and affiliated socialist groups. This achievement was followed in 1920 by the formation of the Histadrut, or HaHistadrut HaKlalit shel HaOvdim B'Eretz Yisrael (General Federation of Laborers in the Land of Israel).
The Histadrut was the linchpin of Ben-Gurion's reorganization of the Yishuv. He designed the Histadrut to form a tightly controlled autonomous Jewish economic state within the Palestinian economy. It functioned as much more than a traditional labor union, providing the Yishuv with social services and security, setting up training centers, helping absorb new immigrants, and instructing them in Hebrew. Its membership was all-inclusive: any Jewish laborer was entitled to belong and to obtain shares in the organization's assets. It established a general fund supported by workers' dues that provided all members with social services previously provided by individual political parties. The Histadrut also set up Hevrat HaOvdim (Society of Workers) to fund and manage large-scale agricultural and industrial enterprises. Within a year of its establishment in 1921, Hevrat HaOvdim had set up Tenuvah, the agriculture marketing cooperative; Bank HaPoalim, the workers' bank; and Soleh Boneh, the construction firm. Originally established by Ahdut HaAvodah after the Arab riots in 1920, the Haganah under the Histadrut rapidly became the major Jewish defense force.
From the beginning, Ben-Gurion and Ahdut HaAvodah dominated the Histadrut and through it the Yishuv. As secretary general of the Histadrut, Ben-Gurion oversaw the development of the Jewish economy and defense forces in the Yishuv. This centralized control enabled the Yishuv to endure both severe economic hardship and frequent skirmishes with the Arabs and British in the late 1920s. The resilience of the Histadrut in the face of economic depression enabled Ben-Gurion to consolidate his control over the Yishuv. In 1929 many private entrepreneurs were forced to look to Ahdut HaAvodah to pull them through hard economic times. In 1930 Ahdut HaAvodah was powerful enough to absorb its old ideological rival, HaPoel HaTzair. They merged to form Mifleget Poalei Eretz Yisrael (better known by its acronym Mapai), which would dominate political life of the State of Israel for the next two generations.
The hegemony of Ben-Gurion's Labor Zionism in the Yishuv did not go unchallenged. The other major contenders for power were the Revisionist Zionists led by Vladimir Jabotinsky, who espoused a more liberal economic structure and a more zealous defense policy than the Labor movement. Jabotinsky, who had become a hero to the Yishuv because of his role in the defense of the Jews of Jerusalem during the riots of April 1920, believed that there was an inherent conflict between Zionist objectives and the aspirations of Palestinian Arabs. He called for the establishment of a strong Jewish military force capable of compelling the Arabs to accept Zionist claims to Palestine. Jabotinsky also thought that Ben- Gurion's focus on building a socialist Jewish economy in Palestine needlessly diverted the Zionist movement from its true goal: the establishment of a Jewish state in Palestine.
The appeal of Revisionist Zionism grew between 1924 and 1930 as a result of an influx of Polish immigrants and the escalating conflict with the Arabs. In the mid-1920s, a political and economic crisis in Poland and the Johnson-Lodge Immigration Act passed by the United States Congress, which curtailed mass immigration to America, spurred Polish-Jewish immigration to Israel. Between 1924 and 1931, approximately 80,000 Jews arrived in Palestine from Central Europe. The Fourth Aliyah, as it was called, differed from previous waves of Jewish immigration. The new Polish immigrants, unlike the Bolshevik-minded immigrants of the Second Aliyah, were primarily petty merchants and small-time industrialists with their own capital to invest. Not attracted to the Labor Party's collective settlements, they migrated to the cities where they established the first semblance of an industrialized urban Jewish economy in Palestine. Within five years, the Jewish populations of Jerusalem and Haifa doubled, and the city of Tel Aviv emerged. These new immigrants disdained the socialism of the Histadrut and increasingly identified with the laissez-faire economics espoused by Jabotinsky.
Another reason for Jabotinsky's increasing appeal was the escalation of Jewish-Arab violence. Jabotinsky's belief in the inevitable conflict between Jews and Arabs and his call for the establishment of an "iron wall" that would force the Arabs to accept Zionism were vindicated in the minds of many Jews after a confrontation over Jewish access to the Wailing Wall in August 1929 turned into a violent Arab attack on Jews in Hebron and Jerusalem. By the time the fighting ended, 133 Jews had been killed and 339 wounded. The causes of the disturbances were varied: an inter- Palestinian power struggle, a significant cutback in British military presence in Palestine, and a more conciliatory posture by the new British authorities toward the Arab position.
The inability of the Haganah to protect Jewish civilians during the 1929 riots led Jewish Polish immigrants who supported Jabotinsky to break away from the Labor-dominated Haganah. They were members of Betar, an activist Zionist movement founded in 1923 in Riga, Latvia, under the influence of Jabotinsky. The first Betar congress met at Danzig in 1931 and elected Jabotinsky as its leader. In 1937, a group of Haganah members left the organization in protest against its "defensive" orientation and joined forces with Betar to set up a new and more militant armed underground organization, known as the Irgun. The formal name of the Irgun was the Irgun Zvai Leumi (National Military Organization), sometimes also called by the acronym, Etzel, from the initial letters of the Hebrew name. The more extreme terrorist group, known to the British as the Stern Gang, split off from the Irgun in 1939. The Stern Gang was formally known as the Lohamei Herut Israel (Fighters for Israel's Freedom), sometimes identified by the acronym Lehi. Betar (which later formed a nucleus for Herut) and Irgun rejected the Histadrut/Haganah doctrine of havlaga (self-restraint) and favored retaliation.
Although the 1929 riots intensified the Labor-Revisionist split over the tactics necessary to attain Jewish sovereignty in Palestine, their respective visions of the indigenous Arab population coalesced. Ben-Gurion, like Jabotinsky, came to realize that the conflict between Arab and Jewish nationalisms was irreconcilable and therefore that the Yishuv needed to prepare for an eventual military confrontation with the Arabs. He differed with Jabotinsky, however, on the need to make tactical compromises in the short term to attain Jewish statehood at a more propitious time. Whereas Jabotinsky adamantly put forth maximalist demands, such as the immediate proclamation of statehood in all of historic Palestine--on both banks of the Jordan River--Ben-Gurion operated within the confines of the Mandate. He understood better than Jabotinsky that timing was the key to the Zionist enterprise in Palestine. The Yishuv in the 1930s lacked the necessary military or economic power to carry out Jabotinsky's vision in the face of Arab and British opposition.
Another development resulting from the 1929 riots was the growing animosity between the British Mandate Authority and the Yishuv. The inactivity of the British while Arab bands were attacking Jewish settlers strengthened Zionist anti-British forces. Following the riots, the British set up the Shaw Commission to determine the cause of the disturbances. The commission report, dated March 30, 1930, refrained from blaming either community but focused on Arab apprehensions about Jewish labor practices and land purchases. The commission's allegations were investigated by an agrarian expert, Sir John Hope Simpson, who concluded that about 30 percent of the Arab population was already landless and that the amount of land remaining in Arab hands would be insufficient to divide among their offspring. This led to the Passfield White Paper (October 1930), which recommended that Jewish immigration be stopped if it prevented Arabs from obtaining employment and that Jewish land purchases be curtailed. Although the Passfield White Paper was publicly repudiated by Prime Minister Ramsay MacDonald in 1931, it served to alienate further the Yishuv from the British.
The year 1929 also saw the beginning of a severe economic crisis in Germany that launched the rise of Adolf Hitler. Although both Germany and Austria had long histories of anti-Semitism, the genocide policies preached by Hitler were unprecedented. When in January 1930 he became chancellor of the Reich, a massive wave of mostly German Jewish immigration to Palestine ensued. Recorded Jewish immigration was 37,000 in 1933, 45,000 in 1934, and an all- time record for the Yishu of 61,000 in 1935. In addition, the British estimated that a total of 40,000 Jews had entered Palestine without legal certificates during the period from 1920 to 1939. Between 1929, the year of the Wailing Wall disturbances, and 1936, the year the Palestinian Revolt began, the Jewish population of Palestine increased from 170,000 or 17 percent of the population, to 400,000, or approximately 31 percent of the total. The immigration of thousands of German Jews accelerated the pace of industrialization and made the concept of a Jewish state in Palestine a more formidable reality.
|
0.954629 |
Genome assemblies across all domains of life are being produced routinely. Initial analysis of a new genome usually includes annotation and comparative genomics. Synteny provides a framework in which conservation of homologous genes and gene order is identified between genomes of different species. The availability of human and mouse genomes paved the way for algorithm development in large-scale synteny mapping, which eventually became an integral part of comparative genomics. Synteny analysis is regularly performed on assembled sequences that are fragmented, neglecting the fact that most methods were developed using complete genomes. It is unknown to what extent draft assemblies lead to errors in such analysis.
We fragmented genome assemblies of model nematodes to various extents and conducted synteny identification and downstream analysis. We first show that synteny between species can be underestimated up to 40% and find disagreements between popular tools that infer synteny blocks. This inconsistency and further demonstration of erroneous gene ontology enrichment tests raise questions about the robustness of previous synteny analysis when gold standard genome sequences remain limited. In addition, assembly scaffolding using a reference guided approach with a closely related species may result in chimeric scaffolds with inflated assembly metrics if a true evolutionary relationship was overlooked. Annotation quality, however, has minimal effect on synteny if the assembled genome is highly contiguous.
Our results show that a minimum N50 of 1 Mb is required for robust downstream synteny analysis, which emphasizes the importance of gold standard genomes to the science community, and should be achieved given the current progress in sequencing technology.
The essence of comparative genomics lies in how we compare genomes to reveal species’ evolutionary relationships. Advances in sequencing technologies have enabled the exploration of many new genomes across all domains of life [1–8]. Unfortunately, in most instances correctly aligning even just two genomes at base-pair resolution can be challenging. A genome usually contains millions or billions of nucleotides and is different from the genome of a closely related species as a result of evolutionary processes such as sequence mutations, chromosomal rearrangements, and gene family expansion or loss. There are high computational costs when trying to align and assign multiple copies of DNA that are identical to each other, such as segmental duplications and transposable elements [9–12]. In addition, it has been shown that popular alignment methods disagree with each other .
An alternative and arguably more practical approach relies on the identification of synteny blocks [13, 14], first described as homologous genetic loci that co-occur on the same chromosome [15, 16]. Synteny blocks are more formally defined as regions of chromosomes between genomes that share a common order of homologous genes derived from a common ancestor [17, 18]. Alternative names such as conserved synteny or collinearity have been used interchangeably [13, 19–22]. Comparisons of genome synteny between and within species have provided an opportunity to study evolutionary processes that lead to diversity of chromosome number and structure in many lineages across the tree of life [23, 24]; early discoveries using such approaches include chromosomal conserved regions in nematodes and yeast [25–27], evolutionary history and phenotypic traits of extremely conserved Hox gene clusters across animals and MADS-box gene family in plants [28, 29], and karyotype evolution in mammals and plants . Analysis of synteny in closely related species is now the norm for every new published genome. However, assembly quality comes into question as it has been demonstrated to affect subsequent analysis such as annotation or rate of lateral transfer [32, 33].
In general, synteny identification is a filtering and organizing process of all local similarities between genome sequences into a coherent global picture . The most intuitive way to identify synteny would be to establish from selective genome alignments [35, 36], but levels of nucleotide divergence between species may make such methodologies challenging. Instead, many tools use orthologous relationships between protein-coding genes as anchors to position statistically significant local alignments. Approaches include the use of a directed acyclic graph [37, 38], a gene homology matrix (GHM) , and an algorithm using reciprocal best hits (RBH) . All of these methods generally agree on long synteny blocks, but have differences in handling local shuffles as well as in the resolution of synteny identification [34, 40]. Better resolution of micro-rearrangements in synteny patterns has been shown when using an improved draft genome of Caenorhabditis briggsae versus Caenorhabditis elegans [26, 41]. Hence, synteny analysis depends highly on assembly quality. For example, missing sequences in an assembly may lead to missing gene annotations and subsequently missing orthologous relationships . With respect to assembly contiguation, it still remains unclear whether assembly fragmentation affects homology assignments for identifying anchors, sequence arrangements for examining order and gaps, or other factors in synteny analysis.
In this study, we focus on how assembly quality affects the identification of genome synteny. We investigate the correlation between error rate (%) in detecting synteny and the level of assembly contiguation using five popular software packages (DAGchainer , i-ADHoRe , MCScanX , SynChro , and Satsuma ) on four nematodes: Caenorhabditis elegans, Caenorhabditis briggsae, Strongyloides ratti, and Strongyloides stercoralis. We also carried out and explored the effects of three scenarios associated with synteny analysis: gene ontology (GO) enrichment, reference-guided assembly scaffolding, and annotation quality. Our findings show that assembly quality does matter in synteny analysis, and fragmented assemblies ultimately lead to erroneous findings. In addition, the true evolutionary relationship may be lost if a fragmented assembly is scaffolded using a reference-guided approach. Our main aim here is to determine a minimum contiguation of assembly for subsequent synteny analysis to be trustworthy, which should be possible using the latest sequencing technologies .
We begin with some terminology used throughout this study. As shown in Fig. 1, a synteny block is defined as a region of genome sequence spanning a number of genes that are orthologous and co-arranged with another genome. Orientation is not considered (Fig. 1, block a and b). The minimum number of co-arranged orthologs said to be the anchors can be set and vary between different studies. A higher number of minimum anchors may result in fewer false positives, but also a more conservative estimate of synteny blocks (Additional file 1: Figure S1). In some programs, some degrees of gaps—defined as the number of skipped genes or the length of unaligned nucleotides—are tolerated (Fig. 1, block c). A score is usually calculated, and synteny breaks are regions that do not satisfy a certain score threshold. Possible scenarios that lead to synteny breaks include a lack of anchors in the first place (Fig. 1, break a), a break in anchor order (Fig. 1, break b), or gaps (Fig. 1, break c). Genome insertions and duplications may cause oversized gaps. An example is break c in Fig. 1, which is due to either a large unaligned region (Fig. 1, P1-Q1 and Q2-R2) or a high number of skipped genes (Fig. 1, S2-T2-X2 within Q2-R2). Alternatively, an inversion (Fig. 1, orthologs K and L), deletion, or transposition (Fig. 1, ortholog X) may cause a loss of anchors (Fig. 1, gene D in species 1) or a break in the arrangement of anchors. Typically, synteny coverage is commonly used as a summary metric obtained by taking the summed length of blocks and dividing it by genome size. Note that synteny coverage is asymmetrical between reference and query genomes, as demonstrated by the difference of block length in block c (Fig. 1).
Several programs have been developed to identify synteny blocks, and they can produce quite different results. Our first aim is to systematically assess the synteny identification of four popular anchor-based tools (DAGchainer, i-ADHoRe, MCScanX, and SynChro) and one based solely on nucleotide alignments (Satsuma). As whole-genome alignment between bacteria, which have relatively small genomes, is becoming common practice, we conduct this study on species with larger genome sizes. We chose Caenorhabditis elegans, a model eukaryote with a 100 megabase (Mb) reference genome. Our first question was whether these programs would produce 100% synteny coverage if the C. elegans genome was compared to itself. As expected, all tools achieved almost 100% synteny coverage, with the exception of Satsuma, which reached 96% (Fig. 2).
We then fragmented the C. elegans genome into fixed intervals of either 100 kb, 200 kb, 500 kb, or 1 Mb to evaluate the performance of different programs when using self-comparisons (see Methods). Synteny coverages of the fragmented assembly (query) against the original assembly (reference) were calculated for both query and reference sequences. Generally, synteny coverage decreased as the assembly was broken into smaller pieces. For example, an average decrease of 16% in synteny coverage was obtained using the assembly with a fixed fragment size of 100 kb (Additional file 2: Table S1). Sites of fragmentation are highly correlated with synteny breaks in anchor-based programs (Fig. 2, Additional file 3: Figure S2, and Additional file 4: Figure S3). One explanation is that the fragmented assembly introduced breaks within genes that resulted in loss of anchors (Fig. 1, break a), which can be common in real assemblies if introns contain hard-to-assemble sequences. Another explanation is that breaks between genes leave the number of anchors below the required minimum (Fig. 1, break a). In the case of Satsuma, synteny identification was not affected by assembly fragmentation (Fig. 2, Additional file 3: Figure S2, Additional file 4: Figure S3, and Additional file 2: Table S1).
More fragmented assemblies led to greater differences in synteny coverage predicted by the four anchor-based tools (Fig. 2, Additional file 3: Figure S2, and Additional file 4: Figure S3). We carefully examined regions where synteny was predicted by some programs but not others (Figs. 2 and 3). Figure 3 demonstrates such a case of disagreement. It is apparent that Satsuma is affected by neither genome fragmentation nor gene distribution (Fig. 3). Among the other programs, DAGchainer and i-ADHoRe produced the same results, whilst MCScanX detected less synteny and SynChro more (Fig. 3). MCScanX's gap scoring scheme imposed a stricter synteny definition, and more synteny blocks could be identified when the gap threshold was lowered (Fig. 3, situation a; also see Additional file 5: Figure S4). SynChro has its own dedicated orthology assignment approach that assigns more homologous anchors (Fig. 3, situation b). Additionally, SynChro uses only two genes as anchors to form a synteny block, while the default in the other three tools is at least five gene anchors (Fig. 3, situation b). Together, these results suggest that the default parameters set by different tools will lead to differences in synteny identification and need to be tuned before undertaking subsequent analysis.
To quantify the effect of assembly contiguation on synteny analysis, we used four nematode genomes: Caenorhabditis elegans, Caenorhabditis briggsae, Strongyloides ratti, and Strongyloides stercoralis. Nematodes are useful models in synteny analysis because 1) extensive chromosomal rearrangement is a hallmark of their genome evolution [7, 25, 26, 45, 46] and 2) their genome sequences are highly contiguous and assembled into chromosomes [7, 25, 26, 45]. These two genera were also chosen to investigate the intrinsic species effect, as they differ in gene density (Table 1). Our fragmentation approach was first used to break the C. elegans and S. ratti genomes into fixed sequence sizes of either 100 kb, 200 kb, 500 kb, or 1 Mb. Here, we define the error rate as the difference between the original synteny coverage (almost 100%) and that of the fragmented assembly. For each fixed length, the fragmentation was repeated 100 times for most programs so that assemblies were broken at different places, yielding a distribution; the fragmentation was only repeated 10 times for Satsuma due to its long run time. There is a positive correlation between error rate and level of fragmentation, except for synteny blocks detected by Satsuma (Fig. 4a and b; Additional file 2: Table S1). Amongst the four anchor-based tools, the median error rate can be as high as 18% for 100 kb fragmented assemblies (Additional file 2: Table S1), and the fragmentation has the largest effect on MCScanX and the smallest on SynChro (Fig. 4a and b; Additional file 2: Table S1).
A common analysis when reporting a new genome is inferring synteny against a closely related species. Hence, we reanalyzed synteny between C. elegans and C. briggsae, and between S. ratti and S. stercoralis. Satsuma found only 19% and 54% synteny in the C. elegans-C. briggsae and S. ratti-S. stercoralis comparisons, respectively, presumably because of the difficulty of establishing orthology at the nucleotide level (Additional file 2: Table S1). On average, the four anchor-based tools found 77% and 83% synteny between C. elegans-C. briggsae and S. ratti-S. stercoralis, respectively (Additional file 2: Table S1). In contrast to the general agreement on within-species self-comparisons, the anchor-based tools varied considerably on these inter-species (i.e., more diverged) comparisons (Additional file 6: Figure S5 and Additional file 2: Table S1). For example, in the C. elegans-C. briggsae comparisons, a difference of 25% in synteny coverage was found between the results of i-ADHoRe and SynChro (Additional file 6: Figure S5 and Additional file 2: Table S1), while this tool variation was, interestingly, much lower in S. ratti-S. stercoralis, at only a 9% difference (Additional file 2: Table S1). To increase the complexity, we fragmented C. briggsae and S. stercoralis into fixed sequence sizes using the same approach as before and compared them with the genomes of C. elegans and S. ratti, respectively. We found that MCScanX still underestimated synteny the most as the scaffold size decreased from 1 Mb to 100 kb. Strikingly, the median error rate was as high as 40% in the C. elegans-C. briggsae comparisons but only 12% in S. ratti-S. stercoralis (Fig. 4c and d). The error rate was also as high as 40%, and highly variable, in the comparison between S. ratti and 100 kb fragmented S. stercoralis using Satsuma (Fig. 4d). This observation suggests that higher gene density leads to more robust synteny detection in fragmented assemblies, as more anchors (genes) are available in a given sequence (Table 1 and Additional file 1: Figure S1).
To assess the robustness of our observations from the fragmentation approach, we sought to compare real assemblies of various contiguities. A recent publicly available C. elegans genome assembled from long-read data and three older versions of the C. briggsae assembly were retrieved (see Methods). An error rate of 1.1% in synteny identified by DAGchainer was obtained when comparing the recent C. elegans assembly (N50 of 1.6 Mb) against the reference, very similar to the 1.5% obtained for our 1 Mb fragmented assemblies (Fig. 5a). When we compared C. elegans against C. briggsae assemblies of different contiguation, error rates were negatively correlated with N50, regardless of how the C. briggsae assemblies were derived, i.e., simulated or published assemblies (Fig. 5a). The distribution of sequence lengths in the assemblies indicates that our fragmentation approach of fixed sizes may not capture the sequence lengths residing at either tail of the distribution (Fig. 5b). Short sequences were abundant in published assemblies, but occupied less than 2.5% of the assemblies (as specified to the left of N97.5 in Fig. 5b). Nevertheless, in terms of synteny coverage, these results suggest that our fragmentation approach is robust.
Functional enrichment of genes of interest is often performed after the establishment of orthology and synteny [26, 47–50]. Synteny breaks are caused by rearrangements, the insertion of novel genes, or the presence of genes that are too diverged to establish an orthologous relationship or have undergone expansion or loss. The functions of these genes are often of interest in comparative genomics analyses. To further estimate the effect of poor assembly contiguation on synteny analysis, GO enrichment was performed on genes present in C. briggsae synteny breaks identified by DAGchainer in C. elegans vs. 100 kb fragmented C. briggsae. This approach was repeated 100 times with assemblies fragmented randomly. We found that, in fragmented assemblies, GO terms originally absent from the top 100 ranks consistently appeared in the top 10 across the 100 replicates (Fig. 6 and Additional file 7: Table S2). Furthermore, the order of the original top 10 GO terms shifted in fragmented assemblies (Fig. 6 and Additional file 7: Table S2). In addition, the 10th-ranked original GO term failed to appear in the top 10 across the 100 replicates (Fig. 6 and Additional file 7: Table S2). These results suggest that an underestimation of synteny due to poor assembly contiguation can lead to a number of erroneous findings in subsequent analyses.
Although assembly quality plays an important role in synteny analysis, it has been demonstrated that a poorly contiguous assembly of one species can be scaffolded by establishing synteny with a more contiguous assembly of a closely related species [42, 51–53]. However, we hypothesized that the true synteny relationship between two species may be incorrectly inferred when an assembly of one species is scaffolded based on another closely related species, because doing so assumes the two genomes are syntenic. To investigate this, ALLMAPS was used to order and orient sequences of the 100 kb fragmented C. briggsae assembly based on C. elegans, as well as the 100 kb fragmented S. stercoralis assembly based on S. ratti. ALLMAPS reduced the number of sequences in both fragmented assemblies impressively, increasing the N50 from 100 kb to 19 Mb and 15 Mb in C. briggsae and S. stercoralis, respectively (Additional file 8: Table S3). Synteny coverage from these scaffolded assemblies was even higher than for the original fragmented 100 kb sequences in MCScanX, much lower in i-ADHoRe, and similar in DAGchainer, SynChro, and Satsuma (Fig. 7). However, despite synteny coverage being close to that of the original assemblies in DAGchainer and SynChro, further investigation of synteny block linkages in C. elegans-C. briggsae identified by DAGchainer revealed frequent false ordering and joining of contigs, resulting in false synteny blocks. Intra-chromosomal rearrangements are common between C. elegans and C. briggsae, but the scaffolded assemblies produced by ALLMAPS show a falsely collinear relationship between the chromosomes of the two species (Fig. 8). Hence, if the true evolutionary relationship is not known, simply performing reference-guided scaffolding will produce pseudo-high-quality assemblies that may have an ordering bias towards the reference genome and result in an incorrect assembly with inflated metrics.
Genome annotation is a bridging step between genome assembly and synteny analysis. An incomplete annotation may lead to a lack of homology information in synteny analysis. We compared synteny coverage in three datasets of C. elegans that differ in annotation quality: 1) the manually curated WormBase C. elegans annotation, 2) an optimized Augustus annotation with its built-in Caenorhabditis species training set, and 3) a semi-automated Augustus annotation with the BUSCO nematoda species training set. In all cases, we found that synteny coverage varied by at most 0.02% in the reference genome (Table 2). Hence, given a well-assembled genome and a proper species training set, the quality of annotation has little effect on synteny analysis compared to assembly quality.
Synteny analysis is a practical way to investigate the evolution of genome structure [28–31, 56]. In this study, we have revealed how genome assembly contiguity affects synteny analysis. We present a simple scenario of breaking an assembly into a more fragmented state, which only mimics part of the poor-assembly problem. Our genome fragmentation method randomly breaks sequences into same-sized pieces, which gives rise to a distribution of sequence lengths with peaks enriched in tiny sequences and fixed-size fragments (Fig. 5b). Real assemblies, which usually comprise a few large sequences and many more tiny sequences that are difficult to assemble because of their repetitive nature [25, 26], possess very different sequence length distributions (Fig. 5b). It is probable that we overestimated the error rate in regions that can be easily assembled and underestimated it in regions that will be more fragmented, but overall synteny coverage was comparable (Fig. 5a). Note that some of the sequences in real assemblies may contain gaps (scaffolds), which will result in more missing genes and a further underestimation of synteny. Our results are quite similar when a de novo PacBio C. elegans assembly and three older versions of the C. briggsae assembly were compared to the reference C. elegans genome (Fig. 5a). The use of long-read technology and advanced genome mapping approaches such as Hi-C are becoming the norm for de novo assembly projects. Assemblies with lower contiguation were not compared here, as we emphasize the responsibility of research groups to produce assemblies of the highest contiguity, made possible by long reads.
Synteny identification by different programs (i.e., DAGchainer, i-ADHoRe, MCScanX, SynChro, and Satsuma) across different species (C. elegans, C. briggsae, S. ratti, and S. stercoralis) has allowed us to examine the wide-ranging effects of assembly contiguation on downstream synteny analysis. Satsuma demonstrates fewer contiguation-dependent patterns, as its detection of synteny relies on nucleotide alignments (Fig. 2). However, we show that Satsuma was only robust when comparing species with very low divergence, for example between strains or assembly versions of the same species. Only ~19% of C. elegans and C. briggsae were identified as syntenic using Satsuma, and ~54% in S. ratti-S. stercoralis (Additional file 2: Table S1). Because the initial synteny coverage was low, any regions that failed to align in fragmented assemblies would constitute a larger proportion of the original synteny coverage and lead to a higher error rate (Fig. 4d).
The other four programs, which are anchor-based, tend to produce the same results when the original assembly is compared to itself, but differ extensively when assemblies become fragmented (Fig. 2). It is interesting to note that DAGchainer and MCScanX use the same scoring algorithm for determining synteny regions, except that DAGchainer uses the number of genes without orthology assignment as gaps while MCScanX uses unaligned nucleotides. When comparing closely related species, results from the four anchor-based programs fluctuate even without fragmentation in Caenorhabditis species, while the pattern remains similar to self-comparison in Strongyloides species (Fig. 4). Sensitivity in synteny identification drops sharply as the genome assembly becomes fragmented, and thus genome assembly contiguation must be considered when inferring synteny relationships between species. Our fragmentation approach only affects N50, which mostly leads to loss of anchors in synteny analysis. Other scenarios such as unknown sequences (NNNs) in the assembly, rearrangements causing a break in anchor ordering (Fig. 1, break b), or insertions/deletions (Fig. 1, break c) were not addressed and may lead to greater inaccuracies.
We have shown that genomic features such as gene density and the length of intergenic regions play an essential role in the process of synteny identification (Fig. 4; Table 1 and Additional file 2: Table S1). Synteny can be established more readily in species with higher gene density or shorter intergenic space, which relates to the initial setting of the minimum number of anchors needed for synteny identification (Fig. 1 and Additional file 1: Figure S1). The repetitiveness of paralogs is another factor affecting how anchors are chosen from homology assignments. For example, we found that synteny coverage is low along the chromosomal arm regions of C. elegans in both the self-comparison and the comparison with C. briggsae; these regions have been reported to harbor expansions of G protein-coupled receptor gene families (Fig. 2 and Additional file 6: Figure S5). Nevertheless, this case may result from a combination of repetitive paralogs and high gene density.
Interestingly, synteny comparison with assemblies scaffolded using ALLMAPS exhibited unexpected variation among programs. Unfortunately, we did not resolve the reason behind the sharp decrease in synteny coverage from i-ADHoRe (Fig. 7). Nevertheless, we have shown that it is dangerous to scaffold an assembly using a reference from a closely related species without a priori information about their synteny relationship. Subsequent synteny identification would be misleading if the same reference were compared again (Fig. 8). We also considered the interplay between genome annotation, assembly, and synteny identification. Although it may be intuitive to assume that lower annotation quality can lead to errors in synteny analysis, we demonstrated that such influence is minimal if an initial genome assembly of good contiguation is available (Table 2).
In conclusion, this study has demonstrated that a minimum quality of genome assembly is essential for synteny analysis. To keep the error rate below 5% in synteny identification, we suggest that an N50 of 200 kb or 1 Mb is required when the gene density of the species of interest is 290 or 200 genes per Mb, respectively (Table 1 and Additional file 1: Figure S1). This is a minimum standard, and a higher N50 may be required for species with lower gene density or highly expanded gene families.
Assemblies and annotations of C. elegans and C. briggsae (release WS255), and of S. ratti and S. stercoralis (release WBPS8), were obtained from WormBase (http://www.wormbase.org/). A new assembly of C. elegans using long reads was obtained from a Pacific Biosciences dataset (https://github.com/PacificBiosciences/DevNet/wiki/C.-elegans-data-set). Initially published assemblies of C. briggsae were obtained from the UCSC Genome Browser (http://hgdownload.cse.ucsc.edu/downloads.html#c_briggsae). The N50 values of the long-read C. elegans assembly and of the cb1 final, cb1 supercontig, and cb1 contig versions of the C. briggsae genome are ~1.6 Mb, ~1.3 Mb, 474 kb, and 41 kb, respectively. Gene models for these assemblies were annotated de novo using Augustus. Since some genes produce multiple alternative splicing isoforms and all of these isoforms represent one gene (locus), we used the longest isoform as a representative. Non-coding genes were also filtered out of our analysis. To simulate the fragmented state of assemblies, a script was written to randomly break scaffolds into fixed-size fragments (pseudocode shown in Fig. 9; scripts available at https://github.com/dangliu/Assembly-breaking.git). Sequences shorter than the fixed length were kept unchanged.
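The actual fragmentation script is the one in the linked repository; purely as a minimal sketch of the behavior described above (the random cut offset is an assumption made here so that repeated runs break the assembly at different places, as the replicate design requires), the logic could look like this in Python:

```python
import random

def fragment_scaffold(seq, size, rng):
    """Break one scaffold into pieces of at most `size` bp.

    Sequences shorter than the fixed length are kept unchanged, as in
    the Methods. A random first cut makes breakpoints differ between
    replicates (an assumption about how randomness enters the script).
    """
    if len(seq) <= size:
        return [seq]
    offset = rng.randrange(1, size)  # first cut lands inside the scaffold
    cuts = sorted(set([0, offset] +
                      list(range(offset + size, len(seq), size)) +
                      [len(seq)]))
    return [seq[a:b] for a, b in zip(cuts, cuts[1:])]

def fragment_assembly(scaffolds, size, seed=None):
    """scaffolds: dict of name -> sequence; returns (name, fragment) pairs."""
    rng = random.Random(seed)
    fragments = []
    for name, seq in scaffolds.items():
        for i, piece in enumerate(fragment_scaffold(seq, size, rng)):
            fragments.append((f"{name}_frag{i}", piece))
    return fragments
```

Running this with 100 different seeds reproduces the replicate design used for the error-rate distributions.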
The four anchor-based programs DAGchainer, i-ADHoRe (v3.0), MCScanX, and SynChro, and the nucleotide alignment-based Satsuma, were used to identify synteny blocks. Settings for each program were adjusted so that the results resembled each other on C. elegans vs. C. elegans, where synteny coverage should be close to 100%, with the exception of Satsuma, which was run with default settings. All of the anchor-based programs use gene orthology to find anchor points during synteny block detection. For DAGchainer, i-ADHoRe, and MCScanX, we obtained gene orthology from OrthoFinder (v0.2.8). SynChro has a bundled program called OPSCAN to scan for gene orthology. We used the following parameters for each program: DAGchainer (the accessory script filter_repetitive_matches.pl was run with option 5 before synteny identification, as recommended by the manual; options: -Z 12 -D 10 -A 5 -g 1), i-ADHoRe (only the top hit of each gene in the input blast file was used, as recommended; options: cluster_type = collinear, alignment_method = gg2, max_gaps_in_alignment = 10, tandem_gap = 5, gap_size = 10, cluster_gap = 10, q_value = 0.9, prob_cutoff = 0.001, anchor_points = 5, level_2_only = false), MCScanX (only the top 5 hits of each gene in the input blast file were used, as suggested; options: default), and SynChro (options: 0 6; 0 for all pairwise, and 6 for the delta of RBH genes). To calculate synteny coverage, the total length of merged synteny blocks along scaffolds was divided by the total assembly size.
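As a concrete illustration of that last step, the sketch below merges synteny-block intervals along scaffolds and divides the summed length by the total assembly size; the interval representation is an assumption here, since each tool reports blocks in its own format:

```python
def merge_intervals(intervals):
    """Merge overlapping (start, end) intervals on one scaffold."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # extend previous block
        else:
            merged.append([start, end])
    return merged

def synteny_coverage(blocks_by_scaffold, scaffold_lengths):
    """Summed length of merged blocks divided by total assembly size."""
    covered = sum(end - start
                  for blocks in blocks_by_scaffold.values()
                  for start, end in merge_intervals(blocks))
    return covered / sum(scaffold_lengths.values())

# Two overlapping 40 kb and 30 kb blocks merge into one 60 kb region
# on a 100 kb scaffold, giving a coverage of 0.6.
print(synteny_coverage({"chrI": [(0, 40_000), (30_000, 60_000)]},
                       {"chrI": 100_000}))
```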
Visualization of synteny linkages was done with R (v3.3.1) and Circos (v0.69-4). Gene ontology enrichment analysis was performed using the topGO (v1.0) package in R, focusing only on the Biological Process ontology (options: nodeSize = 3, algorithm = "weight01", statistic = "Fisher"). Gene ontology association files for C. elegans and C. briggsae were downloaded from WormBase WS255. Gene orthology was assigned by OrthoFinder. Assemblies were then scaffolded using ALLMAPS with a reference-guided approach. De novo annotations of C. elegans were predicted using either the manually trained species parameter set or the one from BUSCO (v2.0).
We thank John Wang for commenting on the manuscript.
D.L. and I.J.T. are funded by Academia Sinica.
Analysis: DL. Wrote the manuscript: DL, MH, and IJT. Conceived and directed the project: IJT. All authors read and approved the final manuscript.
The classification of the wren family has long been an area of dispute among scientists. The commonly held view over much of the twentieth century was that wrens were very closely related to dippers and more distantly related to mockingbirds and thrushes. Given the obvious physical similarities between wrens and dippers—the stumpy shape, usually short tail, and rounded wings—as well as the fact that both groups build domed nests with a side entrance-hole, this seemed eminently reasonable. However, more recent work based on DNA studies has radically changed this picture. Wrens appear to be most closely allied to the New World gnatcatchers and gnatwrens, and rather more distantly to the creepers (treecreepers) and nuthatches; dippers, by contrast, are evolutionarily closer to thrushes than to wrens.
The ancestral seat of the wrens is in the New World, but precisely where is a source of debate. At the time when most present-day passerine families were evolving, there was no continuous land bridge between North and South America. One theory postulates an ancestral center somewhere in present southwestern North America, followed by the invasion of South America. This viewpoint is by no means universally held, and an origin in northern South America has some advocates. The fossil record of wrens is extremely scanty and so recent as to be of little help in elucidating the geographic origins of the family. The greatest abundance of modern wrens is in southern Central America and northwestern South America.
The precise number of wren species is a fluid and debatable quantity. Although only four totally new species have been described to science since 1945 (the most recent in 1985), different authorities have wildly differing opinions on the taxonomy of the family. Thus the house wren group, occupying the Americas from Canada to Tierra del Fuego, has been variously treated as one species or as many as ten. Peters' checklist contains 60 species in 14 genera; more recently, Clements has 78 species in 16 genera, while the most recent studies suggest 83 species in 16 genera. Wren taxonomy is currently in a state of great flux, and both the total number of species and their allocation into genera will almost certainly change in the near future. The bizarre and aberrant Donacobius, a raucous and rambunctious inhabitant of South American marshes, is sometimes classified as a wren, a placement that varies among taxonomies.
This is a guest post by Praveen Ghanta, known on The Oil Drum as praveen. Praveen is an IT consultant in Atlanta, with degrees in economics and computer science. This was originally posted on Praveen's blog, at truecostblog.com.
Is peak oil real? The BP Statistical Review of World Energy provides the data needed to answer this question. Using the 2009 edition, I have compiled a list of all oil producing countries and regions in the world, along with the production status of each, ordered by year of peak production. BP groups minor producers into categories like "Other Africa", and "Other Middle East", and that notation is used here. All production numbers are quoted in thousands of barrels/day.
Only 14 out of 54 oil producing countries and regions in the world continue to increase production, while 30 are definitely past their production peak, and the remaining 10 appear to have flat or declining production. Put another way, peak oil is real in 61% of the oil producing world when weighted by production. Since 2008 capped a record run for oil prices, most countries and oil companies were trying all-out to increase production. While a handful of producers (think Iraq) might be limited by above-ground factors, the majority of producers simply couldn't do any better in 2008.
The evidence of the demise of the cheap oil era has become overwhelming. In the face of the highest oil prices on record, the great majority of the world's oil producers were incapable of taking advantage and producing more oil. Many nations including the US saw their oil production peak decades ago - there simply is no turning the clock back. This list shows that we are relying on a small number of countries to keep providing cheap oil. We need to move faster to alternatives and greater energy efficiency, before the last fourteen peak as well.
Russian Federation - Russia's oil production collapsed in the early 90's along with the Soviet Union, but despite a decade of renewed growth, Russia's own oil execs don't think the old peak can be surpassed.
India's production appeared to plateau in 1995, and has stayed within a steady range since. The EIA forecasts Indian oil production to remain flat or decline slightly in the near future.
Other Central & South America - The remaining countries of the Americas hit a production peak in 2003, though it's still too soon to know if this will be the final peak.
Other Africa - Oil production in much of Africa is potentially impacted by above-ground constraints, so it's definitely possible that production will rise here. It would rise from a low base of only 50,000 bpd, however, and may not have much impact on total world production.
Chad's oil production history is too short to definitively identify a peak in production, but the drop-off since 2005 has been dramatic.
Italy has been on a production plateau for over 10 years, and it's unlikely that a mature economy is significantly under-exploiting its resource potential.
Ecuador's production grew rapidly until 2004, but has leveled off and declined somewhat since then.
To be considered past-peak, a producer's current (2008) production has to be at least 10% less than its best year, and the best year must have occurred prior to 2005. Some countries' production has been artificially constrained by political and other non-geological considerations. But in some of these cases, it will be difficult to pass an old peak because decades of depletion have occurred since that peak. Iraq peaked in 1979, making it all the more difficult to pass that now.
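The past-peak criterion is mechanical enough to state as code. A minimal sketch, assuming a simple year-to-production mapping (the illustrative numbers below are not actual BP figures):

```python
def is_past_peak(production, current_year=2008):
    """production: dict of year -> output in thousand barrels/day.

    A producer is past-peak if the current year's output is at least
    10% below its best year AND that best year came before 2005.
    """
    peak_year = max(production, key=production.get)
    return (peak_year < 2005 and
            production[current_year] <= 0.9 * production[peak_year])

# Illustrative numbers only:
producer = {1970: 11_000, 2005: 7_000, 2008: 6_700}
print(is_past_peak(producer))  # True: pre-2005 peak, 2008 >10% below it
```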
Here are just a few quotes from Mr. Rogers that can help you be a better parent.
In his book, The World According to Mister Rogers: Important Things to Remember, Mr. Rogers teaches us that it's OK to feel sad, mad, or lonely. Those feelings don't make us weak, but actually showcase our courage and strength.
1. "Confronting our feelings and giving them appropriate expression always takes strength, not weakness."
3. “The greatest gift you ever give is your honest self."
4. "When I was a boy and I would see scary things in the news, my mother would say to me, 'Look for the helpers. You will always find people who are helping.'"
5. "I'm proud of you for the times you came in second, or third, or fourth, but what you did was the best you have ever done."
7. "Play is often talked about as if it were a relief from serious learning. But for children play is serious learning. Play is really the work of childhood."
8. "Often when you think you're at the end of something, you're at the beginning of something else."
9. "In the external scheme of things, shining moments are as brief as the twinkling of an eye, yet such twinklings are what eternity is made of — moments when we human beings can say, 'I love you,' 'I'm proud of you,' 'I forgive you,' 'I'm grateful for you.' That's what eternity is made of: invisible imperishable good stuff."
10. "Mutual caring relationships require kindness and patience, tolerance, optimism, joy in the other's achievements, confidence in oneself, and the ability to give without undue thought of gain."
11. "You can't really love someone else unless you really love yourself first."
12. "Whether we're a preschooler or a young teen, a graduating college senior or a retired person, we human beings all want to know that we're acceptable, that our being alive somehow makes a difference in the lives of others."
14. "There are three ways to ultimate success: The first way is to be kind. The second way is to be kind. The third way is to be kind."
15. "Being able to resolve conflicts peacefully is one of the greatest strengths we can give our children."
16. "You're much more than your job description or your age or your income or your output."
17. "Some days, doing the best we can may still fall short of what we would like to be able to do, but life isn't perfect on any front-and doing what we can with what we have is the most we should expect of ourselves or anyone else."
Egypt's Mohammed Morsi has made the first foreign visit of his presidency to Saudi Arabia, friend of the CIA and Mossad and enemy of Syria and Iran.
Morsi needs Saudi money to help Egypt out of its economic crisis. The Saudis have recently put $1 billion into Egypt's central bank.
Egypt's foreign minister, Nabil el-Araby, has made 'tentative overtures' to Iran.
But, Morsi's visit to Saudi Arabia is probably intended to send the signal that Egypt supports Saudi Arabia, rather than Iran and Syria.
According to Abdel Raouf El Reedy, a former Egyptian ambassador to the USA, "Egypt is a very important pillar in Saudi Arabian security. We used to say that there was a golden triangle between Egypt, Syria and Saudi Arabia. Now without Syria, Egypt is in an even more important place."
Morsi is linked to the Muslim Brotherhood, which has been used by the CIA and its friends to topple the more liberal and nationalist Arab states, such as Tunisia.
But the Muslim Brotherhood has not toppled the monarchies in Morocco and the Gulf, which are close allies of the Pentagon.
Does Morsi work for the CIA?
Is real-time search marketing the next big opportunity for digital marketing? This article by Scott Morrison looks at how to monetize real-time search and services such as Twitter.
SAN FRANCISCO (Dow Jones)--Micro-blogging phenomenon Twitter Inc. hasn't figured out how to make money, but that hasn't stopped Web giants Google Inc. (GOOG), Yahoo Inc. (YHOO) and Microsoft Corp. (MSFT) from racing to establish real-time search capabilities.
Real-time search helps Internet users find Web posts, including those from San Francisco's Twitter Inc., seconds after publication. The field has grown in importance amid the exploding popularity of services like Twitter, which lets users blast short messages rapid fire from computers and mobile phones.
The growth of Twitter has fueled expectations that real-time search could drive Internet advertising to new heights by allowing marketers to target relevant ads at consumers interested in breaking events, hot topics or their favorite celebrities. Some proponents argue real-time data and search could develop into a billion-dollar market.
"Every conceivable advertiser will be interested," said Ron Conway of SV Angel LLC, an early investor in Google and Twitter. "It will create a huge monetization opportunity."
Just how that opportunity will unfold remains unclear. There is no shortage of real-time search startups - such as OneRiot LLC and Scoopler Inc., not to mention Twitter itself - that are attempting to make sense of the growing universe of real-time user-generated data. It is telling, however, that even Twitter still hasn't said how it hopes to turn user updates, known as "tweets," into revenue.
Still, Google, Yahoo and Microsoft are pouring time and resources into the real-time Web. All three have had discussions with Twitter seeking some sort of search or advertising deal, according to people familiar with the situation. They also are looking beyond the micro-blogging leader.
The search giants note other sources of user-generated real-time data, such as Web recommendation engine Digg Inc. or micro-blogging services like Tumblr Inc. They also point to their own properties. Microsoft, for example, notes its Messenger and Spaces services are real-time data sources, while Yahoo highlights its Answers service and its experimental Brazilian micro-blogging property Meme. In January, Google pulled back from its Jaiku service, but recent blog rumors suggest it is poised to launch a service that indexes and ranks content from microblogging services, like Twitter.
Making sense of real-time data poses technological challenges for the big search companies. Their current algorithms return results heavily weighed towards older Web pages that have established credibility and attracted large audiences, an approach at odds with real-time search.
Twitter is like a fire hose spewing out a flood of tweets, many of which are seconds old and from obscure users with little track record. Tweets often contain acronyms, Web site address abbreviations and emoticons, all of which make it difficult for traditional search engines to evaluate their relevance - and filter out "tweet spam."
"Whoever figures out how to filter out spam best will win the real-time search battle," said Kevin Lee, chief executive of search engine marketer Didit.com LLC.
Prabhakar Raghavan, who runs Yahoo's search strategy, says the company is looking at how it might data mine tweets and other real-time feeds, a process that will help it evaluate and summarize content more efficiently. Yahoo is also looking at whether it might map tweets, allowing advertisers to target geographies where interest in a product or service is growing.
Microsoft senior program manager Andy Oakley says his company is also determining how to filter, summarize and present real-time tweets. He suggests up-to-the minute micro-posts and links could be displayed in an "updates" section within a traditional search results page.
Google last month introduced a "recent results" option to its search engine, and co-founder Larry Page has spoken publicly about the need to continually quicken the pace at which the company's spiders index Web pages. A company spokesman said Google was looking at ways to make real-time data more useful to its users.
Tobias Peggs, general manager at OneRiot, says real-time searchers tend to search the Web many times a day because they expect results will be updated more quickly than on established engines like Google or Yahoo. He believes that gives companies like OneRiot more opportunities to serve up relevant ads based on the changing situations.
"If the latest update on Britney Spears says she wore green Gucci dress last night, that would be an opportunity for Gucci to advertise that green dress," Peggs said.
Digital media planning and buying can be a difficult vocation. No matter how good you are though, sometimes no matter how hard you try, you still get it very wrong. Here's an unfortunate example of an online ad appearing in exactly the wrong place.
For all you Twitter try-hards.
I have 3 days in a city, e.g. Paris. What should I do?
I have travelled around the world for two years, so I am confident in answering questions about travel.
In this assignment you will implement a prototype of a text-based conversational agent (dialogue system) that is able to converse with a human in ordinary natural language and provide rail information for a Sydney suburban line, the Northern Line. The user should be able to ask for timetable information and information about planned trackwork for the fortnight ahead. The assignment consists of a basic task, where the conversational agent is completely in control of the conversation, and an advanced task, in which the conversation shows some form of mixed-initiative between the conversational agent and the human as discussed in the lecture. For the implementation of your agent you have to use Python 3.6.x and SQLite; optionally, you can also use the Natural Language Toolkit (NLTK). Please note: the aim of this assignment is to implement a goal-oriented conversational agent and not a chatbot.
For this basic task you have to implement a conversational agent who is always in control and drives the conversation. Your agent should be able to handle the following two dialogues, timetable information (2.1) and trackwork information (2.2), which we will use to test your implementation. Of course, you don't want to hard-code these dialogues, and you should allow for different trips and destinations on the Northern Line. You should also use a grammar (most likely implemented via regular expressions or string methods) that allows for some variation in the user input. The agent should recover gracefully if something goes wrong in a dialogue state and re-prompt the user accordingly.
Computer: Welcome to Sydney Train’s text-based rail information service. This service will help you finding convenient ways to travel by asking you a number of questions. If you are not sure about how to answer a question simply type ‘help’. You can ‘quit’ the conversation anytime.
Computer: Would you like timetable information or information on trackwork?
User: I would like timetable information.
Computer: Sure. What station would you like to leave from?
Computer: Okay Epping. And where are you travelling to?
User: Hmm - to Central Station.
Computer: Okay to Central and do you want to travel on a weekday or on a weekend?
User: I want to travel next Monday.
Computer: Travelling on Monday September 18 and what time would you like to depart from Epping?
User: At about 10:45 or so in the morning.
Computer: Let me see - I have a train leaving Epping at 10:47 and arriving at Central at 11:23. Would you like an earlier or later train?
User: A later one please.
Computer: Let me check - I have a train leaving Epping at 11:00 and arriving at Central at 11:25. Would you like a later train?
User: No thank you - this train suits me well.
Computer: Okay thank you for travelling with Sydney Train - cost effective, reliable and convenient.
Computer: On what line are you planning to travel on?
Computer: On what day will you be travelling?
Computer: There is trackwork on the Northern Line on Saturday 16 September 02:00 to Monday 18 September 02:00 Buses replace trains between Hornsby and Strathfield via Eastwood.
Computer: Would you like more information on timetable or trackwork?
Note that the welcome prompt should only be used the first time and the user should be able to select between “timetable information” and “trackwork information”.
Secondary prompt: Type either timetable or trackwork.
Help prompt: This service can provide information on train timetable or trackwork information. Please type either timetable or trackwork.
Event Handling: For example, if the answer is not within the scope of the grammar, the user should be re-prompted; if the user types 'help', the help prompt should be used. In certain states the agent needs to connect to the database and extract the relevant information from there.
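As a rough sketch of how a single dialogue state might implement prompting, help, quitting, and re-prompting (the function name and the exact regular expression are illustrative choices, not prescribed by the assignment):

```python
import re

HELP = ("This service can provide information on train timetable or "
        "trackwork information. Please type either timetable or trackwork.")
REPROMPT = "Type either timetable or trackwork."

# Small grammar: accept the keyword anywhere in the reply, with variants.
GRAMMAR = re.compile(r"\b(time\s*table|track\s*work)\b", re.IGNORECASE)

def ask_service():
    """One dialogue state: prompt, then re-prompt until the grammar matches."""
    print("Computer: Would you like timetable information or information "
          "on trackwork?")
    while True:
        reply = input("User: ").strip()
        if reply.lower() == "quit":
            raise SystemExit
        if reply.lower() == "help":
            print("Computer:", HELP)
            continue
        match = GRAMMAR.search(reply)
        if match:
            return match.group(1).lower().replace(" ", "")
        print("Computer:", REPROMPT)  # graceful recovery from bad input
```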
This basic version of the conversational agent should be implemented as a Python program with the name train-1.py and the dialogue should run in the Python shell. Together with your main program you have to submit all additional non-standard modules that are necessary to run your code.
User: I wanna travel from Epping to Central.
Here you have to collect two pieces of information during a single dialogue state. How far you take this approach is up to you, and you can come up with your own prompts in this advanced task. However, keep in mind that you want to strive for a high task completion rate and give the user a good experience. You can implement your own SQLite queries for this advanced task, extend the database, and add additional features of your choice to the conversational agent. If you choose to add additional features, you will need to document them in a two-page report and explain why they add value to your system. This advanced version of the conversational agent should be implemented as a Python program with the name train-2.py, and the dialogue should run in the Python shell. Together with your main program you have to submit all additional non-standard modules that are necessary to run the advanced version of the code.
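One way to realise such mixed-initiative is to try to fill both the origin and destination slots from a single utterance, like the one above, and fall back to the basic one-slot prompts when the pattern fails. A minimal sketch follows; the station alternatives are limited to names appearing in this handout, whereas a real grammar would use the full Northern Line station list from train.db:

```python
import re

STATIONS = r"(Epping|Central|Hornsby|Strathfield|Eastwood)"
FROM_TO = re.compile(
    rf"from\s+{STATIONS}(?:\s+station)?\s+to\s+{STATIONS}(?:\s+station)?",
    re.IGNORECASE)

def parse_trip(utterance):
    """Return (origin, destination); None means ask for that slot separately."""
    m = FROM_TO.search(utterance)
    if m:
        return m.group(1).title(), m.group(2).title()
    return None, None

print(parse_trip("I wanna travel from Epping to Central."))
# ('Epping', 'Central') -- both slots filled in a single turn
```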
In order to converse with the agent you have to access a small SQLite database, train.db, where you can find the relevant information for the basic task. Since this assignment is not about databases and database design, we provide only a very simple database, which you can find under Assignment 1: Train Database together with the Python program train-db.py that was used to generate it. The database contains only information for connections on the Northern Line between about 10:20 and 12:20. Note that the following train stations are part of the Northern Line; this information is important because these names will be part, among other things, of your grammar.
For the basic task of this assignment you don't have to write any SQL queries from scratch. The following two SQL queries answer the questions raised in the basic task, where the user is looking for a train from Epping to Central that leaves Epping at around 10:45 on a weekday (code WD; the code for weekends is WE), and wants to know whether or not there is trackwork on the Northern Line on 16 September at 10:00. You can use these SQL queries as templates for other, similar queries.
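The two queries themselves do not appear in this copy of the handout, so the following sqlite3 sketch is a hypothetical reconstruction: the table and column names are assumptions, and you should check train-db.py for the actual schema. It also shows how the 'closest match' (and, by taking the next row in the same ordering, the earlier/later follow-ups) can be expressed:

```python
import sqlite3

conn = sqlite3.connect("train.db")

# Assumed schema: timetable(origin, destination, depart, arrive, day_type)
# and trackwork(line, start, end, details) -- verify against train-db.py.
# strftime('%s', 'HH:MM') converts a time string to seconds so the
# departure closest to 10:45 can be found by minimising the difference.
closest = conn.execute(
    """SELECT depart, arrive
         FROM timetable
        WHERE origin = ? AND destination = ? AND day_type = ?
        ORDER BY ABS(strftime('%s', depart) - strftime('%s', ?))
        LIMIT 1""",
    ("Epping", "Central", "WD", "10:45")).fetchone()

# Trackwork on the Northern Line covering a given moment (the datetime
# format stored in the database is assumed to be ISO-style text).
works = conn.execute(
    """SELECT details FROM trackwork
        WHERE line = ? AND ? BETWEEN start AND end""",
    ("Northern Line", "2017-09-16 10:00")).fetchall()
```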
As you can see there exists a train that leaves Epping at 10:47 and arrives at Central at 11:23. This is the closest match to the SQL query. There exist earlier and later trains.
You can find an introduction to SQLite in this tutorial.
Functionality: The conversational agent processes both dialogues as illustrated above, re-prompts the user if the input was not recognised, and provides help if required for each dialogue state.
Grammar coverage: The grammar (regular expressions/string methods) shows some sophistication in pattern matching.
Quality of the code: The Python code is modular, well-documented, and uses a consistent naming convention.
Quality of prompts: The prompts have been selected carefully to implement a form of mixed-initiative.
Additional features and report: The additional features (at most two) are useful, add value to the conversational agent, and are carefully implemented (2 marks). The choice of the additional features is well-motivated and clearly described in the report (2 marks).
You have to submit a ZIP file comp329-assignment-1.zip that contains the Python program train-1.py for the basic version of the conversational agent and the Python program train-2.py for the advanced version, incl. all non-standard modules and the SQLite database that are necessary to run your code. Your code needs to run out of the box; we only assume a basic Python installation together with the NLTK toolkit. If you submit a solution to the advanced task that contains additional features, then you have to add a two-page report in the form of a PDF file to the ZIP file of your submission.
Born: 25 May 1879, Huddersfield, Yorkshire, United Kingdom.
Died: 27 March 1951, Pietermaritzburg, South Africa.
Robert Beckett Denison studied at the Yorkshire College of Science in Leeds (from 1904 the University of Leeds), where he was awarded the degree Bachelor of Science (BSc) and an 1851 Exhibition research scholarship. He continued his studies in Germany, studying electro-chemistry and electro-metallurgy at Aix-la-Chapelle and specialising in physical chemistry at Breslau (now Wroclaw, Poland). In 1903 he obtained the degree Doctor of Philosophy (PhD) at the University of Breslau with a thesis on the speed of migration of ions in aqueous solutions, entitled Beitraege zur direkten messung von ueberfuehrungszahlen (Leipzig, 1903). After further research in Berlin and at University College, London, which was reported on in a number of published papers, he was awarded the degree Doctor of Science (DSc) by the University of Leeds. While still in Germany he, in collaboration with B.D. Steele, developed what became a standard method of measuring ionic velocities in solution, namely the moving boundary method. Their joint papers included 'The transport number of very dilute solutions' (Journal of the Chemical Society, 1902), 'On the accurate measurement of ionic velocities' (Transactions of the Royal Society of London, Series A, 1906) and 'A new method for the measurement of hydrolysis in aqueous solution based on a consideration of the motion of ions' (Journal of the Chemical Society, 1906).
In 1904 Denison was appointed lecturer in chemistry at the Heriot-Watt College in Edinburgh, where he subsequently became assistant professor of chemistry. His publications based on his work there included 'Research on the relative rate of migration of ions in acquous solution' (1909), and 'Contributions to the knowledge of liquid mixtures' (1912), both in the Transactions of the Faraday Society. In March 1910 he married Ruby A. Newth, but they had no children. A few weeks later, on 17 April 1910, he arrived in Durban to take up an appointment as professor of chemistry and physics at the newly founded Natal University College in Pietermaritzburg.
After two years at the college Denison was able to give up his involvement in physics and continued as professor of chemistry until 1939. He devoted most of his time to developing his department and teaching, in which he set high standards, leaving little time for research. His publications during this period included "The electro-motive series of the metals as an aid to the teaching of inorganic chemistry; with special reference to the action of acids on metals" (Report of the South African Association for the Advancement of Science, 1911, pp. 245-252); and "The formation of chemical compounds in homogeneous liquid systems: a contribution to the theory of concentrated solutions" (Ibid, 1912, pp. 132-147). These papers proved to be more or less the end of his research in chemistry, owing mainly to a lack of facilities and a heavy teaching load. Many years later, as co-author with A.P.D. McClean, he published "An accurate colorimetric method for the estimation of very small amounts of carbon dioxide" (South African Journal of Science, 1926, Vol. 23, pp. 253-258).
Denison was an unpretentious, devoted and selfless man and an excellent administrator who played a leading role in college affairs. He was dean of the Faculty of Science from 1924 to 1926. In 1938 he succeeded J.W. Bews as principal of the college, a position he held until his retirement at the end of 1944. The college's Denison Library was named after him. On the basis of his doctoral degree from the University of Breslau, the University of the Cape of Good Hope admitted him to the Master of Arts (MA) degree in 1911 and during 1915-1916 he was an examiner for the university's MA papers in chemistry. He was also a member of the senate of the University of South Africa (successor to the University of the Cape of Good Hope) and for some years served as a member of its council. In 1946 the university awarded him an honorary DSc degree.
Denison was a foundation member and served on the council of the South African Association of Analytical Chemists (from 1921 the South African Chemical Institute), from its inception in 1912 to the end of his career in chemistry. He was elected president of the association for 1917/8. His spare time was devoted to golf, fishing and gardening.
Coblans, Herbert. Robert Beckett Denison: A memoir. South African Industrial Chemist, August 1951, Vol. 5, p. 156.
Google scholar. http://scholar.google.co.za Publications by R.B. Denison.
Guest, B. Stella Aurora: The history of a South African University. Vol. 1: Natal University College (1909-1949). Pietermaritzburg: Occasional Publication of the Natal Society Foundation. Retrieved from http://www.natalia.org.za/Files/Publications/Stella Aurorae.pdf on 2018-10-16.
National Automated Archival Information Retrieval System (NAAIRS). http://www.national.archives.gov.za/naairs.htm Documents relating to Denison, Robert Beckett / Denison, R.B.
Petrie, A. Memoir of Dr Robert Beckett Denison. Theoria, 1952, No. 4, pp. 1-5.
University of the Cape of Good Hope. Calendar, 1912/3, 1915/6, 1917/8.
The room in Báb's house in Shiraz where he declared his mission on the evening of May 22, 1844.
Bábism or Bábíism (Persian: بابیه, Babiyye), also known as the Bábi, Bâbi, or Bábí faith, was a new religious movement that flourished in Persia from 1844 to 1852, then lingered on in exile in the Ottoman Empire, especially Cyprus, as well as underground. Its founder was ʿAli Muhammad Shirazi, who took the title Báb (lit. "Gate") out of the belief that he was the gate to the Twelfth Imam. The Bábí movement signaled a break with Islam and started a new religious system. While the Bábí movement was violently opposed and crushed by the clerical and government establishments in the country in the mid-1850s, the Bábí movement led to the founding of the Bahá'í Faith which sees the religion brought by the Báb as a predecessor to their own religion. "The relative success of Bahaism inside Iran (where it constitutes the largest religious minority) and in numerous other countries, where it claims the status of an independent religion, gives renewed significance to its Babi origins", as Bahaism continued many aspects of the earlier sect.
Twelver Shi'i Muslims regard the Twelfth Imam, Muhammad al-Mahdi, as the last of the Imams. They contend that Muhammad al-Mahdi went into the Occultation in 874 CE, at which time communication between the Imam and the Muslim community could only be performed through mediators called Bābs "gates" or Nā'ibs "representatives". In 940, the fourth nā'ib claimed that Imam Muhammad al-Mahdi had gone into an indefinite "Grand Occultation", and that he would cease to communicate with the people. According to Twelver belief, the Hidden Imam is alive in the world, but in concealment from his enemies, and that he would only emerge shortly before the Last Judgment. At that time, acting as al-Qā'im ("He who will arise"), a messianic figure also known as the Mahdi ("He who is rightly guided"), the Hidden Imam would start a holy war against evil, would defeat the unbelievers, and would start a reign of justice.
In 1830s Qajar Persia, Sayyid Kazim Rashti was the leader of the Shaykhis, a sect of Twelvers. The Shaykhis were a group expecting the imminent appearance of al-Qāʾim. At the time of Kazim's death in 1843, he had counselled his followers to leave their homes to seek the Lord of the Age whose advent would soon break on the world.
On 22 May 1844, Mullá Husayn of Boshruyeh in Khorasan, a prominent disciple of Sayyid Kāẓim, entered Shiraz, following his master's instruction to search for al-Qā'im. Soon after he arrived in Shiraz, Mullá Husayn came into contact with the Báb. On the night of May 22, 1844, Mullá Husayn was invited by the Báb to his home; on that night Mullá Husayn told him that he was searching for the possible successor to Sayyid Kāẓim, al-Qā'im, and the Báb told Mullá Husayn privately that he was Sayyid Kāẓim's successor and the bearer of divine knowledge. Through the night of the 22nd to the dawn of the 23rd, Mullá Husayn became the first to accept the Báb's claims as the gateway to Truth and the initiator of a new prophetic cycle; the Báb had replied in a satisfactory way to all of Mullá Husayn's questions and had written in his presence, with extreme rapidity, a long commentary on the surah of Yusuf, which has come to be known as the Qayyūmu l-Asmā' and is considered the Báb's first revealed work. This night and the following day have been observed in the Bahá'í Faith as a holy day ever since.
After Mulla Husayn accepted the Báb's claim, the Báb ordered him to wait until 17 others had independently recognized the station of the Báb before they could begin teaching others about the new revelation.
After his declaration, he soon assumed the title of the Báb. Within a few years the movement spread all over Iran, causing controversy. His claim was at first understood by some of the public to be merely a reference to the Gate of the Hidden Imám of Muhammad, but this understanding he publicly disclaimed. He later proclaimed himself, in the presence of the heir to the Throne of Persia and other notables, to be al-Qā'im. In his writings, the Báb appears to identify himself as the gate (báb) to Muhammad al-Mahdi, and later he begins to explicitly proclaim his station as equivalent to that of the Hidden Imam and a new messenger from God. Saiedi states that the exalted identity the Báb was claiming was unmistakable, but that, due to the reception of the people, his writings appear to convey the impression that he is only the gate to the Hidden Twelfth Imam. To his circle of early believers, the Báb was equivocal about his exact status, gradually confiding in them that he was not merely a gate to the Hidden Imam, but the Manifestation of the Hidden Imam and al-Qā'im himself. During his early meetings with Mullá Husayn, the Báb described himself as the Master and the Promised One; he did not consider himself just Sayyid Kāẓim Rashti's successor, but claimed a prophetic status, with a sense of deputyship delegated to him not just from the Hidden Imam, but from Divine authority. His early texts, such as the Commentary on the Sura of Yusuf, used Qur'anic language that implied divine authority and effectively identified him with the Imam. When Mullā ʿAlī Basṭāmī, the second Letter of the Living, was put on trial in Baghdad for preaching about the Báb, the clerics studied the Commentary on the Sura of Yusuf, recognized in it a claim to divine revelation, and quoted from it extensively to prove that the author had made a messianic claim.
The Báb's message was disseminated by the Letters of the Living through Iran and southern Iraq. One of these initial activities was communicated to the West starting January 8, 1845, in an exchange of diplomatic reports concerning the fate of Mullá ʿAli-e Bastāmi, the second Letter. These were exchanges between Sir Henry Rawlinson, 1st Baronet, who wrote first to Stratford Canning, 1st Viscount Stratford de Redcliffe. Follow-ups continued until, in 1846, he was sentenced by the Ottomans to serve in the naval shipyards at hard labor, the Ottoman ruler refusing to banish him as it would be "difficult to control his activities and prevent him spreading his false ideas." Quddús and other early followers were then sent on to Shiraz to begin public presentations of the new religion. Indeed, various activities the Báb initiated, such as preaching and answering questions from the community, were devolved to the Letters of the Living. As these first public activities multiplied, opposition from the Islamic clergy arose and prompted the Governor of Shiraz to order the Báb's arrest. The Báb, upon hearing of the arrest order, left Bushehr for Shiraz in June 1845 and presented himself to the authorities. This series of events became the first public account of the new religion in the West when it was published on November 1, 1845, in The Times. The story was also carried from November 15 by the Literary Gazette and was subsequently echoed widely. The Báb was placed under house arrest at the home of his uncle and was restricted in his personal activities until a cholera epidemic broke out in the city in September 1846.
The Báb was released and departed for Isfahan. There, many came to see him at the house of the imám jum'ih, head of the local clergy, who became sympathetic. After an informal gathering where the Báb debated the local clergy and displayed his speed in producing instantaneous verses, his popularity soared. After the death of the Governor of Isfahan, Manouchehr Khan Gorji, an Iranian Georgian who had become his supporter, pressure from the clergy of the province led the Shah, Mohammad Shah Qajar, to order the Báb to Tehran in January 1847. After spending several months in a camp outside Tehran, and before the Báb could meet the Shah, the Prime Minister sent the Báb to Tabriz in the northwestern corner of the country, and later to Maku and Chehriq, where he was confined. During his confinement, he was said to have impressed his jailers with his patience and dignity. Communication between the Báb and his followers was not completely severed but was quite difficult, and more responsibilities devolved to the Letters as he was unable to elucidate his teachings to the public. With Bábí teachings now spread mostly by his followers, those followers faced increasing persecution themselves.
The role played by Táhirih in Karbalāʾ was particularly significant. She began an effort of religious innovation based on her station as a Letter of the Living and the incarnation of Fatimah. In his early teachings, the Báb emphasized observing Sharia and extraordinary acts of piety. However, his claim of being the Bāb, i.e. the authority direct from God, was in conflict with this more conservative position of supporting Sharia. Táhirih advanced the understanding that the Báb's station took priority over Islamic Sharia by wedding the concept of the Bāb's overriding religious authority to ideas originating in Shaykhism that pointed to an age after outward conformity. She seems to have made this connection circa 1262/1846, even before the Bāb himself. The matter was taken up by the community at large at the Conference of Badasht.
This conference, held in 1848, was one of the most important events of the Bábí movement, at which its split from Islam and Islamic law was made clear. Three key individuals who attended were Bahá'u'lláh, Quddús, and Táhirih. During the conference, Táhirih was able to persuade many of the others of the Bábí split with Islam, based on the station of the Báb and an age after outward conformity. She appeared at least once during the conference in public without a veil, an act of heresy within the Islamic world of that day, signalling the split. During this same month the Báb was brought to trial in Tabriz and made his claim to be the Mahdi public before the Crown Prince and the Shi'a clergy.
Several sources agree that by 1848 or 1850 there were 100,000 converts to Babism. In the fall of 1850, Western newspaper coverage fell behind the quickly unfolding events: though the Báb was named in print for the first time, he had in fact already been executed.
By 1848 the increased fervour of the Bábís and the clerical opposition had led to a number of confrontations between the Bábís and the government and clerical establishment. After the death of Mohammad Shah Qajar, the shah of Iran, a series of armed struggles and uprisings broke out in the country, including at Tabarsi. These confrontations all ended in massacres of Bábís; Bahá'í authors give an estimate of 20,000 Bábís killed from 1844 to the present, with most of the deaths occurring during the first 20 years. Former Professor of Islamic Studies Denis MacEoin studied documented deaths, both for individuals and for round figures, from Bábí, Bahá'í, European, and Iranian sources, and could confirm at most two to three thousand; he stated that he could not find evidence for any higher figures. Supporters of the Bábís paint their struggle as basically defensive in nature; Shi'i writers, on the other hand, point to this period as proof of the subversive nature of Bábísm. MacEoin has pointed out that the Bábís did arm themselves, upon the Báb's instructions, and originally intended an uprising, but that their eventual clashes with state forces were defensive and not considered an offensive jihad. In mid-1850 a new prime minister, Amir Kabir, convinced that the Bábí movement was a threat, ordered the execution of the Báb, which was followed by the killings of many Bábís.
Of the conflicts between the Bábís and the establishment, the first and best known took place in Māzandarān at the remote shrine of Shaykh Tabarsi, about 22 kilometres southeast of Bārfarush (modern Babol). From October 1848 until May 1849, around 300 Bábís (later rising to 600), led by Quddús and Mullá Husayn, defended themselves against the attacks of local villagers and members of the Shah's army under the command of Prince Mahdi Qoli Mirza. They were, after being weakened through attrition and starvation, subdued through false promises of safety, and put to death or sold into slavery.
The revolt at the fortress of ʿAli Mardan Khan in Zanjan in northwest Iran was by far the most violent of all the conflicts. It was headed by Mullā Muhammad ‘Ali Zanjani, called Hujjat, and also lasted seven or eight months (May 1850–January 1851). The Bábí community in the city had swelled to around 3,000 after the conversion of one of the town's religious leaders to the Bábí movement. The conflict was preceded by years of growing tension between the leading Islamic clergy and the rising Bábí leadership. The city governor ordered that the city be divided into two sectors, with hostilities starting soon thereafter. The Bábís held out against a large force of regular troops, and the fighting led to the deaths of several thousand Bábís. After Hujjat was killed and their numbers were greatly reduced, the Bábís surrendered in January 1851 and were massacred by the army.
Meanwhile a serious but less protracted struggle against the government was waged at Neyriz in Fars by Yahya Vahid Darabi of Nayriz. Vahid had converted around 1,500 people in the community, causing tensions with the authorities that led to an armed struggle at a nearby fort. The Bábís resisted attacks by the town's governor as well as further reinforcements. After receiving a truce offer on 17 June 1850, Vahid told his followers to give up their positions, whereupon Vahid and the Bábís were killed; the Bábí section of the town was also plundered, and the property of the remaining Bábís seized. Later, in March 1853, the governor of the city was killed by Bábís. This led to a second armed conflict near the city, where the Bábís once again resisted troop attacks until November 1853, when a massacre of the Bábís took place and their women were enslaved.
The revolts in Zanjan and Nayriz were in progress when in 1850 the Báb, with one of his disciples, was brought from his prison at Chehriq to Tabriz and publicly shot in front of the citadel. The body, after being exposed for some days, was recovered by the Bábís and conveyed to a shrine near Tehran, whence it was ultimately removed to Haifa, where it is now enshrined.
One nineteenth-century Western account observed: "Bábism, though at present a proscribed religion in Persia, is far from being extinct, or even declining, and the Báb may yet contest with Mahomed(sic) the privilege of being regarded as the real prophet of the faithful. Bábism in its infancy was the cause of a greater sensation than that even which was produced by the teaching of Jesus, if we may judge from the account of Josephus of the first days of Christianity."
Later commentators also noted these kinds of views, among them Ernest Renan and Stephen Greenleaf Bulfinch, son of Charles Bulfinch.
For the next two years comparatively little was heard of the Bábís. The Bábís became polarized, with one group speaking of violent retribution against Naser al-Din Shah Qajar, while the other, under the leadership of Bahá'u'lláh, looked to rebuild relationships with the government and advance the Bábí cause by persuasion and the example of virtuous living.
The militant group of Bábís numbered between thirty and seventy persons, only a small fraction of the total Bábí population of perhaps 100,000. Their meetings appear to have come under the control of a "Husayn Jan", an emotive and magnetic figure who attracted a high degree of personal devotion from the group. Meanwhile Táhirih and Bahá'u'lláh, previously the visible leaders of the community, were removed from the scene: Táhirih by arrest, and Bahá'u'lláh by an invitation to go on pilgrimage to Karbalāʾ. On August 15, 1852, three members of this small splinter group, acting on their own initiative, attempted to assassinate Naser al-Din Shah Qajar as he was returning from the hunt to his palace at Niavarān. Notwithstanding the assassins' claim that they were working alone, the entire Bábí community was blamed, and a slaughter of several thousand Bábís followed, starting on 31 August 1852 with some thirty Bábís, including Táhirih. Dr Jakob Eduard Polak, then the Shah's physician, was an eye-witness to her execution. Bahá'u'lláh surrendered himself and, along with a few others, was imprisoned in the Siāhchāl ("Black Pit"), an underground dungeon in Tehran. Meanwhile echoes of the newspaper coverage of the violence continued into 1853.
In most of his prominent writings, the Báb alluded to a Promised One, most commonly referred to as "He whom God shall make manifest", and that he himself was "but a ring upon the hand of Him Whom God shall make manifest." Within 20 years of the Báb's death, over 25 people claimed to be the Promised One, most significantly Bahá'u'lláh.
Shortly before the Báb's execution, a follower of the Báb, Abd al-Karim, brought to the Báb's attention the necessity of appointing a successor; the Báb accordingly wrote a number of tablets which he gave to Abd al-Karim to deliver to Subh-i Azal and Bahá'u'lláh. These tablets were later interpreted by both Azalis and Bahá'ís as proof of the Báb's delegation of leadership. Some sources state that the Báb did this at the suggestion of Bahá'u'lláh. In one of the tablets, commonly referred to as the Will and Testament of the Báb, Subh-i Azal is understood to have been appointed leader of the Bábís after the death of the movement's founder; the tablet, in verse 27, orders Subh-i Azal "...to obey Him Whom God Shall Make Manifest." At the time of the apparent appointment Subh-i Azal was still a teenager, had never demonstrated leadership in the Bábí movement, and was still living in the house of his older brother, Bahá'u'lláh. All of this lends credence to the Bahá'í claim that the Báb appointed Subh-i Azal head of the Bábí Faith so as to divert attention away from Bahá'u'lláh, while allowing Bábís to visit Bahá'u'lláh and consult with him freely, and allowing Bahá'u'lláh to write to the Bábís easily and freely.
Subh-i Azal's leadership was controversial. He generally absented himself from the Bábí community, spending his time in Baghdad in hiding and disguise, and even went so far as to publicly disavow allegiance to the Báb on several occasions. Subh-i Azal gradually alienated himself from a large proportion of the Bábís, who started to give their allegiance to other claimants. During the time that both Bahá'u'lláh and Subh-i Azal were in Baghdad, since Subh-i Azal remained in hiding, Bahá'u'lláh performed much of the daily administration of Bábí affairs.
Bahá'u'lláh claimed that in 1853, while a prisoner in Tehran, he was visited by a "Maid of Heaven", which symbolically marked the beginning of his mission as a Messenger of God. Ten years later in Baghdad, he declared himself to be He whom God shall make manifest to a small number of followers, and in 1866 he made the claim public. Bahá'u'lláh's claims threatened Subh-i Azal's position as leader of the religion, since it would mean little to be leader of the Bábís if "Him Whom God Shall Make Manifest" were to appear and start a new religion. Subh-i Azal responded by making his own claims, but his attempt to preserve traditional Bábísm was largely unpopular, and his followers became the minority.
Eventually Bahá'u'lláh was recognized by the vast majority of Bábís as "He whom God shall make manifest" and his followers began calling themselves Bahá'ís. By 1908, there were probably from half a million to a million Bahá'ís, and at most only a hundred followers of Subh-i Azal.
Subh-i Azal died in Famagusta, Cyprus in 1912, and his followers are known as Azalis or Azali Bábis. MacEoin notes that after the deaths of those Azali Babis who were active in the Persian Constitutional Revolution, the Azali form of Babism entered a stagnation from which it has not recovered as there is no acknowledged leader or central organization.
Current estimates of Azalis are that there are no more than a few thousand. The World Religion Database estimated 7.3 million Bahá'ís in 2010 and stated: "The Baha'i Faith is the only religion to have grown faster in every United Nations region over the past 100 years than the general population; Baha'i(sic) was thus the fastest-growing religion between 1910 and 2010, growing at least twice as fast as the population of almost every UN region." Bahá'í sources since 1991 usually estimate the worldwide Bahá'í population at "above 5 million". See Bahá'í statistics.
The Báb's major writings include the Qayyúmu'l-Asmá' (a commentary on the Sura of Joseph) and the Persian Bayán, which the Bábís saw as superseding the Qur'an. The latter has been translated into French; only portions exist in English. Unfortunately, most of the writings of the Báb have been lost. The Báb himself stated that they exceeded five hundred thousand verses; the Qur'an, in contrast, is 6,300 verses in length. If one assumes 25 verses per page, that would equal 20,000 pages of text. Nabíl-i-Zarandí, in The Dawn-breakers, mentions nine complete commentaries on the Qur'an, revealed during the Báb's imprisonment at Máh-Kú, which have been lost without a trace. Establishing the true text of the works that are still extant, as already noted, is not always easy, and some texts will require considerable work. Others, however, are in good shape; several of the Báb's major works are available in the handwriting of his trusted secretaries.
Most works were revealed in response to specific questions by Bábís. This is not unusual; the genre of the letter has been a venerable medium for composing authoritative texts as far back as Paul of Tarsus. Three quarters of the chapters of the New Testament are letters, were composed to imitate letters, or contain letters within them. Sometimes the Báb revealed works very rapidly by chanting them in the presence of a secretary and witnesses.
The Archives Department at the Bahá'í World Centre currently holds about 190 Tablets of the Báb. Excerpts from several principal works have been published in the only English language compilation of the Báb's writings: Selections from the Writings of the Báb. Denis MacEoin, in his Sources for Early Bābī Doctrine and History, gives a description of many works; much of the following summary is derived from that source. In addition to major works, the Báb revealed numerous letters to his wife and followers, many prayers for various purposes, numerous commentaries on verses or chapters of the Qur'an, and many khutbihs or sermons (most of which were never delivered). Many of these have been lost; others have survived in compilations.
The Báb's teachings can be grouped into three broad stages which each have a dominant thematic focus. His earliest teachings are primarily defined by his interpretation of the Qur'an and other Islamic traditions. While this interpretive mode continues throughout all three stages of his teachings, a shift takes place where his emphasis moves to philosophical elucidation and finally to legislative pronouncements. In the second philosophical stage, the Báb gives an explanation of the metaphysics of being and creation, and in the third legislative stage his mystical and historical principles are explicitly united. An analysis of the Báb's writings throughout the three stages shows that all of his teachings were animated by a common principle that had multiple dimensions and forms.
In Twelver Shi'a Islamic belief there were twelve Imams, the last of whom, known as the Imam Mahdi, communicated with his followers only through certain representatives. According to Twelver belief, after the last of these representatives died, the Imam Mahdi went into a state of Occultation; while still alive, he was no longer accessible to his believers. Shi'a Muslims believe that when the world becomes oppressed, the Imam Mahdi (also termed the Qa'im) will come out of occultation and restore true religion on Earth before the cataclysmic end of the world and judgement day.
In Bábí belief the Báb is the return of the Imam Mahdi, but the doctrine of the Occultation is implicitly denied; instead the Báb stated that his manifestation was a symbolic return of the Imam, and not the physical reappearance of the Imam Mahdi who had died a thousand years earlier. In Bábí belief the statements made from previous revelations regarding the Imam Mahdi were set forth in symbols. The Báb also stated that he was not only the fulfillment of the Shi`i expectations for the Qá'im, but that he also was the beginning of a new prophetic dispensation.
The Báb taught that his revelation was beginning an apocalyptic process that was bringing the Islamic dispensation to its cyclical end and starting a new dispensation. He taught that the terms "resurrection", "Judgement Day", "paradise" and "hell" used in Shi'a prophecies for the end-times are symbolic. He stated that "Resurrection" means the appearance of a new revelation, and that "raising of the dead" means the spiritual awakening of those who have stepped away from true religion. He further stated that "Judgement Day" refers to the coming of a new Manifestation of God, and the acceptance or rejection of that Manifestation by those on Earth. Thus the Báb taught that with his revelation the age of resurrection had started, and that the "end times" symbolically signified the end of the past prophetic cycle.
In the Persian Bayán, the Báb wrote that religious dispensations come in cycles, as the seasons, to renew "pure religion" for humanity. This notion of continuity anticipated future prophetic revelations after the Báb.
While the Báb claimed a station of revelation, he claimed no finality for his revelation. One of the core Bábí teachings is that a great Promised One, whom the Báb termed He whom God shall make manifest and who was promised in the sacred writings of previous religions, would soon establish the Kingdom of God on the Earth. In the books written by the Báb, he constantly entreats his believers to follow He whom God shall make manifest when he arrives, and not to behave like the Muslims who had not accepted his own revelation.
The Báb abrogated Islamic law and in the Persian Bayán promulgated a system of Bábí law, thus establishing a separate religion distinct from Islam. Some of the new laws included changing the direction of the Qibla to the Báb's house in Shiraz, Iran, and changing the calendar to a solar calendar of nineteen months of nineteen days each (which became the basis of the Bahá'í calendar), with the last month prescribed as a month of fasting.
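As a quick check of the calendar arithmetic: \(19 \times 19 = 361\) days. (The remaining four or five days of the solar year are handled as intercalary days; that detail is an editorial note, not drawn from the sources cited here.)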
The Báb also created a large number of rituals and rites which remained largely unpracticed. These include the carrying of arms only in times of necessity, obligatory sitting on chairs, advocacy of the cleanliness displayed by Christians, non-cruel treatment of animals, a prohibition on beating children severely, a recommendation of the printing of books, even scripture, and a prohibition on the study of logic or dead languages. While some statements in the Bayan show tolerance, there are other very harsh regulations regarding relations with non-believers. For example, non-believers are forbidden to live in five central Iranian provinces, the holy places of previous religions are to be demolished, all non-Bábí books are to be destroyed, believers are not to marry or sit in the company of non-believers, and the property of non-believers can be taken from them. Further rituals include elaborate regulations regarding pilgrimage, fasting, the manufacture of rings, the use of perfume, and the washing and disposal of the dead.
Denis MacEoin writes, regarding the Bayán: "One comes away from the Bayan with a strong sense that very little of this is to be taken seriously. It is a form of game, never actually intended to be put into practice." Instead he states that "the Bábí shari'a made an impact... it stated very clearly that the Islamic code could be replaced." Nader Saiedi states that the severe laws of the Bayán were never meant to be put in practice, because their implementation depended on the appearance of He whom God shall make manifest, while at the same time all of the laws would be abrogated unless the Promised One would reaffirm them. Saiedi concludes that these can then only have a strategic and symbolic meaning, and were meant to break through traditions and to focus the Báb’s followers on obedience to He whom God shall make manifest.
Saiedi, Nader (2008). Gate of the Heart. Waterloo, ON: Wilfrid Laurier University Press. p. 19. ISBN 978-1-55458-035-4.
MacEoin, Dennis (2011). "Babism". Encyclopædia Iranica.
Smith, Peter (2000). "Shi'ism". A concise encyclopedia of the Bahá'í Faith. Oxford: Oneworld Publications. pp. 312–313. ISBN 1-85168-184-1.
Saiedi, Nader (2008). Gate of the Heart. Waterloo, ON: Wilfrid Laurier University Press. p. 15. ISBN 978-1-55458-035-4.
Bausani, A. (1999). "Bāb". Encyclopedia of Islam. Leiden, The Netherlands: Koninklijke Brill NV.
Mehrabkhani, R. (1987). Mullá Ḥusayn: Disciple at Dawn. Los Angeles, CA, USA: Kalimat Press. pp. 58–73. ISBN 0-933770-37-5.
MacEoin, Dennis (1989). "Bāb, Sayyed `Ali Mohammad Sirazi". Encyclopædia Iranica.
"The Time of the Báb". BBC. Retrieved 2 July 2006.
Amanat, Resurrection and Renewal, 191.
Saiedi, Nader (2008). Gate of the Heart. Waterloo, ON: Wilfrid Laurier University Press. p. 19. ISBN 978-1-55458-035-4.
Amanat, Resurrection and Renewal, 171.
Amanat, Resurrection and Renewal, 230-31.
Momen, Moojan (1981). The Bábí and Bahá'í religions 1844-1944: some contemporary western accounts. G. Ronald. pp. xv, xvi, 4, 11, 26–38, 62–5, 83–90, 100–104. ISBN 978-0-85398-102-2.
"MacEoin, Denis M". Encyclopædia Iranica. Online. 15 December 1988. Retrieved 8 November 2013.
National Spiritual Assembly of the Bahá'ís of the United States (1977). World Order. National Spiritual Assembly of the Bahá'ís of the United States. Retrieved 20 August 2013.
Amanat, Resurrection and Renewal, 257.
Cheyne, The Reconciliation of Races and Religions, 29.
Amanat, Resurrection and Renewal, 258.
Smith, Peter (2000). "Báb". A concise encyclopedia of the Bahá'í Faith. Oxford: Oneworld Publications. pp. 55–59. ISBN 1-85168-184-1.
Smith, Peter (Spring–Summer 1984). "Research Note: A Note on Babi and Baha'i Numbers in Iran". Iranian Studies. International Society for Iranian Studies. 17 (2–3): 295–301. doi:10.1080/00210868408701633. JSTOR 4310446.
"Early mention of Bábís in western newspapers, summer 1850". Historical documents and Newspaper articles. Bahá'í Library Online. 2010-09-17 [Autumn 1850]. Retrieved August 20, 2013.
MacEoin, Denis (1983). "From Babism to Baha'ism: Problems of Militancy, Quietism, and Conflation in the Construction of a Religion". Religion. 13 (1983): 219–55. doi:10.1016/0048-721X(83)90022-2.
MacEoin, Denis (1983). "A Note on the Numbers of Babi and Baha'i Martyrs". Baha'i Studies Bulletin. 2 (3): 68–72.
MacEoin, Denis (1983). "A Note on the Numbers of Babi and Baha'i Martyrs in Iran". Baha'i Studies Bulletin. 2 (2): 84–88.
Smith, Peter (2000). "Tabarsi, Shaykh". A concise encyclopedia of the Bahá'í Faith. Oxford: Oneworld Publications. p. 331. ISBN 1-85168-184-1.
Smith, Peter (2000). "Zanjan". A concise encyclopedia of the Bahá'í Faith. Oxford: Oneworld Publications. pp. 368–369. ISBN 1-85168-184-1.
Smith, Peter (2000). "Nayriz". A concise encyclopedia of the Bahá'í Faith. Oxford: Oneworld Publications. p. 260. ISBN 1-85168-184-1.
Shoghi Effendi (1944). God Passes By. Wilmette, Illinois, USA: Bahá'í Publishing Trust. pp. 273–289. ISBN 0-87743-020-9.
Watson, Robert Grant (1866). A History of Persia from the Beginning of the Nineteenth Century to the Year 1858.
Momen, Moojan (August 2008). "Millennialism and Violence: The Attempted Assassination of Nasir al-Din Shah of Iran by the Babis in 1852". Nova Religio: The Journal of Alternative and Emergent Religions. 12 (1): 57–82. doi:10.1525/nr.2008.12.1.57. JSTOR 10.1525/nr.2008.12.1.57.
"POLAK, Jakob Eduard". Encyclopædia Iranica. Online. December 15, 2009. Retrieved 2010-07-07.
Polak, Jakob Eduard (1865). "Martyrdom of Tahirih (Dr Jakob Eduard Polak)". Persien. F.A. Brockhaus. p. 350.
Hutter, Manfred (2005). "Bahā'īs". In Lindsay Jones (ed.). Encyclopedia of Religion. 2 (2nd ed.). Detroit: Macmillan Reference USA. pp. 737–740. ISBN 0-02-865733-0.
Amanat, Abbas (1989). Resurrection and Renewal: The Making of the Babi Movement in Iran. Ithaca: Cornell University Press. p. 384.
`Abdu'l-Bahá (2004). Browne, E.G. (Tr.), ed. A Traveller's Narrative: Written to illustrate the episode of the Bab (2004 reprint, with translator's notes ed.). Los Angeles, USA: Kalimát Press. p. 37. ISBN 1-890688-37-1.
Taherzadeh, Adib (1976). The Revelation of Bahá'u'lláh, Volume 1. Oxford, UK: George Ronald. p. 37. ISBN 0-85398-270-8.
Manuchehri, S. (2004). "The Primal Point's Will and Testament". Research Notes in Shaykhi, Babi and Baha'i Studies. 7 (2).
Cole, Juan. "A Brief Biography of Baha'u'llah". Retrieved 22 June 2006.
MacEoin, Dennis (1989). "Azali Babism". Encyclopædia Iranica.
Barrett, David (2001). The New Believers. London, UK: Cassell & Co. p. 246. ISBN 0-304-35592-5.
Johnson 2013, pp. 59-62.
MacEoin, Sources for Early Bābī Doctrine and History, 15.
Denis MacEoin, The Sources for Early Bābī Doctrine and History (Leiden: Brill, 1992), 88.
MacEoin, Sources for Early Bābī Doctrine and History, 12-15.
On letters as a medium of composition of the New Testament, see Norman Perrin, The New Testament: An Introduction, Proclamation and Parenesis, Myth and History (New York: Harcourt Brace Jovanovitch, 1974), 96-97.
Unpublished letter from the Universal House of Justice. "Numbers and Classifications of Sacred Writings Texts". Retrieved 16 December 2006.
MacEoin, Sources for Early Bābī Doctrine and History, 15-40.
Saiedi, Nader (2008). Gate of the Heart. Waterloo, ON: Wilfrid Laurier University Press. pp. 27–28. ISBN 978-1-55458-035-4.
Saiedi, Nader (2008). Gate of the Heart. Waterloo, ON: Wilfrid Laurier University Press. p. 49. ISBN 978-1-55458-035-4.
Browne, Edward G. (1889). Bábism.
Amanat, Abbas (2000). "The Resurgence of Apocalyptic in Modern Islam". In Stephen J. Stein (ed.). The Encyclopedia of Apocalypticism. New York: Continuum. III: 230–254.
Esslemont, J.E. (1980). Bahá'u'lláh and the New Era (5th ed.). Wilmette, Illinois, USA: Bahá'í Publishing Trust. ISBN 0-87743-160-4.
Farah, Caesar E. (1970). Islam: Beliefs and Observances. Woodbury, NY: Barron's Educational Series.
Hutter, Manfred (2005). "Babis". In Lindsay Jones (ed.). Encyclopedia of Religion. Vol. 2 (2nd ed.). Detroit: Macmillan Reference USA. pp. 727–729.
Walbridge, John (2002). "Chap. 3". Essays and Notes on Bábí and Bahá'í History. East Lansing, Michigan: H-Bahai Digital Library.
MacEoin, Denis (23 March 2006). "Deconstructing and Reconstructing the Shari'a: the Bábí and Bahá'í Solutions to the Problem of Immutability". bahai-library.org. Retrieved 11 July 2006.
"Bábi", Encyclopædia Britannica, 9th ed., Vol. III, New York: Charles Scribner's Sons, 1878, p. 180 .
"Bâbi", Encyclopædia Britannica, American Rev., Vol. III, Chicago: Werner Co., 1893, pp. 180–181 .
"Bábíism", Encyclopædia Britannica, 11th ed., Vol. III, Cambridge: Cambridge University Press, 1911, pp. 94–95 .
Amanat, Abbas (1989). Resurrection and Renewal: The Making of the Bábí Movement in Iran 1844–1850. Ithaca, NY: Cornell University Press. ISBN 0-8014-2098-9.
Cheyne, Thomas Kelly (2007). The Reconciliation of Races and Religions. Echo Library. ISBN 1406845469.
Esslemont, J. E. (1980). Bahá'u'lláh and the New Era, An Introduction to the Bahá'í Faith (5th ed.). Wilmette, Illinois: Bahá'í Publishing Trust. ISBN 0-87743-160-4.
MacEoin, Denis (1994). Rituals in Babism and Baha'ism. Cambridge, UK: British Academic Press and Centre of Middle Eastern Studies, University of Cambridge. ISBN 1-85043-654-1.
Johnson, Todd M.; Brian J. Grim (2013). "Global Religious Populations, 1910–2010". The World's Religions in Figures: An Introduction to International Religious Demography. John Wiley & Sons. pp. 59–62. doi:10.1002/9781118555767.ch1. ISBN 9781118555767.
MacEoin, Denis (1992). The Sources for Early Bābī Doctrine and History: A Survey. Leiden, The Netherlands: Brill. ISBN 90-04-09462-8.
Nabíl-i-Zarandí (1932). The Dawn-Breakers: Nabíl’s Narrative. trans. Shoghi Effendi. Wilmette, Illinois: Bahá'í Publishing Trust. ISBN 0-900125-22-5.
Saiedi, Nader (2008). Gate of the Heart: Understanding the Writings of the Báb. Canada: Wilfrid Laurier University Press. ISBN 978-1-55458-056-9.
Smith, Peter (1987). The Bábí and Bahá'í Religions: From Messianic Shi'ism to a World Religion. Cambridge, UK: Cambridge University Press. ISBN 0-521-30128-9.
Wikisource has the text of the 1905 New International Encyclopedia article Babism.
Afnan, Habibuʾllah (2008). Ahang Rabbani, ed. The Genesis of the Bábí-Bahá'í Faiths in Shíráz and Fárs. Numen Book Series - Studies in the History of Religions - Texts and Sources in the History of Religions. 122. Leiden/Boston: Brill. ISBN 978-90-04-17054-4. ISSN 0169-8834.
MacEoin, Denis (2009). The Messiah of Shiraz: Studies in Early and Middle Babism. Leiden, The Netherlands: Brill. ISBN 90-04-17035-9.
How to choose motocross exhaust: Possibly the most common aftermarket upgrade to any bike is replacing the stock exhaust, whether it be a slip-on or a full system. In fact, it's almost rarer to see a bike with a stock exhaust than one with a shiny, trick aftermarket system. Riders upgrade for extra power, for sound, or to meet regulations in certain areas, but either way almost all of us want one. What's there to look for? Read on to find out.
When looking for an exhaust, there is one question to ask yourself first: two-stroke or four-stroke? This will make a bit of a difference in price, along with what options you have to work with in materials and combinations.
Let's start with four-strokes: do you want a slip-on or a full system? A slip-on for most bikes consists of an exhaust can and a connecting mid-pipe that attaches to the stock head pipe. These systems have most of the look of a full system, plus some power gains (as they're usually less restrictive than stock). They're also available with spark arrestors or in forestry models for those looking to meet standards for the trails in their area.
If you're looking to take fuller advantage of an aftermarket exhaust, you can also look at getting a full system. A full system includes the exhaust can and mid-pipe found in most slip-on kits, plus a head pipe to finish connecting the system to the cylinder. For most four-strokes, the different head pipes will create the most noticeable and largest difference in power delivery, so much so that some brands offer different head pipe options for certain models to suit the power characteristics the customer is looking for.
On the two-stroke side there are pipes and silencers available. These can be bought separately or together, and with some you can even mix and match brands to modify the power delivery to your liking. The pipe and its expansion chamber design can make a huge change in the power delivery of a two-stroke, and some brands offer different builds to achieve either a low-down, roll-on torque feel or a more aggressive snap. Both can be an advantage, depending on the terrain and type of riding you like to do.
Silencers also offer a bit of a tuning option: most motocross models are shorter in design, while a more off-road design is typically quite a bit longer. These once again can fine-tune where the power hits and how hard.
For four-strokes, a slip-on system can run between $300 and $500, depending upon the model and materials used. Most slip-ons are typically made from stainless steel and aluminum, sometimes featuring a little bling in the form of a carbon end-cap.
Moving up the price range we jump into full systems, which span a wider range, starting as low as $600 but reaching around $1400, depending upon materials and single vs. dual cans (like on some Hondas). The cheaper systems in this range are made from stainless steel and aluminum, while the more expensive models feature titanium and even carbon fiber. These more expensive models typically offer the same power as their cheaper brethren, but are lighter in weight. For the majority of the public, a stainless/aluminum system will do just fine, but for those who want every weight advantage or the bling factor, the titanium/carbon models are available.
With two-strokes, however, there aren't as many materials to choose from. For pipes, there are different finishes and designs, but most come in between the $200-$250 range and are built out of different forms of steel. Research is really key here, as the products are close in price but all offer a different feel.
With silencers there are a few more options in materials and designs. Price-wise, you'll find most motocross "shorties" in stainless/aluminum to be about $100-$120. Moving on to off-road or longer moto-based silencers, you'll see the price rise a bit, from $140-$200. The moto versions of these longer models deliver a different power characteristic than their shortie counterparts, while the off-road models typically are quieter, have spark arrestors, and offer the calmest form of power delivery. Last up are a few specialty titanium and carbon fiber/kevlar moto silencers, which are lighter and quite exotic. Some are old-school two-stroke race setups and offer a very aggressive hit, while others are based more on four-stroke designs and offer a much broader power delivery. These are definitely pricey, falling in the highest range at about $220-$300.
When cycling, never underestimate the importance of wearing cycling eyewear. Beyond looking fashionable and protecting your eyes from bright light, eyewear offers valuable protection from harmful UV rays and potential flying debris, e.g., glass, small rocks, insects, and dust. While competitive cyclists often focus on lightweight lenses and aerodynamics, upgrades may include prescription, photochromic, and polarized lenses. A few well-known brands are Oakley, Rudy Project, Tifosi Optics, Bolle, Uvex, POC, and Smith. Here are a few things to look for when purchasing your next pair of cycling eyewear.
Lens Type: Depending on your sensitivity to brightness, having a good pair of lenses is crucial in order to see the road ahead clearly, avoid as much glare as possible, and, for those on dawn or dusk patrol, keep clear sight while out spinning in the dark. Purchase your eyewear based on your riding conditions, especially if the particular pair you select includes only a single set of lenses.
Polarized lenses will help you avoid harsh glare lines created by the reflection of the sun off pavement, especially when riding towards that beautiful sunset.
Some cycling eyewear companies include an additional set of clear lenses, so you are never riding in the dark without protection.
Similar to the transition lenses you often find on prescription glasses, some manufacturers offer photochromic lenses that darken depending on the light. This is a great option, as it eliminates the need to carry two sets of lenses and change them out as the sun rises or sets.
Shatter Proof: Although most lens manufacturers commonly make their lenses shatter-resistant, make certain the pair you select offers the safety of shatterproof lenses. You won't want to be caught in a crash or accident with a shard of plastic or glass in your eye.
Nose pads: Something often overlooked is the comfort offered by those tiny sweat-proof nose pads. Enjoy whatever time you spend on your bike without the worry of losing your eyewear at the first bump in the road, simply by wearing properly fitting glasses with sweat-proof nose pads.
Safety and comfort should always be your focus when shopping for cycling eyewear. And of course, selecting a fashionable frame that fits properly makes it all the more likely you will wear them on every ride.
"Illegal Immigrants Steal Americans' Jobs."
The logic of economics applies across borders: county, state, and national. Deny this, and you deny economics.
Conservatives deny economics. They promote tariffs and import quotas across national borders, but not state and county borders.
Ludwig von Mises had a word for this: polylogism. This means multiple systems of logic.
Conservatives usually oppose trade union restrictions on hiring. They understand vaguely that the politicians have gotten into the act and created legal restrictions on employers who wish to hire workers who are not members of a trade union. A few conservatives may even understand that restrictions on hiring non-union members discriminate against non-union members. The law forces non-union members to seek employment from non-unionized businesses, which pay lower wages.
A very few conservatives may recognize that this form of labor discrimination is a subsidy to non-unionized businesses, which can hire non-union workers at below-market (government-protected market) wages. Usually, only libertarians are willing and able to follow the logic of unionized labor this far.
What we find, over and over, is that conservatives who reject the idea of union discrimination in the labor markets favor tariffs, import quotas, and laws against employing people who are not American citizens or green-card holders. In other words, as soon as they see an invisible line where a customs gate is located, they adopt exactly the same economic logic that is used by union members to justify discrimination that subsidizes them. This is polylogism.
The General Agreement on Tariffs and Trade (GATT) was a multilateral agreement regulating international trade. According to its preamble, its purpose was the “substantial reduction of tariffs and other trade barriers and the elimination of preferences, on a reciprocal and mutually advantageous basis.” It was negotiated during the United Nations Conference on Trade and Employment and was the outcome of the failure of negotiating governments to create the International Trade Organization (ITO). GATT was signed in 1947, took effect in 1948, and lasted until 1994; it was replaced by the World Trade Organization in 1995.
The Kennedy Round was the sixth session of General Agreement on Tariffs and Trade (GATT) trade negotiations held between 1964 and 1967 in Geneva, Switzerland. Congressional passage of the U.S. Trade Expansion Act in 1962 authorized the White House to conduct mutual tariff negotiations, ultimately leading to the Kennedy Round. Participation greatly increased over previous rounds. Sixty-six nations, representing 80% of world trade, attended the official opening on May 4, 1964, at the Palais des Nations. Despite several disagreements over details, the director general announced the round's success on May 15, 1967, and the final agreement was signed on June 30, 1967, the very last day permitted under the Trade Expansion Act. The round was named after U.S. President John F. Kennedy, who was assassinated six months before the opening negotiations.
Teddy Kennedy was the ramrod in the Senate for the Immigration Act of 1965. Lyndon Johnson signed it in 1965, and its provisions took full effect in 1968. This pried open the borders, and low-wage workers by the millions streamed through. These workers were a threat to the labor union movement. Cesar Chavez organized a labor union in 1962 to keep non-union workers from lowering wages and working conditions in the fields. His first major strike was in 1965. The following Wikipedia article mentions the split in the Kennedy family.
We have found additional records for פייגה איבשין.
פייגה איבשין is buried in the New Kiryat Shmona Cemetery, at the location shown on the map below. This GPS information is available ONLY on BillionGraves. Our technology can help you find the grave location, as well as other family members buried nearby.
פייגה איבשין was 14 years old when Adolf Hitler signs an order to begin the systematic euthanasia of mentally ill and disabled people. Adolf Hitler was a German politician, demagogue, and Pan-German revolutionary, who was the leader of the Nazi Party, Chancellor of Germany from 1933 to 1945 and Führer ("Leader") of Nazi Germany from 1934 to 1945. As dictator, Hitler initiated World War II in Europe with the invasion of Poland in September 1939, and was central to the Holocaust.
פייגה איבשין was 20 years old when World War II: Hiroshima, Japan is devastated when the atomic bomb "Little Boy" is dropped by the United States B-29 Enola Gay. Around 70,000 people are killed instantly, and some tens of thousands die in subsequent years from burns and radiation poisoning. World War II, also known as the Second World War, was a global war that lasted from 1939 to 1945, although conflicts reflecting the ideological clash between what would become the Allied and Axis blocs began earlier. The vast majority of the world's countries—including all of the great powers—eventually formed two opposing military alliances: the Allies and the Axis. It was the most global war in history; it directly involved more than 100 million people from over 30 countries. In a state of total war, the major participants threw their entire economic, industrial, and scientific capabilities behind the war effort, blurring the distinction between civilian and military resources. World War II was the deadliest conflict in human history, marked by 50 to 85 million fatalities, most of whom were civilians in the Soviet Union and China. It included massacres, the genocide of the Holocaust, strategic bombing, premeditated death from starvation and disease and the only use of nuclear weapons in war.
פייגה איבשין was 32 years old when Space Race: Launch of Sputnik 1, the first artificial satellite to orbit the Earth. The Space Race refers to the 20th-century competition between two Cold War rivals, the Soviet Union (USSR) and the United States (US), for dominance in spaceflight capability. It had its origins in the missile-based nuclear arms race between the two nations that occurred following World War II, aided by captured German missile technology and personnel from the Aggregat program. The technological superiority required for such dominance was seen as necessary for national security, and symbolic of ideological superiority. The Space Race spawned pioneering efforts to launch artificial satellites, uncrewed space probes of the Moon, Venus, and Mars, and human spaceflight in low Earth orbit and to the Moon.
פייגה איבשין was 38 years old when John F. Kennedy was assassinated by Lee Harvey Oswald in Dallas, Texas; hours later, Lyndon B. Johnson was sworn in aboard Air Force One as the 36th President of the United States. John Fitzgerald Kennedy, commonly referred to by his initials JFK, was an American politician who served as the 35th President of the United States from January 1961 until his assassination in November 1963. He served at the height of the Cold War, and the majority of his presidency dealt with managing relations with the Soviet Union. As a member of the Democratic Party, Kennedy represented the state of Massachusetts in the United States House of Representatives and the U.S. Senate prior to becoming president.
פייגה איבשין was 48 years old when Vietnam War: The last United States combat soldiers leave South Vietnam. The Vietnam War, also known as the Second Indochina War, and in Vietnam as the Resistance War Against America or simply the American War, was a conflict that occurred in Vietnam, Laos, and Cambodia from 1 November 1955 to the fall of Saigon on 30 April 1975. It was the second of the Indochina Wars and was officially fought between North Vietnam and the government of South Vietnam. The North Vietnamese army was supported by the Soviet Union, China, and other communist allies; the South Vietnamese army was supported by the United States, South Korea, Australia, Thailand and other anti-communist allies. The war is considered a Cold War-era proxy war by some US perspectives. The majority of Americans believe the war was unjustified. The war would last roughly 19 years and would also form the Laotian Civil War as well as the Cambodian Civil War, which also saw all three countries become communist states in 1975.
פייגה איבשין was 64 years old when The tanker Exxon Valdez spilled 10.8 million US gallons (260,000 bbl; 41,000 m3) of oil into Prince William Sound, Alaska, causing one of the most devastating man-made maritime environmental disasters. A tanker is a ship designed to transport or store liquids or gases in bulk. Major types of tankship include the oil tanker, the chemical tanker, and gas carrier. Tankers also carry commodities such as vegetable oils, molasses and wine. In the United States Navy and Military Sealift Command, a tanker used to refuel other ships is called an oiler but many other navies use the terms tanker and replenishment tanker.
פייגה איבשין was 64 years old when Nelson Mandela is released from Victor Verster Prison outside Cape Town, South Africa after 27 years as a political prisoner. Nelson Rolihlahla Mandela was a South African anti-apartheid revolutionary, political leader, and philanthropist who served as President of South Africa from 1994 to 1999. He was the country's first black head of state and the first elected in a fully representative democratic election. His government focused on dismantling the legacy of apartheid by tackling institutionalised racism and fostering racial reconciliation. Ideologically an African nationalist and socialist, he served as President of the African National Congress (ANC) party from 1991 to 1997.
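These biographical paragraphs follow a simple generated template: the stated age is just the event year minus the birth year. Below is a minimal sketch of that computation, assuming a birth year of 1925 (an inference from the ages stated above, not a fact given on the page):

```python
# Sketch of the "was N years old when..." pattern used by grave-record pages.
# BIRTH_YEAR is an assumption inferred from the ages above, not documented data.
BIRTH_YEAR = 1925

EVENTS = [
    (1939, "Adolf Hitler signed the euthanasia order"),
    (1945, "the atomic bombing of Hiroshima"),
    (1957, "the launch of Sputnik 1"),
    (1963, "the assassination of John F. Kennedy"),
    (1973, "the last U.S. combat soldiers left South Vietnam"),
    (1989, "the Exxon Valdez oil spill"),
]

def age_at(event_year: int, birth_year: int = BIRTH_YEAR) -> int:
    """Age in the event year, ignoring the exact birthday."""
    return event_year - birth_year

for year, event in EVENTS:
    print(f"was {age_at(year)} years old when {event}")
    # prints ages 14, 20, 32, 38, 48, 64 -- matching the paragraphs above
```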
Joy Paul Guilford (March 7, 1897 – November 26, 1987) was an American psychologist, one of the leading American exponents of factor analysis in the assessment of personality. He is well remembered for his psychometric studies of human intelligence and creativity. Guilford was an early proponent of the idea that intelligence is not a unitary concept. Based on his interest in individual differences, he explored the multidimensional aspects of the human mind, describing the structure of the human intellect based on a number of different abilities. His work emphasized that scores on intelligence tests cannot be taken as a unidimensional ranking that some researchers have argued indicates the superiority of some people, or groups of people, over others. In particular, Guilford showed that the most creative people may score lower on a standard IQ test due to their approach to the problems, which generates a larger number of possible solutions, some of which are original. Guilford's work, thus, allows for greater appreciation of the diversity of human thinking and abilities, without attributing different value to different people.
Joy Paul Guilford, known as J. P. Guilford, was born on March 7, 1897 in Marquette, Nebraska. His interest in individual differences started in his childhood, when he observed the differences in ability among the members of his own family.
As an undergraduate student at the University of Nebraska, he worked as an assistant in the psychology department. While in graduate school at Cornell University, from 1919 to 1921, he studied under Edward Titchener. He conducted intelligence testing on children. During his time at Cornell, he also served as director of the university's psychological clinic.
From 1927 to 1928, Guilford worked at the University of Kansas, after which he became Associate Professor at the University of Nebraska, remaining there from 1928 to 1940. In 1940 he was appointed a psychology professor at the University of Southern California, where he stayed until 1967.
During World War II, Guilford worked for the US Air Force Psychological Research Unit, as the Director of Psychological Research #3 at Santa Ana Army Air Base. He formed the Aptitude Project at the University of Southern California, and worked on the selection and ranking of aircrew trainees.
After the war, Guilford continued to work on the intelligence tests, focusing particularly on divergent thinking and creativity. He designed numerous tests that measured creative thinking.
Guilford retired from teaching in 1967, but continued to write and publish. He died on November 26, 1987 in Los Angeles, California.
Throughout his whole career Guilford was interested in individual differences in people. He was best known for his work in intelligence and creativity. Unlike many researchers who generated great controversy by suggesting that different groups ranked higher or lower on a measurement scale of intelligence (notably Hans Eysenck, Cyril Burt and others who suggested differences in intelligence among races), Guilford valued the differences. His research sought ways to uncover and understand the diverse ways the human intellect functions, recognizing that differences in scores on a test did not necessarily imply quantitative differences in a single ability, but rather qualitatively different abilities.
In studying creativity, Guilford identified several components of divergent thinking, among them fluency (the ability to produce a large number of ideas), flexibility (the ability to shift between categories of ideas), originality (the ability to produce unusual or novel ideas), and elaboration (the ability to systematize and organize the details of an idea in one's head and carry it out).
Guilford observed that "[o]rdinary IQ scales assess only a limited number of . . . [one's abilities], usually those most important for learning in school . . . [and one] may be high in some, medium in others, and low in still others" (Guilford 1977, p. 13).
During his tenure at the University of Southern California, Guilford devised several tests to measure the intellectual ability of creative people. Many of his divergent thinking tests have been adapted for use in schools and other educational settings to measure the ability of gifted students in placing them in special programs.
Building upon the views of L. L. Thurstone, Guilford rejected Charles Spearman's view that intelligence could be characterized by a single numerical parameter ("general intelligence factor" or g). He argued that intelligence consists of numerous intellectual abilities. He first proposed a model with 120, then 150, and finally 180 independently operating factors in intelligence.
Guilford proposed a three-dimensional cubical model to explain his theory of the structure of the intellect. According to this theory, an individual's performance on an intelligence test can be traced back to the underlying mental abilities, or "factors" of intelligence. These factors (abilities) were then organized along three dimensions: operations, content, and products.
The operations dimension comprises six general intellectual processes:
Cognition - The ability to understand, comprehend, discover, and become aware.
Memory Recording - The ability to encode information.
Memory Retention - The ability to recall information.
Divergent Production - The process of generating multiple solutions to a problem.
Convergent Production - The process of deducing a single solution to a problem.
Evaluation - The process of judging whether an answer is accurate, consistent, or valid.
The content dimension comprises five broad areas of information to which the operations are applied:
Auditory - Information perceived through hearing.
Visual - Information perceived through seeing.
Symbolic - Information perceived as symbols or signs that have no meaning by themselves; for example, Arabic numerals or the letters of an alphabet.
Semantic - Information perceived in words or sentences, whether oral, written, or silently in one's mind.
Behavioral - Information perceived as acts of an individual or individuals.
The product dimension describes the six kinds of results obtained from applying operations to contents:
Unit - Represents a single item of information.
Class - A set of items that share some attributes.
Relation - Represents a connection between items or variables; may be linked as opposites or in associations, sequences, or analogies.
System - An organization of items or networks with interacting parts.
Transformation - Changes perspectives, conversions, or mutations to knowledge; such as reversing the order of letters in a word.
Implication - Predictions, inferences, consequences, or anticipations of knowledge.
Therefore, according to Guilford there are 6 x 5 x 6 = 180 intellectual abilities or factors. Each ability stands for a particular operation in a particular content area and results in a specific product, such as Comprehension of Figural Units or Evaluation of Semantic Implications.
Guilford's original model was composed of 120 components because he had not separated Figural Content into separate Auditory and Visual contents, nor had he separated Memory into Memory Recording and Memory Retention. When he separated Figural into Auditory and Visual contents, his model increased to 5 x 5 x 6 = 150 categories. When Guilford separated the Memory functions, his model finally increased to the final 180 factors (Guilford 1980).
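Since each ability is one cell of an operations x contents x products cube, the factor counts can be checked by enumerating the Cartesian product of the three dimensions. The sketch below is illustrative only; the "X of Y Zs" naming is merely a convention mirroring examples like "Evaluation of Semantic Implications":

```python
from itertools import product

# Dimensions of Guilford's final Structure of Intellect model,
# using the category names listed above.
operations = ["Cognition", "Memory Recording", "Memory Retention",
              "Divergent Production", "Convergent Production", "Evaluation"]
contents = ["Auditory", "Visual", "Symbolic", "Semantic", "Behavioral"]
products = ["Unit", "Class", "Relation", "System", "Transformation", "Implication"]

# Each ability is one cell of the operations x contents x products cube.
factors = [f"{o} of {c} {p}s" for o, c, p in product(operations, contents, products)]

print(len(factors))   # 6 * 5 * 6 = 180
print(factors[0])     # "Cognition of Auditory Units"
```

The earlier counts follow the same arithmetic: 5 operations x 4 contents x 6 products = 120, and 5 x 5 x 6 = 150 after the Figural content was split into Auditory and Visual.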
Guilford was one of the first psychologists, together with L. L. Thurstone, who perceived intelligence not as a unitary concept, which could be captured in a single score, but as a set of possibly independent factors. Research from different fields, such as developmental psychology, artificial intelligence, and neurology, shows that the mind consists of several independent (albeit interdependent) modules or "intelligences."
Although his theory of intelligence factors has been superseded by more developed theories of multiple intelligence (most notably by those of Robert Sternberg and Howard Gardner), Guilford left a significant mark on research into intelligence. Many tests that are still used in modern intelligence testing were modified and developed under his guidance.
Guilford, J.P. 1939. General Psychology. Van Nostrand.
Guilford, J.P. 1950. Creativity. American Psychologist 5: 444-454.
Guilford, J.P. 1956. A Factor-analytic Study of Verbal Fluency: Studies of Aptitudes of High-level Personnel. University of Southern California.
Guilford, J.P. 1956. Fourteen Dimensions of Temperament. American Psychological Association.
Guilford, J.P. 1959. Traits of creativity in Creativity and its Cultivation. pp. 142-161. Harper and Row.
Guilford, J.P. 1968. Intelligence, Creativity and their Educational Implications. Robert R. Knapp.
Guilford, J.P. 1977. Way Beyond the IQ. Buffalo, NY: Creative Education Foundation.
Guilford, J.P. 1980. Some changes in the structure of intellect model. Educational and Psychological Measurement 48: 1-4.
Guilford, J.P. 1982. Cognitive psychology's ambiguities: Some suggested remedies. Psychological Review 89: 48-59.
Divergent Thinking. BookRags.com. Retrieved on March 9, 2007.
Jay Paul Guilford. Psychology Department, University of Sydney. Retrieved on March 9, 2007.
One Intelligence or Many? Alternative Approaches to Cognitive Abilities – Han S. Paik from the Washington University.
The Creativity / IQ Interface: Old Answers and Some New Questions – Maria McCann, Flinders University.
Did you know? Cannibal Holocaust depicted genuine on-screen animal killings and violence so convincing that the director was arrested and forced to prove the four actors weren't actually killed on screen.
Cannibal Holocaust is a 1980 Italian cannibal film directed by Ruggero Deodato from a screenplay by Gianfranco Clerici, starring Carl Gabriel Yorke, Robert Kerman, Francesca Ciardi and Luca Barbareschi. Cannibal Holocaust was filmed in the Amazon Rainforest with real indigenous tribes interacting with American and Italian actors.
The film tells the story of a missing documentary film crew who had gone to the Amazon to film cannibal tribes. A rescue mission, led by the New York University anthropologist Harold Monroe, recovers the film crew’s lost cans of film, which an American television station wishes to broadcast. Upon viewing the reels, Monroe is appalled by the team’s actions, and after learning their fate, he objects to the station’s intent to air the documentary.
Cannibal Holocaust is unique for its "found footage" structure, in which the gradual revelation of the recovered film's content functions similarly to a flashback. The film's notion of "recovered footage" has influenced the now-popular genre of found footage horror films, such as The Blair Witch Project.
Cannibal Holocaust achieved notoriety as its graphic violence aroused a great deal of controversy. After its premiere in Italy, it was seized by a local magistrate, and Deodato was arrested on obscenity charges. He was charged with making a snuff film due to rumors that claimed some actors were killed on camera. Although Deodato was later cleared, the film was banned in Italy, Australia, and several other countries due to its disturbing portrayal of graphic brutality, sexual assault, and animal violence. Some nations have since revoked the ban, but the film is still banned in several countries. Critics have suggested that the film is a commentary about civilized versus uncivilized society.
Cannibal Holocaust premiered on 7 February 1980 in the Italian city of Milan. Although the courts confiscated the film based on a citizen’s complaint, the initial audience reaction was positive. After seeing the film, director Sergio Leone wrote a letter to Deodato, which stated [translated], “Dear Ruggero, what a movie! The second part is a masterpiece of cinematographic realism, but everything seems so real that I think you will get in trouble with all the world.” In the ten days before it was seized, the film had grossed approximately $2 million.
Detractors, however, criticize the acting, the over-the-top gore, and the genuine animal slayings and point to an alleged hypocrisy that the film presents. Nick Schager criticized the brutality of the film, saying, “As clearly elucidated by its shocking gruesomeness—as well as its unabashedly racist portrait of indigenous folks it purports to sympathize with—the actual savages involved with Cannibal Holocaust are the ones behind the camera.” Some argue that Schager’s racism argument is supported by the fact that the real indigenous peoples in Brazil whose names were used in the film—the Yanomamo and Shamatari—are not fierce enemies as portrayed in the film, nor is either tribe truly cannibalistic (although the Yanomamo do partake in a form of post-mortem ritual cannibalism).
Robert Firsching of Allmovie made similar criticisms of the film’s content, saying, “While the film is undoubtedly gruesome enough to satisfy fans, its mixture of nauseating mondo animal slaughter, repulsive sexual violence, and pie-faced attempts at socially conscious moralizing make it rather distasteful morally as well.” Slant Magazine’s Eric Henderson said it is “…artful enough to demand serious critical consideration, yet foul enough to christen you a pervert for even bothering.” Cannibal Holocaust currently holds a 60% “Fresh” rating on the film review aggregate website Rotten Tomatoes, with an average rating of 5/10.
In recent years, Cannibal Holocaust has received accolades in various publications, as well as a cult following. The British film magazine Total Film ranked Cannibal Holocaust as the tenth greatest horror film of all time, and the film was included in a similar list of the top 25 horror films compiled by Wired. The film also came in eighth on IGN’s list of the ten greatest grindhouse films.
Deodato’s intentions regarding the Italian media coverage of the Red Brigades have also fallen under critical examination, and that examination has been expanded to include all sensationalism. Carter explores this, claiming that “[The lack of journalistic integrity] is shown through the interaction between Professor Monroe and the news agency that had backed the documentary crew. They continually push Monroe to finish editing the footage because blood and guts equal ratings.” Director Lloyd Kaufman claims that this form of exploitative journalism can still be seen in the media today and in programming such as reality television.
Despite these interpretations, Deodato has said in interviews that he had no intention in Cannibal Holocaust other than to make a film about cannibals. Actor Luca Barbareschi asserts this as well and believes that Deodato only uses his films to “put on a show”. Robert Kerman contradicts these assertions, however, stating that Deodato did tell him of political concerns involving the media in the making of this film.
Common methods for monetary calculations such as interest and depreciation.
Returns the depreciation of an asset for a specified period using the fixed-declining balance method. Cost is the initial cost of the asset. Salvage is the value at the end of the depreciation (sometimes called the salvage value of the asset). Life is the number of periods over which the asset is being depreciated (sometimes called the useful life of the asset). Period is the period for which you want to calculate the depreciation. Period must use the same units as life. Month is the number of months in the first year. If month is omitted, it is assumed to be 12.
The fixed-declining balance method computes depreciation at a fixed rate. Db uses the following formulas to calculate depreciation for a period: (cost - total depreciation from prior periods) * rate where: rate = 1 - ((salvage / cost) ^ (1 / life)), rounded to three decimal places. Depreciation for the first and last periods is a special case. For the first period, Db uses this formula: cost * rate * month / 12. For the last period, Db uses this formula: ((cost - total depreciation from prior periods) * rate * (12 - month)) / 12.
Data Assumptions: Initial cost=1,000,000 (A2); Salvage value=100,000 (A3); Lifetime in years=6 (A4).
Ex. Db([A2],[A3],[A4],1,7) - Depreciation in first year, with only 7 months calculated (186,083.33).
Ex. Db([A2],[A3],[A4],2,7) - Depreciation in second year (259,639.42).
Ex. Db([A2],[A3],[A4],3,7) - Depreciation in third year (176,814.44).
Ex. Db([A2],[A3],[A4],4,7) - Depreciation in fourth year (120,410.64).
Ex. Db([A2],[A3],[A4],5,7) - Depreciation in fifth year (81,999.64).
Ex. Db([A2],[A3],[A4],6,7) - Depreciation in sixth year (55,841.76).
Ex. Db([A2],[A3],[A4],7,7) - Depreciation in seventh year, with only 5 months calculated (15,845.10).
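To make the arithmetic above concrete, here is a minimal Python sketch of the fixed-declining balance rules as described; the function name, signature, and edge-case handling are illustrative, not part of the original tool:

    def db(cost, salvage, life, period, month=12):
        # Fixed rate, rounded to three decimal places as described above.
        rate = round(1 - (salvage / cost) ** (1.0 / life), 3)
        total = 0.0
        for p in range(1, period + 1):
            if p == 1:
                d = cost * rate * month / 12                   # partial first year
            elif p == life + 1:
                d = (cost - total) * rate * (12 - month) / 12  # partial final year (month < 12)
            else:
                d = (cost - total) * rate
            total += d
        return d

    # db(1_000_000, 100_000, 6, 1, 7) -> 186083.33
    # db(1_000_000, 100_000, 6, 7, 7) -> 15845.10 (approximately)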
Returns the depreciation of an asset for a specified period using the double-declining balance method or some other method you specify. Cost is the initial cost of the asset. Salvage is the value at the end of the depreciation (sometimes called the salvage value of the asset). Life is the number of periods over which the asset is being depreciated (sometimes called the useful life of the asset). Period is the period for which you want to calculate the depreciation. Period must use the same units as life. Factor is the rate at which the balance declines. If factor is omitted, it is assumed to be 2 (the double-declining balance method).
Note: All five arguments must be positive numbers.
The double-declining balance method computes depreciation at an accelerated rate. Depreciation is highest in the first period and decreases in successive periods. Ddb uses the following formula to calculate depreciation for a period: Min((cost - total depreciation from prior periods) * (factor/life), cost - salvage - total depreciation from prior periods). Change factor if you do not want to use the double-declining balance method. Use the Vdb function if you want to switch to the straight-line depreciation method when depreciation is greater than the declining balance calculation.
Data Assumptions: Initial cost=2400 (A2); Salvage value=300 (A3); Lifetime in years=10 (A4).
Ex. Ddb([A2],[A3],[A4]*365,1) - First day's depreciation.
Ex. Ddb([A2],[A3],[A4]*12,1,2) - First month's depreciation (40.00).
Ex. Ddb([A2],[A3],[A4],1,2) - First year's depreciation (480.00).
Ex. Ddb([A2],[A3],[A4],10) - Tenth year's depreciation.
Note: The results are rounded to two decimal places.
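A matching Python sketch of the declining-balance loop; names are illustrative, and the min() cap reflects the rule that an asset never depreciates below its salvage value:

    def ddb(cost, salvage, life, period, factor=2.0):
        total = 0.0
        for _ in range(int(period)):
            # Accelerated rate, capped so book value never drops below salvage.
            d = min((cost - total) * factor / life, cost - salvage - total)
            total += d
        return d

    # ddb(2400, 300, 10, 1, 2)      -> 480.00  (first year)
    # ddb(2400, 300, 10 * 12, 1, 2) -> 40.00   (first month)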
Returns the future value of an investment based on periodic, constant payments and a constant interest rate.
For a more complete description of the arguments in Fv and for more information on annuity functions, see Pv (below). Rate is the interest rate per period. Nper is the total number of payment periods in an annuity. Pmt is the payment made each period; it cannot change over the life of the annuity. Typically, Pmt contains principal and interest but no other fees or taxes. If Pmt is omitted, you must include the Pv argument. Pv is the present value, or the lump-sum amount that a series of future payments is worth right now. If Pv is omitted, it is assumed to be 0 (zero), and you must include the Pmt argument. Type is the number 0 or 1 and indicates when payments are due. If type is omitted, then it is assumed to be 0. Make sure that you are consistent about the units you use for specifying Rate and Nper. If you make monthly payments on a four-year loan at 12 percent annual interest, use 12%/12 for Rate and 4*12 for Nper. If you make annual payments on the same loan, use 12% for Rate and 4 for Nper. For all the arguments, cash you pay out, such as deposits to savings, is represented by negative numbers; cash you receive, such as dividend checks, is represented by positive numbers.
Data Assumptions: Annual interest rate=6% (A2); Number of payments=10 (A3); Amount of the payment=-200 (A4); Present value=-500 (A5); Payment is due at the beginning of the period=1 (A6)...(see above).
Ex. Fv([A2]/12, [A3], [A4], [A5], [A6]) – returns future value of an investment with these terms (2,581.40).
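The future value comes from the standard closed-form annuity equation; a Python sketch (argument names are mine, and a nonzero rate is assumed):

    def fv(rate, nper, pmt, pv=0.0, type_=0):
        # Sign convention: money paid out is negative, money received positive.
        growth = (1 + rate) ** nper
        return -(pv * growth + pmt * (1 + rate * type_) * (growth - 1) / rate)

    # fv(0.06 / 12, 10, -200, -500, 1) -> 2581.40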
Returns the interest rate for a fully invested security.
Note: Dates should be entered by using the Date function, or as results of other formulas or functions.
For example, use Date(2008,5,23) for the 23rd day of May, 2008. Problems can occur if dates are entered as text. Settlement is the security's settlement date. The security settlement date is the date after the issue date when the security is traded to the buyer. Maturity is the security's maturity date. The maturity date is the date when the security expires. Investment is the amount invested in the security. Redemption is the amount to be received at maturity. Basis is the type of day count basis to use.
The settlement date is the date a buyer purchases a coupon, such as a bond. The maturity date is the date when a coupon expires. For example, suppose a 30-year bond is issued on January 1, 2008, and is purchased by a buyer six months later. The issue date would be January 1, 2008, the settlement date would be July 1, 2008, and the maturity date would be January 1, 2038, which is 30 years after the January 1, 2008, issue date. Settlement, maturity, and basis are truncated to integers. If settlement or maturity is not a valid date, IntRate returns the #VALUE! error value. If investment = 0 or if redemption = 0, IntRate returns the #NUM! error value. If basis < 0 or if basis > 4, IntRate returns the #NUM! error value. If settlement = maturity, IntRate returns the #NUM! error value.
Data Assumptions: Settlement date=February 15, 2008 (A2); Maturity date=May 15, 2008 (A3); Investment=1,000,000 (A4); Redemption value=1,014,420 (A5); Actual/360 basis (see above)=2 (A6).
Ex. IntRate([A2],[A3],[A4],[A5],[A6]) - returns discount rate, for the terms of the bond above (0.05768 or 5.77%).
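For the actual/360 basis used in the example, the calculation reduces to a single expression; this Python sketch covers only bases 2 (actual/360) and 3 (actual/365) as a simplification:

    from datetime import date

    def intrate(settlement, maturity, investment, redemption, basis=2):
        days = (maturity - settlement).days   # actual day count between the dates
        year = 360 if basis == 2 else 365
        return (redemption - investment) / investment * (year / days)

    # intrate(date(2008, 2, 15), date(2008, 5, 15), 1_000_000, 1_014_420, 2) -> 0.05768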
Returns the interest payment for a given period for an investment based on periodic, constant payments and a constant interest rate. For a more complete description of the arguments in Ipmt and for more information about annuity functions, see Pv. Rate is the interest rate per period. Per is the period for which you want to find the interest and must be in the range 1 to Nper. Nper is the total number of payment periods in an annuity. Pv is the present value, or the lump-sum amount that a series of future payments is worth right now. Fv is the future value, or a cash balance you want to attain after the last payment is made. If Fv is omitted, it is assumed to be 0 (the future value of a loan, for example, is 0). Type is the number 0 or 1 and indicates when payments are due. If type is omitted, it is assumed to be 0.
Make sure that you are consistent about the units you use for specifying Rate and Nper. If you make monthly payments on a four-year loan at 12 percent annual interest, use 12%/12 for Rate and 4*12 for Nper. If you make annual payments on the same loan, use 12% for Rate and 4 for Nper. For all the arguments, cash you pay out, such as deposits to savings, is represented by negative numbers; cash you receive, such as dividend checks, is represented by positive numbers.
Data Assumptions: Annual interest=10% (A2); Period for which you want to find the interest=1 (A3); Years of loan=3 (A4); Present value of loan=8000 (A5).
Ex. Ipmt([A2]/12, [A3]*3, [A4], [A5]) - Interest due in the third and final period of a loan with the terms above, using a monthly rate (-22.41).
Note: The interest rate is divided by 12 to get a monthly rate.
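One way to see where the interest figure comes from is to roll the loan balance forward period by period; a minimal Python sketch for end-of-period payments only (helper and argument names are mine):

    def pmt_(rate, nper, pv, fv=0.0):
        g = (1 + rate) ** nper
        return -(pv * g + fv) * rate / (g - 1)

    def ipmt(rate, per, nper, pv, fv=0.0):
        p = pmt_(rate, nper, pv, fv)
        balance = pv
        for _ in range(1, per):          # advance the balance to the requested period
            balance = balance * (1 + rate) + p
        return -(balance * rate)

    # ipmt(0.10 / 12, 1, 3 * 12, 8000) -> -66.67 (first month of a three-year loan)
    # ipmt(0.10 / 12, 3, 3, 8000)      -> -22.41 (the example above)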
Returns the number of periods for an investment based on periodic, constant payments and a constant interest rate. For a more complete description of the arguments in Nper and for more information about annuity functions, see Pv (below). Rate is the interest rate per period. Pmt is the payment made each period; it cannot change over the life of the annuity. Typically, Pmt contains principal and interest but no other fees or taxes. Pv is the present value, or the lump-sum amount that a series of future payments is worth right now. Fv is the future value, or a cash balance you want to attain after the last payment is made. If Fv is omitted, it is assumed to be 0 (the future value of a loan, for example, is 0). Type is the number 0 or 1 and indicates when payments are due.
Set Type equal to 0 (or omitted) if payments are due at the end of the period. Set Type equal to 1 if payments are due at the beginning of the period.
Data Assumptions: Annual interest rate=12% (A2); Payment made each period=-100 (A3); Present Value=-1000 (A4); Future Value=10000 (A5); Payment is due at the beginning of the period=1 (A6).
Ex. Nper([A2]/12, [A3], [A4], [A5], 1) - Periods for the investment with the above terms (60).
Ex. Nper([A2]/12, [A3], [A4], [A5]) - Periods for the investment with the above terms, except payments are made at the end of the period (60).
Ex. Nper([A2]/12, [A3], [A4]) - Periods for the investment with the above terms, except with a future value of 0 (-9.578).
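The period count follows from solving the annuity equation for n; a short Python sketch (assumes a nonzero rate and arguments for which the logarithm is defined):

    from math import log

    def nper(rate, pmt, pv, fv=0.0, type_=0):
        a = pmt * (1 + rate * type_) / rate
        return log((a - fv) / (a + pv)) / log(1 + rate)

    # nper(0.12 / 12, -100, -1000, 10000, 1) -> 59.67 (rounds to 60)
    # nper(0.12 / 12, -100, -1000)           -> -9.578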
Calculates the net present value of an investment by using a discount rate and a series of future payments (negative values) and income (positive values). Rate is the rate of discount over the length of one period. Value1, value2, ... are 1 to 29 arguments representing the payments and income. Value1, value2, ... must be equally spaced in time and occur at the end of each period. Npv uses the order of value1, value2, ... to interpret the order of cash flows. Be sure to enter your payment and income values in the correct sequence. Arguments that are numbers, empty cells, logical values, or text representations of numbers are counted; arguments that are error values or text that cannot be translated into numbers are ignored. If an argument is an array or reference, then only numbers in that array or reference are counted. Empty cells, logical values, text, or error values in the array or reference are ignored.
The Npv investment begins one period before the date of the value1 cash flow and ends with the last cash flow in the list. The Npv calculation is based on future cash flows. If your first cash flow occurs at the beginning of the first period, the first value must be added to the Npv result, not included in the values arguments. For more information, see the example below. Npv is similar to the Pv function (present value). The primary difference between Pv and Npv is that Pv allows cash flows to begin either at the end or at the beginning of the period. Unlike the variable Npv cash flow values, Pv cash flows must be constant throughout the investment. For information about annuities and financial functions, see Pv. Npv is also related to the Irr function (internal rate of return). Irr is the rate for which Npv equals zero: Npv(Irr(...), ...) = 0.
Data Assumptions: Annual discount rate=10% (A2); Initial cost of investment one year from today=-10,000 (A3); Return from first year=3,000 (A4); Return from second year=4,200 (A5); Return from third year=6,800 (A6).
Ex. Npv([A2], [A3], [A4], [A5], [A6]) - Net present value of this investment (1,188.44) ...In the preceding example, you include the initial $10,000 cost as one of the values, because the payment occurs at the end of the first period.
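A Python sketch of the discounting rule, with the first cash flow discounted one full period as described:

    def npv(rate, *values):
        # Value i (1-based) is received at the end of period i.
        return sum(v / (1 + rate) ** i for i, v in enumerate(values, start=1))

    # npv(0.10, -10000, 3000, 4200, 6800) -> 1188.44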
Calculates the payment for a loan based on constant payments and a constant interest rate. For a more complete description of the arguments in Pmt, see the Pv function (below). Rate is the interest rate for the loan. Nper is the total number of payments for the loan. Pv is the present value, or the total amount that a series of future payments is worth now; also known as the principal. Fv is the future value, or a cash balance you want to attain after the last payment is made. If Fv is omitted, it is assumed to be 0 (zero), that is, the future value of a loan is 0. Type is the number 0 (zero) or 1 and indicates when payments are due.
The payment returned by PMT includes principal and interest but no taxes, reserve payments, or fees sometimes associated with loans. Make sure that you are consistent about the units you use for specifying rate and nper. If you make monthly payments on a four-year loan at an annual interest rate of 12 percent, use 12%/12 for rate and 4*12 for nper. If you make annual payments on the same loan, use 12 percent for rate and 4 for nper.
Data Assumptions: Annual interest rate=8% (A2); Number of months of payments=10 (A3); Amount of loan=10000 (A4).
Ex. Pmt([A2]/12, [A3], [A4]) - Monthly payment for a loan with the above terms (-1,037.03).
Ex. Pmt([A2]/12, [A3], [A4], 0, 1) - Monthly payment for a loan with the above terms, except payments are due at the beginning of the period (-1,030.16).
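The payment itself comes from the same annuity equation; a Python sketch (names are mine, nonzero rate assumed):

    def pmt(rate, nper, pv, fv=0.0, type_=0):
        g = (1 + rate) ** nper
        return -(pv * g + fv) * rate / ((1 + rate * type_) * (g - 1))

    # pmt(0.08 / 12, 10, 10000)       -> -1037.03
    # pmt(0.08 / 12, 10, 10000, 0, 1) -> -1030.16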
Returns the payment on the principal for a given period for an investment based on periodic, constant payments and a constant interest rate. For a more complete description of the arguments in Ppmt, see Pv (below). Rate is the interest rate per period. Per specifies the period and must be in the range 1 to Nper. Nper is the total number of payment periods in an annuity. Pv is the present value—the total amount that a series of future payments is worth now. Fv is the future value, or a cash balance you want to attain after the last payment is made. If Fv is omitted, it is assumed to be 0 (zero), that is, the future value of a loan is 0. Type is the number 0 or 1 and indicates when payments are due.
Make sure that you are consistent about the units you use for specifying rate and nper. If you make monthly payments on a four-year loan at 12 percent annual interest, use 12%/12 for rate and 4*12 for nper. If you make annual payments on the same loan, use 12% for rate and 4 for nper.
Data Assumptions: Annual interest rate=10% (A2); Number of years in the loan=2 (A3); Amount of loan=2000 (A4).
Ex. Ppmt([A2]/12, 1, [A3]*12, [A4]) - Payment on principle for the first month of loan (-75.62).
Note: The interest rate is divided by 12 to get a monthly rate. The number of years the money is paid out is multiplied by 12 to get the number of payments.
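Since the principal portion is simply the payment minus that period's interest, a Python sketch can reuse the balance roll-forward (end-of-period payments only; names are mine):

    def ppmt(rate, per, nper, pv, fv=0.0):
        g = (1 + rate) ** nper
        payment = -(pv * g + fv) * rate / (g - 1)
        balance = pv
        for _ in range(1, per):
            balance = balance * (1 + rate) + payment
        # Principal = total payment minus the interest accrued on the balance.
        return payment + balance * rate

    # ppmt(0.10 / 12, 1, 2 * 12, 2000) -> -75.62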
Returns the present value of an investment. The present value is the total amount that a series of future payments is worth now. For example, when you borrow money, the loan amount is the present value to the lender. Rate is the interest rate per period. For example, if you obtain a car loan at a 10% annual interest rate and make monthly payments, your interest rate per month is 10%/12, or 0.83%. You would enter 10%/12, or 0.83%, or 0.0083, into the formula as the rate. Nper is the total number of payment periods in an annuity. For example, if you get a four-year car loan and make monthly payments, your loan has 4*12 (or 48) periods. You would enter 48 into the formula for nper. Pmt is the payment made each period and cannot change over the life of the annuity. Typically, pmt includes principal and interest, but no other fees or taxes. For example, the monthly payments on a $10,000, four-year car loan at 12 percent are $263.33. You would enter -263.33 into the formula as the pmt. If pmt is omitted, you must include the fv argument. Fv is the future value, or a cash balance you want to attain after the last payment is made. If fv is omitted, then it is assumed to be 0 (the future value of a loan, for example, is 0). For example, if you want to save $50,000 to pay for a special project in 18 years, then $50,000 is the future value. You could then make a conservative guess at an interest rate and determine how much you must save each month. If fv is omitted, then you must include the pmt argument. Type is the number 0 or 1 and indicates when payments are due.
Make sure that you are consistent about the units you use for specifying rate and nper. If you make monthly payments on a four-year loan at 12 percent annual interest, use 12%/12 for rate and 4*12 for nper. If you make annual payments on the same loan, use 12% for rate and 4 for nper. In annuity functions, cash you pay out, such as a deposit to savings, is represented by a negative number; cash you receive, such as a dividend check, is represented by a positive number. For example, a $1,000 deposit to the bank would be represented by the argument -1000 if you are the depositor and by the argument 1000 if you are the bank.
Data Assumptions: Money paid out of an insurance annuity at the end of every month=500 (A2); 8% is the interest rate earned on the money paid out (A3); 20 is the number of years the money will be paid out (A4).
Ex. Pv([A3]/12, 12*[A4], [A2], , 0) - Present value of an annuity with the stated terms (-59,777.15). The result is negative because it represents money that you would pay in an outgoing cash flow. If you are asked to pay ($60,000) for the annuity, you would determine this would not be a good investment because the present value of the annuity (59,777.15) is less than what you are asked to pay.
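A Python sketch of the present-value computation, solving the same annuity equation for pv (names are mine, nonzero rate assumed):

    def pv(rate, nper, pmt, fv=0.0, type_=0):
        g = (1 + rate) ** nper
        return -(fv + pmt * (1 + rate * type_) * (g - 1) / rate) / g

    # pv(0.08 / 12, 12 * 20, 500, 0, 0) -> -59777.15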
Returns the interest rate per period of an annuity. Rate is calculated by iteration and can have zero or more solutions. If the successive results of Rate do not converge to within 0.0000001 after 20 iterations, Rate returns the #NUM! error value. For a complete description of the arguments nper, pmt, pv, fv, and type, see Pv (above). Nper is the total number of payment periods in an annuity. Pmt is the payment made each period and cannot change over the life of the annuity. Typically, pmt includes principal and interest but no other fees or taxes. If pmt is omitted, you must include the fv argument. Pv is the present value—the total amount that a series of future payments is worth now. Fv is the future value, or a cash balance you want to attain after the last payment is made. If fv is omitted, it is assumed to be 0 (the future value of a loan, for example, is 0). Type is the number 0 or 1 and indicates when payments are due.
Guess is your guess for what the rate will be. If you omit guess, it is assumed to be 10 percent. If Rate does not converge, try different values for guess. Rate usually converges if guess is between 0 and 1. Make sure that you are consistent about the units you use for specifying guess and nper. If you make monthly payments on a four-year loan at 12 percent annual interest, use 12%/12 for guess and 4*12 for nper. If you make annual payments on the same loan, use 12% for guess and 4 for nper.
Data Assumptions: Years of the loan=4 (A2); Monthly payment=-200 (A3); Amount of the loan=8000 (A4).
Ex. Rate([A2]*12, [A3], [A4]) - Monthly rate of the loan with the stated terms (1%).
Note: The number of years of the loan is multiplied by 12 to get the number of months.
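Because there is no closed form for the rate, it must be found numerically. This Python sketch uses bisection rather than the 20-step iteration described above; that is an implementation choice of mine, not the original algorithm:

    def rate(nper, pmt, pv, fv=0.0, type_=0):
        # Residual of the annuity balance equation; the root is the rate.
        def f(r):
            g = (1 + r) ** nper
            return pv * g + pmt * (1 + r * type_) * (g - 1) / r + fv
        lo, hi = 1e-9, 1.0  # assumes the solution lies between ~0 and 100% per period
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    # rate(4 * 12, -200, 8000) -> 0.0077 per month (about 1% when rounded)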
Returns the straight-line depreciation of an asset for one period. Cost is the initial cost of the asset. Salvage is the value at the end of the depreciation (sometimes called the salvage value of the asset). Life is the number of periods over which the asset is depreciated (sometimes called the useful life of the asset).
Data Assumptions: Cost=30,000 (A2); Salvage value=7,500 (A3); Years of useful life=10 (A4).
Ex. Sln([A2], [A3], [A4]) - The depreciation allowance for each year (2,250).
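Straight-line depreciation is the simplest of the set; a one-line Python sketch:

    def sln(cost, salvage, life):
        # Equal allowance in every period of the asset's useful life.
        return (cost - salvage) / life

    # sln(30000, 7500, 10) -> 2250.0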
Returns the sum-of-years' digits depreciation of an asset for a specified period.
Cost is the initial cost of the asset. Salvage is the value at the end of the depreciation (sometimes called the salvage value of the asset). Life is the number of periods over which the asset is depreciated (sometimes called the useful life of the asset). Per is the period and must use the same units as life.
Data Assumptions: initial cost=30,000 (A2); Salvage value=7,500 (A3); Lifespan in years=10 (A4).
Ex. Syd([A2], [A3], [A4], 1) - Yearly depreciation allowance for the first year (4,090.91).
Ex. Syd([A2], [A3], [A4], 10) - Yearly depreciation allowance for the tenth year (409.09).
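Finally, a Python sketch of the sum-of-years' digits weighting, which reproduces the example values above:

    def syd(cost, salvage, life, per):
        # Period `per` gets weight (life - per + 1) out of 1 + 2 + ... + life.
        return (cost - salvage) * (life - per + 1) * 2 / (life * (life + 1))

    # syd(30000, 7500, 10, 1)  -> 4090.91
    # syd(30000, 7500, 10, 10) -> 409.09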
What is the legal regulation of cryptocurrency transactions in Australia? Australia is a developed state that has created the conditions for financial companies and technology centers to develop.
In a short period of time, the country has adopted some of the most progressive approaches in the financial industry.
The Australian government is implementing Blockchain technology across public administration: mail, ground transportation, and so on.
State programs promote the development of commercial projects and programs that focus on working with digital money and the latest decentralized technologies.
Digital money in Australia is not considered a financial product and therefore does not require licensing. The legislature adopted a Code of Conduct for players in the digital currency industry, developed by the Australian Digital Currency Commerce Association.
The Code regulates relationships between participants in the country's cryptocurrency business, but compliance is mandatory only for members of the above-mentioned Association.
Recently, the state has been actively combating the laundering of illicit proceeds, and countering the financing of terrorism is a main direction of policy development.
It should be noted that operations with cryptocurrency are taxed on general grounds. When a transaction is completed, both income tax and the goods and services tax (GST) may apply.
A feature of the Australian tax system is the double taxation of operations with digital currency: tax is levied both when fiat funds are exchanged for cryptocurrency and again when goods are purchased with it.
In 2013, Bitcoin was recognized by the Reserve Bank as an alternative to foreign currency and the existing payment system.
In 2014, the regulator began to consider levying taxes on exchange operations, but the legal mechanism was never developed. After conducting its reviews, the Securities and Investments Commission concluded that digital currency cannot be equated to a financial product.
In 2015, the Australian Treasury published a financial report that demonstrated the imperfections of the current tax system: when assembling and processing the data, the treasury could not take digital funds into account.
At the end of 2016, the question arose of developing new standards for recording digital cash and intangible assets. Active consultations are under way on project documentation that would amend the current legislation, with completion projected for 2018.
The main legislative act regulating operations with cryptocurrency in Australia is the Code of Conduct for players in the digital currency industry, which took effect in December 2016.
Among other things, the Code obliges participants to compensate customers for services and goods of inadequate quality.
If violations of the Code's requirements are revealed, the Association is authorized to impose fines.
In 2014, cryptocurrency entrepreneurs received a report from the Australian Taxation Office indicating that income and profits received from dealing with digital currencies are subject to income tax.
This created a significant double-taxation problem: a participant in a transaction pays tax once when exchanging fiat money for digital currency, and a second time when purchasing goods or services with it. If the cryptocurrency was acquired for personal use and cost less than ten thousand Australian dollars, the user is exempt from the double tax payment.
The government intends to abolish double taxation of cryptocurrency operations in the near future, and digital money may yet displace fiat entirely. Already today, with mutual consent, an employer can pay wages in cryptocurrency. Australian legislation does not stand still, adjusting its regulatory framework to the parameters of the modern world.
To correctly implement the procedure for the settlement of cryptocurrency transactions in Australia, contact Eternity Law International. Our company will provide the best conditions for the legalization of the procedure, simply call us.
Stopped by J&R this evening to pick up a new camera lens for my Digital Rebel, and while wandering through the computer department I ran into a textbook case of what Rob Rosenberger calls "False Authority Syndrome".
1) You have just sold someone who couldn't pick out her own hard drive an internal drive, which requires that she crack her case open and do the install herself. Additionally, you never verified whether she had SATA or EIDE. A desktop or a laptop? Once she's done installing the hardware, you're expecting her to figure out how to partition a drive, an option that's well hidden in the depths of Windows XP's computer management.
2) If she follows his advice and partitions the drive, then considering the overhead for partition tables, the filesystem, and "hard drive math", the best-case scenario is that she just barely fills the partition. As the 100 Gigs of data was a guess (really, who has exactly 100 Gigs of data, and not 99 or 101?), odds are she will not have enough disk space for her files.
3) He gave her no instructions on how to do the backup.
4) The backup is on the same machine. If something goes wrong with the computer, she'll need to pull that drive out and mount it in another machine if she wants to get her data back.
5) The backup is in the same location, so when the place burns down she loses both her original and her backup.
6) THE BACKUP IS ON THE SAME GORAM HARD DRIVE, SO WHEN THE "DRIVE FAILS" YOU'LL LOSE BOTH THE ORIGINAL AND THE BACKUP. Yes, this may protect against accidental erasure and corruption, if you realize it quickly enough before the next backup is done (let's face it, with that partition scheme, there's no room for any sort of archive). But almost every drive problem I've had has been hardware based (drive controllers, head crashes, etc.).
So, being the helpful human I am, I try to insert myself into the conversation: "Excuse me, I don't think that's the best advice I've heard. You'll probably run out of room really fast if you make two 100-Gig partitions, and having the original and backup in the same place isn't the wisest idea. Oh, BTW, how handy are you with a screwdriver?"
His response was, "Don't listen to this guy; if he really knew this stuff he'd be working here. I'm a J&R senior salesperson, what's his qualifications?"
All I said was "a Master's degree in Computer Science from Columbia University". I was proud that I didn't ask him how that made him more qualified than a Petco Pet Food Stacker.
She put down the drive and said she needed to think about it.
In my past life, I did some home technical support and often wondered where people got some of their harebrained ideas about technology and how things work. As I've gotten older I've found plenty of stories like this, and now I know. I wonder how many people are using the J&R senior salesman backup method, and whether there are enough smart geeks out there to save them from the J&R dweebs.
On 30 October 1974 the undefeated heavyweight champion of the world met the most famous boxer of all time in Kinshasa, the capital of Zaire (now the Democratic Republic of the Congo). The watching crowd of 60,000 included celebrities, literary figures, and Zaire's despotic ruler Mobutu Sese Seko. The bout between George Foreman and superstar challenger Muhammad Ali was dubbed the 'Rumble in the Jungle' by its flamboyant organiser Don King and is now considered to be one of the greatest sporting events of all time.
Untouchable during his early years, Ali's career had been derailed by a three-and-a-half-year ban for refusing the army draft in 1967. He returned to the ring in 1970, winning his first two comeback fights, but then lost a title shot against new heavyweight champ Joe Frazier in March 1971.
Though he fought on, Ali's days as a world champion appeared to be numbered. After the defeat to Frazier he contested 10 successful bouts, but then lost to Ken Norton in March 1973. He won a rematch six months later, and managed to beat the now ex-champion Frazier in March 1974, but few believed that the 32-year-old could take on the heavyweight division's new king, George Foreman.
By the time he came to fight Ali, 'Big George' was an unstoppable force of nature. Having taken gold at the 1968 Mexico City Olympics he turned pro and quickly began blazing a trail to the top of the heavyweight ranks.
Following 37 successive victories – all but four of them coming by knockout – he got his title shot against Frazier in January 1973. 'Smokin' Joe' was undefeated at this stage, but Foreman decimated the champion. The 24-year-old challenger knocked Frazier down six times in only two rounds before the bout was stopped. The heavyweight division had a new champ, one who it appeared could reign for years to come.
While Ali's abilities appeared to have faded, his star power certainly hadn't. Though he would go on to become universally popular, it is fair to say that Ali was a polarising figure during the mid seventies. Millions loved him for his boxing and his bravado, but he was loathed by others for refusing the draft and his close association with the controversial Nation of Islam group. Many wanted to see Ali win back his title, but some would have taken great pleasure in watching him receive a beating. A fight between Ali and Foreman would be pure dynamite.
The man to pull it all together was Don King. This was among the first ventures into boxing for the irrepressible King, a former illegal bookmaker with convictions for 'justifiable homicide' and manslaughter. He led a consortium of backers who stood to earn big money when the fight was broadcast at theatres across the U.S., and on television the world over.
But while King had united a wealthy group of backers he would need yet more money to bankroll the $5million purse that each fighter had been promised. To do so, he required someone possessing vast wealth and an even larger ego.
King found his man in Mobutu Sese Seko, the president of Zaire. Mobutu was an authoritarian tyrant who plundered his nation's wealth to live a life of luxury while the people scraped by in abject poverty. He was a maniac of many stripes – a kleptomaniac, a megalomaniac, and certainly an egomaniac – who wanted to lead a famous nation. He thus needed a big event to put Zaire on the map, and what better than the biggest boxing match of all time?
King used the fact that the fight would take place in Zaire to frame it as an event of African unity, though he would have staged it wherever the money was sufficient to pay the combatants. Nevertheless, with the likes of James Brown and B.B. King playing alongside several African performers at a festival to promote the fight, it lived up to billing.
Ahead of the bout both fighters spent several weeks in Zaire training and acclimatising to the local climate. Originally scheduled for 25 September, the fight was postponed to 30 October after Foreman was cut near his right eye during training.
Finally, after a seemingly never-ending wait and considerable hype, Ali and Foreman stepped into the ring at Kinshasa's 20th of May Stadium. By the time they did so it was 4am local time, which allowed the fight to be screened at 10pm in key U.S. markets. In the words of the New York Times, each man had "been assured $5‐million to alter his sleeping habits."
Foreman was the overwhelming favourite. Seven years Ali's junior, a ferocious hitter and built like a tank, the bookies felt sure he would have too much for the savvy 32-year-old. Ali was no longer a lithe youngster; though not overweight he was puffier than in his youth, a look that suggested a degree of slowness.
In fact there were sincere concerns for Ali's safety, such was Foreman's ferocity. The champion had won his previous eight fights inside the first two rounds, including victories against the two men who had beaten Ali. He was machine-like in his demolition of opponents.
Aware that beating Foreman would require something different, Ali deployed a dangerous and incredibly brave strategy. Dubbed "rope-a-dope", his game plan was to retreat to the ropes and allow Foreman to unleash blows at will, with Ali dodging the worst but taking many to his body and arms. "Is that all you got, George?" Ali would ask after eating another huge punch from the hardest hitter on the planet. "Is that all you got?"
Despite the onslaught Ali survived, while Foreman began to tire. The champion hadn't fought beyond the fourth round in more than three years. As the fight progressed, he was heading into uncharted territory.
With Foreman tiring, his downfall became inevitable. He was conditioned to finish his prey quickly, not pursue it for an hour. Ali came out punching in the eighth and Foreman had no answer. With seconds remaining in the round the challenger floored the champion, who was as exhausted as he was hurt. The fight was stopped and the 60,000-strong crowd erupted. Incredibly, Ali was champion once more.
The Rumble in the Jungle was a momentous fight that more than lived up to the hype. Today, it is considered one of the greatest sporting events of all time. Perhaps more than anything, it should be remembered as the fight that secured Muhammad Ali's status as the biggest star in the history of sport.
Here are the Kings and Queens of the Netherlands, since the modern government was formed in 1815.
1 King William I 1815–1840 William I, born Willem Frederik Prins van Oranje-Nassau (The Hague, August 24, 1772 - Berlin, December 12, 1843), was the first King of the Netherlands. He succeeded his father as head of the House of Orange-Nassau in 1805, was named 'Sovereign Prince' of the Netherlands in 1813, proclaimed himself King in 1815, and abdicated in 1840. William I was also the Grand Duke of Luxembourg after 1815 and Duke of Limburg after 1839 until his abdication in 1840.
2 King William II 1840–1849 William II (William Frederick George Louis) (December 6, 1792 – March 17, 1849) was King of the Netherlands, Grand Duke of Luxembourg, and Duke of Limburg from October 7, 1840 until his death.
3 King William III 1849–1890 William III (Willem Alexander Paul Frederik Lodewijk van Oranje-Nassau) (February 19, 1817 – November 23, 1890) was King of the Netherlands and Grand Duke of Luxembourg from 1849 until his death, and Duke of Limburg until the abolition of the Duchy in 1866.
4 Queen Wilhelmina 1890–1948 Wilhelmina (Wilhelmina Helena Pauline Marie of Orange-Nassau; August 31, 1880 – November 28, 1962) was queen regnant of the Kingdom of the Netherlands from 1890 to 1948. She ruled the Netherlands for fifty-eight years, longer than any other Dutch monarch. Her reign saw many turning points in both Dutch and world history: World War I and World War II, the Great Depression, as well as the decline of the Netherlands as a major colonial empire. Outside the Netherlands she is primarily remembered for her role in the Second World War, in which she proved to be a great inspiration to the Dutch resistance, as well as a prominent leader of the Dutch government in exile.
5 Queen Juliana 1948–1980 Juliana (Juliana Emma Louise Marie Wilhelmina van Oranje-Nassau; April 30, 1909 – March 20, 2004) was Queen regnant of the Kingdom of the Netherlands from her mother's abdication in 1948 to her own abdication in 1980.
6 Queen Beatrix 1980–2013 Beatrix (Beatrix Wilhelmina Armgard; born January 31, 1938) was Queen regnant of the Kingdom of the Netherlands from April 30, 1980, to April 30, 2013, when she abdicated.
7 King Willem-Alexander 2013–Incumbent Willem-Alexander (Willem-Alexander Claus George Ferdinand; born 27 April 1967) is the King of the Kingdom of the Netherlands, including the Netherlands proper (with the Caribbean Netherlands) and the countries of Curaçao, Aruba, and Sint Maarten. He is head of the Dutch royal house and the House of Amsberg.
President Obama’s Immigration Plan to be reconsidered?
The article reads in part: “The Obama administration asked the Supreme Court Monday to reconsider the president’s plan to shield millions of undocumented immigrants from deportation once the court again has nine members.
The court last month said it was split 4 to 4 on whether lower courts were correct when they blocked implementation of President Obama’s plan, which he announced in 2014 after Congress failed to pass comprehensive immigration reform. Obama’s plan would have shielded those who have been in the country for years without committing serious crimes and have family ties to those here legally.”
Although it is highly unlikely that the court will grant such a petition, it is not impossible. The most likely result is that the court will refuse to reconsider the petition, and it will be the next president who has the most impact on future U.S. immigration policy. Hillary Clinton is likely to be pro-immigrant and have pro-immigrant policies, while Donald Trump will likely be her exact opposite in this area.
Lawrence Gruner is a Sacramento Immigration Attorney and Sacramento Immigration Lawyer with over 20 years of handling Immigration cases. He may be reached at 916-760-7270. His office handles cases throughout California, the United States and the World. He would be happy to talk with you about your immigration matter.
Claudia Winkleman has revealed why she is nervous for the contestants ahead of the premiere of Britain's Best Home Cook.
The Strictly Come Dancing host joins former Great British Bake Off judge Mary Berry on the BBC's new cooking competition Britain's Best Home Cook, which will see ten contestants living and cooking together while competing to win.
Each week, the contestants will be set challenges that test their skill, creativity and individual flair, before culminating in a final elimination round to decide who will leave the show.
And, speaking ahead of the show's premiere in May, Claudia revealed the role she's playing in calming the contestants' nerves.
'I think I have two jobs here,' she said. 'One is to be as orange as humanly possible and two is to make sure the contestants have a fantastic time.
'It is nerve wracking. These are home cooks, these are not people who have worked in kitchens or highly pressured scenarios. They've been making the most delicious spaghetti Bolognese in their street for years, and they come here where there are cameras and lights, there's Mary and the brilliant boys, so they're daunted and scared.'
Alongside Claudia Winkleman and Mary Berry, the show will feature Chris Bavin, aka The Naked Grocer, and Dan Doherty, Chef Director of Duck & Waffle, as judges.
Britain's Best Home Cook will premiere at 8pm on Thursday, May 3 on BBC One.
I am a huge movie buff. I anticipate big blockbuster hits and save up the money for admission at the movie theater. Film strips are made up of still frames that, when projected at an average of twenty-four frames a second, give the illusion of movement and continuity. Many films use different elements in their frames, whether between shots or within them. This is an example of montage. In this paper I will attempt to discuss montage and the film "Battleship Potemkin", analyze the methods of montage Eisenstein used for the film, and come up with an idea as to how to transform this historical piece for the theater stage.
Montage is the joining together of different elements of film in a variety of ways: between shots, within them, between sequences, and within these. During the 1920s, Russian filmmaker Sergei Eisenstein, who is considered "the father of montage," created five methods of montage. These methods are metric, rhythmic, tonal, overtonal, and intellectual montage. With these methods Eisenstein was able to change the way a scene was brought to life in film.
Metric montage is when "pieces are joined together according to their lengths, in a formula-scheme corresponding to a measure of music. Realization is in repetition of these 'measures'. Tension is obtained by the effect of mechanical acceleration by shortening the pieces while preserving the original proportions of the formula" (Eisenstein & Leyda pg 73). When cutting to the next shot, no matter what was happening in the next image, the cut was used to bring out the most basal and emotional reactions from the audience.
With rhythmic montage, "the length of the pieces, the content within the frame is a factor possessing equal rights to consideration. Abstract determination of the piece-lengths gives way to a flexible relationship of the actual lengths" (Eisenstein & Leyda pg 74). This type of montage is based on the timing of the visual composition of the shots, inducing a more complex meaning than metric montage.
In tonal montage the "movement is perceived in a wider sense. The concept embraces all effects of the montage piece. Here montage is based on the characteristics of emotional sound of the piece-of its dominant (Eisenstein & Leyda pg75)." Just like metric montage the shot is used to make a reaction to the audience. Only difference is that tonal montage uses shots that have emotion.
Overtonal montage is an accumulation of the previous three montages. And give the audience an even more abstract and complicated reaction. Overtonal montage "steps up the impression from a melodically emotional coloring to a directly psychological perception (Eisenstein & Leyda pg78)."
The last is intellectual montage. This method of montage is "sounds and overtones of an intellectual sort: i.e. conflict-juxtaposition of accompanying intellectual affects (Eisenstein & Leyda pg82)." With these shots combined you get an intellectual image.
It starts out in Act I, named "Men and Maggots". It is June of 1905, and the armored battleship Potemkin is near Odessa on the Black Sea, returning after Russia's defeat in the Russo-Japanese war. Many sailors are sleeping in their hammocks. A petty officer walks in, checking on them. One of the sailors, sleeping with a shoulder and arm hanging outside his hammock, is in the way of the officer, who is trying to get through. When the officer cannot get through, he reacts by whipping the young man. Some of the other sailors are woken by this. In the morning, the ship's cook has displayed large pieces of meat outside the ship's kitchen. The sailors see the meat hanging and begin talking, pointing at it, and calling others to look. An officer on a railing higher up notices the sailors around the meat, and the sailors start to complain to him that the meat is rotten. The officer calls the ship's doctor, who goes down to check out the meat. The ship's doctor, after looking carefully at the meat, says that it is not rotten: it has no worms, only maggots that can be washed out with brine. The cooks prepare to serve a meal on tabletops that hang from ropes in the ceiling. Large steel bowls are placed on the tables, and soup is the only food being served. Some sailors do not eat the soup. Later, a sailor is shown in the kitchen washing dishes after the meal. One dish has an inscription that reads "Give Us This Day Our Daily Bread". The sailor washing the dish holds it for a moment, reads it again, and then smashes it. The sailors who were on wash duty walk off from their work stations. In the next scene we see a lot of sailors at the ship's commissary buying many cans of food. One of the higher-ranking officers notices this and continues walking by. At one point, talking on one of the decks below, sailor Vakulinchuk says the treatment on the boat is worse than being a POW in a Japanese camp. Other sailors talk about the overall treatment.
In Act II, named "Drama at the Harbour", it is all hands on deck as Captain Golikov comes up from a trap entrance to discipline the men who did not eat the soup. He says that there will be no disobedience or strikes, or he will hang everyone on the ship. Then the captain asks whoever ate the soup with the rotten meat to step under the cannons to show their loyalty. All but a group of fifteen show their "loyalty". The captain decides that he wants to kill the fifteen for not eating the soup. The group tries to escape, but the other officers step in their way. The captain throws a tarpaulin over them, making it easier for the firing squad to shoot the fifteen sailors. When the captain gives the order to shoot, seaman Vakulinchuk stops the firing squad from executing the other sailors. Vakulinchuk gives a speech encouraging his shipmates to stand up and rebel against those who oppress them, which would be the officers of the ship. While this is going on, the captain repeats the firing order, but it is not carried out. Vakulinchuk and the other shipmates band together and turn on the officers. There is a chase after the officers, and when caught they are thrown overboard, the doctor as well. The ship's priest appears and plays "possum" when he gets pushed down the stairs, pretending to be dead. Before being thrown overboard, one of the officers is able to grab a gun and shoots Vakulinchuk. Vakulinchuk falls from a high point of the ship onto a tackle and then tumbles into the water. The shipmates shout that Vakulinchuk has gone overboard, and a couple of sailors jump in to save him, but it is too late, and his body is brought back onto the ship. Vakulinchuk's death bonds the shipmates together.
In Act III, "A Dead Man Calls for Justice", the Potemkin is under the control of the sailors, and they dock at the port of Odessa. Vakulinchuk's body is taken to the shore and laid under a tent that is set up on the pier. Vakulinchuk is holding a candle, with a sign on his chest reading "KILLED FOR A BOWL OF SOUP". There is talk amongst the people in the local area, in small groups, about Vakulinchuk. An obnoxious member of the bourgeoisie heckles a woman protester. During another heated discussion, someone in the crowd says "kill the Jews!" But the majority of the citizens of Odessa get riled up and decide to destroy the oppressors and help the sailors who rebelled on the Potemkin. Large numbers of citizens bring food to the battleship to support the crew.
In Act IV, "The Odessa Staircase", after having given the sailors quality food, many of the townspeople have gathered along the long and wide flight of stairs overlooking the harbor, leading down towards the piers, in good spirits, shouting encouragement towards the ship. Men, women, and children of all ages have come to see what is going on. Then, out of nowhere, troops in white tunics show up at the top of the stairs, slowly marching down the steps. People start to scramble as the soldiers begin their assault on the innocent men, women, elderly, and children. Countless people scramble down the steps to get to the side. Some elderly people hide behind walls as the soldiers continue to slaughter the fleeing crowd. A woman carrying her dead son's body in her arms walks up to the soldiers, telling them that her son is very ill and expecting them to let her pass. A second later, people look on in fear as she is gunned down. People step over others who have fallen, dead or alive. We even see the soldiers stepping on a small child. One woman has a bullet shot through one lens of her glasses. Another victim of the massacre is a mother pushing a baby in a carriage. As she falls dead, she knocks the carriage on the way down to the ground. The carriage starts to make its way down the steps as onlookers watch it travel untouched. Then soldiers on horseback arrive at the bottom of the steps to finish off the innocent.
In the final act, Act V, "The Rendezvous with a Squadron", the sailors who have taken over the Potemkin man their battle stations and turn their guns on the buildings that might have held Tsarist soldiers, but by then the massacre on the stairs is over, leaving only the soldiers standing. The sailors of the Potemkin then sail out to sea to avoid an attack from the shore, when suddenly a squadron of warships sets a course straight toward the Potemkin to take it back. The crew of the Potemkin expected this, and some man sentry duty. Other sailors of the Potemkin try to sleep. They are soon woken up and man battle stations as multiple ships are sighted far away on the horizon. As the ships get close, the Potemkin signals the other ships' crews, in a sort of Morse code, to treat them as brothers. The Potemkin's cannons, despite being outnumbered, are aimed at the other ships in an attempt at one last "hurrah". But when the ships get into range, they allow the Potemkin to pass through. The crew of the Potemkin celebrates, coming on deck and waving at the other ships, whose crews do the same, as the ships cross in opposite directions.
As I was watching "Battleship Potemkin", I thought about how I could transfer this film to the stage, and the first thing that came to mind was how you (as in you, Joseph) and Lucius set up "Vertigo". From a technical standpoint, I would use only a limited amount of technology if space were limited. If I were to put this play on in Swain, I would use projectors to display on the back curtain which scene the actors were in.
Nathaniel Branden was a psychotherapist (he died in 2014) who studied the psychology of self-esteem. He wrote books about the importance of it, including his 6 pillars of self-esteem that would explain how individuals could nurture their confidence and relationships.
Although not directly tied into his work with self-esteem, it is still interesting to note that Branden was a supporter of the philosophy of Objectivism, which was started by Ayn Rand (whom Branden had a personal and business relationship with).
He spent most of his time (before, during, and after his relationship with Rand), however, developing psychological theories and working on therapies.
He was also into politics, mainly backing Libertarianism and having a prominent role in this political movement.
What are the Six Pillars of Self-Esteem?
Branden believed that healthy self-esteem is a cornerstone of happiness. He believed that if your self-esteem needs are not being met, it can cause psychological issues like depression and anxiety. He also thought it could affect relationships and more.
To him, having self-esteem meant having the competency needed to function in life and be happy. He understood that while others can nurture your self-esteem, it is mostly an internally generated feeling that you need to cultivate for yourself.
To help people focus on themselves and develop self-esteem, Branden came up with the six pillars of self-esteem. It was meant as a framework to guide people on the path to happiness.
1. Living Consciously – Being aware of your thoughts and actions is an important component of healthy self-esteem. Branden believed in living mindfully as a tool for happiness. Many people in the fields of therapy, metaphysics, and psychiatry would agree.
2. Accepting Yourself – Self-acceptance is an important tool in your self-confidence arsenal. You are who you are, and until you accept it you can't do anything about it. It's important to accept yourself, flaws and all.
3. Personal Responsibility – You are responsible for who you are. While your past shapes you, you need to take responsibility for who you've become. Your actions are your own, no one else forces you to do things (in normal cases, anyway). When you actually take responsibility for who you are you can learn to work toward who you want to be.
4. Being Assertive – Do you stand up for yourself and your needs, or do you feel like a doormat to someone else? This is where assertiveness comes in – it is not a bad thing. Being assertive is simply expressing your needs, just make sure you do it appropriately, and without rudeness.
5. Living Purposefully – Everyone has a purpose, but not everyone strives to meet that purpose. In fact, many people don't even know what their purpose is or how to find out what it is. Being mindful of who you are and the things that interest you will help you determine your life purpose.
6. Integrity – Being whole and sticking with your moral principles is important when it comes to developing healthy self-esteem. In the six pillars, Branden meant this to be a point where people matched their behaviors and their convictions.
These 6 pillars of self-esteem, when followed, are meant to help you have high self-esteem. This is the healthy high self-esteem, not selfishness. In the book, they are listed as “practices,” because they are something you need to consciously be doing on a daily basis in order to make them a normal part of your life.
Branden, in his studies, would encourage people to build both their self-worth and self-confidence through the building of their self-esteem. He also believed in individualism as an essential part of human freedom.
To be free and find happiness, Branden also believed that people need personal autonomy.
That means being able to make your own choices and pursue your own passions. For people that have started their own businesses or done work in a field that they are passionate about, this dream of personal autonomy has been realized. People that are forced into the family business or to go to a college that is not of their own choosing are robbed of personal autonomy.
Because he included an emphasis on internal practices, instead of relying on others to help boost self-esteem, his form of bettering self-confidence was seen as different from other people in the same field.
His beliefs started what some people referred to as the “self-esteem movement,” something we need more of right now.
What Is Self-Esteem and Why Does It Matter?
People can have high self-esteem, low self-esteem, or healthy self-esteem. Keep reading if you're curious about where you fall on the scale.
To understand your self-esteem, you need to know what it is.
Self-esteem is your view of yourself, what you're worth, and your value. It's how you see yourself, sometimes affected by how you think other people see you (though this perception isn't always factual). It’s basically a scale of how much you like yourself.
Self-esteem is considered a personality trait. While self-esteem levels can fluctuate, it's normally a stable feeling. People with healthy self-esteem maintain a healthy view of themselves, for the most part.
What creates and affects your self-esteem, aside from what you believe other people think about you (which is part of self-concept), includes your own beliefs about who and what you are. You may judge yourself based on your appearance, your emotions, and your common behaviors.
What is self-concept? It's a view of yourself based on a combination of how you see yourself and how others see you. Self-concept isn't always the best view of a person since other people's views of you are not always actual reflections of who you are, but instead are mirrors of themselves.
The act of measuring your self-worth isn't always a healthy thing. Every human being has worth, although far too many think they are worthless.
Some of the things people use to judge their self-worth include appearance, money, acquaintances and friends, career, and success. Of course, everyone knows that each of these things is viewed differently by different people – what is success and riches to one person may look like the poorhouse to another.
Someone who sees themselves as a good person, feels good-looking, happy, and successful has positive self-esteem. What is low self-esteem? That's when someone isn't comfortable in their own skin, or maybe they're not satisfied with their relationships or job choices.
People with higher confidence usually have more respect for themselves. Synonyms for respect – esteem, admiration, appreciation – all describe a person who feels good about themselves and is content with their life and the choices they've made.
Someone that is happy in their own skin and unconcerned with looks has healthy self-esteem. They may smile at themselves when they look in the mirror, but they don't spend all day looking in the mirror either.
People with low self-esteem might not want to look in the mirror. They are not always, but often, unhappy with what they see. It could be something about their weight, their hair, their skin, or even the shape of their nose.
Not having enough money to pay your bills can make you a bit down on yourself. Many people judge themselves, and sometimes others, on the amount of money they have in their pockets or the size of their homes and the types of cars they drive.
From platonic friends to the person you spend your life with, there's a chance you're judging yourself (and being judged) by the type of people you hang out with.
One person may find happiness in a career slinging fast food, and see the fact that they have a paycheck and a roof over their heads as being successful. To someone else, not having a four-digit income each year spells complete disaster, and it makes them feel worthless.
If you aren't sure where you are on the confidence scale, you can take a test like the Rosenberg Self-Esteem Scale, developed by the sociologist Dr. Morris Rosenberg to measure a person's self-esteem.
You can find versions of this test online and answer the presented questions on a scale from "strongly agree" to "strongly disagree." Across ten questions you are scored from 0 to 30 points. The lower the score, the lower your self-esteem, with 15-25 being the normal, healthy range.
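As a rough illustration of how such a questionnaire might be tallied, here is a minimal Python sketch. It assumes the common 0-3 Likert scoring with reverse-scored negatively worded items; the reverse-scored indices below are hypothetical placeholders, not taken from the published instrument.

```python
# Minimal sketch of Rosenberg-style scoring (illustrative, not the
# official instrument). Assumes ten items on a 4-point Likert scale
# (0 = strongly disagree ... 3 = strongly agree), with negatively
# worded items reverse-scored. NEGATIVE_ITEMS is a hypothetical set
# of 0-based indices; consult the published scale for the real items.
NEGATIVE_ITEMS = {2, 4, 5, 8, 9}

def rosenberg_score(answers):
    """Return a 0-30 total from ten answers, each in 0..3."""
    if len(answers) != 10 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("expected ten answers, each in 0..3")
    return sum(3 - a if i in NEGATIVE_ITEMS else a
               for i, a in enumerate(answers))

def interpret(score):
    """Map a total to the bands described above (15-25 = healthy)."""
    if score < 15:
        return "low self-esteem"
    return "normal, healthy range" if score <= 25 else "high self-esteem"

total = rosenberg_score([3, 2, 1, 3, 0, 1, 2, 3, 1, 2])
print(total, "->", interpret(total))  # 23 -> normal, healthy range
```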
Narcissism and self-esteem are not synonymous with one another. A narcissist can have high self-esteem or low self-esteem.
While the two may seem similar, there are differences. A narcissist wants others to see them as "royalty," so to speak. Someone with high self-esteem is happy with themselves no matter what others think of them.
That doesn't mean having high self-esteem is always a good thing. That also doesn't mean someone that thinks highly of themselves and has healthy self-esteem couldn't be a narcissist either.
Where Are You on the Scale?
Now that we've answered the question "what is self-esteem?", you can take the test above to figure out where you rate when it comes to self-esteem.
What can you do to improve low self-esteem? Do you need any improvements? Do you have positive self-esteem? You may be surprised at your results.
|
0.968095 |
Is the Arab-Israeli conflict in the Bible?
What is the reason for the Arab-Israeli conflict?
The Bible suggests it is a spiritual battle!
So who are the descendants of Abraham, Isaac and Jacob? They are of course the twelve tribes of Israel - Hebrew Israelites. Since these tribes will once more take up their role as a witness to the nations during the Millennium (Ezek 37, Ezek 40-48), it follows that they must be amongst those who are returning to the State of Israel today. Note also that, whilst it is common to refer to Abraham's descendants as 'Jews', this is not strictly accurate. Strictly speaking, today's Jews (Hebrew, 'Yehudim') are those who follow Judaism and who are descendants of the kingdom of Judah.
So at least some modern-day Israelis are justified in claiming that the land of Israel is theirs through God's covenant with Abraham.
Some 1900 years later this blessing was fulfilled in the birth of Jesus Christ. Christ was from the tribe of Judah (Gen 49.10), and Judah was one of the twelve sons of Jacob, who was the grandson of Abraham. In other words, salvation for the world (as in Christ) came from the Jews (Jn 4.22). Paul takes the reference to Abraham's 'seed' as a reference to Christ (Gal 3.16), whose genealogy is traced back to Abraham in Matthew's gospel (Mat 1.1-17). So in Gen 22 we have an early reference to the gospel that would be proclaimed as good news to the Gentile nations as well as to the Jews. Christ's offer of salvation for all through His death and resurrection was the promised blessing to all nations. Christ brings 'good news to the afflicted, binds up the broken hearted, and gives liberty to captives' (Isa 61.1).
In terms of married couples, the Arab nations can be traced back to Abram and his wife Sarai (later called Sarah), and the nation of Israel can be traced back to Isaac and his wife Rebecca. Both couples, and particularly Sarah and Rebecca, made serious mistakes.
Abram and Sarai were childless into old age. God had promised Abram that the covenant promises would be fulfilled through his own seed (Gen 15.4), but even in old age they had no heir. So Abraham listened to his wife and followed the custom of the day. He took her female servant Hagar (an Egyptian) as wife and she bore him a son, Ishmael (Gen 16.3).
Here we see the birth of two great peoples; Israel and the Arab nations. But the covenant promises of blessing and the land of Canaan went with Isaac, not Ishmael.
Following God's instruction, Abraham subsequently gave all that he had to Isaac (Gen 25.5), whilst, according to Gen 25.18, the sons of Ishmael (the prophesied 'twelve princes') settled from Havilah to Shur (see Fig.1). Havilah (meaning 'sandy stretch') is widely accepted to be Arabia. After Sarah died, Abraham married Keturah and gave their descendants gifts, but he sent them away from Isaac 'to the land of the east' (Gen 25.6) - again widely accepted to be Arabia.
Today, various Encyclopaedias and Dictionaries state that the Arabs (Arabian tribes) claim descent from Ishmael (see also the descendants of Ishmael).
Esau was born first and so Rebecca knew that it was God's plan that he and his descendants would serve Jacob and his descendants i.e. the family birthright would be Jacob's, even though traditionally it should be Esau's since he was the eldest. Unfortunately, it turned out that Jacob secured the birthright promises e.g. family leadership through deception (Gen 27.18-29), and this created conflict. Esau sold his birthright to Jacob cheaply (for a meal), and Jacob secured it by deceiving his father Isaac into blessing him as the birthright holder! Subsequently, Esau hated Jacob (Gen 27.41) and moved away from Canaan to 'the hill country of Seir' (Gen 36.5-8) or 'Mount Seir' (Deut 2.5) - an area of mountainous territory south of the Dead Sea given to him by God. This area was Edom (see Fig.1) - part of which would fall into today's Jordan - and Esau became the father of the Edomites (Gen 36.9).
Edom as a nation was subsequently destroyed by invaders and, according to Jewish Encyclopaedias, the Edomites were incorporated into the Jewish people (Herod was an Edomite). And when the Temple was destroyed in 70 AD Edom disappeared into Rome. So has the aggression from Esau stopped? Some think not, and maintain that one of the two nations in Rebecca's womb was to have everlasting enmity towards Israel (Ezek 35.5). Note that Esau had a grandson Amalek (Gen 36.12), and the Amalekites were enemies of OT Israel. God commanded King Saul to 'utterly destroy' the Amalekites for their aggression against Israel (1 Sam 15.3-7), although they were not completely destroyed by Saul. Today many Israelis regard their adversaries as 'Amalek', either in reality or in symbolism.
It is important to distinguish between Arabs and Muslims. Before the arrival of Islam the Arabs were either pagan or followed Judaism or Christianity. But from about 620 AD Islam unified many of the Arabs, using military might when people wouldn't convert willingly. Today, most, but not all Arabs are Muslim, whilst only some 18% of Muslims are Arab. For example, part of the ancient kingdom of Edom is now part of Jordan, which became Arab-Islamic from the 7th century. Today Jordan is about 98% Arab and Islam is the state religion.
Why such hatred towards the Jews/Israelites? Recall that Mohammed was an Asian-Arab and he embarked on a mission to create an 'Arab religion' as distinct from Judaism and Christianity. But when the Jews in Arabia rejected him and refused to convert to Islam, Mohammed and the Quran turned against them.
Today, the Islamic claim to Palestine (the Promised Land) and to Jerusalem comes from their misunderstanding of the Abrahamic Covenant. Tragically, Muslims take God's covenant promise of land and blessing to apply to Ishmael and not Isaac, in contradiction to Gen 17.19-21! They believe that Ishmael, not Isaac, was the son whom Abraham nearly sacrificed, and that Ishmael was the son of promise. This is despite the fact that Jerusalem is mentioned more than 700 times in the Bible, but not once in the Quran! So, for example, Jordan, driven by Islamic ideology, joined four other Arab armies in 1948-49 in a war against the new State of Israel, and took by force East Jerusalem and the West Bank (Judea and Samaria).
Such hostility certainly applied in OT times, as in the Israel-Egypt, Israel-Moab, Israel-Canaan, Israel-Philistia, Israel-Amalek and Israel-Edom conflicts. But some see Ps 83 to be equally relevant today, and verses 6-8 seem to identify a present-day hostile Arab confederacy. Here we see Edom (Muslim-Arab Jordan?), Ishmaelites (Muslim Arabs in general), Philistia (Muslim Hamas in Gaza), Tyre (Muslim Hezbollah in Lebanon), and Assyria (parts of Muslim Iraq, Iran, Turkey, and Syria).
Many assume the root-cause of today's Arab-Israeli conflict can be traced back to Abraham's time and to human fallibility. They trace it back to disputes between Jacob and Esau, and in particular between Isaac and Ishmael. But the Bible doesn't stress a long-term conflict between the Israelites and Ishmael's descendants. Indeed, God blesses Ishmael and makes his descendants 'a great nation'! Others see today's conflict to be rooted in Esau and the Amalekites. But there may be an even deeper reason.
It is more realistic to see the root-cause of the Arab-Israeli conflict as a battle between truth and error - a spiritual battle between God's word in the Bible (truth) and Satanic deception leading to error. Today this battle is fuelled by the ideology of political Islam (forms of Islam pursuing political objectives) and facilitated by the Arabic nations, which are largely Islamic.
So we have the concept of national Israel, a special nation in their God-given land, embracing all strangers who happen to live in the land. This again is a crucial factor in today's Arab-Israeli conflict. Strangers should be welcomed, but they must accept the existence and land-rights of national Israel.
|
0.914379 |
John Jay (1745–1829), of Huguenot descent, was born in New York City, attended King's (later Columbia) College, went on to study law, and was admitted to the New York Bar in 1766 at the age of 21, soon establishing his own private practice.
During the tumultuous period leading up to the American Revolution, Jay was a moderate, speaking out against British policies, but certainly without subscribing to the radical democratic-republican views of the Liberty Boys, most of whom he regarded as "lower class," as they were. By birth, training, experience, and personal choice, Jay was always a patrician, sharing Hamilton's views that a propertied elite should hold power.
As a member of the New York delegation to the historic First Continental Congress in 1774, Jay drafted an "Address to the People of Great Britain," which Jefferson, without knowing its authorship, declared to be a "production certainly of the finest pen in America."
Though a reconciliationist, hoping to the last to patch up the differences between the rebellious American colonies and the mother country, Jay served in the Second Continental Congress and in the historic session of 1776 that adopted the Declaration of Independence, which he signed.
Jay drafted the new state constitution for New York, and was later named as minister to Spain. While in Madrid, he was sent to France as one of the three American commissioners who in 1783 negotiated the Treaty of Paris which ended the Revolutionary War and formally recognized American independence. Returning home, Jay was chosen by the Continental Congress to be in charge of foreign affairs, and was in serious difficulties almost immediately.
In 1785, Spain sent Don Diego de Gardoqui to this country as its ambassador extraordinary. Count Gardoqui arrived bringing some tempting offers that might, hopefully, open the way for a mutually profitable trade treaty.
Americans in the South and West, particularly in the states of Virginia and North Carolina which held territories extending westward from the Atlantic to the Mississippi River (territories later to become the states of Kentucky and Tennessee), were vitally concerned about navigation rights on the Mississippi. In the Treaty of Paris of 1763, France had ceded to Spain all of her claims west of the Mississippi, all of the vast and ill-defined expanse known as Louisiana. For most of its length, the river was the boundary between the American and Spanish territories, except that Spain held both banks of the river for several hundred miles above its mouth on the Gulf of Mexico. From New Orleans, a busy and thriving river and ocean port, the Spanish controlled all shipping coming into and passing out of the river. As usual in such cases, Spain made the most of her opportunities, favoring Spanish commerce by imposing restrictions, levies, and tolls on foreign shipping.
This pained many Americans, especially those in the West and South, who wished free and unrestrained shipping down the Mississippi into the Gulf of Mexico. If this right were not obtained, it would impede development of western lands. It would be much cheaper and easier to float heavy agricultural and forest products down the river and out into the Gulf than to cart them laboriously eastward over the mountains.
Madrid had ordered Gardoqui not to yield an inch on Spain's rights along the lower Mississippi. In authorizing Jay to negotiate with Gardoqui, the Continental Congress had strictly instructed him that he was "particularly to stipulate the right of the United States to free navigation of the Mississippi." It is not surprising, therefore, that after more than a year of secret negotiations, no agreement was reached.
Then came a turn that caused widespread alarm and threatened to tear the Union apart. To break the deadlock in negotiations, Secretary Jay recommended to the Continental Congress that his instructions be changed. In a secret session, by a close vote after a bitter debate, the Congress decided that Jay must stop pressing the Mississippi issue, and in return ask for certain trade concessions from Spain.
The motion to change Jay's instructions had the support of seven states, all of the North and East: Massachusetts, Rhode Island, Connecticut, New Hampshire, New York, Pennsylvania, and New Jersey, all of them interested in promoting Atlantic seaboard trade, and having little or no concern about navigation rights on the Mississippi, which, to them, seemed far away and inconsequential. Jay was to agree to a treaty which would close the Mississippi River to navigation for 30 years, in exchange for commercial concessions in the Spanish Caribbean.
Monroe protested: "This is one of the most extraordinary transactions I have ever known, a minister negotiating expressly for defeating the object of his instructions, and by a long train of intrigue and management seducing the representatives of the states to concur in it."
In his letter to Patrick Henry, Monroe added even more alarming information. Some influential people of the Northeast were openly talking about the "subject of a dismemberment of the states east of the Hudson from the Union, and the erection of them into a separate government, . . . that the measure is talked about in Massachusetts familiarly, and is supposed to have been originated there. . . ."
Moves to dismember the Union should be blocked, Monroe added, yet "I do consider it as necessary on our part to contemplate it as an event that may happen. . . . It should be so managed (if it takes place) either that it should be formed into three divisions or, if into two, that Pennsylvania, if not Jersey, should be included in ours."
With Patrick Henry taking the lead, the Virginia legislature passed a number of very strong resolutions opposing any attempt "to barter or surrender the rights of the United States to the free and common use of the river Mississippi," that any such attempt would provoke the just resentment "of our western brethren whose essential rights and interests would be thereby sacrificed and sold," that the sacrifice of the rights of certain parts of the Union (the South and West) to the "supposed or real interests" of another part (the North and East) would be "a flagrant violation of justice, a direct contravention of the end for which the federal government was instituted." The fruitless Jay-Gardoqui negotiations were very important to the constitutional convention when the southern states insisted upon a two-thirds majority vote for ratification of treaties. The severely criticized negotiations also became heavily involved in the debate over the role of the senate under the proposed constitution, particularly as it concerned the senate approval of treaties.
In spite of his part in the Gardoqui fiasco, Jay remained in charge of the nation's foreign relations until 1789 when President-elect Washington, recognizing his zealous and influential activities in the Federalist cause, asked Jay which post he wished to occupy in the new administration. Chief justice of the United States, Jay replied, and he was thereupon appointed, by and with the advice and consent of the Senate.
In 1792, he resigned to run unsuccessfully for the governorship of New York. Two years later, in 1794, Jay was given another diplomatic assignment, being named by President Washington as special envoy to Great Britain, with whom relations were very strained. In the treaty that resulted, the British complained that they had been "perfectly duped" by Jay. On this side of the Atlantic, Americans, particularly Jeffersonians, sneered at what they called "Jay's treaty" and denounced it as a "give-away." Whatever its defects, which were many, the treaty postponed war with Britain for almost two decades.
Running again for the governorship of New York, this time successfully, Jay served two terms. In 1801, when offered reappointment as chief justice of the United States, he declined and retired to the mansion he had built on his large country estate at Bedford, in Westchester County, New York, dying there in 1829 at the age of 84.
Eventually, James Madison lost faith in a one-party system, and helped organize which political party to compete with the Federalists?
|
0.939651 |
How do I get my tax certificate?
You can download your tax certificates, complete with SARS codes to make your e-filing easier, directly from the 'Statements' page of the EasyEquities platform, by following the instructions below.
2. Click on the stack menu (the 3 horizontal lines stacked on top of one another to the left of the EasyEquities Logo as highlighted in yellow in the image below) and then select the Statements option (see black arrow towards the bottom of that menu structure).
3. On the Statements Page, you will see dropdown options available for both Tax Certificates and Monthly Statements (the latter unfortunately still in beta, with the team currently working hard at getting recent statements delivered).
Click on the dropdown arrow for Tax Certificates (highlighted in yellow in the image below), which will open up the Tax Certificate area of the page.
4. Once opened, you will see all the various tax certificates available to you, named with the primary account (eg. EasyEquities ZAR, TFSA, EasyEquities RA etc.) to which the statements relate.
1. EasyEquities ZAR account, with a #Invest Aggressive Growth Bundle beneath.
Please note: Should an Investor have ownership of a bundle during a particular tax year, then he / she will have a separate statement for each separate bundle owned.
If you then open / expand the primary account (EasyEquities ZAR in the image below) by clicking on the little black downward facing arrow, you will see the available tax statements for that primary account for the years for which they're available.
Underneath the primary account statement, you can see that there is also an IT3(B) & (C) available for the #Invest Aggressive Growth bundle owned within the primary account (EasyEquities ZAR).
Please note: When you download a bundle tax statement for a bundle held in one of your primary accounts (eg. ZAR, TFSA, RA etc), the account number on the cover of that bundle statement will not be the same as the account number for the primary account, nor will it bear any correlation to the EE number (UserID) you are used to on your account as the bundles follow an entirely different account number convention.
Downloading the documents simply requires clicking on the download button to the right of the respective document and saving it to a desired location on your computer.
There will only be a tax certificate available for download if you became an EasyEquities client prior to the end of a particular tax year.
The tax year runs from 1 March to the end of February every year.
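Given that definition, here is a minimal Python sketch (an illustration only, not EasyEquities code) mapping a calendar date to the tax year it falls in; labelling each tax year by the calendar year in which it ends is an assumption for this example.

```python
# Illustrative only: the South African tax year runs from 1 March to
# the end of February, so a date in January/February belongs to the
# tax year ending that same calendar year, and any later date to the
# one ending the following year.
from datetime import date

def tax_year_end(d: date) -> int:
    """Return the calendar year in which this date's tax year ends."""
    return d.year + 1 if d.month >= 3 else d.year

print(tax_year_end(date(2018, 2, 28)))  # 2018 (tax year ending Feb 2018)
print(tax_year_end(date(2018, 3, 1)))   # 2019 (tax year ending Feb 2019)
```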
|
0.999646 |
While a large part of the San Francisco Bay Area is busy discussing the latest definition of “hipster” and inflated tech valuations, there is another trend currently creating a lot of buzz. This one is related to the notion of “doing well, while doing good” by addressing social and environmental challenges through the deployment of capital.
Doing well while doing good: Trend or mainstream investing approach?
Investors interested in making a difference in the world can focus on a variety of sectors, such as energy, natural resources, water, sustainable agriculture, clean technology, biomimicry and financial services. Regardless of the chosen sector, impact investing focuses on building new markets and supporting socially and environmentally beneficial businesses as they scale, something that resonates with responsible investors in the Bay Area.
Recent successes show that the field has a lot of potential and that the buzz is justified. This investment approach could unlock significant sums of investment capital, complementing efforts by public bodies and philanthropic organizations to address the most pressing global challenges. To make it even more attractive, estimates show that the impact investing industry could grow to US$500 billion in assets by 2020 (from around US$50 billion in assets back in 2007 when the definition of impact investing first emerged).
Why is this trend so significant for the Silicon Valley area, you might ask? A recent article published by Sir Ronald Cohen, a man widely regarded as the "father of social investment", created quite a stir when he stated that "social impact investing is the new venture capital." Cohen argued that impact investing will play a transformative role in the future of our society, similar to the one venture capital has played in the past.
While there are similarities between impact investing and principles traditionally applied to venture capital, there is more to the story. Most promising technology entrepreneurs have no problem coming up with a compelling exit strategy: acquisitions and IPOs are a well-known path to success in Silicon Valley. However, investments that blend financial returns with intentional social or environmental impacts tend to be more complex, often calling for longer repayment schedules than those used by venture capital (VC) deals, especially when the investment is in less mature markets where business models need more time to develop.
Impact investors are also often willing to take on significantly more risk than a traditional VC if the social mission aligns closely with the investor's vision and commitment to developing non-traditional sectors. Finally, even if a company appears attractive from a purely financial perspective, impact investors won't invest unless positive outcomes (impacts) can be quantified and demonstrated.
If impact investing is on your radar, the Bay Area is a great place to start looking for ways to get involved. The region is home to a substantial pool of potential funders as well as many highly creative impact entrepreneurs, giving investors plenty of choices. Below we list* just some of the biggest impact players in the Bay Area.
The Bay Area offers rich pickings for impact investing, but remember that impact is a global game: would-be impact investors shouldn’t limit their search to a single geographic area.
Today there are literally hundreds of impact funds across the globe, with diverse areas of interest and investment philosophies. They are run by specialized asset managers as well as mainstream financial institutions such as J.P. Morgan, UBS, Credit Suisse and Deutsche Bank. The same goes for innovative networking platforms that connect impact investors and entrepreneurs looking to make a difference. Maximpact is just one example of such a service, offering an online deal listing platform where a broad array of global opportunities can be examined in one place. So if impact investing intrigues you, don’t hesitate to look more broadly and take advantage of innovative tools available to impact investors today.
*This list is partial and there are plenty more innovative impact investing focused businesses serving the Bay Area. If you have additional suggestions, we would love to hear about them.
|
0.99996 |
In 2003, developed countries as a group accounted for 44% of retail-level cotton consumption worldwide, and developing countries accounted for 52%. At the retail level, the United States is the largest consuming country, accounting for 21% of total cotton use in 2005. United States per capita cotton consumption was 17 kilograms in 2005, compared with a world average of only 3.8 kilograms. High consumer incomes, a history of cotton consumption, consumer preferences in favour of cotton bolstered by industry advertising, and fashion trends that favour cotton explain the high level of per capita cotton use in the United States.
Retail consumption of cotton in Latin America accounted for 9% of world cotton use in 2000; per capita consumption was 3.2 kilograms per year. Consumers in Brazil and Mexico account for two-thirds of Latin American retail-level cotton use.
Retail consumption in the EU-15 accounts for 16% of world cotton use, and per capita cotton consumption in Europe was about 7 kilograms in 2000. The lower level of per capita consumption of cotton in Europe compared with the United States reflects lower average income levels, fewer consumer-oriented retail structures, and differences in tastes and preferences between United States and European consumers.
Retail consumption in the Russian Federation and other countries of the former Soviet Union accounted for 2% of world cotton use in 2000; per capita cotton use was below the world average at just 2.7 kilograms.
Retail consumption in the Middle East, including Turkey, accounted for 6% of world use in 2000; per capita consumption was equal to the world average at 3.6 kilograms.
Africa, including South Africa and Egypt, accounts for only 2% of world cotton use at the retail level; per capita consumption of cotton in Africa is less than 1 kilogram per year.
Retail consumption in Japan equalled 6% of world cotton use in 2000. Per capita consumption in Japan was 9 kilograms, 2 kilograms higher than the EU average, but lower than in the United States. Consumption in the rest of East Asia (including China) and South Asia accounted for 31% of world cotton use at the retail level in 2000, but per capita consumption averaged just 1.8 kilograms because of low incomes and government policies that favour the use of polyester to conserve land devoted to cotton. One of the great challenges for the cotton industry is to raise per capita consumption in the countries with the largest populations, including China, where cotton use per capita was just 1.9 kilograms in 2000, India, with per capita cotton use of 1.7 kilograms, and Indonesia, with per capita use of 1.4 kilograms. It is hoped that rising incomes in India, Indonesia and China will lead to increases in per capita cotton consumption during the current decade.
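The per-capita figures quoted throughout this passage are simply total retail-level consumption divided by population. A tiny Python sketch of that arithmetic, using rounded illustrative numbers rather than the source data:

```python
# Illustrative arithmetic only: per-capita use = retail consumption / population.
# The inputs below are rounded examples, not the figures from the text.
def per_capita_kg(retail_tonnes: float, population: int) -> float:
    """Convert total retail consumption (tonnes) to kilograms per person."""
    return retail_tonnes * 1000 / population

# A country consuming 5 million tonnes with 1.3 billion people:
print(round(per_capita_kg(5_000_000, 1_300_000_000), 1))  # ~3.8 kg
```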
|
0.932228 |
Rudolf Sturm's father was a businessman. After attending the St Maria Magdalena Gymnasium in Breslau he entered Breslau University in October 1859 to study mathematics and physics. There he was taught by Schröter who encouraged him to study synthetic geometry.
He was awarded a doctorate by Breslau in 1863 for a dissertation entitled De superficiebus tertii ordinis disquisitiones geometricae Ⓣ. In this work he studied third degree surfaces in their projective representations and proved theorems which had been stated, but not proved, by Steiner. After the award of his doctorate he taught at Breslau as an assistant. He continued to work on surfaces and, in 1864, he shared with Cremona the Steiner prize of the Berlin Academy for his investigations of surfaces.
In 1866 he became a science teacher in Bromberg, which is the German name for the city now called Bydgoszcz in northern Poland. The year after he took up the post in Bromberg he published Synthetische untersuchungen über Flächen Ⓣ which collected together his prize winning results and his other work in the area.
In 1872 Sturm was appointed assistant professor at the Technical College in Darmstadt where he taught descriptive geometry and graphic statics. In order to provide a good teaching book for his students, Sturm published a textbook Elemente der darstellenden Geometrie Ⓣ on descriptive geometry and graphical statics for his students in 1874. He became an ordinary professor at Münster in 1878, then he returned to Breslau in 1892 where he again held an ordinary professorship. He remained in this post until his death.
Sturm wrote extensively on geometry and, other than the teaching textbook on descriptive geometry and graphical statics which we mentioned above and one other teaching text Maxima und Minima in der elementaren Geometrie which he published in 1910, all his work was on synthetic geometry.
He wrote a three volume work on line geometry published between 1892 and 1896, and a four volume work on projective geometry, algebraic geometry and Schubert's enumerative geometry the first two volumes of which he published in 1908 and the second two volumes in 1909. These two multi-volume works collect together most of his life's research.
... in the first two volumes Sturm treated linear complexes, congruences, and the simplest ruled surfaces up to tetrahedral complexes, all of which can be particularly well handled in a purely geometric fashion. He did not systematically investigate the remaining quadratic complexes until volume three, where the difficulties of his approach - as compared with an algebraic treatment - place many demands on the reader.
Sturm's four volume work contains over 1800 pages. It examines geometric relationships, in particular transformations such as Cremona transformations. The work in some respects represents the crowning achievement of synthetic geometry developed in Sturm's style. The subject offered little opportunity for further progress and, as a consequence, despite supervising quite a few doctoral students, Sturm did not build a school to continue developing his mathematical ideas.
|
0.994784 |
What is “The Post” in “First Past The Post” voting?
What does "The Post" refer to? It sounds intuitively like it would be a fixed figure (e.g. 50% of the electorate, or 50% of the votes), but as the winner needs only to have the "most votes," that is not so. If it is an analogy to horse racing, where there is a fixed post to pass, please explain the analogy.
"First Past The Post" voting is simply "whoever gets the most votes". A common system used in most of the US and UK, for example.
The 'post' in this case is having the 'most votes'. It's not a fixed number at all. Whoever has more votes than anyone else, wins. It's not always a majority.
In many cases, a majority of votes isn't needed to win--simply having more votes than anyone else is enough to give you the win. In some cases where there is a desire to have an actual majority of votes, then there may be multiple rounds of this type of voting used in a run-off system.
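To make the distinction concrete, here is a minimal Python sketch (illustrative only) that picks a plurality winner and reports whether that winner also happens to hold a majority:

```python
from collections import Counter

def fptp_winner(ballots):
    """First Past The Post: most votes wins, majority not required."""
    tally = Counter(ballots)
    winner, votes = tally.most_common(1)[0]
    has_majority = votes > len(ballots) / 2
    return winner, votes, has_majority

# Three-way race: A wins under FPTP with only 40% of the vote.
ballots = ["A"] * 40 + ["B"] * 35 + ["C"] * 25
print(fptp_winner(ballots))  # ('A', 40, False)
```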
As for the analogy, that part might be better asked on english.se. But yes, in simple terms, it's simply using the metaphor of a horse race.
The term first past the post (abbreviated FPTP or FPP) was coined as an analogy to horse racing, where the winner of the race is the first to pass a particular point (the "post" or finish line) on the track (in this case a plurality of votes), after which all other runners automatically and completely lose (that is, the payoff is "winner-takes-all").
There is, however, no "post" that the winning candidate must pass in order to win, as the winning candidate is required only to have received the highest number of votes in his or her favour. This results in the alternative name sometimes being "farthest past the post".
I did some of my own research, and found the phrase "First Past The Post" is profoundly ill-suited to describe the voting system it's associated with.
There is no 'post,' or fixed amount of votes, which candidates must get to win.
The results for all candidates are announced at the same time; no candidate finishes "first," i.e. before another candidate.
A researcher on Metafilter dug up evidence that "first past the post" in horse racing is in fact not simply "winning the race," as we think of it, but a specific kind of bet. When you bet "first past the post," you disregard any judgements, disqualifications, or other exceptions that may come up post-race. (Here are stories from 1882 and 1901 invoking the phrase in this way.) In modern elections, objections after voting day are certainly investigated.
Additionally, one writer argues that even in horse racing, the phrase should be "first on the post," as the race is over once the lead horse's nose crosses the wire, not once it's completely past. Thus all four words in the phrase are subject to question.
Normally, in a horse race, there are many rules that the jockeys must follow. The adjudicators may hold a (quick!) investigation, called a Stewards' Enquiry, into any claims that rules have been broken, and decide to disqualify a horse, or to adjust the rankings based on some penalty for failing to follow the rules.
In First Past The Post rules, however, there is an agreement between the bettors that they will disregard the stewards' tweaks, and instead base the decision "purely" on which horse passes the winning post first.
Many voting systems, such as Single Transferrable Vote, involve finding a provisional ranking of several candidates, eliminating a losing candidate, and transferring the losing votes to the next preference, which changes the rankings. The candidate with the most primary votes may not be the final winner.
However, First Past The Post voting systems have no such ranking adjustments. It is the least complicated voting system - whoever gets the most primary votes wins - just as the least complicated system in horse racing is whichever horse reaches the finish line first.
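For contrast, a minimal sketch of instant-runoff-style elimination (a single-winner simplification of STV, illustrative only), in which the FPTP leader from the example above loses once preferences are transferred:

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff: eliminate the weakest candidate each round and
    transfer those ballots to each voter's next surviving preference."""
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes > total / 2 or len(tally) == 1:
            return leader
        loser = min(tally, key=tally.get)  # naive tie-breaking for a sketch
        # Transfer: strike the eliminated candidate from every ballot.
        ballots = [[c for c in b if c != loser] for b in ballots]

# Same 40/35/25 split as before, now with second preferences:
ballots = [["A"]] * 40 + [["B", "C"]] * 35 + [["C", "B"]] * 25
print(irv_winner(ballots))  # 'B': C is eliminated and transfers to B
```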
|
0.99831 |
The Internet is a powerful source of information in the view of some, while others think that books are also a good source of information and a way to pass the time. In July 2017 this topic appeared in official IELTS tests. It is easy to quickly find arguments for both points of view and write an essay.
Some people think advanced technology such as the Internet is a powerful source of information. Others, however, think that books are also a good source of information and entertainment. Do you agree or disagree with this statement?
My opinion: I agree that the Internet offers far more opportunities, since it provides a greater volume of information and makes it possible to join discussions that yield additional knowledge.
With the rapid development of computer technology the Internet has become probably the most powerful information resource for global population. Nonetheless, some people claim that books should not be forgotten and regarded as an essential source of information. The reasons behind these both viewpoints need to be carefully examined before coming to a conclusion.
It is hard to deny that the Internet is indeed a highly influential source of information for mankind nowadays. One reason for this is the fact that the Internet often provides free and quick access to information, whereas book outlets can be scarce in certain areas and book prices increase incrementally. Moreover, the Internet has grown into a platform for discussion, which enables people, regardless of their physical location, to share their opinions and knowledge.
By contrast, the proponents of books assert that the latter have not lost their relevance in spreading information. Thus, a book, unlike the Internet, is focused on a particular topic, provides information in a structured manner and bears no distracting features, such as unnecessary information or advertisements pertinent to the Web. On the other hand, it is also important to keep in mind that not every book can be found online in a digitized form, and therefore traditional books should still be considered as a unique source of information.
To conclude, I am definite in my opinion that the Internet is the most powerful source of information and it presently supersedes books, whether people agree to this or not. It is the Internet's vast data resources and the ability to discuss and share that make the Web the primary knowledge base for humanity.
|
0.918386 |
Ranveer Singh Bhavnani (born 6 July 1985) is an Indian film actor who appears in Hindi films. After completing a bachelor's degree from Indiana University, Bloomington, Singh returned to India to pursue a career in film. He made his acting debut in 2010 with a leading role in Yash Raj Films' romantic comedy Band Baaja Baaraat. The film emerged as a critical and commercial success, earning Singh a Filmfare Award in the Best Male Debut category.
Singh was born on 6 July 1985 into a Sindhi family to Anju and Jagjit Singh Bhavnani. His grandparents, Sunder Singh Bhavnani and Chand Burke, moved to Mumbai from Karachi, Sindh, during the Partition of India. He has an elder sister named Ritika Bhavnani. Singh is the maternal cousin of actress Sonam Kapoor and producer Rhea Kapoor, daughters of actor Anil Kapoor and wife Sunita Kapoor (née Bhavnani). Singh explains that he dropped his surname Bhavnani, since he felt that the name would have been "too long, too many syllables", thus downplaying his brand as a "saleable commodity".
Singh always aspired to be an actor, participating in several school plays and debates. Once, when he had gone to a birthday party, his grandmother asked him to dance and entertain her. Singh remembers that he suddenly jumped onto the lawn and started dancing to the song "Chumma Chumma" from the 1991 action film Hum. He felt the thrill of performing and became interested in acting and dancing. However, after he joined H.R. College of Commerce and Economics in Mumbai, Singh realised that getting a break in the film industry was not at all easy, as it was mostly people with a film background who got these opportunities. Feeling that the idea of acting was "too far-fetched", Singh focused on creative writing. He went to the United States, where he received his Bachelor of Arts degree from Indiana University.
At university, he decided to take acting classes and took up theatre as his minor. After completing his studies and returning to Mumbai in 2007, Singh worked for a few years in advertising as a copywriter, with agencies like O&M and J. Walter Thompson. He then worked as an assistant director, but left it to pursue acting. He decided to send his portfolio to directors. He would go to all kinds of auditions, but did not get any good opportunities, only getting calls for minor roles: "Everything was so bleak. It was very frustrating. There were times I would think whether I was doing the right thing or not."
|
0.999661 |
ETH-LAD 150 mcg, also known as "6-ethyl-6-nor-lysergic acid diethylamide" is a brand new LSD analogue and uncontrolled in the UK.
ETH-LAD 150mcg is a psychedelic drug similar to LSD, and is slightly more potent than LSD itself with an active dose reported at between 40 and 150 micrograms.
ETH-LAD 150 mcg is capable of producing a full range of low and high level hallucinatory states in a fashion that is significantly less consistent and reproducible than that of many other commonly used psychedelics.
Depending on how much and how recently one has eaten, LSD generally takes 20 - 60 minutes (though sometimes as long as 2 hrs) to take effect.
The primary effects of LSD last for 6-8 hours.
For many people there is an additional period of time (2-6 hrs) where it is difficult to go to sleep and there is definitely a noticeable difference from everyday reality, but which is not strong enough to be considered 'tripping'.
In the beginning stages of onset, LSD is likely to cause a sort of undefinable feeling similar to anticipation or anxiety.
There is often a slight feeling of energy in the body, an extra twinkle to lights, or the feeling that things are somehow different than usual.
As the effects become stronger, a wide variety of perceptual changes may occur; non-specific mental and physical stimulation, pupil dilation, closed and open eye patterning and visuals, changed thought patterns, feelings of insight, confusion, or paranoia, and quickly changing emotions (happiness, fear, giddiness, anxiety, anger, joy, irritation).
An LSD trip may vary greatly from person to person, from one trip to another, and even as time passes during a single trip.
Widely different effects emerge based on set and setting: the 'set' being the general mindset of the user, and the 'setting' being the physical and social environment in which the drug's effects are experienced.
It is common for users to believe that they have achieved insights into the way the mind works and some users experience permanent or long-lasting changes in their life perspective.
Some users consider LSD a religious sacrament, or a powerful tool for access to the divine. Many books have been written comparing the LSD trip to the state of enlightenment of eastern philosophy.
While ETH-LAD is technically capable of producing hallucinatory states on par with psilocin or DMT in vividness and intensity, these effects are rarer and more inconsistent in comparison.
For most users, ETH-LAD will instead simply go straight into level 8A visual geometry.
This lack of consistently induced hallucinatory breakthroughs means that, for most, ETH-LAD is not quite as deep of an experience as certain other psychedelics.
On the occasion that they are induced, however, they can be comprehensively described in terms of their variations as lucid in believability, interactive in style, new experiences in content, autonomous in controllability and geometry-based in appearance.
This chemical was carefully designed and released following the UK ban on AL-LAD and LSZ to avoid legal issues and continues to allow websites to sell legal lysergamides following a long chain of designer drugs replacing each other.
RC CHEMICAL MEGASTORE TEAM accept no responsibility for those who act in conflict with the law.
ETH-LAD is an experimental research chemical and is sold for research purpose only.
ETH-LAD is not a “legal high” and is not intended for human consumption or in vivo application; we cannot be held liable for any direct or indirect damages arising from its misuse.
|
0.969108 |
Then again, the Prancing Horse the German drove in Barcelona back in March hasn't exactly been the same auto since it left Spain. He touched upon the subject of Sebastian Vettel and his spin while chasing Lewis Hamilton.
Five-time world champion Lewis Hamilton, victor in Bahrain after Leclerc slowed while leading, has been more successful than anyone in China and will be aiming to win the race for a sixth time.
Vettel admitted he was "not happy" about a "difficult race" in Bahrain, when he was outpaced by team-mate Charles Leclerc in only his second race for Ferrari in his second Formula 1 season.
"It's absolutely no different to any other race weekend for me".
Wolff said: "The main reason was that I think it would have been a distraction in the title fight, especially as our main opponent did not cooperate".
"As I showed in Australia, the interest of the team is extremely important", Leclerc said. "I didn't know he said anything nice, but it's nice to hear".
"I hope this weekend is closer between us because this is a great track to have a real race". Nevertheless, every year the challenges are different and I think we had a little extra time to understand the auto a little bit more to work and play with the setup.
"It was a surprise when we came to Australia and the vehicle wasn't anywhere near what we had in Barcelona, with the auto being very alive and unstable".
"I am not one for birthdays, I'm not one for anniversaries, I'm not one for particularly special days like this, so it is absolutely no different to any other race weekend for me", he said.
"So I think that proved very useful, at least that's the feeling we have now so let's see how it turns out overall".
At both races this year he has been off the pace and in Bahrain he was comprehensively outperformed by teammate Charles Leclerc.
"He's got the auto for it", he added. But I think Bahrain was already a lot better. And to be fair the behaviour in Barcelona was very strong. "Not happy with the feeling yet with the vehicle, the feeling that I had prior to the season in testing". But sometimes you never know what is going on with other drivers at other teams.
"We need to get everything right [in China]". I'm not really sure. The win was very close.
Bernie Ecclestone does not believe five-time World Champion, Lewis Hamilton will join Ferrari from current team Mercedes with the Brit going head to head with the Scuderia for the title this season.
|
0.988589 |
With 44 years of combined academic and professional experience in the fields of neuroscience and cognition, the researchers behind Focus Fast were shocked by the lack of effective focus support supplements available.* Recently, there has been an explosion in groundbreaking discoveries revealing to the scientific community and the world the deeper inner workings and capabilities of the human mind. Utilizing this revolutionary research, our team harnessed the latest advancements in Neuro-Focusing Technologies to produce Focus Fast - Enyotics Health Science's 1st & Only Neuro Focus Supporting Agent*.
Previous memory has come back and it's easier to focus on any tasks that I have. easy to swallow and results occurred within hours.
A lot of good ingredients including 5htp, gingko, vinpocetine, choline, huperzine, alpha-gpc, inositol, etc. helped with my memory and focus a great deal.
Headache I took the full bottle with exact dosing instructions and all i got were random headaches.
Focus I can focus better, and work better at school.
Focus I have been taking Focus Fast for approximately 1 year now.
Focus After reading up on a few particular products that fit this bill, I chose focus fast because it did not include any additional caffeine which was strangely one of the selling points for me.
Focus Focus Fast did an o.k.
Focus but didn't do much for me in terms of memory, focus, and concentration.
Headache Two tabs b4 breakfast as recommended gave me headaches everytime.
Focus I'm trying this product because I lose focus at work and sometimes forget things.
Taste The taste wasn't bad at all.
Focus Focus Fast does exactly what the name says!
Focus but it just makes you FOCUS!
Focus I work in an office and it's hard to get lost and tied down to the monotonous routines, and Focus Fast really helped my memory and concentration.
Muscle gain It would be better if they had discounts on 2 or more bottles or a larger pill count size.
Focus The focus feeling is a lot more constant though and it permeates everything as compared to caffeine which is acts like a boost.
Focus I noticed I became more alert during lectures, and was able to focus on studying my notes for a test for a longer period of time without being distracted.
Focus however I think it is related to me sleeping a lot better since I have started taking Focus Fast.
Focus I do recommend this product for those looking for a memory boost and better focus and to stay alert.
|
0.999981 |
Okay, New York has roughly the same population as the province of Quebec, and with size comes capacity and resources.
The point is that the city is taking control over its own public health issues. It is not depending on the state, or demanding that the state take action. It has acknowledged the need to address its own health concerns and find its own solutions. New York's attack on the fundamental problems of community violence has expanded over the years into attempts to address obesity and now tobacco, with evidence of success (see the NYC Department of Health and Mental Hygiene). It helps to have a mayor that is brave and caring enough to address such issues (something both Toronto and Montreal have lacked in recent years).
The re-emergence of the city-state should not be dismissed. While some local governments mock the radical efforts and legal barriers that New York has experienced, their efforts are turning heads in the municipal ranks.
Vancouver’s mayor declaring a public health crisis over the issues over mental illness, Toronto’s work on housing and more recently on racialization and health inequalities, Montreal’s work on Transportation and health are all examples of local city-state efforts to address community health issues without allowing the federal-provincial divide to become an impediment.
The obstacle is that Canada has about 3,700 local/municipal governments, and each one needs to be addressed individually and in person. As such, it is not surprising that the focus of policy efforts is aimed at the 13 provincial/territorial governments, or when possible the single federal body. It seems, though, that decision-making power is increasingly moving from the pan-Canadian level to the provincial/territorial level and now to local governments/First Nations communities.
Public health professionals have long been associated with local governments and their efforts. Paraphrasing a quote that isn’t readily at hand ‘The greatest gains in the health of the people has been made, not through the efforts of doctors and hospitals, but through the efforts of local government’. A statement that was made over fifty years ago and remains just as true today.
Kudos to New York City and those local governments that take “governance for the good of the people” to heart and apply a broad interpretation.
|
0.999998 |
What is detoxification, and how best can it be utilized to get all the toxins out of your body? This is a question asked by a lot of people who are extremely health-conscious and looking for detox methods. Plenty of detoxification methods are in common use, but you need to look for one that will get rid of all the accumulated toxins in the body in a natural manner.
|
0.99999 |
"The Sash My Father Wore," or simply "The Sash," is a well-known song hailing from the Irish province of Ulster and Scotland. "The Sash" might be one of Ireland’s most divisive songs, yet it is cherished by a large part of the population in Northern Ireland. However, it certainly is not universally loved, thanks to centuries of political connotations attached to it.
“The Sash” is steeped in Ulster lore and Irish history, and proudly tells the story of King William III's victories over King James II during the wars these two English monarchs fought in Ireland from 1689 to 1691.
It is a song that is also played in Scotland during events led by the Orange Order.
Mentioned in the lyrics are historic events of the so-called "Williamite War," which included the 1689 Siege of Derry, the 1689 Battle of Newtownbutler, the famed Battle of the Boyne in 1690, and the decisive Battle of Aughrim a year later.
Ultimately, the battles were fought between a Protestant king and a Catholic king. Therefore, "The Sash" is seen as an incredibly controversial song because it has become an anthem for sectarian (religion-based) political groups.
First of all, the Williamite War in Ireland (1688–1691) was a conflict between Jacobites (supporters of the Catholic King James II of England and Ireland, VII of Scotland) and Williamites (multinational supporters of the Dutch Protestant Prince William of Orange) over who should be monarch of the kingdoms of England, Scotland, and Ireland.
James had been deposed as king of these three kingdoms in the Glorious Revolution of 1688, and the mostly Catholic Jacobites of Ireland supported his return to power, as did France. For this reason, the war became part of a wider European conflict known as the Nine Years' War.
The mostly Protestant Williamites, who were concentrated in the north of Ireland, opposed James.
William landed a multinational force in Ireland to put down Jacobite resistance. James left Ireland after defeats at the Battle of the Boyne in 1690 and the Battle of Aughrim in 1691. The Williamite victories of the Siege of Derry and the Battle of the Boyne are still celebrated in Ireland today, mainly by Ulster Protestant unionists.
There was intense anxiety in Holland about England under James, whom the Dutch suspected of favoring France, their archenemy; an earlier war with France and Anglo-French alliances had caused Holland great suffering. The Dutch wanted England’s support for an alliance against Louis XIV. William then invaded England in 1688 as a pre-emptive strike, and it worked.
James fled to France, joining the queen and the infant Prince of Wales there. It was decided that James had, de facto, abdicated. Since William was James’ nephew and closest legitimate male relative, and his wife, Mary, was James’ eldest daughter and heir apparent, William and Mary were jointly offered the throne, which they accepted. In the same way, they were also awarded the throne in Scotland.
William had defeated Jacobitism in Ireland, and subsequent Jacobite uprisings were confined to Scotland and England.
Britain and the Protestant establishment ruled Ireland for more than two centuries, effectively keeping Catholics from any positions of real power.
For more than a century after the war, Irish Catholics maintained a sentimental attachment to the Jacobite cause, portraying James and the Stuarts as the rightful monarchs who would have given a just settlement to Ireland, with self-government, restoration of confiscated lands, and tolerance for Catholicism.
As for "The Sash," the melody the lyrics are sung to has been known as far back as the late 18th century in the British Isles and all around Europe. The first lyrics, from 1787, seem to have been a lament about lovers who were forcibly parted that contains a chorus starting, “She was young and she was beautiful,” far from the political anthem it has become.
There seems to be no definitive version of this song, so we present here one popular set of lyrics and, below those, some well-known alternative lyrics.
The sash my father wore.
Of honor and of fame.
In the sash my father wore.
That you all will welcome me.
Surely the Orange flute I'll play.
Some supporters of the Glasgow Rangers soccer team are known for their unionist (sectarian) politics, and many fans use “The Sash” as a kind of anthem, just as the Irish supporters of Celtic Glasgow use republican songs. Even though both clubs try to steer their respective fans away from sectarianism, this re-visiting of old history is expected to continue for the foreseeable future.
|
0.943753 |
In a few minutes, I'll be scrambling around my house, looking for the things I need to bring to Bukit Jalil.
In a few minutes, I'll be packing my stuff into big travel bags.
In a few minutes, I'll be carrying these travel bags downstairs.
In a few minutes, I'll be filling the car trunk with my junk.
In a few minutes, I'll be thinking if I missed out anything.
In a few minutes, I'll be waving and saying my good bye-s and take care-s to my family.
In a few minutes, I'll be in the car and my mum will be asking me if I've taken everything I need.
In a few minutes, my dad will be asking me whether I am excited.
In a few minutes, I'll be in the car wondering what'll happen to me starting tomorrow.
In a few hours, I'll be setting foot in Vista Komanwel in Bukit Jalil.
In a few hours, I'll be unloading my junk out of the car.
In a few hours, I'll be struggling to unlock the locks to the condominium.
In a few hours, I'll be opening the beige coloured door to my room.
In a few hours, my parents and my younger brother will be sweeping, mopping, wiping and unpacking my stuff together with me.
In a few hours, I'll be standing in the middle of the room, looking to see if everything is in place.
In a few hours, I'll be reminded by my parents of certain important things.
In a few hours, I'll be saying good bye and hugging my family.
In a few hours, I'll be closing the doors to the condominium.
In a few hours, I'll be wondering, "What am I going to do now".
In a few hours, I'll be lying in bed, praying that tomorrow will be a good day.
In a few hours, I'll be sleeping.
In a few hours, I'll be waking up, washing up and registering into IMU.
Those few minutes have passed.
I shall start scrambling around my house, looking for things I need to bring to Bukit Jalil now.
Good bye home, Good bye family, Good bye TV, Good bye Internet.
|
0.941817 |
Climate change, rising atmospheric carbon dioxide, excess nutrient inputs, and pollution in its many forms are fundamentally altering the chemistry of the ocean, often on a global scale and, in some cases, at rates greatly exceeding those in the historical and recent geological record. Major observed trends include a shift in the acid-base chemistry of seawater, reduced subsurface oxygen both in near-shore coastal water and in the open ocean, rising coastal nitrogen levels, and widespread increase in mercury and persistent organic pollutants. Most of these perturbations, tied either directly or indirectly to human fossil fuel combustion, fertilizer use, and industrial activity, are projected to grow in coming decades, resulting in increasing negative impacts on ocean biota and marine resources.
The ocean plays a pivotal role in the global biogeochemical cycles of carbon, nitrogen, phosphorus, silicon, and a variety of other biologically active elements and chemical compounds (1, 2). Human fossil-fuel combustion, agriculture, and climate change have a growing influence on ocean chemistry, both regionally in coastal waters and globally in the open ocean (3–5) (Fig. 1). Some of the largest anthropogenic impacts are on inorganic carbon (6), nutrients (4, 7), and dissolved oxygen (8, 9), which are linked through and affect biological productivity. Seawater chemistry is also altered, sometimes quite strongly, by the industrial production, transport, and environmental release of a host of persistent organic chemicals (10) and trace metals, in particular mercury (11), lead (12), and perhaps iron (13).
Marine biogeochemical dynamics is increasingly relevant to discussions of ecosystem health, climate impacts and mitigation strategies, and planetary sustainability. Human-driven chemical perturbations overlay substantial natural biogeochemical cycling and variability. Key scientific challenges involve the detection and attribution of decadal and longer trends in ocean chemistry as well as more definitive assessments of the resulting implications for ocean life and marine resources.
The biogeochemical state of the sea reflects both cycling and transformations within the ocean, much of which are governed by biological dynamics, and fluxes across the ocean boundaries with the land, atmosphere, and sea floor (2, 14). For most chemical species, seawater concentrations are governed more by kinetics—the rates of net formation and transport processes—than by chemical equilibrium with particles and sediments. Clear exceptions are dissolved gases such as carbon dioxide (CO2) and oxygen (O2), which are driven to solubility equilibrium with the partial pressure of gases in the atmosphere in the surface ocean by air-sea gas exchange.
Phytoplankton in the ocean surface plays a crucial biogeochemical role, converting CO2 and nutrients into particulate organic and inorganic matter via photosynthesis and releasing O2 in the process. The rate of marine primary production is governed by temperature, light (strongly influenced by surface turbulent mixing depths), and limiting nutrients, most notably nitrogen, phosphorus, iron, and silicon for some plankton. Some fraction of the biologically produced particulate matter subsequently sinks into the subsurface ocean and is consumed by microbes and macrofauna, releasing CO2 and nutrients and consuming subsurface O2. Export production thus maintains strong vertical gradients in biogeochemical tracers over the water column.
The global biologically driven export flux of ~10 Pg of C year–1 must be balanced by a supply of "new" nutrients brought up from below by ocean circulation, input by rivers, or deposited from the atmosphere. With sufficient iron and phosphorus, some diazotrophic microbes can produce "new" nitrogen in situ through nitrogen fixation that converts inert nitrogen gas into biologically reactive nitrogen. Marine microbes produce and consume a number of trace gases that can influence climate, for example CO2, nitrous oxide (N2O), methane (CH4), and dimethylsulfide (DMS).
Ocean upwelling and mixing bring water with elevated CO2 and nutrients to the surface and replenish subsurface O2, with ventilation time scales of years to a few decades in the main thermocline (upper 1 km of the water column) and many centuries for deep waters. Natural ocean-atmosphere climate modes (e.g., El Niño–Southern Oscillation and Pacific Decadal Oscillation) generate substantial interannual to interdecadal variability in ocean biogeochemistry. The major external source terms to the ocean are typically river inputs and atmospheric deposition of dust, aerosols, and precipitation. These source terms are balanced mostly by losses to the seafloor via the burial of the small fraction (<1% of organic matter) of sinking particulate matter that is not destroyed either in the water column or in surface sediments.
For most of history, it was inconceivable that humankind could directly influence ocean chemistry other than in local and inconsequential manners. That changed after the industrial revolution with the development of modern energy systems, chemical industries, and agriculture that process ever-growing volumes of material, some of which are released either advertently or inadvertently into the environment and eventually reach the ocean. For example, because of human fossil-fuel combustion, deforestation, and land-use change (3), global mean atmospheric carbon dioxide (CO2) has grown by almost 40% from about 280 parts per million (ppm) in the preindustrial era to nearly 388 ppm by 2010 (15). The invention of the Haber-Bosch process, which converts N2 gas into fixed nitrogen for agricultural fertilizer, has had an even greater proportional impact on the global nitrogen cycle, approximately equaling the annual production of reactive nitrogen from natural sources (4). Comparable amplifications of a factor of 2 to 3 have occurred in the emissions of reactive phosphorus (16) and mercury (17) to the atmosphere and hydrosphere.
Indirect human effects on ocean chemistry can also occur, mainly through climate change. According to the most recent synthesis by the Intergovernmental Panel on Climate Change, warming of the climate system since the mid-20th century is unequivocal and is very likely caused by the increase in anthropogenic greenhouse gas concentrations (CO2, N2O, CH4, and chlorofluorocarbons) (18). Documented physical changes relevant to ocean biogeochemistry include upper-ocean warming, altered precipitation patterns and river runoff rates, and sea-ice retreat in the Arctic and the West Antarctic Peninsula. Reduced stratospheric ozone over Antarctica appears to be causing a major shift in atmospheric pressure (more positive Southern Annular Mode conditions), which strengthens and displaces poleward the westerly winds in the Southern Ocean and which also may be increasing ocean vertical upwelling (19). Future climate projections indicate continuation, and in many cases acceleration, of these trends as well as other changes such as more intense tropical storms, an ice-free summer in the Arctic, and a very likely reduction in the strength of the Atlantic deepwater formation.
Rising atmospheric CO2 causes a net air-to-sea flux of excess CO2 that dissolves in surface seawater as inorganic carbon through well-known physical-chemical reactions. The global uptake rate is governed primarily by atmospheric CO2 concentrations and the rate of ocean circulation that exchanges surface waters equilibrated with elevated CO2 levels with subsurface waters. The distribution, global inventory, and decadal trend in anthropogenic CO2 are well characterized from ship-based observations (6, 20) and models (3). Based on a recent synthesis, in 2008 fossil-fuel combustion released 8.7 ± 0.5 Pg of C year–1 to the atmosphere primarily as CO2, contributing to an ocean uptake of 2.3 ± 0.4 Pg of C year–1 (3). Cumulative ocean carbon uptake since the beginning of the industrial age is equivalent to about 25 to 30% of total human CO2 emissions (6).
Climate change is expected to decrease ocean uptake of anthropogenic CO2 because of lower CO2 solubility in warmer waters and slower physical transport into the ocean interior due to increased vertical stratification and reduced deepwater formation (21). In contrast, stronger Southern Ocean winds and ocean upwelling may increase future uptake of anthropogenic CO2 (22). Changes in ocean circulation also alter the upward transport of subsurface water enriched in nutrients and dissolved inorganic carbon, and these biogeochemical feedbacks tend to partially offset climate effects on anthropogenic CO2 uptake. In model estimates for the contemporary Southern Ocean, for example, enhanced efflux of natural CO2 due to stronger winds and upwelling more than compensates for increased anthropogenic CO2 uptake, leading to a net reduction in global ocean uptake (19, 23). Recent observations of the air-sea difference in the partial pressure of carbon dioxide (pCO2), the driving force for air-sea CO2 exchange, indicate a weakening of oceanic uptake in a number of regions, although there remains some debate about whether this signal should be attributed primarily to climate change or decadal climate variability (3, 24).
Ocean uptake of anthropogenic CO2 also alters ocean chemistry, leading to more acidic conditions (lower pH) and lower chemical saturation states (Ω) for calcium carbonate (CaCO3) minerals used by many plants, animals, and microorganisms to make shells and skeletons (25). Seawater acid-base chemistry is buffered largely by the inorganic carbon system, and CO2 acts as a weak acid in seawater. Processes that add CO2, like air-to-sea gas flux or bacterial respiration of organic matter, increase the concentration of hydrogen ions (H+) and thus decrease pH (pH = –log10[H+]).
Critically for many organisms, the addition of CO2 reduces carbonate ion (CO32–) concentration through the reaction H+ + CO32– → HCO3–, even though the total amount of dissolved inorganic carbon (DIC) goes up (DIC = [CO2] + [H2CO3] + [HCO3–] + [CO32–]). Declining CO32– in turn lowers the CaCO3 saturation state, Ω = [Ca2+][CO32–]/Ksp, where Ksp is the thermodynamic solubility product that varies with temperature, pressure, and mineral form. Ocean surface waters are currently supersaturated (Ω > 1) for the two major forms used by marine organisms, aragonite (corals and many mollusks) and calcite (coccolithophores, foraminifera, and some mollusks). Because of pressure effects and higher metabolic CO2 from organic matter respiration, Ω decreases with depth, often becoming undersaturated (Ω < 1), at which point unprotected shells and skeletons begin to dissolve.
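To make the carbonate arithmetic above concrete, here is a minimal illustrative Python sketch of DIC speciation and aragonite saturation. The equilibrium constants, DIC value, and calcium concentration are rough surface-seawater assumptions chosen for illustration only; a real calculation would use a full thermodynamic package such as CO2SYS.

# Minimal sketch of seawater carbonate speciation; all constants are rough
# surface-seawater values (~25 degC, salinity 35) assumed for illustration.
K1 = 1.4e-6        # first dissociation: CO2* + H2O <-> H+ + HCO3-
K2 = 1.1e-9        # second dissociation: HCO3- <-> H+ + CO3(2-)
KSP_ARAG = 6.5e-7  # aragonite solubility product, [Ca2+][CO3(2-)] (mol^2 kg^-2)
CA = 0.0103        # seawater calcium concentration (mol kg^-1)

def carbonate_ion(dic, h):
    """Concentration of CO3(2-) for a given DIC and [H+] (all mol kg^-1)."""
    denom = h * h + K1 * h + K1 * K2
    return dic * K1 * K2 / denom

for ph in (8.2, 8.1, 8.0):            # roughly preindustrial -> present -> future
    h = 10.0 ** (-ph)                 # pH = -log10[H+]
    co3 = carbonate_ion(dic=2.05e-3, h=h)
    omega = CA * co3 / KSP_ARAG       # Omega = [Ca2+][CO3(2-)]/Ksp
    print(f"pH {ph}: CO3(2-) ~ {co3 * 1e6:.0f} umol/kg, Omega_arag ~ {omega:.1f}")

Running this shows CO32– and Ω falling monotonically as pH drops, which is the mechanism behind the declining saturation states discussed next.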
Ocean acidification is documented clearly from ocean time-series and survey measurements over the past two decades (Fig. 2) (26, 27). From preindustrial levels, contemporary surface ocean pH has dropped on average by about 0.1 pH units (a 26% increase in [H+]), and additional declines of 0.2 and 0.3 pH units will occur over the 21st century unless human CO2 emissions are curtailed substantially (28). Surface ocean CaCO3 saturation states are declining everywhere, and polar surface waters will become undersaturated for aragonite when atmospheric CO2 reaches 400 to 450 ppm for the Arctic and 550 to 600 ppm for the Antarctic (29). Subsurface waters will also be affected but more slowly, governed by ocean circulation, with the fastest rates in the main thermocline and high latitudes where cold surface waters sink into the ocean interior. Many coastal waters naturally have low pH, a factor amplified by acid rain (30) and nutrient eutrophication (see below).
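As a quick arithmetic check of the percentages quoted above, using only the definition pH = –log10[H+]:

# A pH drop of dph multiplies [H+] by 10**dph; the quoted 26% follows from
# 10**0.1 - 1, and drops of 0.2 and 0.3 give ~58% and ~100%.
for dph in (0.1, 0.2, 0.3):
    print(f"pH drop {dph}: [H+] increases by {(10 ** dph - 1) * 100:.0f}%")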
The rates of change in global ocean pH and Ω are unprecedented, a factor of 30 to 100 times faster than temporal changes in the recent geological past, and the perturbations will last many centuries to millennia. The geological record does contain past ocean acidification events, the most recent associated with the Paleocene-Eocene Thermal Maximum 55.8 million years ago. But these events may have occurred gradually enough and under different enough background conditions for ocean chemistry and biology that there is no good paleo-analog for the current situation (31).
On the basis of laboratory experiments and limited surveys across ocean chemistry gradients, ocean acidification will likely reduce shell and skeleton growth by many marine calcifying species including corals and mollusks (25). Ocean acidification also may reduce the tolerance of some species to thermal stress. Some studies suggest a threshold of about 550 ppm atmospheric CO2 where coral reefs would begin to erode rather than grow because of acidification and surface ocean warming; this would negatively affect diverse reef-dependent taxa (32). Polar ecosystems also may be particularly susceptible when surface waters become undersaturated for aragonite, the mineral form used by many mollusks.
Some organisms may benefit in a high-CO2 world, in particular photosynthetic organisms that are currently limited by the amount of dissolved CO2 in seawater. In laboratory experiments with elevated CO2, higher photosynthesis rates are found for certain phytoplankton species, seagrasses, and macroalgae, and enhanced nitrogen-fixation rates are found for some cyanobacteria. Indirect impacts on noncalcifying organisms and marine ecosystems as a whole are possible but more difficult to characterize from present understanding.
Primary production by upper-ocean phytoplankton forms the base of the marine food web and drives ocean biogeochemistry through the export flux of organic matter and calcareous and siliceous biominerals from planktonic shells. Satellite observations indicate a strong negative relationship, at interannual time scales, between productivity and warming in the tropics and subtropics, most likely because of reduced nutrient supply from increased vertical stratification (33). Numerical models project declining low-latitude marine primary production in response to 21st-century climate warming (34). The situation is less clear in temperate and polar waters, although there is a tendency in models for increased production because of warming, reduced vertical mixing, and reduced sea-ice cover. The climate signal in primary production may be difficult to distinguish from natural variability for many decades (35).
Changes in atmospheric nutrient deposition also can alter productivity but mostly on regional scales near industrial and agricultural sources. Present anthropogenic reactive nitrogen deposition to the surface ocean (54 ± 23 Tg of N year–1) (Fig. 3) supports an export production of ~0.3 Pg of C year–1 (~3% of global total) while producing an additional ~1.6 Tg year–1 of N2O (7). In much of the North Pacific, equatorial Pacific, and Southern Ocean, phytoplankton are limited by iron, but most of the atmospheric iron deposition is in the form of mineral dust that is not readily bioavailable. Anthropogenic combustion sources and increased cloud-water acidity are increasing soluble iron input to the ocean (13, 36). Models suggest that anthropogenic iron deposition could have a greater positive impact on productivity than anthropogenic nitrogen and also enhance nitrogen fixation, but direct observations are lacking (37).
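The internal consistency of the deposition and export figures above can be checked with back-of-envelope stoichiometry; the Redfield C:N ratio of 106:16 (molar) used here is a standard assumption, not a figure stated in the text.

# Carbon export supportable by ~54 Tg N/yr of deposition, assuming Redfield
# C:N = 106:16 (molar) and molar masses of 14 (N) and 12 (C) g/mol.
mol_n = 54e12 / 14.0          # g N per year -> mol N per year
mol_c = mol_n * 106.0 / 16.0  # Redfield conversion to mol C per year
pg_c = mol_c * 12.0 / 1e15    # mol C per year -> Pg C per year
print(f"~{pg_c:.2f} Pg C per year")  # ~0.31, matching the ~0.3 quoted above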
Low subsurface O2, termed hypoxia, occurs naturally in open-ocean and coastal environments from a combination of weak ventilation and/or strong organic matter degradation (8, 9). Dissolved O2 gas is essential for aerobic respiration, and low O2 levels negatively affect the physiology of higher animals, leading to so-called "dead zones" where many macrofauna are absent. Thresholds for hypoxia vary by organism but are ~60 µmol of O2 kg–1 or about 30% of surface saturation. Under suboxic conditions (<5 µmol kg–1), microbes begin to use nitrate (NO3–) rather than O2 as a terminal electron acceptor for organic matter respiration (denitrification), resulting in reactive nitrogen loss and N2O production. Toxic hydrogen sulfide (H2S) production occurs under anoxic (no O2) conditions. The organic matter respiration that generates hypoxia also elevates CO2, thus leading to coupled deoxygenation and ocean acidification in a future warmer, high-CO2 world. The synergistic effects of these multiple stressors may magnify the negative physiological and microbial responses beyond the impacts expected for each perturbation considered in isolation (38, 39).
Fertilizer runoff and nitrogen deposition from fossil fuels are driving an expansion in the duration, intensity, and extent of coastal hypoxia, leading to marine habitat degradation and, in extreme cases, extensive fish and invertebrate mortality (8, 40, 41). About half the global riverine nitrogen input (50 to 80 Tg of N year–1) is anthropogenic in origin (4, 42), and anthropogenic nitrogen deposition is concentrated in coastal waters downwind of industrial and intensive agricultural regions (30). The result is coastal eutrophication and enhanced organic matter production, export, and subsurface decomposition that consumes O2. Nutrient eutrophication is also associated with increased frequency of harmful algal blooms (43).
Worldwide there are now more than 400 coastal hypoxic systems covering an area > 245,000 km2 (40). Population growth and further coastal urbanization will only exacerbate coastal hypoxia without careful land and ocean management. Accelerated hypoxia may also result from climate warming and regional increases in precipitation and runoff that increase water-column vertical stratification; on the other hand, more intense tropical storms could disrupt stratification and increase O2 ventilation (8).
Expanding coastal hypoxia is also induced in some regions by reorganization in ocean-atmosphere physics. Off the Oregon-Washington coast, increased wind-driven upwelling is linked to the first appearance of hypoxia, and even anoxia, on the inner shelf after 5 decades of hypoxia-free conditions (44). Further south in the California Current System, the depth of the hypoxic surface has shoaled along the coast by up to 90 m (45). The same physical phenomenon, along with the penetration of fossil-fuel CO2 into off-shore source waters, is introducing waters corrosive to aragonite (Ω < 1) onto the continental shelf (46). There is conflicting evidence on how coastal upwelling may respond to climate change, and impacts may vary regionally (47).
Extensive deoxygenation is also occurring in the open ocean, most notably in the thermocline of the North Pacific and tropical oceans (9, 48) (Fig. 4). A portion of the observed oxygen change likely reflects decadal variability in ocean circulation but, similar to ocean CO2, distinct secular trends are apparent at some long-term time series stations (49). Models project further reductions of 1 to 7% in the global oxygen inventory and expansions of open-ocean oxygen minimum zones over the 21st century from decreased solubility in warmer waters and slower ventilation rates (50).
Point sources of pollution from industrial discharges and oil spills are often highly visible and destructive to the local and regional marine environment (51). Perhaps less well known is the global spread of industrial pollutants into what otherwise would appear to be pristine environments. Elevated oceanic levels of persistent organic pollutants (10) and methyl mercury, a highly toxic organic form (11), raise serious concerns for marine ecosystem health and, potentially, human health through the consumption of contaminated seafood. Many organic and organo-metallic compounds bioaccumulate in the fatty tissues of marine organisms at levels orders of magnitude higher than ambient seawater concentrations. Such pollutants are passed up the food chain and are most concentrated in marine organisms at the higher trophic levels including predatory fish, marine mammals, and seabirds.
Key factors in determining overall biological impacts for a particular pollutant are source magnitudes and locations, physical and biological transport pathways, toxicity, and persistence in the environment. Pollutants exhibit elevated levels near local point sources and in coastal and open-ocean waters because of atmospheric deposition downwind of industrial regions (e.g., western Pacific near East Asia and North Atlantic near North America and Western Europe) (17, 52) (Fig. 3). However, they are also distributed globally, found in even the most remote marine locations, transported through the atmosphere in the vapor phase, aerosols, and soot particles (i.e., black carbon); by ocean currents; and in some cases by migrating animals (53).
Elemental mercury (Hg0), the main chemical form in the ocean, is transformed into the more toxic methyl mercury form by microbes, particularly in reduced environments such as coastal sediments and perhaps oxygen minimum zones (11). Although mercury distributions are poorly characterized from direct seawater measurements, time histories reconstructed from numerical models (17) and biological samples (e.g., seabird feathers) indicate increasing trends over the 20th century (11). It is encouraging that, after the phaseout of leaded gasoline in North America that began in the mid-1970s, the high levels of anthropogenic lead observed in the North Atlantic declined sharply and are now comparable to those occurring at the beginning of the 20th century (12).
Some persistent organic pollutants are synthetic and did not exist in nature before industrial manufacture. Production of some organic pollutants peaked in developed nations in the mid- to late 20th century but is continuing to grow in the developing world. Commonly measured synthetic contaminants include pesticides like DDT, polychlorinated biphenyls, and brominated flame retardants such as polybrominated diphenyl ethers. However, there are many more organic compounds synthesized and used that presumably exist in the ocean but that have not been detected (54).
Human activities have also increased levels of naturally occurring compounds such as polycyclic aromatic hydrocarbons, which have sources from petroleum spills and natural oil seeps as well as, primarily, incomplete combustion from wildfires, biomass burning, and fossil fuels (55). In a study on the Gulf of Maine downwind of the Northeast United States, another combustion product, black carbon, contributed up to 20% of the total particulate organic carbon in seawater and about half of the "molecularly uncharacterized" fraction (56). Environmental samples often contain organic compounds similar in chemical structure to known pollutants but which may be biosynthesized natural products; compound-specific radiocarbon analysis is emerging as a powerful tool for distinguishing between natural and industrial sources (10).
A deeper understanding of human impacts on ocean biogeochemistry is essential if the scientific community is to provide appropriate and timely information to the public and decision-makers on pressing environmental questions. Although some progress has been made on a nascent ocean observing system for CO2 (57), the marine environment remains woefully undersampled for most compounds. The oceanographic community needs to develop a coordinated observational plan that takes better advantage of in situ autonomous sensors and observation platforms (58). Monitoring efforts should be paired with laboratory and field process studies to better elucidate the biological effects of changing chemistry at organism, population, and ecosystem levels.
In particular, more detailed biochemical, systems biology, and genomic studies are required to explain mechanistically the responses of cells and organisms to external perturbations, supplementing what have often been, to date, more phenomenological findings. Genomic and physiological research should be embedded in large-scale ecological and biogeochemical spatial surveys and time series to facilitate scaling to ecosystems (59). Further work is needed across scales exploring possible synergistic effects among multiple stressors and to assess the potential for biological acclimation and adaptation to human perturbations over decadal to centennial time scales. Lastly, targeted research is needed on the impacts on marine resources and fisheries, potential adaptation strategies, and the consequences for human social and economic systems (60).
1. W. H. Schlesinger, Biogeochemistry: An Analysis of Global Change (Academic Press, San Diego, CA, 1997).
2. J. L. Sarmiento, N. Gruber, Ocean Biogeochemical Dynamics (Princeton Univ. Press, Princeton, NJ, 2006).
3. C. Le Quéré et al., Trends in the sources and sinks of carbon dioxide. Nat. Geosci. 2, 831 (2009).
4. J. N. Galloway et al., Nitrogen Cycles: Past, present, and future. Biogeochemistry 70, 153 (2004).
5. S. D. Donner, C. J. Kucharik, Corn-based ethanol production compromises goal of reducing nitrogen export by the Mississippi River. Proc. Natl. Acad. Sci. U.S.A. 105, 4513 (2008).
6. C. L. Sabine et al., The oceanic sink for anthropogenic CO2. Science 305, 367 (2004).
8. N. N. Rabalais et al., Biogeosciences 7, 589 (2010).
10. C. M. Reddy, J. J. Stegeman, M. E. Hahn, in Oceans and Human Health: Risks and Remedies from the Seas, P. J. Walsh, S. L. Smith, H. M. Solo-Gabriele, W. H. Gerwick, Eds. (Academic Press, Burlington, MA, 2008), pp. 121–141.
14. M. J. R. Fasham, Ed., Ocean Biogeochemistry (Springer, New York, 2003).
15. National Oceanic and Atmospheric Administration Earth System Research Laboratory, www.esrl.noaa.gov/gmd/ccgg/trends/.
18. S. Solomon et al., in Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, S. Solomon et al., Eds. (Cambridge Univ. Press, Cambridge, 2007), pp. 19–91.
29. M. Steinacher, F. Joos, T. L. Frölicher, G.-K. Plattner, S. C. Doney, Imminent ocean acidification in the Arctic projected with the NCAR global coupled carbon cycle-climate model. Biogeosciences 6, 515 (2009).
51. National Research Council, Oil in the Sea: Inputs, Fates and Effects (National Academies Press, Washington, DC, 2003).
58. K. S. Johnson et al., Oceanography 22, 217 (2009).
61. This work was supported by the Center for Microbial Oceanography, Research and Education (C-MORE) (NSF grant EF-0424599) and the W. Van Alan Clark, Sr. Chair for Excellence in Oceanography from the Woods Hole Oceanographic Institution. I thank J. Dore for Fig. 2, I. Lima for Fig. 3, S. Mecking for Fig. 4, and C. Reddy and C. Lamborg for discussions on ocean pollutants.
|
0.934785 |
Valeri Vladimirovich "Val" Bure (/ˈvɑːlɛri bʊˈreɪ/; Russian: Валерий Владимирович Буре, IPA: [vɐˈlʲerʲɪj bʊˈrɛ]; born June 13, 1974) is a Russian-American former ice hockey right winger. He played 10 seasons in the National Hockey League (NHL) for the Montreal Canadiens, Calgary Flames, Florida Panthers, St. Louis Blues, and Dallas Stars. A second round selection of the Canadiens, 33rd overall, at the 1992 NHL Entry Draft, Bure appeared in one NHL All-Star Game, in 2000. He led the Flames in scoring with 35 goals and 75 points in 1999–2000, a season in which he and brother Pavel combined to set an NHL record for goals by a pair of siblings with 93.
Bure left his home in the Soviet Union in 1991 to play junior hockey in the Western Hockey League (WHL) for the Spokane Chiefs. A two-time WHL all-star, he was the first Russian player in the league's history. Internationally, he represented Russia on numerous occasions. He was a member of the bronze medal-winning squad at the 1994 World Junior Championship and was a two-time medalist at the Winter Olympics. Bure and the Russians won the silver medal in 1998 and bronze in 2002.
Back and hip injuries led to Bure's retirement from hockey in 2005. He now operates a winery in California with his wife, Candace Cameron. Bure paired with Ekaterina Gordeeva in 2010 to win the second season of the figure skating reality show Battle of the Blades.
Valeri Bure was born June 13, 1974, in Moscow, Soviet Union. He is the younger son of Vladimir and Tatiana Bure. Vladimir, whose family originated from Furna, Switzerland, was an Olympic swimmer who won four medals for the Soviet Union at three Olympic Games between 1968 and 1976. Bure's family had a noble history: his ancestors made precious watches for Russian tsars from 1815–1917 and as craftsmen of the imperial family, were granted noble status.
Bure was around nine years old when his parents separated. In 1991, he joined his father and brother, Pavel, in moving to North America as his elder sibling embarked on a National Hockey League (NHL) career with the Vancouver Canucks. His mother arrived two months later. They settled initially in Los Angeles, where Vladimir continued to train and coach both Valeri and Pavel in hockey and physical conditioning. However, both became estranged from their father, along with his second wife and their half-sister Katya, by 1998. Neither brother has explained the reason for the split.
Bure played three games during the 1990–91 season with HC CSKA Moscow of the Soviet Championship League prior to leaving the Soviet Union. As a 17-year-old, Bure was eligible to play junior hockey upon his arrival in North America, and joined the Spokane Chiefs of the Western Hockey League (WHL). In doing so, he became the first Russian player in the league's history. He joined the team one year before the Canadian Hockey League, of which the WHL is a member, instituted an import draft.
Bure recorded 49 points in 53 games in 1991–92 for the Chiefs, his first season in the WHL. The Montreal Canadiens selected him with their second round pick, 33rd overall, at the 1992 NHL Entry Draft. The NHL Central Scouting Bureau praised Bure as being a good skater. In its assessment, the Bureau added: "very smart around the net; good passer, playmaker. Good shot, quick release. Will take a hit to make the play. Good competitor." He returned to Spokane for the 1992–93 season where Bure led his team and finished second overall in WHL scoring with 147 points. His 68 goals that season remains a Chiefs' franchise record. He was named to the WHL's West Division First All-Star Team. Bure attended Montreal's training camp prior to the 1993–94 season, but was again returned to his junior team. He recorded 102 points in his final season in the WHL and was named to the Second All-Star Team. In three seasons with Spokane, Bure recorded 298 points and stands fourth on the Chiefs' all-time scoring list.
Upon turning professional in 1994–95, Bure spent the majority of the season with Montreal's American Hockey League (AHL) affiliate, the Fredericton Canadiens. He had 23 goals and 48 points in 45 games for the club. Bure earned a recall to Montreal late in the season and made his NHL debut on February 28, 1995, against the New York Islanders. His first goal came two weeks later, on March 15, against goaltender Wendell Young of the Pittsburgh Penguins. In 24 games with Montreal, Bure scored 3 goals and added an assist. Playing in his brother's shadow – Pavel had become a superstar in Vancouver – Valeri struggled to live up to the expectations placed on him. He scored 22 goals and 42 points in his first full season in Montreal, 1995–96, but scored only 14 goals the following season. He battled injuries that season; two concussions and a kidney injury limited him to 64 games, 13 fewer than the previous season.
At five feet, ten inches (178 cm) tall, Bure was a smaller player in the NHL. His linemates Saku Koivu (five foot ten) and Oleg Petrov (five foot nine) were similarly diminutive, and the trio were known in Montreal as the "Smurf line". After playing 50 games for the Canadiens in 1997–98, Bure was traded. He was sent to the Calgary Flames in a February 1, 1998, deal in exchange for Jonas Höglund and Zarley Zalapski. The deal was welcomed by Bure, who appreciated both the ability to play closer to his family on the west coast as well as increased opportunity by joining a young Flames team. He recorded his first career hat trick in one of his first games in Calgary, against the Edmonton Oilers. Bure appeared in 16 games with the Flames that season and scored 38 points in 66 games combined between Montreal and Calgary.
Bure's offensive ability emerged in Calgary as he became one of the team's leading scorers. His totals of 26 goals and 53 points in 1998–99 were both third best on the team; at one point of the season, Bure scored the game-winning goal in four consecutive victories for Calgary. The departure of Flames' star Theoren Fleury added pressure on Bure to be an offensive leader in 1999–2000, and he responded to become one of the NHL's early scoring leaders. He used his speed and skating ability to good effect and was eighth in league scoring by mid-December. Bure was named to the World team at the 2000 All-Star Game where he played on a line with his brother. Pavel was named most valuable player of the game by scoring three goals, two of them assisted by Valeri, in a 9–4 victory over North America. Bure completed the season as the Flames leader in goals (35) and points (75, 14th overall in the NHL) and was the only player on the team to appear in all 82 games. Pavel Bure scored 58 goals for the Florida Panthers, and the brothers' combined total of 93 goals set an NHL record for a set of siblings.
Though his offensive production declined in 2000–01, Bure's 27 goals were second on the team to Jarome Iginla's 31, and he finished third with 55 points. He became embroiled in a power struggle with his coaches, first Don Hay, who was dismissed mid-season, and then Greg Gilbert, as both wanted him to play a more defensive-minded game. Bure struggled to adapt and at one point was held out of the Flames lineup by Gilbert in response. Bure was rumoured to have asked for a trade out of Calgary, and the Florida Panthers (who had acquired Pavel), Buffalo Sabres and New York Rangers were among the teams that showed interest in his services. On June 24, 2001, the Flames traded Bure, along with Jason Wiemer, to the Panthers for Rob Niedermayer and a second round draft pick.
As his contract had expired, Bure was a restricted free agent. Initially unable to come to an agreement with the Panthers on salary, Bure did not sign until late September. The delay resulted in his being a brief hold-out from Florida's training camp in advance of the 2001–02 season. Injury interrupted the start of Bure's Panthers career as a knee ailment that began bothering him before the season worsened as he played the first games of the campaign. Tests revealed damage to his right knee that required arthroscopic surgery to repair; Bure missed 37 games while recovering. A second knee injury ended Bure's season in mid-March as the Panthers had fallen out of playoff contention. His brother had already been traded by that point, and the Panthers were also making Valeri available in potential deals. He appeared in only 31 games and recorded 18 points.
Bure remained with the Panthers as the 2002–03 season began, but his year was marked by an offensive slump. He was also hampered by a hairline fracture to his wrist after Keith Primeau slashed him during an early December game against the Philadelphia Flyers. With only 5 goals and 26 points in 46 games for Florida, Bure was traded on March 11, 2003, to the St. Louis Blues in exchange for defenceman Mike Van Ryn. Another knee injury, this time a sprained ligament, kept Bure out of the Blues lineup for much of the remainder of the season. He recorded two assists each in five regular season and six post-season games for St. Louis. After the season, the Blues placed Bure on waivers, and he returned to Florida upon being claimed by the Panthers.
Free of injury for the first time in two seasons, Bure was one of the Panthers' offensive leaders in 2003–04. He reached 20 goals for the fifth time in his NHL career, and as the season's trade deadline approached, was Florida's leading scorer with 45 points. However, as the Panthers were out of playoff contention, they traded Bure to the Dallas Stars on March 9, 2004, in exchange for Drew Bagnall and a draft pick. Bure was placed on the Stars' top line with Mike Modano and Jere Lehtinen, and he recorded 7 points in 13 games to conclude the regular season. Bure added three assists in five playoff games.
An unrestricted free agent following the 2004 playoffs, Bure did not play anywhere in 2004–05 as the entire NHL season was canceled due to a labour dispute. He signed a one-year contract with the Los Angeles Kings for the 2005–06 season when the league resumed operations. He never played a regular season game for the Kings. A back injury suffered during the pre-season, initially just described as "soreness", kept him out of the regular lineup. The injury ultimately required surgery, and a second surgery on his hip caused Bure to miss the entire season. At the age of 31, he opted to retire following the surgeries.
Valeri Bure (far right) and brother Pavel (centre-right) meet with Russian Olympic Committee President Leonid Tyagachev and Russian President Vladimir Putin (left) in 2001.
Bure made his debut internationally with the Russian national junior team at the 1994 World Junior Championship. He was the leading scorer of the bronze medal-winning Russians with eight points in six games and was named to the tournament's All-Star Team. That same year, Bure first played with the senior team as he scored three goals in six contests at the 1994 World Championship in a fifth-place effort.
After appearing in one game at the inaugural World Cup of Hockey in 1996, Bure played in his first of two Olympic Games in 1998. The tournament marked the first time he played with his brother Pavel since they were briefly teammates with CSKA Moscow in 1991. Valeri scored one goal in the tournament, and Russia advanced to the gold medal game. They settled for the silver medal after being shut out by Dominik Hašek and the Czech Republic. Bure returned for the 2002 Salt Lake Games. He scored a goal in the tournament as Russia won the bronze medal. Russia invited him to play at the 2004 World Cup of Hockey, but as he was without an NHL contract at the time, Bure declined to play due to a lack of proper insurance in the event of injury.
Bure married actress Candace Cameron on June 22, 1996. They were introduced by Cameron's Full House castmate Dave Coulier at a charity hockey game. The couple have three children: daughter Natasha (b. 1998) and sons Lev (b. 2000) and Maksim (b. 2002). Bure became an American citizen in December 2001. Bure cited his family as the reason he retired from hockey in 2005. He felt he could return from his surgeries, but wanted to spend time with his children and allow his wife to return to acting. The family are Christians.
In 2007, Bure and his wife opened a Florida restaurant called "The Milk and Honey Café", but closed the business when the family moved to California. They operate a Napa Valley, California winery, Bure Family Wines. Bure developed an interest in wine early in his NHL career that he described as growing into a passion: "I fell in love with the behind-the-scenes work and being able to start from the vineyard and put it into a bottle. It's an amazing process." Bure modified the Russian imperial seal his great-grandfather stamped on his watches to use as his company's label.
Bure returned to the ice in 2010 as a contestant on the second season of the Canadian Broadcasting Corporation's figure skating reality show Battle of the Blades. The series was a competition that paired a former professional hockey player with a figure skater. Bure's partner was Ekaterina Gordeeva. The pair won the competition and shared a $100,000 prize donated to charities of their choice. Bure's donation was made to Compassion Canada.
|
0.999997 |
This crispy, flavorful sandwich is the perfect lunchbox treat -- and ready in just 15 minutes.
1. Heat a medium skillet over medium, and cook bacon, turning occasionally, until crisp, 5 to 8 minutes. Drain on a paper-towel-lined plate.
2. Heat a medium skillet over medium, and cook bacon, turning occasionally, until crisp, 5 to 8 minutes. Drain on a paper-towel-lined plate.
3. Place lettuce and tomato in another resealable plastic bag; add them to sandwich just before eating.
The recipe is missing steps: steps 1 and 2 are identical (fry the bacon), and the only other step (3) just says to place lettuce and tomato in another resealable bag. Please fix the recipe!
|
0.99996 |
Tattoos (If any): A tattoo of two pistols crossed on his back.
Piercings (If any): Two along the bottom of his left ear just above the lobe.
Preferred Style of Clothing: A standard police uniform with a skinny blue tie and black undershirt.
Personality: Calm, analytical, serious when need be.
Talents/skills: His analytical skills are superb. His aim is pretty good. He can also read things uncannily fast. On a side note, he can down an entire bowl of ramen in five seconds.
Digimon Description: A short, purple and yellow dragon rookie digimon.
Digimon's Personality: Silly, serious when Arano is, lazy.
How/When You Met Your Partner: The two met at night in a grassy field as Arano lay looking at the sky. Drake, then just an in-training digimon, approached Arano and asked to join him. Arano had no idea what Drake was and swiftly drew his gun. After it was apparent that Drake posed no threat, Arano lowered his guard. Their personalities seemed to mesh, so they became partners.
Situation of the character's birth (where, when etc): Arano's mother was rushed to the closest hospital in Tokyo. After hours of labor, Arano was born, but his mother died.
How/When Your Character Arrived In the Digital World: On a mission for the police department, Arano was assigned to infiltrate an abandoned lab that was flagged for recent suspicious activity. He found nothing but a strange gate-like device. Out of nowhere, the gate opened a portal to the digital world. Arano was then pushed in by a shadowy figure.
Describe their childhood (newborn - age 10): After living with his father for 6 years, his father died on a mission with the police department. Arano didn't cry. He just vowed to himself that he'd become the best cop ever to make his father proud. He was moved in with his aunt and uncle and was loved just as if he were theirs.
Adult years (20 on): On his 26th birthday, he was sent on a mission for the department to check out suspicious activity in an abandoned laboratory. A shadowy figure pushed him through a strange portal to the digital world. He camped the first night out in a grassy field, and it was there that he met his partner. They trained together and roamed the digital world for about a month. In the process, Drake digivolved into a Monodramon. Now, they've come to a rather out-of-place-looking hill that towered above all the trees in the forest they were in. On top of the hill was a building. They looked at each other with confused expressions and looked back to the hill to begin their ascent. As they reached the top of the hill, they read the sign hanging on the building. The sign read "Tamer HQ". They headed in, and this is where their true adventure began.
|
0.993381 |
How to use this script?
Kaldin is Java/Tomcat-based online assessment software that helps instructors create online assessments. Visit this page for more details: http://www.kaldin.com/. The script presented in this tutorial will install Kaldin for you. I have successfully tested this script on the Debian 7.0.0 amd64 netinst CD image, so Ubuntu 12.04 should also work fine, but use it at your own risk.
At the end, you should see the Kaldin installation page at http://your_server_ip/kaldin. Proceed with filling in all the required details; this will populate the MySQL DB and allow you to log in.
phpMyAdmin is useful when dealing with MySQL, and is also needed when editing certain entries in the Kaldin DB, namely email_settings.
Webmin is useful when dealing with any Linux server; in this case, it is very useful for dealing with Postfix.
Copy and paste the code below (just copy the script and right-click inside the vi editor).
Change SERVER_FQDN, hit Esc, then Shift+ZZ to save (you should know how to use the vi editor).
echo "Found Oracle JDK $JDK_VER, Kaldin can be installed on this system"
echo "Oracle JDK $JDK_VER wasn't found in $JDKPATH, please check the installation and/or path $JDKPATH"
echo "Please correct the JDK installation and then run the Kaldin installer script. Kaldin installer is exiting now"
echo "Creating webmin sources for apt"
echo "downloading kaldin2.1 from $KALDIN_SOURCE"
##### Setting up apache2 with "ServerName $SERVER_FQDN:80"
echo "I am assuming $SERVER_FQDN as the default FQDN, this is required to update in the file apache.conf file"
echo "Type 'y' if you want to change the $SERVER_FQDN to your own"
echo "Type 'n' to continue with $SERVER_FQDN"
y|Y) echo "Please type the Server FQDN in the form of foo.domain.com"
n|N) echo "Continuing with $SERVER_FQDN"
# Configurations specific to this location. Add what you need.
# Allow access to this proxied URL location for everyone.
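The lines above are isolated excerpts from different parts of the installer (the JDK check, webmin setup, the Apache FQDN prompt, and the proxy config). For orientation, here is a hypothetical skeleton of the JDK check those first echo messages belong to; the JDKPATH default and the version parsing are my assumptions, not the actual Kaldin script.

#!/bin/bash
# Hypothetical skeleton of the installer's JDK check; JDKPATH and the
# version parsing are assumptions based on the echo messages above.
JDKPATH="/usr/lib/jvm/java-6-oracle"

if [ -x "$JDKPATH/bin/java" ]; then
    # java -version prints e.g.: java version "1.6.0_45"
    JDK_VER=$("$JDKPATH/bin/java" -version 2>&1 | head -n1 | awk -F '"' '{print $2}')
    echo "Found Oracle JDK $JDK_VER, Kaldin can be installed on this system"
else
    echo "Oracle JDK wasn't found in $JDKPATH, please check the installation and/or path $JDKPATH"
    echo "Please correct the JDK installation and then run the Kaldin installer script. Kaldin installer is exiting now"
    exit 1
fi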
You must use "sudo" before the "./kaldin" to get all the permissions needed.
I have added "sudo" wherever it is required inside the script and well tested for the working condition. So prefixing the "sudo" is no longer required. Just "./kaldin" should do.
dont follow this guide it is very bad.
Don't write something if you don't know Linux/Unix. I know who you are.
|
0.954241 |
The evolutionary origins of human language are obscured by the scarcity of essential linguistic characteristics in non-human primate communication systems. Volitional control of vocal utterances is one such indispensable feature of language. We investigated the ability of two monkeys to volitionally utter species-specific calls over many years. Both monkeys reliably vocalized on command during juvenile periods, but discontinued this controlled vocal behavior in adulthood. This emerging disability was confined to volitional vocal production, as the monkeys continued to vocalize spontaneously. In addition, they continued to use hand movements as instructed responses during adulthood. This greater vocal flexibility of monkeys early in ontogeny supports the neoteny hypothesis in human evolution. This suggests that linguistic capabilities were enabled via an expansion of the juvenile period during the development of humans.
The human language faculty vastly outperforms primate vocal communication systems in scope and flexibility (Balter, 2010; Ghazanfar, 2008; Hammerschmidt and Fischer, 2008). This lack of essential linguistic characteristics in extant non-human primate communication systems hampers insights into the evolutionary origins of speech and language (Arnold and Zuberbühler, 2006; Seyfarth and Cheney, 2010). Volitional control of vocal utterances is deemed a critical, albeit insufficient, precursor for the development of a flexible communicative system (Balter, 2010; Ghazanfar, 2008; Ackermann et al., 2014). However, primate communication systems consist of stereotyped and innate calls that are almost exclusively uttered affectively (Ackermann et al., 2014; Deacon, 2010; Jürgens, 2002). Non-human primates lack the neural machinery that endows modern humans with outstanding cognitive abilities such as language. The ‘neoteny hypothesis of human evolution’ (Gould, 1977) posits the expansion of the childhood period with refined synaptic development in modern humans to facilitate larger and more powerful neural systems. Specifically, the prefrontal cortex, which is associated with the highest levels of cognition in addition to being the site of Broca's language production area, experiences extraordinarily long phases of developmental reorganization of neuronal circuits (Petanjek et al., 2011). Genes related to the development of the prefrontal cortex show excessive, neotenic expression in humans relative to chimpanzees and rhesus macaques (Somel et al., 2013).
The neoteny hypothesis suggests an exploitation of greater neural plasticity early in ontogeny to foster the neural underpinnings of high-level communication systems like language (Carroll, 2003; Oller, 2000). Interestingly, primate vocalizations undergo ontogenetic changes. In infant and juvenile simian monkeys, calls are more variable (Hammerschmidt et al., 2001; Pistorio et al., 2006; Takahashi et al., 2015) and vocal-related learning, such as call usage and comprehension, is facilitated (Seyfarth and Cheney, 1986, 2010). Therefore, infant and juvenile monkeys seem to have an advantage and can use vocal communication signals more flexibly.
Earlier studies revealed that monkeys and apes can be trained to vocalize in operant conditioning tasks (Sutton et al., 1973, 1974, 1985; Trachy et al., 1981; Coudé et al., 2011; Koda et al., 2007). We recently reported that two juvenile rhesus monkeys can be trained with effort to instrumentalize their calls as a conditioned response in a simple detection task (Hage et al., 2013). All but one study, including our own, that indicated the age of the monkeys and apes were performed with juvenile animals (Sutton et al., 1973, 1974; Trachy et al., 1981; Koda et al., 2007; Hage et al., 2013). Based on the neoteny hypothesis, we hypothesized that juvenile monkeys with a more plastic brain would be better suited for volitional call production than adult monkeys. We here present a longitudinal study based on data collected from two monkeys over several years, investigating potential developmental trends of vocal behavior from the juvenile to the adult period.
We used two male rhesus monkeys, Macaca mulatta (Zimmermann 1780), aged 4.8 and 4.9 years and weighing 4.2 and 4.5 kg at the beginning of this long-term study, and aged 9.5 and 9.7 years and weighing 8.6 and 9 kg, respectively, at the end. All procedures were authorized by the national authority, the Regierungspräsidium Tübingen, Germany.
Both monkeys were first trained to perform a vocal response task (Fig. 1A), i.e. a visual go/no-go detection task using their vocalizations as a response (Hage et al., 2013; Hage and Nieder, 2013, 2015). Briefly, the monkeys were required to vocalize cued by arbitrary visual stimuli (red or blue squares) to receive a reward. Monkey T was trained to utter ‘coo’ vocalizations; monkey C was taught to emit ‘grunts’. The two colors appeared with equal probability (P=0.5) and had no significant influence on call probability (Wilcoxon signed rank test, P>0.1 for both monkeys). Trials began when the monkey initiated a ‘ready’ response by grasping a bar. Then, a visual cue, indicating the ‘no-go’ signal (‘pre-cue’; white square, diameter 0.5 deg of visual angle) appeared for a randomized time of 1–5 s (time epoch 1 of monkey C with times between 0.5 and 5 s). During this period, vocal output had to be withheld. Next, in 80% of the trials, the visual cue was changed to a colored ‘go’ signal (red or blue square; diameter 0.5 deg of visual angle) lasting for 3000 ms (for monkey C, the duration of the go signal was extended to 3500 ms from the 19th session of epoch 6 until the end of epoch 8). During this time, the monkeys had to emit a vocalization to receive a reward. In 20% of the trials, the cue remained unchanged for another 3000 ms (‘catch’ trial). During this period, the monkey had to withhold calls. Catch trials were not rewarded. ‘False alarms’ were indicated by visual feedback (blue screen) and by trial abortion. To demonstrate its readiness to work, the monkey had to grab the bar throughout the pre-cue as well as the go phases. Bar release aborted the trials instantaneously, followed by visual feedback (red screen). In accordance with the go/no-go detection protocol, successful go trials were defined as ‘hits’, and unsuccessful catch trials as false alarms. One session was recorded per individual per day.
Behavioral protocols. (A) In the vocal response task (visual go/no-go detection task), monkeys called within 3 s to indicate the detection of a color ‘go’ stimulus. They were required to withhold calls in the absence of a color go stimulus in catch trials. (B) In the manual response task (visual delayed match-to-sample task), monkeys released a bar (originally grabbed to initiate the trial) within 1.2 s to indicate the matching of a test color with the sample color. They were required to continue grasping the bar whenever a non-match color appeared in the test 1 period.
Vocal recording sessions comprised eight contiguous epochs in monkey C (epoch 1: median age 4.9 years with N=15 daily sessions; epoch 2: 5.4 years, N=27; epoch 3: 6.2 years, N=25; epoch 4: 6.8 years, N=29; epoch 5: 7.1 years, N=47; epoch 6: 7.7 years, N=28; epoch 7: 7.8 years, N=13; epoch 8: 8.0 years, N=8) and seven epochs in monkey T (epoch 1: 4.9 years, N=15; epoch 2: 5.0 years, N=33; epoch 3: 5.8 years, N=20; epoch 4: 6.4 years, N=52; epoch 5: 6.7 years, N=33; epoch 6: 7.0 years, N=12; epoch 7: 7.8 years, N=53). Monkey T was head-fixed during all sessions; monkey C was head-fixed in all sessions during epochs 1–5. In both monkeys, epochs 2 and 5 include neuronal recording sessions, while all other sessions in the remaining epochs were behavioral sessions.
After the monkeys ceased to produce conditioned calls as a response, they were re-trained to perform a manual response task (Fig. 1B). They were trained to perform a standard visual delayed match-to-sample (DMS) task with colors and were required to respond to matching colors by hand movements. A trial started when the monkey grasped a lever. A sample display showing a color square (2 deg visual angle) was presented on a black background in the center of a computer screen for 800 ms. A constant 1000 ms memory delay followed. Next, a test display appeared which in 50% of the cases was a match showing the same color as the sample period (‘match’ trials). In the other 50% of cases (‘non-match’ trials), the first test display after the delay period was a non-match, showing a different color, followed by a second test display, which always displayed a match color. If a match appeared, monkeys released the lever (within 1.2 s) to receive a fluid reward. If a non-match was shown, they held the lever until the second test display appeared (which in these trials was always a match), requiring a lever release for a reward. Trials were randomized and balanced across all relevant features (e.g. match versus non-match, colors). Monkey C performed the task with red and blue colors, monkey T with red, blue and green colors.
As in our previous studies (Hage et al., 2013; Hage and Nieder, 2013, 2015), stimulus presentation and behavioral monitoring were automated on PCs running the CORTEX program (National Institutes of Health) and recorded by a Plexon Multi-Acquisition system. Vocalizations were recorded by the same system with a sampling rate of 40,000 Hz via an A/D converter. A custom-written MATLAB program running on another PC monitored the vocal behavior in real time and detected the vocalizations. Vocal onset times were detected offline by a custom-written MATLAB program to ensure precise timing for data analysis in all but two sessions of monkey C (epoch 3 and epoch 4), as these behavioral sessions were recorded by the CORTEX program only.
The spontaneous vocalizations of the two monkeys in their housing environment were measured during their juvenile and adult periods as part of ‘ethograms’ for which a range of behaviors was recorded (Hage et al., 2014). To that aim, we focally sampled the call behavior of the monkeys in 1 min intervals over a duration of 10 min, during two periods of five consecutive days (‘continuous sampling’; Altmann, 1974; Martin and Bateson, 1993). Call occurrence (%) could range from 0% (no calls during the 10 min observation window) to 100% (calls every minute during the 10 min observation) and was averaged for the juvenile and adult test periods. The data for the juvenile phase were collected when monkey C was 5.4 and 5.6 years old, and when monkey T was 5.0 and 6.1 years old. Spontaneous call behavior for the adult phase was recorded when monkey C was 9.5 years old, and when monkey T was 9.7 years old. Wilcoxon rank sum tests were used to test for significant differences in spontaneous vocal behavior between the juvenile phase and adulthood.
We computed d-prime (d′) sensitivity values derived from signal detection theory (Green and Swets, 1966) by subtracting z-scores (normal deviates) of median false alarm rates from z-scores of median hit rates. The detection threshold for d′ values was set to 1.8, which corresponds to a hit rate of 56% at a false alarm rate of 5% in this go/no-go task (Green and Swets, 1966).
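To make the criterion concrete: d′ = z(hit rate) − z(false alarm rate), where z is the inverse of the standard normal cumulative distribution function. For the stated threshold case, z(0.56) ≈ 0.15 and z(0.05) ≈ −1.64, so d′ ≈ 0.15 − (−1.64) ≈ 1.8, which is how a 56% hit rate at a 5% false alarm rate corresponds to the criterion of 1.8.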
Kruskal–Wallis tests (with post hoc Wilcoxon rank sum tests) were performed to test for significant differences in call performance, hit rate, false alarm rate, d′ value and call latency during the detection task over time. We used Pearson's correlations to test for possible correlations between these parameters characterizing vocal behavior and the monkeys’ age in the appropriate sessions.
We measured vocal behavior over a period of about 4 years, when monkey C's age ranged from 4.8 to 8.1 years and monkey T's age spanned from 5.1 to 7.9 years. During this time, we recorded 12,769 vocalizations in monkey C and 21,029 vocalizations in monkey T, which were uttered as obligatory responses in the vocal response task (Fig. 1A). In total, this corresponded to 192 daily sessions in monkey C, and 218 sessions in monkey T. Vocal recording sessions comprised eight contiguous epochs in monkey C and seven epochs in monkey T (see Materials and methods for details). Fig. 2 shows the vocalization behavior of both monkeys over this time in relation to the timing of life history events in macaques (Fleagle, 2013). We measured several behavioral parameters characterizing call behavior: the total number of volitional calls per session, the hit rate (percentage correct responses) and the false alarm rate (vocalizations during catch trials without a go stimulus). The hit rate and false alarm rate were used to calculate the sensitivity index, or d′, from signal detection theory (Green and Swets, 1966). During the first epoch, and at an age of 4.8 and 5.1 years for monkey C and monkey T, respectively, both monkeys showed superior vocalization behavior. This was evidenced by high call rates (monkey C: median 90 calls per session, Fig. 2A; monkey T: median 181 calls per session, Fig. 2B), high hit rates (monkey C: median 62.7%, Fig. 2C; monkey T: median 56.2%, Fig. 2D) and no false alarms at all in both monkeys (Fig. 2E,F). As a result of this high performance, the d′ value was 4.0 in monkey C (Fig. 2G) and 3.9 in monkey T (Fig. 2H), and thus well above chance.
Temporal trajectories of vocal behavior. (A,B) Distribution of the number of vocalizations produced during the vocal response task (red bars) and number of performed trials in the manual response task (blue bars) of monkey C and monkey T within the recorded time epochs. (C,D) Distribution of hit rates within the vocal response task and correct responses in the manual response task. Dashed lines in A–D indicate significant correlations between call and hit rates and the monkey's age. (E,F) Distribution of false alarm rates in the vocal response task. (G,H) Distribution of d′ values as a measure of sensitivity. Dotted lines indicate the d′ threshold criterion of 1.8. *No d′ value could be calculated for the last time epoch of monkey T because of the absence of vocal performance. Colored dots inside boxes indicate medians; lower and upper margins of boxes represent the first and third quartile, respectively. Note the different scales in A–D for the detection task (left) and delayed match-to-sample task (right). The shaded background indicates the adult period (Fleagle, 2013).
However, vocal performance progressively declined with increasing age of the monkeys. The number of calls per session decreased systematically over the epochs until both monkeys stopped uttering vocalizations completely (Fig. 2A,B; Kruskal–Wallis test; monkey C: P<0.001, N=192, d.f.=7, χ2=109.1; monkey T: P<0.001, N=218, d.f.=6, χ2=138.1) and was significantly correlated with age (Pearson's correlation: monkey C: P<0.001, N=192, R=−0.63; monkey T: P<0.001, N=192, R=−0.63; Fig. 2A,B). A similar decline of hit rates was observed for monkey C (Fig. 2C; Kruskal–Wallis test, P<0.001, N=192, d.f.=7, χ2=125.1) and monkey T (Fig. 2D; Kruskal–Wallis test, P<0.001, N=218, d.f.=6, χ2=151.4), which was also significantly correlated with age in both monkeys (Pearson's correlation: monkey C: P<0.001, N=192, R=−0.80; monkey T: P<0.001, N=218, R=−0.73; Fig. 2C,D). Importantly, however, the false alarm rate stayed at low levels for all epochs in both monkeys (Fig. 2E,F), indicating that the monkeys did not develop arbitrary calling behavior. Therefore, the accompanying significant change of d′ values (Fig. 2G,H; Kruskal–Wallis test; monkey C: P<0.001, N=192, d.f.=7, χ2=118.3; monkey T: P<0.001, N=165, d.f.=5, χ2=42.6), as well as the correlation of d′ values with age, was caused by the decrease in overall vocalizations until extinction. However, median d′ values were well above detection threshold until the end of the recordings (Pearson's correlation: monkey C: P<0.001, N=190, R=−0.68; monkey T: P<0.001, N=165, R=−0.39; Fig. 2G,H).
In parallel with the decline in performance, call latency increased significantly. In monkey C, call latency changed from a median of 1.64 s in epoch 1 to 2.63 s in epoch 7 (Fig. 3A,B; Kruskal–Wallis test, P<0.001, N=130, d.f.=4, χ2=91.0, post hoc Wilcoxon rank sum test, P<0.001, N=28). In addition, median call latency was significantly correlated with the age of monkey C (Pearson's correlation, P<0.001, N=130, R=0.77). A less pronounced but equally significant increase in call latency was observed between the first and last time epoch in monkey T (Fig. 3C; Kruskal–Wallis test, P<0.001, N=165, d.f.=5, χ2=32.4, post hoc Wilcoxon rank sum test, P<0.02, N=68). Call latency in monkey T did not increase steadily from epoch to epoch as it did in monkey C, and showed only a weak, yet significant, correlation with the animal's age (Pearson's correlation, P<0.01, N=165, R=0.21).
Changes in call latency. (A) Examples of the distribution of median call latency in single sessions for monkey C during go trials in epochs 1, 2, 5, 6 and 7 (bin width 100 ms; call latency was not measured during epochs 3 and 4; see Materials and methods for details). Blue vertical lines indicate the median latency of the sessions. (B,C) Distribution of median call latency in the different epochs for monkey C (B) and monkey T (C). Colored dots inside boxes indicate medians; lower and upper margins of boxes represent the first and third quartile, respectively. Blue vertical bars in B are medians of the examples shown in A. Latencies of epoch 8 of monkey C (B) are not depicted because of low call numbers within the sessions; latencies of epoch 7 of monkey T (C) are not presented because of the absence of vocal behavior. *Epochs in which call latencies could not be determined in B (see Materials and methods for details). The shaded background indicates the adult period (Fleagle, 2013).
To see whether the absence of vocalizations within the vocal response task was due to a general loss of vocal behavior, we investigated the spontaneous vocal behavior of both monkeys in their housing environment during their juvenile phase and adulthood. Fig. 4 depicts the mean occurrence of the monkeys’ vocal behavior during focal animal scanning (10 min ethogram). Spontaneous calling behavior remained stable in monkey C (Wilcoxon rank sum test, P>0.1, N=20). Monkey T showed reduced spontaneous calling behavior during adulthood (Wilcoxon rank sum test, P<0.01, N=20), but never stopped vocalizing spontaneously. Thus, the ongoing spontaneous call behavior of both monkeys was in stark contrast to the complete halt of volitional vocalizations with age. Therefore, the reported decline of volitional vocalizations cannot be explained by a general lapse of calling behavior, because the monkeys continued to vocalize spontaneously in their housing environment, i.e. outside of the behavioral protocol.
Comparison of the occurrence of vocal behavior in the juvenile and adult phase for both monkeys. The bars show the mean+s.e.m. call behavior of each monkey within 10 min observation periods (100% indicates calls every minute during the 10 min; 20 sessions per monkey) as a function of the monkeys’ developmental stage. J, juvenile; A, adult.
Moreover, the discontinuation of volitional vocal behavior could also not be accounted for by major environmental changes. Throughout these years of training, the monkeys maintained continuous good health (also verified by regular blood tests) and gained normal weight. Moreover, the same behavioral protocol was presented, the same controlled fluid intake protocol for motivation was applied, the same housing of the monkeys in small social groups was carried out, and the same scientific trainers (S.R.H. and N.G.) worked with the monkeys throughout this 4 year period.
Finally, we wondered whether the extinction of volitional calling could be explained by a general loss of volitional responses, a lack of motivation, or some general resistance to respond in a conditioned task. To test this possibility, we re-trained both monkeys on a manual response task after they stopped vocalizing in the vocal response task. To remain within the same sensory modality, we trained them to perform a DMS task with color stimuli (Fig. 1B). Monkeys were required to use a manual bar release instead of a vocalization as a response. Even though a DMS discrimination task is more demanding than the previous simple detection task, the monkeys, which were now 9.0 years old (monkey C) and 8.2 years old (monkey T), showed full recovery of the volitional response. Monkey C performed a median of 534 trials per session (8 sessions) and monkey T a median of 526 trials (7 sessions; Fig. 2A,B). They also showed a high median percentage of correct responses (Fig. 2C,D; monkey C: 80.2%, monkey T: 79.7%). Both monkeys continued to work at this high performance level.
We report a systematic decline of volitional vocalizations in rhesus monkeys that was not explained by (a) a general lapse of calling behavior, (b) environmental changes or (c) a general loss of voluntary responses or lack of motivation. During this longitudinal investigation, we also performed unilateral single-unit recordings with microelectrodes in the prefrontal cortex (PFC) of both monkeys (Hage and Nieder, 2013, 2015), but we exclude the possibility that recordings caused damage that would have left the monkeys unable to vocalize on command. We have never witnessed a decline of any cognitive function as a result of PFC recordings, and post-mortem histological examination of other monkey brains has never shown damage to the tissue resulting from recordings. Furthermore, both monkeys have successfully been re-trained on other demanding tasks, and there was no indication whatsoever that the monkeys had suffered from disturbance of cognitive control functions. In fact, we argue that the visual DMS task that both monkeys successfully performed after they ceased to vocalize volitionally is more demanding than the cued vocalization (CV) task. In contrast to the CV task, the DMS task required discrimination of both sample and test stimuli (not just simple detection of a go stimulus) and memorization of a sample image over a delay period (which was entirely missing in the CV task). This is another indication that the monkeys were fully intact. Finally, we think it is highly unlikely that a putative worsened coordination between the manual and oral domains over development (the monkeys needed to grab a bar while vocalizing) might have caused the observed effects, given that hand movements and vocalizations were temporally disparate. Because the observed decline in volitional call behavior correlated with the transition of the monkeys from juvenile phases to adulthood, our findings can best be reconciled with a maturation process. We suspect that early in ontogeny, the monkeys’ neural central executive was still connected with the vocal motor network, thus allowing rudimentary cognitive control over call behavior. This cognitive control of vocal behavior was lost when the monkeys reached adulthood, pointing to developmental reorganization in the brain of these monkeys.
Using the identical task protocol, we previously reported a neuronal correlate of the monkeys’ ability to initiate calls in response to the detection of an arbitrary visual stimulus (Hage and Nieder, 2013). Single neurons in the monkey homolog of Broca's area (Brodmann area 44 and 45) in the lateral PFC specifically signaled the preparation of instructed vocalizations, but not of spontaneous calls (Hage and Nieder, 2013). We hypothesize that these neurons of the PFC (which is generally associated with the brain's cognitive control center) connect the brain's executive with the vocal motor network early in primate ontogeny (Ackermann et al., 2014) as an obligate network for executive control on vocal output (Miller and Cohen, 2001). The anatomical substrate of this juvenile capability might be found in the excessive synaptic connections and dendritic spines particularly found in the PFC of human and non-human primates that are initially overproduced to about two times the adult number before being pruned during puberty to reach the adult level at the onset of adolescence (Petanjek et al., 2011; Bourgeois et al., 1994; Huttenlocher and Dabholkar, 1997; Dehaene and Cohen, 2007). This neoteny of brain structures in the PFC could be mediated by genes related to the development of the prefrontal cortex that show a correspondingly excessive, neotenic expression in humans relative to chimpanzees and rhesus macaques (Somel et al., 2013). Our hypothesis predicts that neural connections between the executive functioning networks in PFC and the brain's vocal motor network, which exist in juvenile monkeys, are decoupled during adolescence and are lost in adult monkeys. If true, such a finding would strengthen the neoteny hypothesis of human evolution (Gould, 1977) and explain aspects of human language evolution.
It is widely acknowledged that adolescence is associated with considerable reorganization of the brain. But what could cause the loss of volitional vocalizations? Activity-dependent pruning of connections via elimination of excessive synapses is thought to play a major role in sculpting circuits and connections during ontogeny. However, because the brain networks for producing vocalizations were in use and of considerable behavioral relevance for our monkeys, the loss of this function would be difficult to reconcile with activity-dependent elimination of synapses. Yet even without activity-dependent plasticity, the brain undergoes considerable reorganization during adolescence that serves a variety of other, possibly competing functions. For instance, hormonal changes associated with sexual maturation contribute to adolescent-typical behavioral changes that necessarily have an impact on large-scale networks. Functions beneficial during childhood may become inhibited during adulthood. In addition, changes of the highly interconnected brain in one area may in turn constrain the maintenance of other functions. Moreover, synaptic elimination during adolescence probably involves adjustment of the excitatory/inhibitory balance on individual neurons and within networks, given that excitatory synapses are selectively eliminated whereas inhibitory synapses are spared (Rakic et al., 1986). We speculate that the causes of the loss of brain circuits and networks for voluntary vocalizations are related to one (or several) of the non-activity-related elimination processes occurring in the maturing brain.
Our study emphasizes one of the rare cases of commonality between the human language system and non-human primate communication systems, namely the (developmentally restricted) ability to cognitively control vocalizations. It suggests that one important aspect of flexible communication is grounded in the primate lineage and could be exploited during the emergence of functional flexibility of prelinguistic vocalizations of human infants (Oller et al., 2013). As a phylogenetic pre-adaptation, volitional control of vocal utterances would be a crucial subcomponent in the complex multi-component system ‘human language’ and instrumental for all higher level linguistic characteristics emerging in human development, such as semantic compositionality or the grasp and mastering of a symbol system (Deacon, 1997; Nieder, 2009). Our behavioral study suggests an expansion of the juvenile period during ontogeny as one of the key evolutionary events in the evolution of language.
We thank two anonymous reviewers for helpful comments on a previous version of the manuscript.
S.R.H. and A.N. designed the study, interpreted the data and wrote the manuscript. S.R.H. and N.G. performed experiments and analyzed the data.
This work was supported by the Werner Reichardt Centre for Integrative Neuroscience (CIN) at the Eberhard Karls University of Tübingen (CIN is an Excellence Cluster funded by the Deutsche Forschungsgemeinschaft within the frame work of the Excellence Initiative EXC 307).
Deacon, T. W. (1997). The Symbolic Species: The Co-evolution of Language and the Brain. New York: W. W. Norton.
Fleagle, J. G. (2013). Primate Adaptation and Evolution. Waltham, MA: Academic Press.
Gould, S. J. (1977). Ontogeny and Phylogeny. Cambridge, MA: Harvard University Press.
Green, D. M. and Swets, J. A. (1966). Signal Detection Theory and Psychophysics. New York: Wiley.
Hammerschmidt, K. and Fischer, J. (2008). Constraints in primate vocal production. In Evolution of Communicative Flexibility: Complexity, Creativity, and Adaptability in Human and Animal Communication (ed. D. K. Oller and U. Griebel), pp. 93-121. Cambridge, MA: MIT Press.
Martin, P. and Bateson, P. (1993). Measuring Behaviour: An Introductory Guide. Cambridge: Cambridge University Press.
Oller, D. K. (2000). The Emergence of the Speech Capacity. New York: Psychology Press.
(2010). Primate vocal communication. In Primate Neuroethology (ed. M. Platt and A. A. Ghazanfar), pp. 84-97. New York: Oxford University Press.
|
0.970447 |
What is the main feature of Italian violinmaking after the great classical period? During the 19th and 20th centuries, construction methods in countries such as France, Germany and Great Britain were clearly influenced by the models and style of the great classical Cremonese makers.
At the beginning of the 19th century, the number of copies of ancient instruments made in Europe progressively increased: the makers’ skill showed not only in following the Cremonese method but also in the ability to give an instrument an ancient appearance.
In Italy, the makers’ creativity, with only a few exceptions, developed in a different way. Until the mid-20th century, Italian luthiers distinguished themselves by maintaining a continuity of styles and construction methods on a regional basis, or sometimes within even smaller geographical areas.
This is why today, when we look back on this period of great productivity, we can recognize a Piedmontese, a Milanese, a Genoese and, further south, a Tuscan and a Neapolitan school.
With a series of monthly exhibitions and conferences involving researchers and experts, the Museo del Violino presents a century characterized, in violin making, by great diversification and a variety of individual styles.
|
0.999616 |
Sinclair was nothing if not bold with its claims, including this one that the Spectrum - announced just a few months before at a press conference at the Churchill Hotel on Friday, 23rd April 1982 - was the best personal computer in the world for less than £500 (£1,820 in 2019) - a fairly wide range at the time which included Acorn's BBC Model B, Atari's 400 and 800 and the Commodore 64 (although that had only just been released in the US at the time of the advert, so can be excluded). Despite its "dead flesh" keyboard, the Spectrum was certainly popular, going on to sell around 5 million units in its various guises - right up to the Spectrum +3 with integrated floppy drive. At launch, the 48K Spectrum retailed for £175 (£630), which was a breakthrough price for a colour computer with more than a few KB of memory.
In the September 1982 issue of Electronics and Computing was an interesting note, in the "Talking Shop" section, on the shape of the microcomputer industry in the UK, which showed just how much competition the Spectrum was up against. A market report revealed that there were some 112 different manufacturers selling micros in the UK, a number made up of 44 from the US, 38 from the UK, 14 from Europe, 12 from Japan and 4 from the rest of the Far East. There was some optimism that this would mean that the UK computer industry would "stay in the business", unlike the motorbike and ship-building industries before it. Sadly, this position did not last beyond the 1980s.
Meanwhile, there was some confusion in the market between the ZX Spectrum and another Spectrum which had been launched the previous September. This was perhaps surprising as the "other" Spectrum was produced by specialist company Micro APL and was a 16-bit multi-user multi-tasking machine with an entry-level price of some £10,000 - or £36,500 in 2019 - a mere 57 times more than its erstwhile competition. Micro APL stated that whilst it had actually been getting enquiries from customers confusing the two machines, there "were no hard feelings" before noting that it hadn't bothered to register the name because it had been advised that the name was too common to be a trademark.
An early 1982 mockup of Sinclair's pocket TV, © Practical Computing, August 1982
Back in 1981, Sinclair had been one of the companies - alongside Acorn, Tangerine, Research Machines, Transam and Nascom - that had tendered for the BBC's microcomputer project, after the BBC had given up hope of ever seeing a machine produced by its original choice, Newbury. Oddly enough, Newbury's machine - the NewBrain - had started as a Sinclair Radionics project in 1978. A change in management at the National Enterprise Board - which had been brought in to help fund Sinclair's Microvision TV project, and whose cash injection meant that Radionics was now essentially part-nationalised - shifted the emphasis away from consumer electronics and back to Radionics' old instruments market, largely because the NEB didn't think that Sinclair could hold its own in the electronics market against the perceived threat from the Japanese. This triggered the exit of Clive in 1979 from the company he had founded, as well as the farming off of the NewBrain to Newbury (and eventually Grundy). The worry about a Japanese invasion was a common theme amongst government and the UK micro industry, so it was all the more ironic when Sinclair Research signed an agreement with Matsushita towards the end of 1981 to export micros to Japan.
One of the reasons that Sinclair didn't get the contract was said to be the company's existing success, which had made it unwilling to adapt to the BBC's particular specification - in particular over the requirement for a structured BASIC; expansion, printer and network ports; state-of-the-art colour graphics and sound, and analogue inputs. This inflexibility left the BBC feeling that it would end up with a "Sinclair machine in BBC colours" and contrasted significantly with Acorn which, as well as being far less secretive, was eager to adapt to the BBC's requirements. As Acorn's Chris Curry put it in a 2008 interview, "Clive was much less inclined to accept outside input. I mean we were being absolute tarts about it. We were doing just what the BBC wanted us to do and Clive certainly wasn't doing that".
Clive Sinclair in happier times, © Popular Computing Weekly September 1983
Clive was still furious about losing out, and got in several digs at the April launch of his might-have-been BBC Micro, which was now known as the ZX Spectrum. Comparing the Spectrum to the Model A BBC Micro, he said "It's obvious at a glance that the design of the Spectrum is more elegant. What may not be so obvious is that it also provides more power. [It] has more usable RAM and higher maximum RAM. It offers twice as many colours on the screen at any one time [and] also offers user-definable graphics". He continued "It employs a dialect of BASIC already in use in over 400,000 computers worldwide" - a reference to the BBC's decision to insist on its own custom BASIC rather than perhaps use Sinclair's "standard". He concluded that "The BBC makes the best TV programmes - and Sinclair makes the world's best computers!".
Sinclair's particular obsession about "elegance" cropped up frequently, as it seemed to be a core component of the company belief system and appeared to be almost a shield of righteousness with which to fend off the competition. Whilst some observers had suggested that elegance was merely a "self-serving concept fitted up to justify under-specification", Sinclair did seem to have considered it in the design of the Spectrum, which was apparently not just small simply in order to make it cheaper. "If you made [the Spectrum] any larger it would simply be more expensive. There would be no contra-benefit, so elegant design has led to a very compact shape compared with its competitors, not just because we wanted it to be tiny". He continued "If we wanted to make it really tiny we could have made it, I suppose, the size of a cigarette packet, but that would not have been functional, because the keyboard would not have been usable". Continuing with the keyboard, which was one of the few areas in the Spectrum's design that Uncle Clive took a direct interest in, Sinclair suggested that "The Spectrum sacrifices nothing to size. The keyboard is exactly the same spacing and pitch as the IBM, which is why we went for that size. If we went down to the size of a cigarette packet it would not be cheaper, it would be more expensive. That size is optimum". Quite handy that it happened to be just the same size as the IBM keyboard really.
Meanwhile, Sinclair was not so much mad at Acorn but at what he saw as the BBC's arrogance in trying to set a new language standard, and even in making micros at all, suggesting that it was about as acceptable as a BBC car or BBC toothpaste. In an extensive interview in July 1982's Practical Computing, he railed "They were able to get away with making computers because none of us had sufficient power or pull with the Government to put over just what a damaging action that was. They had the unmitigated gall to think that they could set a standard - the BBC language. It is just sheer arrogance on their part". Perhaps trading some arrogance with arrogance, or at least a fair degree of hubris, Sinclair continued "We will win hands down because we know so much better what is needed and know so much better how to do it than the BBC does that our system, our machine and our language will completely win out in any competitive battle".
Sinclair was unusual in that whilst most companies would pick an off-the-shelf BASIC such as Microsoft's, it went off and wrote its own for its first machine - the ZX-80. This was not only because of Clive's wish to create a "radically new Basic interpreter" for the ZX-80, but for the more prosaic reason that it was unlikely that a standard Basic would fit into the tiny 4K of ROM that was available. What Sinclair ended up with was a highly-compact, very structured (internally, if not in the programming sense) and almost totally bug-free integer-Basic interpreter with just enough additional minimalist functions to manage the keyboard, display and cassette interface. This well-regarded 4K program was re-written and extended into 8K for the ZX-81, and included some extra routines such as the infamous slow/fast handler, a floating-point calculator and some more - but still rudimentary - cassette handlers. However, after all this extra stuff was added, it was found that there wasn't enough room and so some functions had to be removed and others re-written to be more economical, even if in some cases it meant they would run slower. It was also said that this version of the ROM looked like it had been written in a hurry as it featured several significant bugs, including one which gave the answer to 0.25**2 (the square of 0.25) as 3.1423844. The faulty version was shipped on around 100,000 ZX-81s, whilst 20,000 later ZX-81s were shipped with a hardware add-on to fix the maths bugs in the summer of 1981. The situation was clearly not ideal and Sinclair had the 8K ROM completely re-written, with around 400,000 of the updated machines shipping during 1982. When the Spectrum eventually shipped in 1982, its 16K ROM apparently contained large chunks of un-modified code from the ZX-81, complete with many of its bugs. Sinclair User lamented that "the tidiness of the first [ZX-80 4K] program is now almost completely lost and the present 16K program has only a token attempt at structure". However, it also suggested that - at the time when Sinclair's micros had sold more than pretty much anything else - it was also perhaps the most successful machine-code program ever written. This success was perhaps despite the fact that the early compromises made to make Sinclair Basic work on the ZX-80 and '81 lived on into the Spectrum, making it significantly slower than much of its competition, including the BBC Micro and even the memory-starved VIC-20.
Nigel Searle of Sinclair, © Popular Computing Weekly September 1983
Back in the summer of 1982, Sinclair got a crumb of comfort when the Department of Industry (DOI) was persuaded to accept the recently-launched Spectrum as an alternative to the BBC Micro in the £9 million "Micros in Primaries" scheme. The deal to include the Speccy had been largely brokered by Nigel Searle, a long-time Sinclair trustee who had ended up as Sinclair Research's managing director. On his return to the UK from a trip to the US, he had picked up rumours of the primary-school scheme and got the DOI to add the Spectrum to its list, even though it had already chosen the micros it wanted for the scheme. Definitely excluded was Commodore, largely on account of its non-British North-American-ness, which reacted by reducing the price of its PETs by between 20 and 33%, for a three-month period commencing in September.
Nigel Searle of Sinclair, © Sinclair User 1982
Searle had first joined Sinclair in 1972 during the Radionics days, where he had been designing pocket calculators. He then moved to California and then New York, where he looked after the promotion of Sinclair's calculators and watches in the US market. He left the company when the National Enterprise Board came along, saying that "The calculator business was not doing too well and also it was not really the same company once the NEB was involved". Not long after Clive started Sinclair Research and had launched the ZX80, Searle rejoined to run the US office in Boston, selling the '80 and '81, before moving back to the UK to head up the computer division of Sinclair Research.
Nigel Searle with Clive Sinclair, who's waving a Spectrum around at the Spectrum's launch event at the Churchill Hotel, © Sinclair User 1983
In an interview with Sinclair User, which appeared in the 1983 "The First Sinclair User Annual", Searle explained Sinclair's approach to selling its computers, which up to that point had been almost entirely mail order, even when the Spectrum had launched. "There are no plans at present for putting the new [Spectrum] machine into WH Smith, which is Sinclair's only retailer". He suggested that the reason for this approach was that "not many others are selling so many computers as we are" and that "we have sold far more computers by mail order than anyone who has sold through stores". Sinclair's attachment to mail order was also explained by the fact that when the company started, there actually wasn't any obvious retail outlet for its new micros. Searle stated "It does not occur to me, or anybody else, that Boots, Curry's or Rumbelow's would sell a computer". Plus, there was also the unavoidable fact of economics that not selling through retailers meant higher profit margins. Or as Searle put it, when you sell through mail order, you "do not have to give a discount to retailers which you normally have to do" - a situation which could add up to 50% to the retail price. Instead, Sinclair had no pre-determined limit to its advertising budget and would spend "as much on advertising as will produce a profitable number of sales". This led to a spend of some £5 million, rising to a forecast for 1982 of more than £10 million. The company was certainly profligate, and frequently published its glossy 4-page (or more) gate-fold "mini magazines" in popular computer magazines of the day.
Sinclair's crack at the BBC wasn't particularly successful, as the company was essentially shunned by the Local Education Authorities responsible for deciding which micros their schools bought. This was not just because of the perceived toy-like properties of the Spectrum when compared to the all-metal case and proper keyboard of the BBC Micro or RM's similarly-robust 380Z, but was also largely because many authorities were having to retain compatibility with the computers they had already installed in their secondary schools during the previous scheme, and of course Sinclair hadn't been an approved choice before. By November 1982, some four months after MIP launched, only three out of 422 applications under the scheme had been for the ZX Spectrum.
In non-Spectrum news, it turned out that Searle was a fan of Prestel, the dial-up Viewdata system that had been launched by state monopoly the GPO in 1979. He thought it too expensive, but did offer the prediction "You have to consider not what benefit people get from [Prestel] now, but what they will get in the future. Kids will do more of their learning from computers and many people will work from home". Sinclair had made some moves to support Prestel as it announced it was developing a Prestel adapter which was planned to cost "substantially less than £100", according to Searle. That compared favourably to the winner of the British Telecom ZX81 Prestel Adapter competition, Martochoice Ltd, which was expecting to charge between £120 and £150 (£540 in 2019) for its adapter. Searle remained convinced that the ability to download software, which provided the benefit of constantly updated software and less need to store everything, was crucial, suggesting that "the future of personal computing lies in communication".
Searle went on to explain Sinclair's wider move into software, which he had admitted was something that the company had neglected in the past, by pointing out that as machines were becoming more complex, they were becoming more useless without it. It was also clear that there was much profit to be had in software, as its value was in its content and not in its physical form. At the same time, Sinclair was also moving away from its early mail-order setup and exclusive arrangement with WH Smith as everyone else was now selling micros and they had become an accepted part of the High Street. The transition to retail would however be easier as the company now had a number of outlets it had built up with the ZX81. Searle continued "Sinclair Research is changing. It had always been a technology-driven company with no great emphasis laid on exploiting the market. We will now sell not just by the most profitable route but by any route that is sufficiently profitable".
The arrival of the Spectrum certainly shook the market up, not least for Commodore, where the 48K Spectrum at £175 (£630 in 2019) was cheaper than the £199 3.5K VIC-20. Respected PET and VIC software author and writer Nick Hampshire concluded in the first published review of the machine, in May 1982's Popular Computing Weekly, that despite finding the keyboard's cramped layout and red-on-grey text hard going, "This new computer from Sinclair clearly represents excellent value for money and will, no doubt, prove a great success". Somewhat unexpectedly, the 1977-designed PET was still around even in 1982, seeing ongoing use in schools and offices.
"The legacy of the BBC Micro, Chris Curry interview, 6th May 2008, https://www.nesta.org.uk/sites/default/files/the_legacy_of_bbc_micro.pdf"
|
0.920058 |
Born on 2 October 1934 in Boujad, Morocco, Ahmed Cherkaoui was one of the leading Modernist painters of Moroccan Art in the post-independence period of 1956. Cherkaoui's large-scale abstract and symbolic canvases negotiated an amalgam of references, including Amazighi art, calligraphy and talismanic symbols. Cherkaoui painted his complex symbols on canvases covered with burlap. He used a system of geometric signs and ciphers, including triangles, circles, lozenges, dots and broken and curved lines. He was generally associated with a small group of painters in Casablanca, including Houssein Tallal and Andre Elbaz, although he was never part of the Casablanca School. Cherkaoui died on 17 August 1967 in Casablanca, Morocco.
Daoud Corm was born on 26 June 1852 in Ghosta, Lebanon. Recognized as the father of modern art in Lebanon, Corm was a pioneer in establishing a market for oil painting in the country's private sector. Although Corm experimented with a number of genres including still lifes, landscapes, genre scenes, and a substantial number of Biblical scenes and portraits of religious figures, he was the first artist in Beirut to earn a living by portraying public figures and members of the city's emerging mercantile class on paper and canvas. In addition to his portraits, Corm created a substantial body of religious works, the majority of which were commissioned by the Maronite Church and many of which remain in churches throughout Mount Lebanon. Corm died on 6 June 1930 in Beirut, Lebanon.
Georges Daoud Corm, born in 1896 in Beirut, Lebanon, was a distinguished Lebanese painter and francophone poet. In both his visual and written work, Corm expressed a dedication to the classical tradition of European Humanism and Christian ethics. This commitment is evident in a series of paintings which critics have classified as "paysages d'âme", or spiritual landscapes. His body of work also includes a number of commissioned and anonymous portraits. In 1966, he published his most well-known written work, Essai sur l'art et la civilization de ce temps, in which Corm articulates an aesthetic position in the midst of a radically divided cold war culture. Corm died on 13 December 1971 in Beirut, Lebanon.
Saloua Raouda Choucair was born on 24 June 1916 in Beirut, Lebanon. Inspired by a range of diverse influences, including quantum physics, molecular biology, Arabic poetry, and Islamic theology, she used art to explore the deep essentials structuring human life and universal processes. Her oeuvre integrates practical utility and aesthetic self-awareness in a corpus spanning sculpture, painting, architectural plans, outdoor installations, domestic items, and personal adornment. Her art is influenced by concurrent trends in Abstract, International Modern, Geometricist, and Neo-plastic art in France, America, and the Arab world, all places where she lived and worked between 1948 and the 1980s. She wrote and lectured extensively and taught at the Lebanese University and the American University of Beirut in the 1970s and 1980s. Choucair died in Beirut in 2017.
|
0.98074 |
What else can I do to make my detox journey easier?
Once you have had your detox questions answered, chosen the type of detox you would like to do and how long you would like to do it for, organised your time to make space for it, done the shopping for it and cleared your cupboards of toxic foods and chemicals, you may begin!
1. Silent contemplation walks: Start your day peacefully with a silent contemplative walk. Cut out the usual chatter from family or friends and see how differently the day goes.
2. Skin brush: Before you shower or bathe, try skin brushing. Start at the part of the body furthest from the heart and gradually brush towards the heart. Press lightly: the aim is to stimulate the lymph system, which supports the circulatory system in removing toxins from the blood. Even though it is subtle, it is also an energising exercise, so have a go!
3. Don’t switch on your phone: On waking, don’t make your phone the first thing you turn to. Take this time to step away from technology, experiment with a digital detox and see how your stress levels vary. Does it help?
4. Yoga: It is good to move the body when detoxing to help shift stubborn toxic build up and to aid elimination through the various postures, especially twists that help to wring out the organs.
5. Meditation and Mantra: When detoxing you are also detoxing your emotions, so you will come up against some harder feelings when it starts to get tough. Yoga can support this, as can meditation and mantra. Meditation is a mind detox: practise every day, even if only for 5 minutes to start, and begin to observe the mind rather than letting it control you! Create your own mantra to help support you through the process.
6. Breathing techniques: When we detoxify, we detox through the blood, the colon, the kidneys and also through the lungs with our breath. It is important to release the stagnant air in the lungs and to help stimulate the regeneration of cells in the body through full, oxygen-rich inward breaths that circulate through the bloodstream and help alkalise the body. Like meditation, it also calms the mind and can leave you feeling at peace and more positive.
7. Colonic hydrotherapy or enema: Alongside any detox, it is important to support the evacuation process, as this area can become blocked with a change of diet. Toxic food residues can become stuck in the colon and damage the intestinal wall, which in turn can contribute to a number of diseases; when we detox, we help to shift these toxic waste deposits so that healing can occur. If you would like an expert to do it for you, try colonic hydrotherapy. This cleanses the entire large intestine, and those who have had it done on our 5-day detox yoga retreats are completely different people afterwards: full of beans, giggling, with a real energy boost. It really is a remarkable difference.
8. Have a massage: Detoxing can be tiring, and the body systems can react quite strongly to the process. A fantastic way to support the blood, lymph, kidneys, liver is through massage. Try a lymph massage, a bit like skin brushing, very soothing and relaxing, cleansing yet energising.
9. Have a laugh: Don’t take it all too seriously! Distracting the mind is also healthy as it can all get too much. Watch a comedy, chat to a friend who can make you laugh or if you are on a retreat, get to know fellow detoxers and laughter will follow I am sure!
Once you have completed your detox, well done – you did it! Notice what words pop into your head to describe how you feel; lighter, happier, clearer, energised, lighter in weight, the list goes on.
Don’t celebrate with a pizza and glass of wine!
It is VERY important after a detox not to snap straight back, like an elastic band, to your usual bad habits. A way to avoid this is by giving yourself a prize at the end of the detox. Book in for a spa day, have a massage, relax in a sauna, or arrange to eat a more substantial meal if you were juicing; and if you cut out the list of toxic foods, just try to stay off them. If you do reintroduce them, then do it bit by bit. Start with one of them, not all at once.
If you would like to try a supported detox, then we have a 3-day detox or a 5-day detox for you to try. The 3-day detox is our introduction to detoxing; you will feel great even after 3 days, and you will also realise how fun it can be and how delicious detoxing foods are. Our 5-day detox incorporates less food: green smoothies, wheatgrass shots and juices, but also some warm soups and broths so that you don’t go into too much of a shock!
|
0.981686 |
Explaining the declining numbers of iconic Chinook salmon is more complicated than one might think, and harbour seals have been increasingly put forth by some as the primary culprit.
Sure, seals eat salmon. But food webs are complicated, and it is easy to gloss over the positive roles that predators play in contributing to healthy and productive coastal ecosystems. Declining salmon abundance is the result of a complex variety of factors, and cannot be solely attributed to harbour seals. In the case of declining numbers of vulnerable Chinook salmon, threats include warming ocean and freshwater temperatures, destruction and alteration of stream and estuary habitats, fishing pressures, pollution, and the salmon’s starring role as prey for other fishes, seals, sea lions, birds, bears, dolphins, porpoises, and whales — including critically endangered Southern Resident killer whales.
It was also people who almost eliminated harbour seals from B.C. in the 20th century. From 1879 to 1968, an estimated half a million seals were killed in B.C. for the commercial fur trade and for predator control. Their numbers plummeted to fewer than 10,000 in the 1960s, less than one-tenth of their pre-exploitation numbers. They did have an impressive recovery in the two decades following, increasing exponentially during the ’70s and ’80s and beginning to slow in the 1990s, and they have been relatively stable, at about 110,000 animals, in the two decades since. The so-called “explosion” was a conservation success story that took place largely in the 1970s and 1980s, pretty old numbers to cite in today’s news.
With a successful recovery of harbour seals, more interactions with fisheries come naturally compared to several decades ago. Harbour seals are opportunistic predators that feed on bite-sized prey that are abundant or accessible. They take advantage of an easy meal, sometimes targeting those same fish that we value: juvenile salmon heading to out to sea, or adult salmon incapacitated on fishing lines. A healthy population of seals and fewer salmon inevitably leads to more noticeable interactions, including quick-learning “nuisance animals” that quite understandably draw the ire of some anglers and fisheries managers. Calls for action come quickly, and seals are an easy target.
Some research indicates that wild Chinook salmon productivity is negatively related to seal density, yet there have also been many good Chinook years during the past two decades when seal numbers were high. Other studies show only four per cent of the harbour seal diet is salmon; that herring and hake are their primary prey. In fact, 40 per cent of the harbour seal diet is hake, which is a major salmon smolt predator. While appealing to some, a seal cull could actually destabilize the coastal food web, result in an increased abundance of hake, and increase the predation of juvenile salmon by hake.
The harbour seals’ healthy population numbers are also echoed by another well-known species, the Bigg’s (or transient) killer whale. The population of these apex predators — which feed on marine mammals and favour seals and sea lions — is on the rise, and the increase in Bigg’s killer whales is serving to effectively keep the seal population in check, without any human intervention at all. A human-sanctioned cull of the seal population in the Salish Sea would mean a reduction in food for these mammal-eating killer whales.
Unfortunately, history is rife with failed attempts to manage populations by removing predators from the ecosystem. In this case, we need to focus on mitigating our own impacts so as to better protect salmon and their habitat. As tempting as it is to lay the blame on seals, it is we humans that have work to do. Let’s celebrate the vibrant population of these predators — seals were equally abundant during pre-contact times when salmon was plentiful.
|
0.998942 |
"What is your policy towards police officers in uniform smoking on duty in public in the street?"
and your response "The MPS should always uphold the law in relation to smoking in buildings."
Will you now answer the question as asked? Do you approve of police officers in uniform smoking on duty in public in the street? Does this project the image of the Met that you wish to see?
The Metropolitan Police Service should always uphold the law in relation to smoking in buildings.
|
0.99055 |
Can you name a single person who does not like pizza? Pizza is a universal food loved by all! There are so many variations of pizza, along with several recipes for pizza sauce, toppings and dough. During winters, making your pizza from scratch can be the ideal thing to do on one of those freezing weekend nights. There’s nothing better than biting into that first slice of hot and fresh homemade pizza. Making your own pizza at home is also a great way to avoid unnecessary calories and add some serious nutrients and flavours by choosing your own fresh toppings. However, a homemade pizza can also turn out to be a disaster if you’re not careful.
Here are some tips you need to follow while making a pizza at home.
The sauce should be the first thing you think about when you are attempting to make a homemade pizza, as it is an important part of the pizza. A flavorful and tangy sauce with fresh spices and herbs can add loads of flavour to your pizza, so you don’t have to add a lot of cheese or high-calorie toppings to make it taste good.
Choosing cheese for your homemade pizza is a delicate matter because you want to get the flavour just perfect. For the classic pizza taste, your number one choice should be mozzarella. However, when choosing your mozzarella, you should try to buy fresh mozzarella for the best results. To add more depth to your pizza’s flavour, you can always mix different cheeses together, like provolone, gouda, or gruyere. Furthermore, try to avoid processed and oily cheeses when making your homemade pizza.
Toppings are known to either make your pizza or break it, so piling on fresh veggies, herbs and spices is the key to making a mouthwatering pizza. Some of the most popular pizza veggies include red peppers, mushrooms, garlic, jalapenos, spinach, brussels sprouts, and basil.
Once you’ve mastered the dough, you’re well on your way to pizza perfection.
Kent Atta Maker & Bread Maker is the best atta kneader available in the market, which can help you make a perfect base for your pizza. This atta kneader gets rid of all the problems related to kneading atta with bare hands.
Heat a non-stick pan and add oil to it. Sauté green chillies, ginger, and garlic for a few seconds.
Add bell peppers, capsicum, thinly chopped tomato, and onions.
Mix the vegetables with salt as per preference.
Cook the veggies on a medium-high flame until they are tender.
Then add cubes of cottage cheese and boiled sweet corn.
Mix everything gently and cook well for around 2 more minutes.
After the veggies have been cooked, switch off the gas and remove the mixture from the flame.
Now add water, salt, oil, refined flour and instant yeast to the atta maker and create a dough for the pizza base.
Apply some butter and spread pizza sauce evenly on the pizza base.
Now top the pizza base with 2 tbsp. of mozzarella cheese, along with the veggies.
Heat the pan or griddle over medium-high heat and add a small amount of butter to it.
Place the pizza base on the hot pan and leave it to cook on a low flame for 6-7 minutes.
Now cover the pizza with a deep vessel turned upside down.
Keep checking the pizza in between. Once the cheese starts melting, remove the hot pizza from the pan and place it on a serving plate.
Cut your freshly made homemade pizza into desired pieces and sprinkle some oregano and chilli flakes and serve immediately.
One of the greatest things about knowing how to cook is that you can turn whatever is in your fridge into a wonderful and delicious meal for family and friends! In chilly winters, you don’t have to bother waiting around for delivery when you can make yourself a scrumptious homemade pizza in under an hour.
|
0.960314 |
Eric Benhamou, former chief executive of both 3Com and Palm, has just joined the board of Finjan and taken a minority stake in the web security company through his venture capital fund, Benhamou Global Ventures.
The fund, which is intended to give high-tech entrepreneurs a helping hand, was set up in 2004. This was shortly after Benhamou had stepped down from the chief executive position he held for two years from October 2001 to October 2003 at mobile device specialist Palm, where he remains as chairman.
But Benhamou also acts as chair for 3Com which he joined in 1987 when it acquired Bridge Communications – another networking company which he had co-founded six years earlier. He eventually took over the helm of 3Com in 1990 and within 10 years, he had grown 3Com from a $380m company to a $5.8bn one. He eventually moved from 3Com to helm Palm which he spun off from the networking giant in 2000 and floated on the Nasdaq stock exchange.
These are not the only strings to Benhamou's bow. Not only is he chairman of a third company, Cypress Semiconductor, but he is also on the board of a host of high-tech companies, including RealNetworks and GO Networks. He also serves on the boards of three academic institutions: the Insead School of Business in Fontainebleau, near Paris; the Ben Gurion University of the Negev, which awarded him one of his four honorary doctoral degrees; and Stanford University's School of Engineering in California. He gained a Master's degree in engineering and computer science from Stanford in 1977 and also teaches a six-week course in entrepreneurship there each year.
Other activities, meanwhile, include membership of the board of Washington DC-based think tank, the New America Foundation, and serving on the executive committee of high-tech lobby group, TechNet.
Not bad for a graduate who obtained his engineering degree 30 years ago from the Ecole Nationale Supérieure d'Arts et Métiers in Paris, after moving to France from Algeria during its war of independence in 1960 when he was five years old.
Why did you decide to invest in a relatively unknown company such as Finjan?
Finjan is just the right type of company for me. It focuses on network security and they're technical innovators who have discovered very sophisticated algorithms to assess the security of websites. So if you want to access a website, their technology can tell you whether it's okay or not and whether you run the risk of being infiltrated by malware such as Trojan Horses. Today, the internet is no longer built on the trust model it used to be, so it's important for people to have the technology to keep them in safe zones.
Also, Finjan is at the stage where it's clearly got good technology and products, it's got a good customer base and it's ramping up. But it's at a phase where I can help the chief executive to build this foundation into a successful company.
Another thing is that Finjan has a good management team and that's part of the package — you can't just take what you want and reject the rest. But I like the chief executive, who's called Asher Polani. He's not a newcomer — he's been in the industry for 20 years, but he hasn't yet really built up a company to a certain stage as chief executive. So there are a few things I can coach him on. For example, most of the company's experience has been based in Europe and Israel, but the big market is the US. So I hope to help in giving him a few pointers on how to build up a salesforce in the US.
After you established your venture capital fund, why did you decide to continue being chairman of three companies and on the board of others?
I wanted to have a balance in my professional life and I didn't think I would be satisfied just working with young start-ups. I still wanted to be exposed to large publicly traded companies. They've been through a momentous period over the last few years with deep governance reform and the way a company is run at board level now is very different to 10 years ago. So the job of chair is far more challenging than before and more interesting, which is why I choose to spend a third of my time on that type of activity.
What was your best and worst moment running large, publicly traded companies?
My best moment was probably just after I became chief executive of 3Com. I was brought in to do a turnaround in the early 1990s and I was betting the recovery of the company on a new router — NETbuilder — which combined Risc processors and Asics for the first time. I remember visiting the labs in 1992, having discussions with the engineers and realising that the product was going to be a winner. It was a moment of exhilaration, realising that it was going to work and get the company back on track.
As for the worst moment, I don't really have one because difficulties are all part of life. But a very intense moment was on 1 March, 2000 when Palm went public at the peak of the dot com bubble. We'd made the decision to IPO in May 1999 and by 1 March, we were ready to go. The Nasdaq was trading at over 5,000 and the IPO offering was over-subscribed several hundred times.
… what to do and so we were caught up in the frenzy. Palm went public on the day the market peaked, but it was more of a curse than a blessing. At some point, it was worth more than Ford and General Motors combined, and it was a lesson on how crazy the markets had become.
But I wished I'd known that it could only go down, that reason would prevail and the people who got hurt would probably be staff and investors. I didn't feel responsibility for the investors because that was their decision, but I felt bad for the staff because all their stock options on the day of the offering were worthless. For years after the IPO, we were a different company. We've now bounced back and become profitable again, but for the first two or three years afterwards, it was very difficult and it hit people's motivation.
How do you see the IT industry developing over the next few years?
In general, I feel that the last few years have been pretty good for the IT industry, although the general pace of innovation has slowed down compared with the 1990s — certainly in the networking industry, anyway. Instead I feel that there's quite a lot of business rather than technical innovation taking place. The next generation of web companies are attractive, not because they're breaking new technical ground, but because they're creating new business models and services. Areas such as social networking, for example, are figuring out new ways of leveraging the web.
The only pessimistic note, I feel, is that technical innovation is not as healthy today as it was a decade ago because we don't have the right policies to nurture it. We haven't maintained adequate funding for research in universities such as Stanford, but it's not just in the US. The education system is also continuing to have problems attracting enough interest in scientific disciplines, so there aren't enough smart, competent scientists and engineers. So other countries such as China and India are progressing faster, which is another sobering note.
I feel that many of our policies in the US are hurting young innovators. So much energy has been put into destroying stock options by regulators and accountants that we've lost a precious tool to motivate entrepreneurs. People are waging fights, but they're unaware of the unintended consequences. They're probably going after ridiculous compensation packages for chief executives, but it's hitting IT entrepreneurs. Sarbanes-Oxley is also very onerous for small companies. The motivation was to improve the trust and integrity of the business community, but it's not a very enlightened policy from a high-tech point of view.
You've done a tremendous amount in your life — what is it that drives you?
How do you manage to fit so much in? You must be good at time management.
The key is to ensure your portfolio is balanced. What you learn in one environment can be helpful in another. So a lot of case material from the course [at Stanford] is pulled directly from my professional experience, which makes it more interesting and means that my students learn things from situations that have taken place in real life. I also learn things from questions that the students ask and I haven't necessarily thought about before.
But many governance issues I have to deal with on company boards also help me to set new governance frameworks in a more informed way, for the organisations I work with through Benhamou Global Ventures. And that way, I learn what small companies do to be innovative, fast and nimble, which helps me to keep the large companies I work with honest regarding bureaucratic matters.
It can all be managed if you stay away from conflicts of interest. But I am very busy, although I do find I work better if I have a full schedule and I'm under stress. I don't play golf though, so I don't need a lot of leisure time!
Why did you set up a venture capital fund?
I created Benhamou Global Ventures a few years ago to give me an investment vehicle to work interactively with IT companies. It's a venture fund, but it doesn't have limited partners. I invest my own money. The reason I made that choice was to preserve personal flexibility so I organise it as a portfolio of activities.
When you invest in other companies, that responsibility has to be taken very seriously and the people investing generally want a very high return. When I make an investment, it's not just about that. It's also for the pleasure of working with entrepreneurs to learn a few things and discover new technology. Benefits like that don't necessarily accrue to limited partners so to invest in that way, I have to invest my own money.
But in general, I don't make passive investments. They're only active and so come with an opportunity to, at the very least, operate at board level and build close working relationships with the founder. I sometimes play the role of mentor to try and help the company become more successful because, at this stage of my life, I'm trying to both learn and teach at the same time. So being at the forefront of IT, I can discover new people and things. This means that I can learn, but I've also made enough mistakes that I can teach some of the pitfalls to a new generation.
|
0.967169 |
In 1969, the Apollo 12 mission retrieved a camera that had been left on the moon by an earlier spacecraft. When it was analysed, scientists discovered some bacteria – Streptococcus mitis – that are found in humans. While the origin of the bacteria was hotly debated, the incident highlighted the problem of cross-contamination – something that could invalidate the analysis of any samples brought back from future lunar missions.
Given the potential moon and other space missions coming up in the next decade, there will also be more of a need for better ways of handling and analysing the material.
One solution is to use lunar rovers to analyse the moon’s environment in situ, but a key challenge with this, according to Diego Urbina from Belgium-based company Space Applications Services, is the traditional huge expense and difficulty of transporting and using these vehicles.
Urbina works on a project called LUVMI, which is developing a lightweight, low-cost rover weighing about 45kg. In January, the team tested a 60kg prototype over two days in Noordwijk, the Netherlands, to see how it performed autonomously in navigating hazardous obstacles and moon-mimicking surfaces, from rugged terrain to sandy dunes.
‘That went really well. It proved that the concept worked, that it delivered the functions we wanted it to, and the rover’s drill worked correctly,’ said Urbina. ‘We’re hoping that by the mid-2020s, if all goes well, we could have LUVMI ready for the moon.’
The team hopes that LUVMI could be sent to look at the characteristics of the moon’s water ice – the existence of which was confirmed last year – as well as the chemicals in the crust and atmosphere, known as volatiles.
It could also explore ways to extract oxygen and water for use by humans and as fuel by vehicles and satellites, potentially aiding future missions. ‘We expect this to be a kind of exponential effect – that once you can extract resources, these enable many things that can help you extract more resources and expand into the solar system,’ said Urbina.
Urbina explained that the LUVMI rover is much smaller than traditional government-backed ones, but also larger than the more commercial miniature ones, such as those designed for Google’s Lunar XPRIZE before it was cancelled last year.
‘We’re at a nice sweet spot where it is small enough that your launch costs are not too high and big enough that you can deliver a nice suite of payloads and do something interesting,’ he said.
Rather than having six wheels like some other models, the LUVMI rover only has four, which Urbina says makes it more energy-effective while also keeping it highly mobile. This is enabled by an adjustable suspension system that allows the chassis to move up and down and more easily put sensors in contact with the lunar surface as it drives along.
Unlike traditional rovers that transfer samples to the lunar surface after drilling into rock, LUVMI will also aim to cut analysis time in half and reduce the risk of damaging the materials by measuring them in situ rather than returning them to Earth. It will do this by drilling into the ground with its sampler, which uses heat to release the volatiles to be measured.
But while analysing samples on the moon could yield a certain amount of information, there’s nothing like having part of the moon in front of you to look at on Earth, says Professor Sara Russell at the Natural History Museum in London, UK.
‘There’s lots of things that an in situ rover or orbital mission can do, but there are many experiments where you need to actually have the sample in your hands in a lab to do them,’ she said.
Prof. Russell said this is necessary for studies such as precise isotope measurements to determine the ages or chemical history of samples, or detailed examinations of organic material to assess the possibility of life elsewhere in the solar system.
She is part of a team that is developing a plan to build a dedicated pan-European facility to properly curate samples returned from space, protecting them from contamination and preserving them in pristine condition.
Her role, as leader of a project called EURO-CARES, was to bring together scientists and engineers from across Europe to plan a European Sample Curation Facility (ESCF) to meet the needs of sample return missions over the coming decades.
‘There’s a lot of commonality in what we need to do, and any European space mission will be an international venture that’s a collaboration of several different countries,’ explained Prof. Russell. ‘So it was important that we came together to share our expertise and create something that would be more Europe-wide.’
Apart from uniting their knowledge from previous space research, the researchers looked around curation facilities on other continents, such as those of NASA and Japan’s JAXA. ‘They were brilliant in sharing their lessons learnt,’ said Prof. Russell.
She said that any research facility should be modular, with space to add new buildings to protect samples coming from very different environments and avoid cross-contamination. ‘The rule of thumb is that samples should be kept in a similar condition to how they are on the surface of their body,’ she said.
According to Prof. Russell, the curation of lunar samples themselves is relatively straightforward because of the half-century of legacy knowledge gained from the Apollo moon missions – making starting with the moon ‘really good, easy and doable’.
But, she said, samples from bodies such as Mars are ‘a whole different kettle of fish’ compared with the sterile nature of the moon. There is a need to take account of the conditions of the Martian atmosphere and the possibility that bugs could be brought back to Earth. That gives them a ‘restricted’ status that involves a whole set of protocols for protection on Earth.
This could also necessitate, for example, some kind of tent that could be erected where a sample lands for initial work before being taken to its final curation facility.
The team estimates that building an ESCF for curating just unrestricted samples would cost between €10 million and €20 million, and over €100 million for one that analysed restricted samples too. Prof. Russell says this is a relatively small outlay given the overall cost of missions, with current asteroid sample return missions such as Hayabusa2 and OSIRIS-REx budgeted at hundreds of millions of euros and a Mars one likely to cost billions.
The team has not yet settled on a specific site and would need to seek funding to build it as a next step. Prof. Russell says, however, that work on an ESCF should begin at least seven years before samples are likely to be returned to Earth – and with missions possibly coming back from the moon and elsewhere within a 10-year time frame from now, this may heighten the urgency.
‘It’s brought home that we really need to start thinking about it now,’ said Prof. Russell. ‘A facility would open up a whole new area of science, some of which we don’t even know about yet.’
|
0.999992 |
The plan for the chapter Trigonometric Equations has been discussed.
Which is a good book for calculus?
PLAN FOR THE CHAPTER TRIGONOMETRIC EQUATIONS: The course will consist of 18 videos and will be completed in a week. Each video runs from around 8 to 15 minutes. All the concepts, along with multiple examples, will be covered to enhance clarity. Kindly click on the Enroll button so that you receive the notifications and there's no confusion regarding the videos. You can review the course too. A good review gives a lot of mental support to me for my hard work.
1. Solution of Trigonometric Equation
2. Questions on General Solution (Part I)
3. Questions on General Solution (Part II)
4. Types of Trigonometric Equations (Part I)
5. Types of Trigonometric Equations (Part II)
6. Types of Trigonometric Equations (Part III)
7. Important Points to Remember
8. Combinations of Various Types (Part I)
9. Combinations of Various Types (Part II)
10. Solving Simultaneous Equations (Part I)
11. Solving Simultaneous Equations (Part II)
12. Examples of Simultaneous Equations (Part III)
13. Examples of Simultaneous Equations (Part IV)
14. Trigonometric Inequations
15. Problems on Boundary Conditions
16. Important Questions (I)
17. Important Questions (II)
18. Important Questions (III)
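As a small taste of the first topic (an illustration added here, not part of the course plan itself), the standard general-solution formulas, where n is any integer and \alpha is a particular solution, are:

\[ \sin\theta = \sin\alpha \;\Rightarrow\; \theta = n\pi + (-1)^{n}\alpha \]
\[ \cos\theta = \cos\alpha \;\Rightarrow\; \theta = 2n\pi \pm \alpha \]
\[ \tan\theta = \tan\alpha \;\Rightarrow\; \theta = n\pi + \alpha \]

For example, \sin\theta = 1/2 has the particular solution \alpha = \pi/6, so the general solution is \theta = n\pi + (-1)^{n}\,\pi/6.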
|
0.944215 |
International Workers' Day is a celebration of the international labour movement that occurs on May Day, May 1, a traditional spring holiday in much of Europe. May 1 is a national holiday in more than 80 countries, and celebrated unofficially in many other countries. In some countries the public holiday is officially Labour Day, while in others the public holiday marks the traditional spring festival known as May Day. Other countries, such as the United States, celebrate a Labor Day on another date, usually with special significance to the labour movement in that country.
International Workers' Day is the commemoration of the May 4, 1886 Haymarket affair in Chicago. The police were trying to disperse a public assembly during a general strike for the eight-hour workday, when an unidentified person threw a bomb at them. The police reacted by firing on the workers, killing four demonstrators. "Reliable witnesses testified that all the pistol flashes came from the center of the street, where the police were standing, and none from the crowd. Moreover, initial newspaper reports made no mention of firing by civilians. A telegraph pole at the scene was filled with bullet holes, all coming from the direction of the police."
In many countries, the working classes sought to make May Day an official holiday, and their efforts largely succeeded. May Day has long been a focal point for demonstrations by various socialist, communist and anarchist groups. In Germany, May Day coincides with Walpurgisnacht. May Day has been an important official holiday in countries such as the People's Republic of China, North Korea, Cuba and the former Soviet Union. May Day celebrations typically feature elaborate popular and military parades in these countries.
In the United States and Canada, however, the official holiday for workers is Labor Day in September. This day was promoted by the Central Labor Union and the Knights of Labor, who organized the first parade in New York City. After the Haymarket Massacre, US President Grover Cleveland feared that commemorating Labor Day on May 1 could become an opportunity to commemorate the affair. Thus, in 1887, it was established as an official holiday in September to support the Labor Day that the Knights favored.
In 1955, the Catholic Church dedicated May 1 to "Saint Joseph The Worker". The Catholic Church considers Saint Joseph the patron saint of (among others) workers and craftsmen.
Far-right governments have traditionally sought to repress the message behind International Workers' Day, with fascist governments in Portugal, Italy, Germany and Spain abolishing the workers' holiday.
|
0.989611 |
Guidance on SCA/DOL Applicability

The agency I work for has a general rule of thumb when assigning acquisition packages received that contain both supplies (SUP) and services (SVC) elements. The way they decide which team a package gets assigned to is by looking at whether the SVC component is more than $2,500.00 (the SCA applicability threshold). If it is higher, then it goes to the SVC team, and if less, then it goes to the SUP team. I have done some research and provided it below. Thoughts? Am I close or way off the mark?

Resource for Determination: The Department of Labor (DOL) provides many resources to make these decisions. The best one I have found is located at the link below:

https://www.dol.gov/whd/foh/

This is the Field Operations Handbook (FOH), which was updated in 2016, to help when making these decisions in accordance with federal law and DOL guidance. The link below is for chapter 14, which provides this guidance for the SCA:

https://www.dol.gov/whd/FOH/FOH_Ch14.pdf

At section 14a01 it states: "The SCA (41 USC 351, et seq.) applies to every contract entered into by the U.S. or the District of Columbia (DC), the principal purpose of which is to furnish services in the U.S. through the use of service employees. Contractors performing on such federal service contracts in excess of $2,500.00 must observe minimum monetary wage and safety and health standards and maintain certain records. Service employees on covered contracts in excess of $2,500.00 must be paid not less than the monetary wages and fringe benefits contained in wage determinations issued by the DOL for the contract work. Such wage and fringe benefit determinations may reflect what has been determined to be prevailing in the locality or may reflect the wage rates and fringe benefits contained in the predecessor contractor's collective bargaining agreement (CBA), if any, pursuant to section 4(c) of the SCA. See 29 CFR 4." (Emphasis Added)

The definition of "principal" (Merriam-Webster) is: most important, consequential, or influential. The definition of "purpose" (Merriam-Webster) is: something set up as an object or end to be attained. From these definitions we can reasonably assume that "the principal purpose" means "the most important end to be attained."

At section 14d13 the FOH speaks to the applicability of the SCA when services and other items are to be obtained under a single contract: "The SCA applies only where a contract as a whole is principally for the furnishing of services, as opposed to line items for specific work in a contract. The SCA reference to bid specification refers to the advertised specifications in a solicitation for bids rather than a separate line item or work requirement within a contract. See 29 CFR 4.132." (Emphasis Added)

The CFR reference is below: § 4.132 Services and other items to be furnished under a single contract. "If the principal purpose of a contract is to furnish services through the use of service employees within the meaning of the Act, the contract to furnish such services is not removed from the Act's coverage merely because, as a matter of convenience in procurement, the service specifications are combined in a single contract document with specifications for the procurement of different or unrelated items. In such case, the Act would apply to service specifications but would not apply to any specifications subject to the Walsh-Healey Act or to the Davis-Bacon Act. With respect to contracts which contain separate specifications for the furnishing of services and construction activity, see § 4.116(c)."

Section 14d13 and 29 CFR 4.132 clearly state that the SCA's applicability for a contract that contains both services and supplies is based on the "principal purpose," or "the most important end to be attained." Dollar value is not consulted for applicability.
I'm looking at an RFP. The Gov't wants to buy labor on an IDIQ basis anywhere in the US. One CLIN equals one labor category. The Gov't is seeking a single hourly rate per CLIN. A few wage determinations have been included in the RFP, but nowhere near all of them for the entire country. To help focus, let's say Manhattan is not covered by any of the wage determinations in the RFP. After award of the IDIQ, how would the SCA rules apply if the Gov't wants to buy labor in Manhattan?
This is my first post, but I've read with interest a number of threads discussing similar issues. I'm excited to hear your feedback on this. I understand that 52.222-43 requires the contractor to warrant that wage escalations in the proposal do not include an allowance for adjustments that would automatically be made under -43 to compensate for revised Wage Determinations.

1. My general question is: under this FAR clause, when would I be allowed to escalate wages, and when would I be allowed to submit a proposal with flat wages? I'm thinking the answer turns on whether or not the wages proposed are at the minimum allowed under the Wage Determinations. For example, if I submit a proposal where my fully burdened wages are based on minimum wages under WDs, then I could submit flat rates for option years, and rely on the automatic adjustment scheme in -43 to increase wages if/as they rise due to DOL revisions to WDs. If I submit a proposal where filling a position will require a premium over the minimum allowed under DOL WDs, then I would need to escalate wages if I anticipate it becoming more expensive to fill the position each year. If I do not escalate wages in my proposal, and my base year wages are above any new revisions to the WDs, then -43 doesn't provide me with an automatic adjustment each year. Does this conform to your views on the way -43 operates?

A few follow-up questions.

2a. If I submit flat wages for the option years, and rely on -43 to compensate for rising labor costs, could an agency have any basis for rejecting my proposal on "realism"? Is it unrealistic to rely on revisions to WDs for price adjustments necessary to fill positions? Maybe I need to note that reliance in the bid to be clear?

2b. If the answer to 2a is "yes", then do you interpret -43 differently? For example, would it be allowable to base my fully burdened rates on the minimum wages allowed under DOL WDs, and still escalate wages, knowing that since I'm already at minimum wages, they would need to be revised each year? Such escalations would seem to include an allowance for the adjustments provided for in -43, but maybe we could warrant that our escalations aren't based on expected changes in the WDs, per se, but instead promise that the escalations are based on "realism" or some other need to escalate wages, like general retention purposes, performance incentives, etc.? It seems to me that even if I have alternative explanations for my wage escalations, if I'm already working from the minimum wages allowed under WDs, my escalations are necessarily including some allowance for revisions to the WDs, and would therefore contravene the purpose of -43, or perhaps violate it altogether.

3. If I am above the minimum WDs amount, and I do not escalate, I imagine there could be a basis for rejecting my proposal based on realism. But if it weren't rejected for realism, would I have an alternative means for adjusting wages annually on renewals? Would equitable or economic price adjustments only be available if specifically allowed in the contract, or in your experience, does this depend on the Contracting Officer, to be determined on a case-by-case basis?

Thank you in advance for your contributions. I've read a number of different interpretations of -43. Some departments seem to believe it doesn't allow for any wage escalations at all (although it clearly does) and some departments seem to think it's unrealistic to not escalate wages (even though -43 provides an automatic scheme for adjusting wages).
Good afternoon, Mr. Edwards: First, let me thank you in advance for your assistance. This site is wonderful for getting another perspective on all things Federal Government contracting. Here is my question:

Recently, the Department of Labor has been conducting an audit of our Service Contract Act compliance to ensure we have been following appropriate procedures (i.e., paying the correct wage and health and welfare). Last week, we received an amendment to a Blanket Purchase Agreement (BPA) that retroactively incorporates the Wage Determination schedules for 2011 and 2012. What they are attempting to do is force our customer to have us sign the amendment so that we would be responsible for the wage differences and health and welfare from the inception of the contract. I have done a quick calculation and, for the number of hours that we have put into the BPA calls, it would add up to a very large number. We would never have priced the contract as we did if this was an SCA contract, and we stand to lose a significant amount of dollars if we were to accept this retroactively. Can you please tell me what recourse we may have? Thank you very much!
Good day! My question is about selecting the correct labor law to apply to a subcontractor. If the subcontractor is performing a "service" on a DBA construction site and employs no laborers, mechanics, apprentices, trainees or helpers, is the work subject to the Service Contract Act or the Davis-Bacon Act? Anxiously awaiting your response!
|
0.944867 |
Name two of the three large gulfs that are close to the United States?
The Gulf of Mexico, the Gulf of Alaska and the Gulf of California are three large gulfs that are close to the United States.
|
0.99942 |
What would a Megaton Nuke do to MCU Thor?
Badly hurt but he survives.
Basically what condition he was in after the full force of the star, but this time he aint waking up.
Survives, but is badly damaged.
@tenguswordsman: @jashro44: You guys think Thor would be able to get up and walk? Or would he be barely clinging on to life not even able to get up?
@xzone: I think he would be knocked out. I think saying he would be clinging to life is an exaggeration.
@xzone: Barely able to stand up.
The star is a lot worse than a Nuke.
Definitely near death to dead scaling from his star feat!
Alright guys come on, it's only a single megaton. Thor survives with a few burnt pieces of clothing. Not sure how the lightning cloak helps him though.
He would die horribly. MCU characters are too weak.
He would be badly hurt but he is not dying.
He survives but barely retains consciousness, just enough to call SB and heal up.
You won't get a consensus in this forum.
He should tank it with no real damage IMO.
Probably gets killed; lesser heat could burn him up badly to near death. Heat that is 15-20 times hotter than that would probably kill him. Not to mention the blunt force of a megaton nuke is immense. It can create billions of pascals of atmospheric pressure. No way Thor is tanking that.
@supermanforever: star > nuke; also, the heat from a nuke lasts for less than a millisecond.
If it was done by the MCU, he would be injured. Not life-threatening, but still injured.
@supermanforever: The heat from a star is worse. Don't just look at 1 side of the equation.
1- Can the nuke destroy a city like Sokovia?
2- Is the nuke more powerful than the full force of a star?
Survives, but is heavily injured.
It isn't, though. Nuke plasma heat is 150 million kelvin, 15-20 times hotter than the core of our sun.
False, it stays up to 10 seconds.
Tests on a 1-megaton thermonuclear explosion have shown that the fireball lasts about 10 seconds and cools down after that.
Also, even if we ignored that and said it lasts a millisecond, there is still no escape, unless maybe you can run at the speed of light.
@supermanforever: you're wrong on both counts: it doesn't last 10 seconds, and its temp peaks and doesn't last long at all. The star feat lasted much longer, dealing out a massive amount more than the nuke. You can nuke the Earth a hundred times and nothing happens; that same beam that Thor tanked would kill all life.
I don't see how people can say he dies after the Sokovia and Star feat. I mean that's actually pretty silly.
But that fireball is not the same temperature.
Uru can tank a nuke level hit in the form of Sokovia. Thor survived in a beam that deformed Uru. Thus, Thor can AT LEAST survive a nuke.
@man_of_miracles: this forum has turned into an echo chamber for people with preconceived opinions about everything. They are incapable of giving an inch to an opposing argument and therefore make it their goal to claim characters are at one particular level. It doesn't matter what they do... if any other character had taken that star, people would be saying they tank the nuke. But... it's Thor, so for reasons I'm unable to understand it automatically makes him pretty much less than the sum of his parts no matter what he does.
Badly hurt but still conscious, and recovers in a few minutes.
These pretty much explain it, not with opinions or subjective stuff, but with actual facts and feats that we saw in the movies.
Nobody knows how hot that beam was. On the other hand we have real life numbers regarding the heat, pressure wave and radiation from a nuke. He gets vaporized in the very first second of the explosion which encompasses temperatures greater than 50 million degrees.
People act like the whole beam of energy was absorbed by Thor's body before he was cooked alive, whereas he only stood in the beam's path and only part of the beam hit him for like 15-20 seconds.
|
0.999991 |
Please read the passage silently for 20 seconds.
I'm going to ask you five questions, one by one. Are you ready?
Mr./Mrs. Y, please turn over the card.
|
0.929408 |
The Palestinian vocalization, Palestinian pointing, Palestinian niqqud or Eretz Israeli vocalization (Hebrew: ניקוד ארץ ישראל Niqqud Eretz Israel) is an extinct system of diacritics (niqqud) devised by the Masoretes of Jerusalem to add to the consonantal text of the Hebrew Bible to indicate vowel quality, reflecting the Hebrew of Jerusalem. The Palestinian system is no longer in use, having been supplanted by the Tiberian vocalization system.
The Palestinian vocalization reflects the Hebrew of Palestine of at least the 7th century. A common view among scholars is that the Palestinian system preceded the Tiberian system, but later came under the latter's influence and became more similar to the Tiberian tradition of the school of Aaron ben Moses ben Asher. All known examples of the Palestinian vocalization come from the Cairo Geniza, discovered at the end of the 19th century, although scholars had already known of the existence of a "Palestinian pointing" from the Vitry Machzor. In particular, Palestinian piyyutim generally make up the most ancient of the texts found, the earliest of which date to the 8th or 9th centuries and predate most of the known Palestinian biblical fragments.
As in the Babylonian vocalization, only the most important vowels are indicated. The Palestinian vocalization along with the Babylonian vocalization are known as the superlinear vocalizations because they place the vowel graphemes above the consonant letters, rather than both above and below as in the Tiberian system.
Even so, most Palestinian manuscripts show interchanges between qamatz and patah, and between tzere and segol. Shva is marked in multiple ways.
Some manuscripts are vocalized with the Tiberian graphemes used in a manner closer to the Palestinian system. The most widely accepted term for this vocalization system is the Palestino-Tiberian vocalization. This system originated in the east, most likely in Palestine. It spread to central Europe by the middle of the 12th century in modified form, often used by Ashkenazi scribes due to its greater affinity with old Ashkenazi Hebrew than the Tiberian system. For a period of time both were used in biblical and liturgical texts, but by the middle of the 14th century it had ceased being used in favor of the Tiberian vocalization.
|
0.977796 |
Many people fear that potatoes will make them fat or cause other health problems. Are potatoes really such villains? Are they any better or worse than bread, rice, or other starchy grains?
Potatoes have a bad reputation, in part, because they have a high glycemic index (GI), meaning that their carbohydrates are quickly broken down into sugar, causing blood sugar and insulin levels to rise rapidly. This, in turn, increases fat storage and the risk of obesity and diabetes—at least in theory.
A few studies have implicated potatoes in weight gain and diabetes. For instance, a 2009 study in the Journal of the American Dietetic Association found a link between potato consumption and waist circumference in women (but not men). Earlier data from the Nurses' Health Study, in the American Journal of Clinical Nutrition in 2006, linked potato intake and the risk of type 2 diabetes in obese women—especially when potatoes were eaten in place of whole grains.
But there are plenty of caveats to consider before you drop the potato. For one, not all studies support the idea that high GI diets—let alone potatoes, in particular—have such adverse effects. Several have found no relationship between high GI-diets and body fat or diabetes. In any case, the GI of potatoes (and other foods) depends on many factors, including how they're cooked and what they're eaten with. And not all varieties have such a high GI (russet potatoes do, for example, but red potatoes rank moderately).
Moreover, it's hard to separate the effects of potatoes from those of other foods in a typical Western diet. That is, the undesirable associations seen in some studies could be due to the meat, refined grains, sugars, and trans fats (as in French fries) in a "meat and potatoes" diet, rather than the potatoes. People also vary in their responses to carbohydrates, and some research suggests that potatoes may be more problematic in overweight and/or sedentary people, who are more likely to have insulin resistance.
On the flip side, some research suggests that potatoes may help with weight control. They rate high in satiety, meaning they help fill you up, so you may eat less. Potatoes also contain proteinase inhibitors, which may suppress appetite. And preliminary experimental work suggests that potato extracts may improve insulin sensitivity and decrease diabetes risk due to their polyphenols. There's even a weight-loss supplement that contains a potato extract, which is claimed to act as an appetite suppressant, though there's no evidence it works. More research is needed, certainly, to confirm any weight-loss potential of potato extracts.
In actuality, potatoes are relatively low in calories—just 130 to 140 in a medium plain baked potato (5 ounces after cooking). That's more per ounce than non-starchy vegetables, but fewer than the calories in bread and rice. The problem is that potatoes are often prepared and served with lots of high-calorie ingredients. A 5-ounce potato with two tablespoons of butter and three tablespoons of sour cream has 415 calories (and 30 grams of fat). A 5-ounce portion of hash browns, cooked in oil or butter, has 375 calories, while 5 ounces of fast-food French fries has 435 calories. Ounce for ounce, potato chips have more than five times as many calories as a plain potato.
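To see how those numbers add up, here is a minimal Python sketch; the per-tablespoon figures are typical nutrition-table approximations assumed for illustration, not values taken from this article or any specific database:

# Rough calorie arithmetic for a loaded baked potato.
# Per-ingredient values are common approximations (assumed, not authoritative).
PLAIN_POTATO_5OZ = 140    # medium plain baked potato, 5 oz cooked
BUTTER_PER_TBSP = 102     # 1 tbsp butter
SOUR_CREAM_PER_TBSP = 24  # 1 tbsp regular sour cream

total = PLAIN_POTATO_5OZ + 2 * BUTTER_PER_TBSP + 3 * SOUR_CREAM_PER_TBSP
print(total)  # 416 -- in line with the ~415-calorie figure above

The point of the arithmetic: the toppings contribute roughly twice as many calories as the potato itself.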
Potatoes are also a good source of fiber (leave the skin on), potassium (more than bananas), and vitamin C, and they provide some protein, iron, B vitamins (notably folate), and magnesium, along with other potentially beneficial plant compounds. The more colorful the potato, the higher the antioxidants.
Final thoughts: There's plenty of room for potatoes in a healthy diet that's rich in vegetables, fruits, legumes, and whole grains. Eat them in moderation and go easy on the oil, cheese, and cream when preparing them. By the way, sweet potatoes are technically unrelated to potatoes, but are a nutritious vegetable that provides lots of beta carotene and other carotenoids. You'd do well serving them up in place of white potatoes on occasion.
|
0.994453 |
Using capitalism to build socialism?
Socialism is inconceivable without large-scale capitalist engineering based on the latest discoveries of modern science. It is inconceivable without planned state organisation which keeps tens of millions of people to the strictest observance of a unified standard in production and distribution. We Marxists have always spoken of this, and it is not worth while wasting two seconds talking to people who do not understand even this (anarchists and a good half of the Left Socialist-Revolutionaries) (Lenin, Collected Works, vol. 32, p. 334, 1921).
|
0.999999 |
If you’re one of the millions of people around the world who’ve set out to be the very best like no one ever was (heh) then it’s likely that you’re facing a bit of trouble getting the hang of Pokémon Go since the game isn’t really big on explaining its basics to newcomers. In this article, we’ll be giving you some excellent tips that are sure to improve your overall skill level at the game.
1: Don’t waste stardust early on.
Although you might not realize it during the first few levels of gameplay, stardust is actually a pretty rare resource that will come in quite handy later on when you're trying to power up your higher-tier Pokémon. Many players make the mistake of wasting it all on whatever low-tier Pokémon they have at their disposal, which is why they don't have any left when they actually need it. You should never use stardust on Pokémon that you acquire during the first few levels of the game.
2: Evolve the Pokémon with the highest CP.
Say you have a bunch of Pidgeys and the resources required to evolve them. You should always check which one of them has the highest CP (Combat Power) and evolve that specific one. A Pokémon's CP increases by a large multiplier when it evolves, so it's crucial to evolve the Pokémon with the highest base CP and transfer all the other similar ones to the Professor.
3: Don’t waste the Incense item by idling.
Incense is an insanely useful item that increases the spawn rate of Pokémon in your immediate radius. People think they can just sit or lie down wherever they are and use Incense to catch a few Pokémon, but doing so actually wastes the item. When you're idling, Incense will only spawn a total of 5 or 6 Pokémon around you, but if you're actively walking around with the item active, you'll spawn a whole lot more. You can catch up to 20 Pokémon if you make efficient use of this item, so never waste it by activating it and then sitting in the same area.
4: Learn how to throw curveballs.
Here's how to throw curveballs: tap and hold the ball and keep spinning it around until you see it start to glimmer. Once the ball starts to glimmer, it's ready to be thrown as a curveball. The catch is that it'll go right if you throw it left, left if you throw it right, and in either direction if you throw it toward the middle of the screen. Landing this throw can be tricky, but if you successfully connect it with a Pokémon, the chances of capturing it increase very significantly.
5: Catch every single Pokémon you see.
As long as you have the balls necessary to do so, you should pretty much never pass up on any Pokémon that pops up for you in the wild. It doesn’t matter how many of the same Pokémon you’ve already captured because capturing a new one will give you experience regardless. Therefore, if you want to keep leveling at a steady pace it’s best to catch pretty much every single Pokémon you encounter.
All said and done, as long as you keep the aforementioned tips in mind while playing Pokémon Go, we can say with certainty that you’ll be able to play much better.
In large collections of tumor samples, it has been observed that sets of genes that are commonly involved in the same cancer pathways tend not to occur mutated together in the same patient. Such gene sets form mutually exclusive patterns of gene alterations in cancer genomic data. Computational approaches that detect mutually exclusive gene sets rank and test candidate alteration patterns by rewarding the number of samples the pattern covers and by punishing its impurity, i.e., additional alterations that violate strict mutual exclusivity. However, the extant approaches do not account for possible observation errors. In practice, false negatives and especially false positives can severely bias evaluation and ranking of alteration patterns. To address these limitations, we develop a fully probabilistic, generative model of mutual exclusivity, explicitly taking coverage, impurity, as well as error rates into account, and devise efficient algorithms for parameter estimation and pattern ranking. Based on this model, we derive a statistical test of mutual exclusivity by comparing its likelihood to the null model that assumes independent gene alterations. Using extensive simulations, the new test is shown to be more powerful than a permutation test applied previously. When applied to detect mutual exclusivity patterns in glioblastoma and in pan-cancer data from twelve tumor types, we identify several significant patterns that are biologically relevant, most of which would not be detected by previous approaches. Our statistical modeling framework of mutual exclusivity provides increased flexibility and power to detect cancer pathways from genomic alteration data in the presence of noise. A summary of this paper appears in the proceedings of the RECOMB 2014 conference, April 2–5.
Tumor DNA carries multiple alterations, including somatic point mutations, amplifications, and deletions. It is challenging to identify the disease-causing alterations from the plethora of random ones, and to delineate their functional relations and involvement in common pathways. One solution for this task is inspired by the observation that genes from the same cancer pathway tend not to be altered together in each patient, and thus form patterns of mutually exclusive alterations across patients. Mutual exclusivity may arise, because alteration of only one pathway component is sufficient to deregulate the entire process. Detecting such patterns is an important step in de novo identification of cancerous pathways and potential treatment targets. However, the task is complicated by errors in the data, due to measurement noise, false mutation calls and their misinterpretation. Here, we propose a fully probabilistic, generative model of mutually exclusive patterns accounting for observation errors, with interpretable parameters that allow proper evaluation of patterns, free of error bias. Within our statistical framework, we develop efficient algorithms for parameter estimation and pattern ranking, together with a statistical test for mutual exclusivity, providing more flexibility and power than procedures applied previously.
Copyright: © 2014 Szczurek, Beerenwinkel. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: ES was supported by the ETH Zurich Postdoctoral Fellowship Program and the Marie Curie Actions for People COFUND program (grant No. FEL-13 12-1). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
This article is associated with RECOMB 2014.
Recent years in cancer research have been characterized by both the accumulation of data and a growing awareness of its overwhelming complexity. While consortia like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) generate multidimensional profiles of genomic changes in various cancer types, computational approaches struggle to pinpoint the underlying mechanisms. The most basic yet already challenging task is to identify cancer drivers, genomic events that are causal for disease progression. A second, more general task is to elucidate sets of functionally related drivers, such as mutations of genes involved in a common oncogenic pathway.
One systematic approach to address the latter task is to search for mutually exclusive patterns in cancer genomic data. Typically, the data is collected for a large number of tumor samples, and records presence or absence of genomic alterations, such as somatic point mutations, amplifications, or deletions of genes. In mutually exclusive patterns, the alterations tend not to occur together in the same patient. These patterns are commonly characterized by their coverage and impurity. Coverage is defined as the number of patient samples in which at least one alteration occurred, while impurity refers to non-exclusive, additional alterations (referred to as non-exclusivity or coverage overlap in previous studies). Such mutually exclusive alterations have frequently been observed in cancer data and have been associated with functional pathways or synthetic lethality. Therefore, mutually exclusive patterns are important for a basic understanding of cancer progression and may suggest genes for targeted treatment.
Previous studies identified mutually exclusive patterns either via integrated analysis of known cellular interactions and genomic alteration data, or de novo, by an online learning approach or by maximizing the mutual exclusivity weight introduced by Vandin and colleagues. The weight increases with coverage and decreases with coverage overlap, and proved successful for pattern ranking and cancer pathway identification.
To our knowledge, there exists no approach that explicitly models the generative process behind mutual exclusivity patterns. In the absence of a statistical model of the data, the definition of the weight, although intuitively reasonable, remains arbitrary. In previous studies, the weight also served as the statistic for a column-wise permutation test that assesses the significance of patterns. We show that the power of this test decreases with the number of genes, likely because the weight does not scale with gene number, and the same impurity level affects it more when there are more genes in the pattern. Most importantly, none of the existing approaches deal with the problem of errors in the data. Despite advanced methodologies on both the experimental and the computational side, records of genomic alterations may contain false positives and false negatives, due to measurement noise as well as uncertainty in mutation calling and interpretation. As illustrated in Figure S1, ignoring errors in the data, particularly false positives, may lead to wrong ranking of patterns.
Here, we develop two alternative models for cancer alteration data (Figure 1). One is a probabilistic, generative model of mutually exclusive patterns in the data. The model contains coverage as well as impurity as parameters, together with false positive and false negative rates. We show analytically that the model parameters are identifiable, and propose how they can be estimated and used for pattern evaluation. The second is a null model assuming independent alterations of genes. Via comparison of the mutual exclusivity model to the null model, our approach allows statistical testing for mutual exclusivity, both in the presence and absence of errors.
Figure 1. Principles of the mutual exclusivity model and test.
A The generative process underlying mutual exclusivity patterns. The matrices show alteration status (shaded for presence and white for absence of alteration) for genes (columns) in patients (rows) in consecutive steps of the process, each dependent on the parameters indicated in brackets. Blue arrows point at patients that are covered by the pattern with probability γ. Orange arrows point at impure alterations, added with probability δ. Yellow and green arrows show false positives (added with rate ε) and false negatives (rate ν), respectively. B Graphical representation of the mutual exclusivity model. Large circles: random variables, with observed variables shaded. Small black circles indicate model parameters, and are connected to their corresponding variables with edges. Arrowed edges show dependencies between variables. The rectangle plate indicates a set of identically distributed variables or a set of their parameters (with gene indices g = 1, …, k). C The independence model.
First, we evaluate the performance of our approach in the case when, as is done in the literature, the data is assumed to record no false positive or false negative alterations. On simulated patterns, our mutual exclusivity test proves more powerful than the weight-based permutation test. In glioblastoma multiforme data analyzed by the previous approaches, we find novel, biologically relevant patterns which are not detected by the permutation test. Next, we examine the bias introduced in pattern ranking by ignoring errors, especially false positives, and show that when the error rates are known, our approach accurately estimates the true coverage and impurity and ranks the patterns accordingly. Finally, we analyze the practical limits of accurate parameter estimation in the most difficult, but also most realistic, case where the data contains errors occurring at unknown rates. We apply our approach to a large pan-cancer collection of 3299 tumor samples from twelve tumor types, for which the model accounting for the presence of false positives can be estimated accurately. This model is shown to be more flexible than the model assuming no errors in the data, and is applied to identify several universal, significant mutual exclusivity patterns which would not be found by the previous methods.
A mutual exclusivity pattern can be detected in a given cancer alteration dataset with k columns that correspond to a subset of measured genes and n rows (observations) that correspond to patients whose tumor samples were collected (with k ≪ n). For each patient and gene, the dataset records a binary alteration status of the gene observed in the patient, with 0 standing for absence and 1 for presence of alteration.
We assume that the mutual exclusivity patterns are the result of the following generative process (Figure 1A). First, with a certain probability, denoted γ and called coverage, the patients who are covered by the pattern are chosen. Each row corresponding to a covered patient is hit by an exclusive alteration, meaning that exactly one gene is assigned value 1 in this row. Here, we assume that all genes have equal probability to be exclusively mutated. Next, in the same row, with probability δ, any other gene can be mutated in addition. Those added alterations are interpreted as impurity in the mutual exclusivity pattern, hence δ is referred to as the impurity parameter. The generative process described up to this point coincides with the data simulation procedure used in previous studies. However, the corresponding generative model was not used for statistical inference. This prevalent view of the generative process ignores the possible occurrence of errors. Realistically, the observed alteration data result from adding false positives (with rate ε) and false negatives (with rate ν) to the true, exclusive, and impure alterations.
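To make the generative process concrete, here is a minimal simulation sketch in Python. It follows the four steps above; the function name and the NumPy implementation are illustrative rather than the authors' code, and the arguments gamma, delta, eps and nu correspond to the coverage, impurity, false positive and false negative rates as denoted in this section.

```python
import numpy as np

def simulate_me_pattern(n, k, gamma, delta, eps, nu, seed=None):
    """Simulate an n-by-k binary alteration matrix from the generative
    process: choose covered patients (coverage gamma), give each covered
    row one exclusive alteration, add impure alterations (impurity delta),
    then add observation errors (false positives eps, false negatives nu)."""
    rng = np.random.default_rng(seed)
    Y = np.zeros((n, k), dtype=int)               # true alteration status
    covered = rng.random(n) < gamma               # patients covered by the pattern
    exclusive = rng.integers(0, k, size=n)        # uniformly chosen exclusive gene
    Y[covered, exclusive[covered]] = 1
    impure = (rng.random((n, k)) < delta) & covered[:, None]
    impure[np.arange(n), exclusive] = False       # keep the exclusive gene intact
    Y |= impure.astype(int)
    flips = rng.random((n, k))                    # apply observation errors
    X = np.where(Y == 1, (flips >= nu).astype(int), (flips < eps).astype(int))
    return X

X = simulate_me_pattern(n=1000, k=5, gamma=0.5, delta=0.05, eps=0.01, nu=0.0, seed=7)
```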
Proposition 1 For gene set sizes k satisfying the condition stated in Text S1, the parameters in the mutual exclusivity model are identifiable.
Encouraged by this result, we propose an expectation maximization algorithm (Methods) to estimate the maximum likelihood parameter values and evaluate its performance in practice (Results).
In the case when the dataset does not carry the mutual exclusivity pattern, we assume that the corresponding genes are mutated independently with their individual alteration frequencies. This is modeled with a set of independent, observed binary random variables (X_1, …, X_k), satisfying P(X_g = 1) = p_g for each gene g (referred to as the independence model; Text S1). We devise a mutual exclusivity test (ME test for short), which compares the likelihood under the mutual exclusivity model to the likelihood under the independence model. Since the models are not nested, we use Vuong's closeness test to compute the p-values (Methods). A small p-value means that the mutual exclusivity model is closer (with respect to Kullback–Leibler divergence) to the true model from which the data was generated than the independence model. The test statistic accounts for the difference in degrees of freedom between the models.
We evaluate our mutual exclusivity model and statistical test in three different scenarios. First, we make an assumption prevalent in the literature, namely that the data is generated without errors. In the second scenario, we assume that the data contains errors, and the error rates are given. Finally, we consider the scenario where the data is generated with errors, and the error rates are unknown.
First, we evaluate the performance of our mutual exclusivity model on simulated data, assuming that the data is free of errors. In this case, the model is reduced, since it is parametrized only by the coverage γ and the impurity δ, and the observed variables are equated with the true hidden variables. We have derived closed-form expressions for the maximum likelihood parameter values (Methods), providing reliable parameter estimates already for datasets of sample size 200 (Table S1). We simulated datasets from the reduced mutual exclusivity model for increasing gene set sizes k from 3 to 10, n = 1000 patients, and combinations of parameter values γ and δ, with 20 datasets generated per parameter setting (example in Figure 2A). For each dataset, we assessed the significance of mutual exclusivity using the proposed ME test (Methods). For comparison, we obtained empirical p-values from the weight-based permutation test, which permutes individual columns in the dataset 1000 times and reports the number of times a permuted dataset had a higher weight than the original.
Figure 2. Our mutual exclusivity (ME) test is more powerful than a permutation test, which was applied previously.
A Example of a simulated mutual exclusivity pattern. B The ME test yields smaller p-values with a growing number of genes in the patterns. By contrast, the permutation test (with 1000 column-wise permutations) is less powerful for larger patterns. C Neither test supports mutual exclusivity in data generated from the independence model.
For datasets with only three genes and low coverage, both our ME test and the permutation test do not always detect mutual exclusivity (Figure 2B). As the gene set size increases, in contrast to the permutation test, the ME test becomes more powerful. With ten genes, our test supports mutual exclusivity for all datasets, whereas the permutation test does not, even for a large fraction of datasets with high coverage. As an example, for the mutual exclusivity pattern in Figure 2A the ME test p-value is practically zero, while the permutation test p-value is 0.15. We speculate that the reason for the decreased power of the permutation test is the weight itself. At the same coverage and impurity, large gene sets get less significant weights than small gene sets, since the weight decreases drastically with the addition of impure alterations in each row, and this addition is more likely for longer rows. In addition, with increasing gene set size the ME test p-values tend to decrease. This suggests that the test will remain powerful also after multiple hypothesis testing correction, which is expected to be more restrictive for larger set sizes.
Both tests correctly do not support mutual exclusivity for datasets generated from the independence model (Figure 2C); 20 datasets were simulated per maximum individual frequency (each gene's frequency was drawn uniformly at random from an interval bounded above by the indicated maximum). The same, correct behavior was observed when the independent frequencies were drawn from a distribution observed in real cancer data (Figure S2). Figures 2B,C show that the ME test, without computationally expensive permutations, yields ranges of p-values that are amenable to multiple testing corrections. In summary, the ME test is as powerful as the permutation test for small gene sets and more powerful for larger ones, and can be applied efficiently in practice.
We further use our model to identify significant mutual exclusivity patterns with high coverage and low impurity in glioblastoma multiforme samples from The Cancer Genome Atlas (TCGA; extended collection, originally published with fewer samples). The data were organized in a binary matrix combining point mutations and copy number variants for 236 patients and 83 genes. The genes and their alterations were selected to represent significant players and events in disease progression (Methods).
To obtain a comprehensive picture of the types of patterns that can be found in this dataset, we restricted the gene set size to four and evaluated all 1,837,620 possible gene subsets of this size. Figure 3A presents the pattern with the largest weight, but also large imbalance: in that pattern, almost the entire coverage comes from alterations of a single gene, EGFR. With our approach, the quality of each pattern can be assessed with the estimated coverage and impurity parameters, while its significance is given by the p-value from the ME test. In the standard understanding, a high quality pattern has high coverage and low impurity. For the GBM dataset we obtained 11 patterns that were significant after Benjamini–Hochberg adjustment of the ME p-values, with estimated coverage larger than 0.3 and impurity lower than 0.2 (Table S2). Figure 3B–D presents the three of those patterns with the lowest impurity. Of the genes included in those top sets, NF1, PIK3C2G, PIK3R1 and PIK3CA play roles in the interconnected canonical glioblastoma signaling, although they are not found directly grouped into the individual pathways identified by the original publication. Notably, the TRAT1 protein is a known interaction partner of PIK3R1.
Figure 3. Top mutual exclusivity patterns identified in cancer data.
A–D Patterns in glioblastoma. A Pattern for the gene set with the highest weight (scoring high coverage and low impurity, as applied in previous studies), with adjusted permutation test p-value 0. B–D Examples of significant, high quality patterns identified using the reduced mutual exclusivity model (assuming no errors), with estimated coverage larger than 0.3 and impurity lower than 0.2. E–H Patterns in pan-cancer data. E Pattern for the gene set with the highest weight. F–H Examples of significant, high quality patterns identified using the mutual exclusivity model that accounts for false positives, with estimated coverage larger than 0.3 and impurity lower than 0.2.
Table 1 summarizes the statistics for all presented patterns, underlining the differences between the ME and permutation tests. With the explicit account of coverage and impurity as parameters in the model, our approach gives control over which important features of the patterns should be used to prioritize the significant patterns of interest. In contrast to the permutation test, the ME test is specifically designed to prefer balanced patterns. Consequently, patterns identified using our ME approach have a median imbalance over three times lower than that of the top-weight patterns with significant adjusted permutation test p-values (Figure S3). To assess the imbalance of a given gene set, we calculated the ratio of the number of alterations of the gene with the largest individual frequency in the set to the total number of patients covered by the pattern.
Table 1. Summary of top patterns identified for the glioblastoma dataset assuming no errors.
Our analysis did not rediscover four mutually exclusive gene sets (Table S3) identified previously, based on optimizing the weight, for the first, original version of the GBM dataset. Several genes in those sets did not pass our filtering criteria in the pre-processing step (Methods), and one gene set could not be analyzed for this reason. Two sets had large estimated impurity, which does not satisfy our threshold. All three analyzed gene sets were insignificant according to the ME test, most likely due to relatively high imbalance (two to three times larger than the median imbalance of the gene sets we identified; compare Figure S3). Interestingly, one of those gene sets does not have a significant permutation p-value, which may be due to the fact that the data processing differed and the original dataset contained fewer samples.
In this section, we consider the scenario where the data are erroneous and the error rates are known and can be used for pattern evaluation. Figure S1 visualizes the severe effects of ignoring errors. The observed weight, computed on datasets with false negatives, is consistently reduced compared to the true weight of patterns generated without errors. Addition of false positives introduces the most bias in the observed weight, and results in false ranking. Similarly, for the reduced mutual exclusivity model assuming no errors, parameter estimation fails when errors do occur in the data (Figure S4). Thus, there is a well-motivated need for the model to account for errors.
Fixing the parameters ε and ν in our model to the true false positive and false negative rates, respectively, we can estimate the remaining coverage and impurity parameters using the EM algorithm (Methods). This estimation is very precise for simulated datasets with five genes and sample sizes 200 or 1000 (Table S1, Figure S5). Figure 4 shows that such precise estimates can be used to rank the patterns by their estimated true quality, sorting first by the estimated impurity and second by the estimated coverage. We ranked the erroneous datasets simulated in Figure S1 by their estimated true quality. Next, we evaluated the fraction of dataset pairs that were ordered the same way as when their true impurity and coverage were used for sorting. This fraction of correctly ranked pairs was compared to the fraction ranked the same way by the observed weight as compared to the true weight. For data containing false negatives, both the quality ranking and the observed weight rank very well. The estimated true quality significantly outperforms the observed weight in the presence of false positives.
Figure 4. Improved ranking of erroneous patterns.
In contrast to the observed weight applied in previous studies, which ignores errors and scores observed coverage and impurity, our approach estimates the true parameters using the known error rates and ranks the patterns correctly. The data was simulated from the mutual exclusivity model with fixed coverage and impurity values, and with one error rate varied along the x-axis in each of panels A and B. 20 datasets with 5 genes and 1000 patients were simulated per parameter setting.
Finally, we consider the scenario where the observed data contains errors that occur at unknown rates. In this case we need to estimate all four model parameters, and we proved the model to be identifiable from the data (Text S1). As expected, Table S1 shows that for realistic sample and gene set sizes (200 or 1000 patients and five genes), and for typical parameter settings (with small impurity and small error rates), parameter estimation is more difficult than in the case where ε and ν are given (compare Figure S5). The estimated parameter values start approaching the true ones only for prohibitively large sample sizes (Figure S6). In particular, for realistic sample numbers, the false negative rate ν is largely underestimated. Since, in the case of mutual exclusivity and small impurity values, there are in total not many true positive cases, actual false negatives should be very rare. Thus, without much loss of generality for realistic datasets, we further assume that the false negative rate is zero and account only for the false positives. With this assumption, our approach remains very useful in mutual exclusivity analysis: Figure S1 and Figure 4 show that, in terms of ranking, there is a pressing need to account for false positives rather than for false negatives.
Table S1 and Figure S7 illustrate that, with this assumption, a much more accurate estimation of the remaining parameters γ, δ, and ε is possible already for 1000 samples (but not 200). Still, for impurity δ too similar to the false positive rate ε, δ is overestimated and ε underestimated. Thus, in some cases, the true impurity may be smaller than its estimated value, making our evaluation of patterns over-conservative. Again, this problem diminishes for larger datasets. Figure 5 shows that, for realistic dataset sizes and parameter values, the ME test is able to detect mutual exclusivity in data with false positives, and is more powerful than the permutation test.
Figure 5. Power of the mutual exclusivity model accounting for false positives.
The ME test p-values for A data generated from the full mutual exclusivity model with false positives, and B data generated from the independence model, in comparison to the permutation test applied in previous studies. Again, the ME test is more powerful (compare Figure 2).
We applied our approach accounting for false positives to pan-cancer genomic alteration data, a collection from twelve distinct cancer types. Combining cancer datasets makes it possible to mine for mutually exclusive patterns that are universal for the disease, but it hampers the search for patterns that are specific to one of the combined types: a gene set whose alterations are mutually exclusive in only one cancer type and not in others will most likely not be detectable in the combined dataset. The pan-cancer dataset is much larger than the glioblastoma data, thus allowing more accurate parameter estimation. Somatic point mutations, copy number variants, and methylations were compiled into a single binary data matrix. Duplicated columns from the compiled matrix were removed, yielding a matrix with 428 columns, some of which represent not one but several genes (Methods).
We aimed to collect universal, low-impurity mutual exclusivity patterns for gene sets of size five that cover multiple cancer samples, accounting for possible false positives. We first pre-filtered the immense set of all possible subsets, starting by fitting the reduced model (assuming no errors in the data) for all 15,504 subsets of the 20 measured genes with the largest individual alteration frequencies. Next, we chose the 2039 subsets that had estimated coverage larger than 0.3, impurity lower than 0.2, and ME statistic larger than 0, indicating that the reduced mutual exclusivity model fits the data better than the independence model (not necessarily significantly). Figure 3E shows the pattern that has the largest weight in the pre-filtered dataset, which is largely dominated by alterations of TP53. Finally, we applied the model accounting for false positives to the pre-filtered subsets and identified 476 high quality patterns (Table S5) with estimated coverage larger than 0.3 and impurity lower than 0.2, selecting by significance (Benjamini–Hochberg adjusted ME p-values) and sorting by impurity (lowest on top; examples in Figures 3F–H). Three of the columns in the visualized patterns correspond not to one but to a set of genes, and are denoted META 1–3 (see Table S4 for individual genes). A possible reason for the large number of significant, high quality gene sets (Table S5) is that the identified gene sets overlap. Such overlapping gene sets may either share strongly mutually exclusive subsets of smaller size, or may all be subsets of a single, larger mutually exclusive gene set.
Published findings on pairs of these genes in various cancers support the idea that the top patterns are indicative of coexistence in a common cancer pathway. For instance, for the pattern in Figure 3G, the protein products of the genes PTEN and MYC (an element of META 2) are co-regulators of p53 in the control of differentiation, self-renewal, and transformation in glioblastoma. The gene copy ratio of MYC and CDKN2A in the same pattern has prognostic value in squamous cell carcinoma of the head and neck. Finally, PTEN and VHL are both known regulators of the HIF-1 pathway, and PTEN and APC, common to two identified gene sets, are tumor suppressors that are known to interact in cancer.
Table S6 compares the p-values and estimated parameters obtained for the top identified patterns using the model accounting for false positives with those from the reduced model. As a rule, the former p-values are smaller, while the coverage and impurity estimates from the two models are similar. In one case, however (Figure 3G), the estimated false positive rate is 0.037, making the estimated coverage accordingly smaller (0.45) than the estimate from the reduced model (0.55). This is why this pattern, although with larger observed coverage, would score lower in our true quality ranking than the pattern in Figure 3H. In general, over all pre-filtered subsets, the ME test based on the model accounting for false positives was more flexible and returned a larger number of significant adjusted ME p-values (1397) than the test based on the reduced model (1171).
This work makes two main contributions. First, a probabilistic, generative model of mutual exclusivity, with readily interpretable parameters that represent pattern coverage and impurity, as well as parameters that account for false positive and false negative rates. In the case where the data is free of errors, we give closed-form expressions for the maximum likelihood coverage and impurity estimates. For erroneous data, we propose an EM algorithm for parameter estimation. We prove analytically that the model parameters are identifiable, and show the limits of parameter estimation in practice, where sample sizes are small. Within these limits, the most troublesome false positive rate, as well as the coverage and impurity parameters most useful for pattern ranking, can still be estimated accurately. Second, we develop the ME test, which assesses the significance of mutual exclusivity patterns by comparing the likelihood of the dataset under the mutual exclusivity model to the null model assuming independent gene alterations. The proposed test proves to be more powerful than the permutation test applied previously.
Our approach was first applied to identify mutually exclusive patterns that are specific to glioblastoma, under the assumption prevalent in the literature that the data does not contain errors. The genes in the top identified patterns are involved in canonical glioblastoma signaling pathways, with the addition of two novel genes, RPL5 and TRAT1. Next, we applied the model that accounts for false positives, and detected universal patterns with high coverage and low impurity, found significant by the ME test across a collection of samples from twelve different cancers. Although both these cancer cohorts were already analyzed in detail with cutting-edge tools, our new testing procedure provides new, significant, and biologically relevant patterns that were not identified previously.
The proposed mutual exclusivity model could be extended in several ways. For instance, the current model explicitly assumes that the mutually exclusive mutations occur equally likely in all genes in the dataset. This assumption has two important advantages. First, the ME test finds most evidence for mutual exclusivity for balanced patterns, where the genes contribute similarly to the coverage. Second, with this assumption our EM algorithm is very efficient (Methods), and dropping it would increase its time complexity. The model may be extended to allow different mutually exclusive mutation rates of genes as parameters, which would be estimated from the data. Another possible extension of the model would allow for multiple gene sets, each with its own coverage and impurity parameters, and the same error rates. Such a model, in contrast to previous work in this direction, would correct for errors and prioritize patterns with balanced mutually exclusive mutations. Finally, this work, focusing on modeling, evaluation, and testing for mutual exclusivity, does not deal with an efficient search for mutual exclusivity patterns. Instead, we browse all possible small gene subsets measured in glioblastoma, or all gene sets with high coverage in the pan-cancer data. Integration of the model into an existing or a new search procedure is one direction of our future research. Ideally, the objective optimized in the search would be a single measure that reflects preferred impurity, coverage, and significance in the ME test. These three evaluation criteria could be combined using appropriate priors in the ME model. The results presented here indicate that already now, the proposed approach is a step forward in the demanding task of mining cancer genomic data for the mechanistic principles of this disease.
The TCGA provisional glioblastoma data for 236 patients and 83 genes include somatic point mutations (identified as significant by MutSig), and amplifications and deletions (called by GISTIC). The combined analyzed dataset is filled with zeros, and has entry 1 whenever there was a significant point mutation, or a copy number variant that is concordant with expression in the data. For each gene, concordance of its copy number variants (amplifications and deletions) with expression data was assessed using the Wilcoxon test, comparing the median expression of the gene in samples carrying the variant to its expression in diploid samples. Specifically, amplifications were tested for an expression median higher, and deletions for a median lower, than in the diploid cases. Only significantly concordant (p-value < 0.05) variants were recorded in the analyzed dataset. The pan-cancer TCGA data has 3299 samples and records somatic point mutations, amplifications, deletions and methylations. Pre-processed data was downloaded from the cBioPortal and combined into a single binary matrix with altered genes as columns, separately for the GBM and for the pan-cancer data collection. In the combined pan-cancer matrix some columns were identical, with different genes having alterations in exactly the same patients. Since such genes are indistinguishable with respect to mutual exclusivity patterns, they were combined into "meta" sets of genes, each represented by a single column in the matrix.
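As an illustration of the concordance filter described above, the sketch below runs a one-sided Wilcoxon rank-sum test per copy number variant, via SciPy's Mann–Whitney implementation. The function name and the input arrays (expression values in samples carrying the variant versus diploid samples) are hypothetical; this is not the pipeline used for the TCGA data.

```python
from scipy.stats import mannwhitneyu

def concordant_variant(expr_variant, expr_diploid, kind, alpha=0.05):
    """Return True if a copy number variant is significantly concordant with
    expression: amplifications should shift expression up relative to diploid
    samples, deletions should shift it down (one-sided rank-sum test)."""
    alternative = "greater" if kind == "amplification" else "less"
    _, p = mannwhitneyu(expr_variant, expr_diploid, alternative=alternative)
    return p < alpha
```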
Let θ = (γ, δ, ε, ν) be the set of model parameters, with coverage γ, impurity δ, false positive rate ε and false negative rate ν. We define the mutual exclusivity model on a set of random variables: a hidden binary random variable Z that indicates patient coverage, a hidden vector variable M that specifies the single exclusively mutated gene in a covered patient, a set of hidden binary variables (Y_1, …, Y_k) that represent the true alterations of the genes, and a set of observed variables (X_1, …, X_k) that correspond to the alteration status of the genes recorded in the data. The model is defined by P(M = e_g | Z = 1) = 1/k for all genes g, where e_g is a unit vector of length k with a single entry 1 at position g. Thus, M ≠ e_g means that some gene other than g is selected as mutually exclusively mutated. With this distribution of M, our model is tailored for balanced patterns, where the mutually exclusive alterations occur on average equally frequently for each gene in the pattern. The hidden binary random variable Y_g indicates a true alteration of gene g: it has value 1 either when gene g is selected as mutually exclusive (for M = e_g), or, otherwise, when the entry for gene g is impure and the gene is mutated in addition to another one (with probability δ, for M ≠ e_g). Each observation X_g then arises from Y_g by flipping 0 to 1 with rate ε and 1 to 0 with rate ν. In this model, the observed likelihood for a given observation depends only on the number s of values 1 in the observation and the observation length k, and is thus denoted L(s, k) (Text S1). Writing q = δ(1−ν) + (1−δ)ε for the probability of observing an alteration in a non-exclusive gene of a covered patient, for 0 ≤ s ≤ k we have:

(1)  L(s,k) = (1-\gamma)\binom{k}{s}\varepsilon^{s}(1-\varepsilon)^{k-s} + \gamma\left[(1-\nu)\binom{k-1}{s-1}q^{s-1}(1-q)^{k-s} + \nu\binom{k-1}{s}q^{s}(1-q)^{k-1-s}\right]

The likelihood of the whole dataset D reads:

(2)  L(D) = \prod_{s=0}^{k} L(s,k)^{n_s}

where n_s is the number of observations with s values 1 in D. Thus, after pre-computation of the k+1 values L(s,k), the likelihood can be computed efficiently in only k+1 steps of constant time complexity.
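The reconstructed equations (1) and (2) translate directly into code: pre-compute the k+1 values L(s, k), then evaluate the data log-likelihood from the counts n_s. Both the displayed formula and the symbol names are our reconstruction of the garbled original from the stated generative process, so the sketch should be read as illustrative.

```python
import numpy as np
from scipy.stats import binom

def pattern_likelihoods(k, gamma, delta, eps, nu):
    """Equation (1): L(s, k) for s = 0..k. q is the probability of observing
    an alteration in a non-exclusive gene of a covered patient; binom.pmf
    returns 0 outside its support, which handles the s = 0 edge case."""
    s = np.arange(k + 1)
    q = delta * (1 - nu) + (1 - delta) * eps
    uncovered = binom.pmf(s, k, eps)
    covered = (1 - nu) * binom.pmf(s - 1, k - 1, q) + nu * binom.pmf(s, k - 1, q)
    return (1 - gamma) * uncovered + gamma * covered

def log_likelihood(X, gamma, delta, eps, nu):
    """Equation (2): the log-likelihood depends on the data only through
    the counts n_s of rows with s observed alterations."""
    n, k = X.shape
    counts = np.bincount(X.sum(axis=1), minlength=k + 1)
    return counts @ np.log(pattern_likelihoods(k, gamma, delta, eps, nu))
```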
Parameter estimation in the model without errors.
In the reduced model we know ε = ν = 0 and we are interested only in estimating γ and δ. In this case, X_g = Y_g for all g, and the log likelihood reads

(3)  \ell(D) = n_0\log(1-\gamma) + (n-n_0)\log\gamma + \sum_{s\ge 1} n_s\left[\log\binom{k-1}{s-1} + (s-1)\log\delta + (k-s)\log(1-\delta)\right]

where n_0 is the number of observations without any alteration. The maximum likelihood parameter estimates are given by \hat\gamma = (n-n_0)/n and \hat\delta = (m-(n-n_0))/((n-n_0)(k-1)), where m is the total number of alterations in D.
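The closed-form estimates follow from the reconstructed log likelihood (3) by standard maximization; a short sketch, under the same notational assumptions as above:

```python
def reduced_mle(X):
    """Closed-form MLEs in the error-free reduced model: gamma is the
    fraction of covered rows; delta is the fraction of impure alterations
    among the (k - 1) non-exclusive entries of each covered row."""
    n, k = X.shape
    row_sums = X.sum(axis=1)
    n_covered = int((row_sums > 0).sum())
    gamma_hat = n_covered / n
    delta_hat = (row_sums.sum() - n_covered) / (n_covered * (k - 1))
    return gamma_hat, delta_hat
```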
Parameter estimation in the model with errors.
By Proposition 1, the parameters in the full model are identifiable (Text S1). For maximum likelihood estimation, we propose an EM algorithm (Box 1 and Text S1), whose input arguments (initialization, convergence thresholds, and maximum iteration numbers) were fixed throughout our analysis. The algorithm utilizes the estimates of the γ and δ parameters from the reduced mutual exclusivity model (assuming no errors) as educated guesses for initialization. In the E-step, five expected values are computed in constant time for each of the k+1 values of s. One reason for this computational efficiency is the assumption that all genes are equally likely to be exclusively altered (Text S1). The M-step is performed in constant time. After initial pre-computing steps, each iteration therefore takes only O(k) time, so the complexity of the entire algorithm is governed by k rather than n. We expect that, as for all mutually exclusive patterns observed in the literature so far, k ≪ n holds. Thus, our algorithm gives a significant reduction in the run time of EM compared with the usual case, where computations need to be performed for all n observations and n would replace k in the complexity. Increasing the difficulty of the estimation problem (from both error rates given to unknown; Table S7), for the same data and fixed convergence criteria, increases the run time due to the larger number of iterations performed (from 21 to 1033 on average). In the case where the data is generated with errors and the error rates ε or ν are known, we use the same EM algorithm for estimating the remaining parameters, fixing the given values in the M-step.
Box 1. EM for Mutual Exclusivity Model.
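The content of Box 1 is not reproduced here. As an illustrative stand-in, and because the likelihood (2) depends on the data only through the k+1 counts n_s, the sketch below fits the model by direct numerical maximization with the false negative rate fixed to zero (as argued above), reusing pattern_likelihoods and reduced_mle from the earlier sketches. It is explicitly not the authors' EM algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def fit_me_model(X, nu=0.0):
    """Estimate (gamma, delta, eps) by maximizing the reconstructed
    likelihood (2) numerically, with the false negative rate fixed."""
    n, k = X.shape
    counts = np.bincount(X.sum(axis=1), minlength=k + 1)

    def neg_ll(params):
        gamma, delta, eps = params
        L = pattern_likelihoods(k, gamma, delta, eps, nu)
        return -(counts @ np.log(np.clip(L, 1e-300, None)))

    gamma0, delta0 = reduced_mle(X)           # educated initial guesses
    res = minimize(neg_ll, x0=[gamma0, max(delta0, 1e-3), 1e-3],
                   bounds=[(1e-6, 1 - 1e-6)] * 3, method="L-BFGS-B")
    return res.x                              # gamma_hat, delta_hat, eps_hat
```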
The independence model assumes all genes are altered independently. Each gene g has an individual alteration probability p_g, and the vector p = (p_1, …, p_k) parametrizes the model (Figure 1C). Let m_g denote the number of patients with an alteration in gene g. With log likelihood

(4)  \ell_I(D) = \sum_{g=1}^{k}\left[m_g\log p_g + (n-m_g)\log(1-p_g)\right]

the maximum likelihood parameter values are given by \hat p_g = m_g/n.
The mutual exclusivity and independence models are not nested. To compare their likelihoods for a given dataset D, we compute Vuong's statistic, defined by the standardized and corrected log-likelihood ratio:

(5)  V = \frac{\ell_{ME}(D) - \ell_I(D) - c}{\sqrt{n}\,\hat\omega}

where \ell_{ME}(D) (equation 2) and \ell_I(D) (equation 4) are the observed log likelihoods of the data for the maximum likelihood parameter estimates under the mutual exclusivity and independence model, respectively, and \hat\omega is the standard deviation of the per-observation log likelihood ratios. The term c is a correction for the difference in the numbers of free parameters in the models. For non-nested models, V asymptotically has normal distribution with mean 0 and variance 1 when the models have equal Kullback–Leibler divergence from the true model generating the data. Thus, the ME test p-value is given by 1 − Φ(V), where Φ denotes the standard normal cumulative distribution function.
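A sketch of the ME test as reconstructed in equation (5). The per-row log-likelihood ratios and the one-sided use of the normal CDF follow our reading of Vuong's test; the BIC-type correction and the free-parameter counts (three or four for the ME model, k for the independence model) are assumptions, not taken from the source.

```python
import numpy as np
from scipy.stats import norm

def me_test(X, gamma, delta, eps, nu=0.0):
    """Vuong-style ME test: standardized, parameter-corrected log-likelihood
    ratio between the ME model (at the given estimates) and the
    independence model (at its MLEs p_g = m_g / n)."""
    n, k = X.shape
    s = X.sum(axis=1)
    ll_me = np.log(pattern_likelihoods(k, gamma, delta, eps, nu)[s])
    p_hat = np.clip(X.mean(axis=0), 1e-9, 1 - 1e-9)
    ll_ind = (X * np.log(p_hat) + (1 - X) * np.log(1 - p_hat)).sum(axis=1)
    ratios = ll_me - ll_ind
    d_me = 4 if nu > 0 else 3                 # assumed free-parameter counts
    correction = 0.5 * (d_me - k) * np.log(n)
    V = (ratios.sum() - correction) / (np.sqrt(n) * ratios.std(ddof=1))
    return 1 - norm.cdf(V)                    # small p-value favors the ME model
```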
For a given set of genes M, the mutual exclusivity weight is defined as W(M) = 2|\Gamma(M)| - \sum_{g\in M}|\Gamma(g)|, where \Gamma(g) is the set of samples with an alteration in gene g and \Gamma(M) is the set of samples with at least one alteration in M. To assess significance of the weight, a permutation test is performed with the weight as test statistic, and the null distribution is obtained by independently permuting alterations 1000 times for each gene (each column in the dataset), preserving its alteration frequency.
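For comparison, a sketch of the weight and its column-wise permutation test. The weight formula is our reconstruction of the Vandin et al. definition (coverage rewarded twice, every alteration penalized once, so coverage overlap lowers the weight).

```python
import numpy as np

def me_weight(X):
    """Reconstructed weight: 2 * |samples covered| - total alterations."""
    return 2 * int((X.sum(axis=1) > 0).sum()) - int(X.sum())

def permutation_test(X, n_perm=1000, seed=None):
    """Permute each column independently (preserving per-gene frequencies)
    and report how often the permuted weight reaches the observed one."""
    rng = np.random.default_rng(seed)
    observed = me_weight(X)
    hits = sum(
        me_weight(np.column_stack([rng.permutation(col) for col in X.T])) >= observed
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)          # empirical p-value with pseudocount
```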
Computation of the mutual exclusivity weight can be severely biased by errors in the data. Left plot: the mutual exclusivity weight, proposed by Vandin and colleagues, for datasets simulated from the mutual exclusivity model without errors. In this case, the observed weight (computed on the observed data) is the same as the true weight (computed on the true alteration status), and increases with coverage and decreases with impurity. The arrow points at one example pair of datasets, indicating how they are ranked by the true weight. Middle: addition of false negatives decreases the observed weight (here, computed on the observed, erroneous dataset, not on the true alteration status), but the effect is consistent and does not disturb the ranking. Right: addition of false positives has the most severe effect on ranking by the observed weight. An arrow points at two datasets which, based on the true weight (i.e., computed on data recording the true alteration status, as in the left plot), were ordered increasingly, and which are now ordered in reverse by the observed weight.
Both our mutual exclusivity (ME) test and the previously applied permutation test correctly do not support mutual exclusivity in data generated from the independence model with independent frequencies distributed as in the glioblastoma dataset. Shown are log p-values for simulated data with 1000 patients, 20 datasets per gene set size.
Imbalance of patterns identified with the ME approach is much lower than that of patterns identified using the previously proposed weight. Box plots summarize the imbalance distribution for the 11 patterns called significant by the ME test, with high coverage (larger than 0.3) and low impurity (lower than 0.2; red), as well as for the 10, 100, and 1000 top patterns with the largest weight called significant by the permutation test. The median imbalance of patterns prioritized using our approach is around three times lower than that of patterns with top, significant weights, regardless of how many of the top ones are considered.
Parameter estimation in the reduced mutual exclusivity model can be severely biased by errors in the data. Left column: the difference between the true and estimated parameter values for datasets simulated from the mutual exclusivity model without errors. In this case, estimation of both impurity (delta; top) and coverage (gamma; bottom) is very accurate, regardless of the impurity (marked with colors). The true coverage values are indicated on the x-axis. Middle column: addition of false negatives results in underestimation of the coverage parameter. Right column: addition of false positives results in underestimation of both the impurity and coverage parameters, and most strongly affects estimation of low coverage values.
Efficient estimation of the coverage parameter γ and the impurity parameter δ, using the EM algorithm, from data generated from the mutual exclusivity model with error rates that were given to the model. 20 datasets with 5 genes and 1000 patients were simulated per parameter setting, across a grid of true parameter values. Box plots show the estimated parameter values for the different true values; the medians of the estimates are close to the true values, marked with red dashed lines.
Difficulties in estimating the full set of parameters. We applied our EM algorithm to estimate the coverage parameter γ, the impurity parameter δ, and the false positive and false negative rates ε and ν from data generated from the mutual exclusivity model with error rates that were not given to the model, using increasing sample sizes. The tested parameter values were fixed to realistic values. 20 datasets per sample size, ranging from 1000 (1K) to 100,000 (100K) patients, were simulated. Estimation accuracy increases with sample size.
More accurate parameter estimation assuming false negative rate ν = 0. A Estimation of the parameters γ, δ, and ε from data generated from the mutual exclusivity model accounting for false positives (the false positive rate was not given to the model); the tested parameter values were fixed to realistic values. B The estimation is more difficult when δ and ε are similar. C Similarity of δ and ε is less of a problem for larger gene sets (here, 10 genes), as well as when more samples are used (not shown). All plots: results on simulations of 20 datasets with 5 genes and 1000 patients per parameter setting.
Root mean squared error (RMSE) of parameter estimation for different model variants and sample sizes. To determine a reasonable dataset size for the different model variants, we tracked the RMSE of parameter estimates for sample sizes 200 and 1000, with typical parameter settings and error rates as indicated in the column "True error rates". 20 datasets with 5 genes and the number of patients indicated in the corresponding column were simulated from the models per parameter setting. RMSE was chosen to represent the difficulty of the estimation task as a function of the sample size. For example, for the reduced model that assumes no errors, we have derived closed-form expressions for the maximum likelihood parameter values; in this case, RMSE of the parameter estimates depends only on random variation in the data and defines a best-case reference for the remaining models, where parameter estimation is more difficult and performed using EM. Since both the ME model likelihood and the test largely depend on how accurately the parameters are estimated, RMSE defines the applicability of the approach.
List of high quality, significant gene sets of size four identified in the GBM dataset.
Results for mutually exclusive patterns identified in the glioblastoma dataset by previous studies. Analyzed genes are written in bold, to distinguish them from genes that were filtered out in the preprocessing steps. Publication: the study in which the gene set was identified as mutually exclusive. Other results are given as in Table 1. *From this gene set, only TP53 passed the pre-filtering step, and thus no results are available.
Sets of genes that had identical columns in the combined pan-cancer data matrix and their short names used in the main text. Genes with identical columns in the combined and binarized pan-cancer data matrix were merged into sets and represented by a single column. The table lists those merged gene sets that are involved in top mutually exclusive patterns identified for the pan-cancer data.
List of high quality, significant gene sets of size five identified in the pan-cancer dataset.
Summary of top patterns identified for the pan-cancer dataset assuming false positives. γ, δ, p-value: coverage and impurity estimates and the p-value from the reduced mutual exclusivity model, assuming no errors in the data. γ, δ, ε, ME p-value: parameter estimates and p-value from the mutual exclusivity model accounting for false positives.
Average runtime of the EM algorithm in CPU seconds. The table presents average runtimes of parameter estimation using the EM algorithm averaged over the datasets simulated and summarized in Table S1. The runtime increases with the difficulty of the parameter estimation problem.
Supplementary Methods. Likelihood in the mutual exclusivity model, identifiability of the mutual exclusivity model, and derivation of the Expectation Maximization algorithm.
Conceived and designed the experiments: ES NB. Analyzed the data: ES. Contributed reagents/materials/analysis tools: ES. Wrote the paper: ES NB.
1. TCGA (2008) Comprehensive genomic characterization defines human glioblastoma genes and core pathways. Nature 455: 1061–1068.
2. Garraway LA, Lander ES (2013) Lessons from the Cancer Genome. Cell 153: 17–37.
3. Miller C, Settle S, Sulman E, Aldape K, Milosavljevic A (2011) Discovering functional modules by identifying recurrent and mutually exclusive mutational patterns in tumors. BMC Medical Genomics 4: 34+.
4. Vandin F, Upfal E, Raphael BJ (2012) De Novo discovery of mutated driver pathways in cancer. Genome Res 22: 375–385.
5. Zhao J, Zhang S, Wu LY, Zhang XS (2012) Efficient methods for identifying mutated driver pathways in cancer. Bioinformatics 28: 2940–2947.
6. Ciriello G, Cerami E, Sander C, Schultz N (2012) Mutual exclusivity analysis identifies oncogenic network modules. Genome research 22: 398–406.
7. Leiserson MDM, Blokh D, Sharan R, Raphael BJ (2013) Simultaneous Identification of Multiple Driver Pathways in Cancer. PLoS Comput Biol 9: e1003054+.
8. Yeang CH, Mccormick F, Levine A (2008) Combinatorial patterns of somatic gene mutations in cancer. FASEB J 22: 2605–2622.
9. Sparks AB, Morin PJ, Vogelstein B, Kinzler KW (1998) Mutational analysis of the APC/betacatenin/Tcf pathway in colorectal cancer. Cancer Res 58: 1130–1134.
10. Rajagopalan H, Bardelli A, Lengauer C, Kinzler KW, Vogelstein B, et al. (2002) Tumorigenesis: RAF/RAS oncogenes and mismatch-repair status. Nature 418: 934.
11. Masica DL, Karchin R (2011) Correlation of somatic mutation and expression identifies genes important in human glioblastoma progression and survival. Cancer Research 71: 4550–4561.
12. Szczurek E, Misra N, Vingron M (2013) Synthetic sickness or lethality points at candidate combination therapy targets in glioblastoma. International Journal of Cancer 133: 2123–2132.
13. Cibulskis K, Lawrence MS, Carter SL, Sivachenko A, Jaffe D, et al. (2013) Sensitive detection of somatic point mutations in impure and heterogeneous cancer samples. Nat Biotech 31: 213–219.
14. Brennan CW, Verhaak RGW, McKenna A, Campos B, Noushmehr H, et al. (2013) The Somatic Genomic Landscape of Glioblastoma. Cell 155: 462–477.
15. Ciriello G, Miller ML, Aksoy BA, Senbabaoglu Y, Schultz N, et al. (2013) Emerging landscape of oncogenic signatures across human cancers. Nat Genet 45: 1127–1133.
16. Vuong QH (1989) Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica 57: 307–333.
17. Zhang W, Samelson LE (2000) The role of membrane-associated adaptors in T cell receptor signalling. Semin Immunol 12: 35–41.
18. Bruyns E, Marie-Cardine A, Kirchgessner H, Sagolla K, Shevchenko A, et al. (1998) T cell receptor (TCR) interacting molecule (TRIM), a novel disulfide-linked dimer associated with the TCR-CD3-zeta complex, recruits intracellular signaling proteins to the plasma membrane. J Exp Med 188: 561–575.
19. Zheng H, Ying H, Yan H, Kimmelman AC, Hiller DJ, et al. (2008) Pten and p53 converge on c-Myc to control differentiation, self-renewal, and transformation of normal and neoplastic stem cells in glioblastoma. Cold Spring Harb Symp Quant Biol 73: 427–437.
20. Akervall J, Bockmuhl U, Petersen I, Yang K, Carey TE, et al. (2003) The gene ratios c-MYC:cyclin-dependent kinase (CDK)N2A and CCND1:CDKN2A correlate with poor prognosis in squamous cell carcinoma of the head and neck. Clin Cancer Res 9: 1750–1755.
21. Zundel W, Schindler C, Haas-Kogan D, Koong A, Kaper F, et al. (2000) Loss of PTEN facilitates HIF-1-mediated gene expression. Genes Dev 14: 391–396.
22. Song MS, Carracedo A, Salmena L, Song SJ, Egia A, et al. (2011) Nuclear PTEN regulates the APC-CDH1 tumor-suppressive complex in a phosphatase-independent manner. Cell 144: 187–199.
23. Getz G, Höfling H, Mesirov JP, Golub TR, Meyerson M, et al. (2007) Comment on "The consensus coding sequences of human breast and colorectal cancers". Science 317: 1500.
24. Beroukhim R, Getz G, Nghiemphu L, Barretina J, Hsueh T, et al. (2007) Assessing the significance of chromosomal aberrations in cancer: methodology and application to glioma. Proc Natl Acad Sci USA 104: 20007–20012.
25. Cerami E, Gao J, Dogrusoz U, Gross BE, Sumer SO, et al. (2012) The cBio Cancer Genomics Portal: An Open Platform for Exploring Multidimensional Cancer Genomics Data. Cancer Discovery 2: 401–404.
26. Szczurek E, Beerenwinkel N (2014) Modeling Mutual Exclusivity of Cancer Mutations. In: Research in Computational Molecular Biology, Springer. pp 307–308.
Is the Zika virus outbreak a solved issue in Brazil?
Although the first cases of Zika virus infection in Brazil were confirmed only in the first half of 2015, recent publications suggest that the disease has probably been among us since the beginning of 2014.(25) Brazil has played an important scientific role and has been internationally recognized within the last two years for reporting the emergency situation of the Zika virus outbreak, and also for identifying neurologic outcomes in infants exposed to the infection during gestation.(23) The disease received striking visibility from the scientific community, reflected in the increase in related publications over the last few years.
In 2015, the Brazilian Ministry of Health advised women to avoid becoming pregnant,(28) and in 2016 Zika virus was declared a Public Health Emergency of International Concern by the World Health Organization,(29) resulting in a reduction of birth rates in some regions.(30) In 2017, however, the number of consultations in fertilization clinics increased again (Glina and Alvarenga, personal communication), which may reflect a decrease in the general population's level of concern regarding Zika virus infection.
Of note, this marked reduction in the level of concern about Zika virus infection in our setting may be premature, if not mistaken. Statistical modeling studies have been used to predict the regions around the world that might be most affected by Zika virus. Such predictive models are important not only to guide preventive measures and help plan the allocation of therapeutic resources, but also to indicate where projects should concentrate efforts to clarify a number of unknown aspects of the disease. Modeling studies use information such as the presence, efficiency and density of disease vectors, population density, temperature and local humidity, altitude, immunity or susceptibility of the resident population, the history of incidence of other arboviruses, and population movements among geographic regions.(31,32) Many regions in Brazil remain flagged as high-risk areas for Zika occurrence,(31) including the state of São Paulo, where available surveillance data show that disease incidence was lower than in the Northeast region of the country,(26) and where a large proportion of the population remains susceptible to the infection.
The arboviral transmission season in Brazil is just around the corner, and the risk of Zika virus infection must not be neglected, especially among women of reproductive age and pregnant women.
The origins of the Green Knight are unknown. He/it appears to be a mystical entity somehow embodying the spirit of the British Isles, linked to or based at the Green Chapel of Avalon in Otherworld. The Green Knight is engaged in an endless war with his/its opposite number, the Red Lord, and his/its servants the Bane, and periodically empowers human champions to fight the Bane on his/its behalf. In the past these champions included King Arthur Pendragon and the Knights of the Round Table, and in recent years the Knight has bonded the same 'Pendragon spirits' that once belonged to them to a number of others, including the superhero Albion (who became a Pendragon during the First World War) and, more recently, the Knights of Pendragon. Whether the Green Knight has any connection to the Pendragon spirit given to the Black Knight (Dane Whitman) is unclear, though it seems likely.
The Green Knight, along with the Lady of the Lake, was seemingly destroyed by the Skrulls when they invaded Avalon, but was later restored by Pete Wisdom and MI13 after Wisdom broke open the doors of dark magic.
During the events of Revolutionary War, the Green Knight was paralyzed by Mys-Tech, which used the DNA of the original Knights of the Round Table and its corrupting influence to create the Zombie Knights of a Zombie Round Table, led by a Zombie King Arthur. It was thanks to Pete Wisdom, once again, that the Green Knight was freed. Because Avalon is the collective unconscious of the British people, the Green Knight's appearance was transformed into that of an enormous basketball player, capable of reviving its champion Sir Gawain.
As a manifestation of Britain's collective unconscious, the Knight's powers are varied and fluctuate with the people's view of the nation and its legends. It can imbue humans with guiding spirits (Pendragons), employing them as its protectors. While in Avalon it can also physically engage others in combat, showing tremendous strength and speed, shape-shifting, and the ability to revive others.
Moderately trained in the use of shields and swords.
Increases in pollution and feelings of hate can cause the Green Knight to weaken, sicken, and eventually die.
The Green Knight is ever-changing; however, its large wooden form seems to be its base form, or the one it prefers most.
The House of Medici (/ˈmɛdɪtʃi/ MED-i-chee; Italian pronunciation: [de ˈmɛːditʃi]) was a banking family, political dynasty and later royal house that first began to gather prominence under Cosimo de' Medici in the Republic of Florence during the first half of the 15th century. The family originated in the Mugello region of the Tuscan countryside, gradually rising until they were able to fund the Medici Bank. The bank was the largest in Europe during the 15th century, seeing the Medici gain political power in Florence, though officially they remained citizens rather than monarchs.
The Medici Bank was one of the most prosperous and most respected institutions in Europe. Some estimates hold that the Medici were, for a period of time, the wealthiest family in Europe. From this base, they acquired political power, initially in Florence and later in wider Italy and Europe. A notable contribution to the profession of accounting was the improvement of the general ledger system through the development of the double-entry bookkeeping system for tracking credits and debits. The Medici were among the earliest businesses to use the system.
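As a rough illustration of the double-entry principle, not a reconstruction of the Medici Bank's actual ledgers, the sketch below records each transaction as a matched debit and credit so that the books always balance; the account names and amounts are hypothetical.

```python
# Minimal sketch of double-entry bookkeeping: every transaction is
# recorded twice, as a debit to one account and an equal credit to
# another, so total debits always equal total credits.
from collections import defaultdict

ledger = defaultdict(lambda: {"debit": 0, "credit": 0})

def post(debit_account, credit_account, amount):
    """Record one transaction as a matched debit/credit pair."""
    ledger[debit_account]["debit"] += amount
    ledger[credit_account]["credit"] += amount

# Hypothetical entries: a loan issued and a deposit received.
post("loans_receivable", "cash", 500)   # bank lends 500 florins
post("cash", "deposits", 300)           # customer deposits 300 florins

total_debits = sum(a["debit"] for a in ledger.values())
total_credits = sum(a["credit"] for a in ledger.values())
assert total_debits == total_credits  # the books must balance
print(dict(ledger["cash"].items()))   # {'debit': 300, 'credit': 500}
```

The built-in cross-check is the point of the design: an entry posted to only one side of the books immediately shows up as an imbalance between total debits and total credits.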
The Medici family came from the agricultural Mugello region, north of Florence, being mentioned for the first time in a document of 1230. The origin of the name is uncertain. Medici is the plural of medico, also written "del medico" or "delmedigo", meaning "medical doctor". It has been suggested that the name derived from one Medico di Potrone, a castellan of Potrone in the late 11th century, who presumably was the family's ancestor.
The Medici family was connected to most other elite families of the time through marriages of convenience, partnerships, or employment, as a result of which they held a central position in the social network: several families had systematic access to the rest of the elite only through the Medici, perhaps similar to banking relationships. This has been suggested as a reason for the rise of the Medici family. Some examples of these families include the Bardi, Salviati, Cavalcanti, and the Tornabuoni. Members of the family rose to some prominence in the early 14th century in the wool trade, especially with France and Spain. Despite the presence of some Medici in the city's government institutions, they were still far less notable than other outstanding families such as the Albizzi or the Strozzi. One Salvestro de' Medici was speaker of the woolmakers' guild during the Ciompi revolt, and one Antonio was exiled from Florence in 1396. Involvement in another plot in 1400 caused all branches of the family to be banned from Florentine politics for twenty years, with the exception of two; from one of the latter, that of Averardo de' Medici, originated the Medici dynasty.
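That brokerage argument can be made precise with network centrality, as in Padgett and Ansell's well-known analysis of Florentine marriage ties. The sketch below uses a small made-up graph, not the historical data, to show how a family that bridges otherwise disconnected groups earns a high betweenness score.

```python
# Toy illustration of network centrality, assuming a made-up graph of
# inter-family ties; the real analysis (Padgett & Ansell, 1993) used
# documented marriage and business links among Florentine elites.
import networkx as nx

ties = [
    ("Medici", "Salviati"), ("Medici", "Tornabuoni"),
    ("Medici", "Bardi"), ("Medici", "Cavalcanti"),
    ("Salviati", "Pazzi"), ("Albizzi", "Strozzi"),
    ("Albizzi", "Medici"),
]
G = nx.Graph()
G.add_edges_from(ties)

# Betweenness centrality: the fraction of shortest paths between other
# nodes that pass through each family. Brokers score highest.
for family, score in sorted(nx.betweenness_centrality(G).items(),
                            key=lambda kv: -kv[1]):
    print(f"{family:12s} {score:.2f}")
```

On this toy graph the Medici node dominates the ranking, which is the structural property the paragraph above describes: access between families runs through them.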
Tuscany participated in the Wars of Castro (the last time Medicean Tuscany proper was involved in a conflict) and inflicted a defeat on the forces of Urban VIII in 1643. The war effort was costly, and the treasury was left so empty that once the Castro mercenaries had been paid, the state could no longer afford to pay interest on government bonds; as a result, the interest rate was lowered by 0.75%. At that time, the economy was so decrepit that barter trade became prevalent in rural marketplaces.
The Medici lacked male heirs, and in 1705 the grand ducal treasury was virtually bankrupt. The population of Florence declined by 50%, and the population of the grand duchy as a whole declined by an estimated 40%. Cosimo desperately tried to reach a settlement with the European powers, but Tuscany's legal status was very complicated: the area of the grand duchy formerly comprising the Republic of Siena was technically a Spanish fief, while the territory of the old Republic of Florence was thought to be under imperial suzerainty. Upon the death of his first son, Cosimo contemplated restoring the Florentine republic, either upon Anna Maria Luisa's death or on his own if he predeceased her. The restoration of the republic would entail resigning Siena to the Holy Roman Empire, but, regardless, it was vehemently endorsed by his government. Europe largely ignored Cosimo's plan; only Great Britain and the Dutch Republic gave any credence to it, and the plan ultimately died with Cosimo III in 1723.
On 4 April 1718, Great Britain, France and the Dutch Republic (and later Austria) selected Don Carlos of Spain, the elder child of Elisabeth Farnese and Philip V of Spain, as the Tuscan heir. By 1722, the Electress was not even acknowledged as heiress, and Cosimo was reduced to a spectator at the conferences held to decide Tuscany's future. On 25 October 1723, six days before his death, Grand Duke Cosimo disseminated a final proclamation commanding that Tuscany remain independent: Anna Maria Luisa would succeed uninhibited to Tuscany after Gian Gastone, and the Grand Duke reserved the right to choose his successor. However, these portions of his proclamation were completely ignored, and he died a few days later.
The Ruspanti, Gian Gastone's decrepit entourage, loathed the Electress, and she them. Duchess Violante, Gian Gastone's sister-in-law, tried to withdraw the Grand Duke from the Ruspanti's sphere of influence by organising banquets. His conduct at the banquets was less than regal: he often vomited repeatedly into his napkin, belched, and regaled those present with socially inappropriate jokes. Following a sprained ankle in 1731, he remained confined to his bed for the rest of his life. The bed, which often smelled of faeces, was occasionally cleaned by Violante.
[Image caption: The family of Piero de' Medici portrayed by Sandro Botticelli in the Madonna del Magnificat.]
Later, in Rome, the Medici Popes continued the family tradition of patronizing artists. Pope Leo X chiefly commissioned works from Raphael. Pope Clement VII commissioned Michelangelo to paint the altar wall of the Sistine Chapel just before the pontiff's death in 1534. Eleanor of Toledo, princess of Spain and wife of Cosimo I the Great, purchased the Pitti Palace from Buonaccorso Pitti in 1550. Cosimo in turn patronized Vasari, who erected the Uffizi Gallery in 1560 and founded the Accademia delle Arti del Disegno ("Academy of the Arts of Drawing") in 1563. Marie de' Medici, widow of Henry IV of France and mother of Louis XIII, is the subject of a commissioned cycle of paintings known as the Marie de' Medici cycle, painted for the Luxembourg Palace by court painter Peter Paul Rubens in 1622–23.
Although none of the Medici themselves were scientists, the family is well known to have been the patrons of the famous Galileo Galilei, who tutored multiple generations of Medici children, and was an important figurehead for his patron's quest for power. Galileo's patronage was eventually abandoned by Ferdinando II, when the Inquisition accused Galileo of heresy. However, the Medici family did afford the scientist a safe haven for many years. Galileo named the four largest moons of Jupiter after four Medici children he tutored, although the names Galileo used are not the names currently used.
Piero the Unfortunate (9 April 1492 – 8 November 1494): Eldest son of Lorenzo the Magnificent. Overthrown when Charles VIII of France invaded; a full republic was restored, first under the theocracy of Girolamo Savonarola and then under the statesman Piero Soderini.
Alessandro il Moro (24 October 1529 – 6 January 1537): Cousin of Cardinal Ippolito de' Medici; illegitimate son of either Lorenzo II de' Medici, Duke of Urbino, or Pope Clement VII. Acting signore during the imperial Siege of Florence; made Duke in 1531.
Cosimo I (6 January 1537 – 21 April 1574): Distant cousin of Alessandro de' Medici; son of Giovanni dalle Bande Nere, of the Popolani line descended from Lorenzo the Elder, brother of Cosimo de' Medici; also a great-grandson of Lorenzo the Magnificent through his mother, Maria Salviati, and his grandmother, Lucrezia de' Medici. In 1569 he was made Grand Duke of Tuscany.
Jump up^ "Medici Family - - Encyclopædia Britannica". Encyclopædia Britannica. Retrieved 27 September 2009.
Jump up^ Silvia Malaguzzi, Botticelli. Artist's life, Giunti Editore, Florence (Italy) 2004, p. 33.
Jump up^ The name in Italian is pronounced with the stress on the first syllable /ˈmɛ .di.tʃi/ and not on the second vowel.How to say: Medici, BBC News Magazine Monitor. In American English, MED-uh-chee.
Jump up^ Padgett, John F.; Ansell, Christopher K. (May 1993). "Robust Action and the Rise of the Medici, 1400–1434". The American Journal of Sociology 98 (6): 1259–1319.doi:10.1086/230190. JSTOR 2781822.. This has led to much more analysis.
Jump up^ Machiavelli, Niccolò (1906). The Florentine history written by Niccolò Machiavelli, Volume 1. p. 221..
Jump up^ Bradley, Richard (executive producer) (2003). The Medici: Godfathers of the Renaissance (Part I) (DVD). PBS Home Video.
^ Jump up to:a b The Prince Niccolò Machiavelli. A Norton Critical Edition. Translated and edited by Rober M. Adams. New York. W.W. Norton and Company, 1977. p. viii (Historical Introduction).
Jump up^ 15th century Italy.
Jump up^ Hibbard, pp. 177, 202, 162.
Jump up^ Hibbert, The House of Medici: Its Rise and Fall, 153.
^ Jump up to:a b Hale, p. 150.
Jump up^ Hale, p. 151.
Jump up^ Austria and Spain were ruled by the House of Habsburg; the two are interchangeable terms for the Habsburg domains in the time period in question.
Jump up^ Hale, p. 158.
^ Jump up to:a b Hale, p. 160.
Jump up^ Hale, p. 165.
Jump up^ Strathen, p. 368.
Jump up^ Hale, p. 187.
Jump up^ Acton, p. 111.
^ Jump up to:a b Acton, p. 192.
Jump up^ Acton, p. 27.
Jump up^ Acton, p. 38.
^ Jump up to:a b Hale, p. 180.
Jump up^ Hale, p. 181.
Jump up^ Acton, p. 108.
Jump up^ Acton, p. 112.
Jump up^ Acton, pp. 140-141.
Jump up^ Acton, p. 185.
Jump up^ Acton, p. 182.
Jump up^ Acton, p. 243.
Jump up^ Strathern, p. 392.
Jump up^ Hale, p. 191.
Jump up^ Acton, p. 175.
Jump up^ Acton, pp. 275-276.
Jump up^ Acton, p. 280.
Jump up^ Acton, p. 297.
Jump up^ Acton, p. 188.
Jump up^ Acton, p. 301.
Jump up^ Acton, p. 304.
Jump up^ "Anna Maria Luisa de' Medici - Electress Palatine". Retrieved 3 September 2009.
Jump up^ Acton, p. 209.
Jump up^ Acton, p. 310.
Jump up^ Acton, p. 309.
Jump up^ Hibbert, p. 60.
Jump up^ Howard Hibbard, Michelangelo (New York: Harper and Row, 1974), p. 21.
Jump up^ Hibbard, p. 240.
Hi, Christina here with a very touchy subject: breastfeeding.
My husband and I welcomed our first baby into this world in May of 2006. Our, well, my decision was "duh, I'm breastfeeding," even though I had never been around anyone who breastfed. I had researched enough to know that it was the only choice for my baby and me.
Fast forward almost ten years: we now have (and will only have, lol) four beautiful, healthy bundles of joy, and sarcasm, bad moods, pickiness... you get the picture; breastfeeding can't do everything, ya know :-) Here is an easy-to-read article explaining the benefits, written by Leslie Burby: "101 Reasons to Breastfeed Your Child."
The American Academy of Pediatrics recommends breastfeeding: "Human milk is the preferred feeding for all infants, including premature and sick newborns... It is recommended that breastfeeding continue for at least the first 12 months, and thereafter for as long as mutually desired."
One of the most highly effective preventive measures a mother can take to protect the health of her infant is to breastfeed. However, in the United States, although most mothers hope to breastfeed, and 79% of babies start out being breastfed, only 19% are exclusively breastfed 6 months later. Additionally, rates are significantly lower for African-American infants.
The success rate among mothers who want to breastfeed can be greatly improved through active support from their families, friends, communities, clinicians, health care leaders, employers, and policymakers. Given the importance of breastfeeding for the health and well-being of mothers and children, it is critical that we take action across the country to support breastfeeding.
Breastfeeding has been linked to higher IQ scores in later childhood in some studies. What's more, the physical closeness, skin-to-skin touching, and eye contact all help your baby bond with you and feel secure. Breastfed infants are more likely to gain the right amount of weight as they grow rather than become overweight children. The AAP says breastfeeding also plays a role in the prevention of SIDS (sudden infant death syndrome). It's been thought to lower the risk of diabetes, obesity, and certain cancers as well, but more research is needed.
Are There Breastfeeding Benefits for the Mother?