proba | text
---|---
0.997486 |
I love to help people with cancer. There is a lot of information and support you may need, whether during chemo/radiation, afterwards, or if you opt not to pursue those treatments.
I also help those with other difficult illnesses, not limited to cancer patients.
|
0.999999 |
How did the gold standard work?
Bitcoin is a digital currency, which has monetary value and can be exchanged for fiat money. Most people are interested in knowing if it is backed by any commodity or currency of value and this is because this was how the financial system worked before now. Some time ago, nations used currencies backed by a valuable commodity such as gold or silver, but mostly gold, however, at some point in the 20th Century, most nations abandoned this standard for their monetary system.
Under the gold standard, money was backed by gold. In the earliest form, established in 1821 and known as the gold specie standard, a unit of money had the same value as a certain amount of gold coins actually in circulation. This monetary standard ended with World War I.
Following the end of the gold specie standard, another system was adopted in 1925 (first by Britain) in which gold coins no longer circulated; instead, the government issued gold bullion in exchange for the currency in circulation. This bullion carried the value of a certain number of gold coins, so buying it was equivalent to buying gold. This system was known as the gold bullion standard.
The third system, adopted after World War II, was a de facto gold standard: currencies were given fixed exchange rates against the currency of a country operating one of the first two systems above. In practice, all other currencies were tied to the U.S. dollar.
While the gold system was in use, its major advantages were long-term price stability, rare inflation (occurring mainly during wars), and fixed exchange rates.
The disadvantages of this system were plenty. The countries with larger deposits of gold were richer. Also, to correct economic downturns, the money supply must be increased; but since money is backed by gold, no increase in gold supply means no increase in money supply, which in turn limits economic growth.
Generally, this system hardly allowed even spread of wealth in a nation.
With all this considered, most nations of the world prefer to operate with economic freedom, using fiat currency (currency not backed by gold) so that policies can be made without the constraint of limited gold production. Under the gold standard, by contrast, no matter how educated, industrious, beautiful or privileged a nation was, it could not really be rich without large gold deposits.
Knowing the impact of operating the gold system, it is easy to see that Bitcoin is not backed by gold, as this is the era of economic liberty. Still, Bitcoin has some serious similarities with gold-backed money, which is why the question arises in the first place.
You have to mine gold, in order to create more gold-backed money and you have to spend resources in order to mine Bitcoin.
Nonetheless, Bitcoin is not backed by gold, and if it should be likened to any kind of money, it is most similar to fiat money, which has no intrinsic value as such but is generally agreed to be valuable.
In the 21st Century, money is circulated having no intrinsic value as such but is generally agreed on as being valuable. Therefore the markets and consensus determine the value of fiat money. Bitcoin operates in this manner and thrives. However, Bitcoin is regarded more as a stock which is usually exchanged for fiat currency such as U.S. Dollars, Euros and so on, maybe because of its quickly soaring value. It doesn’t take a genius to see that Bitcoin has combined the advantages of both fiat money and money backed by gold, hence becoming the new age currency.
|
0.958525 |
PHP has three different ways to connect to and interact with a MySQL database: the original MySQL extension (with functions), MySQL Improved (MySQLi, object-oriented), or PHP Data Objects (PDO, object-oriented).
They can't be mixed in the same script. The original MySQL extension is no longer actively developed and is not recommended for new PHP-MySQL projects.
The PHP documentation describes MySQLi as the preferred option recommended by MySQL for new projects.
Before you can access data in a database, you must create a connection to the MySQL server.
To connect to a MySQL server with PHP and MySQLi, create a mysqli object by passing your connection details to new mysqli().
- $servername - Specifies the server to connect to. If you pass the NULL value or an empty string "", the server will use the default value: "localhost"
- This connects to a database called "dbname", and stores the connection object as $conn.
If there is any error connecting, mysqli_connect_errno() returns the error code.
The connection will be closed automatically when the script ends. It's good practice to close the connection earlier, once the script no longer needs it; this releases the memory used by the connection. To close the connection explicitly, use the close() method of the MySQLi class.
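Putting these steps together, here is a minimal connection sketch. The server name, credentials and database name are placeholders, not values from this lesson; adjust them for your own server.

```php
<?php
// Connection details - all four values are placeholders.
$servername = "localhost";
$username   = "db_user";
$password   = "db_pass";
$dbname     = "tests";

// Create the connection object.
$conn = new mysqli($servername, $username, $password, $dbname);

// mysqli_connect_errno() returns 0 when the connection succeeded.
if (mysqli_connect_errno()) {
    die("Connection failed: " . mysqli_connect_error());
}

echo "Connected successfully";

// Release the connection as soon as it is no longer needed.
$conn->close();
?>
```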
To store data in MySQL, you need to use a database.
To get PHP to execute SQL instructions, first create a mysqli object with the connection to the server, then use the query() method of the MySQLi class.
- $sql_query - is a string with SQL instructions.
This method sends a query or command to the MySQL connection. It returns a result object or TRUE on success, and FALSE on failure.
- The extra instruction: DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci tells MySQL to create the database using the UTF-8 character set for encoding characters.
The example above attempts to create a database called "tests"; it outputs "Database "tests" successfully created" on success, or an error message on failure.
$conn->error (or, equivalently, mysqli_error($conn)) returns a string description of the last error, if one exists.
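A sketch of creating the database with query(), combining the pieces above (connection credentials are placeholders):

```php
<?php
// Placeholder credentials; adjust for your server.
$conn = new mysqli("localhost", "db_user", "db_pass");

if (mysqli_connect_errno()) {
    die("Connection failed: " . mysqli_connect_error());
}

// CREATE DATABASE with the UTF-8 charset options from this lesson.
$sql_query = "CREATE DATABASE tests DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci";

// query() returns TRUE on success for commands that produce no result set.
if ($conn->query($sql_query) === TRUE) {
    echo 'Database "tests" successfully created';
} else {
    echo "Error creating database: " . $conn->error;
}

$conn->close();
?>
```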
SQL keywords are case-insensitive, so you can write "CREATE DATABASE" or "create database". The names of tables and columns, however, are case-sensitive.
Once you have set a connection to a database, you can create tables in that database. Tables are the elements that store data in databases.
To create a table in MySQL, use the CREATE TABLE statement, then pass it to the query() method.
The data type specifies what type of data the column can hold. For a list of the MySQL data types, see the previous lesson: PHP MySQL Introduction, Data Types.
NOT NULL - Each row must contain a value for that column, null values are not allowed.
UNSIGNED - Can be used for number types, limits the stored data to positive numbers and zero.
AUTO_INCREMENT - MySQL automatically increases the value of the field by 1 each time a new record is added.
Each table should have a primary key column. Its value must be unique for each record in the table.
- This code will create a table called "users" in the "tests" database, with five columns (id, name, pass, email and reg_date), and sets the 'id' column as the primary key.
In the image below you can see the description of the "users" table.
In SQL statements it is recommended to enclose the names of tables and columns in backticks ` ` (note: not single quotes, but the character on the key next to the 1 on most keyboards). This is the correct syntax, but it is not strictly required.
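A sketch of creating the "users" table described above. The lesson's own CREATE TABLE statement is not shown here, so the column types below are plausible guesses, and the connection credentials are placeholders:

```php
<?php
// Placeholder credentials; connect directly to the "tests" database.
$conn = new mysqli("localhost", "db_user", "db_pass", "tests");

if (mysqli_connect_errno()) {
    die("Connection failed: " . mysqli_connect_error());
}

// Five columns as described in the lesson; `id` is the primary key.
// Note the backticks around table and column names.
$sql_query = "CREATE TABLE `users` (
    `id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
    `name` VARCHAR(50) NOT NULL,
    `pass` VARCHAR(255) NOT NULL,
    `email` VARCHAR(100) NOT NULL,
    `reg_date` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (`id`)
)";

if ($conn->query($sql_query) === TRUE) {
    echo 'Table "users" successfully created';
} else {
    echo "Error creating table: " . $conn->error;
}

$conn->close();
?>
```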
|
0.999843 |
How do I find the right manual?
First pick the bike category. Then select one of the three options in the second drop-down menu.
You can find manuals for specific bikes or parts, or through the keyword search.
|
0.952915 |
Harry Dobson was a coalminer from the Rhondda Valley. Dobson worked at the Blaenclydach Colliery. A strong trade unionist, he was a member of the National Union of Mineworkers and the Communist Party of Great Britain (CPGB).
Dobson was involved in the campaign against fascism. In his autobiography, My Generation (1972) Will Paynter describes a demonstration against Oswald Mosley and the British Union of Fascists in Tonypandy: "I can recall a visit by a henchman of Mosley named Moran, who came with his van to Tonypandy. There was a spontaneous massing of people as soon as he arrived and he was forced to leave, but not before the angry crowd did battle with this blackshirt troop. Some thirty-six Rhondda men and women were sent for trial, many of them receiving gaol sentences, including a very close friend of mine, Harry Dobson."
At its Congress in February 1937, the South Wales District of the Communist Party of Great Britain stated: "This Congress sends its greetings to the remaining Taff Merthyr prisoner, Will Richards and to our comrades of the Rhondda, Harry Dobson, Sam Paddock and Arthur Griffiths who are imprisoned for their anti-Fascist activities."
The Communist Party of Great Britain supported the Popular Front government during the Spanish Civil War. The idea of an international force of volunteers to fight for the Republic was initiated by Maurice Thorez, the French Communist Party leader. Joseph Stalin agreed and in September 1936 the Comintern began organising the formation of International Brigades. An international recruiting centre was set up in Paris and a training base at Albercete in Spain. Dobson, who was unemployed at the time, decided he would join the British Battalion.
Dobson avoided the French-Spanish border by sailing from Marseilles to Barcelona. However, the ship he was travelling in was torpedoed in June 1937. Some of the men drowned but Dobson was able to reach land and eventually joined up with the International Brigade in Spain.
Will Paynter, who also fought for the Popular Front government, described Dobson as "quiet and unassuming but a great comrade." In his book Miners Against Fascism (1984), Hywel Francis adds that "Harry Dobson of Mid-Rhondda who, amongst the Welshmen, best combined the qualities of courage under fire, coolness and shrewdness in leadership with a profound political understanding."
Dobson joined up with the British Battalion in June 1937 and he went straight into action at Brunete, where he was wounded. After returning to the front-line he replaced Wally Tapsell as battalion commissar. He took part in the battles on the Aragón front in September and October, and was then sent to the Officers' Training School in Tarazona. In January 1938 he was promoted sergeant and rejoined the battalion towards the end of the Teruel campaign.
Dobson took part in the fighting at Huesca, Gandesa and Ebro. The author of Wales and the Spanish Civil War (2004) points out: "Many incidents of the bloody and prolonged encounter between the English-speaking battalions and the 6th Bandera of the Spanish Foreign Legion, an elite Nationalist unit which was well dug in to the key pinnacles, have passed into legend. The hill was attacked repeatedly for four or five days, incurring severe losses, and all in vain. At the first onslaught, Dobson was badly wounded in the upper abdomen and fell alongside Morris Davies (Treharris). Both men sustained their injuries whilst attacking enemy positions without thought of their own safety, an action which deserves to be regarded as heroic. It was an advanced and exposed position and only one stretcher party was in the vicinity. The bearers chose to take Davies, whose wound was more immediately life-threatening. Dobson lay helpless and in agony within the fire field for some time before being rescued."
Morris Davies later recalled: "I was given orders to capture a ridge. As I advanced with six other men we were peppered with enemy fire. We would not have achieved our objective had not Harry Dobson of the Rhondda given us cover-fire. Harry and I were caught by shrapnel. He insisted that his wound was not as bad as mine and... that I should be taken back on a stretcher first."
(Photograph caption: Reginald Saxton while receiving support from Leah Manning.)
As there was only one available stretcher, it was some time before he was eventually taken back to base camp. Nan Green, Patience Darton and Leah Manning were all involved in nursing Dobson when he was wounded at the Battle of the Ebro. Green later recalled Manning holding his hand until he died. Manning later described what happened: "Patience (Darton) was just coming on duty for the night and as we went into the cave, the stretcher bearers brought in an English comrade from the British Battalion who was gravely wounded in the abdomen. He had had his spleen removed and Reggie Saxton had given him a blood transfusion. As I stood by he opened his eyes and spoke my name. I recognised him as a comrade whom I had met at a by-election in South Wales, a miner from Tonypandy named Harry Dobson. Dr. Jolly told me that it was not possible that he could live; in fact they thought only a few hours, so I determined to stay by him until the end. Actually, it was fifteen hours before he passed away but I did not leave him during that time and he seemed very happy to have me there."
Harry Dobson died on 28th July 1938. During the Battle of the Ebro the Nationalist Army had 6,500 killed and nearly 30,000 wounded. These were the worst casualties of the war, but the battle finally destroyed the Republican Army as a fighting force.
I can recall a visit by a henchman of Mosley named Moran, who came with his van to Tonypandy. There was a spontaneous massing of people as soon as he arrived and he was forced to leave, but not before the angry crowd did battle with this blackshirt troop. Some thirty-six Rhondda men and women were sent for trial, many of them receiving gaol sentences, including a very close friend of mine, Harry Dobson.
This Congress sends its greetings to the remaining Taff Merthyr prisoner, Will Richards and to our comrades of the Rhondda, Harry Dobson, Sam Paddock and Arthur Griffiths who are imprisoned for their anti-Fascist activities.
One of the comrades whom Griffiths relied on for advice and information was Harry Dobson. Indeed, Dobson and Griffiths acting together seem at times to have reached decisions (or at least recommendations) upon the fates of fellow volunteers, up to and including that of "execution". It could be argued that Dobson had acquired some moral authority for such a role, since he was the most unambiguous example of a Welsh warrior hero produced by "Spain". He was already an authentic veteran by the time Griffiths arrived in the battalion. It was Dobson who allegedly posed the resonant question upon release from gaol - after serving a sentence for riotous objection to a fascist meeting in Tonypandy - "How do I get to Spain?" Whether or not they picked up this announcement, the fascists seemed to know Dobson was coming. Appropriately enough, Harry's war began even before he got to Spain, since he was among the survivors after the troop-ship Ciudad de Barcelona was torpedoed by an Italian submarine.
Arriving at the battalion in June 1937, Dobson went straight into action at Brunete, where he was wounded. After recovery he succeeded Tapsell as battalion commissar. He took part in the battles on the Aragon front in September-October, and was then sent to the Officers' Training School in Tarazona. Early in 1938 he was promoted sergeant and rejoined the battalion towards the end of the Teruel campaign.
During the retreats of March he was captured, along with his whole platoon. Somehow they escaped their captors' clutches and, by travelling at night across country, safely regained the Republican lines. Dobson later - doubtless mainly in obedience to orders - published an account of the exploit which lacks credibility in detail, but may have served its morale-boosting purpose in encouraging the others.
Shortly after the Ebro offensive began, the XV Brigade was engaged in the siege of enemy positions in the hills overlooking the town of Gandesa. Divisional commander, General Walter, estimated that Gandesa's capture was essential to the further progress of the whole offensive; in turn, his staff decided that this success depended on the taking of the nearby summits, in particular the peak numbered "Hill 487".
Many incidents of the bloody and prolonged encounter between the English-speaking battalions and the 6th Bandera of the Spanish Foreign Legion, an elite Nationalist unit which was well dug in to the key pinnacles, have passed into legend. The hill was attacked repeatedly for four or five days, incurring severe losses, and all in vain. At the first onslaught, Dobson was badly wounded in the upper abdomen and fell alongside Morris Davies (Treharris). Both men sustained their injuries whilst attacking enemy positions without thought of their own safety, an action which deserves to be regarded as heroic. It was an advanced and exposed position and only one stretcher party was in the vicinity. The bearers chose to take Davies, whose wound was more immediately life-threatening. Dobson lay helpless and in agony within the firefield for some time before being rescued.
Eventually he was carried to Brigade HQ, a few kilometres away, in a complex of hillside caves near La Bisbal de Falset. Here a field medical team operated on him. Despite his sedated condition, Dobson later spotted and recognized Leah Manning, ex-MP for the Labour Party and Hon. Sec. of the Spanish Medical Aid Committee, whom he had heard speaking during the very antifascist rally at which he had been arrested. The surgeon, Reg Saxton, told Manning that Dobson could only last a few hours. At some point it was decided to give the patient a blood transfusion, even though his spleen had been destroyed and recovery was impossible. This technique was still a new one, adapted by Saxton from a patent system made famous by the Canadian, Dr Bethune. In a letter to a Welsh comrade, Manning later remarked that "I may be able to send you a photograph taken of him in bed. We wanted to have a photograph of someone having a transfusion, for propaganda in this country." In fact, perhaps as a result of this decision, Dobson survived for fourteen hours rather than the two predicted. Manning stayed at his side for most of this period and Dobson asked her to hold his hand. His last words were carefully chosen in order to emphasize the "Unity" between his own party and that of the Labour MP. "Comrade" - he said - "they will never keep back the progressive cause."
Harry Dobson of Mid-Rhondda who, amongst the Welshmen, best combined the qualities of courage under fire, coolness and shrewdness in leadership with a profound political understanding.... He served with distinction at Brunete, Quinto, Belchite, Mediana, Huesca, Teruel, Caspe and Gandesa. On one occasion he succeeded in escaping from captivity. Finally, having become a Political Commissar of the Major Attlee Company, he took part in the Ebro offensive during which he was fatally wounded on 28 July 1938.
The most important military experience for me was the Ebro offensive in the Summer of 1938. It was the biggest and last action for the International Brigades... I was given orders to capture a ridge. As I advanced with six other men we were peppered with enemy fire. We would not have achieved our objective had not Harry Dobson of the Rhondda given us cover-fire. Harry and I were caught by shrapnel. He insisted that his wound was not as bad as mine and... that I should be taken back on a stretcher first. I was taken back across the river to Cherte. Harry was removed later but he died.
I suppose that in all the history of modern warfare there has never been such a hospital. It is the safest place in Spain, beautifully wired for electric lights and with every kind of modern equipment. This hospital is evacuated twice a day. It is tragic to add that a large proportion of the evacuations are by death, because only the gravest cases are brought here from the front, and the only ones who remain for longer than the first day or two are abdominals and serious amputations.
Patience Darton and Ada Hodson were working there when we arrived. Patience was just coming on duty for the night and as we went into the cave, the stretcher bearers brought in an English comrade from the British Battalion who was gravely wounded in the abdomen. He had had his spleen removed and Reggie Saxton had given him a blood transfusion. As I stood by he opened his eyes and spoke my name. I recognised him as a comrade whom I had met at a by-election in South Wales, a miner from Tonypandy named Harry Dobson. Dr. Jolly told me that it was not possible that he could live in fact they thought only a few hours, so I determined to stay by him until the end. Actually, it was fifteen hours before he passed away but I did not leave him during that time and he seemed very happy to have me there.
|
0.999997 |
How do you setup public-key authentication on Unix?
Ensure public-key authentication is enabled and specify the keys to be offered during public-key authentication. If you have not already generated a user keypair, please see the Key Generation KB item before proceeding.
Ensure public-key authentication is allowed in the client config file. SSH Secure Shell checks configuration options in the following order:
- System-wide client configuration file: /etc/ssh2/ssh2_config
- User-specific client configuration file: $HOME/.ssh2/ssh2_config
- Command-line options
The last value obtained is the one used. This means that if you have the authority to change /etc/ssh2/ssh2_config, that is the best option, as it will allow all users the possibility to authenticate using public key. If you do not have the authority to change the system-wide config file, you can still edit your user-specific config file, $HOME/.ssh2/ssh2_config, to allow public-key authentication. Ensure the AllowedAuthentications keyword in the client config file contains at least 'publickey' as an allowed authentication method:
AllowedAuthentications publickey,password
Always place the least interactive method first. This usually means that if you wish to have multiple methods listed here, you should ensure that 'password' is last in the list.
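A minimal sketch of the relevant configuration, under the assumption that you are using SSH Secure Shell's ssh2-style files; the key filename shown is an example, and SSH Secure Shell typically lists keys to offer in a separate $HOME/.ssh2/identification file rather than in ssh2_config itself:

```
# $HOME/.ssh2/ssh2_config - enable public-key auth, password as fallback.
# 'publickey' first: the least interactive method should lead the list.
AllowedAuthentications    publickey,password

# $HOME/.ssh2/identification - keys offered during public-key auth.
# "id_dsa_2048_a" is an example key name from key generation.
IdKey    id_dsa_2048_a
```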
|
0.999894 |
Read the following tips from National Geographic on how to protect the ocean. Help protect the last healthy, undisturbed places in the ocean, help healthy reefs thrive, help unhealthy reefs recover and be more conscious of what ends up in the ocean.
1. Mind Your Carbon Footprint and Reduce Energy Consumption Reduce the effects of climate change on the ocean by leaving the car at home when you can and being conscious of your energy use at home and work. A few things you can do to get started today: Switch to compact fluorescent light bulbs, take the stairs, and bundle up or use a fan to avoid oversetting your thermostat.
|
0.976686 |
Gitanjali Kolanad was involved in the practice, performance, and teaching of bharata natyam for more than forty years. She performed in major cities in Europe, America and India. She collaborated with noted artists: director Phillip Zarrilli, video/installation artist Ray Langenbach, poet Judith Kroll, among others. Her work incorporated folk and ritual forms of dance, theatre and martial art forms from South India. Gitanjali's collection of short stories, "Sleeping with Movie Stars", was published in 2011 by Penguin India. She has written on aspects of Indian dance for major Indian publications. Now, she teaches the Indian martial art form of kalaripayat in Toronto.
Brandy Leary uses the body as a means of philosophical enquiry, creating contemporary dancetheatre that is at once visceral and transcendent. Brandy holds a BA Honours in Theatre with a specialization in Direction and Asian Theatre from York University. She has lived between Canada and India for the past 14 years training, collaborating and creating in the traditional Indian performing languages of Seraikella and Mayurbhanj Chhau (dance), Kalarippayattu (martial art) and Rope Mallakhamb (aerial rope).
Brandy Leary founded Anandam Dancetheatre Productions (www.anandam.ca) in 2002 and is its Artistic Director. She has been the resident choreographer at the Bata Shoe Museum in Toronto since 2010.
This will be presented as a lecture with video clips of Leary's recent works, 'Confluence' and 'Precipice', contrasted with her performance of a traditional item from the Chhau repertoire.
In 1970, when I first went from Canada to study bharata natyam in India, the dance was described as a revival of a two thousand year old tradition of temple dance going back to the Natya Shastra. This story remains prevalent and is the most widely accepted version of the history of bharata natyam, the descriptor 'two thousand year old dance form' appearing again and again in publicity material, reviews, grant applications, etc to this day, despite lack of historical veracity.
This story gives audiences with no specialist knowledge of the dance form reasons to suspend aesthetic judgments. Diasporic practitioners and audiences are deeply attached to the notion of 'ancient' and 'tradition' in relation to Indian dance, and find it hard to explain or value the art form without reference to its age or adherence to a 'tradition'.
What makes contemporary Indian dance 'Indian' dance? I look at this question through the work of Chhau dancer Brandy Leary, who creates contemporary work that doesn't look 'Indian' in any superficial way, and is not buttressed by claims of 'authenticity', but nevertheless embodies 'Indian' concepts of dance and theatre, relationships of dancer to audience, and conforms to 'Indian' aesthetic principles that go back to the Natya Shastra.
|
0.960297 |
Sun and sand top the list of favorite summer vacations. But it’s hard to feel fully free when you’re dragging too much stuff around. Let other people sweat the small stuff, and streamline your list of hot weather must-haves so you look cool, even when the temperature isn’t. Here’s what to pack for the beach — without overpacking.
Overall Plan: Light and breezy items should dominate your wardrobe choices. While you want to be comfortable, skip the faded and raggedy T-shirts and instead aim for a summery look that’s polished, not dumpy. And while you may want to concentrate on getting there, make sure you spend some time thinking about how you’ll transport wet and sandy items back home. There’s nothing worse than a suitcase full of sand.
What’s Essential? You might hate shopping for them, but no beach vacation is complete without a swimsuit. Buy more than one so there’s always something dry to wear, and bring them along in your carry-on. Women should pack cute cover-ups, both to wear on the beach when it gets too hot and to walk along the boardwalk without too much exposure. In the evenings, costume jewelry can add just enough glamour to a sundress. Men should bring a lightweight button-down shirt for nicer restaurants; Tommy Bahama is always an upscale choice. For your feet, bring flip-flops, sandals or canvas tennis shoes, depending on the type of beach you’re on.
Choose a mesh or nylon beach bag with a distinctive pattern so it’s easy to spot among the crowds, and make sure it has inside pockets, preferably waterproof, to store valuables and small electronics such as your cell phone. Speaking of gadgets, make sure that they’re waterproof or have protective covers. A soft-sided insulated tote for drinks and snacks is easier to carry than a bulky cooler. Pack some disposable wipes for quick clean-up. Plastic bags can be your best friend: Use them to bring food to the beach, and then carry wet swimsuits and towels on the way home.
Secret Weapon: If you wear corrective lenses and your beach sessions involve exploring reefs for colorful fish, you’ll want to invest in a prescription snorkel mask. Having your own mask can also prevent communicable diseases (I once got a wicked case of pinkeye from a tainted snorkel mask in Costa Rica).
Safety First: No matter how good it feels, the sun is not your friend. Load up on sun protection with a strong sunscreen that you can reapply often. If you’re traveling to your destination by plane, look into sunscreen towelettes that won’t explode or leak. When you’re lathering up, don’t forget your face. Add lip balm, and wear sunglasses and a hat.
Leave at Home: Being on the beach is an excuse to cut loose; avoid bringing clothing that’s too stuffy or structured. If you’re staying at a hotel, find out ahead of time if towels and other beach amenities are included. Many vacation rentals also have “house” items such as camp chairs and barbecue grills so there’s no need to bring your own.
|
0.998767 |
Learn which safe practices protect against food safety risks for each service style. Each service style (the way food is served) creates its own food safety risks. Service styles include cafeteria, buffet, family, potluck, waiter/waitress, temporary and delivered. Factors such as food temperature, personal hygiene, dirty dishes and insects can all cause problems.
Cafeteria style: food prepared in large quantities and served immediately to guests in line.
Buffet style: guests serve themselves from large serving containers onto individual plates.
Potluck (a buffet without hot/cold holding units): food prepared in individual kitchens and brought to the dinner.
Temporary: food prepared and served from a temporary site such as at a fair, booth or festival.
Waiter/waitress: food brought to a table by a server.
Family style: bowls of food placed on table where guests are seated; guests serve themselves.
Home-delivered meals/carryout/catering: food is prepared at a kitchen and then delivered or served at another location.
|
0.998812 |
These soft skills are seen as incredibly valuable transferable skills to employers. Use them to your advantage. I talk about how to do it in this LiveCareer article: Margaret Buj is a career and interview coach who specializes in helping professionals get hired, promoted, and paid more. Margaret has 12 years of experience in recruiting for global technology and e-commerce companies across Europe and the United States. Over the past 11 years, she has successfully coached hundreds of people to the jobs and promotions they were seeking, with a focus on mastering interviewing skills; identifying unique selling points; and creating self-marketing strategies that enhance a reputation with a consistent online and offline brand presence.
You can learn more about Margaret, her services, and her award-winning blog here. Proofread your resume and cover letter. Put them down for a few hours, come back, and proofread again. Then, get a friend or family member with a good eye to proof them for you.
Format your cover letter and resume for readability! This means white space, reasonable margins, and bullet points. Finally, using a professional cover letter builder can make it super simple to create effective cover letters for new grads. The 5 Sections of a New Grad Cover Letter: a well-formatted cover letter is critical to new grads getting noticed by hiring managers.
Using a cover letter builder or cover letter templates are both great ways to ensure you include all of the essential elements. Do your research to find the name of the hiring manager. Diligent high school student 3. Aiming to use my abilities to successfully fulfill the cashier position at your store. My enthusiasm to learn new skills quickly will help your company meet its milestones. Earnest high school student with strong interpersonal and management skills.
Seeking to leverage my experience in student government and theatre to fulfill the duties of a customer service representative at your company. My abilities to cooperate with others and manage conflicts will be an asset to your company. Committed high school student 3. Aiming to utilize my experience as a member of the basketball team and honors society to effectively satisfy the responsibilities of administrative assistant at your company.
I am a driven worker who can meet deadlines and is eager to help your company succeed. They are free to download, and will help you land interviews faster. Applying for a Janitorial Position Energetic and passionate entry-level professional seeking a full-time janitorial position. Leadership, Management, Organization Sports: Engaged, Active, Friendly, Enthusiastic Academics: Analytical, Hard Working, Fast Learner.
Customer Service Resume Objective Example Earnest high school student with strong interpersonal and management skills. For a slightly different approach to starting off your resume, check out our expert guide on how to write a professional profile. Teacher Resume Objective Sample. Admin Assistant Resume Objective. Nursing Resume Objective Example. Medical Assistant Resume Objective. Avoid misspelling these credentials, as they are easy to mistype.
For recent graduates, the education section of the resume is extremely important. Some candidates might not possess internship experience, and all they have are their degrees and certifications. Recruiters usually short-list candidates for entry-level and internship positions on the basis of their educational qualifications. Therefore, candidates must be very careful while adding and organizing educational information on their resumes. Grads are in the process of starting careers as professionals and should not underestimate themselves.
It reveals they are eager to become professionals, and possess an innate love for specific career fields.
Here's an eye-catching resume for recent grads. Approximately million students in the U.S. graduated from college this year. Some are going into their post-college job search with an extensive.
Resume examples for a recent college graduate, what to include on your resume, as well as tips and advice for writing a resume as a college graduate. The Balance Careers College Graduate Resume Example and Writing Tips. Resume Writing for the Recent College Graduate. of a resume when first writing a professional resume. A challenging position where I can help people and help the company succeed. OBJECTIVE. To obtain an entry-level position in a Fortune company. Education.
Below is a resume template for college students and college graduates, as well as advice on how to use the template. The resume template lists the information you need to include on your resume when you're a college student or recent graduate. A large collection of real, quality new college grad resume and cover letter samples for improving your job, internship, grad-school search. No cost.
|
0.999988 |
When in Dacia, do as the Romans?
What does Dacia have to do with Transylvania?
Dacia was the ancient area of Transylvania and present-day Romania; while I'm not entirely certain of its extent within present-day Romania (help is greatly needed), it is interesting to see how much the land was valued back then.
By the Turks and Mongols later, but also by the Roman Empire, which wanted Dacia to be a part of the territory to the east of Rome. After many battles, it became a province of the Roman Empire for a time.
According to some books, many of the Roman settlers left, some stayed, and for the most part, the Dacians remained there.
|
0.935124 |
Android 4.0 was focused on simplifying and modernizing the overall Android experience around a new set of human interface guidelines. As part of these efforts, it introduced a new visual appearance codenamed "Holo", which is built around a cleaner, minimalist design, and a new default typeface named Roboto. It also introduced a number of other new features, including a refreshed home screen, near-field communication (NFC) support and the ability to "beam" content to another user using the technology, an updated web browser, a new contacts manager with social network integration, the ability to access the camera and control music playback from the lock screen, visual voicemail support, face recognition for device unlocking ("Face Unlock"), the ability to monitor and limit mobile data usage, and other internal improvements.
Android 4.0 received positive reviews from critics, who praised the cleaner, revamped appearance of the operating system in comparison to previous versions, along with its improved performance and functionality. However, critics felt that some of Android 4.0's stock apps were still lacking in quality and functionality in comparison to third-party equivalents, and regarded some of the operating system's new features, particularly the "face unlock" feature, as gimmicks.
As of October 2018, statistics issued by Google indicate that 0.3% of all Android devices accessing Google Play run Ice Cream Sandwich.
Following the tablet-only release "Honeycomb", it was announced at Google I/O 2011 that the next version of Android, code named "Ice Cream Sandwich" (ICS), would emphasize providing a unified user experience between both smartphones and tablets. In June 2011, details also began to surface surrounding a new Nexus phone by Samsung to accompany ICS, which would notably exclude hardware navigation keys. Android blog RootzWiki released photos in August 2011 showing a Nexus S running a build of ICS, depicting a new application menu layout resembling that of Honeycomb, and a new interface with blue-colored accenting. An official launch event for Android 4.0 and the new Nexus phone was originally scheduled for October 11, 2011, at a CTIA trade show in San Diego. However, out of respect for the death of Apple co-founder Steve Jobs, Google and Samsung postponed the event to October 19, 2011, in Hong Kong. Android 4.0 and its launch device, the Galaxy Nexus, were officially unveiled on October 19, 2011. Andy Rubin explained that 4.0 was intended to provide an "enticing and intuitive" user experience across both smartphones and tablets.
Matias Duarte, Google's vice president of design, explained that development of Ice Cream Sandwich was based around the question "What is the soul of the new machine?"; user studies concluded that the existing Android interface was too complicated, and thus prevented users from being "empowered" by their devices. The overall visual appearance of Android was streamlined for Ice Cream Sandwich, building upon the changes made on the tablet-oriented Android 3.0, his first project at Google; Duarte admitted that his team had cut back support for smaller screens on Honeycomb to prioritize sufficient tablet support, as he wanted Android OEMs to "stop doing silly things like taking a phone UI and stretching it out to a 10-inch tablet." Judging Android's major competitors, Duarte felt that the interface of iOS was too skeuomorphic and kitschy, Windows Phone's Metro design language looked too much like "airport lavatory signage", and that both operating systems tried too hard to enforce conformity, "[without] leaving any room for the content to express itself." For Ice Cream Sandwich, his team aimed to provide interface design guidelines which would evoke a modern appearance, while still allowing flexibility for application developers. He characterized the revised look of Ice Cream Sandwich as having "toned down the geeky nerd quotient" in comparison to Honeycomb, which carried a more futuristic appearance that was compared by critics to the aesthetics of Tron.
In January 2012, following the official launch of Ice Cream Sandwich, Duarte and Google launched an Android Design portal, which features human interface guidelines, best practices, and other resources for developers building Android apps designed for Ice Cream Sandwich.
The Galaxy Nexus was the first Android device to ship with Android 4.0. Android 4.0.3 was released on December 16, 2011, providing bug fixes, a new social stream API, and other internal improvements. The same day, Google began a rollout of Ice Cream Sandwich to the predecessor of the Galaxy Nexus, the Nexus S. However, on December 20, 2011, the Nexus S roll-out was "paused" so the company could "monitor feedback" related to the update.
Google Play Services support for 4.0 ended in February 2019.
The user interface of Android 4.0 represents an evolution of the design introduced by Honeycomb, although the futuristic aesthetics of Honeycomb were scaled back in favor of a flatter and cleaner feel with neon blue accenting, hard edges, and drop shadows for depth. Ice Cream Sandwich also introduces a new default system font, Roboto; designed in-house to replace the Droid font family, Roboto is primarily optimized for use on high-resolution mobile displays. The new visual appearance of Ice Cream Sandwich is implemented by a widget toolkit known as "Holo"; to ensure access to the Holo style across all devices—even if they use a customized interface skin elsewhere, all Android devices certified to ship with Google Play Store (formerly Android Market) must provide the capability for apps to use the unmodified Holo theme.
As with Honeycomb, devices can render navigation buttons—"Back", "Home", and "Recent apps"—on a "system bar" across the bottom of the screen, removing the need for physical equivalents. The "Menu" button that was present on previous generations of Android devices is deprecated, in favor of presenting buttons for actions within apps on "action bars", and menu items which do not fit on the bar in "action overflow" menus, designated by three vertical dots. Hardware "Search" buttons are also deprecated, in favor of search buttons within action bars. On devices without a "Menu" key, a temporary "Menu" key is displayed on-screen while running apps that are not coded to support the new navigation scheme. On devices that use a hardware "Menu" key, action overflow buttons are hidden in apps and are mapped to the "Menu" key.
The default home screen of Ice Cream Sandwich displays a persistent Google Search bar across the top of the screen, and a dock across the bottom containing the app drawer button in the middle, and four slots for app shortcuts alongside it. Folders of apps can be formed by dragging an app and hovering it over another. The app drawer is split into two tabs: one for apps, and the other for widgets that can be placed on home screen pages. Widgets themselves can be resized and can contain scrolling content. Android 4.0 contains an increased use of swiping gestures; apps and notifications can now be removed from the recent apps menu and dismissed from the notifications area by sliding them away, and a number of stock and Google apps now use a new form of tabs, in which users can navigate between different panes by either tapping their name on a strip, or swiping left and right.
The phone app was updated with a Holo design, the ability to send pre-configured text message responses in response to incoming calls, and visual voicemail integration within the call log display. The web browser app incorporates updated versions of WebKit and V8, supports syncing with Google Chrome, has an override mode for loading a desktop-oriented version of a website rather than a mobile-oriented version, as well as offline browsing. The "Contacts" section of the phone app was split off into a new "People" app, which offers integration with social networks such as Google+ to display recent posts and synchronize contacts, and a "Me" profile for the device's user. The camera app was redesigned, with a reduction in shutter lag, face detection, a new panorama mode, and the ability to take still photos from a video being recorded in camcorder mode. The photo gallery app now contains basic photo editing tools. The lock screen now supports "Face Unlock", includes a shortcut for launching the camera app, and can house playback controls for music players. The keyboard incorporates improved autocomplete algorithms, and improvements to voice input allow for continuous dictation. The ability to take screenshots by holding down the power and "Volume down" buttons together was also added.
On devices supporting near-field communication (NFC), "Android Beam" allows users to share links to content from compatible apps by holding the back of their device up against the back of another NFC-equipped Android device, and tapping the screen when prompted. Certain "System" apps (particularly those pre-loaded by carriers) that cannot be uninstalled can now be disabled. This hides the application and prevents it from launching, but the application is not removed from storage. Android 4.0 introduced features for managing data usage over mobile networks; users can display the total amount of data they have used over a period of time, and display data usage per-app. Background data usage can be disabled globally or on a per-app basis, and a cap can be set to automatically disable data if usage reaches a certain quota as calculated by the device.
Android 4.0 inherits platform additions from Honeycomb, and also adds support for ambient temperature and humidity sensors, Bluetooth Health Device Profile, near-field communication (NFC), and Wi-Fi Direct. The operating system also provides improved support for stylus and mouse input, along with new accessibility, calendar, keychain, spell checking, social networking, and virtual private network APIs. For multimedia support, Android 4.0 also adds support for ADTS AAC, Matroska containers for Vorbis and VP8, WebP, streaming of VP8, OpenMAX AL, and HTTP Live Streaming 3.0.
Android 4.0 was released to positive reception: Ars Technica praised the Holo user interface for having a "sense of identity and visual coherence that were previously lacking" in comparison to previous versions of Android, also believing that the new interface style could help improve the quality of third-party apps. The stock apps of Android 4.0 were also praised for having slightly better functionality in comparison to previous versions. Other features were noted, such as the improvements to text and voice input, along with the data usage controls (especially given the increasing use of metered data plans), and its overall performance improvements in comparison to Gingerbread. However, the Face Unlock feature was panned for being an insecure gimmick, and although providing an improved experience over previous versions, some of its stock applications (such as its email client) were panned for still being inferior to third-party alternatives.
Engadget also acknowledged the maturing quality of the Android experience on Ice Cream Sandwich, and praised the modern feel of its new interface in comparison to Android 2.3, along with some of the new features provided by Google's stock apps and the operating system itself. In conclusion, Engadget felt that Android 4.0 was "a gorgeous OS that offers great performance and—for the most part—doesn't feel like a half-baked effort." However, Engadget still felt that some of Android 4.0's new features (such as Face Unlock) had a "beta feel" to them, noted the lack of Facebook integration with the new People app, and that the operating system was still not as intuitive for new users as its competitors.
PC Magazine acknowledged influence from Windows Phone 7 in the new "People" app and improved benchmark performance on the web browser, but considered both Android Beam and Face Unlock to be gimmicks, and criticized the lack of support for certain apps and Adobe Flash on launch.
^ "android-45 5.0.2_r2.1 – platform/build – Git at Google". android.googlesource.com. June 6, 2012. Retrieved October 15, 2017.
^ "Dashboards | Android Developers". developer.android.com. Retrieved 1 July 2018.
^ "Tasty Ice Cream Sandwich details drip out of redacted screenshots". Ars Technica. Retrieved 24 July 2014.
^ "Google announces Android Ice Cream Sandwich will merge phone and tablet OSes". Ars Technica. Retrieved 24 July 2014.
^ "Leaked specs for beastly Google Nexus 4G may win carriers' hearts". Ars Technica. Retrieved 24 July 2014.
^ "Android Ice Cream Sandwich event moved to October 19 in Hong Kong". Ars Technica. Retrieved 24 July 2014.
^ a b Meyer, David (19 October 2011). "Google unveils Ice Cream Sandwich Android 4.0". ZDNet. Retrieved 24 July 2014.
^ a b c d e f "Exclusive: Matias Duarte on the philosophy of Android, and an in-depth look at Ice Cream Sandwich". The Verge. Vox Media. Retrieved November 28, 2011.
^ "Google launches style guide for Android developers". Ars Technica. Retrieved 25 July 2014.
^ "Android 4.0.3 Platform and Updated SDK tools". Android Developers Blog. December 16, 2011. Retrieved January 4, 2012.
^ "Nexus S Ice Cream Sandwich update pushed back". TechRadar. Retrieved 25 July 2014.
^ "Samsung Nexus S updates to Ice Cream Sandwich starting today". CNET. Retrieved 25 July 2014.
^ "Google announces Android 4.0.4". The Inquirer. March 29, 2012. Retrieved March 31, 2012.
^ "Google Play services drops support for Android Ice Cream Sandwich". VentureBeat. December 7, 2018. Retrieved December 8, 2018.
^ a b c d e f g Amadeo, Ron (16 June 2014). "The history of Android: The endless iterations of Google's mobile OS". Ars Technica. Condé Nast. Retrieved 6 July 2014.
^ "Google requiring default 'Holo' theme in Android 4.0 devices for Android Market access". The Verge. Retrieved 25 July 2014.
^ "Android 4.0 Ice Cream Sandwich SDK released with new features for developers". The Verge. Retrieved 25 July 2014.
^ "Say Goodbye to the Menu Button". Android developers blog. Retrieved 25 July 2014.
^ "Android menu button now on by default on all device with KitKat". PhoneArena.com. December 9, 2013. Retrieved February 9, 2014.
^ "Android 4.0 Ice Cream Sandwich complete guide". SlashGear. Retrieved 25 July 2014.
^ a b c d "Ice Cream Sandwich". Android developers portal. Retrieved 25 July 2014.
^ a b c "Unwrapping a new Ice Cream Sandwich: Android 4.0 reviewed". Ars Technica. 19 December 2011. Retrieved 25 July 2014.
^ a b "Android 4.0 Ice Cream Sandwich review". Engadget. Retrieved 25 July 2014.
^ "Google Android 4.0 "Ice Cream Sandwich"". PC Magazine. Retrieved 25 July 2014.
This page was last edited on 18 March 2019, at 12:27 (UTC).
|
0.950157 |
Thomas Hill (1867-1944), engineer, was born on 6 June 1867 at Wednesbury, Staffordshire, England, son of James Hill, bricklayer, and his wife Hannah, née Hawkins. Educated at the Grammar School, Walsall, he went to the United States of America at 16 before migrating to Victoria in February 1886. He became a cadet with Thomas Fender, district surveyor at Geelong, then worked at Collingwood and in the Croajingalong district. From about 1890 he was employed by the Department of Victorian Water Supply. He obtained a surveyor's certificate in 1894. In January 1896 he joined the Melbourne and Metropolitan Board of Works as a draughtsman; he was promoted to surveyor's assistant next year and engineering assistant in 1898. On 15 April 1897 at Albert Park he married Annie Mabel Thompson with Anglican rites.
In 1902 Hill joined the newly formed Commonwealth Department of Home Affairs as a draughtsman in the public works branch, Victoria. Quickly promoted, he became works director for Victoria in 1908 and in 1914 engineer in the central administration. His activities extended to the projected Federal capital at Canberra—a site of which he personally disapproved, preferring Albury. He was on the board which reported on the 'premiated' designs for the city and was concerned with the earliest engineering works, including the Cotter dam which he later claimed to have designed. Although closely questioned in 1916 and 1917 by the commission investigating Walter Burley Griffin's complaints of obstruction in the implementation of his plan for Canberra, Hill himself was not criticized. He became chief engineer of the department (now Works and Railways) in 1923 and visited the United States and England in 1927 to report on road construction and maintenance before the Federal Aid Roads Act (1931) was framed. In 1929 he was made director-general of works in Canberra. He returned to his family in Melbourne in September 1931 for furlough before his official retirement in June 1932.
From February 1918 Hill was deputy commissioner under the River Murray Waters Act (1915). An excellent chairman, he presided over meetings which approved the Hume and Lake Victoria reservoirs and the locks and weirs which made 600 miles (966 km) of the River Murray navigable, as well as the Euston, Torrumbarry and Yarrawonga weirs, and the important barrages at the Murray mouth. His last concern was the project to enlarge the Hume reservoir to two million acre feet (2,466,964 megalitres), and on his deathbed he tried to ensure that his successor should be someone not opposed to the scheme.
Hill was a member from 1896 and a past president of the Victorian Institute of Surveyors and a life member of the Victorian Institute of Engineers, having joined in 1897 and held the presidency in 1925-26. His membership of the Victorian branch of the Institute of Municipal Engineers entitled him in 1926 to associate membership of the Institution of Engineers, Australia; he became a member in 1930. He was appointed O.B.E. in 1928. A sound administrator, Hill is remembered as being friendly but discreet, tall, well built and strong even in his declining years, with a good head for whisky. He died on 12 May 1944 in Melbourne, and was cremated. His wife had died the previous year; his three sons survived him.
Ronald McNicoll, 'Hill, Thomas (1867–1944)', Australian Dictionary of Biography, National Centre of Biography, Australian National University, http://adb.anu.edu.au/biography/hill-thomas-6672/text11503, published first in hardcopy 1983, accessed online 20 April 2019.
|
0.920062 |
What's the current interest rate for personal loans? https://territorioabierto.jesuitas.cl/de-rodillas/ para que sirve el cefaclor capsulas 500 mg Obama said trading cuts to his signature healthcare programin exchange for an increase in the nation's borrowing limit isnot an option. "What I haven't been willing to negotiate, andwill not negotiate, is on the debt ceiling," he said.
|
0.995579 |
How did the architects renovate a single-family residence into an East Asian-inspired spa sanctuary while remaining thematically appropriate?
AB design studio took cues from Japanese contemporary design and southern Californian sensibilities to create a spa-like single-family residence in the northern hills of Santa Barbara, CA. The team worked closely with landscape architects to ensure that the site blended appropriately with the interior and exterior of the home.
By opening the interiors toward signature vistas through a series of sliding doors, the architects maximized views while blurring the line between the indoors and outdoors. Luxurious and well-appointed bathroom spaces serve as areas of sanctuary and retreat for the client.
Outdoor decks, designed as spaces for respite and relaxation, accentuate the perimeter and sit atop a Japanese Zen rock garden. Dotting the property is a series of koi ponds, surrounded with natural materials, and cobblestone and bamboo pathways.
|
0.93851 |
There are a number of methods of stone purification. The simplest is to place the stones in full sunlight for a day, three days, or even a week. The Sun's rays do the work here, burning away the unnecessary energies.
Place the stones in direct sunlight. An inside window ledge isn't as good as an outdoor location because window glass blocks some of the Sun's rays. Remove the stones each day at dusk.
Some stones will be 'clear' after a day's soaking up the rays. Others will need longer periods of time. Check the stones daily and sense the energies within them by placing them in your receptive hand. If the vibrations are regular, healthy, the cleansing has been successful.
A second method is somewhat more difficult. In this case, running water is the tool. Place the stones in moving water and leave them there for a day or two.
If you happen to have a river or stream running near your property, this is ideal. Place the stones in a net bag or devise some other method to ensure that they don't wash away in the water. Leave them overnight in the water, which gently washes away the impurities.
The third main technique is governed by the powers of the Earth. Bury the stone in the ground for a week or so, then check to see if it has been purified. If it has, wash or wipe it off and your magic can begin.
These are all natural purifications, performed with the energies of the elements. If you can't do them, however, there is another method, a ritual of purification, which can be performed in your own home. Perform this rite on your altar, if you have one, or on any table. It is best done at sunrise or during the day.
Fill a basin with pure water and place this to the West on the table or altar.
Next, light a red candle and set this to the South.
Light some incense and place this to the East.
Finally, place a dish or flowerpot filled with freshly dug earth to the North on the altar.
In between all these objects set the stone to be purified.
When all is readied, still your mind and pick up the stone in your projective hand.
Turn your attention toward the bowl of earth.
Place the stone on it and cover with fresh earth.
Say something to the effect of: I purify you with Earth!
Leave the stone there for a few minutes, all the while visualizing the earth absorbing the stone's impurities. Then remove it, dust it clean, and hold it in the incense smoke.
Pass it nine times through the smoke, from the right to the left, saying words like these: I purify you with Air!
See the smoke wafting away the disturbing energies.
Next, quickly pass the stone through the candle's flame several times, saying: I purify you with Fire!
The fire burns away all negativity.
Now place the stone in the water and say this or your own words: I purify you with Water!
Visualize the water washing it clean. Leave the stone in the water for a time, then dry it with a clean cloth and hold it in your receptive hand.
Is the stone clean? If not, repeat this simple ritual as many times as necessary, until you are sure it has done its work. Afterward, store the stone in a special place. It is ready for use in magic.
|
0.999867 |
1. Certification of the authenticity of the signature of citizens of EU member states by the Services of the Ministry of Economy and Finance
English - Information is provided on the certification of the authenticity of the signature of citizens of European Union member states by the Services of the Ministry of Economy and Finance (document no. 9100/11-502271/25-6-2007 of the Aliens Directorate of the former Ministry of Public Order).
Illegal trafficking and trade in human beings, mainly for the purpose of their sexual exploitation, is a phenomenon of international dimensions. The victims are mainly women, potential immigrants, who are mostly afflicted with poverty and unemployment, and seeking employment in other countries is vital for their survival, as well as for the survival of their families. Trafficking in human beings is a contemporary form of slavery; it is degrading to human dignity and violates human rights.
The attached document includes the Codification of the Legislation on the entry, residence and social integration of third-party nationals on Greek territory.
Information is given on the issue of long-term visas (national visas - type D) for the entry and stay in Greek territory of foreigners within the framework of national refugee policy. All interested parties may be informed of the conditions that they have to meet and the supporting documents that they have to present in order to receive the long-stay visa.
|
0.930287 |
Using ideas from all three texts, address the following: How does not-knowing shape our understanding of the universe?
Science is commonly understood to be about what we know, and can know. In fact, the word itself derives from the Latin scientia, meaning knowledge. The authors of our three texts certainly revere science for what it has contributed to our understanding of the world, from an accurate measurement of the earth in the third century B.C. to the establishment of a sun-centered cosmology in the sixteenth century A.D. to, more recently, a plausible explanation for the extinction of the dinosaurs. Yet each of our authors also points to areas beyond the reach of knowledge. Hawking and Mlodinow admit that there is no way of knowing whether a given model of the world, be it M-theory, string theory, or even some alternative reality, is more real than another. Lightman demonstrates that it is impossible to know any part of the universe that is more than 13.7 billion light-years away. And even Gould ultimately values science not so much as a repository of known things, but as a fruitful mode of inquiry.
|
0.948909 |
How do I reduce the amount of spam coming to my email account?
Then, you can delete them.
For this, you will need to enable SpamAssassin by clicking the "Enable SpamAssassin" button via cPanel -> Email -> SpamAssassin. Then, set the score threshold you want to use; the default is 5. SpamAssassin runs a series of checks against each incoming email and adds points for every spam-like characteristic it finds. When a message's total score reaches or exceeds the threshold you set (for example, a score of 6 against the default of 5), the email is automatically marked as spam. Marked emails can be discarded by enabling "Spam Auto Delete" in SpamAssassin.
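The scoring logic described above can be sketched in a few lines. This is a simplified illustration, not SpamAssassin's actual implementation; the rule names and point values here are invented for the example.

```python
# Sketch of SpamAssassin-style threshold scoring: each matched rule
# adds points to a message's score, and a message whose total meets
# or exceeds the threshold is treated as spam.
RULES = {
    "suspicious_subject": 3.0,   # hypothetical rule names and scores
    "missing_date_header": 1.5,
    "sender_on_blacklist": 2.0,
}

THRESHOLD = 5.0  # cPanel's default SpamAssassin score


def classify(matched_rules):
    """Sum the points of the rules a message matched; spam if >= threshold."""
    total = sum(RULES[name] for name in matched_rules)
    return total, total >= THRESHOLD


print(classify(["suspicious_subject", "sender_on_blacklist"]))  # (5.0, True)
print(classify(["missing_date_header"]))                        # (1.5, False)
```

Note that a message exactly at the threshold counts as spam, which is why lowering the score setting makes filtering more aggressive.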
You can also configure SpamAssassin for various checks (blacklisting/whitelisting email IDs, etc.).
For blocking emails from a specific email id, like [email protected]: Select "From and equals" from the dropdown menu and put [email protected] in the blank field with the appropriate action.
For blocking emails that have keywords like "watch", "watches", "Watches", WaTches", etc.: Select "Any Header and matches regex options" from the dropdown menu. Put the appropriate regular expression in the blank field.
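As a rough illustration of such a pattern (the exact syntax the cPanel filter accepts may differ slightly), a single case-insensitive regular expression can cover all of the "watch" variants above:

```python
import re

# (?i) makes the match case-insensitive, \b anchors whole words,
# and (es)? optionally matches the plural form.
pattern = re.compile(r"(?i)\bwatch(es)?\b")

headers = [
    "Subject: Cheap WaTches for sale",
    "Subject: Luxury watch replicas",
    "Subject: Meeting notes for Tuesday",
]

for header in headers:
    print(bool(pattern.search(header)), header)
# prints True for the first two headers and False for the third
```

The word-boundary anchors matter: without them the pattern would also flag innocent words that merely contain "watch", such as "watchdog" or "overwatch".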
can send mail to you. This will send an email back to the sender, requesting a reply to that before it is regarded as genuine. Hence, it will reduce the spam sent using scripts.
|
0.947863 |
Demetrius I (/dɪˈmiːtriəs/; Ancient Greek: Δημήτριος; 337–283 BC), called Poliorcetes (/ˌpɒliɔːrˈsiːtiːz/; Greek: Πολιορκητής, "The Besieger"), son of Antigonus I Monophthalmus and Stratonice, was a Macedonian Greek nobleman, military leader, and finally king of Macedon (294–288 BC). He belonged to the Antigonid dynasty and was its first member to rule Macedonia.
At the age of twenty-two he was left by his father to defend Syria against Ptolemy the son of Lagus. He was defeated at the Battle of Gaza, but soon partially repaired his loss by a victory in the neighbourhood of Myus. In the spring of 310, he was soundly defeated when he tried to expel Seleucus I Nicator from Babylon; his father was defeated in the autumn. As a result of this Babylonian War, Antigonus lost almost two thirds of his empire: all eastern satrapies fell to Seleucus.
After several campaigns against Ptolemy on the coasts of Cilicia and Cyprus, Demetrius sailed with a fleet of 250 ships to Athens. He freed the city from the power of Cassander and Ptolemy, expelled the garrison which had been stationed there under Demetrius of Phalerum, and besieged and took Munychia (307 BC). After these victories he was worshipped by the Athenians as a tutelary deity under the title of Soter (Σωτήρ) ("Saviour").
In the campaign of 306 BC, he defeated Ptolemy and Menelaus, Ptolemy's brother, in the naval Battle of Salamis, completely destroying the naval power of Ptolemaic Egypt. Demetrius conquered Cyprus in 306 BC, capturing one of Ptolemy's sons. Following the victory, Antigonus assumed the title "king" and bestowed the same upon his son Demetrius. In 305 BC, he endeavoured to punish the Rhodians for having deserted his cause; his ingenuity in devising new siege engines in his unsuccessful attempt to reduce the capital gained him the title of Poliorcetes. Among his creations were a battering ram 180 feet (55 m) long, requiring 1000 men to operate it; and a wheeled siege tower named "Helepolis" (or "Taker of Cities") which stood 125 feet (38 m) tall and 60 feet (18 m) wide, weighing 360,000 pounds.
In 302 BC, he returned a second time to Greece as liberator, and reinstated the Corinthian League, but his licentiousness and extravagance made the Athenians long for the government of Cassander. Among his outrages was his courtship of a young boy named Democles the Handsome. The youth kept on refusing his attention but one day found himself cornered at the baths. Having no way out and being unable to physically resist his suitor, he took the lid off the hot water cauldron and jumped in. His death was seen as a mark of honor for himself and his country. In another instance, Demetrius waived a fine of 50 talents imposed on a citizen in exchange for the favors of Cleaenetus, that man's son. He also sought the attention of Lamia, a Greek courtesan. He demanded 250 talents from the Athenians, which he then gave to Lamia and other courtesans to buy soap and cosmetics.
He also roused the jealousy of Alexander's Diadochi; Seleucus, Cassander and Lysimachus united to destroy him and his father. The hostile armies met at the Battle of Ipsus in Phrygia (301 BC). Antigonus was killed, and Demetrius, after sustaining severe losses, retired to Ephesus. This reversal of fortune stirred up many enemies against him—the Athenians refused even to admit him into their city. But he soon afterwards ravaged the territory of Lysimachus and effected a reconciliation with Seleucus, to whom he gave his daughter Stratonice in marriage. Athens was at this time oppressed by the tyranny of Lachares—a popular leader who made himself supreme in Athens in 296 BC—but Demetrius, after a protracted blockade, gained possession of the city (294 BC) and pardoned the inhabitants for their misconduct in 301 BC.
After Athens' capitulation, Demetrius formed a new government which involved a major dislocation of traditional democratic forms, one that anti-Macedonian democrats would have called oligarchy. The cyclical rotation of the secretaries of the Council and the election of archons by allotment were both abolished. In 294/3 and 293/2 BC, two of the most prominent men in Athens, Olympiodoros and Philippides of Paiania, were designated by the Macedonian king. The royal appointment is implied by Plutarch, who says that "he established the archons which were most acceptable to the Demos."
In 294 BC, he established himself on the throne of Macedonia by murdering Alexander V, the son of Cassander. He faced rebellion from the Boeotians but secured the region after capturing Thebes in 291 BC. That year he married Lanassa, the former wife of Pyrrhus, but his new position as ruler of Macedonia was continually threatened by Pyrrhus, who took advantage of his occasional absence to ravage the defenceless part of his kingdom (Plutarch, Pyrrhus, 7 ff.); at length, the combined forces of Pyrrhus, Ptolemy and Lysimachus, assisted by the disaffected among his own subjects, obliged him to leave Macedonia in 288 BC.
Bronze portrait head, as of September 2007 housed in the Prado Museum, Madrid. This head is no longer identified as Hephaestion, and instead may be Demetrius.
After besieging Athens without success he passed into Asia and attacked some of the provinces of Lysimachus with varying success. Famine and pestilence destroyed the greater part of his army, and he solicited Seleucus' support and assistance. However, before he reached Syria hostilities broke out, and after he had gained some advantages over his son-in-law, Demetrius was totally forsaken by his troops on the field of battle and surrendered to Seleucus.
His son Antigonus offered all his possessions, and even his own person, in order to procure his father's liberty, but all proved unavailing, and Demetrius died after a confinement of three years (283 BC). His remains were given to Antigonus and honoured with a splendid funeral at Corinth. His descendants remained in possession of the Macedonian throne till the time of Perseus, when Macedon was conquered by the Romans in 168 BC.
His first wife was Phila daughter of Regent Antipater by whom he had two children: Stratonice of Syria and Antigonus II Gonatas.
His second wife was Eurydice of Athens, by whom he is said to have had a son called Corrhabus.
His third wife was Deidamia, a sister of Pyrrhus of Epirus. Deidamia bore him a son called Alexander, who is said by Plutarch to have spent his life in Egypt, probably in an honourable captivity.
His fourth wife was Lanassa, the former wife of his brother-in-law Pyrrhus of Epirus.
His fifth wife was Ptolemais, daughter of Ptolemy I Soter and Eurydice of Egypt, by whom he had a son called Demetrius the Fair.
He also had an affair with a celebrated courtesan called Lamia of Athens, by whom he had a daughter called Phila.
Plutarch wrote a biography of Demetrius.
The Siege of Rhodes (305-304 BC), led by Demetrius.
Hegel, in the Lectures on the History of Philosophy, says of another Demetrius, Demetrius Phalereus, that "Demetrius Phalereus and others were thus soon after [Alexander] honoured and worshipped in Athens as God." What the exact source was for Hegel's claim is unclear. Diogenes Laërtius in his short biography of Demetrius Phalereus does not mention this. Apparently Hegel's error comes from a misreading of Plutarch's Life of Demetrius which is about Demetrius Poliorcetes and not Demetrius of Phalereus. Plutarch describes in the work how Demetrius Poliorcetes conquered Demetrius Phalereus at Athens. Then, in chapter 12 of the work, Plutarch describes how Demetrius Poliorcetes was given honors due to the god Dionysus. This account by Plutarch was confusing not only for Hegel, but for others as well.
Plutarch's account of Demetrius' departure from Macedonia in 288 BC inspired Constantine Cavafy to write "King Demetrius" (ὁ βασιλεὺς Δημήτριος) in 1906, his earliest surviving poem on an historical theme.
Demetrius is the main character of the opera Demetrio a Rodi (Turin, 1789) with libretto by Giandomenico Boggio and Giuseppe Banti. The music is set by Gaetano Pugnani (1731-1798).
Demetrius appears (under the Greek form of his name, Demetrios) in L. Sprague de Camp's historical novel, The Bronze God of Rhodes, which largely concerns itself with his siege of Rhodes.
Alfred Duggan's novel Elephants and Castles provides a lively fictionalised account of his life.
One or more of the preceding sentences incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Demetrius s.v. Demetrius I" . Encyclopædia Britannica. 7 (11th ed.). Cambridge University Press. p. 982.
The three descend the foothills to the edge of the marshes. Frodo offers Gollum some lembas, but he spurns it. Later, while the hobbits sleep, he goes off and catches a fish for himself. Then he leads them through the Dead Marshes. At the center, Sam notices lights all around, which Gollum calls "candles of corpses" and warns them not to look at them. Frodo is frozen behind them doing just that, but Sam gets him going again. Then Sam trips and sees faces in the marsh pools. Smeagol says they're the faces of men who died in battle there. Finally they get to the end of the marshes, and then are terrorized by the passage of a Nazgul, who flies over them and then back to Mordor. The Ring begins to physically weigh Frodo down.
In the slag heaps before the Gate, they fall asleep. Sam wakes to see Smeagol debating with himself over Frodo. He finally decides to wait, as "She might help." Frodo wakes and orders them on toward the Gate, but their fear increases.
1. Gollum is a reliable guide through the Marshes—why doesn't he try to drown them there?
2. How does Gollum know the story of the Dead Pools?
3. If the Ring is trying to get back to Sauron, and Frodo can feel the presence of Sauron in front of him, why doesn't he feel a pull toward Mordor rather than a great dragging weight?
I think that Gollum, in the years since he emerged from his undermountain lair, came into contact (even in hiding) with a lot of people from whom he could have gleaned the information. Also, who knows what tales were told in his original community and environs? Middle-earth is a world of storytellers, after all.
Why didn't he try to drown Frodo? Well, for one thing, he'd have had Sam to deal with, and he couldn't fight them both at once. Maybe if he caught them sleeping... Well, he was biding his time. He wasn't certain what they were trying to do. And he did swear an oath, for what that's worth. He wasn't totally corrupted--yet. Give him a few chapters.
Hello! Joining the VTSG now, as I have ample time for a month. And I am relatively new to Tolkien (5 years), so I will get to learn a lot from this.
I might try question no. 2.
If you want to know who and what Gollum is, then read this paragraph, as most probably it will help you understand the answer. But if you want to know it at a later time, then please ignore this. Hobbits have three subdivisions: Harfoots, Fallohides and Stoors. No need to know about the Harfoots or Fallohides now; the Stoors are the ones we are interested in. Gollum was a Stoor hobbit, real name Smeagol. The Stoor hobbits lived near the Gladden Fields (where Isildur died).
The ancient stoors who lived near the Gladden fields would have had the knowledge of the great battle at the marshes (Dead Marshes was not too far away from there) and thus it was passed on to the next generations. And so Smeagol came to know about it too.
Question 3 is really very interesting. I think it's a great topic to start a thread on in the Books forums.
EDITED: The Ring wanted to find its way back into the hands of Sauron's minions. So it made Frodo's journey difficult by "dragging" him, making it easier for Frodo to lose control.
3. If the Ring is trying to get back to Sauron, and Frodo can feel the presence of Sauron in front of him, why doesn't he feel a pull toward Mordor rather than a great dragging weight?
Think less of a magnetic attraction and more like the stations of the cross. Tolkien was a good Catholic, after all.
Smeagol does make reference to having been told about the battle when he was young, so it had been fairly common knowledge at one time. A few hundred years had elapsed since he heard about it, after all, and the Gladden Fields (and all of Rhovanion) were definitely closer to the battle site than, say, the lands of Eriador.
Ah, Morwenna, on the money as usual. I had temporarily forgotten how OLD the little %$#(* was.
Sometimes I wonder if Smeagol was cute or nice at all when he was a baby.
Q. 2 What Morwenna and siddharth said.
Q.3 I think the Ring gets to be more full of itself, so to speak, as it draws near to Sauron, which in part explains why it becomes heavier. I think it becomes more itself, if you take my meaning. The effect of the proximity to Sauron is not iron to a magnet, it's more like a teenage girl and One Direction. The closer, the more agitated and insane.
|
0.916652 |
Watt's improved steam engine caught on rapidly: by 1790 the old Newcomen engine had all but disappeared, and about 500 steam engines were operating in Britain. Less than a century later, by 1868, there were as many as 75,000 steam engines in Britain alone. In 1805, steam engines in the United States were first mounted on vehicles as a source of driving power. In 1807, Robert Fulton (1765-1815) of the United States invented a steamship powered by a steam engine. In 1825, George Stephenson (1781-1848) created steam locomotives that could run on rails. The high-pressure steam engine was invented by Richard Trevithick (1771-1833) of England in 1800. In 1801, the American Oliver Evans (1755?-1819) built a really useful high-pressure steam engine.
When a speed governor was first put into use, operation was initially normal; but as the speed of the steam engine increased, the governor could not run steadily, and the speed would hunt between fast and slow. The first to study the stability of governors was the British physicist James Clerk Maxwell (1831-1879). Maxwell's paper "On Governors," published in 1868, was the first to describe the motion of a speed governor by differential equations. He derived the differential equation of the governor and linearized it near the equilibrium point, pointing out that stability depends on whether the roots of the characteristic equation all have negative real parts. Maxwell studied a specific system described by a third-order differential equation and a special system described by a fifth-order differential equation, and gave the stability conditions for each.
Later, in 1876, the Russian engineer Ivan Vyshnegradsky, like Maxwell, used linearization to simplify the problem and obtained more complete stability conditions.
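Maxwell's criterion can be checked directly for the third-order case he studied: a cubic characteristic equation has all of its roots in the left half-plane exactly when every coefficient is positive and the inner coefficients dominate the outer ones (the condition later generalized by Routh and Hurwitz). A minimal sketch in Python; the example coefficients are illustrative, not taken from Maxwell's paper:

```python
def cubic_stable(a3, a2, a1, a0):
    """Stability test for a3*s**3 + a2*s**2 + a1*s + a0 = 0.
    All roots have negative real parts (so the governor settles
    instead of hunting) iff every coefficient is positive and
    a2 * a1 > a3 * a0."""
    return a3 > 0 and a2 > 0 and a1 > 0 and a0 > 0 and a2 * a1 > a3 * a0

# s^3 + 3s^2 + 3s + 1 = (s + 1)^3: triple root at s = -1, stable
print(cubic_stable(1, 3, 3, 1))   # True
# s^3 + s^2 + 2s + 10: a2*a1 = 2 < a3*a0 = 10, so the system hunts
print(cubic_stable(1, 1, 2, 10))  # False
```

Raising the constant term (roughly, the governor's gain) while leaving the damping terms fixed eventually violates the inequality, which matches the observed behaviour: a governor that is stable at low speed can begin to hunt as conditions change.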
The speed governor is a technical invention whose importance is hard to overstate: it made the steam engine practical for general use, and with it the Industrial Revolution. The study of industrial cybernetics can also be said to begin with the study of the governor, and the study of governor stability opened up the study of the stability of mechanical systems in general. Understanding the history of governors is therefore essential to understanding the history of steam engines, the history of cybernetics, and the history of research on the stability of motion.
(1) When the steam turbine operates independently, the system adjusts the turbine's steam intake as operating conditions change, so that the turbine's speed is kept within the prescribed range.
(2) When steam turbines are connected to the power grid, the system adjusts their load as the grid frequency changes, so that it is kept within the prescribed range.
(3) For steam turbines with regulated extraction, when operating conditions change, the system adjusts so that the extraction pressure is kept within the prescribed range.
Why must a steam turbine have a speed-governing system?
The operation of a turbo-generator is determined by the balance between the driving torque M_steam that the steam exerts on the turbine rotor and the load torque M_load that resists the generator rotor. When the two are equal, i.e. M_steam = M_load, the turbo-generator runs steadily at constant speed. However, the electricity drawn by external users changes constantly, that is, M_load changes constantly, so the steam intake of the turbine must change accordingly to maintain M_steam = M_load. Otherwise, the speed of the turbine would swing widely with the external load: when the external load increases, the speed decreases, and when the external load decreases, the speed increases. The voltage and frequency of the generated power would then drift high and low, which is absolutely not allowed. To ensure power quality and the safe operation of the unit, every unit must be equipped with a speed-control system that adjusts the steam intake of the turbine to suit changes in external load.
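The torque balance described above can be sketched with a toy simulation: a rotor obeying inertia * d(speed)/dt = M_steam - M_load, plus a proportional governor that opens the steam valve as the speed falls below its target. The model and every parameter value are illustrative assumptions, not data for any real turbine:

```python
def settle_speed(load, gain=10.0, inertia=1.0, target=1.0, dt=0.001, steps=20000):
    """Integrate inertia * d(speed)/dt = M_steam - M_load with a
    proportional governor: M_steam = gain * (target - speed)."""
    speed = target
    for _ in range(steps):
        m_steam = gain * (target - speed)      # valve opens as the speed droops
        speed += (m_steam - load) / inertia * dt
    return speed

print(round(settle_speed(load=1.0), 3))  # 0.9  (light load, small droop)
print(round(settle_speed(load=2.0), 3))  # 0.8  (doubling the load drags speed lower)
```

Note the steady-state droop, target - load/gain: a purely proportional governor holds the speed near, not exactly at, its target, which is one reason the requirements above speak of keeping speed and frequency within a prescribed range rather than at a fixed value.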
Following the footsteps of travellers and early settlers, a variety of tracks and places were used to cross the Yass River. One was Flat Rock Crossing, which is still in use today. Horsemen to Port Phillip rode along Rossi Street past Laidlaw's grave and the cemetery crossing where the river began to go south again and onto Bookham. Another crossing was at the foot of Dutton Street near the old tramway bridge. A track to Goulburn went from the town boundary between Hovell and O'Brien Streets using Flat Rock Crossing through Yass town.
By 1840, Parish Road Trusts were established to raise funds for road improvements. This was followed in 1843 by the establishment of 29 district councils including Yass. Their responsibilities included roads, but funds were a problem due to drought and rural depression.
In 1848, £500 was allocated to build a bridge, but controversy followed. Would it be stone, or stone and timber? Allegedly the contractor built three piers of rubble rather than hammer-dressed stone as specified. By 1850, the bridge piers and abutments lay abandoned.
Eventually, the Hume Bridge, named in honour of Hamilton Hume, was built on the laminated arch suspension principle using ironwork from Sydney and timber from the Yass area. It was opened September 18, 1854 with great fanfare.
In 1859, WH Downey of Queanbeyan carried out repairs to the bridge. In 1861, further repairs followed. By 1866 there were problems with white ants and a section of the bridge was collapsing. In 1867, 1,000 guineas were allocated to build a new bridge. Again there was controversy - concerning materials, side platforms and the need for a ford during construction, which finally began in 1870.
Two beams of the new bridge were in position awaiting riveting when tragedy struck. On April 5, river waters began rising until flood waters destroyed both bridges on April 26. A month later a decision was made that the bridge cylinders would be raised by eight feet and 25 foot spans added to both ends. The new Hume Bridge was officially opened on July 25, 1871, two years prior to the death of its namesake. Apart from raising the height of the arches three feet to accommodate trailers carrying aeroplanes during WWII, and some strengthening work, the bridge remained largely unchanged for the next 100 years.
Our current Hume Bridge was quietly opened in October 1977, at a cost of more than $1 million. Two of the old bridge arches have been restored and placed in Riverbank Park by local council in 2011. Information for nearby interpretive signage was provided by Yass & District Historical Society.
People all over the world, if asked who first found America, could answer "Christopher Columbus" and give the date of the great event: October 12, 1492. Columbus reached one of the Bahama islands, east of America, with his three small ships on that day, after sailing for two months across seas which were mostly unknown.
Christopher Columbus was an Italian, the son of a poor weaver. He was born in 1451 in Genoa, an Italian seaport. At that time Genoa was one of the richest cities in the world. Genoese merchants travelled all over Europe to sell silks, coral, fruit and other things, and Genoese seamen sailed the merchant ships not only in the Mediterranean but on other seas too. In the middle of the 15th century much of the world was still unexplored, and most European countries were eager to find and lay claim to new territory and thus become rich. Consequently there was much fighting on the seas.
The Mediterranean galleys were constantly passing in and out of the port of Genoa to load or unload cargoes. Their hardy crews had often been engaged in dangerous adventures, their fine and graceful ships were often in a battered condition, and the seamen had plenty of exciting stories to tell little Christopher Columbus.
The boy helped his father to weave wool, but he did not like this work. He was interested in the big ships which came from or left for strange and distant lands, and he liked to sit out-of-doors and watch them for hours.
Although his parents were very poor, they managed to send him to the University of Pavia for his nautical training. There Christopher studied geography, geometry, astronomy, mathematics and navigation, and learnt how to make the maps used by sailors. He soon became very clever at this work.
He was interested in the accounts written by earlier seamen and explorers, particularly those written by Marco Polo. The more he studied them, the more he longed to go to sea himself.
At last he felt that he could not stay at home any longer, and when he was fourteen, he went to sea.
After many adventures on the sea he came to Lisbon, the capital of Portugal, which was then a great and very important port. His chief occupation, when not at sea, was charting maps.
In the 15th century Portugal was a growing empire. The greatest desire of her ruling class was to discover a new sea route to India by which it could trade freely with the rich merchants of Bombay and Calcutta. As the only known land routes across Asia were barred by the Turks, and as the Red Sea was controlled by Italy, Portugal's rival in trade, Portugal had to find a new route.
Few people in Europe in those days knew much about other parts of the world, but it was known that far away to the east there were other great rich countries. At that time travelling was difficult; gold and other valuable things had to be brought to Europe from the East mainly by land. Sailing ships could be used part of the way, to Suez; but no route completely by sea to the west was then known. From Suez to the north of Egypt goods had to be transported by land.
Columbus made a careful study of the reports and geographical theories of many navigators and compared their findings. He became convinced of one very important fact: that the world was not flat but round. "If the world is round," he thought, "surely India can be approached not only from the west but also from the east. Surely the most direct route must lie across the Eastern Indian Ocean." He had no doubt that there was land across the Atlantic (the Eastern Indian Ocean at the time), because pieces of carved wood, very thick canes, trees and the body of a man of an unknown race had been drifted to the Canary Islands by westerly winds. Columbus, less afraid of the stormy Atlantic than other seamen, decided to try to find his way to eastern India by crossing the ocean, and thus to open a new trade route. He began to plan the voyage which was to lead to his great discovery.
For nearly fourteen years Columbus perfected his plans to reach India by sailing westward from Europe, and now the time had come to go and find out if his theory was correct.
To make the journey Columbus needed men, money and ships. He tried to get help from Portugal, from Genoa, and from England, but he failed. For seven years he did his best, but no one wanted to help him. At last the Spanish government gave him what he wanted.
At 8 o'clock in the morning of August 3, 1492, Columbus and a hundred and twenty men left the port of Palos in Spain in three small ships, the Santa Maria, the Pinta, and the Nina. The Santa Maria was the biggest: ninety feet long. It carried the royal banner of Spain. The other two ships were much smaller. The tiny fleet set sail for the unknown with a promise from the king of Spain of a pension for the man who first sighted land. Few of the men who were with Columbus had been willing to set out across an unknown sea for an unknown number of months in sailing ships of this size. Some of them were men of Palos who had been allowed to leave prison to join the expedition. Some were young men who had got into difficulties and wanted to go away to sea until their misdemeanours were forgotten; others went because they needed money.
Trouble began soon after the ships left Palos. The men feared the journey and wanted to return to their homes. As time passed they grew ever more afraid of the endless sea: they thought that if they went too far they would perhaps never be able to return at all. The men became mutinous.
But whenever they wanted to turn back, Columbus was able to persuade them to go on. Once they plotted to throw Columbus into the sea and turn the ships round so that they could return home. But Columbus found out what his men were planning to do. He was not only a great navigator but also a clever speaker. He called the crew together and told them not to lose hope. He described the rich lands which lay before them, and told them of the great honour which would come to Spain if they went on and were the first men to find India by sailing to the west. He described the wonderful island of Japan and other golden lands which they would find if they continued. He promised them all great riches. The men listened to him and believed him, and the ships continued on their way to the west.
Columbus made another very cunning move: he did not tell the men the truth about the distance the ships had covered on their journey westwards. He kept two records: a correct one for himself, and another one for the men. The second always showed a lesser distance than the first. Therefore the men thought that they were nearer home than, in fact, they were.
The journey was a difficult and fearful one in every way. The ships were sometimes damaged (and later repaired) by men who were weary and frightened and wanted to go home. The travelers saw fearful sights, which they could not explain, one of which was a fountain of fire and smoke far over the sea. (Probably it was a meteor falling into the sea.) Clouds on the horizon resembling land deceived them often. The men disagreed and quarrelled with their captains and with each other.
They ran into terrible storms. But once, when for eleven days the wind blew behind them so steadily that they did not need to change the sails at all, they were not so pleased. If the wind always blew from the east, they said, how would they ever reach Spain again when the time came to turn back? They were angry and afraid.
But in spite of difficulties and dangers Columbus himself never lost hope. All the sailors watched every day for a sight of land; once or twice they thought that they could see land to the west, but each time they were mistaken. The hope which sometimes came to their hearts was soon lost again. And still land was not in sight.
Thus for two months they struggled through the storms of the unknown sea. The sailors had lost all hope when early in October, after they had sailed about 2,250 miles, they saw many birds which they knew could live only on land. They also saw river weeds and a branch with fresh berries floating in the sea. They fished up a cane and a plank of wood that had evidently been wrought with metal.
On the night of October 11, 1492, at ten o'clock, Columbus (who did not sleep and was looking out, as usual, towards the west) saw a small light in the darkness over the sea. He called one of the men, and the two watched together. They saw the light again.
It was not a dream: the light was real. At about two o'clock in the morning on October 12 the moon came up and drove away the darkness. A short time later land was seen by one of the men in the Pinta. When day came and the sun appeared, everyone could see a small island some five miles away. Land!
It was the New World. The men saw naked figures running along the shore as if trying to hide in terror.
The anchor was dropped, the boats were lowered and the men went ashore. Columbus, dressed in his best red garments, landed with the Spanish flag in his hand and proclaimed the land a Spanish possession. The natives, who had at first been afraid and run away, soon came back. They touched the Spaniards' beards and were very surprised to see their white faces. Columbus gave gifts to the natives and received gifts in return. He named the island San Salvador, and stayed there for some time.
Columbus's attention was attracted by the fact that the natives wore small nuggets of gold in their nostrils. He asked them, by means of signs, where they obtained their gold, and he understood from their signs that it came from a rich country to the south. So he set sail in search of that golden land, taking with him seven of the natives as guides.
Cuba seemed so large to Columbus that he thought he had reached the mainland of Asia. He went ashore, declared the island a Spanish possession, and spent several days prospecting for gold. When his efforts brought nothing, he sailed to the neighbouring island of Haiti. Near Haiti the Santa Maria was driven aground during the night and was wrecked.
Columbus decided to leave some of his men on the island of Haiti to start a Spanish settlement. Out of the wreckage of the Santa Maria he built a wooden fort, mounted with the ship’s cannon.
I suffer terribly from PMS and heard recently that a vegan diet can really help. Is this true? If not, what do you recommend?
Maybe. A study published in the February issue of "Obstetrics & Gynecology" found that a low-fat vegetarian diet can help control premenstrual syndrome (PMS). The study tracked 33 women, between the ages of 22 and 40, who all suffered from severe PMS. When the women followed a vegan diet for two months, they had fewer menstrual cramps, less water retention, and more energy. However, the diet didn't eliminate menstrual pain for all the women.
The theory behind this strategy is that a low-fat, vegan diet that eliminates animal-based foods helps lower estrogen levels. This in turn decreases production of prostaglandins, the body chemicals believed to cause menstrual pain.
This approach is the latest in a long list of dietary maneuvers directed against PMS, but it isn't an easy way to attack the problem. You would have to eliminate all foods derived from animals, including eggs and dairy products. Vegans risk not getting enough vitamin B12, zinc, iron, and calcium. They must be diligent about consuming vegetable-based foods that provide these vital nutrients -- or take supplements to ensure an adequate intake.
A diet that is very low in fat can also be unpalatable. If you're going to try this approach, I suggest you substitute olive oil and foods that provide omega-3 fatty acids (walnuts, for instance) for the animal fats you're eliminating.
Take a calcium-magnesium supplement (1,000 mg of each) for painful menstrual cramps.
Eliminate all caffeine (including the chocolate some women crave premenstrually) and avoid polyunsaturated vegetable oils.
Do regular aerobic exercise -- 30 minutes of daily activity that raises your heart rate.
Take 500 mg of black currant oil or evening primrose oil twice a day and two capsules of dong quai or chaste tree twice a day.
For cramps, drink raspberry-leaf tea, which is sold in health food stores. You can also use a hot water bottle on your abdomen, or take doses of cramp-bark (viburnum). Follow the directions on the package.
Take ibuprofen (Advil or Motrin) -- but not on an empty stomach.
Practice the breathing exercises I recommend.
Meanwhile, most of the world recognizes that our climate is changing and that human activities, especially the burning of fossil fuels, are to blame. Companies world-wide are preparing and mitigating their risk from the effects. Across the globe countries and their political leadership are investing in a clean energy future both for their populace's needs and for their long-term economic competitiveness. Everywhere news reporting has gradually moved from indifference, to confusion, to denial, then alarm, and now to debates on responsive public policy alternatives and features outlining pragmatic choices for individuals.
Everywhere that is but here in the United States.
We cling tenaciously to our denial. Many of our leaders have gone all-in on their commitment to a naive nostalgia which facts cannot be permitted to contradict. The media was given enormous freedom by the First Amendment precisely to speak truth to our centers of power and to equip the people with the information, and the voice, to keep our leaders accountable and responsive. They are failing us.
Conservative media voices elsewhere in the world are doing their job, which is why in part those countries are taking real action.
For Peter Vandermeersch, editor-in-chief at the traditionally conservative daily NRC Handelsblad in Rotterdam, The Netherlands, there is no debate about climate change.
"Absolutely, that's a given," he said. "The conviction has grown that climate change does exist, and that humans play a major role in how it evolves."
"There's almost no discussion about it," agreed Wouter Verschelden, editor-in-chief at the progressive daily De Morgen in Brussels, Belgium. "The nonbelievers have been marginalized, and they aren't taken seriously anymore. We don't have to convince our readers anymore of the fact that there is climate change, and that it's caused by humans."
In a sense, you're lying to your readers. You're creating a 'he said, she said' story, and looking for an argument that just doesn't always exist.
We will print stories that bring both sides of the view. We will print stories about climate change presenting it as fact, and we will print stories about people who say climate change doesn't exist. It's very obvious that a lot of people, including members of the U.S. Congress, believe it's not true.
She goes on to allude to how some of her readership are members of the Tea Party, with the implication that she is giving her readers what they want to read. Not what they need to read—that is, what is.
It is not objective to provide a platform for falsehoods and repeatedly disproven assertions offered by assorted crackpots and the paid shills of moneyed interests that cannot or dare not stand up and speak their real reasons to oppose climate change action. Nor is it anything more than yellow journalism to slavishly ape prejudiced opinion and thus lend it a credence beyond the ignorant gossip it resembles. Even if from Congress. Especially if from our supposed leaders. How far our media have sunk when they claim justification for pandering to confirmation bias, to demonstrable error, to ignorance.
If objectivity is to mean anything, it must be as a focus on the objects of discussion—the facts—not the subjects—people and their opinions.
"I think the objectivity standard that U.S. newspapers apply has probably outlived its usefulness on this particular issue," said Mark Neuzil, a professor of environmental communication at the University of St. Thomas in St. Paul, Minnesota. "At some point you're not being a decent and good journalist when you're giving equal weight when 97 percent say one thing, and 3 percent say the other, unless you point that out really clearly."
How long can you keep the ball inside the octagon? - the ball will move faster and faster! - spin the octagon and keep the red ball inside by colliding with the red stroke! - compete with players around the world via game center! - check the leaderboards and keep your score as high as you can!
Learn how to create a data processing pipeline with Java 8 to extract text from PDFs and then identify people's names.
I have created a simple Java 8 application that extracts text from PDFs and then identifies people's names. It can be used as part of a larger data processing pipeline or HDF flow by calling it via REST or the command line, or by converting it to a NiFi processor.
We open the file stream for reading (this can be from HDFS, S3, or a regular file system).
Then we use Apache Tika's PDF Parser to parse out the text. We also get the metadata for other processing.
Using OpenNLP, we parse out all the names from that text.
Using Google GSON, we then turn the names into JSON for easy usage.
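The four steps above can be sketched as a single class. This is a minimal, hypothetical sketch rather than the author's exact code: it assumes Apache Tika, Apache OpenNLP, and Google Gson are on the classpath, and that a pre-trained OpenNLP person-name model is available locally as `en-ner-person.bin` (an assumed file name).

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.parser.pdf.PDFParser;
import org.apache.tika.sax.BodyContentHandler;

import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.tokenize.SimpleTokenizer;
import opennlp.tools.util.Span;

import com.google.gson.Gson;

public class PdfNameExtractor {

    public static void main(String[] args) throws Exception {
        // 1. Open the file stream for reading (could equally come from HDFS or S3).
        try (InputStream pdf = new FileInputStream(args[0]);
             InputStream modelIn = new FileInputStream("en-ner-person.bin")) {

            // 2. Parse out the text and metadata with Tika's PDF parser.
            BodyContentHandler handler = new BodyContentHandler(-1); // -1 = no size limit
            Metadata metadata = new Metadata();
            new PDFParser().parse(pdf, handler, metadata, new ParseContext());
            String text = handler.toString();

            // 3. Find person names in the extracted text with OpenNLP.
            String[] tokens = SimpleTokenizer.INSTANCE.tokenize(text);
            NameFinderME finder = new NameFinderME(new TokenNameFinderModel(modelIn));
            List<String> names = new ArrayList<>();
            for (Span span : finder.find(tokens)) {
                names.add(String.join(" ",
                        Arrays.copyOfRange(tokens, span.getStart(), span.getEnd())));
            }

            // 4. Turn the names into JSON with Gson for easy usage downstream.
            System.out.println(new Gson().toJson(names));
        }
    }
}
```

Run as `java PdfNameExtractor some-document.pdf`; the output is a JSON array of name strings that a REST wrapper or NiFi processor could pass along the pipeline.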
On September 1, 1939, Germany invaded Poland under the false pretext that the Poles had carried out a series of sabotage operations against German targets near the border, an event that caused Britain and France to declare war on Germany.
Following several staged incidents that German propaganda used as a pretext to claim that German forces were acting in self-defense, the first regular act of war took place on September 1, 1939, when the Luftwaffe attacked the Polish town of Wieluń, destroying 75% of the city and killing close to 1,200 people, most of them civilians.
As the Wehrmacht advanced, Polish forces withdrew from their forward bases of operation close to the Polish-German border to more established lines of defense to the east. After the mid-September Polish defeat in the Battle of the Bzura, the Germans gained an undisputed advantage.
On September 3 after a British ultimatum to Germany to cease military operations was ignored, Britain and France declared war on Germany.
On October 8, after an initial period of military administration, Germany directly annexed western Poland and the former Free City of Danzig and placed the remaining block of territory under the administration of the newly established General Government.
Battle of the Border: Refers to the battles that occurred in the first days of the German invasion of Poland in September, 1939; the series of battles ended in a German victory as Polish forces were either destroyed or forced to retreat.
Gleiwitz incident: A false flag operation by Nazi forces posing as Poles on August 31, 1939, against the German radio station Sender Gleiwitz in Gleiwitz, Upper Silesia, Germany on the eve of World War II in Europe. The goal was to use the staged attack as a pretext for invading Poland.
The Invasion of Poland, also known as the September Campaign, was a joint invasion of Poland by Nazi Germany, the Free City of Danzig, the Soviet Union, and a small Slovak contingent that marked the beginning of World War II in Europe. The German invasion began on September 1, 1939, one week after the signing of the Molotov-Ribbentrop Pact, while the Soviet invasion commenced on September 17 following the Molotov-Tōgō agreement that terminated the Russian and Japanese hostilities in the east on September 16. The campaign ended on October 6 with Germany and the Soviet Union dividing and annexing the whole of Poland under the terms of the German-Soviet Frontier Treaty.
German forces invaded Poland from the north, south, and west the morning after the Gleiwitz incident. As the Wehrmacht advanced, Polish forces withdrew from their forward bases of operation close to the Polish-German border to more established lines of defense to the east. After the mid-September Polish defeat in the Battle of the Bzura, the Germans gained an undisputed advantage. Polish forces then withdrew to the southeast where they prepared for a long defense of the Romanian Bridgehead and awaited expected support and relief from France and the United Kingdom. While those two countries had pacts with Poland and declared war on Germany on September 3, in the end their aid to Poland was limited.
The Soviet Red Army’s invasion of Eastern Poland on September 17, in accordance with a secret protocol of the Molotov-Ribbentrop Pact, rendered the Polish plan of defense obsolete. Facing a second front, the Polish government concluded that defense of the Romanian Bridgehead was no longer feasible and ordered an emergency evacuation of all troops to neutral Romania. On October 6, following the Polish defeat at the Battle of Kock, German and Soviet forces gained full control over Poland. The success of the invasion marked the end of the Second Polish Republic, though Poland never formally surrendered.
On October 8, after an initial period of military administration, Germany directly annexed western Poland and the former Free City of Danzig and placed the remaining block of territory under the administration of the newly established General Government. The Soviet Union incorporated its newly acquired areas into its constituent Belarusian and Ukrainian republics and immediately started a campaign of sovietization. In the aftermath of the invasion, a collective of underground resistance organizations formed the Polish Underground State within the territory of the former Polish state. Many military exiles who managed to escape Poland subsequently joined the Polish Armed Forces in the West, an armed force loyal to the Polish government in exile.
A map of Europe depicting the Invasion of Poland from Germany and the Soviet Union.
Following several staged incidents (like the Gleiwitz incident, part of Operation Himmler), in which German propaganda was used as a pretext to claim that German forces were acting in self-defense, the first regular act of war took place on September 1, 1939, when the Luftwaffe attacked the Polish town of Wieluń, destroying 75% of the city and killing close to 1,200 people, mostly civilians. This invasion subsequently began World War II. Five minutes later, the old German pre-dreadnought battleship Schleswig-Holstein opened fire on the Polish military transit depot at Westerplatte in the Free City of Danzig on the Baltic Sea. Four hours later, German troops—still without a formal declaration of war issued—attacked near the Polish town of Mokra. The Battle of the Border had begun. Later that day, Germans attacked on Poland’s western, southern and northern borders while German aircraft began raids on Polish cities. The main axis of attack led eastwards from Germany proper through the western Polish border. Supporting attacks came from East Prussia in the north and a co-operative German-Slovak tertiary attack by units from German-allied Slovakia in the south. All three assaults converged on the Polish capital of Warsaw.
Invasion of Poland: Soldiers of the German Wehrmacht tearing down the border crossing between Poland and the Free City of Danzig, September 1, 1939.
On September 3 after a British ultimatum to Germany to cease military operations was ignored, Britain and France, followed by the fully independent Dominions of the British Commonwealth—Australia (3 September), Canada (10 September), New Zealand (3 September), and South Africa (6 September)—declared war on Germany. However, initially the alliance provided limited direct military support to Poland, consisting of a cautious, half-hearted French probe into the Saarland.
The German-French border saw only a few minor skirmishes, although the majority of German forces, including 85% of their armored forces, were engaged in Poland. Despite some Polish successes in minor border battles, German technical, operational, and numerical superiority forced the Polish armies to retreat from the borders towards Warsaw and Lwów. The Luftwaffe gained air superiority early in the campaign. By destroying communications, the Luftwaffe increased the pace of the advance, overrunning Polish airstrips and early warning sites and causing logistical problems for the Poles. Many Polish Air Force units ran low on supplies, and 98 aircraft withdrew into then-neutral Romania. The Polish initial strength of 400 aircraft was reduced to just 54 by September 14, and air opposition virtually ceased.
The Western Allies also began a naval blockade of Germany, which aimed to damage the country’s economy and war effort. Germany responded by ordering U-boat warfare against Allied merchant and warships, which later escalated into the Battle of the Atlantic.
The German-Soviet Treaty of Friendship was a secret supplementary protocol of the 1939 Hitler-Stalin Pact, signed on September 28, 1939, by Nazi Germany and the Soviet Union after their joint invasion and occupation of sovereign Poland that delineated the spheres of interest between the two powers.
The German–Soviet Treaty of Friendship, Cooperation and Demarcation was a secret supplementary protocol of the 1939 Hitler-Stalin Pact, amended on September 28, 1939, by Nazi Germany and the Soviet Union after their joint invasion and occupation of sovereign Poland.
These amendments allowed for the exchange of Soviet and German nationals between the two occupied zones of Poland, redrew parts of the central European spheres of interest dictated by the Molotov–Ribbentrop Pact, and stated that neither party to the treaty would allow on its territory any “Polish agitation” directed at the other party.
The existence of this secret protocol was denied by the Soviet government until 1989, when it was finally acknowledged and denounced.
The Molotov–Ribbentrop Pact, also known as the Nazi-Soviet Pact, was a neutrality pact between Nazi Germany and the Soviet Union signed in Moscow on August 23, 1939, that delineated the spheres of interest between the two powers.
German-Soviet Frontier Treaty: Also known as the The German–Soviet Treaty of Friendship, Cooperation and Demarcation, this treaty was a secret clause amended on the Molotov–Ribbentrop Pact on September 28, 1939, by Nazi Germany and the Soviet Union after their joint invasion and occupation of sovereign Poland.
Molotov–Ribbentrop Pact: A neutrality pact between Nazi Germany and the Soviet Union signed in Moscow on August 23, 1939.
Wehrmacht: The unified armed forces of Nazi Germany from 1935 to 1946, including army (Heer), navy (Kriegsmarine), and air force (Luftwaffe).
The German–Soviet Treaty of Friendship, Cooperation and Demarcation (later known as the German-Soviet Frontier Treaty ) was a second supplementary protocol of the 1939 Hitler-Stalin Pact. It was a secret clause as amended on September 28, 1939, by Nazi Germany and the Soviet Union after their joint invasion and occupation of sovereign Poland and thus after the beginning of World War II. It was signed by Joachim von Ribbentrop and Vyacheslav Molotov, the foreign ministers of Germany and the Soviet Union respectively, in the presence of Joseph Stalin. The treaty was a follow-up to the first secret protocol of the Molotov–Ribbentrop Pact signed on August 23, 1939, between the two countries prior to their invasion of Poland and the start of World War II in Europe. Only a small portion of the protocol which superseded the first treaty was publicly announced, while the spheres of influence of Nazi Germany and the Soviet Union remained classified. The third secret protocol of the Pact was signed on January 10, 1941 by Friedrich Werner von der Schulenburg and Molotov, in which Germany renounced its claims to portions of Lithuania only a few months before its anti-Soviet Operation Barbarossa.
Several secret articles were attached to the treaty. These allowed for the exchange of Soviet and German nationals between the two occupied zones of Poland, redrew parts of the central European spheres of interest dictated by the Molotov–Ribbentrop Pact, and stated that neither party would allow on its territory any “Polish agitation” directed at the other party.
During the western invasion of Poland, the German Wehrmacht had taken control of the Lublin Voivodeship and eastern Warsaw Voivodeship, territories that according to the Molotov–Ribbentrop Pact were in the Soviet sphere of influence. To compensate the Soviet Union for this “loss,” the treaty’s secret attachment transferred Lithuania to the Soviet sphere of influence, with the exception of a small territory in the Suwałki Region sometimes known as the Suwałki Triangle. After this transfer, the Soviet Union issued an ultimatum to Lithuania, occupied it on June 15, 1940, and established the Lithuanian SSR.
The existence of this secret protocol was denied by the Soviet government until 1989, when it was finally acknowledged and denounced. Some time later, the new Russian revisionists, including historians Alexander Dyukov and Nataliya Narotchnitskaya, described the pact as a necessary measure because of the British and French failure to enter into an anti-fascist pact. Vladimir Putin has also defended the pact.
German–Soviet Treaty of Friendship: Soviet Foreign Minister Vyacheslav Molotov signs the German–Soviet Pact in Moscow, September 28, 1939; behind him are Richard Schulze-Kossens (Ribbentrop’s adjutant), Boris Shaposhnikov (Red Army Chief of Staff), Joachim von Ribbentrop, Joseph Stalin, Vladimir Pavlov (Soviet translator). Alexey Shkvarzev (Soviet ambassador in Berlin), stands next to Molotov.
The Molotov–Ribbentrop Pact, also known as the Nazi-Soviet Pact, was a neutrality pact between Nazi Germany and the Soviet Union signed in Moscow on August 23, 1939 by foreign ministers Joachim von Ribbentrop and Vyacheslav Molotov, respectively.
The pact delineated the spheres of interest between the two powers, confirmed by the supplementary protocol of the German-Soviet Frontier Treaty amended after the joint invasion of Poland. The pact remained in force for nearly two years until the German government of Adolf Hitler launched an attack on the Soviet positions in Eastern Poland during Operation Barbarossa on June 22, 1941.
The clauses of the Nazi-Soviet Pact provided a written guarantee of non-belligerence by each party towards the other and a declared commitment that neither government would ally itself to or aid an enemy of the other party. In addition to stipulations of non-aggression, the treaty included a secret protocol that divided territories of Poland, Lithuania, Latvia, Estonia, Finland, and Romania into German and Soviet “spheres of influence,” anticipating “territorial and political rearrangements” of these countries. Thereafter, Germany invaded Poland on September 1, 1939. Soviet Union leader Joseph Stalin ordered the Soviet invasion of Poland on September 17, a day after the Soviet–Japanese ceasefire agreement came into effect. In November, parts of southeastern Finland were annexed by the Soviet Union after the Winter War. This was followed by Soviet annexations of Estonia, Latvia, Lithuania, and parts of Romania. Advertised concern about ethnic Ukrainians and Belarusians had been proffered as justification for the Soviet invasion of Poland. Stalin’s invasion of Bukovina in 1940 violated the pact as it went beyond the Soviet sphere of influence agreed with the Axis.
The Dunkirk evacuation was the removal of Allied soldiers from the beaches and harbor of Dunkirk by the attack of German soldiers, which started as a disaster but soon became a miraculous triumph.
During the 1930s, the French constructed the Maginot Line, a series of fortifications along their border with Germany.
The area immediately to the north of the Maginot Line was covered by the heavily wooded Ardennes region, which French General Philippe Pétain declared to be “impenetrable” as long as “special provisions” were taken.
The German army decided to attack through the Ardennes region, then establish bridgeheads on the Meuse River and rapidly drive to the English Channel. This would cut off the Allied armies in Belgium and Flanders.
When this occurred and the French army was surrounded, the British decided on a plan of evacuation.
On the first day of the evacuation, only 7,669 men were evacuated, but by the end of the eighth day, a total of 338,226 soldiers had been rescued by a hastily assembled fleet of over 800 boats.
More than 100,000 evacuated French troops were quickly and efficiently shuttled to camps in various parts of southwest England, where they were temporarily lodged before being repatriated.
Dunkirk evacuation: The evacuation of Allied soldiers from the beaches and harbour of Dunkirk, France, between May 26 and June 4, 1940, during World War II.
Maginot Line: A line of concrete fortifications, obstacles, and weapon installations that France constructed on the French side of its borders with Switzerland, Germany, and Luxembourg during the 1930s.
Vichy France: The common name of the French State during World War II, specifically the southern, unoccupied “Free Zone,” as Germany militarily occupied northern France.
On the first day of the evacuation, only 7,669 men were evacuated, but by the end of the eighth day, a total of 338,226 soldiers had been rescued by a hastily assembled fleet of over 800 boats. Many troops were able to embark from the harbor’s protective mole onto 39 British destroyers and other large ships, while others had to wade out from the beaches, waiting for hours in the shoulder-deep water. Some were ferried from the beaches to the larger ships by what came to be known as the little ships of Dunkirk, a flotilla of hundreds of merchant marine boats, fishing boats, pleasure craft, and lifeboats called into service for the emergency. The British Expeditionary Force (BEF) lost 68,000 soldiers during the French campaign and had to abandon nearly all of their tanks, vehicles, and other equipment.
More than 100,000 evacuated French troops were quickly and efficiently shuttled to camps in various parts of southwest England, where they were temporarily lodged before being repatriated. British ships ferried French troops to Brest, Cherbourg, and other ports in Normandy and Brittany, although only about half of the repatriated troops were deployed against the Germans before the surrender of France. For many French soldiers, the Dunkirk evacuation represented only a few weeks’ delay before being killed or captured by the German army after their return to France. Of the French soldiers evacuated from France in June 1940, about 3,000 joined Charles de Gaulle’s Free French army in Britain.
Dunkirk Evacuation: British troops evacuating Dunkirk’s beaches.
In 1939, after Nazi Germany invaded Poland marking the beginning of the Second World War, the United Kingdom sent the BEF to aid in the defense of France, landing troops at Cherbourg, Nantes, and Saint-Nazaire. By May 1940 the force consisted of ten divisions in three corps under the command of General John Vereker, 6th Viscount Gort. Working with the BEF were the Belgian Army and the French First, Seventh, and Ninth Armies.
During the 1930s, the French constructed the Maginot Line, a series of fortifications along their border with Germany. This line was designed to deter a German invasion across the Franco-German border and funnel an attack into Belgium, where it would be met by the best divisions of the French Army. Thus, any future war would take place outside of French territory, avoiding a repeat of the First World War. The area immediately to the north of the Maginot Line was covered by the heavily wooded Ardennes region, which French General Philippe Pétain declared to be “impenetrable” as long as “special provisions” were taken. He believed that any enemy force emerging from the forest would be vulnerable to a pincer attack and destroyed. The French commander-in-chief, Maurice Gamelin, also believed the area to be of limited threat, noting that it “never favoured large operations.” With this in mind, the area was left lightly defended.
The initial plan for the German invasion of France called for an encirclement attack through the Netherlands and Belgium, avoiding the Maginot Line. Erich von Manstein, then Chief of Staff of the German Army Group A, prepared the outline of a different plan and submitted it to the OKH (German High Command) via his superior, Generaloberst Gerd von Rundstedt. Manstein’s plan suggested that Panzer divisions should attack through the Ardennes, then establish bridgeheads on the Meuse River and rapidly drive to the English Channel. The Germans would thus cut off the Allied armies in Belgium and Flanders. This part of the plan later became known as the Sichelschnitt (“sickle cut”). Adolf Hitler approved a modified version of Manstein’s ideas, today known as the Manstein Plan, after meeting with him on February 17.
On May 10, Germany attacked Belgium and the Netherlands. Army Group B, under Generaloberst Fedor von Bock, attacked into Belgium, while the three Panzer corps of Army Group A under Rundstedt swung around to the south and drove for the Channel. The BEF advanced from the Belgian border to positions along the River Dyle within Belgium, where they fought elements of Army Group B starting on May 10. They were ordered to begin a fighting withdrawal to the Escaut River on May 14 when the Belgian and French positions on their flanks failed to hold. During a visit to Paris on May 17, British Prime Minister Winston Churchill was astonished to learn from Gamelin that the French had committed all their troops to the ongoing engagements and had no strategic reserves.
On May 19, Gort met with French General Gaston Billotte, commander of the French First Army and overall coordinator of the Allied forces. Billotte revealed that the French had no troops between the Germans and the sea. Gort immediately saw that evacuation across the Channel was the best course of action and began planning a withdrawal to Dunkirk, the closest location with good port facilities. Surrounded by marshes, Dunkirk boasted old fortifications and the longest sand beach in Europe, where large groups could assemble. After continued engagements and a failed Allied attempt on May 21 at Arras to cut through the German spearhead, the BEF was trapped along with the remains of the Belgian forces and the three French armies in an area along the northern French coast. By May 24, the Germans had captured the port of Boulogne and surrounded Calais. Later that day, Hitler issued Directive 13, which called for the Luftwaffe to defeat the trapped Allied forces and stop their escape. On May 26, Hitler ordered the panzer groups to continue their advance, but most units took another 16 hours to attack. The delay gave the Allies time to prepare defenses vital for the evacuation and prevented the Germans from stopping the Allied retreat from Lille.
Vichy France is the common name of the French State headed by Marshal Philippe Pétain during World War II. In particular, it represents the unoccupied “Free Zone” (zone libre) that governed the southern part of the country.
From 1940 to 1942, while the Vichy regime was the nominal government of France as a whole, Germany’s military occupied northern France. Thus, while Paris remained the de jure capital of France, the de facto capital of southern “unoccupied” France was the town of Vichy, 360 km to the south. Following the Allied landings in French North Africa in November 1942, southern France was also militarily occupied by Germany and Italy. The Vichy government remained in existence, but as a de facto client and puppet of Nazi Germany. It vanished in late 1944 when the Allies occupied all of France.
The French State maintained nominal sovereignty over the whole of French territory, but had effective full sovereignty only in the Free Zone. It had limited civil authority in the northern zones under military occupation. The occupation was to be provisional pending the conclusion of the war, which at the time appeared imminent. The occupation also presented certain advantages, such as keeping the French Navy and the colonial empire under French control and avoiding full occupation of the country by Germany, thus maintaining a meaningful degree of French independence and neutrality. The French Government at Vichy never joined the Axis alliance.
What is the best way to migrate from Visual Studio 2005 to Visual Studio 2010?
Can you provide the best resources, i.e. guides, articles, documents, links, videos, to migrate from Visual Studio 2005 to Visual Studio 2010, in other words from .NET 2.0 to .NET 4.0?
I'm not sure which language you develop in, but the transition isn't likely to require changes to your code, since .NET is mostly backwards compatible. You may wish to expand your current software to use features only included in a newer .NET version; see the "What's New" documentation for .NET 4.
Health is characterized by a state of well-being, enthusiasm, and energetic pursuit of life's goals. Illness is characterized by feelings of discomfort, helplessness, and a diminished interest in the future. Once patients recognize that they are ill and possibly face their own mortality, a series of emotional reactions occurs, including anxiety, fear, depression, denial, projection, regression, anger, frustration, withdrawal, and an exaggeration of symptoms. These psychological reactions are general and are not specific to any particular physical illness. Patients must learn to cope not only with the symptoms of the illness but also with life as it is altered by the illness.
Conflict is an important medical and psychological concept to understand. Patients live with conflict. What is conflict? Conflict exists when a patient has a symptom and wants to have it evaluated by a member of the health-care team, but the patient does not want to learn that it represents a "bad" disease process. Conflict is very widespread in medical practice. It is very common for patients to be seen by a physician and at the very end of the consultation, the patient may state, "Oh doctor, there is one other thing that I wanted to tell you!" That information is often the most important reason for that patient to have sought consultation. Patients with an acute myocardial infarction often suffer chest pain for several weeks before the actual event. They convince themselves that it is indigestion or musculoskeletal pain; they do not seek medical attention because they do not want to receive a diagnosis of coronary heart disease. The health-care provider must be able to identify conflict, which is often a precursor of denial, to facilitate care of the patient.
The absolute truth had the attention of American astronomer Edwin Hubble (1889-1953), who was already well known for settling the long-running dispute over whether nebulae were part of the Milky Way or galaxies in their own right.
Hubble provided observable evidence of an expanding universe when he capably proved the Doppler effect on light over distance.
As light moves further away from any observer it is stretched - caused by an expanding space-time - Becoming redder as it's frequency is diminished, and it's wavelength is increased.
This effect is known as Redshift and can be employed to deduce the distance of astral bodies.
Hubble noted that Galaxies appeared to be moving away from Earth, and each other, at a speed proportional to their distance.
As Space expands, Galaxies travel increasingly further over a given time span, giving the appearance of traveling faster. This attentive observation is now known as Hubble's Law.
Gravitational attraction within the Galaxies prevents the Galaxies themselves from expanding, thus maintaining their status quo.
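Hubble's Law can be stated in one line: recession velocity is proportional to distance, v = H0 × d. The sketch below is illustrative only; the round-number value of the Hubble constant is my assumption, as the essay quotes no figure.

```python
# Hubble's Law: v = H0 * d (recession velocity proportional to distance).
# H0 = 70 (km/s)/Mpc is an assumed round-number modern value, not from the essay.
H0 = 70.0  # km/s per megaparsec

def recession_velocity(distance_mpc):
    """Recession velocity in km/s for a galaxy at the given distance in Mpc."""
    return H0 * distance_mpc

def distance_from_velocity(velocity_km_s):
    """Invert the relation: estimate distance in Mpc from a measured velocity."""
    return velocity_km_s / H0

# A galaxy 100 Mpc away recedes at about 7,000 km/s; one twice as far, twice as fast.
print(recession_velocity(100))  # 7000.0
print(recession_velocity(200))  # 14000.0
```

Doubling the distance doubles the apparent recession velocity, which is exactly the proportionality Hubble read off his galaxy data.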
A quarter century after Lemaître's Primeval Atom Hypothesis - he also suggested an expanding Universe - three prominent Scientists proposed a New Theory.
Fred Hoyle (1915-2001), along with two friends and colleagues, Hermann Bondi (1919-2005) and Thomas Gold (1920-2004), was a protagonist of the Steady State model of the Universe. Gold conceived the original idea. All three men believed that the Universe was expanding, but that the Big Bang was not the absolute truth.
The 'Big Bang' is a label coined by Fred Hoyle and was intended to be derogatory.
This throwaway line, though, had such wide public appeal that it superseded 'Dynamic Evolving Model', the original designation imposed on one of Humanity's greatest searches for Truth.
Fred, he of According To Hoyle, celebrity, must have been suitably pleased.
The Steady State Theory was founded on the Universe being Eternal, while taking into account Edwin Hubble's evidence that it was indeed expanding.
To support the Steady State Theory, new matter needed to be constantly created.
Not believing that all matter in the Cosmos was created simultaneously in the Big Bang, Hoyle proposed what he termed a Creation Field.
Here in this field, which supposedly encompasses the entire Universe, fresh matter is continuously being produced, creating new galaxies throughout the Cosmos and maintaining material density as the Universe expands.
While the Steady State fitted in with many of the known observations, - as did the BB - it is now largely discredited, put to sleep by the apperceptive discovery of Cosmic Microwave Background Radiation.
Fred did not believe that the Big Bang, Darwinian Evolution, or the idea that life originated from inorganic matter, as proposed by Pierre Teilhard de Chardin, represented the absolute truth.
Where Fred shone, though, in his beliefs, no pun intended, was with his quite brilliant work on Nucleosynthesis in the Stars, where Nature makes Her building blocks. Could Stars be Fred's Creation Field?
Analyzing the processes by which Stars act as the crucibles that manufacture all the Elements, he discovered that the majority were produced in living stars, and the remainder, the heaviest, were produced in Supernovas.
Ironically, Fred Hoyle found himself vindicating Pierre Teilhard de Chardin's perception of particles to intelligence.
Although Albert Einstein's General Theory of Relativity provided the basis of both models of the Universe, he “definitely disliked” the hazy suggestion of Continuous Creation, preferring a beginning, but rejecting Lemaître's Primeval Atom hypothesis, which offered one.
Einstein later revised his point of view and accepted the Big Bang model of an expanding and contracting Universe.
A meeting with Edwin Hubble, at Mount Wilson Observatory in Pasadena during 1930, examining and discussing Hubble's findings, being shown raw data, and then witnessing, first hand, near and distant galaxies through the lens of the then most powerful telescope in the World, enabled a change of mind.
Following his visit with Edwin Hubble, Einstein publicly withdrew his support for his Cosmological Constant and a static Universe.
Albert conceded that his cosmological constant was his biggest mistake, that Lemaître and Friedmann were quite correct, and declared his support for the Big Bang Theory.
Truly intelligent Human Beings know when a mind change is in order. Albert Einstein was a truly intelligent Human Being.
In between these two prominent theories touched on here, are squeezed a multitude of other theories spawned by General Relativity.
A sad conclusion indeed, after a lifetime of Truth Seeking. Albert's comment moved me to shed a tear.
Every voice needs to be heard, and understood. Often, good ideas are uncovered by taking on board bad ideas.
Imagine, even for a moment, just how much sooner new knowledge would be revealed to us if all the World's Great Thinkers worked together in total Harmony, instead of adopting pet theories and attempting to foist those theories on one another.
The Absolute Truth should be every person's overarching goal in all our endeavors.
Unless we purposely gain an understanding that new knowledge enables us to change our mind, we will go to our deathbeds with our, often distorted, views still firmly clutched to our breasts.
The paradox here is that we must seek that very understanding of our own volition. It cannot be imparted by another without our permission or desire to learn.
If we are serious about finding Truth, then tools like the Hubble Space Telescope, Cassini-Huygens type Deep Space Missions, Space Probes such as Kepler, and the Earth bound Large Hadron Collider . . .
Powered by competent, harmonious, and intuitive, employment of Spiritual Intelligence . . .
Offer Humanity the most promising path and hope, to uncover the absolute truth encompassing the mysteries of the beginning, everything in between, and ultimate fate of the Cosmos. And upon that journey, sincerely and assuredly discovering ourselves.
|
0.999969 |
The best time to visit Koh Phangan, or Thailand in general, would be from July to September...
I think the best time to go to Thailand is right before the scorching hot season, which is March-May, and before the monsoon season that begins in October. I was in Koh Phangan in April, and let me tell you, it was the hottest weather I have ever experienced. It was about 45 degrees Celsius with 100% humidity. Since I went during a heat wave there was not a lot of water. Thailand was actually going through one of the biggest droughts it has seen in more than 60 years, so there was barely any water in the waterfalls. Koh Phangan has 7 waterfalls, but unfortunately only one had water, which was Phaeng Waterfall. Although there wasn't much water, we still made an adventure out of it. It was like extreme rock climbing. I wanted to swim in the water, but there were too many mosquitoes flying around and there were a lot of spider webs.
|
0.998787 |
What is the digging rate in mining equipment? Gold mining involves the use of equipment that specifically excavates earth and rock to aid the mining process.
Digital Drills: The Monster Machines that Mine Bitcoin ... and the exchange rate was high enough ...
The American History of Gold Mining ... miners could purchase supplies and tools for their gold digging and ... Ways to Mine Gold; Gold Mining Equipment.
Gold mining is the process of mining gold or gold ores from the ground. There are several ... These operations typically include diesel-powered, earth-moving equipment, including excavators, bulldozers, wheel loaders, and rock ... This type of gold mining is characterized by its low cost, as each rock is moved only once.
Longwall systems allow a 60-to-100 percent coal recovery rate when surrounding ... Digging. Hydraulic Shovels. Cable Shovels. Continuous Mining Machines.
|
0.99997 |
The following article is from the Center for American Progress asking why the media has not been asking tough questions about how homeland security appears to be failing. You can see the original report at http://www.americanprogress.org/site/pp.asp?c=biJRJ8OVF&b=35046.
Think Again: Whatever Happened to Homeland Security?
Wondering what the Bush administration is doing to protect you from the catastrophic terrorist attack it keeps telling us to expect? Here's what my Internet search turned up in the way of press coverage: The Christian Science Monitor reported that Border Patrol agents "increasingly feel unsupported by the country they are trying to protect," even though they are supposed to be playing a key role in homeland defense; Sen. Patty Murray (D-WA) revealed that the Bush administration's new budget provides no funds for a project she sponsored to track cargo coming through American ports. Then, there was the disturbing story uncovered by the House Homeland Security Committee: When inspectors testing the capabilities of the U.S. Park Police deliberately left a suspicious black bag on the grounds of the Washington Monument, the police failed to respond quickly or effectively. One officer reportedly was caught sleeping. When a committee official called the Department of Homeland Security, he got a recording: "Due to the high level of interest in the new department, all of our lines are busy. However, your call is important to us and we encourage you to call back soon."
Scary? Sure, but as Bob Dole might say, "Where's the outrage?"
One answer is that the media have been burying these stories. Not one of these disturbing accounts of administration failure made it to the front pages of America's papers. Not even Murray's hometown newspaper, the Seattle Times, gave her complaints much attention. The port security story appeared on page B1, even though Washington State is peculiarly vulnerable to a bomb hidden in incoming cargo.
Perhaps editors simply believe the Bush administration's indifference to cargo security to be old news. After all, when the Bush administration suddenly discovered that the Federal Aviation Administration was running low on funds last year, officials quietly raided the cargo program budget for the money they needed. To force the government to spend the money Congress appropriated, Murray had to put a hold on a budget nominee. In the end, of the $75 million Congress appropriated for the cargo program, the government spent just $58 million. Even worse, the administration is providing just 7 percent of the $7.3 billion that the Coast Guard estimates it will need over 10 years to implement the Maritime Transportation Security Act (MTSA). At that rate, we can expect to be protected by roughly 2018 or so.
It would be one thing if the administration were known for its penny-pinching. But a government which has just committed $87 billion to Iraq can't make the claim that it's a careful and prudent steward of taxpayer funds.
Not that the Bush administration hasn't tried. Asked about the administration's crimped spending on port security, Homeland Security Secretary Tom Ridge responded, "We need to have a public debate as to whether or not it is the taxpayer's responsibility to continue to fund port security whether or not since these basically are intermodal facilities where the private sector moves goods in and out for profit, that they would be responsible for picking up most of the difference."
Astonished at his answer, Murray retorted, "I'm listening to your logic, but I would just respectfully say that if one terminal or port in this country said, 'We're not going to ante up the money - we don't have it,' and a terrorist used that weak link to come into this country - all of us would be paying for the consequences of that."
So now we are going to privatize national security involving American ports? Surely this should have made front-page news. It didn't even make the back pages. The New York Times has yet to mention Ridge's misguided scheme to privatize port security in the name of saving money, though it gave plenty of notice to the 10 percent increase in the department's budget next year to $40 billion. Was it too complicated to mention in passing that while the department seems to be spending a lot of money it is shortchanging one of the most vital programs it runs?
It's not as if these are difficult stories to cover. Indeed, they tend to provide the kind of simple narratives the media usually prefer. And nothing sells like stories about government mismanagement. NBC News has turned such stories into show-stopping ratings bonanzas for years.
The chief consequence of media inattentiveness is that the public has no real idea if the Bush administration is doing a good or a bad job protecting the homeland. About the only time homeland security even merits much public notice these days is when Ridge announces that the government is picking up a lot of frightening terrorist chatter. And news of this kind is singularly unhelpful. The average citizen can't do anything with this information except reach for a Zantac.
The news that would be of help - news about the effectiveness of Ridge's operation - the media aren't providing in any detail. Do you have any real idea if Ridge is succeeding or failing? It's virtually impossible to say because the media haven't done enough stories to be able to determine if the port security case is a unique example of incompetence or part of a larger pattern.
In the absence of real knowledge, voters are relying on their partisan prejudice. As a Republican reader of the Buffalo News wrote in a letter to the editor complaining about the paper's coverage of Bush's war on terrorism, the Bush administration must be doing a great job because "since 9/11, Bush and his team have batted 1,000 percent in protecting our land from a second attack." Huh? Not even the Bush administration claims that the absence of an attack is reason to cheer. As officials keep reminding us, another attack is inevitable.
So what gives? Why have the media not seen fit to assess the effectiveness of the homeland security measures the government is taking? It's evident that the media think by and large that the work of the Homeland Security Department is child's work compared with the wars being fought in Afghanistan and Iraq.
To be sure, war is inherently more dramatic and deservedly draws our attention. But by defining the war on terrorism narrowly - as mainly about foreign wars - the media have allowed the administration to promote the politically winning narrative of a war president fighting distant enemies in a robust manner. Defining the war on terrorism this way gives the White House what it wants. Who, after all, wants to be caught second-guessing a president in the middle of a war? As we've seen over the last two years even Democrats who loathe the administration find it difficult to challenge the president's leadership as commander-in-chief.
While it is obvious why the administration keeps public attention focused on our foreign wars, it is not obvious why the media do too. Where's the debate about Ridge? Why, nearly two and a half years after 9/11, has the Homeland Security Department still not put in place a vigorous program to check cargo coming into American ports? There can't be too many tasks that are more important for the department than securing the cargo that comes in by the ton day after day.
If the Bush administration is right about our vulnerability to another major attack, the country will be asking, as it is now about 9/11, what went wrong? Unfortunately, the media will be in part to blame for not asking the questions that should have been asked.
Rick Shenkman is the editor of George Mason University's History News Network (http://hnn.us).
|
0.999999 |
A year ahead of its massive BEV offensive, Volkswagen Group shared some basic targets. The German group intends to base its all-electric cars on a dedicated platform, the modular electric drive matrix (MEB), which will be used not only by Volkswagen but also by Audi, Skoda, Seat and maybe other brands within the group. The total investment is about $7 billion.
The first MEB models from Volkswagen will be available in 2020 (probably late 2019). The target is to sell 150,000 electric cars in 2020, including 100,000 I.D. and I.D. SUV. Today, Volkswagen is below 100,000 plug-ins annually (all BEVs and PHEVs).
Production of affordable, high-volume electric cars is expected to increase and hit more than one million by 2025.
The first wave of MEB cars is expected to total about 10 million globally (all brands), which probably means that the platform will be utilized through about 2030 without major upgrades, besides maybe higher-capacity battery modules, we assume.
Wolfsburg (October 4, 2018) — Individual mobility is on the threshold of a new era: Electric drivetrains and digitalization are set to bring about the most fundamental change the car industry has ever seen. The sales volume of battery electric cars (BEVs) rose by 60 percent in the past year, and 2018 could be the first year that newly registered electric cars reach the one-million mark, a target Volkswagen hopes to hit with the global ID. family by 2025.
Globally, more than six million new Volkswagen vehicles roll out of production plants and onto the road each year. The brand's scale helps make technological innovations affordable for the masses, and it will be no different for the electric vehicles in the new ID. family. Volkswagen's aim is to make electric cars appealing to as many people as possible, thus paving the way to mass electric mobility.
With the I.D., the I.D. CROZZ, the I.D. BUZZ and the I.D. VIZZION, Volkswagen has already presented four concepts. The development of the vehicle technology is nearly complete, as are the designs of the various models. Contracts with the battery suppliers have been signed. Volkswagen is investing more than one billion euros to prepare its plant in Zwickau for the production of MEB vehicles. The company is also committing itself to developing a comprehensive charging infrastructure. In short: Volkswagen's e-mobility offensive is taking shape on all fronts.
The technical backbone of the ID. family is a newly developed vehicle platform: the modular electric drive matrix, or MEB for short. Volkswagen is one of the most successful platform developers in the automotive industry. One example of this is the modular transverse matrix (MQB), probably the most successful vehicle architecture in use at present: about 55 million vehicles are being produced by the Group based on the first generation of MQB. Volkswagen is now applying this proven platform strategy to the era of electric vehicles. The MEB is not just the technical building block for all models in the Volkswagen ID. family, but for many electric cars produced by other Group brands, including Audi, SEAT, Škoda and Volkswagen Commercial Vehicles.
The MEB has two major unique selling propositions. First, it is not a platform for vehicles with combustion engines that has been retroactively modified. Instead it is a modular assembly matrix designed specifically for pure electric cars, which enables Volkswagen to exploit this technology to best effect. Second, the vehicle concept and design can be structured in a more flexible way than ever before; the spectrum ranges from compact cars to SUVs and MPVs. This will enable the Group to achieve economies of scale, thereby making electric cars cheaper and more affordable for many people.
The MEB, designed with fully electric drive systems in mind, enables the vehicle's wheelbase to be lengthened while reducing the body overhangs, resulting in more dynamic proportions. In addition to allowing the designers to create a standalone design DNA for the new zero-emissions vehicles, the body architecture leads to much larger and more efficient vehicle interiors.
The zero-emissions drivetrain in the ID. family primarily consists of an electric motor integrated into the rear axle together with power electronics and a transmission, a high-voltage flat battery pack installed in the vehicle floor to save space, and auxiliary powertrain components integrated into the front end of the vehicle. The power electronics are effectively a link that controls the flow of high-voltage energy between the motor and the battery. The power electronics convert the direct current (DC) stored in the battery into alternating current (AC). Meanwhile, a DC/DC converter supplies the onboard electronics with 12-volt power. The single-speed gearbox transfers the power from the motor to the rear axle. The motor, power electronics and gearbox form a single, compact unit.
The electric motor of the I.D. concept car showcased at the 2016 Paris Auto Show had a power output of 168 hp. The I.D. prototype can accelerate from 0 to 62 mph in less than eight seconds, with a top speed of 99 mph. Electric motors offering either more or less power may be designed for the 2020 series version. In parallel to this, the ID. family will feature a range of battery sizes. The battery's modular layout allows scalable ranges from about 200 miles up to more than 340 miles on the WLTP (Worldwide Harmonized Light Vehicles Test Procedure) cycle. It is installed centrally in the underbody, which saves space, significantly lowers the center of gravity, and gives an optimal weight distribution of close to 50:50.
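Taking the two WLTP endpoints above at face value, and assuming range scales roughly linearly with pack energy (a simplification on my part; the article names no pack sizes), the largest battery option would store about 1.7 times the energy of the smallest:

```python
# Illustrative ratio only: the ~200- and ~340-mile WLTP endpoints come from the
# article; linear range-vs-energy scaling is an assumption.
small_range_mi = 200
large_range_mi = 340
capacity_ratio = large_range_mi / small_range_mi
print(round(capacity_ratio, 2))  # 1.7
```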
The MEB architecture will also enable new assistance, comfort, infotainment, control and display systems to be integrated into vehicles across the board. The I.D. concept presented at the Paris show, for example, featured an AR (augmented reality) head-up display which projects information such as visual cues from the navigation system into the virtual space in front of the vehicle.
To control the huge range of features on board the ID. models, Volkswagen has designed an entirely new end-to-end electronics architecture, called E3, as well as a new operating system, called vw.OS. The new E3 architecture consolidates the control units common across the industry today to create a much more efficient, centralized processing unit. The new operating system will allow Volkswagen to keep the vehicles fresh during their entire lifecycle by making the systems compatible with updates and upgrades accessed via the cloud.
An EV's battery system must meet high expectations, and not only in terms of achieving the best possible range. Drivers also expect that batteries will perform in all conditions and temperatures, and they want the charging time for the cells to be as short as possible. The batteries in the ID. family will meet all of these expectations.
The largest automotive manufacturer in Germany is now applying its extensive experience from decades of development, production and scaling of engines and transmissions. This knowledge has been used in the past few years for fully electric models (BEVs) and plug-in hybrid vehicles (PHEVs), including the e-up! and e-Golf BEVs and the Golf GTE, Passat GTE and Passat Variant GTE PHEVs. Each of these is equipped with high-voltage batteries that are reliable and extremely safe.
These batteries are primarily produced at the Volkswagen components plant in Braunschweig. Volkswagen Components, the business unit responsible for the drive systems, is currently expanding the Braunschweig site to be able to build up to half a million battery systems per year in the future. In addition, a pilot line for battery cell production is currently being built in the Salzgitter factory. The Volkswagen Components business unit also produces the electric motors, and its plant in Kassel has been restructured for this purpose. All told, Volkswagen is investing 1.5 billion dollars in electric mobility at its sites in Braunschweig, Salzgitter and Kassel.
The Volkswagen Components business unit has developed an entirely new battery system for the Volkswagen ID. family that is both less complicated and significantly more efficient than previous systems. Unlike the batteries used up to now, the MEB system has the benefit of being scalable, which means it is relatively simple to integrate it into the ID. models at different performance levels. For example, if a prospective ID. owner is less interested in having a car that can travel great distances, because they primarily use it in the city and only travel short distances, they can opt for a battery with a lower energy yield. This makes the vehicle cheaper. Drivers who frequently drive longer distances would be more likely to opt for a larger battery. This gives the vehicle owner more flexibility. It is precisely this ability to customize performance that makes the new battery system so attractive.
In addition to scalability, there are other advantages to the new battery system, including a weight advantage (thanks to an aluminum housing), the flexibility to use various cell types, and integrated cooling. The battery can be used to drive one or both axles. As the cell modules are arranged in a similar manner to a bar of chocolate, the batteries are also easy to install. Volkswagen has also been able to increase the charging capacity to up to 125 kW, a rate so far not achieved by mass-market EVs, which will shorten the charging time.
The battery housing includes integrated battery cooling, a connection box for the high-voltage and low-voltage electrical systems (AC, DC and 12 V), and the newly developed MEB cell modules, which consist of individual battery cells. The cell controllers (CMCe), control units that monitor the cells (voltage, currents and temperature) and handle cell balancing (ensuring the cells are used uniformly in daily operation), are integrated in the longitudinal beam of the battery housing. The battery electronics unit (BMCe) is integrated in the rear part of the battery system as a further control unit. Cell module connectors are used to link the cell modules to one another; measuring cables communicate with the battery electronics. The battery housing is closed at the top with a cover that is easy to remove in the event that maintenance is required.
Either pouch or prismatic cell types can be used, resulting in high flexibility in cooperation with cell suppliers. Volkswagen created the Centre of Excellence for battery cells in 2017 to aid the development of lithium-ion batteries by providing detailed specifications for the product. In this way, the Centre of Excellence is responsible for all battery cells used by the Volkswagen Group.
A lithium-ion battery cell consists of an anode (carbon, copper foil), a separator (porous polyolefin film, ceramic-coated), a cathode (lithium metal oxide, aluminum film) and an electrolyte (organic solvent, lithium conducting salt, additives). When charging, the lithium ions migrate from the cathode to the anode and are stored there. Electrical energy, supplied by the electrical grid, is thereby converted into chemical energy. The electrons flow through the electrical circuit, while the lithium ions flow through the separator. During the discharge process, to power the electric motor, the lithium ions migrate back to the cathode. The chemical energy is converted into electrical energy once more. In this case, the electrons flow through the electrical circuit and the lithium ions flow through the separator in the opposite direction.
For Volkswagen, e-mobility is more than just a good e-car. All the relevant parameters must work together: the vehicle, the mobility services, and the infrastructure. The Volkswagen brand is building its own charging and energy ecosystem in the form of hardware and software for the vehicles' environment as a whole: at home, at work, in the public realm, and on the highway. As many activities as possible are handled in-house in order to ensure the quality of all services.
According to current surveys, most ID. drivers in Europe will only have to charge their car once a week, as the majority of commuters do not travel more than 30 miles per day. Based on analyses by Volkswagen, it is estimated that about 50 percent of all charging processes will take place at home and another 20 percent will take place at work. Volkswagen will therefore offer a modular program of wall boxes which can be mounted in carports, garages or company parking lots. While a vehicle charges at 2.3 kW via the standard 230 V grid, the wall box will allow the ID. models to be charged at a rate of up to 11 kW (AC); this charging capacity is sufficient to fully charge the Volkswagen's battery overnight (when electricity is often cheaper) or during the working day. The starting price for the Volkswagen wall boxes will be about $350, plus installation costs. Volkswagen also plans to produce wall boxes that offer 22 kW (DC) charging capacity and work in a bidirectional manner, allowing energy to be supplied to the grid. At night, electric vehicles connected to the bidirectional wall boxes will serve as a storage battery for surplus capacity.
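As a rough cross-check of the wall-box figures above: the article gives only the charging rates, so the 62 kWh pack size below is an assumed round number for illustration.

```python
# Time to charge an assumed 62 kWh pack at the two AC rates quoted above.
pack_kwh = 62          # assumed pack size, not stated in the article
household_kw = 2.3     # standard 230 V household socket
wallbox_kw = 11        # MEB wall box (AC)

print(round(pack_kwh / household_kw, 1))  # 27.0 hours
print(round(pack_kwh / wallbox_kw, 1))    # 5.6 hours
```

At the 11 kW rate an overnight charge is comfortable; at 2.3 kW a full charge from empty would span more than a day, which is why the wall box matters.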
A quarter of charging processes will take place at public quick-charge stations, while 5 percent will occur on highways, in both cases at a rate of more than 125 kW. It will be sufficient to charge once for a 340-mile stretch. If the ID. vehicle is charged at a quick-charging station at the aforementioned 125 kW rate, charging will be completed in about 30 minutes.
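A quick sanity check on those two figures, assuming the 125 kW rate is held constant for the full half hour (real charging curves taper, so this is an upper bound):

```python
# Energy delivered at a constant 125 kW over 30 minutes.
power_kw = 125
hours = 0.5
energy_kwh = power_kw * hours
print(energy_kwh)  # 62.5 -> enough for a pack in the 60-65 kWh class
```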
Expansion of the charging basement is of absolute importance. One footfall appear accomplishing this in Europe is the Ionity collective venture. Through Ionity, Volkswagen is allied with the BMW Group, Daimler AG, and Ford Motor Aggregation to actualize a reliable arrangement of able quick-charging stations forth European highways. A absolute of 400 quick-charging stations, dubbed the ‘filling stations of the future’, should access into operation by 2020. The ID. models will be able to allegation batteries at these charging credibility with at a amount of up to 125 kW.
Overall, it is ytical that the amplification of the charging basement allegation be massively pushed in all countries. It goes after adage that Volkswagen is accidental to the amplification of the charging infrastructure: All 4,000 accustomed Volkswagen dealers in Europe will be able with on-site charging stations. Volkswagen will additionally aggrandize the arrangement of charging stations at its accumulation sites in agent parking lots from 1,000 to 5,000 by 2020, and accommodate regeneratively-generated adeptness at the aggregation charging credibility wherever possible.
In the future, the Volkswagen “WE” advancement belvedere will action the “We Charge” app-controlled account to acknowledgment buyer questions about charging. “We Charge” will affluence ambit all-overs by assuming the best acceptable charging point, reserving it, and abyssal to it. The “We Charge” functionality is currently planned for the European market, and will be facilitated through Volkswagen’s shareholding in Hubject – eRoaming. The belvedere makes it accessible to allegation electric cartage throughout Europe no amount who the provider is, and utilizes 300 ally and 55,000 charging points. Payments are currently fabricated via RFID or smartphone app with a QR code. In the not-too-distant future, the arrangement will be revolutionized with “Plug & Charge”, which uses block-chain technology to facilitate announcement and acquittal for the charging action anon via the ID. archetypal itself.
The approaching of e-mobility provides several added acute solutions. Chip into the home adeptness network, zero-emissions cartage will balance the adeptness filigree by autumn surplus accommodation in the adeptness network, which frequently accumulate at night and so far abide unused. Volkswagen wants to go one footfall added than artlessly accouterment bank boxes: The aggregation is additionally planning to architectonics a digitally affiliated home activity administration arrangement (HEMS), which can be acclimated to abate activity costs for households and advancement alike. The HEMS manages the activity appeal of the e-car and the abode heating pump while accumulation photovoltaics and domiciliary batteries. In the evening, the user will access the ambit they crave for the afterward day and at what time. The ID. agent communicates with the HEMS and establishes the best charging aeon on the base of the accepted electricity amount and availability. In the accident of a adeptness outage, the HEMS can, abatement aback on the accessible balance activity of the ID. agent to briefly adeptness the home.
Even with the anticipated increase in the number of registered electric vehicles, the public power grid will be sufficient. Take Germany as an example: one million electric vehicles would consume about 2.4 TWh (2,400,000,000 kWh) of power per year. Annual energy consumption in Germany is 517 TWh. Accordingly, energy consumption will rise by only 0.5 percent due to the use of electric cars.
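The 0.5 percent figure can be checked directly from the two consumption numbers quoted above:

```python
# Grid-impact estimate: one million EVs at ~2.4 TWh/year versus
# Germany's annual electricity consumption of 517 TWh.
ev_demand_twh = 2.4
annual_consumption_twh = 517.0

increase_pct = ev_demand_twh / annual_consumption_twh * 100
print(f"{increase_pct:.2f}%")  # ~0.46%, i.e. roughly half a percent
```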
In 2007 an album of photographs titled Views in Wigtownshire was donated to Stranraer Museum. This is one of the earliest collections of Wigtownshire photographs and probably dates from the late 1860s.
Albums of old photographs of the area are not unusual but the 2007 acquisition is, in several respects. Firstly, it is of high quality as regards both materials and photographs. It seems obvious that it is not a commercial production but either unique or one of a very few copies intended for private circulation. Secondly, the photographs are of a very early date and well before the picture postcard era. They seem to have been taken at different times but all may date from the 1860s and 1870s.
Another unusual feature is the subject-matter of the photos and their geographical distribution. In some cases, for example the ford on the Bladnoch at Kenmore and Stein Head, they are probably unique as professional examples and devoid of commercial appeal. As the distribution map shows, they tend to cluster in groups, leaving obvious subjects unrepresented: the album deals more or less with the Machars but contains no photographs of Newton Stewart or Wigtown, two of the largest towns. On the other hand, it has three photographs of Whithorn Priory.
Not the least remarkable feature of Views in Wigtownshire is the route by which it came into the possession of the museum. It was purchased in Italy some years ago by the late John G. Ross, self-styled photographer and explorer, who decided to donate it to the Dumfries and Galloway Museums Service because of its local content. This was accomplished through the medium of Mr R Crewdson of Kirkcudbright and Mr R. Sutcliffe of Kingston-upon-Thames. The album has therefore travelled far and circuitously.
All these features raise fascinating questions about the album's origins, questions to which they also suggest answers. The strongest clue is the Wigtownshire - Italy connection. In 1908 the Earl of Galloway sold Galloway House, its policies, and the surrounding farms to Sir Malcolm McEacharn. After the latter's death in 1910 his widow and then his son Neil McEacharn inherited the properties. In 1931 the latter sold them and moved to northern Italy, where he established his world-famous garden at the Villa Taranto on Lake Maggiore. Did he find the album in Galloway House and take it with him to Italy? The quality of the book and of the photographs suggests a wealthy owner. Most of the photographs are of places in the South Machars in the vicinity of Galloway House with a particular emphasis on the Whithorn area, where the former Galloway family home was located at Glasserton House.
New Exchange 2010, can't get autodiscover working?
Status: offline This is a bit of a nightmare and I have done a few searches but haven't managed to find a solution.
I've started a limited company to work via as an IT contractor and I decided to get an Exchange server up and running so that I could use my own company domain for my email etc. and also because I'd like to get into Exchange contracting in the long run, so it's a good learning opportunity.
Well here I am with my first real issue! I've got mail flowing fine, it works. I use a smart host for my outbound SMTP so that I don't get my emails rejected by the various recipients. I do this because my ISP doesn't allow a static IP address. Hence I am using dynamic dns.
Now here's where the confusion starts: Autodiscover! I don't know if I need to be using a CNAME to point the alias autodiscover at my domain or if I should be using an SRV record instead. And if I do need to use an SRV record, I don't know if the name should be autodiscover and target mydomain.com or if the target should be the dynamicdns.com.
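For background, and not specific to this particular setup: an Autodiscover-capable Outlook client works through a fixed sequence of candidate endpoints derived from the user's SMTP domain, and only consults the SRV record as a fallback. So either a host record (A/CNAME) for autodiscover.&lt;domain&gt;, or an SRV record whose target the client can reach, is sufficient. A sketch of the lookup order (the domain name is just an example):

```python
def autodiscover_candidates(smtp_domain: str) -> list:
    """Ordered lookups an Outlook client attempts for Autodiscover.

    The SRV record (_autodiscover._tcp) is a fallback tried only after
    the HTTPS endpoints fail, which is why publishing either a working
    autodiscover.<domain> host record or an SRV record is enough.
    """
    return [
        # 1. HTTPS on the root domain
        f"https://{smtp_domain}/autodiscover/autodiscover.xml",
        # 2. HTTPS on the autodiscover host
        f"https://autodiscover.{smtp_domain}/autodiscover/autodiscover.xml",
        # 3. unauthenticated redirect probe over plain HTTP
        f"http://autodiscover.{smtp_domain}/autodiscover/autodiscover.xml",
        # 4. DNS SRV fallback, resolved to a host name and port 443
        f"_autodiscover._tcp.{smtp_domain}",
    ]

for step in autodiscover_candidates("mydomain.co.uk"):
    print(step)
```

Whichever record you publish, the host name the client finally connects to has to appear on the server's SSL certificate, otherwise certificate validation fails.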
I have gone into PS and I have tried having the Autodiscover InternalUri setting set to both https://mydomain.com/autodiscover/autodiscover.xml and https://mail.mydomain.com/autodiscover/autodiscover.xml and that's made no difference.
The main issue I have is that I am changing multiple settings at the same time as I'm unsure what they should be set to and it's just going to keep me going round in circles. I am unsure what my web domain's DNS record should be set as, and I am unsure what my Autodiscover InternalUri setting should be set as.
If anyone can help I'd really appreciate it. The correct ports are open and the ExRCA confirms this. I don't have an SSL cert yet as this is also something I'm not entirely sure how to go about setting up at this point. I'm the only user, so a self-signed cert should be fine. I have the default self-signed cert installed that is created when you first install Exchange. How do I create my own self-signed SSL cert for the web services?
Status: offline Now tried setting autodiscover InternalUri to https://autodiscover.mydomain.co.uk/autodiscover/autodiscover.xml to get it working, but still not right.
Attempting to resolve the host name autodiscover.mydomain.co.uk in DNS.
Testing TCP port 443 on host autodiscover.mydomain.co.uk to ensure it's listening and open.
ExRCA is attempting to obtain the SSL certificate from remote server autodiscover.mydomain.co.uk on port 443.
Remote Certificate Subject: CN=SRV-EXCH01, Issuer: CN=SRV-EXCH01.
Host name autodiscover.mydomain.co.uk doesn't match any name found on the server certificate CN=SRV-EXCH01.
how do I get my 3 1/2 year old to stop sassing while still encouraging him to use his words?
I want to encourage my 3 1/2 yr old son to use his words but I hate that he talks back all the time and he'll scream at me and argue.
Talk to him calmly and let him know that is not good behavior. Follow this up by turning demonstratively away from him, sort of ignoring him... kids hate displeasing their parents. They like being praised, so they will try to do better. However, he will not stop overnight, so you'll need to keep this up for a while until he gets it.
What should you do when a judge ignores your motions?
Bill Windsor is not an attorney, but he is a very experienced pro se litigant.
This is one of the questions that I get asked the most: "What do I do because my judge is not ruling on my motions?"
Know that they do it because they are dishonest, corrupt, and/or dislike pro se parties.
I file motions trying to get judges to act. I file appeals. I file judicial misconduct complaints. I file motions to seek recusal or disqualification of the judge. I attempt to get criminal charges brought against the judges for obstruction of justice. But I am hated by dishonest and corrupt judges, so be careful.
I document everything. When I am in the same town as the courthouse, I hand-deliver my motions to the clerk of the court myself or use a courier service. Regardless of how you get your motions to the clerk, you want a receipt. I have recently learned that, at least in some states, the United States Postal Service is superior to hand delivery.
Document, document, document. Clerks will destroy or lose your filings on orders from a corrupt judge.
I print copies of the docket on a regular basis so I have proof of what the docket showed in case the clerk criminally alters the docket in the future.
I file motions politely requesting a hearing on my motions. When that is ignored, I file a less polite motion for order on my motion(s).
Court clerks are duty-bound to know the law and to docket and process the filings that they receive. This does not mean they do. My experience is that many court clerks are corrupt.
The key legal issue to know is this: Delivery of documents to the office of the clerk of the court constitutes filing.
The office of the clerk has no legal right to block the docketing of anything that is properly delivered to the clerk of the court. So, the mission is to ensure that your documents are delivered to the clerk. You can do this by mail, certified mail return receipt, Federal Express with a direct signature required, or by personal delivery. The method you use does matter. Use the United States Postal Service with tracking so you can prove when it was sent. If it is a really important filing, send it Express Mail with a delivery date guarantee. It is best to get a signature to prove delivery to and receipt by someone in the office of the clerk of the court, but proof that it was received from the U.S. Postal Service is sufficient. Case law provides benefits to using the mail that do not exist with FedEx, UPS, or a courier service. Hand delivering them yourself isn't best as it is awkward at best for you to get a signed receipt. It's easy if you use the postal service.
I always send a cover letter listing the precise documents that are being filed. That, the documents, and the signed receipt or proof of receipt PROVES filing.
If anyone interferes with the docketing of the documents received by the clerk's office, I consider that they have committed the crime of obstruction of justice, and I will attempt to get criminal charges against them with the District Attorney and the Grand Jury.
The federal statute, 18 U.S.C. § 2071, provides: "(b) Whoever, having the custody of any such record, proceeding, map, book, document, paper, or other thing, willfully and unlawfully conceals, removes, mutilates, obliterates, falsifies, or destroys the same, shall be fined under this title or imprisoned not more than three years, or both; and shall forfeit his office and be disqualified from holding any office under the United States. As used in this subsection, the term "office" does not include the office held by any person as a retired officer of the Armed Forces of the United States."
I research the law using www.versuslaw.com. This is a low-cost online service that allows you to search all the court cases using Boolean logic, so I can search for precisely what I need. Versuslaw has help pages that explain Boolean logic if you don't know what it is or how to use it. The basic functions are and, or, and not. If you want to search for either recusal or disqualification, your search is (recusal) or (disqualification) because that will bring up uses of either word in a case. If you want to search for recusal and overruled, your search is (recusal) and (overruled) because that will bring up a list of cases where recusal and overruled were used in the same court decision. If you want to search for recusal but not overruled, your search is (recusal) and not (overruled) because that will bring up a list of cases that use recusal but do not use overruled.
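The and / or / and not behaviour described above can be illustrated with a small sketch; the case snippets below are invented for illustration only:

```python
def matches(text, include=(), exclude=()):
    """Case-insensitive Boolean match: every include term must appear
    in the text, and no exclude term may appear."""
    t = text.lower()
    return all(term in t for term in include) and not any(term in t for term in exclude)

# Hypothetical case snippets, purely for illustration.
cases = [
    "Motion for recusal denied; objection overruled.",
    "Motion for recusal granted on remand.",
    "Summary judgment entered for defendant.",
]

# Equivalent of the search (recusal) and not (overruled)
hits = [c for c in cases if matches(c, include=("recusal",), exclude=("overruled",))]
print(hits)  # only the case mentioning recusal but not overruled
```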
Understand that a judge does not have the option to ignore motions. Ruling on motions is a "ministerial act." It is a requirement of the judge's job.
I suspect the law will be the same in every state, so simply research your state's case law if you are in a state court. If you research your state, please send me what you come up with, and I will add the citations for the benefit of those in your state.
In 2010, I researched Georgia cases, and here are the citations that I found applicable.
United States v. Conlin, 551 F.2d 534 (2nd Cir. 03/17/1977); United States v. Claypoole, 227 F.2d 752 (3rd Cir. 12/07/1955); United States v. Donner, 497 F.2d 184 (7th Cir. 05/03/1974); United States v. May, 625 F.2d 186 (8th Cir. 05/30/1980); United States v. Salazar, 455 F.3d 1022 (9th Cir. 07/24/2006); United States v. Lang, No. 02-4075 (10th Cir. 04/21/2004). This case has some very good information -- United States v. Rosner, 352 F. Supp. 915 (S.D.N.Y. 12/14/1972). 18 U.S.C. § 2071 case Law Search Results from versuslaw.com.
*** SPOILER ALERT *** If you have not read Paranormalcy and don't want to know anything about the second book in the series then don't read this review - read Paranormalcy first!
Evie has left the IPCA behind forever and is finally living her life as a normal teenager - complete with boring classes, worries about getting into the one and only college she wants to get into, and a gym teacher who seems to have it in for her. So life is normal, not perfect, but normal - well, as normal as it can get for a supernatural being who has no soul of her own and lives in a town where all the freaky supernatural things seem to be gathering and hanging out together in relative safety, and where every single one of them seems to be taking a little too much interest in Evie. The only thing that is really perfect is her amazing boyfriend Lend, who has moved on to college but still finds time for his girlfriend.
When the IPCA contacts Evie and offers her work as a contractor, the offer is too good to refuse, even though it means entering back into a world she thought she had left behind forever. It also means a new complication enters her life, a highly annoying but also seemingly harmless Jack, who can bounce across the faerie lands like most people navigate their way through city streets. Working for IPCA is a secret, partly because Lend is dead set against it, but also because in some ways Evie herself is not sure what she is doing. Then things start to turn deadly and Evie finds herself dodging danger like she has never known before, and as a contractor for IPCA instead of one of their own, she doesn't get all the toys she used to have to back her up. Depending on other people isn't always a bad thing, but what happens when some of those people are keeping secrets, secrets that could be deadly?
Supernaturally is the second book in what could be a trilogy (judging by the hint about a conclusion to the story in the next book) and is one of the best novels for teens in the supernatural genre at the moment. While it took a little while to really get absorbed in the story again, this was only because it has been some time since I read the first book in the series and the details got a little hazy because of all the books I have read in between. Evie's world continues to get more interesting, and with this addition to the trilogy secrets are revealed and you learn more about Evie, who she is, what she is, and what started the process for some of the supernaturals evolving (and it may not be what you expect). The story line is fast paced for the majority, with the action taking place in a relatively short time. This was an enjoyable read, and there is some interesting mythology in Evie's world which means that the supernaturals are not cookie-cutter copies of other vampires or werewolves in other novels.
Context: The establishment of appropriate working length is one of the most critical steps in endodontic therapy. Electronic apex locators have been introduced to determine the working length. The development of electronic apex locators has helped make the assessment of the working length more accurate and predictable, along with a reduction in treatment time and radiation dose. Objectives: The aim of this study was to compare the efficacy of electronic apex locators after cleaning and shaping of the root canals and whether there was any alteration in accuracy when used in the presence of irrigants. Materials and Methods: Seventy extracted human permanent molars with mature apices were selected. Equal numbers of maxillary and mandibular permanent molars (35 each) were sectioned at the cemento-enamel junction. Access opening was done and only the mesiobuccal root canal was studied for the purpose of standardization. Electronic working length measurements were taken before and after preparation of the mesiobuccal canal with Root ZX and ProPex II using various irrigants. Statistical Analysis Used: The data were statistically analyzed using a paired t-test at the 0.05 level of significance. Results: P-values comparing actual and final canal lengths were 0.001 (NaOCl), 0.006 (CHX) and 0.020 (LA) for Root ZX, and 0.001 for ProPex II. When the data were compared, the results were statistically significant (P < 0.05). Conclusion: Within the limitations of this study, Root ZX can be considered an accurate electronic apex locator, and CHX as an irrigant matched more precisely with the actual canal length measurements.
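For readers unfamiliar with the statistic mentioned above: a paired t-test compares two measurements taken on the same specimens (here, an electronic length reading versus the actual canal length). A minimal sketch with made-up lengths, not the study's data:

```python
import math

def paired_t(x, y):
    """t statistic for a paired t-test on two equal-length samples."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean = sum(diffs) / n
    # sample variance of the paired differences
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical measurements (mm) for five canals, NOT the study's data.
actual     = [19.5, 20.0, 18.5, 21.0, 19.0]
electronic = [19.4, 19.8, 18.6, 20.9, 18.9]
t = paired_t(actual, electronic)
print(round(t, 2))  # compare against the critical t value at alpha = 0.05
```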
Context: Mixed dentition arch analysis is an important criterion in determining the type of orthodontic treatment plan. Different mixed dentition arch analysis systems are available, and among them both Moyers' and the Tanaka-Johnston method of space analysis were developed for North American children. Anthropological studies reveal that tooth size varies among different ethnicities. The present study was performed to determine the reliability of Moyers' and the Tanaka-Johnston method of mixed dentition arch analysis among the Bengali population. Aims: To perform a comparative evaluation of the two mixed dentition space analysis systems among the Bengali population. Materials and Methods: Dental casts of maxillary and mandibular arches of 70 Bengali children with permanent dentitions were fabricated. The mesiodistal crown dimensions of all erupted permanent incisors, canines, and premolars were measured with digital callipers. For a given sum of mandibular incisors, the Moyers' and Tanaka-Johnston mixed dentition arch analyses were calculated and further statistical analysis was carried out. Statistical analysis used: Descriptive statistics including the mean, standard deviation, and minimum and maximum values, unpaired t-tests, and the correlation coefficient "r" were calculated and tabulated. Results: Tanaka and Johnston regression equations under-estimated the mesiodistal widths of permanent canines and premolars. On the other hand, there were no statistically significant differences between actual mesiodistal widths of canines and premolars and the predicted widths from Moyers' charts at the 50% level for the lower and upper arches among the Bengali population. Conclusions: The study suggested that both the Moyers' and Tanaka-Johnston mixed dentition arch analyses are applicable in the Bengali population, but with a little modification of their regression equations.
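The Tanaka and Johnston prediction discussed above is a simple linear rule: half the summed mesiodistal width of the four mandibular incisors, plus a constant of 10.5 mm for a lower quadrant or 11.0 mm for an upper quadrant. A sketch:

```python
def tanaka_johnston(lower_incisor_sum_mm: float, arch: str) -> float:
    """Predicted combined mesiodistal width (mm) of the canine and two
    premolars in one quadrant, from the four lower incisors' summed width."""
    half = lower_incisor_sum_mm / 2
    if arch == "lower":
        return half + 10.5
    if arch == "upper":
        return half + 11.0
    raise ValueError("arch must be 'lower' or 'upper'")

# Example: four lower incisors summing to 22.0 mm
print(tanaka_johnston(22.0, "lower"))  # 21.5
print(tanaka_johnston(22.0, "upper"))  # 22.0
```

The study's point is that these constants were derived from North American children, so a population such as the Bengali one may need slightly different regression terms.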
Introduction: Successful root canal treatment depends primarily on the removal of micro-organisms through chemo-mechanical instrumentation of the root canal system. This encompasses shaping by mechanical removal of the dentine and cleaning by chemical disinfection of microorganisms and dissolution of organic tissues from the root canal. While root canal shaping can be predictably and efficiently attained with advanced instrumentation technology, effective cleaning of the entire root canal system remains a challenge. Rotary nickel-titanium instruments are known for their efficient preparation of the root canal. This is mainly because of the superelasticity of the nickel-titanium alloy, which gives increased flexibility and allows the instrument to efficiently follow the original path of the root canal. The purpose of this study is to compare the cleaning efficiency and shaping ability of Mtwo, K3, and RaCe Ni-Ti rotary instruments during the preparation of curved canals in extracted molars. Materials and Methods: Thirty teeth with 18 mm as their working length were selected and divided into three groups of 10 teeth each. Angle of curvature and radius of curvature were determined using computerized tomography. Pre- and post-operative measurements of canal width and volume were recorded and compared using CT. The teeth were then sectioned into two halves and subjected to scanning electron microscopy. Images were taken at the level of the apical third, middle third and coronal third for debris and smear layer. Scoring was done separately for both debris and smear layer. Results: Results were tabulated and statistically analyzed to evaluate the shaping ability and cleaning efficiency. Instruments were examined for any deformation or fracture during canal preparation. Conclusion: Mtwo showed greater enlargements at all three levels when its width and volume were compared with the other two instruments. K3 was better than RaCe when compared between them.
In the scanning electron microscope study for debris and smear layer, Mtwo performed better, followed by K3 and RaCe.
Aim: The aim was to evaluate the quantitative changes in nuclear diameter (ND), cytoplasmic diameter (CD) and nuclear/cytoplasmic (N/C) ratio in cytological buccal smears of iron deficiency anemic patients by comparing with normal healthy individuals. Materials and Methods: The study group consisted of 40 healthy individuals and 40 iron deficiency anemic patients who were selected on clinical history and hematological investigations, and confirmed by serum ferritin levels. Exfoliative buccal smears stained with PAP stain were evaluated for cytoplasmic and nuclear diameters and nuclear/cytoplasmic (N/C) ratios using the Image-Pro Express Version 6.0 image analysis system. All the parameters were statistically analyzed using an unpaired t-test. Results: A significant increase is seen in the average nuclear diameter (ND) and N/C ratio of the anemic group when compared to the control group. The average cytoplasmic diameter (CD) did not show any statistical difference between the two groups. Conclusion: Oral exfoliative cytological techniques could possibly be a noninvasive alternative diagnostic tool for iron deficiency anemia.
Aim: The aim was to evaluate and compare the efficacy of the ProTaper Universal rotary retreatment system, with or without solvent, and stainless steel hand files for removing endodontic filling from root canals, and also to compare the retreatment time for each system. Materials and Methods: Thirty extracted mandibular premolars with single straight canals were endodontically treated. The teeth were divided into three major groups of 10 specimens each. Obturating material was removed in group 1 by stainless steel hand files with RC Solve, in group 2 by ProTaper Universal retreatment instruments, and in group 3 by ProTaper Universal retreatment instruments along with RC Solve. Retreatment was considered complete for all groups when no filling material was observed on the instruments. The retreatment time was recorded for each tooth. All specimens were grooved longitudinally in a buccolingual direction. The split halves were examined under a stereomicroscope and images were captured and analyzed. The remaining filling debris area ratios were considered for statistical analysis. Results: With the ANOVA test, statistical analysis showed no statistically significant difference in the amount of filling remnants between the groups (P > 0.05). The differences between the means of the groups were statistically significant with regard to retreatment time. Conclusion: Irrespective of the technique used, all the specimens had some remnants on the root canal wall. ProTaper Universal retreatment system files alone proved to be faster than the other experimental groups.
Background: The availability of oral health services is very scarce in rural India; therefore, the unmet treatment needs of the rural population are very high. Hence, a retrospective study was conducted to evaluate the types of patients, disease pattern, and services rendered in outreach programs in rural areas of Haryana. Materials and Methods: The data were obtained from records of outreach programs conducted in the last 3 months by Swami Devi Dyal Hospital and Dental College. The data were analyzed using descriptive statistics. Results: A total of 1371 individuals in the age group of 4-70 years (56.8% males and 43.2% females) attended the outreach programs seeking treatment. Dental caries (43.7%), gingivitis (27.2%), and periodontitis (22.9%) were the commonly observed dental diseases. The services provided were oral prophylaxis (51.2%), restoration (22.9%), referral (20%), and extractions (8.8%). Conclusion: The attendance and utilization of dental services in the outreach programs seem to be influenced by sociodemographic characteristics of the population.
Objectives: The objective of this in vitro study was to compare the microtensile dentin bond strength (μTBS) of five seventh-generation dentin bonding agents (DBA) with a fifth-generation DBA before and after thermocycling. Materials and Methods: Ten extracted teeth were assigned to the fifth-generation control group (Optibond Solo) and to each of the five experimental groups, namely Group I (G-Bond), Group II (Clearfil S3), Group III (One Coat 7.0), Group IV (Xeno V), and Group V (Optibond All-in-One). The crown portions of the teeth were horizontally sectioned below the central groove to expose the dentin. The adhesive resins from all groups were bonded to the teeth with their respective composites. Specimens of size 1 × 1 × 6 mm3 were obtained. Fifty specimens bonded to dentin from each group were selected. Twenty-five of the specimens were tested for debonding without thermocycling and the remaining were subjected to thermocycling followed by μTBS testing. The data were analyzed with one-way ANOVA and Dunnett's test for comparison with the reference group (fifth generation). Results: There was no significant difference (P > 0.05) between the fifth- and seventh-generation adhesives before and after thermocycling. The results of our study showed a significantly higher (P < 0.05) μTBS for the seventh-generation Group II (Clearfil S3) compared to the fifth generation before and after thermocycling. Conclusion: The study demonstrated that the Clearfil S3 bond had the highest μTBS values. In addition, the five tested seventh-generation adhesive resins were comparable to the fifth-generation DBA.
Aim: The aim of this ex vivo study was to evaluate the effect of in-office bleaching agents containing 35% and 38% hydrogen peroxide on the phosphate concentration of the enamel, evaluated by Raman spectroscopy. Materials and Methods: Forty noncarious, craze-free human maxillary incisors, extracted for periodontal reasons, were used in this study. Baseline Raman spectra from each specimen were obtained before the application of the bleaching agent to assess the phosphate content present in the teeth. The teeth were divided into two groups: Group A - bleached with Pola Office bleach (35% hydrogen peroxide, potassium nitrate; light activated); Group B - bleached with Opalescence Xtra bleach (38% hydrogen peroxide, potassium nitrate and fluoride; chemically activated). After the bleaching procedure, Raman spectra of the treated specimens were obtained to assess the phosphate loss after the bleaching treatment. Results: The results showed that the chemically activated bleaching agent produced less phosphate loss when compared with the light-activated bleaching agent. Conclusion: Within the limitations of this study, it can be concluded that the chemically activated bleaching agent showed minimal phosphate loss compared to the light-activated bleaching agent. The chemically activated bleaching agent was better than the light-activated bleaching agent when the values were evaluated statistically.
The major challenge of performing root canal treatment in a pulpless tooth with an open apex is to obtain a good apical seal. MTA has been successfully used to achieve a good apical seal, wherein the root canal obturation can be done immediately. MTA and white Portland cement have shown similar physical, chemical and biological properties, and have also shown similar outcomes when used in animal studies and human trials. In our study, the open apices of three non-vital upper central incisors were plugged using modified white Portland cement. Follow-up at 3 to 6 months revealed absence of clinical symptoms and disappearance of peri-apical rarefactions. The positive clinical outcome may encourage the future use of white Portland cement as an apical plug material in non-vital teeth with open apices, as a much cheaper substitute for MTA.
Plasma cell gingivitis is an uncommon inflammatory condition of uncertain etiology, often associated with flavoured chewing gum, spices, foods, candies, or dentifrices. The diagnosis of plasma cell gingivitis is based on comprehensive history taking, clinical examination, and appropriate diagnostic tests. Here we present a rare case of plasma cell gingivitis caused by consumption of colocasia (arbi) leaves. Colocasia is a kind of vegetable, very commonly consumed in the regions of North India.
Sialoliths are the most common disease of the salivary glands. They may occur in any of the salivary gland ducts but are most common in Wharton's duct and the submandibular gland. This report presents the clinical and radiographical signs of two unusually large sialoliths which exfoliated by themselves. There were painless swellings on the floor of the mouth in both cases. Radiographical examination revealed a large irregular radiopaque mass superimposed over the right canine and premolar areas. The sialoliths were yellow in color and approximately 1.8 cm and 2.1 cm in size.
Oligodontia is one of the most common developmental abnormalities in humans. The present case report highlights the features of oligodontia in a 12-year-old male patient which was managed successfully with a multidisciplinary approach. Familial oligodontia presents as an absence of varying numbers of secondary teeth seen as an isolated trait. Advances in the understanding of tooth development and the genetic control of tooth morphology not only allow clinical research to broaden the knowledge of tooth agenesis but also provide optimum clinical care.
One of the most common developmental defects seen in south India is cleft lip and palate. A few of these are associated with lip pits and are termed Van der Woude syndrome. Early diagnosis of this rare syndrome is very necessary, followed by a multidisciplinary approach. It is also necessary to differentiate this syndrome from other syndromes which may present similar features. A case report of the same is presented here, requiring a multidisciplinary approach for a functional and esthetically pleasing outcome.
The development of adhesive dentistry has allowed dentists to use the patient's own fragment to restore the fractured tooth, which is considered to be the most conservative method of treatment of crown fracture allowing restoration of original dental anatomy, thus rehabilitating function and esthetics in a short time by preserving dental tissues. The tooth fragment reattachment is preferred over full coverage crowns or composite resin restoration because it conserves sound tooth structure, and is more esthetic, maintaining the original anatomy and translucency, and the rate of incisal wear also matches that of original tooth structure. Presented here is a report of two cases of crown fracture managed by reattachment procedures.
The earliest evidence of demineralization on the smooth enamel surface of a crown is a white spot lesion. The conventional treatment of these white spot lesions includes topical fluoride application, improving the oral hygiene, and use of remineralizing agents. The following article illustrates the use of a novel approach to treat smooth-surface noncavitated white spot lesions microinvasively, based on infiltration of enamel caries with low-viscosity light-curing resins called infiltrants. This treatment aims at both preventing caries progression and improving esthetics by diminishing the opacity.
Solitary median maxillary central incisor (SMMCI) is a unique developmental anomaly in primary dentition. It involves central incisor tooth germs and may or may not be associated with other anomalies. Its presence, concomitant with fusion of right mandibular incisors has not previously been reported. A 5-year-old girl was presented with a single symmetrical primary maxillary incisor at the midline, with the absence of labial frenulum, an indistinct philtrum and a prominent midpalatal ridge. There was an associated fused tooth in the right incisor region and radiographic examination confirmed only one maxillary central incisor in both the dentitions. Family history revealed that the father of the girl also had a similar anomaly providing probable evidence of etiological role for heredity in SMMCI.
Gingival fibromatosis is a benign oral condition characterized by enlargement of the gingival tissues. It usually develops as an isolated disorder but can be one of the features of a syndrome. This case report describes a 5-year-old male with severe gingival hyperplasia and mild mental retardation, complicated by open bite, abnormal occlusion, open lip posture, and disabilities associated with mastication and speech. Full-mouth gingivectomy was performed in a single sitting under general anesthesia using electrocautery.
Hereditary gingival fibromatosis is a rare condition characterized by varying degrees of gingival overgrowth. It usually develops as an isolated disorder but can manifest as part of a multisystem syndrome. We present a case of a 13-year-old girl with severe enlargement of the gingiva covering almost the entire crowns of the teeth in both the maxillary and mandibular arches. The differential diagnosis included drug-induced and idiopathic gingival enlargement. Excess gingival tissue was removed by full-mouth gingivectomy and sent for histopathological examination. The postoperative course was uneventful and the patient's esthetics improved significantly. At 12 months postoperatively there was no recurrence.
Oral focal mucinosis (OFM), the oral counterpart of cutaneous focal mucinosis, is a rare disease of unknown etiology. Its pathogenesis may involve overproduction of hyaluronic acid by fibroblasts at the expense of collagen production, resulting in focal myxoid degeneration of the connective tissue, primarily affecting the mucosa overlying bone. It has no distinctive clinical features, and the diagnosis is based solely on histopathological features. This article reports a 32-year-old female with this rare disease involving the posterior palatal mucosa and discusses its clinicopathological features and the differential diagnosis of myxomatous lesions of the oral cavity.
Implant placement in the maxillary anterior region poses some of the greatest aesthetic challenges in implant dentistry, because tooth loss leads to bone resorption and collapse of the gingival architecture, which in turn lead to aesthetic compromise and inadequate bone for implant placement. Immediate implant placement into a fresh extraction socket reduces treatment time and cost, preserves the gingival aesthetics, and increases patient comfort. This article describes the procedure for immediate implant placement in a fresh extraction socket and early loading of the implant with a zirconia crown. Clinical and radiographic examination revealed the width and length of the tooth for selecting implant size and design. A cement-retained zirconia crown was used for early loading. The implant was successfully loaded and remained functional during the 36-month follow-up period. Immediate placement and early loading of dental implants provide advantages such as fewer surgical procedures, shorter treatment time, and improved aesthetics and psychological confidence.
Cysticercosis is caused by the larvae of the pig tapeworm, Taenia solium. Oral cysticercosis is a rare event and is often a diagnostic challenge to the clinician. We report a 12-year-old girl who presented with a single, painless nodule on the lower lip that was diagnosed as cysticercosis. Current literature on the clinical presentations, investigations, and treatment of the condition is reviewed in this article. We have also proposed a set of criteria for the diagnosis of oral cysticercosis.
Ameloblastoma is the most common tumor of odontogenic origin. There are various types of this tumor, and confusion still exists among clinicians about the correct classification. Multicystic ameloblastoma is the most frequent subtype, while unicystic ameloblastoma can be considered a variant of the solid or multicystic form. This subtype is considered a less aggressive tumor with a variable recurrence rate. However, its frequency is often underestimated. The aim of this article is to review the recent literature on unicystic ameloblastoma, using our unusual clinical case as a starting point for this discussion. A 30-year-old man who had been complaining of slight pain in the premolar and molar area of the left side of the mandible was examined at our department. X-rays revealed a unilocular radiolucency with radiopaque margins. The first histological diagnosis was an odontogenic cyst. Successive histological evaluations revealed ameloblastic epithelial islands in loose connective tissue. We think that our case report provides new insights into the approach to ameloblastoma diagnosis. We agree with authors who have pointed out that a single small biopsy may often be inadequate for the correct diagnosis of ameloblastoma. Moreover, in the light of our experience, it should be kept in mind that ameloblastomas may sometimes have unusual presentations, and this should induce surgeons and pathologists to consider each lesion carefully.
Platelet-rich fibrin has long been used as a wound healing therapy in skin wounds, and recent evidence has suggested its use in the oral cavity for different treatment procedures. This article offers an overview of the use of platelet-rich fibrin in the management of complicated oral wounds. Excessive hemorrhage of the donor area, necrosis of the epithelium, and donor-site morbidity have been described as possible complications after harvesting a subepithelial connective tissue graft, but little has been written about their management. The article includes a case report of a 45-year-old male patient who showed delayed wound healing after subepithelial connective tissue graft harvesting, which was treated with platelet-rich fibrin.
Adenomatoid odontogenic tumor (AOT) is a benign lesion derived from the complex system of the dental lamina or its remnants. It is categorized into three variants (follicular, extrafollicular, and peripheral). We present a rare case of AOT arising from a dentigerous cyst around an unerupted canine in a 28-year-old female. We believe that this case represents an odontogenic cyst with neoplastic development, containing both epithelial and mesenchymal components. As more cases accumulate, we will be able to study further whether AOTs derived from an odontogenic cyst could represent a distinct "hybrid" variant separate from the three variants described thus far.
Multiple supernumerary teeth are very rare, accounting for less than 1% of cases. They are commonly associated with syndromes such as Gardner's syndrome and cleidocranial dysostosis, and with cleft lip and palate. Non-syndromic multiple supernumerary teeth have a predilection for the mandibular premolar region. Orthokeratinized odontogenic cyst (OOC) is a relatively uncommon developmental cyst comprising about 10% of the cases previously classified as odontogenic keratocysts. More than half of OOC cases are associated with an impacted tooth, but not a single case of OOC associated with supernumerary teeth has been reported. Hence, the purpose of this article is to report the first case of multiple supernumerary mandibular premolars associated with OOC, in a 35-year-old male, and to review the literature on multiple bilateral supernumerary mandibular premolars.
Taurodontism is a morphoanatomical developmental anomaly rarely seen in teeth. Permanent mandibular molars are most commonly affected. Endodontic treatment of a taurodont tooth is challenging and requires special handling because of the proximity and apical displacement of the roots. This paper presents successful endodontic therapy of all three types of taurodontism in two case reports: the first case with mesotaurodontism of the mandibular left first molar and hypotaurodontism of the mandibular left second molar, and the second case with hypertaurodontism of the mandibular left second molar.
Verruciform xanthoma (VX) is an uncommon benign mucocutaneous lesion of unknown etiology. It appears as a papule or single plaque with a verrucous or papillomatous surface and variable color from reddish pink to gray. It occurs primarily in the masticatory mucosa. Histologically, VX is characterized by parakeratinized epithelium with thin rete ridges and connective tissue papillae extending up to the surface. The papillae characteristically consist of foam cells, also called xanthoma cells. We report a case of VX in the buccal mucosa and discuss its clinical and histopathological findings.
Various root developmental anomalies such as the palatoradicular groove (PRG) have been associated with worsening of the periodontal condition. The aim of the present case report is to describe the regenerative surgical treatment of a periodontal and osseous lesion associated with the subgingival extension of a PRG. A 23-year-old female patient reported with pain in the upper incisor region. On clinical and radiological examination, a deep endosseous defect was found distal to the maxillary right lateral incisor that was etiologically associated with the presence of a PRG. Treatment consisted of: 1) regenerative periodontal therapy using guided tissue regeneration (GTR) and a hydroxyapatite (HA) bone graft, and 2) flattening of the radicular portion of the palatal groove. Clinical examination at 1 year revealed a shallow residual probing depth (3 mm) and no increase in gingival recession. Radiographic examination showed a reduction in the radiolucency, suggesting bone fill. A PRG may serve as a pathway for the development of a periodontal osseous defect. The combination of GTR and HA may be clinically and radiographically efficacious in the treatment of such a defect.
Conventional root canal treatment (RCT) has long shown a high success rate. However, the endodontic treatment of a pulpless tooth with a periapical radiolucency of considerable size always raises questions about success. Nowadays, surgical exploration is avoided, especially in the posterior teeth. Such cases may be successfully managed by orthograde Mineral Trioxide Aggregate (MTA) placement in the apical third of the root followed by proper obturation. The objective of the present case reports was to evaluate the periapical pathology of posterior teeth clinically and radiographically when using MTA in an orthograde fashion, avoiding traumatic surgical exploration. In the first case, the patient reported with an intraoral sinus and pus discharge related to tooth #45. On the radiograph, an open (blunderbuss) apex was found along with a periapical radiolucency. In the second case, the patient reported with pain and swelling related to tooth #26, with a large periapical radiolucency related to the palatal canal. On vitality testing, both teeth responded negative, i.e., non-vital. Conventional RCT was planned in both cases with an orthograde MTA-Angelus (Angelus, Londrina, PR, Brazil) apical plug followed by proper obturation with gutta-percha (G.P.), after which the patients were kept on periodic follow-up and outcome-based clinical and radiographic criteria were assessed. Post-obturation assessment at 1-month intervals showed a gradual decrease in the size of the radiolucency, and after 6 months the radiolucency had decreased remarkably, with the defect almost filled by bone formation visible around the roots.
|
0.996898 |
Does life insurance go to pay the estate debt before the beneficiary gets paid?
In order to answer this question you may want to speak with a licensed life insurance agent, a financial planner, or an estate tax professional.
I believe the death benefit from a life insurance policy goes directly to the beneficiary of the policy without any federal taxes taken from the proceeds.
Also, a life insurance policy is separate from an estate, and the person named as beneficiary is not responsible to pay the insured person's estate taxes or debt from his/her money received from a life insurance policy when named as beneficiary. I believe the money is separate.
However, if the beneficiary is named in the estate of the deceased person, the beneficiary may have to pay taxes on the money received from the estate. I think this is called estate taxes.
I believe the handling of the estate is completely separate from the handling of the life insurance proceeds.
What would the tax be on a 100000 life insurance policy?
If the beneficiary of a life insurance policy is an individual, and not the estate, the proceeds of the life insurance would go directly to the beneficiary free from federal income tax.
If the beneficiary is the estate, the proceeds may be subject to estate taxes.
However, I do not believe there is any federal tax due on the proceeds from a $100,000 life insurance death benefit paid to an individual.
You may want to review the tax implications of the death benefit with your tax person or financial planner to review the current law in your state regarding any federal and/or state tax due on life insurance proceeds.
Here's an article that reviews taxes on life insurance.
|
0.937873 |
What taxes will my company face abroad? Taxes are an area in which states fiercely retain their national rights. U.S. companies pay taxes to the country in which the income is earned and receive tax credits in the United States on taxes paid abroad. Credits in the United States are limited by a ceiling determined by the ratio of foreign profits to total profits.
Because of differing tax rates it is often advantageous for a company to have as much income taxed abroad as possible.
There are no universal international laws governing the levy of taxes on companies that do business across national boundaries. However, the taxation policies of the home and host nations can have both negative and positive effects on a company. Tax systems are exceedingly varied among nation-states, especially regarding who and what gets taxed. Countries instead rely on bilateral tax treaties; India has 40 such agreements in place. There are no multilateral tax agreements in place as of yet. However, the Model Double Taxation Convention on Income and Capital, approved by the Organization for Economic Cooperation and Development in 1977, has been influential as a guide to countries in bilateral negotiations, with the long-term aim of a global framework.
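As a rough numerical illustration of the credit ceiling rule described above (hypothetical figures only, not tax advice; real foreign tax credit rules are far more detailed):

```python
# Sketch of the foreign tax credit ceiling: home-country tax on total
# profits, scaled by the share of profits earned abroad. Hypothetical numbers.

def credit_ceiling(home_tax_on_total, foreign_profits, total_profits):
    """Ceiling = home tax on total income x (foreign profits / total profits)."""
    return home_tax_on_total * (foreign_profits / total_profits)

# A company earns 60 at home and 40 abroad (total 100).
# Home tax on total profits at a 25% rate would be 25.
ceiling = credit_ceiling(25.0, 40.0, 100.0)    # 25 * 0.4 = 10.0
foreign_tax_paid = 12.0                        # taxed abroad at 30% on 40
usable_credit = min(foreign_tax_paid, ceiling) # capped at the ceiling: 10.0
print(ceiling, usable_credit)                  # 10.0 10.0
```

The cap is why, as the text notes, the relative tax rates at home and abroad determine whether foreign taxes are fully creditable.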
|
0.99978 |
Q: I’m at a director level and want to take the next step. Placement on boards (either at small companies or nonprofits) seems to be a good differentiator. Is this accurate? What are some strategies I can pursue to gain such placements?
A: Becoming a board member can be a positive professional move for all involved. A nonprofit organization would benefit from your willingness to contribute your time and insight. And a small company typically dealing with issues of growth would also benefit from your experience. Don’t underestimate the insight you will gain dealing with the problems of smaller organizations, and the broader exposure to operating and corporate policy issues beyond information technology.
If you are interested in a particular nonprofit, identify the existing members of the board and contact the president to inform him of your interest, and indicate your willingness to help. Offer to take on a project or provide some insight. This will give you a look at the organization as well as provide the leadership of the organization a look at you. If you can offer a professional service that is required by the organization, consider providing it pro bono in return for a board seat.
|
0.999997 |
Trash Robot collects river rubbish Jump to media player The robot connects to the internet so web users can control it and donate to pay maintenance costs.
Plant captures CO2 out of the air Jump to media player The plants are powered using waste heat and electricity, producing "negative emissions".
Could 'solar roads' help generate power? Jump to media player A stretch of road has been paved with solar PV (photovoltaic) panels in France.
|
0.999343 |
How many of our parents seem to make it anyway?
You say "I'm sorry. I'll be waiting at home"
|
0.970946 |
This article is about the UK Social Democratic Party which has existed since 1990. For other UK parties of this name, see Social Democratic Party.
The Social Democratic Party is a small political party in the United Kingdom formed in 1990. It traces its origin to the Social Democratic Party that was formed in 1981 by a group of dissident Labour Party politicians, all Members of Parliament (MPs) or former MPs: Roy Jenkins, David Owen, Bill Rodgers and Shirley Williams, who became known as the "Gang of Four". This party merged with the Liberal Party in 1988 to form the Liberal Democrats, but Owen, two other MPs and a minority of party activists formed a breakaway group immediately after with the same name. That party dissolved itself in 1990, but a number of activists met and voted to continue the party in defiance of its National Executive, leading to the creation of a new Social Democratic Party.
The party is listed on the Register of Political Parties for England, Scotland and Wales. John Bates is the party president. According to the accounts filed with the Electoral Commission for the year ending 2008 it had 41 members.
The second incarnation of the SDP decided to dissolve itself after a disastrous result in the Bootle by-election of 1990. However, a number of SDP activists met and voted to continue the party in defiance of the National Executive. The continuing group was led by Jack Holmes, whose defeat by the Official Monster Raving Loony Party at the Bootle by-election had caused the party’s end.
The much reduced SDP decided to fight the Neath by-election in 1991. With Holmes serving as the party’s election agent, the SDP candidate finished fifth with 5.3% of the vote – only 174 votes behind the fourth placed Liberal Democrats. (The SDP candidate joined the LibDems shortly thereafter.) The Neath result proved that a greatly reduced SDP could continue to be a viable party without David Owen. The party subsequently won a number of seats on the Neath Port Talbot County Borough Council.
Since 1992, the SDP has concentrated on campaigning at local level and on trying to build up support again largely from scratch. In more recent years, it has held a few council seats in Yorkshire and South Wales.
Bridlington Central and Old Town ward on East Riding of Yorkshire Council remained a hotspot of SDP activity with Ray Allerston holding a council seat there from 1987. From 2003 to 2007 he was joined by his wife, Catherine Allerston.
Meanwhile, in Tunstall Ward in Richmondshire, Tony Pelton and Brian Smith were elected in 1999.
A third hotspot consisted of SDP Councillors Jeff Dinham, John Sullivan and Anthony Taylor in Aberavon Ward, Neath Port Talbot.
In the 2003 elections, Tony Pelton was re-elected, but Brian Smith was not. In 2005, Christine Allerston became Mayor of Bridlington for a year, but stood down before the 2007 local elections, in which her husband Ray Allerston was re-elected (and made Mayor) and David Metcalf (SDP) picked up the vacant seat. All three Aberavon councillors remained in place, with Anthony Taylor becoming local Mayor. However, Tony Pelton in Tunstall stood down before the 2007 locals, ending SDP representation there.
In 2008 Jackie Foster was elected onto Bridlington Town Council.
In 2012, Councillors Dinham and Sullivan lost their seats in Aberavon, leaving only Anthony Taylor in position.
In early 2014 David Metcalf stepped down due to ill health. He died soon after. This left just Allerston, Foster and Taylor in post. Ray Allerston died on 16 September 2014. A by-election was held in his ward on 27 November, which was won by the UK Independence Party.
The SDP fielded two candidates in the 2015 general election: Peter Johnson stood in Birmingham Yardley, finishing in last place with 71 votes and Val Hoodless in Kingston upon Hull East, who was also last with 54 votes.
Jackie Foster remained an SDP councillor on Bridlington Town Council after the 2015 local elections, but as of 2016 is listed as a Labour councillor. Anthony Taylor is sitting on Neath Port Talbot County Borough Council as an "Independent Democrat", but remains listed on the party website as the only current SDP councillor.
In August 2015, Solihull's Green councillor, Mike Sheridan, defected to the SDP, taking their councillor tally up to one again. However, when he stood for re-election in May 2016, Sheridan received only 17 votes (0.83%) and lost his seat.
The Reality Party is a political party in the United Kingdom that was founded in 2014, by Mark “Bez” Berry. The party was briefly deregistered by the Electoral Commission, for breaching rules regarding party names, but re-registered in February 2015 under the name We Are The Reality Party. They are also permitted to use the description “The Reality Party It’s Your Reality” on ballot papers.
The party manifesto is a centre-left anti-austerity programme which includes policies against privatisation, tuition fees and tax avoidance and in favour of renationalisation, progressive taxation, rent controls, socially-managed housing and participatory democracy.
In 2014 Channel 4 produced a documentary series following Berry’s political campaign. The Reality party toured South Thanet in a green vintage bus in December 2014.
In January 2015, The Reality Party was deregistered by the Electoral Commission for having a name that was too similar to that of The Realists’ Party. Its founder had been given several written warnings that a name change was required, and was removed from the register in January when it had failed to comply. On 12 February 2015 the party re-registered as “We Are The Reality Party”.
The party stood three candidates in the 2015 general election, after some initial ambiguity over which seats they intended to contest.
Mark “Bez” Berry stood for Salford and Eccles and gained 703 votes, higher than fellow anti-austerity party TUSC.
Nigel Askew stood for South Thanet, where Nigel Farage (leader of UKIP) was also standing, and gained 126 votes. He referred to his campaign as "The Battle of the Nigels" and also referred to himself as "the real pub landlord", highlighting another adversary, Al Murray. Both Farage and Murray beat him in the election, but finished behind Conservative candidate Craig Mackinlay.
Mags McNally was the candidate for Worsley and Eccles South, gaining 200 votes.
Jackie Anderson was initially the declared candidate for “Salford West and Eccles”; however, this constituency does not exist. Anderson left the party and stood as an independent in the 2015 local council elections.
The Alliance for Green Socialism (AGS) is a socialist and environmentalist political grouping operating across Britain (although its most active membership is in West Yorkshire, particularly in the City of Leeds). Its first annual conference was in 2003, following the 2002 merger of the Leeds Left Alliance (formed by Mike Davies, Celia Foote, Garth Frankland and other former members of the Labour Party) and the Green Socialist Network (whose origins lay in the former Communist Party of Great Britain). The Leeds Left Alliance had previously been involved in the former Socialist Alliance, and a small number of AGS members remained involved in it until it was dissolved by the SWP (who had effectively taken it over) in February 2005. The AGS sponsored various attempts by one of its affiliate organisations (Rugby Red Green Alliance) and the Socialist Alliance Democracy Platform to re-form the Socialist Alliance from 2005 onwards, but this had little success and the AGS concluded in 2011 that such efforts were no longer politically productive (although the AGS still actively supports the idea of a broader socialist/environmentalist political alliance).
The AGS describes itself as an alliance rather than as a party. This is seen as significant by some AGS members because the AGS contains people from a variety of trends, traditions and ideological backgrounds who have all agreed to work together in a single organisation whilst retaining the right to disagree on some issues. Many of the AGS members come from former political parties which had a democratic centralist tradition while others were formerly in the Labour Party or in no party at all. To argue out every issue on which differences existed to the point where a majority decision was reached which was then binding on all members might lead to many comrades leaving. The current arrangement recognises that the AGS is open to people from various leftist and environmentalist positions – as long as they agree on the basic principles on which the AGS was founded.
The AGS stood candidates in its own name in the Yorkshire and the Humber constituency in the 2004 European election, coming last with 0.9% of the votes cast. It later contested the 2005 UK general election under its own name and in association with other leftist parties in the Socialist Green Unity Coalition, standing candidates in Yorkshire, London and Brighton. The AGS also stood candidates under its own name in the 2010 general election.
In 2009, the AGS joined the No2EU – Yes to Democracy election campaign for the European elections, and three AGS members stood as candidates in Yorkshire & Humberside. However, following the election, the failure of the RMT union to commit itself to a successor organisation to this campaign (whose name was disliked by most AGS members, as it implied an anti-European stance which they do not hold) and its transformation into a new organisation (the Trade Unionist and Socialist Coalition), which is politically and organisationally dominated by the Socialist Party, led the AGS to withdraw from this group.
The Green Socialist Network (GSN) was a socialist environmentalist political grouping whose origins go back to the Communist Party of Great Britain (CPGB). When the CPGB was wound up in 1991, a number of its members (and the assets of the party) transferred to a new organisation called the Democratic Left, led by former CPGB General Secretary Nina Temple. However, the Democratic Left failed to live up to the expectations of a number of its comrades (particularly those who had spent many years in the CPGB and who still adhered to a Marxist political position) and a split occurred, which led to many of these comrades—especially in the London area—leaving the Democratic Left and establishing the GSN. These included Dave Cook, former National Organiser of the party.
The GSN was not merely a socialist grouping as its members accepted that the old Soviet style system of industrialised state socialism had failed in many respects. The GSN adopted a programme entitled “Towards Green Socialism”, which proposed linking socialism with environmental sustainability and which argued that these two developments were both essential for human survival and development and that each required the other.
The GSN programme "Towards Green Socialism" has been largely incorporated into AGS policy documents but is still available on request from the AGS, Freepost NEA 5794, Leeds LS7 3YY. E-mail requests to [email protected] or via the website.
In 2002 GSN members voted to merge with the Leeds-based Left Alliance (a grouping of primarily ex-Labour Party members in Yorkshire who had left, or been expelled from, New Labour) and some independent Green Leftists to form the Alliance for Green Socialism (AGS). The GSN programme “Towards Green Socialism” was adopted as the basis for the AGS’s political programme and remains so.
The GSN membership was largely in London and the South East and former GSN members make up the majority of the AGS London membership. Two former GSN members became National Officers of the AGS and several others became founder members of the AGS National Committee.
The alliance’s first and founding annual conference was held in 2003, after the members of the Leeds Left Alliance and the Green Socialist Network had both voted to approve the merger the previous year and had already formed a combined National Committee.
The first general election the alliance contested was the 2005 general election; however, it did contest the 2004 local and European elections, securing 13,776 votes but no seats in the European election.
Since the 2005 General Election the alliance has contested three local elections and the 2010 general election.
During the 2009 European elections the party campaigned as part of the No2EU alliance, which combined many minor parties on the left of politics to campaign against the perceived 'pro-capitalist' and anti-democratic aspects of the European Union. The alliance secured 153,236 votes, but no seats. However, the No2EU alliance was not consolidated into a new, broad political grouping after the election, and the AGS did not wish to remain involved in a group which was increasingly seen as merely anti-European, which they are not.
The alliance frequently republishes full manifestos that cover every policy area. Their most recent manifesto was published in March 2015 for the 2015 general election. Proposed economic policies include further control over banking and greater job security for workers. Environmental policies include committing Britain to an 80% reduction in its carbon emissions by 2050. The alliance is also committed to nationalisation of all National Health Service (NHS) services and utility and transport services. Other notable policies include electing the House of Lords, abolishing the monarchy and decriminalisation of cannabis. However, because the AGS is an alliance rather than a party, members are allowed to differ on certain policies.
There are four prices available for full membership: £30 for highest income, £18 for lower income, £7 for pensioners and those with "negligible" income, and £7 for students. This makes the AGS one of the cheapest political parties in the UK. Membership also entitles the member to the quarterly journal Green Socialist and a regular members' newsletter.
The journal of the AGS is Green Socialist magazine, published quarterly. The AGS also publishes a members’ bulletin which goes out five or six times a year. Additionally, pamphlets are published on specific topics (e.g. civil liberties) and the AGS election manifesto is published as a booklet available from the national officers or downloadable from their website.
In its financial statement to the Electoral Commission in 2008 the alliance quotes an income of £12,522 and expenditure of £8,356.
|
0.985329 |
What is Tom Clancy's EndWar?
Tom Clancy's EndWar is a Strategy, RTS, Tactical PC game, developed by , available on Steam and published by .
According to Steam user reviews, 0% of the 0 user reviews in the last 30 days are positive. For all time, 0% of the 0 user reviews for Tom Clancy's EndWar are positive.
|
0.999352 |
Fast Food Restaurants: I have been getting billed for something that I am unaware of paying for. Could you please enlighten me on what this payment is for?
I do not understand the reason for a bill payment that has been coming out of my account.
Please let me know or contact me as soon as possible so we can resolve this.
I was billed on the 25/02/2019.
My bank is Kiwibank.
My email address is [protected]@gmail.com.
|
0.999858 |
Can you write a book a year? It may be necessary for you to do that to achieve success.
In popular genres, a book a year is nothing new. You need to be able to write fast and well.
If you’re a new writer, a book a year can seem intimidating. However, you can build up to it. It’s vital that you write, and keep on writing, even before your first book sells.
|
0.990222 |
Stock market volatility is currently quite high. Does it make sense for investors to get out of the market until volatility settles down?
EFF: If the current high volatility makes you permanently averse to stock market volatility, and the inevitable variation in market volatility, you should get out. But you shouldn't have been in the stock market in the first place since fluctuations in volatility are the norm. If you eventually want to come back into the market, then you shouldn't leave. Bouncing in and out of the market is risky if your desired long-term asset allocation involves exposure to the market.
The reasoning, in a nutshell, is as follows. The logic of price changes in response to variation in volatility is that the onset of high volatility should be associated with price declines that increase expected returns going forward (to compensate investors for the higher volatility), and the onset of a low volatility period should be associated with price increases that lower expected returns going forward. As a result, if you bounce in and out of the market in response to variation in volatility, you are likely to be in when expected returns are low and out when expected returns are high. Bouncing in and out only makes sense if you can forecast increases and decreases in volatility before they occur, so you can miss the price declines associated with the onset of high volatility and profit from the increases associated with the onset of low volatility. I doubt that anyone is that good at predicting changes in volatility.
KRF: My approach to this issue is a bit different, but I reach the same basic conclusion. I start with the fact that in the short run the number of shares outstanding is fixed. As a result, changes in risk cannot affect the aggregate portfolio of all investors; you cannot reduce your equity position unless someone else is willing to increase his. Changes in risk can, however, affect price. When risk goes up I expect prices to fall and expected returns to rise. And notice that this expected return adjustment is on top of any drop in price caused by lower expected cashflows.
So who should sell? The current market turmoil has taught many investors much more about volatility and their tolerance for risk than they could ever learn from hypothetical examples and thought experiments. Some investors have discovered that big losses hurt more than they expected, while others have concluded they are not as risk averse as they thought. If you are in the first group you might want to sell some equity, but if you are in the second group you probably want to take advantage of your high risk tolerance and buy more. And as Gene said, these should be permanent changes. You might adjust your investments so they are more in line with your actual tastes, but once you do you should plan on sticking with your new portfolio.
How to make the Facebook mobile apps open links in the default browser. By default, Facebook and Facebook Messenger open links inside the app rather than in your default browser. You can change this option in both the Facebook and Facebook Messenger apps. If Chrome is the default browser on your phone, links will then open in Chrome.
1. Open the Facebook mobile app and tap the second icon from the top right.
2. Scroll down until you see the Settings & Privacy drop-down menu and tap it to expand it. From there, open Settings.
3. Scroll down until you see Media and Contacts and tap to open it.
4. Check Links open externally.
1. Open Facebook Messenger for mobile and tap the icon at the top right.
2. Tap Photos & Media.
3. Check Open Links in Default Browser.
Gene expression data, in conjunction with information on genetic variants, have enabled studies to identify expression quantitative trait loci (eQTLs) or polymorphic locations in the genome that are associated with expression levels. Moreover, recent technological developments and cost decreases have further enabled studies to collect expression data in multiple tissues. One advantage of multiple tissue datasets is that studies can combine results from different tissues to identify eQTLs more accurately than examining each tissue separately. The idea of aggregating results of multiple tissues is closely related to the idea of meta-analysis which aggregates results of multiple genome-wide association studies to improve the power to detect associations. In principle, meta-analysis methods can be used to combine results from multiple tissues. However, eQTLs may have effects in only a single tissue, in all tissues, or in a subset of tissues with possibly different effect sizes. This heterogeneity in terms of effects across multiple tissues presents a key challenge to detect eQTLs. In this paper, we develop a framework that leverages two popular meta-analysis methods that address effect size heterogeneity to detect eQTLs across multiple tissues. We show by using simulations and multiple tissue data from mouse that our approach detects many eQTLs undetected by traditional eQTL methods. Additionally, our method provides an interpretation framework that accurately predicts whether an eQTL has an effect in a particular tissue.
Advances in genotyping and gene expression technologies have enabled researchers to study associations between genetic variants and gene expression levels. These studies often treat expression levels as quantitative traits and apply statistical tests to identify genomic locations known as expression Quantitative Trait Loci (eQTLs) that segregate the traits. Genome-wide maps of eQTLs for several organisms, including budding yeast, Arabidopsis, mouse, and human, have been successfully generated. Furthermore, recent technological developments and cost decreases in microarrays allow studies to collect expression data in more than one tissue in human and mouse. A collection of expression data from multiple tissues enables studies to explore the tissue-specific nature of eQTLs as well as their global effects on different types of tissues.
Multiple tissue datasets can potentially allow studies to more effectively identify eQTLs by combining information from multiple tissues. Due to a limited sample size, a standard single tissue eQTL method or “tissue-by-tissue” approach that examines each tissue individually may not detect an eQTL in any one tissue, or it may overestimate the proportion of tissue-specific eQTLs. However, if a genetic variant is associated with the expression of a gene in more than one tissue, we can aggregate information from multiple tissues to increase statistical power. This idea is similar to the idea of meta-analysis in genome-wide association studies (GWAS) that combines results of several studies on the same phenotype. In our case, each tissue is considered as a separate “study” in the meta-analysis.
One key difficulty in combining results from multiple tissues is that it is not known in which tissues a genetic variant has an effect. For example, a variant may influence gene expression in all tissues, may have different effects in different tissues, or may have an effect in some tissues but no effect in others. This phenomenon of different effect sizes among tissues is called heterogeneity. Meta-analysis methods make different assumptions about the distribution of effect sizes, and to better detect eQTLs, studies will perform best if they apply a meta-analysis method whose assumptions are consistent with the actual effect sizes of eQTLs across tissues. For instance, if an eQTL has an effect in all tissues, studies would perform best if they utilize the fixed-effects model (FE), which assumes no heterogeneity. On the other hand, to effectively detect an eQTL whose effects on gene expression differ across tissues, studies will perform best if they apply the random-effects model (RE), which accounts for heterogeneity.
Another challenge in applying meta-analysis to multi-tissue datasets is that studies often collect multiple tissues from the same individuals, which may cause the expression between tissues of the same individual to be correlated. This correlation may cause false positives for standard meta-analysis methods which assume a disjoint set of individuals in each study.
In this paper, we present a novel approach called “Meta-Tissue” that identifies eQTLs from multiple tissues by utilizing meta-analysis. The critical advance of our methodology is that we extend meta-analysis to a mixed model framework. We apply the mixed model to account for the correlation of expression between tissues, and perform meta-analysis to combine results from multiple tissues. Since we do not know in advance the distribution of effect sizes for eQTLs among different tissues, we utilize both the FE and RE models to identify as many eQTLs as possible, and for RE, we use a recently developed random-effects model that achieves higher statistical power than the traditional random-effects model. We first show by simulations that Meta-Tissue is more powerful than the tissue-by-tissue approach in detecting eQTLs when eQTLs have effects in multiple tissues, while controlling for the false positive rate correctly.
We then apply Meta-Tissue to a mouse expression dataset. This dataset is ideal for evaluating methods for discovering eQTLs for several reasons. The data are generated through a cross which limits the genetic diversity in the dataset, and all variants have similar frequencies, which eliminates effects of allele frequency on power. In addition, the dataset contains gene expression from many different tissues and different numbers of individuals for the tissues, allowing us to compare results between different scenarios. We analyze four tissues with 50 samples per tissue and ten tissues with 22 samples per tissue. We apply Meta-Tissue to both datasets and demonstrate that Meta-Tissue detects many eQTLs that are undetected by the tissue-by-tissue method.
In addition to accurately detecting eQTLs from multiple tissues, our method can also predict whether an eQTL affects or does not affect expression in a specific tissue. Predicting the existence or absence of an effect is a very difficult problem in meta-analysis, and it is known that making predictions based on p-values is not effective. One of the reasons is that a non-significant p-value is not necessarily evidence of an absence of an effect since the study may be underpowered. Our method instead computes the posterior probability of the presence or absence of an effect for each study, building on recent work in interpretation of meta-analysis. Applying the framework to the four and ten tissue datasets, we identify more eQTLs that are predicted to have effects in all tissues compared to the p-value based approach, which are interesting potential candidates with possible global regulatory mechanisms. Meta-Tissue is publicly available at http://genetics.cs.ucla.edu/metatissue/.
The main idea of Meta-Tissue is that it combines the effect size estimates from multiple tissues using a “meta-analysis” approach. Meta-analysis techniques are widely applied to combine the results of GWAS studies. In our case, we consider each tissue as a “study.” This has the advantage of increasing the statistical power to detect eQTLs shared across tissues. There are several challenges corresponding to the inherent differences between combining GWAS studies and expression quantitative trait loci studies in multiple tissues. The first challenge is that we expect that there may be differences in effect sizes between tissues. For this reason, we utilize both the random-effects model, which allows Meta-Tissue to detect eQTLs when heterogeneity is present, and the fixed-effects model when it is not. A second challenge is that in many multi-tissue eQTL study designs, multiple tissues are collected from the same individuals, which induces correlation between measurements of expression levels in different tissues. However, meta-analysis methods assume that studies are independent and may be susceptible to false positives. To overcome this challenge, we utilize the linear mixed model to correct our effect size estimates before performing the meta-analysis.
We assume that multi-tissue eQTL studies collect expression values of genes from n individuals in T tissues. However, those individuals are not necessarily the same for all tissues; some individuals may provide only a subset of tissues. The studies also collect genotype information on SNPs from the individuals. To determine eQTLs in a specific tissue, or pairs of SNP and gene that are significantly correlated, eQTL studies often use the following linear model:

y_t = μ_t·1 + β_t·x + e_t

where y_t is the gene expression of the individuals in tissue t, x is the genotype vector of the SNP, and 1 is a vector of ones. β_t is the effect size of the SNP on the gene in tissue t, and if it is not zero, we declare the pair of SNP and gene an eQTL. The Tissue-By-Tissue (TBT) approach computes β_t for every tissue (t = 1, …, T) and determines whether at least one β_t is not zero.
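As a concrete sketch of the per-tissue test (not the authors' implementation; the function name and interface are illustrative), the TBT association between one SNP and one gene in a single tissue can be computed with ordinary least squares:

```python
import numpy as np
from scipy import stats

def tbt_eqtl_test(expression, genotype):
    """Test one SNP against one gene's expression in a single tissue.

    expression : (n,) array of expression values in this tissue
    genotype   : (n,) array of genotype dosages (0, 1, 2)
    Returns (beta_hat, p_value) for the null hypothesis beta_t = 0.
    """
    n = len(expression)
    X = np.column_stack([np.ones(n), genotype])       # intercept + SNP
    # Ordinary least squares: beta = (X'X)^-1 X'y
    coef, _, _, _ = np.linalg.lstsq(X, expression, rcond=None)
    resid = expression - X @ coef
    sigma2 = resid @ resid / (n - 2)                  # residual variance
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    t = coef[1] / se
    p = 2 * stats.t.sf(abs(t), df=n - 2)
    return coef[1], p
```

Running this test separately in each tissue and taking the minimum p-value reproduces the TBT decision rule described above.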
To increase the statistical power to detect eQTLs, Meta-Tissue utilizes meta-analysis that combines the estimates of β_t from the T tissues. A naive approach to applying meta-analysis to multi-tissue eQTL datasets is to directly use the β_t estimates computed from the linear model for TBT. This approach, however, violates the main assumption of meta-analysis that the estimates are independent across tissues. Because multiple tissues are often collected from the same individuals, there exists correlation between gene expression values across different tissues, and this leads to correlated estimates of β_t.
To account for this correlation, Meta-Tissue uses the linear mixed model

y = Xβ + u + e

where y and X contain the gene expression and SNP information of all tissues, and Figure 1 shows how they are encoded using a simple example. u is the random effect of the mixed model, included because multiple tissues are collected from the same individuals. u follows the multivariate normal distribution whose covariance matrix (the C matrix in Figure 1) represents the sharing of individuals across tissues. Meta-Tissue applies generalized least squares to estimate β and its covariance, which captures the correlation between the estimates of β_t. Meta-Tissue “un-correlates” the estimates using the covariance it estimated and uses the “un-correlated” effect sizes for meta-analysis (see the Materials and Methods section for more details).
Fig. 1. A simple example showing how gene expression and SNP in multi-tissue eQTL studies are encoded in the mixed model of Meta-Tissue.
This example has five samples (S1, S2, S3, S4, and S5) in three tissues (T1, T2, and T3). The leftmost table shows which tissues are collected from each sample; y_ij denotes the gene expression of the i-th sample in the j-th tissue, and an empty cell means the tissue was not collected. In this example, each tissue has gene expression measured in three samples. y is a vector containing the expression of the samples in all tissues; there are a total of 9 gene expression values. In the X matrix, g_i denotes the genotype of the i-th sample. The β vector contains three intercepts (μ_1, μ_2, μ_3) and three effect sizes (β_1, β_2, β_3) for the three tissues. u is the random effect of the mixed model, with u ~ N(0, σ_u²·C). C is a 9×9 matrix whose entry at the i-th row and j-th column is 1 if the i-th and j-th entries of y are collected from the same individual, and 0 otherwise.
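The generalized least squares step can be sketched as follows, assuming for simplicity that the variance components of the random effect are already known (in practice they would be estimated, e.g. by restricted maximum likelihood); the function name and interface are illustrative:

```python
import numpy as np

def gls_multi_tissue(y, X, K, sigma_u2, sigma_e2):
    """Generalized least squares for stacked multi-tissue expression.

    y        : (m,) stacked expression values across all tissues
    X        : (m, p) design with per-tissue intercepts and genotypes
    K        : (m, m) 0/1 matrix; K[i, j] = 1 if entries i and j come
               from the same individual (the C matrix of Figure 1)
    sigma_u2 : variance of the individual-level random effect
    sigma_e2 : residual variance
    Returns (beta_hat, cov_beta): effect estimates and their covariance,
    from which de-correlated per-tissue effects can be fed to meta-analysis.
    """
    Sigma = sigma_u2 * K + sigma_e2 * np.eye(len(y))  # marginal covariance
    Si = np.linalg.inv(Sigma)
    cov_beta = np.linalg.inv(X.T @ Si @ X)
    beta_hat = cov_beta @ X.T @ Si @ y
    return beta_hat, cov_beta
```

The off-diagonal entries of `cov_beta` are what a naive tissue-by-tissue analysis ignores; Meta-Tissue uses them to de-correlate the per-tissue estimates before combining them.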
There is a fundamental difference between Meta-Tissue and the TBT approach. The statistical test in Meta-Tissue tests whether or not a gene is involved in an eQTL in any of the tissues. In other words, the null hypothesis of Meta-Tissue assumes that no effect is present in any of the tissues for a specific gene. A rejection of this null hypothesis is effectively predicting the presence of an effect in at least one of the tissues. However, the tissue-by-tissue approach tests whether or not an eQTL is present in each tissue. Hence, the null hypothesis of TBT assumes that no effect is present in a specific tissue. This means that Meta-Tissue performs one test per gene and TBT performs one test per gene in each tissue. In our comparisons of Meta-Tissue and TBT, we adjust the significance thresholds so that the overall false positive rate of implicating any tissue of a gene in an eQTL is constant for both methods.
Once we identify a significant association using Meta-Tissue, this means that at least one of the tissues contains an eQTL. In order to identify which subset of the tissues contain an eQTL, we utilize a recently developed meta-analysis interpretation framework which computes an m-value statistic for each tissue. The m-value estimates the posterior probability that an effect is present in a study included in a meta-analysis. Utilizing the m-values, we can predict tissues in which an effect is present.
We first simulate gene expression data to compare the power of the traditional Tissue-By-Tissue approach (TBT), Meta-Tissue FE, and Meta-Tissue RE. We create a dataset that has 100 individuals with one SNP and one gene expression level, simulating one eQTL. We set the minor allele frequency to 30%. We simulate four tissues and consider four scenarios where a SNP has the same effect in (1) a single tissue, (2) two tissues, (3) three tissues, and (4) all four tissues. The first three scenarios correspond to eQTLs with heterogeneity, while eQTLs have no heterogeneity in the last scenario. We check statistics of eQTLs that measure the magnitude of heterogeneity in each scenario and verify that eQTLs have high levels of heterogeneity in the first three scenarios, but very low levels in the last scenario (Figure S1). We assume that each individual provides all four tissues, and hence this simulation corresponds to a repeated measures design. We use the mixed model discussed in the Materials and Methods section to generate the gene expression levels of individuals while taking into account the repeated measures design. We generate 1,000 datasets (each a potential eQTL), and the power is estimated as the proportion of eQTLs detected at a significance threshold of 5×10⁻⁸ for the meta-analysis methods. We choose this threshold because the number of tests we perform in the mouse datasets is on the order of one million (135 SNPs × 10,588 genes), and the significance threshold adjusted for one million tests, as in typical GWAS, is 5×10⁻⁸. For TBT, we apply a significance threshold of 5×10⁻⁸ divided by the number of tissues such that the overall false positive rate of TBT is the same as that for Meta-Tissue, as discussed in the previous section.
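The power estimation scheme can be sketched for a single tissue as follows (illustrative code, not the authors'; the multi-tissue simulation additionally draws the shared individual-level random effect):

```python
import numpy as np
from scipy import stats

def estimate_power(n=100, maf=0.3, beta=0.5, sigma=1.0,
                   n_sim=1000, alpha=5e-8, seed=0):
    """Estimate power of a single-tissue eQTL test by simulation.

    Draws additive genotypes at the given minor allele frequency,
    simulates expression y = beta * g + noise, tests the association
    with simple linear regression, and returns the fraction of
    replicates with p < alpha.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        g = rng.binomial(2, maf, n).astype(float)   # genotype dosages 0/1/2
        y = beta * g + rng.normal(0.0, sigma, n)
        res = stats.linregress(g, y)
        if res.pvalue < alpha:
            hits += 1
    return hits / n_sim
```

With `beta = 0` this also serves as a check that the false positive rate is controlled at the chosen threshold.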
To apply the proposed methods to the simulations, we use the following approach. For TBT, we perform a standard F-test using a linear model to obtain a p-value for each pair of a SNP and a gene expression level in each tissue (see Materials and Methods). The tissue-by-tissue approach declares a SNP-gene expression pair as an eQTL if the p-value for the association statistic is below the threshold for any one of the tissues. For Meta-Tissue, we first perform generalized least squares (GLS) to correct for the fact that individuals are shared among tissues. Meta-Tissue then combines information from multiple tissues to obtain either fixed effect or random effect meta-analysis p-values as described in the Materials and Methods section. A SNP-expression pair is considered as an eQTL if its meta-analysis p-value is below the significance threshold. As a separate simulation, we verify that both of our implementations (Meta-Tissue FE and RE) control the false positive rates (Text S1). This simulation also shows that utilizing the mixed model is critical for controlling false positives when expression levels from multiple tissues are collected from the same individual.
Figure 2 shows that the Meta-Tissue methods are more powerful than TBT when effects exist in multiple tissues; Meta-Tissue RE is the most powerful when an eQTL has effects in two or three tissues, and Meta-Tissue FE outperforms TBT and Meta-Tissue RE when the effects exist in all tissues. The TBT approach has higher power than the Meta-Tissue methods when the effects exist in a single tissue. These results show that TBT is an ideal approach to detect an eQTL that is specific to a certain tissue, while the Meta-Tissue approaches are ideal for detecting an eQTL that has effects in more than one tissue. As the number of tissues with effects increases, the power of the Meta-Tissue methods increases while that of TBT decreases. These results suggest an integrated approach for eQTL studies: apply TBT to detect tissue-specific eQTLs and the Meta-Tissue methods to detect eQTLs shared between tissues.
Fig. 2. Power comparison between the tissue-by-tissue approach, Meta-Tissue fixed effects model (FE), and Meta-Tissue random effects model (RE) using simulated data.
X-axis indicates the number of tissues having effects out of four tissues, and Y-axis is the power.
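The fixed-effects combination underlying Meta-Tissue FE is standard inverse-variance meta-analysis of the per-tissue effect estimates; a minimal sketch follows (the RE component, a more involved random-effects test, is not reproduced here):

```python
import numpy as np
from scipy import stats

def fixed_effects_meta(betas, ses):
    """Inverse-variance fixed-effects meta-analysis across tissues.

    betas, ses : per-tissue effect estimates and standard errors
                 (assumed de-correlated, as after the GLS step).
    Returns (combined_beta, combined_se, p_value).
    """
    betas = np.asarray(betas, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2     # inverse-variance weights
    beta = np.sum(w * betas) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    z = beta / se
    p = 2 * stats.norm.sf(abs(z))
    return beta, se, p
```

Combining T tissues with equal standard errors shrinks the combined standard error by a factor of √T, which is the source of the power gain when an effect is shared across tissues.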
To verify the results of the previous power simulation in real multiple tissue data, we simulate heterogeneity using liver tissue expression from mouse. This dataset contains 108 samples, 135 SNPs, and 10,588 probe expression levels. We detect 389 eQTLs in this single tissue dataset using the standard linear model with a p-value threshold of 5×10⁻⁸, which corresponds to a false discovery rate (FDR) of 0.017%. We consider these detected associations as the gold standard for measuring the accuracy of methods in this simulation. We then split the 108 samples into three groups of 36 samples to simulate three tissues, and this means that eQTLs have effects in all three tissues. In our simulations, we expect to find fewer eQTLs because each of our “tissues” only has 36 samples compared to the original 108 samples. We then consider three scenarios similar to the scenarios in the previous power simulation: (1) eQTLs have effects only in the first tissue, by permuting expression of the second and third tissues; (2) eQTLs have effects only in the first and second tissues, by permuting expression of the third tissue; and (3) eQTLs have effects in all three tissues, without any permutation. Permuting the expression of a specific tissue removes effects of eQTLs from the tissue, and hence allows simulation of heterogeneity. We apply Meta-Tissue FE, Meta-Tissue RE, and TBT to this multiple tissue dataset and measure how many eQTLs out of the original 389 eQTLs each method can recover using the same threshold (5×10⁻⁸/3 for TBT). Because the number of eQTLs the methods recover can change depending on how we split the 108 samples, we perform ten iterations of the experiment where we divide individuals differently in each iteration, and average the results.
The result of this simulation shows that Meta-Tissue methods recover the most eQTLs when eQTLs have effects in more than one tissue (Figure 3). When effects exist in two out of three tissues, Meta-Tissue RE recovers the most eQTLs; it recovers 144 eQTLs out of the 389 eQTLs on average, and this is 27% and 133% more than the number of eQTLs Meta-Tissue FE and TBT recover, respectively. When eQTLs have effects in all tissues, Meta-Tissue FE recovers the most eQTLs, and when effects exist in a single tissue, TBT does. This result is consistent with the previous power simulation in which Meta-Tissue methods were more powerful than TBT when eQTLs have effects in multiple tissues.
Fig. 3. The average number of eQTLs that the tissue-by-tissue approach, Meta-Tissue FE, and Meta-Tissue RE recover from three tissues generated from the liver tissue.
The liver tissue has 108 samples from which we simulate three tissues of 36 samples. X-axis indicates the number of tissues having effects out of three tissues. The original liver tissue has 389 eQTLs.
We apply Meta-Tissue to detect eQTLs in multiple tissues from mouse. Our data consists of two sets; one with four tissues (cortex, heart, liver, spleen), and the other with ten tissues (bone marrow, hippocampus, kidney, pancreas, stomach, white fat, and the four tissues). The four tissue dataset has 50 samples per tissue while the ten tissue dataset has 22 samples per tissue. In both datasets, not all individuals provided all different types of tissues; on average, 34% of individuals are shared between two tissues in the four tissue dataset while 11% of individuals are shared in the ten tissue dataset. The number of SNPs (135 SNPs) and the number of probes (10,588) are the same as those of the liver tissue.
Figures 4A (four tissues) and 4B (ten tissues) show the number of eQTLs detected by Meta-Tissue RE, Meta-Tissue FE, and TBT using a threshold of 5×10⁻⁸ (5×10⁻⁸ divided by the number of tissues for TBT). The number substantially increases by using Meta-Tissue RE or FE, showing up to two-fold and twelve-fold increases compared to TBT in the four and ten tissue datasets, respectively. These results indicate that methods that combine results of multiple tissues outperform a method that uses results of each tissue separately, as all meta-analysis methods detect more eQTLs than TBT. Moreover, these results suggest the possibility that there exist a considerable number of eQTLs with different effect sizes across tissues, as Meta-Tissue RE consistently identifies more eQTLs than Meta-Tissue FE. In addition to the number of eQTLs (SNP-expression pairs), we also analyze the number of eSNPs (unique SNPs influencing gene expression) and eProbes (unique probes for gene expression). Similar to the results for the number of eQTLs, Meta-Tissue detects more eSNPs and eProbes than TBT (Figure 5).
Fig. 4. The number of eQTLs detected by the tissue-by-tissue approach (TBT), Meta-Tissue FE, and Meta-Tissue RE in A) four and B) ten tissues of mouse, and the overlap of eQTLs detected by the three methods in C) four and D) ten tissues.
The datasets consist of the gene expression levels from 50 individuals (four tissues) and 22 individuals (ten tissues). We apply a p-value threshold of 5×10⁻⁸ for Meta-Tissue and a threshold of 5×10⁻⁸ divided by the number of tissues for tissue-by-tissue. The Venn diagrams (C and D) show the number of eQTLs detected by either TBT, FE, or RE, by TBT and either of FE and RE, by FE and RE, and by all three methods.
Fig. 5. The number of eSNPs and eProbes detected by the tissue-by-tissue (TBT) approach, Meta-Tissue FE, and Meta-Tissue RE in A) four tissues and B) ten tissues of mouse.
We apply a p-value threshold of 5×10⁻⁸ for Meta-Tissue and a threshold of 5×10⁻⁸ divided by the number of tissues for TBT.
Another important implication comes from comparing the two datasets. TBT finds substantially fewer eQTLs in the ten tissue dataset than in the four tissue dataset, possibly because the sample size of each tissue decreases from 50 to 22. On the other hand, the meta-analytic methods find more eQTLs. One possible reason is that the total sample size is slightly increased from 200 to 220. Therefore, the results demonstrate that by using information from multiple tissues and leveraging meta-analysis methods, we may be able to detect eQTLs even if the sample size for each tissue is small.
In addition to the number of eQTLs that the different methods detect, we also analyze the overlap of eQTLs using Venn diagrams (Figures 4C and 4D). The Venn diagrams show the number of eQTLs detected only by each of the three methods, by both TBT and each of the Meta-Tissue methods, by both Meta-Tissue methods, and by all three methods. In the four tissue dataset, the three methods detect 493 unique eQTLs overall, and a majority of eQTLs (95.1% of total eQTLs) are detected by either of the Meta-Tissue methods. There are, however, 24 eQTLs (4.9% of total eQTLs) that only TBT detects, and they are likely to be tissue-specific eQTLs. In the ten tissue dataset, almost all eQTLs (99.3% of total eQTLs) are detected by Meta-Tissue RE or FE, and there are 4 eQTLs (0.7% of total eQTLs) detected only by TBT, which may reflect low statistical power caused by the limited number of samples.
Instead of the common genome-wide significance threshold (e.g. 5×10⁻⁸) to identify eQTLs, an alternative approach is to use the false discovery rate (FDR), and we use the QVALUE package in R to compute a q-value for each SNP-expression pair. We consider only cis-eQTLs for the FDR approach; we consider an eQTL as cis if a SNP is on the same chromosome as the probe for gene expression. While typical eQTL studies consider 1 Mb as a distance between a SNP and a probe for cis-eQTLs, we consider a much longer distance due to the small number of genotyped SNPs (135 SNPs). Figures S2A and S2B show the number of eQTLs detected by the Meta-Tissue methods and TBT at an FDR of 0.05 in four and ten tissues, respectively, and Figures S2C and S2D are Venn diagrams showing the overlap of eQTLs. The results using the FDR approach are consistent with those using the common genome-wide significance threshold; Meta-Tissue RE detects the most eQTLs among the three methods, and a majority of eQTLs (86% and 93% of total eQTLs for four and ten tissues) are detected either by Meta-Tissue RE or FE.
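Storey's q-value method additionally estimates the proportion of true null hypotheses; as a simpler illustration of FDR control, the Benjamini-Hochberg adjustment can be sketched as:

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (a simpler stand-in for
    Storey's q-values, which additionally estimate the null proportion)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # raw BH values: p_(i) * m / i for the i-th smallest p-value
    ranked = p[order] * m / (np.arange(m) + 1)
    # enforce monotonicity from the largest p-value down
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    adj = np.empty(m)
    adj[order] = np.clip(ranked, 0.0, 1.0)
    return adj
```

Declaring every pair with an adjusted value below 0.05 significant controls the FDR at the 0.05 level under independence.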
The number of eQTLs detected only by TBT or by RE in Figures 4 and S2 indicates that there can be several eQTLs with different effect sizes in different tissues. To measure the magnitude of heterogeneity of eQTLs, we use Cochran's Q statistic and the I² statistic. We make a plot whose x-axis is the I² statistic and whose y-axis is the log of the p-value of Cochran's Q statistic, and a histogram showing the distribution of I² statistics. Figures S3, S4, and S5 show the heterogeneity of eQTLs detected by TBT, FE, and RE, respectively, in the four tissues of the mouse data. These plots show that the eQTLs detected by RE have a higher level of heterogeneity than the eQTLs detected by FE, as expected. Given a p-value threshold of 0.05/n, where n is the number of eQTLs detected, 65, 17, and 53 eQTLs show statistically significant heterogeneity in TBT, Meta-Tissue FE, and Meta-Tissue RE, respectively, using the p-value of Cochran's Q statistic.
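Cochran's Q and I² can be computed directly from per-tissue effect estimates and their standard errors; a minimal sketch (illustrative, not the authors' code):

```python
import numpy as np
from scipy import stats

def heterogeneity(betas, ses):
    """Cochran's Q statistic, its chi-squared p-value, and the I^2
    statistic (proportion of variation attributable to heterogeneity)."""
    betas = np.asarray(betas, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2
    beta_fe = np.sum(w * betas) / np.sum(w)          # fixed-effects mean
    q = np.sum(w * (betas - beta_fe) ** 2)
    k = len(betas)
    p = stats.chi2.sf(q, df=k - 1)
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    return q, p, i2
```

Identical effects across tissues give Q = 0 and I² = 0, while effects present in only a subset of tissues push I² toward 1.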
Our Meta-Tissue approach not only detects more eQTLs from multiple tissues but also provides an interpretation framework that predicts whether an eQTL has effects in a specific tissue. Meta-Tissue computes a statistic called the m-value, which is the posterior probability that an effect exists in a specific tissue. If the m-value is greater than a threshold (here 0.9), we predict that an effect exists, and if it is less than 0.1, we predict that an effect does not exist. Another approach to predicting an effect is to use a p-value. In this approach, an effect exists if a p-value is less than a significance threshold and does not exist otherwise.
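A simplified sketch of the m-value idea follows. The actual statistic couples tissues through a shared effect size and averages over effect-presence configurations; here each tissue is scored independently under an assumed normal prior, purely for illustration:

```python
import numpy as np
from scipy import stats

def mvalue_sketch(z_scores, prior=0.5, sigma=2.0):
    """Posterior probability that an effect exists in each tissue.

    Under H0 the z-score is N(0, 1); under H1 it is N(0, 1 + sigma^2)
    after integrating a N(0, sigma^2) prior on the effect.  `prior`,
    `sigma`, and the independence across tissues are simplifying
    assumptions, not part of the published m-value.
    """
    z = np.asarray(z_scores, dtype=float)
    f0 = stats.norm.pdf(z, 0.0, 1.0)                      # null density
    f1 = stats.norm.pdf(z, 0.0, np.sqrt(1.0 + sigma**2))  # marginal alt density
    return prior * f1 / (prior * f1 + (1.0 - prior) * f0)
```

Unlike a p-value cut-off, this posterior distinguishes "evidence of no effect" from "no evidence either way", which is what makes the ambiguous category possible.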
We first apply this prediction framework to the 3-way split liver tissue dataset that we previously generated. Recall that the liver tissue has 389 eQTLs, and we simulated three tissues from it and three scenarios in which we varied the heterogeneity of eQTLs. For this simulation, we consider only the scenario where eQTLs have effects in the first two tissues out of three, since this corresponds to heterogeneity in which the number of eQTLs that TBT and Meta-Tissue recover is relatively large. We measure how accurately Meta-Tissue and the p-value approach predict the presence and absence of effects of the 389 eQTLs in the three tissues. More specifically, Meta-Tissue makes a correct prediction if the m-values are greater than 0.9 in the first two tissues and the m-value is less than 0.1 in the third tissue. We consider an m-value prediction to be ambiguous if any of the three tissues has an m-value between 0.1 and 0.9. If the prediction is not either correct or ambiguous, it is considered an incorrect prediction. For the p-value approach, the p-values of the first two tissues need to be less than the significance threshold (5×10⁻⁸/3) and the p-value of the third tissue needs to be greater than the threshold for a correct prediction. Otherwise, the prediction is an incorrect prediction, since the p-value approach does not have the notion of an ambiguous prediction. In the original 3-way split liver tissue experiment, we had ten simulations which differed in how the individuals were divided. Over the ten simulations, Meta-Tissue and TBT recovered 146 eQTLs out of the total 389 eQTLs on average (Figure 3). Since we use m-values for the interpretation purpose (not for detecting eQTLs), we apply m-values to only those 146 eQTLs. We also predict effects of the 146 eQTLs using the p-value approach.
Meta-Tissue makes the correct prediction for 35% (51/146) of the eQTLs and an ambiguous prediction for 56% (82/146). The p-value approach makes the correct prediction for only 11% (16/146) of the eQTLs; the number of correct predictions by Meta-Tissue is more than three times greater. In addition, because Meta-Tissue can make ambiguous predictions, the number of incorrect predictions for Meta-Tissue (13/146) is one tenth of that for the p-value approach (130/146). The results demonstrate that by combining the meta-analysis method and the interpretation framework, we may predict effects of eQTLs more accurately than the approach utilizing p-values.
We then apply our interpretation framework to the four and ten tissue mouse datasets to predict effects of the eQTLs that were discovered using Meta-Tissue and TBT (493 and 568 eQTLs in the four and ten tissue datasets, respectively). We calculate the m-value for each eQTL in each tissue and predict that the eQTL affects expression in that tissue if the m-value is greater than 0.9. We also compare our approach to the p-value approach, as in the previous simulation, using the same threshold (α divided by the number of tissues).
First, we apply the two approaches to the four tissue dataset, and Table 1 lists the number of eQTLs predicted to have effects across various combinations of tissues (e.g. eQTLs affecting expression in heart/liver, heart/cortex, heart/liver/cortex). The results show that Meta-Tissue consistently categorizes more eQTLs as having effects in multiple tissues than the p-value approach. Among those eQTLs, ones that influence expression levels in all tissues are particularly interesting because they may provide insights into the global regulatory mechanisms of eQTLs. Meta-Tissue predicts 283 such eQTLs while the p-value approach predicts 15. The small number of predictions in the p-value approach is expected: even if the effect exists in all tissues, the tissue-by-tissue approach predicts the global effect only with probability equal to the product of the per-tissue powers, which is small.
Tab. 1. The number of eQTLs predicted to have effects by Meta-Tissue and the p-value approach across various combinations of the four tissues.
Meta-Tissue uses the m-value statistic to predict effects: if the m-value is greater than 0.9, the effect is predicted to exist. The p-value approach uses p-values to make predictions: the effect is predicted to exist if the p-value is less than the significance threshold (α divided by the number of tissues).
We next predict effects of eQTLs in the ten tissue dataset. For this dataset, we would expect to detect fewer eQTLs having effects across all tissues, since it becomes less likely that all p-values or m-values pass the threshold as we try to detect effects in more tissues. Table 2 shows the number of eQTLs predicted to affect expression across different numbers of tissues considered (e.g. eQTLs having effects across any two tissues, any three tissues). Similar to the results of the four tissue dataset, Meta-Tissue predicts more eQTLs with effects in several tissues than the p-value approach. Unlike in the four tissue dataset, we detect few eQTLs having effects in all ten tissues: 134 by Meta-Tissue and zero by the p-value approach. The results indicate the intrinsic difficulty of detecting eQTLs that influence expression across many different tissues.
Tab. 2. The number of eQTLs predicted to have effects by Meta-Tissue and the p-value approach across different numbers of tissues considered in the ten tissue dataset (eQTLs having effects across any two tissues, any three tissues, etc.).
We presented a statistically powerful approach to detect eQTLs from multiple tissues. Our approach, Meta-Tissue, takes advantage of two meta-analysis methods that differ in their assumptions on the effects of eQTLs in different tissues. The first method assumes that effects exist in all tissues with the same magnitude, and this assumption allows us to detect eQTLs shared across all tissues. The second method assumes that effect sizes of variants differ among studies. By assuming heterogeneity, we may be able to accurately describe the nature of eQTLs whose patterns of genetic regulation differ across tissues. Meta-analysis methods, however, assume that studies are independent, and this assumption is unlikely to hold for a multi-tissue dataset, since studies collect multiple tissues from the same individuals. This may cause correlation in expression between tissues, and to correct for the correlation, we utilized a mixed model that enables the meta-analysis method to achieve correct false positive rates.
To measure the performance of Meta-Tissue, we first showed through simulations that our methods are generally more powerful than a naive approach that looks at the results of each tissue individually. Next, using data from mouse liver tissue, we simulated heterogeneity in effect sizes across a subset of tissues as well as in all tissues. Meta-Tissue was shown to recover more of the original eQTLs from multiple tissues than the naive tissue-by-tissue approach when effects exist in multiple tissues. We then observed that Meta-Tissue detects many eQTLs that the naive approach does not detect in the four and ten tissue mouse datasets. However, we note that there are a few tissue-specific eQTLs that only the naive approach detects, and hence we recommend that eQTL studies also apply the naive approach in addition to Meta-Tissue.
In addition to detecting more eQTLs, Meta-Tissue can also accurately predict whether an effect exists in a specific tissue. Meta-Tissue calculates the posterior probability that an eQTL has an effect in a certain tissue, and we demonstrated, using the same liver tissue simulation, that this probability is more effective in predicting the effect than a p-value. We then predicted effects of the eQTLs that we found in the four and ten tissue datasets and showed that our method predicts more eQTLs having effects in multiple tissues than the p-value approach.
Our approach is fundamentally different from previous approaches that also attempt to detect eQTLs from multiple tissues, and to the best of our knowledge, Meta-Tissue is the first method to apply both a mixed model and meta-analysis methods to eQTL mapping. A traditional approach to detect associations from repeated measurements on the same individuals, such as multiple tissue data, is MANOVA. However, MANOVA is not directly applicable to our multiple tissue data because not all individuals provided all tissue types, and hence our data are not completely “repeated measurements.” Meta-Tissue is more general than MANOVA since Meta-Tissue can be applied both to a “repeated measures design” in which individuals are shared across all tissues and to a scenario in which only a subset of individuals is shared. Another advantage of our method is that Meta-Tissue can take into account population structure by adding an additional variance component term in our mixed model. This may be important for multiple tissue datasets in which individuals are sampled from different populations, which may cause inflation of false positives.
Meta-Tissue leverages the recently developed random effects model that achieves higher power than the traditional random effects model. Han and Eskin showed that the traditional random effects model never achieves higher power than the fixed effects model due to its conservative null hypothesis. We apply the traditional RE to our power simulation (Figure S6), the heterogeneity experiment with the liver tissue (Figure S7), and the four and ten tissue mouse datasets (Figure S8), and we observe the same phenomenon: the traditional RE is always less powerful than FE and the recently developed RE.
There are a few other methods that attempt to detect eQTLs from multiple tissue data, such as the Sparse Bayesian Multiple Regression and GFlasso approaches proposed by Petretto et al. and Kim et al. However, a key difference between these methods and Meta-Tissue is that they attempt to detect multiple variants (“multi-locus”) associated with multiple traits, while our method focuses on the association of a single variant. Another difference, and one main advantage of Meta-Tissue, is that since it is a meta-analysis method, studies can combine the results of many published eQTL analyses without the actual data, assuming that those analyses are independent; only the results of an eQTL analysis, such as effect size estimates, are needed. Meta-Tissue has the further advantage of being simpler and more computationally efficient than methods that involve computationally challenging algorithms such as Bayesian variable selection and regularized linear regression including the Lasso. While we applied Meta-Tissue to a multi-tissue dataset with a small number of genotyped SNPs and samples (135 SNPs and a total of about 200 samples across tissues), our algorithm and software are efficient enough to be applied to larger eQTL studies with hundreds of individuals genotyped at hundreds of thousands of SNPs.
F1N2 mice from a C57BL6/N × 129/OlaHsd cross were produced as follows. Male ES cell chimeric founders (E14 ES line) were crossed to C57BL6/N females (Harlan Laboratories). Male agouti offspring were backcrossed to C57BL6/N females, and F1N1 offspring were intercrossed to produce F1N2 animals (Figure 6). All animals were maintained in ventilated microisolator caging (Allentown), fed a standard lab chow diet (Harlan Teklad), and provided water ad libitum. F1N2 animals were group housed with littermates until 9 weeks of age. Mice selected for tissue harvest were singly housed for one additional week to minimize socialization effects. Only males were used, to avoid estrus-related effects on gene expression. While the production crosses segregated various gene-targeted alleles, all mice selected for this study carried only wild-type genomes and did not carry any engineered genomic alterations such as gene knockouts.
Fig. 6. The mice were generated by creating a chimera with heterozygous 129/Sv cells in a C57BL/6J blastocyst.
The chimera was crossed with a wildtype C57BL/6J to obtain heterozygous KOs and homozygous WTs. The heterozygous KOs were backcrossed to wildtype C57BL/6J to obtain animals that are 75% C57BL/6J. The male and female heterozygous KOs were intercrossed, and only the resulting wildtype males are used in this study. The complicated structure of the cross is due to the fact that the knockouts were designed to be used subsequently for other studies.
Animals were sacrificed by cervical dislocation and immediately dissected. A set of thirty tissues was collected from each animal in a prescribed order, beginning with the pancreas. Each tissue was briefly rinsed in PBS and deposited in RNAlater (Ambion), held at room temperature to allow diffusion of RNAlater into the tissue, and then stored at −86°C.
Tissue homogenization, total RNA isolation, cDNA production, in vitro transcription and fluorescent labeling were performed as per Affymetrix gene chip recommended protocols. The hybridization mixes were analyzed using Affymetrix U74Av2 expression microarrays, washed and scanned using Affymetrix instrumentation and protocols.
We consider the probes for which we have annotations. For each tissue type, we filter out array outliers that show a low average correlation with respect to all other arrays.
The mice were genotyped at 140 SNPs that are polymorphic between 129S1/SvImJ and C57BL/6J from the JAX SNP Genotyping Panel. Information on the SNPs is listed in Table S1. We use 135 of the 140 SNPs that are polymorphic in all tissues for our analysis.
In our analysis, we consider the gene expression levels of probes collected in four tissues (liver, spleen, cortex and heart). To be consistent across the different tissue datasets we analyze, we randomly chose 50 individuals from those datasets that have more than 50 individuals. We first used RMA to perform background adjustment on the raw expression values and then applied quantile normalization to the adjusted values. For the ten tissue dataset, we collect gene expression levels in the same manner.
where y is a vector of size 400 corresponding to the gene expression of 100 individuals in 4 tissues, and K is a 400 by 400 matrix representing the correlation between individuals across the tissues. More specifically, K_ij = 1 if rows i and j correspond to the same individual in two tissues, and K_ij = 0 otherwise. I is an identity matrix of size 400. σ²_u and σ²_e are the coefficients of the two variance components, and we use the real mouse dataset to obtain realistic values for them: we estimate σ²_u and σ²_e for every pair of a gene expression and a SNP, and use their average values for our simulation.
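The correlation structure described here (individuals shared across tissues, stacked tissue by tissue) can be illustrated with a toy construction. This is a sketch with my own function names and small sizes, not the paper's code: it builds K and the covariance σ²_u K + σ²_e I.

```python
def build_k(n_individuals, n_tissues):
    """Correlation matrix K for expression stacked tissue-by-tissue:
    row index r = tissue * n_individuals + individual.
    K[i][j] = 1 when rows i and j come from the same individual
    (including i == j), else 0."""
    n = n_individuals * n_tissues
    return [[1 if i % n_individuals == j % n_individuals else 0
             for j in range(n)]
            for i in range(n)]

def covariance(k, var_u, var_e):
    """Sigma = var_u * K + var_e * I, the covariance of the simulated
    expression vector."""
    n = len(k)
    return [[var_u * k[i][j] + (var_e if i == j else 0)
             for j in range(n)]
            for i in range(n)]
```

With 2 individuals measured in 2 tissues, entry (0, 2) of K is 1 because both rows belong to individual 0, so that individual's two tissue measurements are correlated with covariance var_u.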
where y_t is the gene expression of the 100 individuals in tissue t, μ_t is the mean expression in tissue t (a vector of size 100), and x is the SNP information of the 100 individuals. β_t = 0 if an eQTL does not have an effect in tissue t, and β_t ≠ 0 if it does. Since the goal is to compare the relative power between methods, we vary the effect size (β) depending on the scenario to avoid power that is too high or too low; specifically, we choose a separate value of β for each of scenarios (1), (2), (3), and (4).
where y is a vector of size n denoting the gene expression levels of n individuals, x is a vector of size n denoting the SNP, 1 is a vector of ones, and e is a normally distributed error term. To assess the significance of an association between a SNP and a gene, we perform a standard F-test of the null hypothesis β = 0 and also obtain an estimate of β using the lm function in R. In the tissue-by-tissue approach, if any single tissue turns out to be significant (p < α), the pair of SNP and gene expression is reported as a significant eQTL. TBT can also find the tissues in which an eQTL exists by examining which β̂_t is non-zero.
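As a rough sketch of the per-tissue estimation and the tissue-by-tissue decision rule (illustrative helper names; the paper uses R's lm and an F-test, while this sketch only computes the closed-form OLS slope and applies the any-tissue-significant rule on precomputed p-values):

```python
def ols_slope(x, y):
    """Closed-form OLS estimate of beta in y = mu + beta * x + e."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

def tbt_call(p_values, alpha=0.05):
    """Tissue-by-tissue rule: report an eQTL if any tissue is significant."""
    return any(p < alpha for p in p_values)
```

For a perfectly linear toy example, `ols_slope([0, 1, 2, 3], [1, 3, 5, 7])` recovers the slope 2; `tbt_call` then flags the eQTL as soon as one tissue's p-value passes the threshold.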
Here is a description of each variable in the above equation. Let n_t denote the number of individuals in tissue t, T the number of tissues, and n = n_1 + … + n_T the total number of samples.
y is an n × 1 vector denoting the expression levels of the individuals in the T tissues. In other words, the first n_1 rows are the expression values of individuals in the first tissue, the next n_2 rows are the expression values in the second tissue, and so on. Expression values of each tissue are normalized to have mean 0 and variance 1.
C is an n × T matrix denoting the intercepts for the T tissues. The first column of C denotes the intercept for the first tissue: the first n_1 rows are ones, and the remaining rows are zeros. In the second column, which denotes the intercept for the second tissue, the first n_1 rows are zeros, the next n_2 rows are ones, and the remaining rows are zeros.
c is a T × 1 vector denoting the coefficients of the intercepts.
X is an n × T matrix denoting the SNP for the T tissues. This is similar to the C matrix, with the ones in C replaced by the SNP information. For example, in the first column, the first n_1 rows are the SNP values of the individuals in the first tissue, and the remaining rows are zeros.
β is a T × 1 vector denoting the coefficients of the SNP effects in the T tissues.
u is the random effect of the mixed model due to the repeated measurements of individuals, with u ~ N(0, σ²_u K), where K is an n × n matrix representing how individuals are shared across the tissues (discussed in the Power simulation framework section). e represents the random errors, with e ~ N(0, σ²_e I), where I is an identity matrix. To efficiently estimate the two variance components (σ²_u and σ²_e), we use the efficient mixed-model association (EMMA) package.
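The stacked design matrices described above can be illustrated with a toy construction (hypothetical helper name; input is a list of per-tissue genotype vectors, and tissues may have different sample sizes):

```python
def design_matrices(snps_by_tissue):
    """Build the stacked intercept matrix C and SNP matrix X described
    above. Each column corresponds to one tissue; rows are stacked
    tissue by tissue."""
    sizes = [len(g) for g in snps_by_tissue]
    n, t = sum(sizes), len(sizes)
    C = [[0] * t for _ in range(n)]
    X = [[0] * t for _ in range(n)]
    row = 0
    for tis, genos in enumerate(snps_by_tissue):
        for g in genos:
            C[row][tis] = 1   # indicator of the tissue this row belongs to
            X[row][tis] = g   # genotype placed in that tissue's column
            row += 1
    return C, X
```

With two individuals in the first tissue and one in the second, each row has a single non-zero column marking its tissue, matching the block structure described in the text.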
Given the estimate β̂, we combine information from multiple tissues by applying meta-analysis to β̂. If the effect of an eQTL is the same in all tissues, applying fixed effects model (FE) meta-analysis is a powerful approach. If the effects of an eQTL differ between tissues, applying random effects model (RE) meta-analysis is a powerful approach.
A p-value of the FE statistic is obtained from the standard normal distribution.
The initial value of the heterogeneity parameter is estimated using approaches from the traditional random effects model. We obtain a p-value of the RE statistic from p-value tables that are constructed from numerous null statistics.
It is important that the meta-analysis methods account for this covariance structure of effect size estimates.
Note that if the effect size estimates are independent (i.e., their covariance matrix is diagonal), the combined estimate and its variance are equivalent to the inverse-variance weighted effect size estimate (the numerator of equation (5)) and its variance.
We first “un-correlate” the effect size estimates and then give the decorrelated estimates and their variances as input to the traditional meta-analysis approaches that assume independent estimates. This “un-correlating” idea gives us the flexibility to use the correlated estimates in any meta-analysis framework requiring independent estimates. We use the decorrelated effect size estimates and their variances for the fixed effects model (which gives results equivalent to the Lin and Sullivan approach), the random effects model, heterogeneity estimation, and the m-value estimation.
More detailed derivations of these terms are discussed in Han and Eskin.
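For the simple independent case noted above (diagonal covariance), the classic inverse-variance weighted fixed-effects combination can be sketched as follows; this is a generic illustration, not the paper's implementation:

```python
def fixed_effects_meta(betas, variances):
    """Inverse-variance weighted fixed-effects combination of independent
    effect size estimates; returns the combined estimate and its variance."""
    weights = [1.0 / v for v in variances]
    w_sum = sum(weights)
    beta = sum(w * b for w, b in zip(weights, betas)) / w_sum
    return beta, 1.0 / w_sum
```

Estimates with smaller variance receive larger weight, and the variance of the combined estimate is always smaller than that of any single study, which is the source of the power gain of meta-analysis.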
There are subtle issues in our framework combining the mixed model and meta-analysis. First, the effect size estimates from a linear model or mixed model are typically t-distributed, while most meta-analysis methods assume normally distributed effect sizes. Second, our approach simultaneously considers all tissues using Equation (3), but the error model is slightly different from the tissue-by-tissue approach in Equation (2): in the tissue-by-tissue approach, the error is fit in each tissue separately, while in our new approach, the error is fit in all tissues together, which is often less powerful than the former. We correct for these subtle differences using simple heuristics (see Text S2).
This study was performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health.
1. Brem RB, Yvert G, Clinton R, Kruglyak L (2002) Genetic dissection of transcriptional regulation in budding yeast. Science 296: 752–5.
2. Brem RB, Kruglyak L (2005) The landscape of genetic complexity across 5,700 gene expression traits in yeast. Proc Natl Acad Sci U S A 102: 1572–7.
3. Keurentjes JJB, Fu J, Terpstra IR, Garcia JM, van den Ackerveken G, et al. (2007) Regulatory network construction in Arabidopsis by using genome-wide gene expression quantitative trait loci. Proc Natl Acad Sci U S A 104: 1708–13.
4. Chesler EJ, Lu L, Shou S, Qu Y, Gu J, et al. (2005) Complex trait analysis of gene expression uncovers polygenic and pleiotropic networks that modulate nervous system function. Nat Genet 37: 233–42.
5. Bystrykh L, Weersing E, Dontje B, Sutton S, Pletcher MT, et al. (2005) Uncovering regulatory pathways that affect hematopoietic stem cell function using ‘genetical genomics’. Nat Genet 37: 225–32.
6. Cheung VG, Spielman RS, Ewens KG, Weber TM, Morley M, et al. (2005) Mapping determinants of human gene expression by regional and genome-wide association. Nature 437: 1365–9.
7. Stranger BE, Nica AC, Forrest MS, Dimas A, Bird CP, et al. (2007) Population genomics of human gene expression. Nat Genet 39: 1217–24.
8. Emilsson V, Thorleifsson G, Zhang B, Leonardson AS, Zink F, et al. (2008) Genetics of gene expression and its effect on disease. Nature 452: 423–8.
9. Spielman RS, Bastone LA, Burdick JT, Morley M, Ewens WJ, et al. (2007) Common genetic variants account for differences in gene expression among ethnic groups. Nat Genet 39: 226–31.
10. Fu J, Wolfs MGM, Deelen P, Westra HJJ, Fehrmann RSN, et al. (2012) Unraveling the regulatory mechanisms underlying tissue-dependent genetic variation of gene expression. PLoS Genet 8: e1002431.
11. de Bakker PIW, Ferreira MAR, Jia X, Neale BM, Raychaudhuri S, et al. (2008) Practical aspects of imputation-driven meta-analysis of genome-wide association studies. Hum Mol Genet 17: R122–8.
12. Cochran WG (1954) The combination of estimates from different experiments. Biometrics 10: 101–129.
13. Mantel N, Haenszel W (1959) Statistical aspects of the analysis of data from retrospective studies of disease. J Natl Cancer Inst 22: 719–48.
14. DerSimonian R, Laird N (1986) Meta-analysis in clinical trials. Control Clin Trials 7: 177–88.
15. Ioannidis JPA, Patsopoulos NA, Evangelou E (2007) Heterogeneity in meta-analyses of genome-wide association investigations. PLoS One 2: e841.
16. Ioannidis JPA, Patsopoulos NA, Evangelou E (2007) Uncertainty in heterogeneity estimates in meta-analyses. BMJ 335: 914–6.
17. Evangelou E, Maraganore DM, Ioannidis JPA (2007) Meta-analysis in genome-wide association datasets: strategies and application in Parkinson disease. PLoS One 2: e196.
18. Han B, Eskin E (2011) Random-effects model aimed at discovering associations in meta-analysis of genome-wide association studies. Am J Hum Genet 88: 586–98.
19. Han B, Eskin E (2012) Interpreting meta-analyses of genome-wide association studies. PLoS Genet 8: e1002555.
20. Higgins J, Thompson SG (2002) Quantifying heterogeneity in a meta-analysis. Statistics in Medicine 21: 1539–1558.
21. Storey JD (2002) A direct approach to false discovery rates. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 64: 479–498.
22. Petretto E, Bottolo L, Langley SR, Heinig M, McDermott-Roe C, et al. (2010) New insights into the genetic control of gene expression using a Bayesian multi-tissue approach. PLoS Comput Biol 6: e1000737.
23. Kim S, Xing EP (2009) Statistical estimation of correlated genome associations to a quantitative trait network. PLoS Genet 5: e1000587.
24. Hooper M, Hardy K, Handyside A, Hunter S, Monk M (1987) HPRT-deficient (Lesch-Nyhan) mouse embryos derived from germline colonization by cultured cells. Nature 326: 292–5.
25. Petkov PM, Ding Y, Cassell MA, Zhang W, Wagner G, et al. (2004) An efficient SNP system for mouse genome scanning and elucidating strain relationships. Genome Res 14: 1806–11.
26. Kang HM, Zaitlen NA, Wade CM, Kirby A, Heckerman D, et al. (2008) Efficient control of population structure in model organism association mapping. Genetics 178: 1709–23.
27. Fleiss JL (1993) The statistical basis of meta-analysis. Stat Methods Med Res 2: 121–45.
28. Hardy RJ, Thompson SG (1996) A likelihood approach to meta-analysis with random effects. Statistics in Medicine 15: 619–629.
29. Han B, Sul JH, Eskin E, de Bakker PIW, Raychaudhuri S (2013) A general framework for meta-analyzing dependent studies with overlapping subjects in association mapping. URL http://arxiv.org/abs/1304.8045.
30. Lin DYY, Sullivan PF (2009) Meta-analysis of genome-wide association studies with overlapping subjects. Am J Hum Genet 85: 862–72.
31. Stephens M, Balding DJ (2009) Bayesian statistical methods for genetic association studies. Nat Rev Genet 10: 681–90.
32. Marchini J, Howie B, Myers S, McVean G, Donnelly P (2007) A new multipoint method for genome-wide association studies by imputation of genotypes. Nat Genet 39: 906–13.
On 20 September, the Bahujan Samaj Party took a big decision on the elections in Chhattisgarh and Madhya Pradesh. At a press conference, the party announced an alliance in Chhattisgarh with the Chhattisgarh Janata Congress of former Chief Minister Ajit Jogi and declared Jogi the alliance's candidate for the post of Chief Minister. Ajit Jogi comes from a tribal community; he became an IAS officer, then a minister and a Member of Parliament, and in 2001 the first Chief Minister of Chhattisgarh. After differences with the Congress, he founded his own party, the Chhattisgarh Janata Congress, which has now formed a coalition with the Bahujan Samaj Party for the upcoming elections.
Under the coalition, the Bahujan Samaj Party got 35 seats and the Chhattisgarh Janata Congress 55 seats, and Ajit Jogi was declared the chief ministerial candidate by Mayawati. Mayawati said that the Bharatiya Janata Party has been in power for the past 15 years and has done nothing in that time for Dalits, backward classes, tribal people, farmers, or laborers, and that its announcements appeared only in the media while on the ground no work was done.
At the same time, the Bahujan Samaj Party has released its first list of 22 candidates in Madhya Pradesh. It now remains to be seen how the Bahujan Samaj Party performs in the upcoming elections in Madhya Pradesh and Chhattisgarh.
Can you tell me more about enteritis? What are the causes and symptoms?
Enteritis is a term that can be used in a number of different ways. It is a little confusing, but strictly speaking, it is an inflammation of the small intestine.
Our medical encyclopedia defines enteritis as an inflammation that is caused by a bacterial or viral infection.
Several diseases and conditions can have symptoms that can be categorized as enteritis. These include Crohn’s disease and radiation enteritis, which is caused by radiation treatment for cancer.
Many people use enteritis or regional enteritis interchangeably with Crohn’s disease, which is a chronic disease involving severe inflammation in the digestive tract — most commonly the small intestine.
Traditional approaches for learning 3D object categories use either synthetic data or manual supervision. In this paper, we propose instead an unsupervised method that is cued by observing objects from a moving vantage point. Our system builds on two innovations: a Siamese viewpoint factorization network that robustly aligns different videos together without explicitly comparing 3D shapes; and a 3D shape completion network that can extract the full shape of an object from partial observations. We also demonstrate the benefits of configuring networks to perform probabilistic predictions as well as of geometry-aware data augmentation schemes. State-of-the-art results are demonstrated on publicly-available benchmarks.
When tasked with the study of human thought and behavior, psychologists tend to adopt one, or a combination of several, perspectives. Cognitive psychologists examine conscious and subconscious processes that influence how we perceive and interact with the world around us. Social psychologists focus on our relationships with others, how they influence us and how internal factors influence them, in turn. Biological psychologists study the interface between the mind and the body to understand how behaviors and cognitions arise. Evolutionary psychologists attempt to decipher how patterns of behaviors and thoughts served an adaptive role that led to increased chances of survival for our ancestors. This becomes particularly difficult when considering mental health disorders like schizophrenia. How can diseases like these be advantageous for survival when they are so debilitating?
For a trait - like a specific behavior or a mental illness - to be subject to evolution, four conditions must be met: 1) Reproduction occurs in the population. 2) The trait is heritable, or can be passed down from parent to child. 3) There is variation in the trait in the population. 4) A selective pressure exists on the trait. This means that possessing (or not possessing) the trait improves chances of survival and participating in condition number 1: reproduction. Condition #1 is certainly met, as humans have always relied on reproduction to make new humans. A long history of research has supported condition #2, with many mental illnesses shown to have high rates of heritability - schizophrenia has been found to be nearly 50% heritable. Put another way, if an identical twin is diagnosed with schizophrenia, the chances that their twin (who shares 100% of their DNA) also has schizophrenia is about 50%, even if the twins are raised separately. Further support for this idea is that, with the fairly recent advent of genome sequencing, researchers have identified genes associated with schizophrenia and other psychiatric illnesses. Condition 3 is also met, as 1% of the population suffers from schizophrenia.
Condition 4 has been the focus of most studies into the evolutionary nature of mental illnesses like schizophrenia. Given the severe and debilitating nature of a disorder like schizophrenia, one would assume that there would be strong selection pressure against occurrence of the disorder in the population. However, schizophrenia still exists. Why? It may be that schizophrenia is an extreme version of behaviors or cognitive processes that offer an adaptive advantage, similarly to how the allele that causes sickle cell anemia when two copies are present also provides resistance to malaria when only one copy is present. Recent work has provided support for this idea: that lower levels of psychotic-like traits or genetic risk may provide advantages that could explain the continued existence of psychotic disorders like schizophrenia.
Nichola Raihani and Vaughn Bell from University College London published a review on an evolutionary perspective of paranoia (2018). Although paranoia is not unique to schizophrenia, it is one of the most endorsed symptoms among schizophrenia patients. Paranoia is marked by beliefs that other people are secretly intending to cause one harm or that a conspiracy exists against you. The authors draw on theoretical and experimental literature to argue that paranoid thinking may have evolved as a sensitivity and vigilance toward social threat. From early human history through modern times, social group cohesion has served a crucial role in survival via resource gathering and distribution, as well as protection. Threats to this group cohesion potentially impacted the survival of every individual in the group. Since groups often contain(ed) related members, any threat to genetic survival was compounded. Therefore, it would be advantageous for groups or members of groups to be wary, even aggressively so, of new members or outside groups. As social groups have become more flexible and society has seen increased organization of external (including institutional) groups, extreme levels of paranoid thoughts can be targeted at a variety of people and groups, from family members to law enforcement agencies. While these extreme versions of paranoia likely impede optimal functioning, if paranoid thoughts continue to protect individuals from social threats, they will likely persist in the population.
Other hallmark symptoms of psychotic disorders are hallucinations. A recent study by Alderson-Day, Lima and colleagues examined the perceptual abilities of non-clinical voice-hearers (2017). Non-clinical voice-hearers (or NCVHs) are individuals who experience full auditory hallucinations, but may not display the other symptoms or functional difficulty associated with a psychotic disorder diagnosis. Many NCVHs appreciate their voice-hearing and do not wish to receive treatment for it. In this study, Alderson-Day, Lima, et al. recruited NCVHs and non-voice hearing controls to do a noise detection task during an MRI scan. The noise detection task involved listening to a series of “white-noise” clips, and pressing a button if they heard a distinct noise in those clips (they were trained on detecting the distinct noise prior to the scan). What the participants did not know was that half of the “white-noise” clips were actually recorded speech that had been acoustically degraded to a point that the speech is typically unintelligible at first. At the end of the task, the participants were asked if they noticed any speech sounds in the clips and if they remembered what they heard. Now that the subjects were aware of potential speech in the clips, the task was repeated.
The authors found that NCVHs were more likely than their peers to spontaneously notice the presence of speech in the degraded sound clips. Further, if they noticed speech, they noticed it earlier than the controls and were slightly more likely to recall specific words that they heard. Group differences were reflected in activation of the anterior cingulate cortex, a brain region that has been associated with a variety of functions, but seems to facilitate attentional processes, including monitoring internal and external speech and sounds. Although the paper did not offer evolutionary implications, we can see how such processes in psychosis could have contributed to fitness for our evolutionary ancestors. The authors concluded that NCVHs may have a cognitive bias towards hearing speech. Such a bias for speech or other salient sounds, generally, may have provided an advantage in detecting threats from ambiguous sounds. This bias may have also been beneficial during the early development of spoken communication and language. Maybe future research in psychology or linguistics will test these possibilities.
A third possible adaptation associated with schizophrenia and psychotic disorders is creativity. The idea that madness often accompanies genius dates back centuries, but only starting in the 20th century do we have data to back this up. The link between creativity and psychosis or psychotic-like traits is fairly robust. Schizophrenia patients perform better than controls on logic puzzles that require practical limits to be relaxed (Fink et al. 2014). Well siblings and children of individuals with schizophrenia are more likely to have successfully creative careers than individuals without familial psychosis. Further, schizotypal traits (personality traits that have overlap with several schizophrenia symptoms and are associated with increased risk of developing psychosis) are also associated with various measures of creativity from laboratory tasks of divergent thinking (e.g. coming up with nonstandard usage for objects) to artistic success.
Power and colleagues (2015) took this link between psychosis and creativity further by studying the shared genetic basis of psychosis and creativity. Using a large dataset in Iceland consisting of genetic, medical, and occupational information, they calculated a polygenic risk score for two psychotic disorders: schizophrenia and bipolar disorder. A polygenic risk score is basically a single value that summarizes the number of alleles an individual carries that are associated with the development of a certain condition. The authors found that both schizophrenia and bipolar disorder polygenic risk scores predicted creativity, indexed by membership in national artistic societies (for dancers, musicians, visual artists, etc.). The authors also found that polygenic risk predicted attainment of a university degree, but it predicted creative professions even when controlling for degree attainment. They also determined that the genetic effects on creativity are not confounded by having relatives with psychotic disorders.
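The idea behind a polygenic risk score can be sketched in a few lines: for each risk locus, count how many risk alleles (0, 1, or 2) an individual carries, weight the count by that allele's estimated effect size, and sum. The loci, weights, and counts below are invented for illustration, not values from the actual study:

```python
# Hypothetical illustration of a polygenic risk score (PRS).
# Effect weights would normally come from a genome-wide association
# study; these SNP identifiers and numbers are made up.
effect_weights = {"rs0001": 0.12, "rs0002": 0.05, "rs0003": 0.30}

# How many copies of each risk allele this individual carries (0-2).
allele_counts = {"rs0001": 2, "rs0002": 0, "rs0003": 1}

# PRS = sum over loci of (allele count x effect weight).
prs = sum(effect_weights[snp] * allele_counts[snp] for snp in effect_weights)
print(round(prs, 2))  # 0.54
```

In practice, scores like this are computed over thousands of loci and then standardized within the study population, but the weighted-sum structure is the same.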
An evolutionary perspective on psychological disorders forces researchers to study not only how the illness negatively impacts those who suffer from it, but how it may have offered advantages to our evolutionary ancestors, like protecting the group from threats. We are also able to see how genes related to disorders, or non-impairing levels of symptoms, can provide certain advantages even today, like enhanced speech perception or creative ability. Further research may help us understand how potentially beneficial mechanisms become pathological, but for now, we can see that even the most severe and impairing disorders, like schizophrenia, can have their roots in adaptive evolutionary mechanisms.
Alderson-Day, B., Lima, C. F., Evans, S., Krishnan, S., Shanmugalingam, P., Fernyhough, C., & Scott, S. K. (2017). Distinct processing of ambiguous speech in people with non-clinical auditory verbal hallucinations. Brain, 140(9), 2475-2489.
Fink, A., Benedek, M., Unterrainer, H. F., Papousek, I., & Weiss, E. M. (2014). Creativity and psychopathology: are there similar mental processes involved in creativity and in psychosis-proneness?. Frontiers in psychology, 5, 1211.
Reginsson, G. W., Ingason, A., Euesden, J., Bjornsdottir, G., Olafsson, S., Sigurdsson, E., ... & Steinberg, S. (2018). Polygenic risk scores for schizophrenia and bipolar disorder associate with addiction. Addiction biology, 23(1), 485-492.
Raihani, N. J., & Bell, V. (2018). An evolutionary perspective on paranoia. Nature Human Behaviour, 1.
Le Banquet des Généraux / Armistice possible?
In all the games that I have seen played, none have ended in Armistice (the historical ending). Trying to get 40 war status points seems to be near impossible.
The CP player who has generally most to gain from reaching Armistice has cards worth 20 WS in his deck - he has to play all of them (even if some may be of marginal use). The Allied player has cards worth 22 WS in his deck and can block Armistice by holding back some of these.
Don't really know. It is difficult indeed to get to WS 40. I didn't tackle that issue in the variant because it wasn't anything shocking, really. But it would probably be a good idea, and historically consistent, to give Bolshevik Revolution a 1 WS, to encourage the CP player to push forward on the Russian collapse track. And the small chance of allowing an earlier end to the game could act as a play-balance element, depriving the Allies of a 5th Blockade VP (the Friedensturm final requirement is harder to implement than the US army attack).
I added a 1WS to the Bolshevik Revolution event.
Barring any blatant problems, I expect this to be the very last change.
Again we agree on something - it is becoming a habit.
The Bolshevik Revolution seems to be a good choice - it was after all one of the major events in the collapse of the Russian political sphere and ardently desired by the CP player (who let Lenin slip through Germany to lend a hand).
One of the Kaiserschlacht cards would also have been a good option - a possible Friedensturm as an additional incentive to go all the way? Again, it was a major event in the political sphere, as the German army, believed to be moribund, went over to the offensive, stretching the Allies to the limit.
I thought of it but I don't think it's a good idea: Friedensturm would have to be the one, and I don't want to allow the CP player to close the game with an end-of-turn Friedensturm adding the 40th War Status point, thus depriving the Allies of a chance to counterattack.
The contract ended because Parkgate voluntarily liquidated itself and assigned the contract to British Waggon, which did not have privity.
Plaintiff Parkgate let one hundred railway wagons to defendant for seven years at £625 per year. As part of the contract, Parkgate agreed to maintain the wagons for defendant. Later that year, Parkgate began to liquidate and dissolve itself. As part of its liquidation, Parkgate assigned and transferred the contracts with defendant to plaintiff British Waggon. British Waggon took over the repairing stations and the staff employed therefor. British Waggon has since performed all repairs for defendant, but under a special agreement without prejudice to their rights. Plaintiff sued for unpaid rent.
Was Parkgate allowed to delegate its obligations to defendant to British Waggon?
Parkgate still exists as long as the wagons are let, so the fact that it liquidated itself is irrelevant.
While contracts based on personal performance cannot be delegated, it is not assumed that during negotiations defendant attached any importance to whom the repairs were done by. It just wanted the wagons repaired. If the wagons get adequately repaired, it is not a departure from the contract to have someone else do them.
If it is not important that the obligations be performed by a specific person, they are delegable.
1 On 1 is the baseline for player development. Think about driveway duels, challenges in the park, and the way we naturally built instincts for the game.
Find time in each practice to play 1 On 1. 1 On 1 makes each player (1) possess and score the ball or (2) defend the ball and get stops. The transitions between the two are the 50/50 effort plays that determine most game outcomes: loose balls and rebounds.
Fiji has a diverse religious landscape, but Christianity is the religion of the majority.
Although Christianity is the most widely practiced religion in Fiji, there are many large and beautiful Hindu temples.
The religious landscape of Fiji is varied, but Christianity is dominant, followed by Hinduism and Islam. While indigenous Fijians are mainly Christians, most of those with Asian ancestry are Hindus, Muslims or Sikhs. The country also celebrates several festivals and observes numerous holidays, since it acknowledges the special days of the major religions practiced in the country. Below is a more detailed description of the religious beliefs of Fiji.
How Did Religious Beliefs Evolve in Fiji?
Prior to the 19th century, indigenous Fijians practiced various traditional religions based on divination and animism. With the arrival of Europeans in the 19th century, the religious landscape of Fiji started to change, with Christianity gradually gaining popularity. The conversion of Fijian tribal chiefs to Christianity helped spread the religion among their followers. The colonization of Fiji by the British led to further changes in the country's religious landscape. While Christianity became hugely popular during this time, other religions like Hinduism, Sikhism, and Islam were also introduced in the country by indentured laborers that the British brought from India to work in the country's sugar plantations.
Freedom of Religion in Fiji: What Does the Law Say?
Prior to colonization, Fiji's traditional laws governed the people’s right to practice religion. After the British captured Fiji, the laws imposed by the British government became applicable, and therefore the Westminster system dictated the religious rights of the country’s people. In independent Fiji, the constitution protects people’s right to practice the religion of their choice. However, that right might be terminated if deemed a threat to the public or an infringement on the freedom or rights of other members of society.
The religious beliefs of indigenous Fijians can be classified as shamanism or animism. Complex rituals, spirit worship, belief in an afterlife, worship of natural objects and phenomena, and belief in various myths and legends were all part of such religious beliefs. Prior to the arrival of Europeans, these beliefs governed every aspect of life for indigenous Fijians.
Christianity is the dominant religion in Fiji, and is practiced by 64.4% of the country’s population. The religion was first introduced in Fiji by the Tongans, who were more receptive to the Europeans than Fiji’s indigenous population. As the influence of Enele Ma’afu, a Tongan Prince and an ardent follower of Christianity, grew in the Lau Group of islands of Fiji, Christianity began to spread quickly throughout the country. When Seru Epenisa Cakobau, a powerful Fiji chieftain, converted to Christianity, the religion found an even firmer ground in the country, and the colonization of Fiji by the British in 1874 ensured that Christianity grew and prospered even further. Methodism is the most dominant Christian denomination in Fiji today, while Anglicanism, Catholicism, and several other denominations also have significant followings.
Hinduism is the second major religion in Fiji, and is practiced by 27.9% of the country's population. Hinduism was introduced in Fiji by indentured Hindu workers who were brought to the island from India by British colonialists to work Fiji's sugar plantations between 1879 and 1920. Many of these workers and their families settled in Fiji and soon their religion evolved to become an integral part of the Fijian religious landscape. Today, large and impressive Hindu temples dot the country. The most famous among these temples is the Krishna temple of ISKCON, which is ISKCON's biggest temple outside of India. The lives of Fiji's Hindus have not been entirely peaceful, since the community has faced persecution during several events of communal unrest and coups. The Hindu community of Fiji, however, still continues to thrive and has built several temples, schools, and other institutions that serve their religious, educational, and other needs in Fiji.
Like Hinduism, Islam in Fiji was introduced primarily by Muslim indentured workers brought to the islands of Fiji from India by British colonialists. Their religion was established in the country by the latter half of the 19th century. Today, Muslims constitute 6.3% of Fiji's population. The majority of Fiji’s Muslims are Sunnis (59.7 %) and the remainder either belong to the Ahmadiyya minority (3.6%) or other denominations (36.7%). Fiji also has its own Muslim League that advocates for the rights of the country’s Muslim community, promotes Islamic education, and also actively participates in politics.
Although the country's constitution grants the freedom to practice all religions, several communal conflicts and coups mar Fiji's recent history. Between the late 1980s and early 2000s, Hindus became victims of religious persecution and many were forced to emigrate to other countries. Hindu temples were even burned in arson attacks. In recent years, some politicians have advocated for the establishment of Christianity as the state religion of Fiji, but no such policy has been implemented.
Can you tell me why, if a health visitor makes an appointment to call for a home visit, he/she can turn up nearly 2 hours late, with just an apology - instead of calling beforehand to say he/she has been delayed and to give an estimated time of arrival? Communication is key, it is also efficient! People have their own lives to live and sloppy or non-communication impacts on other arrangements. Please train your people to communicate - turning up so late without warning is not acceptable and indicates a lack of consideration amongst other things. No one person is more important than another and Health Visitors are public employees accountable to us as well as the NHS!!! No wonder the service is in such a mess!
Communication - simple as that! Your policy on moderation says 'be helpful and respectful' - does that work only one way??!
They could text to save cost not just delay arrival and then turn up with a cursory apology - not acceptable!
The earliest idea of a computer network intended to allow general communication between users of various computers was the ARPANET, the world's first packet switching network, which first went online in 1969.
The Internet's roots lie in the ARPANET, which was not only the intellectual forerunner of the Internet, but also initially the core network in the collection of networks making up the Internet, as well as an important tool in developing the Internet (being used for communication between the groups working on internetworking research).
The need for an internetwork appeared with ARPA's sponsorship, by Robert Kahn, of the development of a number of innovative networking technologies; in particular, the first packet radio networks (inspired by the ALOHA network), and a satellite packet communication program. Later, local area networks (LANs) would also join the mix.
Connecting these disparate networking technologies was not possible with the kind of protocols used on the ARPANET, which depended on the exact nature of the subnetwork. A wholly new kind of networking architecture was needed.
Kahn recruited Vinton Cerf to work with him on the problem, and they soon worked out a fundamental reformulation in which the hosts, rather than the network as in the ARPANET, became responsible for reliability. Cerf credits Hubert Zimmermann and Louis Pouzin (designer of the CYCLADES network) with important influences on this design.
With the role of the network reduced to the bare minimum, it became possible to join almost any networks together, no matter what their characteristics, thereby solving Kahn's initial problem. (One popular saying has it that TCP/IP, the eventual product of Cerf and Kahn's work, will run over "two tin cans and a string".) A computer called a gateway (a name later changed to router to avoid confusion with a number of other kinds of devices, also called gateways) is provided with an interface to each network, and forwards packets back and forth between them.
Happily, this new concept was a perfect fit with the newly emerging local area networks, which were revolutionizing communication between computers within a site.
The early Internet, based around the ARPANET, was government-funded and therefore restricted to research use only. Commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, as the TCP/IP protocols (developed by Vint Cerf and others) replaced earlier protocols like NCP, the connections expanded to more colleges and even to a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard who were participating in research projects.
Regional TCP/IP-based networks such as NYSERNet (New York State Education and Research Network) and BARRNet (Bay Area Regional Research Network) grew up and started interconnecting with the ARPANET. This greatly expanded the reach of the growing network, and to a great extent was the point where the ARPANET turned into the Internet.
At the end of the 1980s, the US Department of Defense decided the network was developed enough for its initial purposes, and decided to stop further funding. The US National Science Foundation, another branch of the US government, took over responsibility for the core Internet backbone. In 1989 the NSFNet backbone was established, the US military broke off as a separate MILNET network, and the ARPANET was shut down.
Parallel to the ARPANET, other networks were growing. Some were educational and centrally-organized like BITNET and CSNET. Others were a grass-roots mix of school, commercial, and hobby like the UUCP network.
During the late 1980s the first Internet Service Provider companies were formed. Companies like PSINet, UUNET, Netcom, and Portal were formed to provide service to the regional research networks and provide alternate network access (like UUCP-based email and Usenet News) to the public.
The interest in commercial use of the Internet became a hotly-debated topic. Although commercial use was forbidden, the exact definition of commercial use could be unclear and subjective. Everyone agreed that one company sending an invoice to another company was clearly commercial use, but anything less was up for debate. The alternate networks, like UUCP, had no such restrictions, so many people were skirting grey areas in the interconnection of the various networks.
Many university users were outraged at the idea of non-educational use of their networks. Ironically it was the commercial Internet service providers who brought prices low enough that junior colleges and other schools could afford to participate in the new arenas of education and research.
By 1994, the NSFNet lost its standing as the backbone of the Internet. Other competing commercial providers created their own backbones and interconnections. Regional NAPs (network access points) became the primary interconnections between the many networks. The NSFNet was dropped as the main backbone, and commercial restrictions were gone.
E-mail had existed as a message exchanging service on early time sharing mainframe computers connected to a number of terminals. In 1971, Ray Tomlinson developed the first system of exchanging addressed messages between different, networked computers; he also introduced the "name@computer" notation that is still used today. E-mail turned into the internet "killer application" of the 1980s.
The second most popular application of the early internet was usenet, a system of distributed discussion groups which is still going strong today. Usenet had existed even before the internet, as an application of Unix computers connected by telephone lines via the UUCP protocol.
It wasn't until the early to mid 1980s that the services we now use most on the Internet started appearing. The concept of "domain names" (like "wikipedia.org") requiring "Domain Name Servers" wasn't even introduced until 1984. Before that all the computers were just addressed by their IP addresses (numbers) or used a central "hosts" file maintained by the NIC. Most protocols used for email and other services were significantly enhanced after this.
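The central "hosts" file mentioned above worked like a single shared lookup table mapping names to IP addresses. A minimal sketch of that idea (host names and addresses here are invented for illustration, not entries from the real NIC file):

```python
# A toy version of the pre-DNS central hosts table: one flat mapping
# from host names to IP addresses, copied to every machine.
# These names and addresses are made up for the example.
hosts = {
    "host-a": "10.0.0.73",
    "host-b": "10.0.0.6",
}

def resolve(name):
    # Each machine consulted its own copy of the same table; a name
    # missing from the table simply could not be reached.
    return hosts.get(name)

print(resolve("host-b"))   # 10.0.0.6
print(resolve("unknown"))  # None
```

The obvious weakness, and one motivation for DNS, is that every change had to be redistributed to every machine, which stopped scaling as the network grew.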
The Internet has developed a significant subculture dedicated to the idea that the Internet is not owned or controlled by any one person, company, group, or organization. Nevertheless, some standardization and control is necessary for anything to function.
Many people wanted to put their ideas into the standards for communication between the computers that made up this network, so a system was devised for putting forward ideas. One would write one's ideas in a paper called a "Request for Comments" (RFC for short), and let everyone else read it. People commented on and improved those ideas in new RFCs. (With its basis as an educational research project, much of the documentation was written by students or others who played significant roles in developing the network but did not have official responsibility for defining standards. This is the reason for the very low-key name of "Request for Comments" rather than something like "Declaration of Official Standard".) The first RFC (RFC1) was written on April 7th, 1969. There are now well over 2000 RFCs, describing every aspect of how the internet functions.
The Internet standards process has been as innovative as the Internet itself. Prior to the Internet, standardization was a slow process run by committees with arguing vendor-driven factions and lengthy delays. In networking in particular, the results were monstrous patchworks of bloated specifications.
The fundamental requirement for a networking protocol to become an Internet standard is the existence of at least two working implementations that interoperate with each other. This makes sense looking back, but it was a new concept at the time. Other efforts built huge specifications with many optional parts and then expected people to go off and implement them, and only later did people find that they did not interoperate, or worse, the standard was not even implementable.
In the 1980s, the International Organization for Standardization (ISO) documented a new effort in networking called Open Systems Interconnect or OSI. Prior to OSI, networking was completely vendor-developed and proprietary. OSI was a new industry effort, attempting to get everyone to agree to common network standards to provide multi-vendor interoperability. The OSI model was the most important advance in teaching network concepts. However, the OSI protocols or "stack" that were specified as part of the project were a bloated mess. Standards like X.400 for e-mail took up several large books, while Internet e-mail took only a few dozen pages at most in RFC-821 and 822. Most protocols and specifications in the OSI stack are long-gone today, such as token-bus media, CLNP packet delivery, FTAM file transfer, and X.400 e-mail. Only one, X.500 directory service, still survives with significant usage, mainly because the original unwieldy protocol has been stripped away and effectively replaced with LDAP.
Some formal organization is necessary to make things operate. The first central authority was the NIC (Network Information Center) at SRI (Stanford Research Institute in Menlo Park, California).
The part of the Internet most people are probably most familiar with is the World Wide Web.
As the Internet grew through the 1980s and early 1990s, many people realized the growing need to be able to find and organize files and related information. Projects such as Gopher, WAIS, and the Anonymous FTP Archive Site list attempted to create schemes to organize distributed data and present it to people in an easy-to-use form. Unfortunately, these projects fell short in being able to accommodate all the various existing file and data types, and in being able to grow without centralized bottlenecks.
One of the most promising ideas was hypertext, inspired by Vannevar Bush's "memex" and Ted Nelson's Project Xanadu. Small self-contained hypertext systems had been created before, such as Apple Computer's HyperCard, but nobody had figured out how to scale it up to be able to refer to another document anywhere in the world.
The solution was invented by Tim Berners-Lee in 1989. He was a physicist working at CERN, the European Particle Physics Laboratory, and wanted a way for physicists to share information about their research. His documentation project was the source of the two key inventions that made the World Wide Web possible.
The two key inventions were the uniform resource locator (URL) and hypertext markup language (HTML). The URL was a simple way to specify the location of a document anywhere on the Internet in one simple address that specified a machine domain name, a path on that machine, and a protocol to use. HTML was an easy way to embed codes into a text file that could define the structure of a document and also include links pointing to other URLs. An additional network protocol (HTTP: hypertext transfer protocol) was also invented for reduced overhead in transfers, but the true genius of the new system was that a new protocol was useful but not necessary; the URL and HTML system was backwards compatible with existing protocols like FTP and Gopher.
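The three parts of a URL described above — protocol, machine domain name, and path — can be seen by decomposing one with Python's standard library (the URL below is just an illustrative example):

```python
from urllib.parse import urlparse

# Decompose a URL into the three parts the text describes.
url = "http://info.cern.ch/hypertext/WWW/TheProject.html"
parts = urlparse(url)

print(parts.scheme)  # protocol to use: "http"
print(parts.netloc)  # machine domain name: "info.cern.ch"
print(parts.path)    # path on that machine: "/hypertext/WWW/TheProject.html"
```

Packing all three into one string is what let a single link point at a document on any machine in the world, over whatever protocol that machine spoke.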
Later, around 1992, people realized that the simple markup capabilities of HTML could allow graphics to be included in text documents. The first graphical web browsers were developed: Viola and Mosaic. Mosaic was developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Andreessen left NCSA-UIUC and joined Jim Clark, one of the founders of SGI (Silicon Graphics, Inc). They started Mosaic Communications, which became Netscape Communications Corporation, making Netscape Navigator the first commercially successful browser. Microsoft acquired technology from Spyglass (who got their technology from NCSA) to develop Internet Explorer.
The ease of creating new Web documents and linking to existing ones caused exponential growth. As the Web grew, search engines were created to track pages on the web and allow people to find things. The first search engine, Lycos, was created in 1993 as a university project. In 1993, the first web magazine, The Virtual Journal, was published by a University of Maine student. At the end of 1993, Lycos indexed a total of 800,000 web pages.
By August 2001, the Google search engine tracked over 1.3 billion web pages and the growth continues. At the end of 2002, Google's index exceeded 3 billion pages.
Who invented Bowling? The name of a specific person has never been credited with the invention of Bowling. When was Bowling invented? Historians tend to agree that Bowling was invented in c. 3100 B.C., and the first known evidence of the game dates back to the Ancient Egyptian Early Dynastic Period (3100 - 2686 BCE).
Definition of Bowling: Bowling is defined as a game of tenpins or duckpins in which a heavy ball is rolled down a special alley, usually made of wood, at a group of wooden pins, especially the games of tenpin (tenpins) and skittles (ninepins) with the aim of knocking them over.
Fact 1: Who invented Bowling? The name of the inventor of Bowling is unknown but it is believed to have been invented in c. 3100 B.C. during the Ancient Egyptian: Early Dynastic Period (3100 - 2686 BCE) era of inventions.
Fact 2: Who invented Bowling? The oldest evidence of the Ancient Egyptian game of bowling was found in 1932 by Sir Flinders Petrie in an Ancient Egyptian grave which contained various archaic bowling pins and bowling balls.
Fact 3: Who invented Bowling? The throwing, or rolling, of stones to hit a target or mark is certainly among the earliest of games and sports in human history.
Fact 4: Who invented Bowling? The word 'bowl' derives from old English to mean a 'wooden ball' and the word bowl eventually took on the meaning of "to roll a ball on the ground". The word 'pin' derives from the Old English word 'pinn' which originally meant a jutting peg or bolt.
Fact 5: Who invented Bowling? The invention of Bowling in c. 3100 B.C. led to the much later invention of bowls and skittles during the Medieval era and a similar version of skittles called 'Ninepin bowling' was introduced to America from Europe during the colonial era. Different forms of the sport such as candlepin bowling was introduced in 1880 and duckpin bowling was invented in 1895.
Fact 6: Who invented Bowling? The invention of the game of Bowls dates back to the 13th century, when images of the sport were used to illustrate Medieval manuscripts. An annual bowls tournament is known to have been held in Southampton, England in 1299, and details of this event include the first reference to the use of a 'bowling green'.
Fact 7: Who invented Bowling? The game of Bowls became so popular that, together with other sports and games, was banned on a Sunday in 1366 by King Edward III of England so that men would concentrate on archery practice.
Fact 8: Who invented Bowling? The most famous story related to bowls relates to the legend of the Elizabethan sailor and explorer Sir Francis Drake who famously finished his game of bowls on Plymouth Hoe in Devon before he joined in the fray against the great Spanish Armada that threatened to invade England in 1588.
Fact 9: Who invented Bowling? The modern version of tenpin bowling originated in America as a variant of the German nine-pin game 'Kegeln' aka 'Kegel' (the German name for skittle) which was introduced to the New World by German immigrants. Kegel was played as a gambling game in which large amounts of money were won and lost.
Fact 10: Who invented Bowling? In 1733 Bowling Green Park was opened and is the oldest public park in New York City. The park got its name for its inclusion of a bowling green.
Fact 11: Who invented Bowling? The popular game of Bowling was mentioned in the famous American story of Rip Van Winkle by Washington Irving that was published in 1812. In the story Rip Van Winkle was woken up by the sound of "crashing ninepins".
Fact 12: Who invented Bowling? The first indoor bowling alley was Knickerbockers of New York City, that was built in 1840.
Fact 13: Who invented Bowling? The American game of 'ninepins' continued the betting history of the game to such an extent that in 1841 the state of Connecticut passed a law making it illegal to maintain "any ninepin lanes". Wealthy people circumnavigated the law by building private lanes in their mansions.
Fact 14: Who invented Bowling? Adding an extra pin to the banned game of 'ninepins' exploited a loophole in the law and enabled ordinary people to continue playing the game. The game of Ninepins thus changed to the game of Tenpins.
Fact 15: Who invented Bowling? Candlepin bowling, which used more slender pins, was developed in 1880 in Worcester, Massachusetts, by Justin White, a local billiards and bowling center owner.
Fact 16: Who invented Bowling? The first standardisation of the rules for the sport was established at Beethoven Hall in New York City on September 9, 1895 as the American Bowling Congress and major national bowling competitions were established.
Fact 17: Who invented Bowling? Duckpin bowling, which used shorter, lighter but squatter pins, was developed in Baltimore around 1900 at a bowling, billiards and pool hall owned by John McGraw and Wilbert Robinson.
Fact 18: Who invented Bowling? Bowling balls were traditionally made from a dense hardwood but in 1905 this changed when the first rubber ball, or Mineralite ball, was introduced to the game. The material of the balls changed with the advances of new technology and polyester ("plastic") balls were produced in the late 1950s and polyurethane ("urethane") balls were introduced during the 1980's. Modern bowling balls are made from particle and reactive resin balls.
Fact 19: Who invented Bowling? In 1951, American Machine and Foundry Company (AMF) purchased the patents to Gottfried Schmidt's automatic mechanical pinsetter, and produced the machine called the "Pinspotter".
Fact 20: Who invented Bowling? Some of the first automatic mechanical pinsetters were installed at Roselle Lanes (16 lanes) in Roselle, New Jersey in April 1956.
Fact 21: Who invented Bowling? In 1958 the Professional Bowlers Association (PBA) was founded by Eddie Elias, a television sports interviewer. It was Eddie Elias who turned the game into one of the longest continuing sports series on network TV, and he continued to be involved in the PBA until his death in 1998.
Fact 22: Who invented Bowling? The game and sport of bowling continues its popularity to the present day, with over 95 million enthusiastic bowlers playing across more than 90 countries.
Sub-Saharan Africa, referred to as "Africa" in this article, comprises the forty-two countries on the African continent south of the Sahara and the six island nations close to it. Africa's rich cultural and ethnic traditions reflect different heritages in all countries–an early Christian heritage in the Nile Basin, a strong Islamic influence in the north, and Christian influences dating from colonialism in many central and southern African countries.
Geographically and economically, Africa is diverse and fragmented. In 1999 the region's population was about 640 million. Six countries had fewer than 1 million people. Nigeria had 124 million people and Ethiopia 64 million. Within the continent, communications and travel are difficult. Gross national product (GNP) per capita averaged $500 in 1999, ranging from less than $200 in Burundi, Ethiopia, Malawi, Niger, and Sierra Leone to more than $3,200 in Botswana, Gabon, Mauritius, and South Africa. On the whole, the region's GNP growth and human development indicators lag behind those of other regions.
Poverty is pervasive across the region. More than 290 million people live on less than $1 per day. With the region's rapidly growing population, 5 percent annual growth is needed to keep the number of poor from increasing. According to the World Bank, halving the incidence of poverty by 2015 would require annual per capita gross domestic product (GDP) growth rates of at least 7 percent. Unsustainable external indebtedness has diverted scarce resources away from priority social needs. Waste in the public sector and weak governance structures continue to act as major constraints to development in many countries.
Education systems in the region reflect differences in geography, cultural heritage, colonial history, and economic development progress. The impact of French, English, and other countries' colonial policies toward education has had a lasting impact on the objectives, structure, management, and financing of education systems in the region. When African countries gained independence from colonial rule around 1960, the region lagged far behind other regions on nearly every education indicator. Dramatic progress–with large national variations–occurred in the 1960s and 1970s. Primary enrollments jumped from 11 million in 1960 to almost 53 million in 1980. Growth at the secondary and tertiary levels was even more dramatic, with secondary enrollments increasing by fifteen times and tertiary enrollments by twenty times.
The economic crisis of the 1980s severely affected education in Africa. Declining public resources and private economic hardship resulted in an erosion of quality and primary level participation rates. As of the early twenty-first century, these setbacks have not yet been reversed. At every level, education facilities are too few, while those that exist are often in poor repair and inadequately equipped. Teachers are often underpaid and underqualified and rarely receive the support and supervision they need to do an effective job. The number of hours spent in the classroom by most African students is far lower than the international standard. Instructional materials are often in desperately short supply. Not surprisingly, learning achievement is almost always far below the instructional objectives specified in the curricula. While country experiences vary a great deal, the reality for too many Africans is one of education systems characterized by low quality and limited access.
Africa has the lowest enrollment rate at every level and is the only region where the number of out-of-school children continues to rise. The average African adult has fewer than three years of schooling, lower than the attainment level for any other region. Almost one in three males and one in two females are illiterate. Gender inequalities persist at all levels of schooling. Female enrollments are about 80 percent of male enrollments at the primary and secondary levels and less than 55 percent at the tertiary level.
As disturbing as the low levels of literacy and education attainment is the marked decline in the capacity of many African countries to generate knowledge as a resource for tertiary level instruction and for research and technology development. A 1992 study estimated that Africa had only 20,000 scientists and engineers, or 0.36 percent of the world's total. In 1996 Senegal had only 3 researchers engaged in research and development per million people, Burkina Faso had only 16 and Uganda had 20, compared with 149 in India and 350 in China. Few African researchers are integrated in the worldwide scientific knowledge networks. A continuing brain drain exacerbates these problems. Reasons vary from country to country but usually relate to a lack of employment opportunities in the modern sector, limited research budgets in universities, the lack of freedom of speech, and the fear of political repression in countries with authoritarian regimes. An estimated 30,000 Africans holding doctoral degrees live outside the continent, and 130,000 Africans study in tertiary institutions outside Africa.
Social and economic progress in Africa will depend to a large extent on the scope and effectiveness of investments in education. If living standards are to be raised, sustained efforts will be needed to narrow the gaps in educational attainment and scientific knowledge between Africa and other regions and to bridge the digital divide. Decades of research and experience in Africa and elsewhere have shown the pivotal role of a well-educated population in initiating, sustaining, and accelerating social development and economic competitiveness. Numerous studies show that education, particularly primary education, has a significant positive impact on economic growth, earnings, and productivity.
But clearly, primary education cannot expand and African economies cannot grow without an education system that trains a large number of students beyond the basic cycle, including graduate students at universities. To be sustainable, educational development must be balanced. It must ensure that systems produce students at different levels with qualifications that respond to the demand of the labor market, providing a continuous supply of skilled workers, technicians, professionals, managers, and leaders.
Yet, lasting education development will take place only when the extensive armed conflicts come to an end and the HIV/AIDS pandemic stalls. Restoring peace and stability in the region is an urgent priority. At least one in five Africans lives in a country severely disrupted by war. Between 1990 and 1994 more than 1 million people died because of conflict. And in 2000, approximately 13.7 million people in Africa were refugees or internally displaced. Few opportunities for schooling exist in the African conflict zones.
Africa has been the region hardest hit by the HIV/AIDS pandemic, accounting for 23 million of the 33 million people affected worldwide. By killing people in their most productive years, the pandemic is destroying the social and economic fabric of the worst affected countries and reversing hard-won human development gains. Replacing education sector staff lost to AIDS-related illnesses while national resources are being diverted from education to the health sector and providing an education to children affected by AIDS are urgent ongoing challenges.
Primary enrollment growth slowed in the 1980s. The gross enrollment rate (total number of children enrolled as a proportion of the number of children of the relevant age group) fell from 80 percent in 1980 to 75 percent in 1990, largely as a result of declining male participation rates, and by 1997 had recovered to only 77 percent. Yet other coverage indicators showed considerable improvement (see Table 1). Net enrollment rates (number of children of the relevant age group enrolled as a proportion of the number of children of relevant age) increased from 54 percent in 1990 to 60 percent in 1998; apparent intake rates (total number of children admitted in grade 1 as a proportion of the total number of children of the school entry age) from 70 percent to 81 percent; and net intake rates (number of children of entry age admitted in grade 1 as a proportion of the total number of children of the school entry age) from 33 percent to 43 percent. Although not available for all countries, these data suggest that more school-age children are in school, the decline in boys' participation has reversed, and more children are enrolling in first grade. But many children still enroll late (only two-thirds of the new entrants in 1998 were the official age for school enrollment), the gap in girls' initial enrollment rate has increased, and more than 40 percent of school-age children are not in school.
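The four coverage indicators defined above are simple ratios. As an illustrative sketch (using hypothetical counts chosen to mirror the 1998 regional figures, not actual country data), they can be computed as:

```python
# Illustrative calculation of the four coverage indicators defined in the text.
# All counts below are hypothetical examples, not actual country data.

def gross_enrollment_rate(enrolled_all_ages, school_age_population):
    """Total enrolled children (any age) / children of the relevant age group."""
    return 100 * enrolled_all_ages / school_age_population

def net_enrollment_rate(enrolled_of_relevant_age, school_age_population):
    """Enrolled children of the relevant age / children of the relevant age group."""
    return 100 * enrolled_of_relevant_age / school_age_population

def apparent_intake_rate(grade1_admissions_all_ages, entry_age_population):
    """All grade 1 admissions (any age) / children of official entry age."""
    return 100 * grade1_admissions_all_ages / entry_age_population

def net_intake_rate(grade1_admissions_entry_age, entry_age_population):
    """Grade 1 admissions of official entry age / children of official entry age."""
    return 100 * grade1_admissions_entry_age / entry_age_population

# Over-age and under-age enrollment is why the gross rate exceeds the net rate.
print(gross_enrollment_rate(770, 1000))  # 77.0
print(net_enrollment_rate(600, 1000))    # 60.0
print(apparent_intake_rate(81, 100))     # 81.0
print(net_intake_rate(43, 100))          # 43.0
```

The gap between each gross and net figure measures how many enrolled children are outside the official age group, which is why the text notes that only two-thirds of new entrants were the official age for school enrollment.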
Country experiences vary a great deal, however. Botswana, Cape Verde, Mauritius, Namibia, the Seychelles, Swaziland, and Zimbabwe sustained education progress. Uganda and Mauritania implemented policies that resulted in a sudden increase in primary enrollments and then began struggling to deal with the consequent challenges. Burkina Faso, Guinea, Mozambique, and Senegal opted for a gradual approach. Most other countries are formulating comprehensive long-term strategies for educational development, including universal primary education.
Nevertheless, access to primary education remains problematic. Of the forty-four countries with data for 1996, only ten (Botswana, Cape Verde, Congo, Malawi, Mauritius, Namibia, South Africa, Swaziland, Togo, and Zimbabwe) had a primary gross enrollment rate of 100 percent. Seven (Burkina Faso, Burundi, Ethiopia, Liberia, Mali, Niger, and Somalia) had a primary gross enrollment rate below 50 percent. And between 1985 and 1997 the primary gross enrollment rate actually declined in seventeen countries–Angola, Burundi, Cameroon, Central African Republic, Comoros, Côte d'Ivoire, the Democratic Republic of Congo, Kenya, Lesotho, Liberia, Madagascar, Mozambique, Nigeria, Sierra Leone, Somalia, Tanzania, and Zambia. Together, these seventeen countries include more than half of Africa's school-age population.
The challenge is clear. In almost all countries, access has expanded far too slowly to achieve international education targets for gender equity and universal primary education. About 12 percent of the world's children aged six to eleven live in Africa, yet the region accounts for more than one-third of the world's out-of-school children. Unless these trends reverse, Africa will account for three-quarters of out-of-school children by 2015.
Participation problems are exacerbated by the absence of an environment for effective learning. Children are taught in overcrowded classrooms by underqualified and frequently unmotivated teachers who are often poorly and irregularly paid and receive little managerial support. Teacher absenteeism is widespread, disrupting learning and eroding public confidence in the value of public education. Shortages of learning materials further constrain learning. In ten of eleven countries surveyed by UNESCO (1998b), more than one-third of the students had no chalkboards in their classrooms. In eight countries, more than half of the students in the highest grade had no math books. Most African children spend roughly half the time in the classroom that children in other countries do.
Poverty-related deprivation further contributes to low educational attainment in Africa. Poor children spend more time than other children contributing to household work. As a result they are less likely to spend out-of-school hours on schoolwork, more likely to be absent from school during periods of peak labor demand, and more likely to be tired and ill-prepared for learning when they are in the classroom. More than 40 percent of children in Africa are stunted, while almost one-third are underweight. Primary school-age children are especially susceptible to illnesses that affect poor people, in particular gastrointestinal and respiratory problems. Malnourished and sick children are less likely than healthy children to learn in school and are more likely to be absent from lessons. And if private costs for education are substantial, parents in poor households are more likely to withdraw their children from school early in the school cycle. All these effects are exacerbated by the rapid spread of HIV/AIDS, which affects the attendance of teachers and students and strains household resources.
Unsurprisingly, students who complete primary school often have an unacceptably low level of learning. In 1990–1991 Botswana, Nigeria, and Zimbabwe participated in a thirty-one-country survey of ninth grade reading skills (described by Warwick B. Elley in 1992). Students in these three countries registered the lowest scores, performing considerably worse than students in the other four developing countries participating in the survey (the Philippines, Thailand, Trinidad and Tobago, and Venezuela). More recently, the Southern Africa Consortium for Monitoring Educational Quality assessed the reading skills of sixth grade students in Mauritius, Namibia, Zambia, and Zanzibar. The average percentage of correct answers ranged from 38 percent to 58 percent.
Poor learning often results in high repetition rates and low completion rates. In fifteen countries more than 20 percent of students are repeaters–in Côte d'Ivoire more than half of all primary students are repeating a grade at any time. More than one-third of school entrants fail to reach the final grade. In the Central African Republic, Chad, Congo, Madagascar, and Mozambique, fewer than half the children who enroll in primary school complete five years. Many of the students drop out early in the primary cycle, before they acquire even rudimentary literacy and numeracy skills. In Chad, Ethiopia, and Madagascar more than one-third of the children who enter school never complete second grade.
Few African countries provide adequate opportunities for education and training needed by twelve-to seventeen-year-olds or for adults. The gross secondary enrollment rate in 1997 was 26 percent for Africa, compared with 52 percent for all developing countries. Many Africans are looking for opportunities to either continue formal schooling or acquire skills that will equip them to enter the world of work.
Education opportunities for adults remain equally limited. The mass literacy campaigns of the 1970s fell far short of their objectives. Only a few countries–Uganda and Ghana are examples–continue to support large-scale literacy programs. But in the late 1990s countries such as Senegal began to experiment with small-scale highly targeted programs, often implemented with the support of nongovernmental organizations. Skill development programs are delivered for the most part by private-sector institutions and sponsored by employers.
In 1960 Africa (excluding South Africa) had six universities with fewer than 30,000 students. In 1995 the region supported nearly 120 universities enrolling almost 2 million. Yet, tertiary enrollment, which reached 3.9 percent for Africa in 1997, is still far below the 10 percent average for all developing countries. In many African countries universities are the only national institutions with the skills, equipment, and mandate to generate new knowledge through research and to adapt global knowledge to solve local problems. A few have long traditions and were world-class institutions through the 1970s. Yet many others are weak. Early curriculum links to religious studies and civil-service needs have often promoted the humanities and social sciences at the expense of the natural sciences, applied technology, business-related skills, and research capabilities. Inappropriate governing structures, misguided national policies, weak managerial capacity, political interference, and campus instability have further hampered effectiveness. The experience with subregional academic cooperation has been disappointing, although many institutions are too small and recruit from too small a national pool of talent to develop a high-level teaching and research capacity across a wide range of academic subjects.
The private sector is an increasingly important provider of education in Africa. Most registered private schools in Africa are nonprofit community and religious schools. Several countries are also increasing the role of private providers in delivering support services such as textbook publishing, classroom construction, and university catering. The private sector plays a small, although increasingly important, role at the primary level, but its share in meeting secondary, vocational, and tertiary education needs has increased significantly since the mid-1980s. In Côte d'Ivoire 36 percent of general secondary students and 65 percent of technical students are enrolled in private schools. In Zambia almost 90 percent of the students taking technical and vocational examinations were trained outside public institutions.
At the tertiary level the number of private institutions has increased rapidly. In the 1990s private institutions were established in countries such as Kenya, Mozambique, Senegal, Sudan, Uganda, and Zimbabwe. In South Africa alone there are probably more than 500 private tertiary institutions.
These institutions reduce the financial burden on governments, give parents more choice and control, and improve accountability. They help to meet some of the excess demand for education, provide special programs that the government is unable or unwilling to provide, and reduce geographical imbalances in provision. Nevertheless, while many private training institutions have been successful, many others are of poor quality, raising important issues of accreditation or other means of quality control. Registration requirements usually call for the provision of basic infrastructure and staff. Kenya has established a Commission for Higher Education for the accreditation of tertiary institutions. In most other countries the ministry of education typically has this responsibility.
The efficiency of resource use varies considerably within and between countries. In some countries, especially in the Sahel (the southern fringe of the Sahara), high teacher salaries make it difficult to mobilize the resources required to reach universal primary education in the foreseeable future. In other countries teacher salaries are so low that teachers are forced to take additional jobs. Teacher deployment often creates further inefficiencies when teachers are not deployed according to rational criteria such as the number of students. For example, in Niger the teacher-student ratio in primary schools of 200 students ranges from 1:100 to 1:20.
In 1999 Keith Lewin and Francoise Caillods argued that developing countries with low secondary enrollments, including most African countries, cannot finance substantially higher participation rates from domestic public resources with current cost structures. Secondary schooling is the most expensive level relative to GNP per capita in countries with the lowest enrollment rates. In Africa secondary schools use resources such as teachers and buildings much less efficiently than primary schools. One reason may be that in the poorest countries, secondary schools are still organized along traditional lines to educate a small elite.
Limited public resources and competing public spending priorities have prevented many governments from addressing the challenges of education development. Since the mid-1980s the share of GDP spent on education has decreased in eleven and increased in twelve African countries for which data are available. Perhaps more significant, this share is still less than 3 percent in ten countries for which data are available for 1996 or later. At a given level of education spending as a share of GDP, participation and attainment levels in Africa compare unfavorably with those in other low-income countries (see Table 2). Inefficient and inequitable use of scarce resources, in a context of high population growth and demand for general public financing of education by politically powerful pressure groups, adds to the fiscal challenge. Thus countries must set priorities for public spending and identify possible efficiency gains from, and opportunities for, mobilizing additional public and private resources.
The imperative of accelerated education development in Africa is clear. Africa will not be able to sustain rapid growth without investing in the education of its people. Many lack the education to contribute to–and benefit from–high economic growth. Meeting this challenge will require a major effort by Africans and their development partners during a long period–a decade or more in many cases. Many governments will need to implement changes in the way education is financed and managed–changes that are often politically controversial. Partnership of governments, civil society, and external funding agencies will need to be established or reconfigured to ensure national ownership and sustainability of programs of reform and innovation.
Yet, at the start of the twenty-first century the opportunity to effectively address the often intractable problems of education was perhaps better than at any time in the 1980s and 1990s. Economic performance improved markedly beginning in 1995, with consecutive years of per capita growth in many countries for the first time since the 1970s. In several countries additional resources have become, or will become, available through debt relief provided under the Heavily Indebted Poor Countries (HIPC) initiative, the coordinated effort of the industrialized countries to bring debtor developing countries' debt down to sustainable levels. Information and communications technology offers new opportunities to overcome constraints of distance and time. Political commitment to education development is strong almost everywhere. At the World Education Forum in Dakar, Senegal, in April 2000, the 185 participating countries adopted a Framework for Action toward the 2015 goal of Education for All, which gives special attention to the needs of Sub-Saharan Africa. Promising reforms and innovations have been implemented. Many funding agencies are committed to increasing their support for education in Africa. New aid relationships are being piloted in the context of sectorwide development programs, replacing the increasingly ineffective individual project approach.
First, without a relentless pursuit of quality, expanded education opportunities are unlikely to achieve their purpose–that is, the acquisition of useful knowledge, reasoning abilities, skills, and values. Second, an unwavering commitment to equity is vital to ensuring that disadvantaged groups–rural residents, the poor, and females–have equal access to learning opportunities at all levels. This will require explicitly targeted strategies for hard-to-reach groups and better analysis of the mechanisms by which people are excluded from education. Third, African countries will need to ensure education development strategies are financially sustainable. Setting spending priorities, spending the resources that have been allocated effectively, diversifying funding sources, and in many cases mobilizing additional funding from sources outside the public sector–especially for postprimary education beyond the basic level–are areas where tough decisions need to be made and then adhered to. Finally, an up-front emphasis on capacity building of institutions and of individuals is needed for accelerated education development to happen. Effective planning, implementation, and evaluation of reforms depend upon effective incentives, reasonable rules, efficient organizational structures, and competent staff. Without these, no strategy for education development can succeed.
See also: INTERNATIONAL EDUCATION; MIDDLE EAST AND NORTH AFRICA; POVERTY AND EDUCATION.
ASSOCIATION FOR THE DEVELOPMENT OF EDUCATION IN AFRICA. 1999. Newsletter 11 (1). Paris: Association for the Development of Education in Africa.
ASSOCIATION FOR THE DEVELOPMENT OF EDUCATION IN AFRICA. 2001. What Works and What's New in Education: Africa Speaks! Report from a Prospective, Stocktaking Review of Education in Africa. Paris: Association for the Development of Education in Africa.
BARRO, ROBERT J. 1991. "Economic Growth in a Cross-Section of Countries." Quarterly Journal of Economics 106:407–444.
ELLEY, WARWICK B. 1992. How in the World Do Students Read? Hamburg, Germany: International Education Association.
INSTITUT NATIONAL d'ETUDE et d'ACTION pour le DEVELOPPEMENT de l'EDUCATION. 1997. Projet SNERS: L'évaluation du rendement pédagogique du français écrit dans l'enseignement primaire: Les résultats au CM 2 et sciences CM2. Dakar, Senegal: Institut National d'Etude et d'Action pour le Developpement de l'Education.
INTERNATIONAL INSTITUTE FOR EDUCATIONAL PLANNING. 1999. Private Education in Sub-Saharan Africa: A Re-examination of Theories and Concepts Related to Its Development and Finance. Paris: International Institute for Educational Planning/United Nations Educational, Scientific and Cultural Organization.
LAU, LAWRENCE J.; JAMISON, DEAN T.; and LOUAT, FREDERIC F. 1991. Education and Productivity in Developing Countries: An Aggregate Production Function Approach. Policy Research Working Paper 612. Washington, DC: World Bank, Development Economics and Population and Human Resources Department.
LEWIN, KEITH M., and CAILLODS, FRANCOISE. 1999. Financing Education in Developing Countries: Strategies for Sustainable Secondary Schooling. Paris: International Institute for Educational Planning.
LOCKHEED, MARLAINE E.; JAMISON, DEAN T.; and LAU, LAWRENCE. 1980. "Farmer Education and Farm Efficiency: A Survey." Economic Development and Cultural Change 29 (1):37–76.
LOCKHEED, MARLAINE E., and VERSPOOR, ADRIAAN. 1991. Improving Primary Education in Developing Countries. New York: Oxford University Press for the World Bank.
MIDDLETON, JOHN; VAN ADAMS, ARVIL; and ZIDERMAN, ADRIAN. 1993. Skills for Productivity: Vocational Education and Training in Developing Countries. New York: Oxford University Press for the World Bank.
MINGAT, ALAIN. 1998. Assessing Priorities for Education Policy in the Sahel from a Comparative Perspective. Dijon, France: Université de Bourgogne, Institut de Recherche sur l'Economie de l'Education.
MINGAT, ALAIN, and SUCHAUT, BRUNO. 1998. Une analyse économique comparative des systèmes éducatifs Africains. Dijon, France: Université de Bourgogne, Institut de Recherche sur l'Economie de l'Education.
NEHRU, VIKRAM, and DHARESHWAR, ASHOK M. 1994. New Estimates of Total Factor Productivity Growth for Developing and Industrial Countries. Policy Research Working Paper 1313. Washington, DC: World Bank, International Economics Department.
ORGANISATION FOR ECONOMIC CO-OPERATION and DEVELOPMENT, DEVELOPMENT ASSISTANCE COMMITTEE. 1996. Shaping the Twenty-First Century: The Contribution of Development Cooperation. Washington, DC: Organisation for Economic Co-operation and Development, Development Assistance Committee.
OXFAM. 1999. Education Now: Break the Cycle of Poverty. Oxford: Oxfam.
PSACHAROPOULOS, GEORGE. 1985. "Returns to Education: A Further International Update and Implications." Journal of Human Resources (U.S.) 20:583–604.
SAINT, WILLIAM. 1992. Universities in Africa: Strategies for Stabilization and Revitalization. World Bank Technical Paper, 0253-7494, no. 194. Washington, DC: World Bank.
UNITED NATIONS EDUCATIONAL, SCIENTIFIC and CULTURAL ORGANIZATION. 1998a. Development of Education in Africa: A Statistical Review. Seventh Conference of Ministers of Education of African Member States of UNESCO. (MINEDAF VII). Paris: United Nations Educational, Scientific and Cultural Organization.
UNITED NATIONS EDUCATIONAL, SCIENTIFIC and CULTURAL ORGANIZATION. 1998b. UNESCO Yearbook, 1998. Paris: United Nations Educational, Scientific and Cultural Organization.
UNITED NATIONS EDUCATIONAL, SCIENTIFIC and CULTURAL ORGANIZATION. 1998c. World Education Report, 1998. Paris: United Nations Educational, Scientific and Cultural Organization.
UNITED NATIONS EDUCATIONAL, SCIENTIFIC and CULTURAL ORGANIZATION. 1999a. Science and Technology in Africa: A Commitment for the Twenty-First Century. Paris: United Nations Educational, Scientific and Cultural Organization, Office of Public Information.
UNITED NATIONS EDUCATIONAL, SCIENTIFIC and CULTURAL ORGANIZATION. 1999b. UNESCO Yearbook, 1999. Paris: United Nations Educational, Scientific and Cultural Organization.
UNITED NATIONS EDUCATIONAL, SCIENTIFIC and CULTURAL ORGANIZATION. 2000. Education for All (2000): Report from the Sub-Saharan Africa Zone, Assessment of Basic Education in SSA. Harare, Zimbabwe: United Nations Educational, Scientific and Cultural Organization.
UNITED NATIONS CHILDREN'S FUND. 1999. The State of the World's Children, 1999: Education. New York: Oxford University Press.
VAWDA, AYESHA, and PATRINOS, HARRY ANTHONY. 1999. "Private Education in West Africa: The Technological Imperative." Paper presented at the Fifth Oxford International Conference on Education and Development, Oxford University.
WORLD BANK. 1991. Vocational and Technical Education and Training: A World Bank Policy Paper. Washington, DC: World Bank.
WORLD BANK. 2000. Can Africa Claim the Twenty-First Century? Washington, DC: World Bank.
WORLD BANK. 2001a. A Chance to Learn: Knowledge and Finance for Education in Sub-Saharan Africa. Africa Region Human Development Series. Washington, DC: World Bank.
WORLD BANK. 2001b. Can Africa Reach the International Targets for Human Development? An Assessment of Progress towards the 1998 Second Tokyo International Conference on African Development (TICAD II). Africa Region Human Development Series. Washington, DC: World Bank.
WORLD EDUCATION FORUM. 2000. Sub-Saharan Africa Regional Framework for Action. Dakar, Senegal: World Education Forum.
The largest country in South America, Brazil takes up about half of the continent. It is one of the world’s largest and most economically important countries. It is also filled with some of the greatest natural treasures on Earth. Brazil’s Amazon River basin, including the Amazon rainforest, is one of Earth’s richest areas of plant and animal life. The Iguazú Falls in the south constitute one of the country’s most famous natural wonders. Brazil is the only Portuguese-speaking nation in South America. While language distinguishes it from its neighbors, however, the country has much in common historically and culturally with the rest of the region. The capital is Brasília.
Brazil shares borders with every South American nation except Ecuador and Chile. The Atlantic Ocean lies to the east. To the south is Uruguay; to the southwest are Argentina, Paraguay, and Bolivia; to the west, Peru; to the northwest, Colombia; and to the north are Venezuela, Guyana, Suriname, and the territory of French Guiana. The country covers an area of 3,300,171 square miles (8,547,404 square kilometers).
In such a large country there are many different geographical regions. The two that dominate the landscape are the Amazon River basin in the north and the Brazilian Highlands, or Plateau, in the center, east, and south. The northeastern coast is flat and dry; the central part of the Brazilian Highlands is mostly grassland; and the southeastern coast includes narrow plains and scenic mountains. In the central-western part of Brazil is a vast wetland called the Pantanal.
The Amazon, with its many large tributaries, is the world’s largest river system. Other major rivers in Brazil include the Paraguay, the Paraná, the Tocantins, the Araguaia, and the São Francisco.
Brazil is the world’s largest tropical country. In the rainforest, temperatures average 80° F (27° C) all year round and rainfall is heavy. South of the Amazon lowland the climate becomes more varied. Along the coast temperatures can reach as low as 57° F (14° C), and during winter there are sometimes freezing temperatures in the southern hills.
The Amazon rainforest has the most varied plant life on Earth, with about 50,000 different species. Individual plants of each species are widely scattered throughout the forest. This helps them survive blight, disease, and pests.
The animal life along the Amazon is equally diverse. Because of the tall trees, very little sunlight reaches the ground. Most animals therefore live in the trees, at different heights up to the treetops at about 150 feet (45 meters), where food and sunlight are plentiful. Animals living in the tree canopy include tree frogs and salamanders, monkeys, swarms of insects, and hundreds of types of birds. Parrots, macaws, and hummingbirds are common. Brazil has tens of thousands of butterflies—more than any other place in the world.
Larger animals in the rainforest include jaguars, tapirs, pumas, and sloths. Along the riverbanks can be found the world’s largest rodent, the capybara, as well as alligators, boa constrictors, and turtles. The river itself contains a wide variety of fish, including electric eels, catfish, and the famous piranha. Manatees and freshwater dolphins are also common.
Outside the Amazon basin, in the Pantanal wetland, are great numbers of birds, reptiles, insects, and larger animals such as anteaters and armadillos. In the southeastern part of the country, where many of Brazil’s largest cities are located, most of the original forests have been destroyed to make way for the cities. Because of this, few wild animals remain in southeastern Brazil.
In the drier northeastern region of Brazil, the plant cover is low and spread out. It is known as caatinga, from an Indian term meaning “white forest.” Thicker woodlands known as agreste grow in moister areas, mainly between the caatinga and the coast. Covered in thorns, these woods may in places reach heights of up to 30 feet (10 meters), with interlocking branches that make them hard to get through.
Brazil’s population is a mix of several different ethnic groups. These include descendants of the original Indians, the Portuguese who colonized the region beginning in the 1500s, and the Africans whom the Portuguese brought as slaves to work their plantations and mines. Starting in the mid-1800s, thousands of European settlers from Italy, Germany, and parts of eastern Europe began to move to the country. Later, in the early 1900s, large groups of Japanese also moved to Brazil. From the earliest days of Brazil’s colonial history, these groups have intermarried, so that today most Brazilians have a variety of ancestors.
The Portuguese language, enriched by Indian and African influences, is the official language of Brazil. Roman Catholicism is the dominant religion, though a number of Indian and African beliefs are also still practiced.
More than 80 percent of Brazil’s people live in cities or towns, and 12 of those cities have more than 1 million inhabitants each. São Paulo and Rio de Janeiro are two of the world’s largest cities. Some of the other major cities are Salvador, Belo Horizonte, Fortaleza, Brasília, Recife, and Pôrto Alegre. Most of the rural population is concentrated along the east coast or in the southern highlands, though more and more rural families have moved inland, to the Amazon basin and elsewhere, to clear forests to make room for farms and mines.
Services—including education, government, banks, hospitals, restaurants, and the military—are the largest part of Brazil’s economy. Manufacturing is the second most important area of the economy. The country mainly produces foods, petroleum products, cars and trucks, electrical equipment, steel, and chemicals. Brazil’s industries use its reserves of iron, silicon, clay, quartz, gold, coal, petroleum, natural gas, and wood.
Farmers use less than 10 percent of Brazil’s land, mostly in the south. However, Brazil is one of the world’s top producers of oranges and coffee. Farmers also grow sugarcane, soybeans, corn, cassava, rice, bananas, tomatoes, and many other crops. They raise great numbers of cattle and hogs.
Before the Portuguese arrived in what is now Brazil, the region was the home of at least 2 million Indians. Those who occupied the drier lands lived mostly by hunting and gathering. Other groups lived in the rainforests of the Amazon and along the Atlantic coast. Some of these groups were also hunters and gatherers. Many others lived in large villages (as many as 3,000 people) and were expert farmers and fishermen. They also manufactured hammocks, canoes and balsa rafts, blowguns for hunting and warfare, and pottery.
On April 22, 1500, the navigator Pedro Álvares Cabral claimed the land for Portugal after landing near what is now Pôrto Seguro, Brazil. Soon after the Portuguese began to settle Brazil in the early 1500s, they began importing Africans to work on the sugar plantations and, later, in the gold and diamond mines and on the coffee plantations. By the time the slave trade was abolished in the mid-1800s, about four million Africans had been brought to Brazil.
Brazil was long neglected by the Portuguese, whose attention was focused on their wealthier colonies in Asia and Africa. As a result the French established settlements at São Luís and Rio de Janeiro, and in 1624 the Dutch occupied the entire northeastern coast. By then sugar from that area had become important to the Portuguese economy, and non-Portuguese settlers were forced to leave by 1654.
When the French emperor Napoleon threatened to invade Portugal in 1808 the Portuguese royal family fled to Brazil. They ruled from there and made Brazil equal with Portugal in the new United Kingdom of Portugal, Brazil, and the Algarves. The king returned to Portugal in 1821 but his son, Dom Pedro, stayed in Brazil. The next year Dom Pedro declared Brazil’s independence from Portugal and became emperor of the new nation. In 1889 Brazil became a federal republic.
Since its independence, Brazil has been one of Latin America’s most stable nations, though dictators and the military have ruled at times. Since 1985, civilian (nonmilitary) governments have led Brazil. In 1988 the country adopted a new constitution that guaranteed basic social and labor rights. Brazil continued to struggle to strengthen its economy, which has suffered from long periods of rising prices. It has also tried to resolve its serious social problems, but with mixed results, as its population and diversity continue to grow.
In 2010 Brazil elected its first woman president, Dilma Rousseff. Soon after she took office in January 2011, Rousseff had to address one of Brazil’s worst natural disasters in decades. Torrential rain created floods and mudslides that left thousands homeless and killed more than 500 in several mountainside communities just north of Rio de Janeiro.
Brazil was named for its brazilwood (pau-brasil), a tropical wood used to make red dye.
I would like to discuss how people perceive images in the media and the effects those images have on society. Most of the images that we view have been altered to create an ideal or a fantasy, as opposed to a reality. Most Americans are probably aware that advertising images are altered, but that does not protect them from the messages those images send. Each picture a person views sends a message about what perfection looks like. When someone sees a picture in an ad, they don’t just see a model displaying a perfume bottle, for example. They see beauty. They see what it looks like to be seductive. They see an ideal. They might recognize that the model in the picture is airbrushed, but it doesn’t prevent them from defining human beauty according to the altered image. Rather than understanding images at face value, people reflect themselves back into them and use them to define themselves and the world around them. For example, one might view an image and personally relate to it, or one might view an image and want to relate to it, asking how to change oneself to come closer to the ideal the image depicts.
The multitude of images that we consume in American culture has many effects. First of all, it causes people to constantly compare themselves to often impossible ideals. For instance, women might look to magazines to determine how they should look. Some might follow a path toward trying to become an image, in which the actual self can be lost. Many people measure celebrities and others according to their images. For instance, how many images one sees of a person might be positively correlated with how relevant and important one perceives that person to be. I would say that people today also use social media to measure celebrities and non-celebrities according to their images. For example, if a “friend” on Facebook only posts pictures of herself in full make-up, wearing heels and on vacation, one could perceive her as living a very glamorous life, even though this is really just a highlight reel of images she wants seen. People are more mindful of their image (both in person and online) and the messages it sends because we live in a society that is so consumed with, and aware of, images.
Understanding people based on their images also plays a role in politics in our society. The general population tends to care more about who a politician is than what his or her policies are. For example, when surveyed, people rated President Reagan highly but didn’t typically agree with the actual policies he endorsed. People often want to vote for whoever they perceive to be the better person. I think it would be safe to say that people determine who the better person is based (at least partially) on the images of that person they are exposed to. I imagine that if Americans were surveyed today and asked to describe our current president, many would list attributes of his personality or character, based on how they perceive the images they’ve been exposed to of him, rather than his political decisions or policies.
When it comes to images in the media, one could argue both that they are good and that they are not. I think there is something of a paradox here. On the one hand, we don’t want to see real images, and studies indicate that they wouldn’t sell products nearly as well as an altered, fantastical ideal. At the same time, though, we hate these images for setting in our minds an unachievable standard of perfection. Many say that advertising force-feeds us these images, but we are the ones buying the magazines in which we consume them. I do think that images in advertising have an effect on society; however, I also believe that society has an effect on advertising. I don’t think we can blame one or the other; rather, the two perpetuate each other.
For the English soccer player and coach, see Doug Allison (soccer).
Catcher for the Cincinnati Red Stockings, the first fully professional baseball team.
First baseball player to use a glove.
Douglas L. Allison (July 12, 1846 – December 19, 1916) was an American professional baseball player. He played as a catcher for the original Cincinnati Red Stockings, the first fully professional baseball team. Allison was one of the first catchers to stand directly behind the batter as a means to prevent baserunners from stealing bases. He was considered a specialist at a time when the position was often manned by some of the better batsmen, who normally rested or substituted at other fielding positions. Allison was the earliest known player to have used a glove, donning buckskin mittens to protect his hands in 1870. His brother Art Allison also played in the Major Leagues.
Not quite 22 years old, he moved to Cincinnati for the 1868 season and played for the Cincinnati Red Stockings, managed by Harry Wright. Open professionalism was one year away, but the long move from Philadelphia, where he worked as a bricklayer, suggests that Allison was somehow compensated by club members, if not by the club. Cincinnati fielded a strong team that year, with five of the famous team already in place. Allison was a defensive specialist; his job was simply to catch the ball. Foremost, he caught for Asa Brainard, but he also had to contend with pop-ups and tips off the bat, since at that time a fielder could put the batter out with a catch on the first bounce. Most catchers of his era stood twenty to twenty-five feet behind the batter. His technique of moving closer to the batter proved effective in deterring baserunners from stealing bases. In the 1860s, it was common for teams to score fifty or sixty runs a game. As the technique of moving closer to the batter became more widespread among other catchers, run production began to plummet, helping usher in what became known as the Dead-ball era.
When the National Association of Base Ball Players (NABBP) permitted professionalism, the Red Stockings hired five incumbents including Allison and five new men to complete its roster, the first team that consisted of salaried players. A few of the others had previously played some catcher (all played at the six infield positions in 1868), but Allison filled the role in almost every game. Cincinnati toured the continent undefeated in 1869 and may have been the strongest team in 1870, but the club dropped professional base ball after the second season.
Harry Wright was hired to organize a new team in Boston, where he signed three teammates for 1871. The other five regulars including Allison signed with Nick Young's Washington Olympics, an established club that also joined the new, entirely professional National Association (NA). The five former Red Stockings led the Olympics to a respectable finish in the inaugural NA season.
Later, Doug Allison played in the Major Leagues with the Troy Haymakers in 1872, the Brooklyn Eckfords in 1872, the Elizabeth Resolutes in 1873, the New York Mutuals from 1873 to 1874, the Hartford Dark Blues from 1875 to 1877, the Providence Grays from 1878 to 1879, and one game with the Baltimore Orioles of the American Association in 1883.
Allison was reported playing for a post office team in 1882. Thirty-four years later he died in Washington, DC at age 70, en route to his job at the Post Office Department. He is buried in Rock Creek Cemetery, Washington.
Morris, Peter (2010). Catcher: How the Man Behind the Plate Became an American Folk Hero. Government Institutes. p. 41. ISBN 1-5666-3870-4. Retrieved 10 July 2012.
"The First Glove - Ever". BaseballGloves.com. Retrieved 2006-08-28.
Retrosheet. "Doug Allison". Retrieved 2006-08-29.
Liberman, Noah (2003). Glove Affairs: The Romance, History, and Tradition of the Baseball Glove. Triumph Books. ISBN 1-57243-420-1.
Flat: Left handed, flat. Oval-shaped, about thirteen furlongs round, with a run-in of four and a half furlongs which rises slightly throughout. Runners in races of seven furlongs and further tend to stick towards the far rail in the straight, though often switch nearer the stand side when the ground is testing. Sprints take place on the straight six-furlong course, with runners usually racing down the centre of the track.
Jumps: Left handed, flat. Conditions are often gruelling during winter, placing emphasis on stamina. Conversely, conditions at the Swinton Meeting in May usually place the emphasis on speed, with a tendency to favour those ridden prominently as a result. Formerly held an undeserved reputation as one of the hardest courses to jump around, something not backed up by statistics, and more recent evidence suggests the new portable fences still provide no more than an average jumping test. There are two chase courses nowadays, the Lancashire Course and the Inner Course. Which course is being used isn't always obvious beforehand, though the Betfair Chase meeting is run on the Lancashire Course.
Research and produce a start-up business plan document for a specific small business.
Develop the marketing plan component for a specific small business plan.
Develop the financial plan component for a specific small business plan.
Develop the operations plan component for a specific small business plan.
Develop the human resources plan component for a specific small business plan.
Determine relevant licensing and regulatory issues for a specific small business plan.
Develop the customer service plan component for a specific small business plan.
Utilize computer technology to support small business management.
Present and defend business reports in a professional manner.
Develop strategies for ongoing personal and professional development and advancement.
Russell Schwartz rose through the ranks of the independent film world to become president of Gramercy Pictures, responsible for the release of such applauded films as Steven Soderbergh's "King of the Hill" (1993), the Oscar-nominated "Four Weddings and a Funeral" (1994), Tim Robbins' "Dead Man Walking" (1995) and the Coen brothers' Oscar-winning "Fargo" (1996).
A graduate of NYC's Hunter College, Schwartz became president of the Film League in the late 1970s where he supervised the marketing and distribution of such films as Jean-Charles Tacchella's "Cousin, Cousine" (1975) and Robert M. Young's "Short Eyes" (1977). Additionally, Film League served as producer's representative for directors, such as Jack Nicholson ("Goin' South" 1978) and Martin Scorsese ("The Last Waltz" 1978). There, Schwartz had his first involvement with Peter Bogdanovich, handling the unsuccessful "Saint Jack" (1979), but the pair later collaborated on "They All Laughed" (1981), with Schwartz serving as line producer.
Moving back to the executive suites, Schwartz joined the independent distributor Island Alive in 1983, where he rose from senior vice president of marketing and distribution eventually to the presidency of Island Films in 1987. Among the motion pictures released under his aegis were Jonathan Demme's "Stop Making Sense" (1984), Alan Rudolph's "Choose Me" (1984), "Kiss of the Spider Woman" and "The Trip to Bountiful" (both 1985), which won Oscars for stars William Hurt and Geraldine Page respectively, "Mona Lisa" (1986), Spike Lee's "She's Gotta Have It" (1986) and Percy Adlon's "Bagdad Cafe" (1988). In the mid-80s, the executive also took time to serve as executive producer of Jim Jarmusch's "Down By Law" (1986) and produced the River Phoenix vehicle "A Night in the Life of Jimmy Reardon" (1988).
Moving to Miramax as executive vice president of its marketing strategies and planning in 1988, Schwartz supervised the selling of such films as the Oscar-winning foreign film "Cinema Paradiso" (1988), Jim Sheridan's "My Left Foot" (1989), which solidified Daniel Day-Lewis' stardom, the Merchant-Ivory production "Mr. & Mrs. Bridge" (1990), with Paul Newman and Joanne Woodward, and the behind-the-scenes documentary "Madonna: Truth or Dare" (1991). In 1992, he left Miramax to assume the presidency of Gramercy Pictures. Under his tenure, some of the top independent films were released, ranging from Richard Linklater's "Dazed and Confused" (1993), "The Adventures of Priscilla, Queen of the Desert" and the Oscar-nominated "Before the Rain" (both 1994), "The Usual Suspects" (1995) and "Bound" (1996) to Leon Gast's Oscar-winning documentary "When We Were Kings" (1996) and "Keys to Tulsa" (1997).
Produced the River Phoenix vehicle, "A Night in the Life of Jimmy Reardon"
Not to be confused with the Russell Schwartz who works for HBO Independent Productions as executive vice president of business and planning.
Zane Thomas Schwartz. Born in January 1993.
Existing technologies can already automate most work functions, and the cost of these technologies is decreasing at a time when human labor costs are increasing. This, combined with ongoing advances in computing, artificial intelligence, and robotics, has led experts to predict that automation will lead to significant job losses and worsening income inequality. Policy makers are actively debating how to deal with these problems, with most proposals focusing on investing in education to train workers in new job types, or investing in social benefits to distribute the gains of automation.
The importance of tax policy has been neglected in this debate, which is unfortunate because such policies are critically important. The tax system incentivizes automation even in cases where it is not otherwise efficient. This is because the vast majority of tax revenues are now derived from labor income, so firms avoid taxes by eliminating employees. Also, when a machine replaces a person, the government loses a substantial amount of tax revenue, potentially hundreds of billions of dollars a year in the aggregate. All of this is the unintended result of a system designed to tax labor rather than capital. Such a system no longer works once the labor is capital. Robots are not good taxpayers.
We argue that existing tax policies must be changed. The system should be at least "neutral" as between robot and human workers, and automation should not be allowed to reduce tax revenue. This could be achieved through some combination of disallowing corporate tax deductions for automated workers, creating an "automation tax" which mirrors existing unemployment schemes, granting offsetting tax preferences for human workers, levying a corporate self-employment tax, and increasing the corporate tax rate.
Reform the NYS Department of Labor's so-called 80/20 Rule, which creates significant legal and financial liability for restaurants and hurts workers who want to work more hours and gain more skills. The rule prohibits an employer from taking the tip credit if an employee works more than 20% or two hours of their shift in a non-tipped job capacity. Effectively this means they can't work five hours of a shift as a bartender earning tips and then three hours doing inventory, tastings and purchasing, which is non-tipped work. If they do, they violate the 80/20 Rule, and enormous legal and financial liability results, including loss of the tip credit, double damages and attorney fees.
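To make the arithmetic of the rule concrete, here is a minimal sketch of the 80/20 test as described above. The function name and the exact treatment of the "20% or two hours" thresholds are assumptions for illustration only; this is not legal advice or an official Department of Labor formula.

```python
def tip_credit_allowed(tipped_hours: float, non_tipped_hours: float) -> bool:
    """Return True if, under the rule as described, the employer may take
    the tip credit for this shift.

    The tip credit is assumed lost if non-tipped work exceeds EITHER
    20% of the total shift OR two hours (a hedged reading of the rule).
    """
    total = tipped_hours + non_tipped_hours
    if total == 0:
        return True
    over_share = non_tipped_hours > 0.20 * total  # more than 20% of the shift
    over_cap = non_tipped_hours > 2.0             # more than two hours
    return not (over_share or over_cap)

# The example from the text: five tipped hours bartending, then three
# non-tipped hours of inventory, tastings, and purchasing.
print(tip_credit_allowed(5, 3))  # False: 3 of 8 hours is 37.5%, and over 2 hours
print(tip_credit_allowed(7, 1))  # True: 1 of 8 hours is 12.5%, and under 2 hours
```

The bartender example in the text (five tipped hours plus three non-tipped hours) fails both prongs of the test, which is exactly the liability scenario the paragraph describes.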
Lump Sum and/or Salary Continuance: Fixed or variable pay? Questions to consider are whether the executive would be most comfortable with a limited salary continuation period, or would rather have a variable duration, depending on when the executive found a new job. Often a combination of fixed and variable pay is negotiated, providing both a floor and ceiling for the continuance.
Bonuses, Incentive Payment, Commissions: While most employers will base a severance agreement on base salary, an executive can negotiate for some or all of the incentive compensation he or she would have been owed absent the termination. Often, an executive can negotiate on the strength of an equitable claim for at least a pro rata share of the incentive compensation, especially toward the end of the year, when such claims can be substantial.
Unused Leave Pay: Be sure to include compensation for any accrued but unused vacation, sick or other leave pay. Often, this will be done pro-rata based on the time of year.
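As a rough sketch, the pro-rata calculation described above might look like the following. The annual allowance, day counts, and daily pay rate are hypothetical figures chosen purely for illustration, and real agreements may accrue leave differently.

```python
def prorated_leave_days(annual_leave_days: float,
                        days_worked_this_year: int,
                        days_in_year: int = 365) -> float:
    """Leave accrued so far this year, assuming even daily accrual."""
    return annual_leave_days * days_worked_this_year / days_in_year

# Hypothetical example: 20 vacation days per year, termination after
# 182 days (roughly mid-year), at an assumed daily pay rate of $400.
accrued = prorated_leave_days(20, 182)
unused_pay = accrued * 400.0
print(round(accrued, 1))   # ~10.0 days accrued
print(round(unused_pay))   # ~$3989 owed for unused leave
```

The same even-accrual logic extends to sick leave or any other leave category that the agreement compensates pro rata.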
Benefits: An employer will often continue to pay for health benefits during a salary continuation period. Also, a discharged employee has the right under COBRA to continue health insurance at existing rates and benefit levels for up to 18 months after the off-payroll date. Employers sometimes will continue other benefits, such as life insurance, pension or 401(k) contributions, and vesting under stock plans. *If an employer terminates you in order to prevent you from obtaining a benefit (such as a vesting pension), you may have a claim under Section 510 of the Employee Retirement Income Security Act. If an employer terminates you in order to prevent you from obtaining compensation (for example, a large commission or bonus), you may also have a claim for breach of the implied covenant of good faith and fair dealing.
Positive Recommendations: In drafting or negotiating a severance agreement, it is important to consider a provision regarding whether you will receive a positive evaluation or recommendation in the future from the company you are leaving.
What is digital marketing, what do digital agencies do, and what solutions does digital marketing offer? Digital marketing is the practice of promoting products and services through database-driven online distribution channels, reaching consumers in a relevant, timely, personal and cost-effective manner. Internet marketing attracts more and more people to the websites of companies, organizations and agencies, further increasing clients and enhancing your company's brand and products.
What’s the best use for an iPad if you’re savvy with a laptop and smartphone?
Depending upon what I want to achieve, these four devices are not totally interchangeable.
I can use my smartphone to create a presentation, but anything beyond a simple presentation is best created on a laptop or tablet. I can sort through hundreds of emails on my tablet or smartphone but must use a laptop for powerful sorting and cleanup. Likewise, I can create complex spreadsheets on the tablet, but I would likely use my MacBook or PC, with a keyboard and full functionality.
My take—Tip #1: I DO use the iPad for mail and social apps; Tip #2: I use the iPad to catch up on reading; Tip #3: I turn off MOST notifications; Tip #4: I change SOME of the settings to improve battery life.
Let me hear how you use your tablet!
Categories: Apple, iOS, Microsoft, OSX, productivity, trends, Windows | Tags: 2014, Apple, Business, BYOD, hardware, IOS, iPad, IPhone, Microsoft, Microsoft Windows, mobile, productivity, smartphone, tablet, tablet computer, tech, Windows 7 | Permalink.
Physical characteristics: The black-winged stilt has long pink legs and a straight or upwardly curved black bill. In the male, the back and wings are black, the belly is white, and the tail is marked with gray bands. Females have dullish brown backs. The color of the head and neck varies in black-winged stilts from white to black.
Geographic range: The black-winged stilt is widely distributed and occurs on all continents except Antarctica.
Habitat: Black-winged stilts occupy wetland habitats including marshes, swamps, lakeshores, river-edges, and flooded fields.
Diet: Black-winged stilts eat aquatic insects, mollusks, crustaceans, worms, small fish, and tadpoles. They sometimes forage, or search for food, at night, particularly when there is no moon and therefore little light.
Behavior and reproduction: Black-winged stilts can be found in large flocks of as many as several thousand individuals. They have a display where they leap up and then float down, but it is not known what the purpose of the display is. Their call is described as a sharp "yep" sound.
Black-winged stilts and people: No significant interactions between black-winged stilts and people are known.
Which eye is dominant suggests whether you should shoot right-handed or left-handed. Right-eye-dominant archers draw the string with the right hand; left-eye-dominant archers draw the string with the left hand. It is preferable to determine which eye is dominant before learning to shoot or buying equipment.
Shooting equipment that is matched to your dominant eye is certainly the ideal situation, but this is not a hard-and-fast rule. It is better to let an archer who resists change use equipment in a way that is natural to them than not shoot at all! There are many well-known adult archers who shoot opposite their dominant eye.
We have found an easy method of determining the dominant eye that is accurate and does not require any interpretation or feedback by the archer, as some methods do. This is particularly important when determining the dominant eye of a young child.
2. Cross the fingers and thumbs to create an opening about 1-2 inches in diameter.
3. With both eyes open, raise the hands extended at arm's length, as shown in picture "A", and look through the hole created by the fingers at the object.
4. Keeping the object in sight, bring the hands to the face as shown in picture "B". The hands will go to the dominant eye.
Has the Left learnt nothing from the past 30 years?
THEY learned nothing and they forgot nothing, it was said contemptuously of the French Bourbons. It might not suit the self-image of the Labour Party to be compared to a royal house, particularly one that got so out of touch with its people that it was toppled by a bloody revolution.
But that is the thought that sprang to mind when Peter Hain floated the idea that top rates of tax should go up so that other people, and specifically the middle classes, could pay less. It was as if the past 30 years had never happened and we were back in the House of Commons of 1975, with Labour Chancellor Denis Healey promising his baying backbenchers he would 'squeeze the rich till the pips squeak'.
It was a disaster then and would be a disaster now. But clearly, if we have got as far as Hain seriously floating the idea, then too many people must have forgotten why it does not work and, perversely, risks making the whole population substantially worse off.
The big economic discovery of the Eighties was the Laffer curve, which, like most economic theory, was common sense dressed up as science. Its basic proposition was that if governments take in tax most of what people earn, then people won't work very hard. If, however, government cuts tax as far as it reasonably can, people will have the motivation to work harder, and that will generate significant extra wealth which in turn will yield tax. A virtuous circle is created whereby the extra growth is more than enough to make good the revenue lost by the initial tax cuts. Ronald Reagan was converted to this idea in the 1980s. Since then it has swept the world, and with good reason. It has been demonstrated time and again that low-tax economies are more flexible, more dynamic and more innovative than high-tax ones, and that if you stifle the dynamism of the entrepreneurs, then everyone ends up worse off.
It is not just bad economics but also lousy maths. There are not enough rich people to go round, so even if their tax was doubled to 80%, it would not make a material difference to the amount the rest of us have to find. Back in the Eighties, when the sums were done regularly to justify the cut in top-rate tax, this was understood. Then, it was estimated that you would have to tax the rich at 90% to knock a couple of pennies off income tax for the rest of the country.
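The arithmetic behind that claim can be sketched with invented numbers. The income figures and rates below are hypothetical, chosen only to show why a surtax on a small group of top earners barely moves the rate everyone else pays; they are not the Eighties estimates the article refers to.

```python
# Hypothetical illustration: a surtax on a small top-earning group raises
# little relative to total taxable income, so it barely moves the basic
# rate. All figures are invented for the sketch.

top_earners_income = 20e9      # total taxable income of the top group, in £
everyone_else_income = 500e9   # taxable income of everyone else, in £
current_top_rate = 0.40

# Raise the top rate to 90% (a static calculation: the Laffer argument
# says behavioural responses would shrink the base and cut the gain further).
extra_revenue = top_earners_income * (0.90 - current_top_rate)

# How far could the basic rate fall while collecting the same total?
basic_rate_cut = extra_revenue / everyone_else_income
print(f"extra revenue: £{extra_revenue / 1e9:.0f}bn")
print(f"basic rate could fall by {basic_rate_cut * 100:.0f} percentage points")
```

Even before any behavioural response, the cut available to everyone else comes out at around two percentage points, the "couple of pennies" of the article's claim, and the Laffer argument is that the real gain would be smaller still.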
So higher tax demotivates top earners, drives a lot of them abroad, destroys the image of the country internationally as a sensible place to come to work and invest, stifles economic growth and enterprise, undermines our international competitiveness, encourages totally unproductive tax-avoidance devices - yet does not significantly ease the burden on the rest of us.
The only justification, and it is political and personal rather than economic, is envy. Nor, and this is a further important point, would taxing top earners more and leaving the tax for the rest of us unchanged release a huge surge of money for schools and hospitals. Income tax is an important revenue source along with National Insurance and VAT, but it is still well under a fifth of the £400 billion-plus the Chancellor will raise this year.
Think about it. Since 1997 Labour has kept its promise not to raise income tax. But the total amount of revenue coming in has already increased from £270 billion in the Tories' last year of power to £400 billion now and £500 billion by the next election. This shows that from Gordon Brown's perspective, top-rate tax rates are not really the issue other than for those penalised - it is the overall tax burden that matters, and that of course has gone up quietly but substantially.
The case against penal taxes on top earners is so clear-cut it is hard to believe that this is not a bit of politics at its most cynical. Hain probably feels that the outrage over fat cats has swayed the popular mood. So why float the idea? Has he been put up to it by Tony Blair to sweeten the increasingly militant unions? Did he simply want to ruffle the feathers of Gordon Brown? Maybe his week as Leader of the House has rekindled the fire of ambition, so this is really about him re-establishing his Left-wing credentials so he will have more support should any more senior jobs come vacant.
Whatever the reason, Mr Hain came to us from what was then a Third World country, South Africa. His ideas on tax would go a long way to making Britain into one, too.
Bloomberg published a report Friday afternoon based on leaks by a source with intimate knowledge of the Mueller probe (and Mueller’s thinking on the probe) who said Special Counsel Robert Mueller has not cleared President Donald Trump of colluding with Russia in the 2016 election nor of obstructing the subsequent investigations into the alleged collusion. The leaker also claimed Friday’s indictments about Russian interference in the election are meant to be a warning for the 2018 mid-term elections.
Special Counsel Robert Mueller and his prosecutors haven’t concluded their investigation into whether President Donald Trump or any of his associates helped Russia interfere in the 2016 election, according to a person with knowledge of the probe.
Friday’s indictment of a St. Petersburg-based “troll farm” and 13 Russian nationals should be seen as a limited slice of a comprehensive investigation, the person said. Mueller’s work is expected to continue for months and also includes examining potential obstruction of justice by Trump, said the person, who requested anonymity to discuss an investigation that is largely confidential.
…“Russia started their anti-US campaign in 2014, long before I announced that I would run for President. The results of the election were not impacted. The Trump campaign did nothing wrong – no collusion!” Trump said on Twitter.
That has yet to be determined. Friday’s indictment should be seen as an effort by Mueller to raise awareness about Russia’s capabilities as the 2018 U.S. elections draw near, the person said.
It’s still possible that Mueller will indict Americans for knowingly helping Russia, the person said.
End excerpt. Please read the entire Bloomberg article at this link.
Earlier today, Deputy Attorney General Rod Rosenstein announced indictments against 13 Russian nationals and 3 Russian entities for meddling in the 2016 Presidential election, which began in 2014 before the President declared his candidacy. President Donald J. Trump has been fully briefed on this matter and is glad to see the Special Counsel's investigation further indicates that there was NO COLLUSION between the Trump campaign and Russia and that the outcome of the election was not changed or affected.
The Justice Department issued a statement about the indictments–the conclusion of the statement was apparently rebutted with the leak to Bloomberg.
…There is no allegation in the indictment that any American was a knowing participant in the alleged unlawful activity. There is no allegation in the indictment that the charged conduct altered the outcome of the 2016 election.
|
0.999899 |
A methodology is a tool that I will need to use in order to structure and ultimately carry out my dissertation. Using a methodology will allow me to achieve a systematic approach in my primary and secondary research for the project, making the entire process as efficient as possible. In short, a methodology can be defined as 'a system of methods used in a particular area of study or activity.' It forms a set of procedures that will be carried out in order to acquire knowledge about my specific line of inquiry/chosen research topic.
What if I want to find out about social trends, or the measurable effects of particular policies?
You will probably want to use large data-sets and undertake quantitative data analysis, and you will be adopting a realist approach to the topic studied. Quantitative dissertations are likely to be nearer to the lower end of the range of approved lengths for the dissertation (e.g. if the length is to be 5,000-8,000 words, dissertations based on quantitative analysis are likely to be closer to 5,000 words in length). They will also include tables and figures giving your important findings. Remember that all tables must be carefully titled and labelled and that sources of your data must be acknowledged.
You will probably want to use in-depth qualitative data, and you may wish to adopt a realist, a phenomenologist, or a constructionist approach to the topic. Qualitative dissertations will include descriptive material, usually extracts from interviews, conversations, documents or field notes, and are therefore likely to be nearer to the upper limit of your word range (e.g. 8,000 words). The types of method suitable for a dissertation could include content analysis, a small scale ethnographic study, small scale in-depth qualitative interviewing and so on.
From this, I have concluded that qualitative primary research methods will prove most useful and effective for my dissertation project, as my topic is concerned with the contemporary, and therefore needs to be informed by contemporary views, opinions and thoughts from artists, designers and practitioners working in and responding to the now.
Through conducting a qualitative analysis, I will likely be looking to use at least some original material in the essay. This may be collected through in-depth interviews, participant observation recordings and field notes, non-participant observation, or some combination of these.
I will also look to visit relevant exhibitions and art spaces to further my primary research, in order to gain a better understanding of my topic and gain a rounded perspective on contemporary graphic design practice and wider visual culture.
|
0.999998 |
This option disables diagnostic messages that have the specified tags.
The --diag_suppress option behaves analogously to --diag_errors, except that the compiler suppresses the diagnostic messages having the specified tags rather than setting them to have Error severity.
This option has the #pragma equivalent #pragma diag_suppress.
Its argument is a comma-separated list of diagnostic message numbers specifying the messages to be suppressed.
See Suppressing diagnostic messages in the Compiler User Guide.
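As an illustration only (the message numbers 1293 and 188 are placeholders, not taken from this page, and the exact spelling may vary between toolchain versions), the option is typically written on the command line like this:

```
armcc --diag_suppress=1293,188 -c main.c
```

and the pragma equivalent is scoped in source code:

```
/* Suppress the same diagnostics from this point in the translation unit. */
#pragma diag_suppress 1293,188
```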
|
0.999617 |
Mike, I don't think every individual in this group of professionals would have been prepared to risk their career for two relative strangers, whatever the predicament Kate and Gerry found themselves in.
Have you got? Could you? What do we do now? Who is culpable? If I hadn't? If we hadn't? And ultimately it was you who?
Somebody acted on such advice, and Maddie died as a result..
But why would Oldfield and O'Brien be volunteering to do Kate McCann's 9.30pm check, given that Russell O'Brien had checked apartment 5A only minutes beforehand?
Why would Oldfield and O'Brien be looking in on the McCann siblings, if minutes before they had already been checked?
I think that Gerry McCann went to apartment 5A at 9.30pm because of Russell O'Brien's intrusion minutes beforehand. If Maddie was already dead by that stage, he would have been worried about whether or not O'Brien had stumbled upon the fact that Maddie was not sleeping in her bed..
Seems to me that 9.30pm was a pivotal point when things came to a head, and which resulted in Maddie's body having to be taken out of the apartment..
I think that Jane Tanner left the tapas bar a little while after Gerry McCann at around 9.30pm, to 9.40pm, and that this was the occasion that she saw him in the street talking to a friend of his. I think that Oldfield and O'Brien also left the tapas bar again, and one of them, or Gerry McCann, took Maddie's body out of the apartment - the person who took her body out of the apartment was almost certainly the person who was seen carrying a child by the Smith contingent..
There was no reason at all for Kate McCann to have to go and do a 10pm check, because she knew that Gerry had left the tapas bar at 9.30pm and hadn't come back. She must have been wondering what was keeping her husband so long.
On top of this, she must have noticed that Oldfield, O'Brien and Tanner had all left the tapas bar in the wake of Gerry McCann's 9.30pm departure..
Seems almost certain to me that the people she was referring to who had taken Maddie were (1) her husband Gerry, (2) Russell O'Brien, and/or (3) Matthew Oldfield..
I don't think that Kate McCann would have gone back to the tapas bar after she discovered that 'they' had taken Maddie, and leave her other two siblings unprotected and alone. I think Jane Tanner stayed with the remaining two McCann siblings whilst Kate returned to the tapas bar to raise the false alarm..
Gerry McCann was not at the tapas bar either when Kate left there at 10pm, or when she returned to raise the alarm..
This makes it highly probable that the man seen carrying a child in his arms by the Smith contingent was in fact Gerry McCann..
Rather curiously, the cadaver dogs alerted positively to an area in the front garden of 5A, suggesting that traces of cadaveric odour were deposited in the very location where Gerry McCann had been seen, and spoken to, so soon after Maddie's disappearance had been reported..
In what circumstances could Maddie's cadaveric odour have been transferred into the bushes of the front garden of apartment 5A after the sighting of Gerry McCann there at around 10.30pm on the evening of 3rd May 2007?
Mrs Fenn wasn't mistaken: she saw and recognised Gerry McCann amongst the bushes and shrubbery in the garden of apartment 5A late on the evening of 3rd May 2007, when he told her that a little girl was missing!
Not 'his daughter', but a little girl..
|
0.956345 |
Did Google remove presidential candidates Donald Trump and Gary Johnson from search results?
The US has four nominees for president now. The choices narrowed a bit yesterday when Bernie Sanders officially nominated Hillary Clinton after losing a hard-fought campaign. Clinton swallowed her pride and did the same thing for Barack Obama back in 2008.
Despite the common misconception, the US does have more than two parties, though most citizens seldom hear about others. There are also fringe parties that really aren't heard of.
However, a search of Google as of this morning told people there were only three candidates, and one of those was no longer in the race. The ones listed were Clinton, Sanders and Stein. Both Donald Trump, the Republican nominee, and Gary Johnson, the Libertarian nominee, had been removed.
Per NBC, that result changed again, this time simply removing them all. Current searches bring up only related news stories.
NBC points out that "In a tweet on July 17, Green Party candidate Jill Stein posted a screen shot of another version of the guide showing five candidates. Sometime between then and this morning, Donald Trump and Gary Johnson disappeared".
Search Engine Roundtable grabbed a screenshot which can be seen below. During the course of this morning my colleagues here have received varying results for the search. For now it seems the entire box, which was appearing at the top of search results for "presidential candidates", has been removed.
|
0.998412 |
For birthday celebrations, there's something special about having a homemade touch! This simple birthday banner is so fun and festive, but is also so simple, you can make it in a snap. There are no fancy cutting tools involved, so you can create this project with just the items that you already have in your supply cabinet. Make a personalized birthday banner for friends or family on their next annual celebration and they'll definitely be feeling the love!
As you're planning, consider the theme of your birthday party to make the banner fit the festivities. You can customize colors, paper patterns, flag shape, and even the style of your alphabet.
Start by cutting your cardstock. If you're working with standard 8-1/2" x 11" sheets, you can simply cut them into quarters.
Measure your first cut, using your ruler, along the halfway point of the long end, cutting the sheet directly in half. Then cut both of those sheets in half as well, creating four quarters. Each of these quarters will be 4-1/4" x 5-1/2". You will need 13 pieces for your banner if you're spelling out "Happy Birthday." If you'd like to customize it with someone's name, make enough sheets to have one sheet for each letter in the name.
Cut the patterned paper so that it serves as an accent color and pattern on each banner flag. Cut pieces of your patterned paper that are 4-1/4" x 2". You will need 13 of these for a "Happy Birthday" banner, or more if you're using a name, as per the instructions in the previous step.
Using a glue stick, spread some glue on the back side of the patterned paper. Adhere the patterned paper to one end of a plain sheet of your solid colored cardstock. Press firmly along all edges to ensure that it's secure. Be sure that all edges are aligned.
Place one letter on each flag of your birthday banner, arranging the letter in the center of the flag with the patterned paper on the bottom of the flag. If you don't have alphabet stickers, you can simply use a black marker to write a letter on each flag.
Whether you're using stickers or writing, you'll apply one letter of "Happy Birthday" to each flag. Once the letters are applied, punch a hole on either edge of the top of each flag using a hole punch.
Cut a length of cord that's a few feet long. Starting with the "H," insert the cord into the left hole, from the front to the back, and then in through the hole on the right, from the back to the front. Continue in this manner with each letter in order. You can string all letters onto one long cord, or separate the words onto multiple cords as shown here.
Hang your banner up and let the birthday festivities begin!
|
0.999999 |
How can different use cases for a "component" be documented in the style guide? Does that mean that the style guide would need to be connected to the same database the app uses?
The idea of using a living style guide is that the documentation is connected to the source code, which includes showing components that would usually use mock data via fixtures. Typically fixed data is used for testing purposes, but it can also be used for demonstrating how a component works, and therefore show back to back the different use cases the component supports.
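To make the fixture idea concrete, here is a minimal, hypothetical sketch in Python. The component, fixture names, and data are all invented for illustration; a real living style guide would do this in the UI framework's own language. The point is that fixtures stand in for the database, and the guide renders the component once per fixture to document every use case.

```python
# Fixtures: fixed mock data covering the component's distinct use cases.
# No database connection is needed; each fixture is a hand-written example.
FIXTURES = {
    "empty_cart": {"items": []},
    "single_item": {"items": [{"name": "Mug", "qty": 1}]},
    "bulk_order": {"items": [{"name": "Mug", "qty": 12}]},
}

def render_cart(data):
    """Stand-in for a real UI component: render a cart summary as text."""
    if not data["items"]:
        return "Your cart is empty"
    total = sum(item["qty"] for item in data["items"])
    return f"{total} item(s) in cart"

# The living style guide iterates over the fixtures, showing each
# documented use case back to back.
for name, data in FIXTURES.items():
    print(f"{name}: {render_cart(data)}")
```

Because the fixtures live next to the source code, adding a new use case to the documentation is just adding one more entry to the fixture set.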
|
0.90894 |
Implement data validation in a Windows Phone app created by using the Windows Phone SharePoint List Application template. In a Windows Phone app intended for production use, you likely need to validate data entered by users to, for example, enforce business logic relevant to your particular circumstances, or to ensure appropriate formatting of entered values, or simply to catch mistakes before saving values to a SharePoint list. Projects based on the Windows Phone SharePoint List Application template include default data validation logic, but such projects also provide a mechanism for developers to implement custom data validation.
Important: If you are developing an app for Windows Phone 8, you must use Visual Studio Express 2012 instead of Visual Studio 2010 Express. Except for the development environment, all information in this article applies to creating apps for both Windows Phone 8 and Windows Phone 7. For more information, see How to: Set up an environment for developing mobile apps for SharePoint.
Some data types for fields in SharePoint lists are associated by default with simple formatting or data validation. If you enter an invalid URL for a field based on the Hyperlink or Picture field type in a SharePoint list and attempt to save your changes, you see a message indicating that the address you entered is invalid. If you enter a customer name as a value for a field based on the Date and Time field type, you receive a message directing you to enter a date within a valid range for the field.
Date input validation is with respect to SharePoint date format. If the date format of the phone locale is required, customize the field and add validations accordingly.
The text box labeled "Start Time" in the Edit form is bound to a Date and Time field in the SharePoint list on which this sample app is based. The validation error cue (in red text) shown in Figure 1 appears if an invalid date is entered in the text box (and the text box subsequently loses focus) because the ValidatesOnNotifyDataErrors property of the Binding object associated with the Text property of the TextBox control is set to True in the XAML declaration that defines the TextBox in the EditForm.xaml file.
But some fields may not provide any notification for invalid data in the Windows Phone app. And well-designed Visual Studio project templates are necessarily generalized to be used as a starting point for many different applications. The Windows Phone SharePoint List Application template can't include validation rules relevant to specific contexts and yet retain its value as a generalized template. Depending on your needs and the circumstances in which your particular Windows Phone app will be used, you likely will want to implement your own custom data-validation rules.
The SharePoint list templates do not include default validations (such as percentage complete in a SharePoint task list, post check for a team discussion list, and SP decimal field type validation), but you can implement such validations.
In applications designed based on the MVVM pattern, data validation is often handled in the data layer (that is, in the Model component). In projects created from the Windows Phone SharePoint List Application template, an extensible mechanism for data validation has been "pushed up" a layer and implemented in the ViewModel component, to make it easier for developers to manage data validation. In projects based on the template, therefore, the most suitable place for custom code that validates user input or otherwise manages data is in these ViewModel classes. In terms of data validation, the EditItemViewModel class and the NewItemViewModel class (the classes associated with the forms most likely to involve editing and updating list data) both provide an open implementation of a validation method (named Validate) that overrides the base validation method in the class from which these two classes are derived.
This method provides a convenient mechanism to the developer for adding custom validation logic that targets individual fields. The general approach is to check the value of the fieldName argument passed to the Validate method to identify the field you want to associate with your custom validation code. You can, for example, use a switch statement in your implementation of this method to supply validation logic specific to various fields in the Edit form (EditForm.xaml) of your Windows app.
For the following code example, assume that an installation of SharePoint Server has a Product Orders list created from the Custom List template. The list has been created with the columns and field types shown in Table 1.
Fulfillment dates for orders must be later than the date on which the order was placed.
If a customer wants to place an order for a product named Fuzzy Dice, the dice must be ordered in pairs. According to the peculiar rules at Contoso, Ltd., there is simply no such thing as a Fuzzy Die.
In the Product Orders list, the field type for phone numbers is "Single line of text" (that is, Text), which can be any text (up to 255 characters by default). For this sample, a formatting validation rule will be enforced that requires entered data to be in one of the common phone number formats; for example, "(555) 555-5555".
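The third rule amounts to a pattern check. As a rough, language-neutral illustration of what such a check involves, here is a short Python sketch; the accepted formats and the function name are assumptions for demonstration, not taken from the template's C# code:

```python
import re

# Hypothetical pattern for common US phone formats, e.g. "(555) 555-5555",
# "555-555-5555", or "555.555.5555". The actual pattern used in the
# walkthrough's validation code may differ.
PHONE_PATTERN = re.compile(r"\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}")

def is_valid_phone(value: str) -> bool:
    """Return True if the whole string looks like a common US phone number."""
    return PHONE_PATTERN.fullmatch(value) is not None
```

A validation rule like this rejects malformed entries while still accepting the handful of formats users commonly type.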
Assuming you have created a SharePoint list based on the Custom List template that includes the columns and types specified in Table 1, create a Windows Phone app by using the Windows Phone SharePoint List Application template in Visual Studio by following the steps detailed in How to: Create a Windows Phone SharePoint list app.
In Solution Explorer, in the ViewModels folder for the project, double-click the EditItemViewModel.cs file (or choose the file and press F7) to open the file for editing.
Add the following using directives to the list of directives at the top of the file.
Replace the default implementation of the Validate method in the file with code along the following lines. (The listing is abbreviated here: parsing of the entered values, for example into quantityOrdered, and the date comparison are indicated by comments rather than reproduced in full.)

public override void Validate(string fieldName, string value)
{
    if (!string.IsNullOrEmpty(value)) // Allowing for blank fields.
    {
        switch (fieldName)
        {
            case "Quantity": // Enforce ordering Fuzzy Dice in pairs only.
                if ((quantityOrdered % 2) != 0) // Odd number of product items ordered.
                    AddError("Item[Quantity]", "Fuzzy Dice must be ordered in pairs.");
                break; // Restriction on ordering in pairs doesn't apply to other products.
            case "Fulfillment_x0020_Date":
                // Determine whether fulfillment date is later than order date.
                break;
            case "Contact_x0020_Number":
                // Check that contact number is in an appropriate format; if not,
                // add the error "Specified Contact Number is not a valid phone number."
                break;
            default: // Not adding custom validation for other fields.
                break;
        }
    }
    base.Validate(fieldName, value); // Then proceed with default validation from base class.
}
Keep in mind that the field names specified in this code sample are based on properties of the sample Product Orders list specified in Table 1. (Notice that in the XML schema for list fields in SharePoint Server, spaces in the names of fields are replaced with the string "_x0020_" for the Name attribute of the Field element that defines a given field. The template uses the Name attribute for a Field element as it is defined in the XML schema on the server, not the DisplayName attribute.) You can identify the field names of those fields for which you want to implement validation logic by looking at the Binding declarations of the Text properties for the TextBox objects defined in EditForm.xaml or by examining the ViewFields string of the CamlQueryBuilder class in the ListProvider.cs file.
The custom validation code in this sample is executed only if the value argument passed to the Validate method is not a null or empty string. As indicated in Table 1, the Fulfillment Date and Contact Number fields are not required to contain data (as the list is defined for the purposes of this sample in SharePoint Server), so we want to allow these fields to be blank. A simple check to determine whether the value argument is null is not sufficient, because the value passed could be a zero-length string (which doesn't equate to a null value), and for this sample we don't want to invalidate zero-length strings for fields that can be blank. The validation logic for the Quantity and Fulfillment Date fields includes additional checks of the values passed in to ensure that they are of the appropriate type. If the initial check here (before the switch statement) confirmed only that the value passed in were not null (instead of checking against the narrower condition of being a zero-length string), those validations would still not execute if the value were a zero-length string, but the logic to validate data for the Contact Number field would still execute if the value passed were a zero-length string. And in this sample we want to allow for the Contact Number field to be blank (a zero-length string), especially when a user starts editing a list item by opening the Edit form.
The code in this sample, if it is included in the EditItemViewModel.cs file only, enforces these validation rules for data entered by users only on the Edit Form. If you want to enforce the validation rules both when users add new items as well as when they edit them, you must include the same validation logic in the Validate method in the NewItemViewModel.cs file (or, preferably, create a separate class file with a function that includes this validation logic and call that same function from the Validate methods in both the EditItemViewModel.cs file and the NewItemViewModel.cs file).
The validation logic in this sample enforces given business rules by indicating to the user that entered data is not in a format permitted by the rules, but the entered data is not intercepted and changed by this code. To intercept and, for example, format phone numbers in a consistent way before saving the data to the SharePoint list, you can implement custom data conversion for entered phone numbers. For an explanation of custom data conversion for list item fields, see How to: Support and convert SharePoint field types for Windows Phone apps.
|
0.999995 |
Massage therapy is an alternative or complementary therapy which involves working on and applying pressure to the patient’s body using the hands, or sometimes the feet, elbows, forearms, or a device. People most commonly seek massage therapy to help alleviate pain or stress.
Massage as medical therapy is a somewhat controversial idea. It cannot meet the gold standard of scientific research, testing in placebo-controlled, double-blind clinical trials, because it is very difficult (perhaps impossible) to perform a "placebo" massage.
However, there is some evidence that massage carries certain benefits, including reducing stress, anxiety and depression, temporarily lowering the heart rate and blood pressure, and pain relief. Claims that it helps low back pain, on the other hand, are disputed.
Due to the benefits that have been observed or claimed, massage may be employed as a technique within physical therapy.
It has been claimed that massage can help treat the following, either alone or in conjunction with other treatments. These claims are generally disputed, and the consensus is that more research is needed.
What does massage therapy consist of?
Massage therapy consists of applying pressure to the patient’s body, usually with the hands, although the massage therapist may use their elbows, forearms, feet, or devices. Massages can be done to the whole body, or just to part, such as the back, or limbs. Physiotherapy for a joint injury may include massaging the injured muscles, tendons, or ligaments.
The mechanism behind massage (if there is one) is not understood, although theories propose that massages activate the parasympathetic nervous system, releasing endorphins and serotonin (substances produced naturally in our bodies that make us feel good), and improving blood circulation and/or the flow of the lymph, which helps to carry nutrients to the cells and remove impurities and toxic substances from the body.
|
0.902863 |
Bcl9, Bcl9l, and Pygo2 interact with transcription factors, such as the Wnt-regulated protein β-catenin, to regulate gene expression. Cantù et al. reveal that these proteins also have cytoplasmic functions during tooth development and are particularly important for the formation of enamel. Mice lacking both Pygo1 and Pygo2 or both Bcl9 and Bcl9l developed teeth, a process that requires Wnt/β-catenin transcriptional regulation, but the enamel was structurally disorganized and contained less iron than teeth from control mice. Bcl9, Bcl9l, and Pygo2 were present in the cytoplasm of ameloblasts, the cells that secrete enamel proteins, and colocalized in these cells with amelogenin, the main component of enamel. Bcl9 interacted with amelogenin and proteins involved in exocytosis and vesicular trafficking, suggesting that these proteins function in the trafficking or secretion of enamel proteins. These results demonstrate that Bcl9, Bcl9l, and Pygo2 have cytoplasmic functions distinct from their roles as transcriptional cofactors downstream of Wnt signaling.
Wnt-stimulated β-catenin transcriptional regulation is necessary for the development of most organs, including teeth. Bcl9 and Bcl9l are tissue-specific transcriptional cofactors that cooperate with β-catenin. In the nucleus, Bcl9 and Bcl9l simultaneously bind β-catenin and the transcriptional activator Pygo2 to promote the transcription of a subset of Wnt target genes. We showed that Bcl9 and Bcl9l function in the cytoplasm during tooth enamel formation in a manner that is independent of Wnt-stimulated β-catenin–dependent transcription. Bcl9, Bcl9l, and Pygo2 localized mainly to the cytoplasm of the epithelial-derived ameloblasts, the cells responsible for enamel production. In ameloblasts, Bcl9 interacted with proteins involved in enamel formation and proteins involved in exocytosis and vesicular trafficking. Conditional deletion of both Bcl9 and Bcl9l or both Pygo1 and Pygo2 in mice produced teeth with defective enamel that was bright white and deficient in iron, which is reminiscent of human tooth enamel pathologies. Overall, our data revealed that these proteins, originally defined through their function as β-catenin transcriptional cofactors, function in odontogenesis through a previously uncharacterized cytoplasmic mechanism, revealing that they have roles beyond that of transcriptional cofactors.
|
0.975627 |
What is yoga for energetic balance, and how can it help me heal? What is the subtle body, and how does yoga affect it?
Come explore the intricate connection between chakras, nadis, sound and yogic philosophy and how these timeless teachings can promote balance and healing. Realize and reconnect with your true nature which is joyful, expansive and fearless.
We will also work with mantra, learning more about the history of mantra, and how sounds affect our bodies at a vibrational level.
|
0.999982 |
What is a ROM Cartridge?
A plug-in card similar to a ROM card, but commonly much larger. Great examples of ROM cartridges are those used with earlier game consoles, like the Atari and Nintendo Entertainment System.
|
0.985424 |
Find a list of top private schools in South Africa. Are you looking for a high school or a primary school? Choose all-girls, all-boys, or co-ed schools. You can search for, compare, and save your best choices to your shortlist.
|
0.983572 |
What is Seven Swordsmen about?
AD 1664, the Manchurians defeated the Ming Empire and successfully captured the Central Plains. To exert its influence and ensure a trouble-free reign, the Manchu government went all out to arrest the pugilists. To save the Central Plains, a recluse at Mt Tian, Dhyana Master Reverend Hui Ming (Master Shadow Glow), ordered the Seven Swordmasters to descend Mt Tian. However, after leaving Mt Tian, circumstances dictated that the Seven Swordmasters go their separate ways, so they arranged to meet each other by the Qiantang River one year later. During a battle, Yang Yun Cong was seriously wounded. He was saved by an enemy general's daughter, Nalan Ming Hui. The two fell in love and had a daughter. However, Nalan Ming Hui was compelled by her father to marry Prince Duo Duo. Yang Yun Cong asked Nalan Ming Hui to leave with him, but, as the safety of her family was at stake, she declined. Yang Yun Cong snatched away their daughter and went to the Qiantang River for the appointed meeting with the six other Mt Tian disciples. However, only Mu Lang showed up. Before his death, Yang Yun Cong entrusted Mu Lang with the care of his daughter and told him to bring her to Mt Tian to study martial arts... Eighteen years later, Ling Mo Feng (Mu Lang) had successfully inherited all the skills of Reverend Hui Ming. Yi Lan Zhu, the baby girl of yesteryear, had also mastered the Mt Tian Swordplay. She vowed to avenge the death of her father. The remaining Swordmasters became the leaders of the Heaven and Earth Society. On learning that Emperor Kangxi would be going to Mt Wutai to pay his homage, they meticulously plotted to assassinate him.
|
0.999892 |
How do I get audio to headphones from my TV?
The best Audio-Technica headphones for audio quality are the ATH-M50x. They are popular over-ear headphones, thanks to their almost unmatched price-to-sound-quality ratio. If you're looking for a sturdy and durable pair of headphones that you're primarily going to use to listen to music, then they're a great option. They're comfortable and have excellent sound quality, but they do not block... For example, if a TV outputs sound to its own speakers slightly faster than it outputs sound to the audio output jacks that the headphones are connected to, then there would be a delay in the audio getting to the headphones on top of the latency of the headphones, so the difference between audio and image would be more obvious.
Does the TV have RCA outputs? Those are the red and white stereo connectors. If so, just get a cable that converts RCA to a 3.5mm female connector where you stick your headphone jack in.
The most expensive option is to get an AV amp or soundbar connected to the optical audio out of a new TV, which can be used while the headphones are plugged into the TV. It could be annoying unless you get the audio delay on the amp exactly right, though.
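The delay described above is additive: what the listener perceives is roughly the TV's output offset plus the headphones' own latency. A minimal illustrative sketch (all millisecond values below are made-up examples, not measurements from the original post; the ~45 ms figure is a commonly cited rough threshold for noticeable lip-sync error):

```python
# Rough threshold (ms) at which audio lagging video becomes noticeable.
PERCEPTIBLE_LAG_MS = 45

def total_headphone_lag(tv_output_offset_ms, headphone_latency_ms):
    """Total audio lag relative to the picture, in milliseconds.

    tv_output_offset_ms: how much later the TV feeds its audio-out jack
                         than its own speakers (hypothetical value).
    headphone_latency_ms: the headphones' own processing latency.
    """
    return tv_output_offset_ms + headphone_latency_ms

lag = total_headphone_lag(tv_output_offset_ms=20, headphone_latency_ms=35)
print(lag)                       # 55
print(lag > PERCEPTIBLE_LAG_MS)  # True: an adjustable audio-delay setting would help
```

This is why an amp or soundbar with an adjustable audio-delay setting can fix the problem: it lets you cancel out the combined offset.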
|
0.958222 |
Is respect for autonomy defensible?
Wilson, J. (2007) Is respect for autonomy defensible? Journal of Medical Ethics, 33(6), 353-356. doi:10.1136/jme.2006.018572.
Three main claims are made in this paper. First, it is argued that Onora O'Neill has uncovered a serious problem in the way medical ethicists have thought about both respect for autonomy and informed consent. Medical ethicists have tended to think that autonomous choices are intrinsically worthy of respect, and that informed consent procedures are the best way to respect the autonomous choices of individuals. However, O'Neill convincingly argues that we should abandon both these thoughts. Second, it is argued that O'Neill's proposed solution to this problem is inadequate. O'Neill's approach requires that a more modest view of the purpose of informed consent procedures be adopted. In her view, the purpose of informed consent procedures is simply to avoid deception and coercion, and the ethical justification for informed consent derives from a different ethical principle, which she calls principled autonomy. It is argued that, contrary to what O'Neill claims, the wrongness of coercion cannot be derived from principled autonomy, and so its credentials as a justification for informed consent procedures are weak. Third, it is argued that we do better to rethink autonomy and informed consent in terms of respecting persons as ends in themselves, and a characteristically liberal commitment to allowing individuals to make certain categories of decisions for themselves.
|
0.999999 |
Wondering how to make your new aquarium look beautiful for your pet fish?
Regardless of its size, it is a good idea to keep your aquarium well decorated so that your fish have plenty to look at and explore.
Some fish tank decorations may be beneficial to the health of your fish and the water upkeep. The ornaments you choose are up to you, depending on whether you want them to be bright or subtle.
Don't overcrowd the tank: You should only put items in your tank if you are willing to look after them.
Hiding Places: Ceramic objects or natural rocks provide places for your fish to swim around as well as find refuge in. Hollow items give fish a place to go if they are startled or hiding. Ensure that all items are disinfected and properly cleaned before being introduced into the tank.
Plants: Including plants in your fish tank can help reduce algae growth, and they are sometimes useful as an extra source of food for your fish. Make sure your plants get plenty of light and care to ensure their survival.
Pebbles: You should spread a generous amount of pebbles along the bottom of your tank. This will give your ornaments a place to sit, as well as keep your fish off the glass in case they go to the bottom of the tank.
Decorative Objects: Items that you think look pretty and are safe for your fish can brighten up your fish tank instantly! Once you've considered their size, weight and safety, you can choose whatever ornaments you like to keep your tank beautiful.
|
0.99963 |
Rotsan Regency is one of the popular residential projects that is located in Vidya Nagar, Hubli. Developed by Rotson Group, this project offers thoughtfully constructed apartments with modern amenities for the comfort of residents. Adding to this, it is close to the market, hospital and many educational institutions.
In which area is Rotsan Regency located? The project is situated in Vidya Nagar, Hubli.
|
0.971777 |
Juventus striker Paulo Dybala has indicated he is aiming to follow in the footsteps of his compatriot Carlos Tevez.
After Boca Juniors clinched the league title, Carlos Tevez said it would lead to bigger and better things for the club.
(Photo captions) Juventus striker Carlos Tevez waves as he arrives at the team's hotel in Berlin, Germany, on June 5, 2015, ahead of the Champions League final against Barcelona at the Olympic Stadium. Carlos Tevez of Juventus FC celebrates with his family and the Serie A trophy at the end of the Serie A match between Juventus FC and SSC Napoli at Juventus Arena on May 23, 2015 in Turin, Italy.
Carlos Alberto Martínez Tevez (born 5 February 1984) is an Argentine professional footballer who plays as a forward for Boca Juniors and the Argentina national team. His energy, skill, and goal scoring rate have made him an indispensable player for his club sides throughout his career, in the eyes of fellow players and media alike.
Tevez began his career with Boca Juniors, winning the Copa Libertadores and Intercontinental Cup in 2003 before moving to Corinthians, where he won the Brasileiro. In 2006 he moved to West Ham United, helping the team remain in the Premier League in his only season. Tevez's prolonged transfers to West Ham and Manchester United were plagued by issues regarding third-party ownership by Media Sports Investment, and the resulting sagas paved the way for changes to both Premier League and FIFA regulations.
Tevez transferred to Manchester United in 2007 and in his two years won several trophies including two league titles and the Champions League. In 2009 he joined Manchester City for £47 million, becoming the first player to move between the two rival clubs since Terry Cooke in 1999. Despite missing four months of the 2011–12 season following a dispute, Tevez returned to help City win their first league title in 44 years. In 2013, he joined Juventus for £12 million, finishing as the team's top goalscorer and winning the Scudetto in his first season. After winning a domestic double and reaching the Champions League final in his second season, he returned to Boca Juniors in June 2015 where he won another domestic double, becoming the first footballer to win two domestic league and cup doubles in one calendar year.
Since his debut for Argentina in 2005, Tevez has over 75 caps. A gold medal winner at the 2004 Olympics, he also played at two World Cups, a Confederations Cup, and four Copa América tournaments.
|