Columns: text (string, lengths 277 to 230k) | id (string, length 47) | metadata (dict)
William Douglass, M.D. This map, published by Thomas Jefferys, is one of a flurry of cartographic publications in this one year, issued in answer to various French maps and supposed French encroachments on English turf. The yellow highlights clearly show, for propaganda purposes, the looming French presence on the rich fur-bearing western and northern frontiers. The minimal Spanish claim in Florida is barely worth a mention. Douglass's work, while criticized by some, has been described as highly influential on successive writers.
<urn:uuid:d2c09ece-905f-47e1-835c-813dd6662115>
{ "dump": "CC-MAIN-2014-35", "url": "http://web-static.nypl.org/exhibitions/mapexhib/image86.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500823169.67/warc/CC-MAIN-20140820021343-00034-ip-10-180-136-8.ec2.internal.warc.gz", "language": "en", "language_score": 0.9429109692573547, "token_count": 133, "score": 2.640625, "int_score": 3 }
History of the village: The village of Abaújalpár is the "first" village of Borsod-Abaúj-Zemplén county and one of the oldest villages in Hungary. It was first mentioned in written records in 1330. Initially the village belonged to the Aba clan, and it was later owned by the Alpáry family. Its nobleman was Samuel Alpáry, who became well known as a leader of the noble revolt of 1551-1565. In 1905 the village received its present name. The village was destroyed in the 15th-16th centuries, in the age of Turkish subjection. In the 17th century Bohemians (followers of Jan Hus) lived here. They built the Gothic-style church that has been the pride of the village, with its frescoes and boarded ceiling. In the 16th century, Reformed Hungarians moved here. Smallholders and farmers lived here, together with the peasantry and the hired servants who worked for them. A few families are worth mentioning: Bernáth, Andrássy, Nagy, Papp, Patay, and Kapy. Their mansions and gardens made the mountain village, which lies in an exceptionally beautiful area, even more attractive. After the Second World War these families were interned and their estates were divided up. The characteristic, beautiful buildings were turned into stables, crop stores, or offices of the farmers' cooperative; over the years they fell into ruin and disappeared. From 1947 farmers' cooperatives were established, and the population made its living from agricultural work and livestock. The land around the village was excellent for wine production. The Aranyosi Valley, where there was once a quarry, belongs to the village; today it is a pleasant excursion spot, and a thermal bath operated there until the 1950s. Abaújalpár is a tiny village in the north-eastern part of the county, on the western side of the Zemplén mountains. Two geographic sights are close to the village, which used to belong to Abaúj-Torna county and lies north of the Aranyosi brook: the Sátor Mountain of Abaújszántó and the stone sea of Boldogkőújfalu. Forest covers about two-thirds of its territory, and the rest is used as ploughland. The well-known thermal bath called Aranyosfürdő (Golden Bath) is close to the village. Data of the village: Area: 8.48 km2; Population: 96 people (2001 data); Population density: 11.32 people/km2; Post code: 3882; County calling code: 47; Address of the Local Government: 11 Petőfi Street,
<urn:uuid:e4407df3-c2ca-46e7-8be8-1e2ecacd86ce>
{ "dump": "CC-MAIN-2019-35", "url": "http://en.volgykapu.hu/?page_id=212", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317113.27/warc/CC-MAIN-20190822110215-20190822132215-00226.warc.gz", "language": "en", "language_score": 0.9733783602714539, "token_count": 627, "score": 2.6875, "int_score": 3 }
Peggy Shippen Arnold and child, by Sir Thomas Lawrence Peggy's family weren't really Loyalists, but they weren't Patriots either; they sort of straddled the fence. While they believed that the colonists had definite grievances against the Motherland, they thought that things could be worked out if both sides were willing to compromise. It was a tough line to walk, particularly since, during the Revolutionary War, Philadelphia was occupied by both the British and the Americans at different times. While Peggy was growing up, both George Washington and Benedict Arnold had been entertained by her parents. When the British captured Philadelphia in 1777, her parents did the same for the British high command. The parties and balls that had been a feature of Philadelphia social life continued under British occupation, giving Peggy a chance to practice her dance steps and her flirting. A frequent visitor to the Shippen home was a young officer named John André. André was handsome, cultured, and charming. Some historians speculate that Peggy and André fell in love, but there is no evidence of this. In fact, he paid court not only to Peggy but also to her friends Peggy Chew, Becky Franks, and Becky Morris. One might call them André's Angels; he spent that much time with them. When the British withdrew from the city a year later, he gave Peggy a lock of his hair to remember him by. Peggy and her family had initially fled to the New Jersey countryside after the Americans reoccupied the city under the governorship of Benedict Arnold, but they soon moved back because Edward Shippen felt that they would be safer there. The family soon became reacquainted with Benedict Arnold. Arnold was immediately smitten and began courting the young woman despite their 20-year age difference. What did Peggy see in Arnold? Despite the age difference and the fact that he was widowed with three small sons, Arnold was also a hero, responsible for the capture of Fort Ticonderoga and for key actions during the Battle of Saratoga, in which he was wounded. Now a major general, he had been given the military governorship of Philadelphia. While Peggy was willing, her father was more skeptical. Arnold had just been brought up on charges of corruption and malfeasance with the money of the federal and state governments, and was awaiting trial. Arnold, however, knew the way to a woman's heart, purchasing one of the nicest homes in town, Mount Pleasant, for Peggy and giving her ownership of it. On April 19, 1779, Benedict Arnold and Peggy Shippen were married. If Peggy had encouraged Arnold to change sides, it would certainly be understandable. She was being a good wife, supporting her man, who felt unappreciated by the Americans. And she probably didn't have to give him that hard a push. Arnold seems like he would have been a pain in the ass to live with, one of those men who never leave well enough alone. He made as many enemies as he did friends. Pissed off at his treatment in Philadelphia, Arnold resigned his command there in June of 1780. By this time, he had been corresponding secretly with André, who had gotten permission from his commanding officer, General Clinton, to pursue the possibility of Arnold coming over to the British. The messages that were exchanged were sometimes transmitted through Peggy: she would write André a seemingly innocent letter asking for material or some sort of frippery, but the letter would also include coded communications from Arnold in invisible ink.
Arnold had sought and obtained the command of West Point, which was a critical defensive post on the Hudson River. The plan was now for Arnold to weaken the defenses at West Point instead of rebuilding them, to make it easier for the British to capture the fort. Peggy and their newborn son Edward soon joined him, staying at the home of Beverly Robinson, a Loyalist whose house had been seized by the Americans. Image of a coded letter: Peggy Shippen Arnold's handwriting is interspersed with coded writing in Benedict Arnold's hand; Arnold's writing would have been in invisible ink. In September 1780, Arnold finally met André in the woods nearby, giving him vital documents regarding the fortifications at West Point. Unfortunately for André, he ended up behind the American lines, something that Clinton had told him expressly not to do. André was arrested on September 23, 1780, trying to cross back into British territory. The documents hidden in his boot were found, and the plot was exposed. When Arnold found out that the jig was up, he fled to the HMS Vulture, which lay on the Hudson River, leaving Peggy behind at Robinson House waiting for George Washington to show up. Washington had been scheduled to have a meeting with Arnold that morning. Peggy put on a tour-de-force performance, becoming completely hysterical, almost mad. The performance not only convinced Washington and his aide Alexander Hamilton that Peggy was completely innocent but also gave Arnold enough time to escape. Peggy was sent back to her family in Philadelphia, but news of Arnold's betrayal made it too difficult for her to stay without putting her family in danger. Instead, Peggy was banished from the city of her birth and sent to New York City to join her husband. Their second son, James Robinson Arnold, was born in New York on August 28, 1781. Peggy was initially welcomed into New York society. Meanwhile, André was condemned as a spy and hanged at Tappan, New York. Now on the British side, Arnold was desperate to prove his worth, but officers were naturally suspicious of the traitor in their midst. Just as he had when he was part of the Continental Army, Arnold clashed with other officers over the right way to proceed to win the war. Ironically, if he had been listened to, things might have been different and America might still be part of the British Empire. With the war all but over, the Arnold family moved to England, where their fortunes continued to decline. Arnold was busy trying to get the British government to pay what he felt he was owed for his actions betraying his country (he had asked to be paid £10,000 if he failed in his mission to secure West Point for the British, but the government ended up paying him a little over £6,000). Peggy meanwhile devoted herself to motherhood, giving birth to five more children, of whom three survived. They moved to New Brunswick in Canada so that Arnold could pursue a business opportunity. When that failed, the family moved back to London, moving into successively smaller homes. After Arnold died in 1801, Peggy spent the last three years of her life paying off his debts. She used the pension money that she had been given by the British government and invested it wisely so that she had something to leave her children. She died in 1804 of uterine cancer and was buried with Arnold in St. Mary's Church in Battersea.
After her death, a biographer of Aaron Burr first made the claim that Peggy, like a Revolutionary War Lady Macbeth, had either manipulated or convinced Arnold to change sides. The information came from Burr's wife, Theodosia Prevost, who had been a good friend of Peggy's. Peggy had stayed with Prevost in what is now Paramus, NJ, en route to Philadelphia from West Point. Apparently Peggy couldn't take the lying anymore and confessed everything to Theodosia, telling her that "through unceasing perseverance, she had ultimately brought the general into an arrangement to surrender West Point." When the biography was published, the Shippen family disputed this version of events. They claimed that Burr had made up these allegations because Peggy had spurned the advances he made on the way to Philadelphia. However, papers were later found that showed that Peggy was paid £350 for handling secret dispatches. Still, until recently, Peggy was seen as the innocent wife of a traitor. One reason is, of course, the idea that women are naturally less treacherous than men. Peggy was not the only woman who aided and abetted the British during the American Revolution, but very few women were caught, and the ones who were caught were at most reprimanded. While male spies such as Nathan Hale and André were executed, not a single female spy met the same fate. Peggy Shippen Arnold was a survivor, a testament to her ancestors who crossed the ocean to the New World. Her life was more difficult than easy after her marriage, but she made it work and never complained.
<urn:uuid:e6cca596-a9d1-4ad3-8d03-f682db156c84>
{ "dump": "CC-MAIN-2015-32", "url": "http://scandalouswoman.blogspot.com/2012/11/treacherous-beauty-life-of-peggy-shppen.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986806.32/warc/CC-MAIN-20150728002306-00265-ip-10-236-191-2.ec2.internal.warc.gz", "language": "en", "language_score": 0.9904714226722717, "token_count": 1754, "score": 2.59375, "int_score": 3 }
Understanding the Six Day War ❶ TRACKING THE TRENDS OF THE PALESTINIAN CAUSE SINCE 1967: LOOKING BACK Al-Shabaka: The Palestinian Policy Network Nadia Hijab, Mouin Rabbani June 6, 2017 On the eve of June 5, 1967, the Palestinians were dispersed among Israel, the Jordanian-ruled West Bank (including East Jerusalem), the Gaza Strip administered by Egypt, and refugee communities in Jordan, Syria, Lebanon, and beyond. Their aspirations for salvation and self-determination were pinned to Arab leaders' pledges to "liberate Palestine" . . . . ___The Six-Day War, which resulted in Israel's occupation of the Palestinian West Bank, East Jerusalem, the Gaza Strip, the Syrian Golan Heights, and the Egyptian Sinai Peninsula, brought dramatic changes to the geography of the conflict. It also produced a sea change in the Palestinian body politic. In a sharp break with previous decades, Palestinians became the masters of their own destiny rather than spectators to regional and international decisions affecting their lives and determining their fate. MORE . . . ❷ SIX-DAY WAR – 50 YEARS LATER 1A – WAMU 88.5 Joshua Johnson, Host Jun 05 2017 If the ongoing conflict in the Middle East confuses you, then the Six Day War 50 years ago is a good place to start to gain an understanding. During this conflict, Israel came to occupy East Jerusalem, the West Bank and the Gaza Strip, defeating the armed forces of Egypt, Jordan and Syria. ___Why is the Six Day War so important and why does it still impact relations in the region today? AUDIO . . . ❸ CHALLENGES TO INTERNATIONAL HUMANITARIAN LAW: ISRAEL'S OCCUPATION POLICY International Review of the Red Cross, vol. 94, no. 888, Dec. 2012, pp. 1503-1510. [. . . .] without respecting the basic tenets of international humanitarian law (IHL) in these testing times, it is most unlikely that the various communities will find their way toward reconciliation or be prepared to share the burden of a just peace after decades of conflict. Considering that the customary core of that law is older than the state-based system itself, the specific nature and extraordinary significance of IHL in today's armed conflicts provide a legitimacy beyond the current international system. Far from being outdated, humanitarian law is very much a contemporary and future-oriented body of law. ___ While respect for IHL is a crucial element of the protection of victims of armed conflict, and ultimately of fostering stability in such contexts, a critical analysis of the policies underpinning the status quo in conflict-affected states is also indispensable. ___Turning specifically to the situation in Israel and the Occupied Palestinian Territory, the particular challenges facing humanitarian action there cannot be tackled without an honest look at certain Israeli policies that have become key features of the occupation. ___Israel has exercised 'actual authority' over the West Bank and the Gaza Strip for almost half a century, making its presence in these areas one of the longest sustained military occupations in modern history. . . . ❹ Opinion/Analysis: WHAT IS ANTISEMITISM? June 4, 2002 [. . . .] Israel is building a racial state, not a religious one. Like my parents, I have always been an atheist. I am entitled by the biology of my birth to Israeli citizenship; you, perhaps, are the most fervent believer in Judaism, but are not. Palestinians are being squeezed and killed for me, not for you. They are to be forced into Jordan, to perish in a civil war.
So no, shooting Palestinian civilians is not like shooting Vietnamese or Chechen civilians. The Palestinians aren't 'collateral damage' in a war against well-armed communist or separatist forces. They are being shot because Israel thinks all Palestinians should vanish or die, so people with one Jewish grandparent can build subdivisions on the rubble of their homes. This is not the bloody mistake of a blundering superpower but an emerging evil, the deliberate strategy of a state conceived in and dedicated to an increasingly vicious ethnic nationalism. MORE . . .
"AFTER THE JUNE AGGRESSION," BY TAWFIQ ZAYYAD
What did you hide
You shed my blood and dimmed the light of my eyes
You silenced my pen and usurped the right of peaceful men who did not sin
What did you hide
you rent my flag and opened wounds in my skin
You stabbed my dreams
What did you hide?
We're deeper than the sea and taller than the stars
Our breath is long
longer than space
Which mother, I wonder
bequeathed you half the Canal
Which mother bequeathed you the Jordan Bank
the sand, petroleum, and the Heights
He who forcibly takes a right must guard his own
When the balance shifts
<urn:uuid:f9f73b4c-b261-4719-8a4a-e31ac041962e>
{ "dump": "CC-MAIN-2021-17", "url": "https://palestineinsight.net/2017/06/06/usurped-the-right-of-peaceful-men-who-did-not-sin-tawfiq-zayyad/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039544239.84/warc/CC-MAIN-20210421130234-20210421160234-00139.warc.gz", "language": "en", "language_score": 0.9125176668167114, "token_count": 1085, "score": 2.921875, "int_score": 3 }
The music industry is huge, but also fickle. In fact, when young people try to enroll in a music business degree, their parents often worry about whether they will ever be able to make a living in this field. There are two realities in the world of music: - Yes, there are jobs out there for people, particularly those with an education. - Yes, some positions can be obtained solely through luck, particularly the performing positions. So, if your child is thinking about enrolling in a music business degree and you want him or her to be realistic about not being the next Justin Bieber or Katy Perry, what other career options exist for them? Careers for People in Music Business - Live sound engineers ensure all live performances sound perfect. - Recording engineers work in studios and are responsible for tracking. This means that they record each individual element of a song, before someone else puts them all together. - Mixing engineers are the ones who come in and take what the recording engineer has done, adding and mixing the elements together to produce a beautiful-sounding song. - Mastering engineers are the final step in the overall recording process. They ensure that an album works properly and has a holistic feel to it. They adjust one song's sound levels to make sure that it sounds better with the next song, for instance, creating a harmonious whole. - Pro Tools operators are specialized engineers who use Pro Tools to create music. Those who want to work with this have to become Certified Pro Tools Operators. - Interns: most good music business schools require an internship, giving students the opportunity to find out whether the industry is right for them and, at the same time, to build up professional contacts. - Teachers work with children and young people, including the very young, and teach them an instrument, including their voice. This can be through private lessons or in schools, colleges, or community groups, for instance. - Musical managers are responsible for ensuring an artist or a band has a good career, with representation by the right agents – and the right lawyers. - Booking agents ensure artists have a chance to perform at different venues and gigs. Did you know that there are states in which an artist's manager is not allowed to book a gig, as they would be breaking the law? As you can see, there are plenty of career options in the field of music, and the above nine are only scratching the surface. If your child wants to enroll in a music business degree – let them. They are unlikely to have wild ideas that a four-year bachelor's degree will turn them into Take That and, if they do, they will have a rude awakening very quickly. Instead, they will spend four years receiving a high-quality education with important transferable skills and a huge array of job opportunities at the end of it.
<urn:uuid:f0b16a35-6bd9-48a8-a97d-e390863bafe6>
{ "dump": "CC-MAIN-2017-51", "url": "http://thegobblersknob.com/want-a-slice-of-the-music-industry-pie-here-are-some-ways-to-do-it/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948585297.58/warc/CC-MAIN-20171216065121-20171216091121-00644.warc.gz", "language": "en", "language_score": 0.9694851040840149, "token_count": 594, "score": 2.609375, "int_score": 3 }
My parents lived through the Great Depression of the 1930s and were profoundly affected by it. They taught us to work hard to earn a living, live within our means, save for tomorrow, share and not be greedy, and help our neighbours because one day we might need their help. Those homilies and teachings seem quaint in today's world of credit cards, hyper-consumption and massive debt. Society has undergone huge changes since the Second World War. Our lives have been transformed by jet travel, oral contraceptives, plastics, satellites, television, cellphones, computers and digital technology. We seem endlessly adaptable as we adjust to the impacts of these new technologies, products and ideas. We only become aware of how dependent on them we are when they malfunction (work comes to a standstill when the network goes down) or don't exist (when we visit a "developing country"). Most of the time, we can't even imagine a way of living beyond being endlessly occupied with making money to get more stuff to make our lives "easier". But some people have had the benefit of directly comparing a simpler way with the accelerated societies we've created. In the mid-20th century, the tiny Kingdom of Bhutan, hidden deep in the Himalayas between China and India, emerged from three hundred years of isolation. In 1961, the third king of Bhutan started sending students to schools in India. From there, some went on to Oxford, Cambridge, Harvard and other universities. The first of their nation to encounter Western society after three centuries of separation, those young people clearly saw the contrast in values. Upon returning to Bhutan, they expressed shock that, in the West, "development" and "progress" were measured in terms of money and material possessions. At a 1972 international conference in India, a reporter asked Bhutan's king about his country's gross national product — a measure of economic activity. His response was semi-facetious: he said Bhutan's priority was not the GNP but GNH - gross national happiness. Bhutan's government has since taken the concept of GNH seriously and galvanized thinking around the world with the notion that the economy should serve people, not the other way around. In 2004, Crown Prince Jigme Khesar Namgyel Wangchuck, who became king in late 2006, said, "There cannot be enduring peace, prosperity, equality and brotherhood in this world if our aims are so separate and divergent — if we do not accept that in the end we are people, all alike, sharing the earth among ourselves and also with other sentient beings." In July 2011, Bhutan introduced the only resolution it has ever presented at the United Nations. Resolution 65/309 was called "Happiness: towards a holistic approach to development." The country's position was "that the pursuit of happiness is a fundamental human goal" and "that the gross domestic product...does not adequately reflect the happiness and well-being of people." The General Assembly passed the resolution unanimously. It was "intended as a landmark step towards adoption of a new global sustainability-based economic paradigm for human happiness and well-being of all life forms to replace the current dysfunctional system that is based on the unsustainable premise of limitless growth on a finite planet." That empowered Bhutan to convene a high-level meeting. I was delighted when its leaders asked me to serve on a working group charged with defining happiness and well-being, and developing ways to measure these states and strategies.
Prime Minister Jigmi Thinley even cited the David Suzuki Foundation's 'Declaration of Interdependence' as an inspiration for the proposal. The Bhutanese understand that well-being and happiness depend on a healthy environment. They have pledged to protect 60 per cent of forest cover in their country, are already carbon-neutral (they generate electricity from hydro) and have vowed to make their entire agriculture sector organic. They have snow leopards, elephants, rhinos, tigers and valleys of tree-sized rhododendrons — and know their happiness depends on protecting them. The people of this tiny nation see that money and hyper-consumption aren't what contribute to happiness and well-being. I'm proud to be part of the important initiative they've embarked upon, and look forward to the work leading up to a presentation to the UN by 2015.
<urn:uuid:cea7ae74-8630-42d6-9cb6-58829f862bce>
{ "dump": "CC-MAIN-2016-40", "url": "http://davidsuzuki.org/blogs/science-matters/2013/05/tiny-bhutan-redefines-progress/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661555.40/warc/CC-MAIN-20160924173741-00290-ip-10-143-35-109.ec2.internal.warc.gz", "language": "en", "language_score": 0.9638805985450745, "token_count": 899, "score": 2.578125, "int_score": 3 }
Ambitions to achieve universal education and improve teaching quality in the world’s poorest countries will be jeopardised, unless the $22bn (€20.7bn) funding needed every year is found, a UN agency has warned. Aid to low-income countries for education must be increased five-fold to meet the Sustainable Development Goals (SDGs) being discussed by UN member states, according to a report published on Wednesday by the UN Educational, Scientific and Cultural Organisation (Unesco). It urged donors to ramp up their funding. Donors will need to spend $10.6bn a year on educational programmes in low-income countries in order to progress towards the SDG targets. Middle-income countries will need $11.8bn, according to Unesco. These figures do not include previous aid flows, which have in recent years totalled $4.4bn. Governments cover many educational costs themselves, but ambitious targets will require more external help than is currently on offer, Unesco said. Educational aid to low-income countries amounts to about $2bn a year, and aid to middle-income countries costs a similar amount, Unesco said. “This number would have to increase five-fold to enable these countries to meet post-2015 education targets,” it added. The report projected that the annual cost of working toward educational goals in low- and lower middle-income countries will more than double from $100bn in 2012 to $239bn annually between 2015 and 2030. Programmes to improve educational systems in low-income countries will cost $36.3bn, while lower middle-income countries will need $202.9bn, according to Unesco. Low-income countries are defined by the World Bank as states with a gross national income (GNI) per capita of $1,035 or less. Lower middle-income countries have a GNI per capita of between $1,036 and $4,085. The sustainable development goals (SDGs), which are due to be agreed later this year and come into force in 2016, seek to send all children to school for a minimum of ten years and call for quality early childhood development programmes. They aim to end dropout rates in primary and lower-secondary schools and improve the quality of learning by recruiting more teachers and driving class sizes down. “Failure to address the current $22bn funding gap for global education will jeopardise the new SDGs, potentially setting progress back over 30 years,” Unesco said. “Without a doubt, substantial new investment is needed if the world is to achieve the key education targets of the post-2015 sustainable development agenda.” The average yearly cost of sending a primary school student to school in a low-income country will rise from $65 in 2012 to about $200 by 2030, the study said. The cost of recruiting more teachers and raising their salaries, providing better learning materials and building new classrooms accounts for this rise, it said. But 121 million children of primary and lower-secondary school age are still not enrolled in educational programmes, Unesco said, underscoring the scale of the challenge ahead. Nigeria, Pakistan and Sudan have the largest number of children not in school. The educational targets in the SDGs are an improvement from the Millennium Development Goals, said Manos Antoninis, a policy analyst at Unesco and author of the report. “We are happy because essentially the new SDG […] sees education as a whole. 
It doesn’t limit itself to primary education, which I think is essential.” Unesco warned that poor data meant that 49% of public expenditure data was not available between 2000 and 2013, and that no data was available for Nigeria. Costs associated with getting students into tertiary education and vocational training were not included in the report, it said. Gordon Brown, the UN’s special envoy for global education, has called for more funding to boost education networks in low-income countries. The Sustainable Development Solutions Network, which mobilises experts to offer support and advice on achieving sustainable development, has also pushed for more funding. The report called for renewed attention to the financing of educational development targets ahead of the UN’s international conference on financing for development, which will be held in the Ethiopian capital, Addis Ababa, on 13 July.
<urn:uuid:6dc9457d-f38a-4b04-8518-469da421a474>
{ "dump": "CC-MAIN-2022-21", "url": "https://www.euractiv.com/section/development-policy/news/universal-education-will-cost-20-7-billion-a-year-says-unesco/?replytocom=302496", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662644142.66/warc/CC-MAIN-20220529103854-20220529133854-00167.warc.gz", "language": "en", "language_score": 0.9553591012954712, "token_count": 903, "score": 3.078125, "int_score": 3 }
Relying on tools derived from genomics, the study of organisms' genetic make-up, the project is designed to increase the efficiency of investments and contribute to a more sustainable use of Canada's forest resources for bioenergy. The $7.8 million project is to be carried out by universities and research centres across western Canada and is a response to the current mountain pine beetle epidemic, Pulp and Paper Canada reports. "We are currently faced with millions of hectares of dead trees, and have a surplus of potential bioenergy feedstock, but this does not guarantee a supply for the future. The question is: what are we going to replant with?" Dr. Joerg Bohlmann from the University of British Columbia told the magazine. "This is where genomic tools can help us be more strategic in terms of how we plan feedstock development in our forests -- taking into account a holistic approach: biodiversity of our forests, climate change and pest prevalence -- to name a few." The project seeks to gather genetic information on pine trees and bark beetles, and then apply techniques from genomics and risk modeling in what Genome BC's president and CEO Dr. Alan Winter hopes will "further Canada's international leadership in forest health genomics". The project is expected to wrap up in late 2012, with applications expected to be available within five years of the project's completion.
<urn:uuid:a71a74a9-2d59-4322-9d19-dfbf6051632a>
{ "dump": "CC-MAIN-2018-17", "url": "http://worldbioenergy.org/news/276/47/Canadian-forestry-industry-employs-genomics-to-track-bioenergy-feedstock-growth", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945459.17/warc/CC-MAIN-20180421223015-20180422003015-00455.warc.gz", "language": "en", "language_score": 0.91297447681427, "token_count": 311, "score": 3.09375, "int_score": 3 }
What is OK short for? Who will get the iron throne? Was Hitler an alien? What is a hiccup? Does chewing gum really stay inside you for years? Was Sherlock Holmes real? Ranging from the sluggish, moronic and absurd to the rational, logical and most intellectual questions, Google has been asked all kinds of queries. But probably the most perplexing question has never been dug up from the folds of our thoughts. What is there in the space between the nucleus and the electrons? We all know that everything is made up of matter and that matter is made up of atoms, which, in turn, are made up of protons, neutrons and electrons. But what is there in between the revolving electrons and their oppositely charged counterparts? Well, here are some of the many possibilities that might find their place in this underrated space. 1.) Nothing:- Yes, nothing. Absolutely nothing at all. We know that matter is made up of atoms and that protons, neutrons and electrons are the fundamental particles of nature. It means there is nothing in between them. There is nothing; just empty space. Vacuum. And if we consider this theory to be correct, it means that almost 99.99% of matter is made up of vacuum. 2.) Phantom particles:- Phantom particles, for most of us, are unheard of. They are quite simple things — particle-antiparticle pairs. There are many physicists who believe the vacuum is not simply empty. A vacuum still has 'vacuum energy', which is a field of its own. This energy is borrowed by particle-antiparticle pairs to come into existence, which is followed by complete annihilation. That is why, under examination, nothing is observed when the vacuum is considered. This is a theory with much potential, and it is supported by Sheldon Cooper himself (the string theory). But string theory itself has not been proven yet, so who knows! 3.) Electron clouds and energy fields:- The structure of an atom is not exactly like the planets revolving around the sun. The electrons have their own spins and wavefunctions. So, if we look at an atom from the quantum physics point of view, it would be something like this. There is no certain path for an electron, only probability regions where the electron might be, which are called the electron cloud. Also, the electrons and the nucleus are constantly interacting by exchanging photons or heavy gauge bosons. Thus, it may be assumed that the space between the electrons and the nucleus is filled with these force-carrying quanta. If you like this article and find it useful, then do share it and like us on social media. Stay tuned for much more content like this!
<urn:uuid:2ada71cd-91a9-46c1-8ccd-fd2b9cd74a0e>
{ "dump": "CC-MAIN-2018-30", "url": "http://techbotinc.com/one-question-nobody-asked/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592875.98/warc/CC-MAIN-20180722002753-20180722022753-00607.warc.gz", "language": "en", "language_score": 0.9643712043762207, "token_count": 553, "score": 2.984375, "int_score": 3 }
Originally posted on the IEAM blog on 25 July 2017 Welcome to the 2nd post in our series of updates from the SETAC Europe Annual Meeting held in Brussels, Belgium from 7-11 May 2017. After this post we will have two more updates that will be online in the next couple weeks. Enjoy! Are pesticides hurting pollinators? The widespread loss of honeybee populations in Europe and the reduced numbers of wild bees in other countries sparked concern among scientists, policymakers, and farmers all across the world. Recent research conducted on historical field data found a potential connection between the use of certain insecticides and changes in wild bee populations. This was especially true for species that are known to visit flowering crops like oil seed rape. While scientists have been looking in detail at how pesticides might be harmful to bees, there are still many questions on how to find the balance between protecting crops and ensuring the protection of bees and other pollinators. Managing pesticide usage while mitigating risks to wildlife populations continues to challenge scientists and policymakers. Risk assessment is the primary tool that scientists use to address this challenge. A risk assessment is an evidence-based process that determines 1) how much of a toxic chemical can be found in a specific environment (the soil, water, or air) and how much an animal or person can come in contact with that chemical (called 'exposure'), 2) how toxic the chemical is to an animal or person (hazard), and 3) the quantitative relationship between the two (risk). The three answers are used to calculate the risk a chemical poses in the environment. In conducting bee and pollinator risk assessments, scientists are focused on logistical problems such as experimental setup, how much of a chemical a given pollinator will come in contact with, and determining the total toxicity of all of the pesticides currently in use. At the session "New developments in ecotoxicology for the risk assessment of single and multiple stressors in insect pollinators: From the laboratory to the real world" held at the SETAC Brussels meeting, scientists highlighted new findings that can help policy makers choose the best course of action to ensure that pollinators are protected when pesticides are used. New findings on the impacts of pesticides on pollinators Are all pollinators affected by pesticides in the same way? To test whether different bee species respond to pesticides in the same way, David Spurgeon from the Centre for Ecology and Hydrology exposed three bee species to several commercial pesticides and compared their responses. He exposed the European honeybee (Apis mellifera), the buff-tailed bumblebee (Bombus terrestris), and the red mason bee (Osmia bicornis) to pesticides through their food and compared survival rates. Spurgeon and his group found that pesticide toxicity increased over time in all three species. This has implications for how scientists conduct regulatory toxicity tests on bees in the lab, and Spurgeon commented that scientists cannot rely on a single time point when trying to determine the overall risk from chemical exposure. This is especially relevant, he said, if the bees come in contact with the pesticide on a frequent and long-term basis.
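To make the exposure-hazard-risk relationship described earlier in this post concrete, here is a minimal, hypothetical sketch of a screening-level calculation. It is not taken from any of the studies discussed here: the function name, the trigger value, and all numbers are invented for illustration, and real regulatory schemes use more elaborate exposure models and safety factors.

```python
# Screening-level "hazard quotient" sketch: risk is often summarized as the ratio of
# an estimated exposure to a toxicity endpoint. All values below are invented.

def hazard_quotient(estimated_dose: float, toxicity_endpoint: float) -> float:
    """Ratio of an estimated dose to a toxicity endpoint (e.g., an acute oral LD50)."""
    if toxicity_endpoint <= 0:
        raise ValueError("toxicity endpoint must be positive")
    return estimated_dose / toxicity_endpoint

# Hypothetical inputs, in micrograms of pesticide per bee per day
estimated_dose = 0.004    # exposure estimated from residues in nectar and pollen
acute_oral_ld50 = 0.005   # hazard: dose lethal to 50% of test bees
screening_trigger = 0.4   # example trigger ratio; real schemes differ

hq = hazard_quotient(estimated_dose, acute_oral_ld50)
if hq >= screening_trigger:
    print(f"HQ = {hq:.2f}: above the trigger, refine the assessment")
else:
    print(f"HQ = {hq:.2f}: below the trigger, low concern at this screening step")
```

The point of the sketch is only to show how the three pieces of a risk assessment (exposure, hazard, and their ratio) fit together; the time-dependent toxicity reported by Spurgeon is one reason a single fixed endpoint can understate risk.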
Because the European honeybee is the main test species for pesticide risk assessments in Europe, scientists are concerned that using only one pollinator species will make it difficult to accurately determine the risk to other species that may be more or less sensitive. Uhl found that for six of the tested pesticides the European honeybee was either more sensitive than the red mason bee or had a similar sensitivity profile. This means that using the European honeybee data to complete the risk assessments for these pesticides would be protective for other pollinator species. But for one set of pesticides, the European honeybee was less sensitive, and for certain pesticides there was a 100-times difference between the two species. Any risk assessments conducted using data generated from the honeybee would not provide results that would be protective to other species for these pesticides. Uhl concluded that these species-specific differences in chemical sensitivity should motivate scientists and policymakers to find better ways to test the most relevant species. Uhl commented that these data also indicate how chemicals should be used and which species of bees may be the first ones to be affected. How do we design experiments to more accurately determine the effects of pesticides? Natalie Ruddle from Syngenta discussed the importance of experimental design for evaluating toxicity in species other than the European honeybee. Ruddle presented a field study that was designed to determine the impacts of a neonicotinoid (thiamethoxam) on the red mason bee. Since this pollinator is a solitary bee and does not have a central hive or a queen, Ruddle and her collaborators worked to develop a field method that can measure the reproductive capacity of individual females. Their field setup relied on the use of long half-dome greenhouses where plants and bees were housed together (known as a "tunnel design"). While no negative effects were seen in the red mason bee when the bees were housed with pesticide-treated oilseed rape plants, Ruddle highlighted the continued challenges of designing these types of field experiments for solitary bee species, noting the need for consensus on how to set up such experiments. Stefan Kimmel from Innovative Environmental Services, Ltd. discussed the dynamics of how bees are exposed to pesticides in an open field, also using the solitary red mason bee and pesticide-treated oilseed rape plants. Kimmel and colleagues sampled pollinators before and after pesticide application and looked at the amount of pesticides in the flower buds, pollen, nectar, the bee foragers themselves and the hive entrance. Kimmel found that there was a gradient in pesticide concentration, with higher levels in crops and lower but detectable levels found in the nesting sites. At the end of the session, presenters and audience members discussed the current and future needs for pesticides and pollinators based on EU regulations. While tests conducted in open fields are not currently accepted by regulators, due to concerns about competing crops, Kimmel commented that there are advantages to open-field techniques because the setting more accurately represents how pollinators can become exposed to pesticides and avoids the potential for any harm caused by tunnel confinement. What's next for pollinators? We still have a lot to learn about how bees and pollinators are impacted by pesticide use.
But thanks to a better scientific understanding of the risks that pesticides can have on bees in agricultural settings, scientists and policymakers are working together, now more than ever before, on empirical and creative ways to address this global problem. The latest science presented at the SETAC Brussels meeting highlights how researchers, government institutions, regulators, and agrochemical companies are working together to find the best ways to protect pollinators. SETAC will also continue to be a place for scientists to work together with the Pollinators interest group now being developed within SETAC. Originally posted on the SETAC IEAM Blog on 17 July 2017 We are finally kicking off the SETAC Brussels summary series! This post is the first of four highlights of research presented at the SETAC Europe Annual Meeting in Brussels, Belgium (7-11 May 2017). Each post features the latest research findings from SETAC scientists on emerging topics of interest. Enjoy! Why does oceans health matter? Oceans provide more for us than just the backdrop of our annual summer holidays—they provide food and medicine, help connect people and provide a means to deliver materials across the world, are a source of economic growth for coastal communities, and help moderate climate change. But our strong connection to the marine environment also comes with some drawbacks. Seafood contamination, marine pollution, biological hazards such as red tides and antimicrobial resistance (AMR), and rising sea levels are just a few of the examples of how our own health is closely linked to that of our environment. A new and rapidly expanding field of research called Oceans and Human Health (OHH) examines the connections between our health and the health of marine environments. This work includes looking at both the benefits and the risks to people and how our actions can influence the health of marine ecosystems. The theme of OHH was prevalent at this year’s SETAC Brussels meeting, where a common theme of keynote and platform presentations was the interconnections between environmental science and human health. “This area of research is very strategically important for the world, and very important for SETAC as an organization, to move into.” said Colin Janssen, one of the co-chairs of the OHH session. “SETAC researchers are now beginning to focus more on the marine environment, as we are recognizing more and more that human health is not isolated from the environment’s health.” A discussion around the theme was kicked off at the Opening Keynote Presentation by Lora Fleming (University of Exeter) and was followed by a series of platform and poster presentations. The science that connects oceans and human health Lora Fleming presented her collaborative work on red tide events in the state of Florida, in the US. Red tide is caused by microscopic algae (Karenia brevis) that release neurotoxins as aerosols, which are then transmitted by air and wind. Large outbreaks in Southwestern Florida were responsible for the deaths of many endangered Florida manatee and dolphin populations. One significant result from this work was the finding that dolphins had eaten fish with trace amounts of red tide neurotoxin. Since dolphins do not eat dead fish, and it was previously thought that fish consumption did not confer a risk to neurotoxin exposure, these findings provided new evidence of the risks of consuming fish during red tide events. 
Fleming's research team provided the evidence needed to change existing policies for red tide event management in order to better protect both marine and human health. The human health impacts of red tide events could also be seen beyond the beach where direct exposure occurs. Fleming and her team found that red tide outbreaks were linked to increases in emergency room visits and exacerbated breathing problems for people with respiratory conditions such as asthma. Fleming's work highlights the pervasive nature of red tide events, providing a better understanding of how people are affected by the health of the marine environment. Maarten de Rijcke from Ghent University later presented results of a study focused on red tide pollution in the North Sea. Rijcke and his team placed caged mussels at a coastal sluice dock and looked for algal bloom neurotoxins in the mussels. Researchers found a complex mixture of toxins present in the mussels after only 15 days, and several of the neurotoxins they found had unknown toxicities. Rijcke highlighted the importance of looking at algal bloom toxin levels in economically important species, as well as looking at toxins more broadly, instead of only focusing on neurotoxins of known toxicities. He stated that chemicals which are not regularly monitored—or for which no toxicity data exist—might still have a negative impact on human health, and that these should be assessed when possible. Antimicrobial resistance (AMR) in surfers Anne Leonard, University of Exeter, presented research on how antibiotic resistance spreads through coastal environments. Coastal areas are strongly impacted by human activities, including run-off from agricultural fields and wastewater treatment plants, and are also the places where people have the most physical contact with the ocean. Leonard collected coastal water samples and counted the numbers of Escherichia coli that could produce a protein that is able to provide resistance to several antibiotics. Leonard then conducted a survey of surfers compared to non-surfers to see if there was a connection between time spent in the ocean and the presence of drug-resistant E. coli. Volunteers provided rectal swabs and filled in questionnaires as part of the Beach Bum survey. Data from the Beach Bum study show that surfers were four times more likely to be colonized by drug-resistant E. coli when compared to people who did not surf. While there appeared to be no direct risk from the E. coli to this healthy population of surfers, Leonard commented that their presence in a healthy population means they can easily spread to more difficult-to-treat and sensitive patients. This research also shows that coastal recreational and occupational exposure to microbes might be a significant route of AMR transmission. The benefits of interacting with the oceans Fleming shifted the tone of the platform presentations to focus on the benefits gained through positive interactions with marine environments. She presented results from scientific surveys, interviews, and controlled experiments in the UK. Benefits include better health reported in people who live close to the ocean or other bodies of water, with the strongest effects seen in poorer communities. Her group also found a reported reduction in stress and an increase in physical activity after people visited coastal areas. Researchers also found that people who visited marine areas reported increased interactions among family members and had increased vitamin D levels.
Fleming and her group are now working to understand and consolidate the benefits of "blue gyms" in the UK; their findings consistently demonstrate positive benefits from interactions with healthy marine environments. What's next for the field of oceans and human health? A number of research projects across Europe and the United States will continue to conduct research on the connections between oceans and human health. These research projects are also looking to foster connections with other fields such as economics, psychology, and science communication. Learn more about these initiatives in the EU by visiting the Horizon 2020 Blue Health web page and the SeaChange ocean literacy project. "If we can show that oceans really are valuable, in an economic sense as well as a public health sense, and that healthy ecosystems are good for our own health and well-being, we can promote more pro-environmental behavior in people," said Fleming. "I hope that researchers in toxicology and public health will continue to take this topic forward as a truly transdisciplinary field. That we can value and treat our world better and own what we do to the environment in a positive way." The strategic graduate student An early career researcher faces a lot of pressures within the academic research environment. We're expected to work hard and put in long hours on experiments and data analysis, under the idea that more output (or, in our case, more data) will inevitably lead to more papers and more opportunities. Hard work is a crucial aspect of success in graduate school, but what's sometimes not as clear, especially in the early periods of our research careers, is how to work smart. Working smart means being strategic with time: set goals, plan ahead, and adapt as needed. But how exactly can we learn to become more strategic in our work? It's one thing to design a flawless plan of experiments and analyses in great detail…but what about when an unexpected result offers new insights or inspires different experiments? With an endless array of tasks, distractions, and the all-enveloping feeling that we have to be doing something at any given point in time, how can we clearly see and decide on the most valuable course of action at any given moment? I've been interested in answering this question both in a broad sense as well as for my own work-life balance. And while I've had wonderful mentors, coaches, and bosses who have taught me how to prioritize my current work while visualizing the future, I also like to find inspiration from other sources. My reading hobby typically leads me towards history books, in part as a break from reading about science but also as a source of awe-inspiring stories. It's incredible how often the lives of the great men and women of history were defined by how they made pivotal strategic decisions or how a single idea changed the entire course of history. One of my recent such reads was Robert Greene's "The 33 Strategies of War". Greene's book offers insights on how you can make your own career, or even your entire life, more strategic. The book is interwoven with stories from history highlighting the 33 concepts described in great detail in his book. If you're not a military history aficionado, there are also a number of stories about politicians, business leaders, and even artists who fought in their own sort of 'wars' as they worked to bring their goals and ideas to life. Highlights from "The 33 Strategies of War" Greene's book is not a practical 'How to make war' type of book.
It instead focuses more on the psychology of conflict and how to approach these situations with a rational and strategic mind. One of the most important facets of good strategy is to have a wide perspective on your situation. In the case of research, you should thoroughly understand the problems that your field is working to solve and the possible solutions: "To have the power that only strategy can bring, you must be able to elevate yourself above the battlefield, to focus on your long-term objectives, to craft an entire campaign, to get out of the reactive mode that so many battles in life lock you into." "The essence of strategy is not to carry out a brilliant plan that proceeds in steps: it is to put yourself in situations where you have more options than the enemy does. Instead of grasping at Option A as the single right answer, true strategy is positioning yourself to be able to do A, B, or C depending on the circumstances. This is strategic depth of thinking, as opposed to formulaic thinking." Greene also stresses the importance of acting on the plans you make while being flexible to changing situations. While strategy is the "art of commanding the entire military operation", tactics refers to the "skill of forming up the army for battle itself and dealing with the immediate needs of the battlefield." You can think of strategy as the plans you draw up for the experiments you need to complete for your dissertation and tactics as the action you take if you find out that one of those experiments was already done by another lab or is no longer needed because another paper refuted the hypothesis. And regardless of how well you plan, you must also be ready to work hard and to learn from any mistakes you make. As Greene said: "What you know must transfer into action, and action must translate into knowledge." Greene's book discusses how to use both victory and defeat to your advantage. Both victory and defeat are temporary, says Greene, because what matters is what you do with the lessons you gain from each encounter. If you win, don't become blinded by your own success but keep working hard and moving forward. If you lose, envision your loss as a temporary setback and use the lessons learned to plant the seeds of future victory. Greene also talks extensively about the way that emotions can cause you to make ill-informed decisions. This is especially true for academics and young researchers, for whom the pressures to work hard and publish can lead to mental health problems or simply to burning out from exhaustion. Many of the stories in 33 Strategies of War show how people extricated themselves from difficult situations and provide hope for the rest of us that anyone can make it through any type of challenge we might face: "Fear will make you overestimate the enemy and act too defensively. Anger and impatience will draw you into rash actions that will cut off your options." To become a strategic student, start by waging a war against yourself Greene's book goes into great detail on the many facets of war, including offensive and defensive tactics as well as methods for psychological warfare. What I found the most resonant, especially for early career researchers, were the discussions around internal warfare: 'declaring war on yourself' in order to progress and move forward. Greene also focuses on the importance of self-confidence and having a positive mindset—a topic we discussed earlier this spring.
One of the most striking personal stories in this section is about General George S. Patton, the famous WWII general who was instrumental in leading the Allies to victory. But before he was a WWII general, he found himself commanding a small contingent of tanks in France during WWI. At one point his unit ended up trapped, their retreat back to base blocked and the only way forward through enemy lines. He found himself terrified to the point of being unable to move or speak. In the end he was able to muster enough courage and stride forward, but the moment left a mark on Patton. He made a habit of putting himself into dangerous situations more regularly, to face that which he feared in order to become less afraid of the situation. This is one of my favorite stories from 33 Strategies of War. It not only shows us the human side of a great general from modern history, but it also shows us the importance of facing our fears. There are many unknowns, uncertainties, and even fears we face in our own work: what if we get something wrong, what if an experiment fails, what if we don't win that grant or fellowship. But putting ourselves into challenging situations is part of how we progress. Facing and embracing what we fear helps us move forward and lessens our anxiety surrounding failure. Another important consideration for graduate students and early career researchers is taking time away from our work. We've discussed the importance of breaks and time away from the lab to give us perspective on our work and refresh our minds, and Greene also highlights this as a strategic move: "If you are always advancing, always attacking, always responding to people emotionally, you have no time to gain perspectives." Through these opening chapters, Greene explores this internal war and how we can develop a warrior's heart and mindset. Instead of summarizing the chapter in great detail, here are a few of my favorite quotes from this part of his book: "He (the warrior) must beat off these attacks he delivers against himself, and cast out the doubts born of failure. Forget them, and remember only the lessons to be learned from defeat—they are worth more than from victory" (About your presence of mind): "You must actively resist the emotional pull of the moment, staying decisive, confident, and aggressive no matter what hits you." (On being mentally prepared for 'war'): "When a crisis does come, your mind will already be calm and prepared. Once presence of mind becomes a habit, it will never abandon you." and "The more you have lost your balance, the more you will know about how to right yourself." (About keeping an open mind): "Clearing your head of everything you thought you knew, even your most cherished ideas, will give you the mental space to be educated by your present experience." (About self-confidence): "Our greatest weakness is losing heart, doubting ourselves, becoming unnecessarily cautious. Being more careful is not what we need; that is just a screen for our fear of conflict and of making a mistake. What we need is double the resolve—an intensification of confidence." (On moving forward): "When something goes wrong, look deep into yourself—not in an emotional way, to blame yourself or indulge your feeling of guilt, but to make sure that you start your next campaign with a firmer step and greater vision." I've learned a lot from mentors and colleagues throughout my career, but I also enjoy looking for inspiration outside of my normal work environment.
Greene’s book “The 33 Strategies of War” provides great inspiration in the form of quotes, advice, and stories from history for approaching life strategically and rationally. Greene’s book is also very grounded and realistic in its approach, and he encourages us to do the same: “While others may find beauty in endless dreams, warriors find it in reality, in awareness of limits, in making the most of what they have.”

Whether we are focused on our own research projects, maneuvering through the world in search of fulfilling work, or just going through our day-to-day lives outside of work, we will encounter different types of battles. Greene’s book focuses on the importance of goals in waging this war, whether they are personal or professional:

“Do not think about either your solid goals or your wishful dreams, and do not plan out your strategy on paper. Instead, think deeply about what you have—the tools and materials you will be working with. Ground yourself not in dreams and plans but in reality: think of your own skills or advantages.”

“Think of it as finding your level—a perfect balance between what you are capable of and the task at hand. When the job you are doing is neither above nor below your talents but at your level, you are neither exhausted nor bored and depressed.”

How we approach these battles depends on our own strategy, but we can all face them with courage and strength by adopting a warrior’s approach to facing conflict. Greene’s discussion of internal warfare might be one of the book’s most relevant sections for graduate students. There are numerous quotes in this book and it’s difficult to highlight all of the great advice in just one blog post, but to close out the post, here is a final quote on the importance of having a warrior’s heart: “It is not numbers or strength that bring victory in war but whichever army goes into battle stronger in soul, their enemies generally cannot withstand them.”
<urn:uuid:7e9c5e4a-95ab-48d3-b0e5-1495b7ffc44b>
{ "dump": "CC-MAIN-2019-13", "url": "http://www.sciencewithstyle.org/blog/archives/07-2017", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202628.42/warc/CC-MAIN-20190322034516-20190322060516-00351.warc.gz", "language": "en", "language_score": 0.9560994505882263, "token_count": 5303, "score": 2.90625, "int_score": 3 }
The article “What to Look for in Biden’s $2T NSF R&D Bill” provides an overview of the main provisions of the bill and what it could mean for the future of American research and development. The bill includes a number of provisions that would increase funding for the National Science Foundation (NSF) and other federal research agencies, as well as create new programs to support basic and applied research.

1) What is the RDBirnbaum Protocol?

The RDBirnbaum Protocol is a set of guidelines that were developed by Dr. William Birnbaum and his colleagues at the National Institutes of Health (NIH). The protocol is designed to improve the quality of research data by ensuring that data is collected in a standardized and consistent manner. The protocol is also intended to improve the transparency of research data by making it available to the public. The RDBirnbaum Protocol has been used in a number of studies, including the National Health and Nutrition Examination Survey (NHANES) and the Women’s Health Initiative (WHI). The protocol has been shown to improve the quality of data collected in these studies, and to make data more transparent and accessible to the public. The RDBirnbaum Protocol is a voluntary set of guidelines that are not binding on any research institution or individual researcher. However, the NIH has strongly encouraged researchers to adopt the protocol in order to improve the quality of research data.

2) What are the benefits of the RDBirnbaum Protocol?

The RDBirnbaum Protocol is a set of guidelines that help ensure that research data is properly collected, managed, and preserved. It was developed by the Research Data Alliance (RDA), an international consortium of experts in research data management. The Protocol is designed to be used by research institutions, funders, and journals. It provides a framework for best practices in research data management, and includes a set of checklists and tools that can be used to assess compliance with the Protocol. The RDBirnbaum Protocol has a number of benefits, including:
- Ensuring that research data is properly collected and managed, which can save time and money
- Helping to ensure the validity and reliability of research findings
- Facilitating the reuse of research data, which can lead to new discoveries
- Ensuring that research data is preserved for future generations
The RDBirnbaum Protocol is a valuable tool for anyone involved in research data management. By following the Protocol, research institutions, funders, and journals can help ensure that research data is properly collected, managed, and preserved.

3) What are the key features of the RDBirnbaum Protocol?

The RDBirnbaum Protocol is a set of rules and guidelines that govern how research data is collected, managed, and disseminated. It was developed by the Research Data Alliance (RDA), an international consortium of research institutions, funding agencies, and other organizations. The Protocol is designed to promote transparency and accountability in research, and to ensure that research data is accessible to the widest possible audience. The RDBirnbaum Protocol has three key features:
1. Data sharing: The Protocol requires that research data be made available to the public through open-access repositories.
2. Data management: The Protocol establishes guidelines for managing research data, including metadata standards and best practices for data curation.
3. Data dissemination: The Protocol sets forth rules for how research data can be used and reused, including attribution and copyright.
The RDBirnbaum Protocol is a voluntary set of guidelines, and it is not legally binding. However, many research institutions and funding agencies have adopted the Protocol as a way to promote good data management practices.

4) How does the RDBirnbaum Protocol work?

The RDBirnbaum Protocol is a set of rules that govern how data is stored in an RDBMS. It defines how data is organized into tables and how relationships between those tables are defined. The RDBirnbaum Protocol is a critical part of any RDBMS, as it ensures that data is stored accurately and consistently.

5) What are the downsides of the RDBirnbaum Protocol?

There are a few potential downsides to the RDBirnbaum Protocol that should be considered before using it. First, the Protocol can be time-consuming, as it requires multiple steps and can take up to two weeks to complete. Second, the Protocol can be expensive, as it requires the use of specialized equipment and supplies. Finally, the Protocol can be difficult to follow, as it requires specific instructions and careful attention to detail.
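The claim in section 4, that an RDBMS organizes data into tables and defines relationships between them, is easiest to see with a small example. The sketch below is a generic illustration using Python's built-in sqlite3 module; the table names and columns are invented for illustration and are not taken from any published specification of the protocol.

```python
import sqlite3

# In-memory database; a real deployment would use a file or a server-based RDBMS.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# One table per entity; the foreign key declares the relationship between them.
conn.executescript("""
CREATE TABLE datasets (
    dataset_id  INTEGER PRIMARY KEY,
    title       TEXT NOT NULL,
    repository  TEXT              -- e.g. an open-access repository URL
);
CREATE TABLE observations (
    observation_id INTEGER PRIMARY KEY,
    dataset_id     INTEGER NOT NULL REFERENCES datasets(dataset_id),
    collected_on   TEXT,
    value          REAL
);
""")

conn.execute("INSERT INTO datasets (dataset_id, title, repository) VALUES (1, 'Example survey', NULL)")
conn.execute("INSERT INTO observations (dataset_id, collected_on, value) VALUES (1, '2023-01-15', 42.0)")

# A join follows the declared relationship to combine the two tables.
for row in conn.execute("""
    SELECT d.title, o.collected_on, o.value
    FROM observations AS o
    JOIN datasets AS d ON d.dataset_id = o.dataset_id
"""):
    print(row)  # ('Example survey', '2023-01-15', 42.0)
```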
<urn:uuid:6b9e98a9-8d38-4574-a59a-b42304b0ccf1>
{ "dump": "CC-MAIN-2023-23", "url": "https://thebodynarratives.com/what-to-look-for-in-biden-2t-nsf-rdbirnbaumprotocol/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224650264.9/warc/CC-MAIN-20230604193207-20230604223207-00562.warc.gz", "language": "en", "language_score": 0.9320857524871826, "token_count": 1015, "score": 2.515625, "int_score": 3 }
At Bucklesham Primary School, teachers provide a wide range of contexts for spoken language throughout the school day. Teachers and other adults in school model speaking clearly. This includes clear diction, reasoned argument, using imaginative and challenging language and use of Standard English. Listening is modelled, as is the appropriate use of non-verbal communication, respecting the views of others. Teachers are also sensitive in encouraging the participation of retiring or reticent children. Spoken Language outcomes are planned for in all areas of the curriculum. Roles are shared amongst pupils: sometimes a pupil will be the questioner, presenter, etc. Learning takes place in a variety of situations and group settings. For example, these could include reading aloud as an individual, working collaboratively on an investigation, reporting findings as a newscaster, interviewing people as part of a research project, acting as a guide for a visitor to school or responding to a text in shared or guided reading. Spoken Language will be a focus across the curriculum and across the school day in a variety of settings.
Pupils will:
- Feel their ideas and opinions are valued
- Listen to verbal instructions which are clear
- Offer ideas and opinions which may differ from others
- Verbalise ideas in a variety of situations
- Ask and answer questions appropriately
- Think before they speak and plan out what they want to say
- Appreciate the opinions of others
- Speak aloud with confidence for the appropriate audience
- Communicate collaboratively
Teachers will:
- Plan for speaking and listening
- Speak clearly
- Consider oral outcomes
- Encourage discussion, debate and role play
- Value and build on pupils’ contributions
- Understand how to develop skills progressively
- Use resources effectively
- Set realistic goals
- Use different approaches
<urn:uuid:0b9d7633-e410-4201-b443-eb8816c00fce>
{ "dump": "CC-MAIN-2024-10", "url": "https://www.buckleshamprimaryschool.co.uk/page/?title=Spoken+Language&pid=43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476396.49/warc/CC-MAIN-20240303142747-20240303172747-00733.warc.gz", "language": "en", "language_score": 0.9368221759796143, "token_count": 354, "score": 4.125, "int_score": 4 }
A tale of two cities Valley of the Moon Alliance (VOTMA) Recently, VOTMA has been diving into the complex issues of water use and conservation. Homeowners, businesses, and farmers want to ensure that we and the flora and fauna of Sonoma Valley have clean, accessible water in the years to come. Seven western states are facing water shortages, and California is finally moving forward with ways to have communities look at their own water use and develop water management plans. Up until now, groundwater laws have allowed a “first-come, first served” access to aquifers. Poor management of groundwater and drought conditions can force communities to dip even further into emergency wells and permanently affect an aquifer’s ability to recharge itself. With so many states facing dwindling water resources, what happens when a community actually runs out of water due to drought coupled with growth? When a community runs out of water they have to find another source to tap. And that is exactly what Colorado Springs, Colorado, had to do. In the past, water filled the Fountain and Monument creeks that were the only sources of water for the community. For years, they knew that drought would permanently dry up their flood-prone plain, but they continued to build out the city. One City Council member dressed up as a “Growth Buster” with overalls and a spray tank and wand to try to engage community members in a discussion of smart, sustainable planning. He was voted off the city council the following year. The Colorado Springs City Council (sans the Growth Buster) looked to the Arkansas River in the plains below. The city council eventually approved an $825 million pipeline called the Southern Delivery System that will deliver water from the city of Pueblo’s reservoir, which lies in the Arkansas River basin. It took 27 years to engineer and construct. It will pump 50 million gallons a day of Arkansas River water 1,500 feet uphill from Pueblo, 50 miles away. The City of Pueblo and their reservoir will get wastewater back after treatment via Fountain Creek. This is a huge pipeline costing taxpayers millions, and of course, the EPA also weighed in. In 2009, the Colorado Springs City Council decided to suspend the city’s stormwater program, which had previously contaminated Fountain Creek. Their stormwater runoff had chemical contaminants and increasing sediment from fires washing into the creek. Pueblo brought this to the attention of the EPA via a threatened lawsuit. Colorado Springs will now spend $460 million over 20 years to complete stormwater cleanup projects which include ponds to filter water and planting vegetation along drainage channels to stabilize sediment. Like most public improvements, they will rely on general fund revenues from sales taxes for the additional $460 million dollar cost. Mayor John Suthers observed that, “If we have a downturn, we may have to look at something else.” He also stated that the pipeline, “…will take care of the future water needs of Colorado Springs for up to 50 years of growth.” On June 15, 2016, the Colorado Springs mayor got the news that chemicals used to fight the previous years’ devastating petroleum fires were being found in their drinking water. These per-fluorinated chemicals are like hormones and pesticides; they do not break down and boiling water will not get rid of them. They were forced to shut down seven city wells in Colorado Springs and nine more along Fountain Creek. No one wants to see a future water bill for the Colorado Springs basin. 
This may all seem unbelievable, but it happened. Water is going to be the key resource in our future. We are already pumping from emergency wells in the Santa Rosa Plains and we continue to use pesticides for agriculture, parks, and homes. We rely on the Russian River for the majority of the county’s water, and the river is at risk. According to Will Parish in the September 2015 “Fish Out of Water” article in the Bohemian, “The vast majority of regional vineyards are irrigated. Many use water from wells, an unknown proportion of which are hydrologically connected to the river.” In the past, Russian River levels have been so low that Coho and Steelhead salmon have been killed, particularly when neighboring vineyards pulled out 50-55 gallons per minute, per acre, for frost protection of vines. Parish also points out that vineyard irrigation pumps sunk in the Russian River are not metered and may draw out unrestricted amounts of water year round. The Russian River is also the major source of water for many of our cities in Sonoma County, including Santa Rosa. Sonoma County Water Agency spokeswoman Ann Dubay stated, “We do not have a countywide breakdown of water use for residences and agriculture.” So we keep pumping. When we first heard about Colorado Springs and Pueblo’s plan to build the pipeline, my husband and I asked ourselves, “What happens to the Arkansas River in the long run?” Then we asked ourselves, “What will Colorado Springs do in 50 years?” What happens to our water in 50 years is a question we need to answer right here in Sonoma County.
<urn:uuid:ee385589-a5b8-4921-9451-75ddb49af405>
{ "dump": "CC-MAIN-2020-40", "url": "https://www.kenwoodpress.com/cs/public/lpt/a/9086", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201699.38/warc/CC-MAIN-20200921112601-20200921142601-00508.warc.gz", "language": "en", "language_score": 0.9634079337120056, "token_count": 1066, "score": 2.734375, "int_score": 3 }
Most bacteria propel themselves using a rapidly rotating filament called a flagellum. The flagella-less mutants of P. aeruginosa, however, are able to get around without one. Time-lapse video shows that they can walk upright and crawl on surfaces using pili, rigid filaments that protrude from the cell wall and are typically a few microns in length. Two motility modes were observed: walking and crawling. When walking, the cell body was oriented vertically off the surface and moved only a few microns before changing direction. Bacteria crawled along their long axis when they were oriented horizontally on the surface, and could travel about three times as far as when walking. The bacteria were able to switch between walking and crawling. Walking was also implicated in the formation of biofilms, robust communities of bacteria that cooperatively form a protective mesh. Before cell division, the bacteria permanently anchored themselves vertically, so that one daughter cell floated away while the other remained on the surface. When the bacteria do not have pili, and therefore don't walk or anchor, they form clumpy, non-uniform biofilms.
<urn:uuid:2a0506d0-284e-4595-8e1d-23672181ec37>
{ "dump": "CC-MAIN-2014-41", "url": "http://arstechnica.com/science/2010/10/bacteria-stand-up-go-for-a-stroll/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657137698.52/warc/CC-MAIN-20140914011217-00141-ip-10-234-18-248.ec2.internal.warc.gz", "language": "en", "language_score": 0.9594739675521851, "token_count": 242, "score": 4.0625, "int_score": 4 }
Cloud computing means entrusting data to information systems that are managed by external parties on remote servers "in the cloud." Webmail and online documents (such as Google Docs) are well-known examples. Cloud computing raises privacy and confidentiality concerns because the service provider necessarily has access to all the data, and could accidentally or deliberately disclose it or use it for unauthorized purposes. Conference management systems based on cloud computing represent an example of these problems within the academic research community. It is an interesting example, because it is small and specific, making it easier to explore the exact nature of the privacy problem and to think about solutions. This column describes the problem, highlights some of the possible undesirable consequences, and points out directions for addressing it.

Most academic conferences are managed using software that allows the program committee (PC) members to browse papers and contribute reviews and discussion via the Web. In one arrangement, the conference chair downloads and hosts the appropriate server software, say HotCRP or iChair. The benefits of using such software are familiar, but HotCRP and iChair require the conference chair to download and install the software, and to host the Web server. Other systems such as EasyChair and EDAS work according to the cloud computing model: instead of installing and hosting the server, the conference chair simply creates the conference account "in the cloud." In addition to the benefits described previously, this model has extra conveniences. For these reasons, EasyChair and EDAS are an immense contribution to the academic community. According to its Web page, EasyChair hosted over 3,300 conferences in 2010. Because of its optimizations for multiconferences and multitrack conferences, it is mandated for conferences and workshops that participate in the Federated Logic Conference (FLoC), a huge multiconference that attracts approximately 1,000 paper submissions.

Accidental or deliberate disclosure.

A privacy concern with cloud-computing-based conference management systems such as EDAS and EasyChair arises because the system administrators are custodians of a huge quantity of data about the submission and reviewing behavior of thousands of researchers, aggregated across multiple conferences. This data could be deliberately or accidentally disclosed, with unwelcome consequences. The data could be abused by hiring or promotions committees, funding and award committees, and more generally by researchers choosing collaborators and associates. The mere existence of the data makes the system administrators vulnerable to bribery, coercion, and/or cracking attempts. If the administrators are also researchers, the data potentially puts them in situations of conflict of interest. The problem of data privacy in general is of course well known, but cloud computing magnifies it. Conference data is an example in our backyard. When conference organizers had to install the software from scratch, there was still a risk of breach of confidentiality, but the data was just about one conference. Cloud computing solutions allow data to be aggregated across thousands of conferences over decades, presenting tremendous opportunities for abuse if the data gets into the wrong hands. The acceptance success records of individual researchers and groups could be identified over a period of years.

Beneficial data mining.
In addition to the abuses of conference review data described here, there are some uses that might be considered beneficial. The data could be used to help detect or prevent fraud or other kinds of unwanted behavior. The data could also be used to understand and improve the way conferences are administered. ACM, for example, could use the data to construct quality metrics for its conferences, enabling it to profile the kinds of authors who submit, how much "new blood" is entering the community, and how that changes over different editions of the conference. This could help identify conferences that are emerging as dominant, or others that have outlived their usefulness. The decisions about who is allowed to mine the data, and for what purposes, are difficult. Policies should be decided transparently and by consensus, rather than being left solely to the de facto data custodians.

Policies and legislation.

An obvious first step is to articulate clear policies that circumscribe the ways in which the data is used. For example, a simple policy might be that the data gathered during the administration of a conference should be used only for the management of that particular conference. Adherence to this policy would imply that the data is deleted after the conference, which is not done in the case of EasyChair (I don't know if it is done for EDAS). Other policies might allow wider uses of the data. Debate within different academic communities can be expected to yield consensus about which practices are to be allowed in a discipline, and which ones are not. For example, some communities may welcome plagiarism detection based on previously reviewed submissions, while others may consider it useless for their subject, or simply unnecessary. Another direction would be to try to find alternative custodians for the data: custodians that are not themselves also researchers participating actively in conferences. The ACM or IEEE might be considered suitable, although they contribute to decisions about publications and appointments of staff and fellows. Professional data custodians such as Google might also be considered. It may be difficult to find an ideal custodian, especially if cost factors are taken into account. In most countries, legislation exists to govern the protection of personal data. In the U.K., the Data Protection Act is based on eight principles, including the principle that personal data is obtained only for specified purposes and is not processed in a manner incompatible with those purposes, and the principle that the data is not kept longer than is necessary for those purposes. EasyChair is hosted in the U.K., but the lack of an accessible purpose statement or evidence of registration under the Act means I was unable to determine whether it complies with the legislation. The Data Protection Directive of the European Union embodies similar principles; personal data can only be processed for specified purposes and may not be processed further in a way incompatible with those purposes.

Processing encrypted data in the cloud.

Policies are a first step, but alone they are insufficient to prevent cloud service providers from abusing the data entrusted to them. Current research aims to develop technologies that can give users guarantees that the agreed policies are adhered to. The following descriptions of research directions are not exhaustive or complete.
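To make the simplest of these directions concrete (the "encrypt before upload" idea for storage-style applications, discussed further below), here is a minimal, hypothetical sketch using the widely used Python cryptography package. It is not from the original Viewpoint, and it deliberately ignores the hard part the author identifies: practical key management and sharing among PC members.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# In practice this key would have to be generated, stored and shared securely
# among the PC members -- exactly the key-management problem noted in the text.
key = Fernet.generate_key()
cipher = Fernet(key)

review = b"Paper 42: accept. Strong results, weak related-work section."

# Encrypt locally; only the ciphertext would be uploaded to the cloud service.
ciphertext = cipher.encrypt(review)
stored_server_side = ciphertext        # placeholder for an actual upload call

# Only someone holding the key can recover the plaintext later.
assert cipher.decrypt(stored_server_side) == review
print(len(ciphertext), "bytes of ciphertext visible to the service provider")
```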
Progress has been made in encryption systems that would allow users to upload encrypted data, and allow the service providers to perform computations and searches on the encrypted data without giving them the possibility of decrypting it. Although such encryption has been shown possible in principle, current techniques are very expensive in both computation and bandwidth, and show little sign of becoming practical. But the research is ongoing, and there are developments all the time. Hardware-based security initiatives such as the Trusted Platform Module and Intel's Trusted Execution Technology are designed to allow a remote user to have confidence that data submitted to a platform is processed according to an agreed policy. These technologies could be leveraged to give privacy guarantees in cloud computing in general, and conference management software in particular. However, significant research will be needed before a usable system could be developed. Certain cloud computing applications may be primarily storage applications, and might not require a great deal of processing to be performed on the server side. In that case, encrypting the data before sending it to the cloud may be realistic. It would require keys to be managed and shared among users in a practical and efficient way, and the necessary computations to be done in a browser plug-in. It is worthwhile to investigate whether this arrangement could work for conference management software.

Many people with whom I have discussed these issues have argued that the professional honor of data custodians (and PC chairs and PC members) is sufficient to guard against the threats I have described. Indeed, adherence by professionals to ethical behavior is essential to ensure all kinds of confidentiality. In practice, system administrators are able to read all the organization's email, and medical staff can browse celebrity health records; we trust our colleagues' sense of honor to ensure these bad things don't happen. But my standpoint is that we should still try to minimize the extent to which we rely on people's sense of good behavior. We are just at the beginning of the digital era, and many of the solutions we currently accept won't be considered adequate in the long term.

The issues raised about cloud-computing-based conference management systems are replicated in numerous other domains, across all sectors of industry and academia. The problem of accumulations of data on servers is very difficult to solve in any generality. The particular instance considered here is interesting because it may be small enough to be solvable, and it is also within the control of the academic community that will directly benefit, or suffer, according to the solution we adopt.

Many thanks to the Communications reviewers for interesting and constructive comments. I also benefited from discussions with many colleagues at Birmingham, and also in the wider academic research community. Thanks to Henning Schulzrinne, administrator of EDAS, for comments and clarifications. Drafts of this Viewpoint were sent to Andrei Voronkov, the EasyChair administrator, but he did not respond.
<urn:uuid:4dd47948-1c48-481a-8e98-9301ef211a9c>
{ "dump": "CC-MAIN-2015-18", "url": "http://cacm.acm.org/magazines/2011/1/103200-cloud-computing-privacy-concerns-on-our-doorstep/fulltext", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246650195.9/warc/CC-MAIN-20150417045730-00126-ip-10-235-10-82.ec2.internal.warc.gz", "language": "en", "language_score": 0.945670485496521, "token_count": 1912, "score": 2.84375, "int_score": 3 }
In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published, proposing what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human. The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human. However, Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise.

ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of clue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY'). Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".

ALICE – which stands for Artificial Linguistic Internet Computer Entity, an acronym that could have been lifted straight out of an episode of The X-Files – was developed and launched by creator Dr. Richard Wallace way back in the dark days of the early Internet in 1995. (The website's aesthetic remains virtually unchanged since that time, a powerful reminder of how far web design has come.)

The term chat bot (or sometimes just bot) can also be used in the sense of an automatic chat responder program. The article How to Create a Chat Bot for Yahoo Messenger, written by Chelsea Hoffman, explains how quickly and easily a chat bot responder can be created with unique and accurate responses to general phrases, words and questions used in Yahoo Messenger.

Previous generations of chatbots were present on company websites, e.g. Ask Jenn from Alaska Airlines, which debuted in 2008, or Expedia's virtual customer service agent, which launched in 2011. The newer generation of chatbots includes IBM Watson-powered "Rocky", introduced in February 2017 by the New York City-based e-commerce company Rare Carat to provide information to prospective diamond buyers.

The first formal instantiation of a Turing test for machine intelligence is the Loebner Prize, which has been organized since 1991. In a typical setup, there are three areas: the computer area with typically 3-5 computers, each running a stand-alone version (i.e. not connected to the internet) of a participating chatbot; an area for the human judges, typically four persons; and another area for the 'confederates', typically 3-5 voluntary humans, the number depending on the number of chatbot participants. The human judges, working on their own terminals separated from one another, engage in a conversation with a human or a computer through the terminal, not knowing whether they are connected to a computer or a human. Then, they simply start to interact.
The organizing committee requires that conversations are restricted to a single topic. The task for the human judges is to recognize chatbot responses and distinguish them from conversations with humans. If the judges cannot reliably distinguish the chatbot from the human, the chatbot is said to have passed the test.

Earlier, I made a rather lazy joke with a reference to the Terminator movie franchise, in which an artificial intelligence system known as Skynet becomes self-aware and identifies the human race as the greatest threat to its own survival, triggering a global nuclear war by preemptively launching the missiles under its command at cities around the world. (If by some miracle you haven't seen any of the Terminator movies, the first two are excellent but I'd strongly advise steering clear of later entries in the franchise.)

Are the travel bots or weather bots that answer a query at the click of a button artificially intelligent? Definitely, but they are just not far along the conversation axis. A bot can be a wonderfully designed conversational interface that is smooth and easy to use, or it can use natural language processing and understanding to cope with sentences that are structured in unexpected ways.

Now it is easier than ever to make a bot from scratch, and chatbot development platforms like WotNot, Chatfuel and Gupshup make it fairly simple to build a chatbot without a technical background, putting chatbots within easy reach of anyone who would like one for their business. For a deeper understanding of intelligent chatbots, read our blog.

"There is hope that consumers will be keen on experimenting with bots to make things happen for them. It used to be like that in the mobile app world 4+ years ago. When somebody told you back then… 'I have built an app for X'… You most likely would give it a try. Now, nobody does this. It is probably too late to build an app company as an indie developer. But with bots… consumers' attention spans are hopefully going to be wide open/receptive again!" — Niko Bonatsos, Managing Director at General Catalyst
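Looking back at ELIZA's clue-word trick described above, here is a tiny, self-contained Python sketch of the idea. It is not Weizenbaum's original program, just an illustration of how a few keyword rules and canned responses can create the illusion of understanding.

```python
import random

# A handful of ELIZA-style rules: if a clue word appears, pick a canned reply.
RULES = {
    "mother": ["TELL ME MORE ABOUT YOUR FAMILY."],
    "always": ["CAN YOU THINK OF A SPECIFIC EXAMPLE?"],
    "sad":    ["I AM SORRY TO HEAR YOU ARE SAD.", "WHY DO YOU THINK YOU FEEL SAD?"],
    "i feel": ["WHY DO YOU FEEL THAT WAY?"],
}

# Fallbacks keep the conversation moving when no clue word matches.
DEFAULTS = ["PLEASE GO ON.", "I SEE.", "WHAT DOES THAT SUGGEST TO YOU?"]

def respond(user_input: str) -> str:
    text = user_input.lower()
    for clue, replies in RULES.items():
        if clue in text:                 # purely superficial keyword matching
            return random.choice(replies)
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(respond("My mother never listens to me"))  # TELL ME MORE ABOUT YOUR FAMILY.
    print(respond("I feel stuck with my thesis"))    # WHY DO YOU FEEL THAT WAY?
```

Swapping in a larger rule set, plus the pronoun "reflections" Weizenbaum used (turning "my" into "your" and so on), gets surprisingly close to the behaviour that judges have found so convincing.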
<urn:uuid:2284a8ca-ba78-4442-a677-e045bcab75ca>
{ "dump": "CC-MAIN-2020-16", "url": "https://chatbots.london/dependable-chatbot-more-info-on.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371893683.94/warc/CC-MAIN-20200410075105-20200410105605-00123.warc.gz", "language": "en", "language_score": 0.9526329636573792, "token_count": 1155, "score": 3.296875, "int_score": 3 }
Air pollution is not only unhealthy but also expensive. Hans Bruyninckx of the European Environment Agency tells DW why. His agency's recent report shows staggering annual costs as a result of relatively few companies. The European Environment Agency (EEA) has recently published a report on industrial facilities and what they cost the European Union. Over a period of five years they took a closer look at 14,000 facilities in Europe - either industrial production plants or power stations - and measured their damages to the environment and health due to air pollution and greenhouse gases. The major finding: One percent of these plants - which means about 150 facilities - cause 50 percent of the overall damage. DW spoke with the EEA's executive director to talk damage control. DW: What are the most destructive facilities in Europe? Hans Bruyninckx: If you look at the "Top 30" companies in Europe, the most polluting companies, 26 of those are power facilities, and are coal fired or lignite fired, which we find in Germany and in eastern Europe primarily. And we have a couple of large industrial facilities: steel factories, or in the chemical and petrol sector. And that's the bulk of the most polluting companies. They are spread throughout Europe if you look at the "top 1 percent," but with a fairly high concentration in Germany and Eastern Europe. Are they in breach of required emission levels? We don't make any claims on whether these facilities are operating within the European legislation - that would be another analysis. We're just calculating the costs to health and the environment. What you can say is that the current rules and regulations in Europe on air pollution do not guarantee that citizens live in air that falls below the levels or the quality that is recommended by the World Health Organization. How did the report measure the damage? Over a five-year period we gathered the emissions data from these facilities and we looked through models at the estimated costs for health care and the environment. We used a number of indicators, for example premature death, hospital costs, lost workdays, damage to buildings, reduced agriculture or yields - those sort of factors. So what is this costing European taxpayers? Greenhouse gas emissions and industrial air pollution cost anywhere between 60 billion and 190 billion euros ($75 - $240 billion) a year for European taxpayers. Because the costs are there - they are not absorbed by those who cause the costs but in general by society. And this means we put financial burdens on our healthcare systems, we put financial burdens on families - because they have less healthy children, adults are less healthy - we put costs on our food system because we have lower agriculture yields …. So that is the kind of damage we're talking about. It is important to note that this is only 20 percent of the air pollution in Europe. In addition to that, we have the impacts of traffic for example, the impacts of agriculture, we have the impact of a number of other processes, that we did not study here. You looked at information dating back to 2008. What sort of trend are we seeing? We see a slight improvement in the damage from air pollution in Europe, at least from industrial air pollution and greenhouse gases. Why is that? Because, on the one hand, you have legislation that is moving sectors in the right direction, and that's putting pressure on the government and industrial facilities to perform better. 
And at the same time, we see that a number of the older installations - especially in the power sector - are being replaced by more modern and efficient installations that pollute less. So overall there is a trend in a good direction.

What more needs to be done? What are the report's recommendations?

First of all, a stronger emphasis on moving the regulation in the direction of health concerns. This could be a serious step forward, and that is exactly what we have in the [legislative] air package that is on the table in Brussels. And a second thing would be a stronger push to go to best available technologies when it comes to industrial facilities, but also to the energy sector. If you think long term of a de-carbonized European economy - a low-carbon economy - we know that, over time, we will have to move away from the highest-polluting energy production methods, and those obviously include heavy coal and lignite. So a push to move away from those would definitely be a very positive step forward.

How promising is this air package on the table in Brussels?

The current air package that is being discussed politically in Europe is trying to move the limits and the regulations in the direction of guaranteeing a healthier environment for European citizens.

Hans Bruyninckx is Executive Director of the European Environment Agency (EEA), Copenhagen. It is an agency of the European Union. Its task is to provide sound, independent information on the environment.
<urn:uuid:d44fbf04-81ff-4d84-bcfd-1dc236e3ecba>
{ "dump": "CC-MAIN-2023-23", "url": "https://www.dw.com/en/air-pollution-costing-europe-billions/a-18090125", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644506.21/warc/CC-MAIN-20230528182446-20230528212446-00564.warc.gz", "language": "en", "language_score": 0.9567961096763611, "token_count": 1019, "score": 2.984375, "int_score": 3 }
India and Pakistan have agreed to construct a new border entry point and road to allow Sikh pilgrims from India to visit a shrine in Pakistan. Sikhism was born in Punjab, a region that was divided between the two countries during partition in 1947. The Gurdwara Darbar Sahib Kartarpur is one of Sikhism’s holiest shrines. The religion’s founder, Guru Nanak, spent the last 18 years of his life there. The decision coincides with the 550th anniversary of Guru Nanak’s birth.
<urn:uuid:c0a3a81d-296d-4501-9321-7e01a6ef528f>
{ "dump": "CC-MAIN-2021-43", "url": "http://www.newswire.com.pk/2018/11/22/india-and-pakistan-to-construct-a-new-border-entry-point/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585181.6/warc/CC-MAIN-20211017175237-20211017205237-00317.warc.gz", "language": "en", "language_score": 0.9654052257537842, "token_count": 111, "score": 2.6875, "int_score": 3 }
On Jan. 4, the House of Representatives met for a second day to cast votes that would determine who would become speaker of the House. After a sixth vote, Rep. Kevin McCarthy (R-Calif.), who is the House GOP leader, was not elected to the position. McCarthy needs 218 votes in the House in order to take the seat as speaker. Google Trends data show that people online were asking if the speaker of the House has to be from the party that holds the majority. The Republican party currently has the majority.

Does the speaker of the House have to be from the party that holds the majority?

No, the speaker of the House does not have to be from the majority party. Anyone with a nomination and enough votes can be speaker.

WHAT WE FOUND

The speaker is the presiding officer of the House of Representatives and is responsible for maintaining order and managing proceedings of the House. The speaker of the House is also third in line in presidential succession, which means that should the president and vice president not be able to serve, the speaker would be the one sitting in the Oval Office. Article 1, Section 2 of the U.S. Constitution says the "House of Representatives shall choose their speaker and other officers," but is vague on who can hold the position. The position doesn't have to be held by the leader of the party. During the 2023 session, which began earlier this week, Rep. Byron Donalds (R-Fla.) and Rep. Hakeem Jeffries (D-N.Y.) were also nominated. The speaker doesn't even have to be a member of the House of Representatives. In 2013 and 2015, former Secretary of State Colin Powell was nominated, according to data from the Congressional Research Service (CRS). In 2019, Sen. Tammy Duckworth (D-Ill.) and Joe Biden, before he was president, were both nominated. Also in 2015, Sen. Rand Paul (R-Ky.) was nominated to be speaker, even though he wasn't a member of the House.

According to the U.S. Government Publishing Office (GPO), the speaker is the only House officer that "traditionally has been chosen from the sitting membership of the House." The Constitution doesn't limit the selection to members of that class, "but the practice has been followed invariably," the GPO says. According to House archives, the speaker position has always been held by a House member. The first elected speaker was in 1789. A speaker must be nominated, and then the House votes. A majority of 218 votes must be reached in order for the speaker to be elected. If no candidate wins a majority, ballots are re-cast until a speaker is chosen. The 2023 session is only the 15th time in history that multiple roll calls were necessary to vote for speaker, according to House archives. Thirteen of those times occurred before the Civil War, "when party divisions were more nebulous." The last time a speaker election required two or more votes on the floor happened in 1923.

In July 2021, Rep. Brendan Boyle (D-Pa.) introduced a bill that would require the speaker to be a member of the House. The legislation was introduced after rumors swirled that former President Donald Trump would be nominated and voted into the role. The bill, known as the MEMBERS Resolution, has not been acted on since it was introduced in the House.
<urn:uuid:baff53e2-5cf4-4afc-b338-e989d74684f2>
{ "dump": "CC-MAIN-2023-14", "url": "https://www.9news.com/article/news/verify/government-verify/speaker-of-the-house-does-not-have-to-be-from-the-majority-party/536-2c51e908-c576-44a0-b495-7423f1abb88e", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943562.70/warc/CC-MAIN-20230320211022-20230321001022-00761.warc.gz", "language": "en", "language_score": 0.9808740019798279, "token_count": 761, "score": 2.671875, "int_score": 3 }
Variable Impedance - What is it?

Humour me for a minute - I have an analogy that works here…

Imagine a train crossing a bridge that is suspended high above ground between two mountain peaks. The train is your source resistance - or in this case, the microphone. It can be low impedance (a light train) or high impedance (a heavy train). The bridge is your load impedance - or in this case, the preamplifier. The bridge can be stiff or saggy (I know, I know!). The stiffer the bridge, the easier it is for the train (passengers = signal) to cross the mountainous gap! The heavier the train, the stronger the bridge should be…

In a typical “bridging” audio connection you have a low source impedance and approximately 5-10 times that impedance in the load (a stiffer bridge); this way as much voltage signal as possible arrives at the destination. The goal is to keep the signal “above” ground. Normally we assume 150 ohms for the microphone output impedance and 1500 ohms for the preamplifier input impedance (x10). However, microphones vary their output (source) resistance with frequency, as both resistive AND reactive (inductive/capacitive) components are at play when forming an “impedance”.

Impedances & Bridging Loads - How They Work

The connection between mic and preamplifier can be seen as a typical potential divider - let’s take a standard SM57 dynamic microphone as an example. Its output impedance is stated as a nominal 310 ohms. The ASP880 has three input impedance options:
- LO - 220 ohms (this is a saggy bridge)
- MED - 1200 ohms (this is a classic 1970’s style bridge)
- HI - 2800 ohms (this is a modern, strong and well braced bridge)

Imagine the two devices connected together: a 310 ohm source driving a variable load of 220-2800 ohms. The SM57 connects to the ASP880 preamplifier and we want to get as much tone and output voltage from the mic as possible - the preamplifier then adds 40-60dB of gain to this signal for recording.

For anyone not familiar with a potential divider - this simple circuit is the building block of all electronics and allows us to figure out “voltage transfer” between two points. The simplest way to look at it is that the stiffer the load (R2), the more voltage from the source (R1) is carried to the output. With Vin = 1 volt, R1 = 310 ohms and R2 = 220, 1200 or 2800 ohms, Vout changes:
- R2 = 220 ohms, Vout = 0.41 volt
- R2 = 1200 ohms, Vout = 0.80 volt
- R2 = 2800 ohms, Vout = 0.90 volt

Of course, who cares about voltage when we’re sound engineers?! To quote Ray Charles “How’s it sound baby?” - well, we’ll get there soon… If we take into account that all dynamic microphones such as the SM57/58, SM7, D12 and MD421, or ribbon mics such as a Coles 4038 or Royer R121, have substantially varying output resistances with frequency - they use coils, transformers and/or large magnets to create inductive fields that convert acoustic movement into electrical voltage - what we get is a dynamically changing frequency response that can be heavily affected by the loading, or stiffness of the bridge! The typical output impedance curve of an SM57 varies strongly with frequency and might remind you of a small loudspeaker’s.
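The divider arithmetic above is easy to check for yourself. The short Python sketch below (not from the original article) treats the SM57 as a purely resistive 310 ohm source and works out the voltage transfer into each ASP880 input setting, plus the level change relative to the HI position; a real microphone's impedance swings with frequency, so measured figures will differ.

```python
import math

R_SOURCE = 310.0                                    # nominal SM57 output impedance (ohms)
LOADS = {"LO": 220.0, "MED": 1200.0, "HI": 2800.0}  # ASP880 input impedance settings (ohms)

def voltage_transfer(r_source: float, r_load: float, v_in: float = 1.0) -> float:
    """Potential divider: fraction of the source voltage appearing across the load."""
    return v_in * r_load / (r_source + r_load)

v_ref = voltage_transfer(R_SOURCE, LOADS["HI"])     # reference: the HI setting

for name, r_load in LOADS.items():
    v_out = voltage_transfer(R_SOURCE, r_load)
    rel_db = 20 * math.log10(v_out / v_ref)
    print(f"{name:>3}: Vout = {v_out:.2f} V  ({rel_db:+.1f} dB relative to HI)")

# Prints roughly 0.42 V, 0.79 V and 0.90 V, i.e. the ~0.41/0.80/0.90 V figures above.
# The -5 dB / -1 dB figures quoted later in the article assume a 150 ohm source,
# and real mics are partly reactive, so the dB numbers here will not match exactly.
```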
If we were to figure out a potential divider for every point on that impedance curve, we could understand the signal transfer at each frequency from the mic to the preamplifier. What we actually end up with is not just a change in signal level BUT A CHANGE IN TONE!

What To Expect & Listen To

On top of level changes (which you should be aware of if you want to do a properly level-matched A/B), we can experience the following changes in tone when varying the load that the microphone operates into:
- Speed of delivery - transient content
- Detail - in particular the “room” pickup from the off-axis parts of the mic
- Frequency response & timbre
- Overall output level (-5dB for 220 ohms, -1dB for 1200 ohms, referenced to 0dB attenuation at 2800 ohms with a 150 ohm source)

I like to display these on something I call the TRIANGLE OF TONE - essentially a system that allows you to mark up the audible changes to triangulate the apparent sonic footprint of a mic/pre combination. Any of the descriptors can be as you desire - just make sure that opposites face each other on a line…

To describe how I hear it - the ASP880 when set to HI provides detail, punch and a fast presentation with lots of low end weight. On MED the sound becomes a bit tighter and mid focused with a good balance of everything - a classic kind of presentation. Set to LO, microphones tend to take on a different personality with a more glued transient to the main body of the sound, and an overall slower presentation with some loss of space/detail - great for making a bad room sound a bit better!

Of course - each microphone reacts differently and it is definitely a case of FLIP THE SWITCH, LISTEN & DECIDE. However - some things to bear in mind:
- Moving coil and ribbon mics display the greatest tonal change
- Transformer output capacitor microphones can also provide tonal variation with variable impedance
- Solid-state output (50 ohm typical) capacitor microphones are such a well designed, light and nimble train that they are often immune to loading effects and will display a much more subtle change, if at all
- Contrary to popular belief, many ribbon microphones provide more generous low frequency performance (with less ringing) and greater sonic delivery when operated into higher impedances, so be sure to try the HI setting!
<urn:uuid:6d5b9c17-a20b-4cc2-b450-5464853d8916>
{ "dump": "CC-MAIN-2018-13", "url": "https://support.audient.com/hc/en-us/articles/202917308-Variable-Impedance-Explained", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648178.42/warc/CC-MAIN-20180323044127-20180323064127-00601.warc.gz", "language": "en", "language_score": 0.9078067541122437, "token_count": 1321, "score": 3.0625, "int_score": 3 }
Waxes separated from petroleum are defined as the waxes present naturally in various fractions of crude petroleum . Petroleum waxes are complex mixtures of hydrocarbons, amongst which are n-paraffin, branched chain paraffins and cycloparaffins in the range of C18–C70 [2, 3, 4]. The quality and quantity of waxes manufactured from crude oils depend on the crude source and the degree of refining to which it has been subjected prior to wax separation [5, 6]. Paraffin waxes constitute the major bulk of such waxes, the other two types; produced in comparative quantities; also command a good market because of their certain specific end uses . The paraffin waxes are solid hydrocarbons at room temperature. Slack wax is a refinery term for the crude paraffin wax separated from the solvent dewaxing of base stocks. Slack wax contains varying amounts of oil (ranging from 20 to 50 wt.%) and must be removed to produce hard or finished waxes [1, 7]. If the slack wax separated from residual oil fractions, the oil-bearing slack is frequently called petrolatum . Petrolatum is a general name applied to a slightly oiled crude microcrystalline wax. It is semi-solid, jelly-like materials. Petrolatum is obtained from a certain type of heavy petroleum distillates or residues. Ozokerite wax is naturally occurring mineral wax. It is also a microcrystalline wax. Ceresin is a microcrystalline wax; it is the name formerly given to the hard white wax obtained from fully refined ozokerite. Petroleum ceresin is a similar microcrystalline wax but separated from petroleum. Ceresin and petroleum ceresins appear to have the same composition, structure, physical and chemical properties . 1.1 Composition of petroleum waxes Petroleum waxes are substance, which is solid at normal temperatures. Paraffin and microcrystalline waxes in their pure form consist only solid saturated hydrocarbons. Petrolatum, in contrast to the other two waxes, contains both solid and liquid hydrocarbons. Petrolatum is semi-solid at normal temperatures and is quite soft as compared to the other two waxes. Paraffin wax is a solid and crystalline mixture of hydrocarbons; it is usually obtained in the form of large crystals. It consists generally of normal paraffin ranging from C16 to C30 and may be higher. Proportions of slightly branched chain paraffin ranging from C18 to C36 and naphthenes; especially alkyl-substituted derivatives of cyclopentane and cyclohexane; are also present [1, 5, 8, 9, 10]. The average molecular weight of these paraffin waxes is about 360–420 [9, 11]. A paraffin wax melting at 53.5°C showed a space lattice having C—C bond length of 1.52°A, a C—C—C bond angle of 110°A, a C—H bond length of 1.17°A and an H—C—H bond angle of 105°A . Microcrystalline waxes are obtained from the vacuum residue. The source for the production of microcrystalline wax is petrolatum or bright stock . Microcrystalline waxes consist of highly branched chain paraffin; in contrast to the macrocrystalline; cycloparaffins and small amounts of n-paraffins and alkylated aromatics [1, 5, 9]. The actual chain length of the n-alkanes is approximately C34–C50. Long-chain, branched iso-alkanes predominantly contain chain lengths up to C70 . The branched-chain structures of the composition CnH2n + 2 are found. Branched mono-methyl alkane, 2-methyl alkanes being found. As the position of the methyl group moves farther from the end of the chain, the amount of the corresponding alkane becomes smaller. 
The branched chains in the microcrystalline waxes are presented at random along the carbon chain, meanwhile in paraffin wax, they are located at the end of the chain . The cyclo-alkanes, however, consist mainly of monocyclic systems. Monocyclopentyl, monocyclohexyl, dicyclohexyl paraffin and polycyclo paraffin are also found. Some microcrystalline waxes are mainly composed of multiple-branched isoparaffins and monocycloparaffins . Moreover, non-hydrogenated micro waxes also mainly contain mono-cyclic and heterocyclic aromatic compounds . 1.2 Properties of petroleum waxes 1.2.1 Physical properties Paraffin waxes are composed of 40–90 wt.% normal paraffins of about 22–30 carbon atoms and possibly higher, accordingly, they differ very little in physical and chemical properties. The remainder is C18–C36 isoalkanes and cycloalkanes [5, 16]. Straight chain alkanes in the range from 20 up to 36 carbon atoms show transition points in the solid phase. Thus two modifications, stable at different temperatures and different crystal habits, are known . Paraffin waxes, relatively simple mixtures, usually have a narrow melting range and are generally lower in melting point than microcrystalline waxes. They usually melt between 46 and 68°C. The melting point of paraffin waxes increases in parallel with molecular weight. The branching of the carbon chain, at identical molecular weights, results in a decrease in the melting point. Paraffin waxes can be classified according to the melting point to soft (lower m.p.) and hard (higher m.p.) paraffin waxes. Oil content is a fingerprint of the quality of the wax. The method of determination depends upon the differential solubility of oil and wax in a given solvent. Paraffin wax, microcrystalline wax and petrolatum have a different degree of affinity for oil content. Paraffin wax has little affinity for oil content. It may be taken as a degree of refinement. Fully refined wax usually has an oil content of <0.5%. Microcrystalline waxes have a higher affinity for oil than paraffin waxes because of their smaller crystal structure. The oil content of microcrystalline wax is 1–4 wt.%, depending on the grade of wax . 1.2.2 Mechanical properties The hardness and crystallization behavior of macrocrystalline paraffin waxes are interfered distinctly by their distribution width, average chain length and n-alkane content . Hardness is the resistance against the penetration of a body (needle, cone or plunger rod) under a defined load, this body is made of a harder material than the substance being tested. To measure the hardness of paraffin waxes, penetration tests are widely accepted. It is a common feature of strength and hardness tests that the test specimens are subjected to short-time stresses . The penetration test is the most widespread technique for determining the hardness and the thermal sensitivity of petroleum waxes. Macrocrystalline waxes change to a greater extent with temperature than that of microcrystalline waxes. An increase in oil content results in an increase in penetration values of both macro- and microcrystalline waxes . 1.2.3 Food grade properties These properties concern waxes and petrolatums for food grade. Their potential toxicity could be attributed to aromatic residues. The latter are characterized directly by using UV spectra in the spectral zone corresponding to aromatics. 
1.3 Crystal structure of petroleum waxes The class of organic crystals represents a broad range of geometries, including needles, plates, cubes, rods, prisms, pentagons, octagons, hexagons, rhomboids and pyramids. Each of these forms results from crystallization from a solution. The geometry of the crystals formed is determined by the solute/solvent interaction and the physical conditions of the system (e.g., temperature, pressure and mechanical mixing). One interesting characteristic of crystals is that they can form a variety of shapes, which are due to the environmental conditions under which they form. They can be large or small, extend long distances or short, be well-defined or diffuse; in short, they can display an impressive array of forms. It is this variety of form upon which crystal modifiers are intended to take advantage . All petroleum waxes are crystalline in some degree and it is possible to classify waxes in terms of the type of crystals formed, when the wax crystallizes out of solution. 1.3.1 Macrocrystalline waxes (paraffin waxes) The paraffin crystals appear in three different forms: plates, needles and mal shapes; the latter are small size, undeveloped crystals, which often agglomerates. The conditions for the formation of these shapes have been studied by many researchers. They have come to the following conclusions: The three crystal forms of paraffin waxes depend on both the conditions of the crystallization process and the chemical composition of the wax. Plate crystals are obtained from lower boiling points paraffinic distillates, while the needle and mal-shaped crystals are obtained from the higher boiling points ones and from vacuum residues. For a given molecular weight limit, the higher melting point constituents crystallize in plate type in which the crystals are hexagonal plate. The low-melting ones crystallize in needles while the medium-melting ones crystallize in mal shapes. Normal paraffin crystallize in plates. Needle crystals contain both aliphatic and cyclic hydrocarbons, while mal-shaped crystals are characterized by their content of branched hydrocarbons. Low-cooling rates during crystallization will result in large crystals for both plate and needle forms, while the crystal growth for mal-shaped crystals is very slight. The solubility of paraffin in a solvent is inversely proportional to their melting points. In the presence of solvent, wax mixtures begin to crystallize at relatively low temperatures in the form of plates followed by mal-shaped crystals. However, the constituents crystallizing in needles are more soluble than those crystallizing in plates. Therefore, needles crystals will appear only at lower temperature and higher concentrations Plate crystals can readily be transformed into needle and mal-shaped crystals. Under appropriate conditions, the needle crystals can be transformed into mal-shaped crystals . Normal paraffin, C17–C34, may exist in three and possibly four crystal forms. Near the melting point, hexagonal crystals are the stable form. At somewhat lower temperatures, the odd-numbered from C19 to C29 are orthorhombic, even numbered ones from C18 to C26 is triclinic and those C28–C36 is monoclinic [22, 23]. 1.3.2 Microcrystalline waxes Both n-paraffin and isoparaffins crystallize in needle forms; they differ in that the latter does so at all temperatures, while higher temperatures are required for the former. 
The needle form of the isoparaffins differs from that of ceresins, or paraffin waxes containing ceresins, in that the crystals of the former are large and loose, while those of the latter are extremely small and dense. Microcrystalline waxes may contain substantial percentages (up to 30%) of paraffin which, when separated, crystallizes well as a high-melting macrocrystalline or paraffin wax. The microcrystalline wax material interferes and imposes its crystallizing habit on the other material [16, 24]. Although the classification of petroleum waxes into macrocrystalline and microcrystalline waxes on the basis of crystal size is valid to a great extent, there is no sharp line separating the two groups. Indeed, there is a large group of waxes that could fall into either class; these waxes are called intermediate waxes, blended waxes, mal-crystalline waxes or semi-microcrystalline waxes, with semi-microcrystalline wax being the term adopted here.
1.4 Manufacture of petroleum waxes
The manufacture of petroleum waxes is closely related to the manufacture of lubricating oils. The raw paraffin distillates and residual oils contain wax and are normally solid at ambient temperature. Removal of wax from these fractions is necessary to permit the manufacture of lubricating oil with a satisfactorily low pour point. Manufacture of petroleum waxes includes the following technological processes:
Production of slack waxes and petrolatums by dewaxing petroleum products.
Refining of the wax products.
Deoiling and fractional crystallization.
1.5 Applications of petroleum waxes
As the consumption of wax products in the world wax market increases, especially for food, pharmaceutical and cosmetic grades and specialty waxes, the profitability of wax production will depend on improving blending and modification techniques for macro- and microcrystalline waxes as base materials, as well as on the development and application of new wax products. Petroleum waxes are used in a wide variety of applications. Their most important industrial uses include paper, household chemicals, cosmetics, dentistry, matches, rubber, building construction, electrical goods, inks and powder injection molding, besides hydrogen production and energy storage applications [4, 10, 26, 27, 28, 29, 30]. Petroleum wax can be fractionated to separate more than one type of paraffin wax, such as macrocrystalline and microcrystalline waxes. Wax characteristics such as carbon number, hardness, crystal shape, composition and molecular weight depend on the conditions under which the wax is separated, and paraffin wax serves as an all-purpose ingredient in industries such as inks, paper, cosmetics and ceramic fabrication by powder injection molding.
References
Mazee W. Modern Petroleum Technology. Great Britain: Applied Science Publishers Ltd., on behalf of The Institute of Petroleum; 1973. p. 782
Prasad R. Petroleum Refining Technology. Delhi, India: Khanna; 2000
Gupta A, Severin D. Characterization of petroleum waxes by high temperature gas chromatography-correlation with physical properties. Petroleum Science and Technology. 1997; 15(9-10):943-957
Bennett H. Industrial Waxes. New York: Chemical Pub Co; 1975
Letcher C. Waxes. New York: John Wiley & Sons; 1984. pp. 466-481
Avilino S Jr. Lubricant Base Oil and Wax Processing. New York: Marcel Dekker, Inc.; 1994. pp. 17-36
Guthrie VB. Petroleum Products Handbook. McGraw-Hill; 1960
Concawe. Petroleum Waxes and Related Products. Boulevard du Souverain, Brussels, Belgium; 1999
Gottshall R, McCue C, Allinson J. Criteria for Quality of Petroleum Products. London, Great Britain: Applied Science Publishers Ltd.; 1973
Freund M, et al. Paraffin Products: Properties, Technologies, Applications. 1982. p. 14
Nakagawa H, et al. Characterization of hydrocarbon waxes by gas-liquid chromatography with a high-resolution glass capillary column. Journal of Chromatography A. 1983; 260:391-409
Vainshtein B, Pinsker Z. Opredelenie polozheniya vodoroda v kristallicheskoi reshetke parafina [Determination of the position of hydrogen in the crystal lattice of paraffin]. Doklady Akademii Nauk SSSR. 1950; 72(1):53-56
Meyer G. Thermal properties of micro-crystalline waxes in dependence on the degree of deoiling. SOFW Journal. 2009; 135(8):43-50
Levy E, et al. Rapid spectrophotometric determination of microgram amounts of lauroyl and benzoyl peroxide. Analytical Chemistry. 1961; 33(6):696-698
Kuszlik A, et al. Solvent-free slack wax de-oiling – physical limits. Chemical Engineering Research and Design. 2010; 88(9):1279-1283
Corson B. In: Brooks BT, Kurtz SS Jr, Boord CE, Schmerling L, editors. The Chemistry of Petroleum Hydrocarbons. Vol. III. 1955. pp. 310-312
Ferris S. Petroleum Waxes: Characterization, Performance, and Additives. New York, USA: Technical Association of the Pulp and Paper Industry; 1963. pp. 1-19
Meyer G. Interactions between chain length distributions, crystallization behaviour and needle penetration of paraffin waxes. Erdöl, Erdgas, Kohle. 2006; 122(1):16-18
Hopkins TD. The Costs of Federal Regulation. National Chamber Foundation; 1992
USP 34, NF 29. The United States Pharmacopeia and the National Formulary. Rockville, MD: The United States Pharmacopeial Convention; 2011
Becker J. Crude Oil Waxes, Emulsions and Asphaltenes. Tulsa, OK, USA: Penn Well Publishing Company; 1997
Smith A. The crystal structure of the normal paraffin hydrocarbons. The Journal of Chemical Physics. 1953; 21(12):2229-2231
Ohlberg SM. The stable crystal structures of pure n-paraffins containing an even number of carbon atoms in the range C30 to C36. The Journal of Physical Chemistry. 1959; 63(2):248-250
Higgs P. The utilization of paraffin wax and petroleum ceresin. Journal of the Institution of Petroleum Technology. 1935; 21:1-14
Zaky MT, et al. Raising the efficiency of petrolatum deoiling process by using non-polar modifier concentrates separated from paraffin wastes to produce different petroleum products. RSC Advances. 2015; 5(88):71932-71941
Maillefer S, Rehmann A, Zenhaeusern B. Hair wax products with a liquid or creamy consistency. Google Patents. 2011
Saleh A, Ahmed M, Zaky M. Manufacture of high softening waxy asphalt for use in road paving. Petroleum Science and Technology. 2008; 26(2):125-135
Zaky M, Soliman F, Farag A. Influence of paraffin wax characteristics on the formulation of wax-based binders and their debinding from green molded parts using two comparative techniques. Journal of Materials Processing Technology. 2009; 209(18-19):5981-5989
El Naggar AM, et al. New advances in hydrogen production via the catalytic decomposition of wax by-products using nanoparticles of SBA frame-worked MoO3. Energy Conversion and Management. 2015; 106:615-624
Mohamed NH, et al. Thermal conductivity enhancement of treated petroleum waxes, as phase change material, by α nano alumina: Energy storage. Renewable and Sustainable Energy Reviews. 2017; 70:1052-1058
<urn:uuid:d681b11e-1ad9-41f9-b64b-54ab3a31b39f>
{ "dump": "CC-MAIN-2023-40", "url": "https://www.intechopen.com/chapters/67759", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506539.13/warc/CC-MAIN-20230923231031-20230924021031-00449.warc.gz", "language": "en", "language_score": 0.8562888503074646, "token_count": 4289, "score": 3.421875, "int_score": 3 }
In a landmark move, scientists have for the first time reconstructed how Homo heidelbergensis, an extinct relative of modern humans, went about their daily lives. With the aid of forensic studies of thousands of pieces of evidence, they recreated what a community of between 30 and 40 people did over a roughly eight-hour period - and particularly how they would kill to survive. University College London (UCL) explained that Homo heidelbergensis lived around 480,000 years ago, and evidence of their existence was found by a team of archaeologists at a site near Boxgrove, West Sussex. The discoveries offer unprecedented detail of a Stone Age feast the extinct human relatives would have eaten, including how the food was prepared, as well as what happened in the aftermath of the meal. Reports say the site allowed experts to suggest that Homo heidelbergensis may have visited it to kill large animals, butcher them and then eat them. The Boxgrove land is located between the sea and a high cliff, which experts said made it a good location for hunting animals, especially wild horses, which would have been drawn to the area by its cliff springs and freshwater ponds. Led by archaeologist Matthew Pope, from UCL's Institute of Archaeology, the team were able to find some of the "earliest non-stone tools found in the archaeological record of human evolution". Dr Pope said: "Our detailed investigation has brought into focus people from the deep past who usually remain invisible." The work on the site, which was concluded in 2019, is detailed in 'The Horse Butchery Site', published this year by Spoilheap Publications. Co-author Simon Parfitt said the non-stone tools "would have been essential [for] manufacturing the finely made knives found in the wider Boxgrove landscape". Dr Silvia Bello, of the Natural History Museum, London, which part-funded the project, added: "The finding provides evidence that early human cultures understood the properties of different organic materials and how tools could be made to improve the manufacture of other tools. Along with the careful butchery of the horse and the complex social interaction hinted at by the stone refitting patterns, it provides further evidence that early human populations at Boxgrove were cognitively, socially and culturally sophisticated." The finds also help develop our knowledge of the butchering process, and new evidence from the site suggests that the Boxgrove people would come to the site armed to kill. It is claimed that the human relatives would use spears to kill animals such as horses, and that they would immediately begin the butchering process before eating the remains. The process of butchering, which includes the removal of the animal's skin, extraction of its offal and the breaking down of the bones, would have taken hours, the scientists say. They also claim that children would have "almost certainly" been involved in the butchering process.
<urn:uuid:31d57b6e-bfe1-422c-a584-5bb406d8dc65>
{ "dump": "CC-MAIN-2020-45", "url": "https://www.express.co.uk/news/world/1335786/archaeology-news-science-stone-age-discovery-west-sussex-boxgrove-human-butchering-spt", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107916776.80/warc/CC-MAIN-20201031062721-20201031092721-00124.warc.gz", "language": "en", "language_score": 0.9703482389450073, "token_count": 717, "score": 4.0625, "int_score": 4 }
When I was looking for more information about peer production and open source, I came across a very interesting article about a project to fight cancer. Did you know that one out of three people will get cancer in their lifetime? Isaac Yonemoto came up with a solution: the development of unpatented drugs. These drugs could be sold by pharmaceutical companies at a reasonable price, so that they would become accessible to everyone. The open source software industry has already proved that patents are not necessary for innovation. Without patents, drugs are less expensive and it is easier to develop better ones. This video will tell you briefly what the project is about: The Marilyn Project. Project Marilyn is an open source project for developing a cure for cancer. The drug is patent free. You can support this research by donating…
<urn:uuid:e4f1fd4c-6495-4051-b793-6d180001355b>
{ "dump": "CC-MAIN-2018-17", "url": "https://vienergie.wordpress.com/2014/10/18/project-marilyn-an-open-source-cancer-research/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948950.83/warc/CC-MAIN-20180427021556-20180427041556-00492.warc.gz", "language": "en", "language_score": 0.9531735777854919, "token_count": 185, "score": 2.71875, "int_score": 3 }
The method that people use to attempt suicide has a large influence on the risk of later completed suicide, according to a new study published in the British Medical Journal today. A Swedish study found that suicide attempts involving hanging or strangulation, drowning, firearms, jumping from a height, or gassing are moderately to strongly associated with an increased risk of suicide compared with poisoning or cutting. Suicide is a leading cause of death, and the risk of suicide following a suicide attempt is around 10% over follow-up periods of five to 35 years. However, there has been little research so far into the characteristics of a suicide attempt - such as being well planned, drastic or violent - and whether those have a bearing on the risk of a later completed suicide. Researchers from the Karolinska Institute in Stockholm used national registers to carry out a study of 48,649 people admitted to hospital in Sweden due to a suicide attempt between 1973 and 1982. They studied how the method of the suicide attempt might predict a completed suicide during a follow-up of 21-31 years, to the end of 2003. The results showed that during follow-up, 5,740 people (12%) went on to commit suicide and that suicide risk varied substantially by the method used at the previous suicide attempt. Attempted suicide by poisoning was the most common method (84% of attempters) and was therefore linked to the majority of later suicides (4,270). However, the researchers found that the highest risk of eventual suicide (54% in men and 57% in women) was found for attempted suicide by hanging, strangulation, or suffocation. People were around six times more likely to complete suicide if they had previously attempted suicide by these methods, after adjusting for age, gender, education, immigrant status, and psychiatric illness. More than 85% of these suicide cases died within one year of the prior suicide attempt. For other methods such as gassing, jumping from a height, using a firearm or explosive, and drowning, the risks were significantly lower than for hanging but still elevated: people who used them were 1.8 to 4 times more likely to complete suicide. Among people whose suicide attempt involved poisoning or cutting, 12.3% and 13% respectively went on to take their own lives. The authors conclude: "The method used at a suicide attempt predicts later completed suicide also when controlling for sociodemographic confounding and co-occurring psychiatric disorder. Intensified aftercare is warranted after suicide attempts involving hanging, drowning, firearms or explosives, jumping from a height, or gassing." In an accompanying editorial, Keith Hawton, Professor of Psychiatry at Warneford Hospital in Oxford, says that the results of this study have important implications for the assessment and aftercare of patients who self-harm. However, he warns that, "although use of more lethal methods of self harm is an important index of suicide risk, it should not obscure the fact that self harm in general is a key indicator of an increased risk of suicide."
<urn:uuid:28227a3b-9d47-42b6-a40d-61df523d581d>
{ "dump": "CC-MAIN-2015-40", "url": "http://phys.org/news/2010-07-method-suicide-eventual.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738006925.85/warc/CC-MAIN-20151001222006-00010-ip-10-137-6-227.ec2.internal.warc.gz", "language": "en", "language_score": 0.9731972217559814, "token_count": 626, "score": 2.765625, "int_score": 3 }
Fluorescent coral. A Rutgers University study focused on how corals evolved to allow them to interact with, and adapt to, the environment. An international team of scientists led by Rutgers University faculty has conducted the world's most comprehensive analysis of coral genes. The coral gene database study focused on how their evolution has allowed corals to interact with, and adapt to, the environment. A second study, also led by Rutgers researchers, with colleagues at the University of Hawaii, shows for the first time how stony corals create their hard skeletons, using proteins as key ingredients. In 2014, leaders in the field of coral biology and genomics met at Rutgers to plan an analysis of 20 coral genomic datasets. The goal was to provide a comprehensive understanding of coral evolution since the organisms appeared on Earth 525 million years ago. "There are a few key genes in corals that allow them to build this house that laid down the foundation for many, many thousands of years of corals," said Debashish Bhattacharya, a professor in the Department of Ecology, Evolution and Natural Resources in the School of Environmental and Biological Sciences at Rutgers and one of the leads of the coral gene database study. "It couldn't be any more fundamental to ocean ecosystems." Paul Falkowski, the other lead of the coral gene database study and a professor who heads the Environmental Biophysics and Molecular Ecology Laboratory at Rutgers, said, "I think one of the more interesting aspects of these data will be to understand which coral species may become winners or losers in the face of anthropogenic climate change -- what makes them tougher and what makes them susceptible to changes in temperature, changes in ocean acidification." The researchers found dozens of genes that allow corals to coordinate their response to changes in temperature, light and pH (acidity vs. alkalinity), as well as deal with stress triggered by the algae that live with them and exposure to high levels of light. Some of these stress-related genes are of bacterial origin and were acquired to help corals survive. The researchers theorize that the vast genetic range of corals may help them adapt to changing ocean conditions. The stony coral study, led by former Rutgers Department of Marine and Coastal Sciences postdoctoral fellow Tali Mass, explains how stony corals make their hard, calcium-carbonate skeletons and explains how this process might be affected as the oceans become more acidic due to climate change. This work was supported by the National Science Foundation (grants EF 14-08097, EF 10-41143 and EF 14-16785). To learn more, see the Rutgers news story Rutgers scientists help create world's largest coral gene database. (Date image taken: 2015-2016; date originally posted to NSF Multimedia Gallery: Aug. 9, 2016)
<urn:uuid:5786c0a3-7e49-4b75-b440-2692bd3641af>
{ "dump": "CC-MAIN-2018-39", "url": "https://www.nsf.gov/news/mmg/mmg_disp.jsp?med_id=81088&from=", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155702.33/warc/CC-MAIN-20180918205149-20180918225149-00514.warc.gz", "language": "en", "language_score": 0.9331697821617126, "token_count": 594, "score": 3.375, "int_score": 3 }
As any commuter who has experienced unreliable service or lives miles away from a bus stop will tell you, sometimes public transit isn't really a viable option, even in major cities. In our car-loving society, where 85 percent of Americans use a car to get to work, people who cannot access transportation are excluded from their own communities and trapped inside "transit deserts." This term, which one of us (Junfeng Jiao) coined, describes areas in a city where demand for transit is high but supply is low. Lack of transit has harmful effects on those who rely on public transit – generally, people who are too young, too old, too poor or have disabilities that don't allow them to drive. Mapping these deserts will help agencies adjust transit services and better serve their communities. At UT Austin's Urban Information Lab, our research focuses on refining the methods used to quantify and measure transit supply and demand. We've developed clear and concise geographic information system (GIS) methods to evaluate transportation systems, providing alternatives to previous, more complicated network modeling. These methods can quickly be applied to any location, as we have shown in studies of five major cities in Texas and other cities across the United States. By using this method, we found that hundreds of thousands of transit-dependent people in Texas don't have access to mass transit systems.
Connecting people to jobs and services
Research shows that low-income residents living in sprawling areas have limited transportation options, which constrains their job opportunities and upward mobility. Inadequate transportation keeps people from finding work, which then reduces the productivity of their communities. It also can limit access to medical services, causing health problems to go undetected or worsen. Addressing transit access is one important strategy for tackling broader social problems. For example, welfare recipients are less likely to own cars or have access to transit than the general population. Reducing these transportation barriers would help move them from welfare to work. Although scholars have been studying "food deserts" (areas where residents lack access to nutritious food) for several decades, we have only recently applied this logic to mass transportation systems, despite the fact that food deserts often occur due to lack of transportation. Relatively little research has been carried out to identify and quantify gaps between transit demand and supply. But as counties and cities feel the effects of declining funding from federal and state transportation user fees, they need new ways to target transportation infrastructure investments and ensure limited resources are used in the best way possible. We have found that maps are a promising way to guide these discussions.
Mapping transit deserts
Determining exactly who relies on mass transit can be difficult. Existing information depends on census data. As previously noted, people who rely on transit are usually from marginalized demographic groups. They may be elderly, poor or have disabilities that keep them from driving. Census data do not account for the fact that sometimes these populations overlap (a transit-dependent person could be old as well as poor), so one individual could be counted many times. Also, census data on car ownership are not available at the census block group level, which is the smallest geographic unit published by the U.S. Census Bureau.
This lack of data makes it hard to measure transit dependency with accuracy. Measuring transit supply is easier. It relies on data from municipal planning agencies as well as relevant municipal and county GIS departments, which manage spatial and geographic information, analysis tools and mapping products. These agencies measure variables that include numbers of transit stops, transit routes and frequency of service, as well as lengths of sidewalks, bicycle lanes and low speed-limit routes (which are relevant because some commuters may opt to walk instead of taking the bus).
Beyond city centers
Current research shows that transit deserts exist all over the country. Cities such as Chicago; Cincinnati; Charlotte, North Carolina; Portland, Oregon; and San Antonio contain multiple communities that don't have enough transit services to meet existing demand. Even in older cities, where development tends to follow transit lines, there are neighborhoods where the supply of transit is simply not enough. This is a large-scale problem. In San Antonio, the seventh-largest U.S. city by population, some 334,530 people – nearly one-fourth of the population – need access to public transportation in a city that doesn't even have rail service. In Chicago, where there are high levels of transit dependency all across the city, just three of the transit desert neighborhoods that we identified house approximately 176,806 residents. Even in a city as progressive as Portland, Oregon, thousands live in transit desert neighborhoods. When it comes to geographic location, transit demand and supply appear to follow certain spatial patterns. Unsurprisingly, transit supply is highest in city centers and decreases as distance from city centers increases. As a result, transit deserts do not typically occur in city centers or near downtown. In fact, because of the typical "hub and spoke" design of many transit services, city centers often have transit surpluses where supply outstrips demand. The location of transit deserts often does not follow a geographic pattern, although they are usually associated with low-income and remote areas. While planners and engineers may have a rough idea of where supply is low, making service adjustments requires measuring and mapping of transit supply and demand citywide.
Rebalancing transit networks
Many cities are now making service adjustments to improve service to transit deserts. For example, Houston's transit authority, METRO, recently redesigned its bus service as part of a larger "Transit Service Reimagining," in an attempt to better meet the region's mobility needs. Evaluation of the new transit services shows that current levels of transit demand and supply are more balanced, though gaps still exist. Identifying transit deserts is even catching on at the federal level. The U.S. Department of Transportation recently launched a new initiative to map transit deserts nationally through a National Transit Map, which will put together data from different transit agencies into a complete feed. By accessing a larger, national look at transit demand and supply, regional agencies will have extra tools available to them when making changes to their local transit services. What these changes will be is hard to say. Expanding existing bus services may be the most cost-effective way to improve transit access. Even in New York City, with its massive subway system, city officials are increasingly turning to bus rapid transit due to the high cost of adding new subway lines.
Adding bus lines, increasing service hours and even streamlining boarding and fares can help improve service and increase access. Integrating bicycling with transit services would be another cost-effective option. As research on transit deserts continues to grow, more precise methods of quantifying the gap between transit supply and demand should develop. More research may provide new views on how the built environment and socioeconomic variables affect transportation accessibility. With careful planning and investment, these transit deserts can eventually transform into transit oases.
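The supply-versus-demand comparison at the heart of this mapping work can be illustrated with a small, simplified calculation. The sketch below is only a rough illustration of the general idea, not the method used in our published studies: the column names, weights and threshold are hypothetical assumptions chosen for readability.

```python
# Illustrative transit-gap score per census block group (hypothetical columns).
import pandas as pd

def transit_gap(block_groups: pd.DataFrame) -> pd.DataFrame:
    df = block_groups.copy()

    # Crude demand proxy: residents likely to depend on transit.
    df["demand"] = (
        df["pop_over_65"] + df["pop_under_18"]
        + df["pop_below_poverty"] + df["households_no_car"]
    )

    # Crude supply proxy: stops weighted by service frequency, plus
    # walking and cycling infrastructure that extends each stop's reach.
    df["supply"] = (
        df["transit_stops"] * df["avg_daily_trips_per_stop"]
        + 0.5 * df["sidewalk_km"] + 0.5 * df["bike_lane_km"]
    )

    # Standardize both sides so they can be compared, then take the
    # difference: positive values mean demand outstrips supply.
    for col in ("demand", "supply"):
        df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std()
    df["gap"] = df["demand_z"] - df["supply_z"]

    # Flag the worst-served block groups (threshold is arbitrary here).
    df["transit_desert"] = df["gap"] > 1.0
    return df
```

Joining the resulting gap scores back to block-group boundaries in any GIS package then produces the kind of citywide maps described above.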
<urn:uuid:db38db76-3e58-4016-853d-11c0981d5a9d>
{ "dump": "CC-MAIN-2019-35", "url": "https://theconversation.com/stranded-in-our-own-communities-transit-deserts-make-it-hard-for-people-to-find-jobs-and-stay-healthy-77450", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315618.73/warc/CC-MAIN-20190820200701-20190820222701-00092.warc.gz", "language": "en", "language_score": 0.9511120319366455, "token_count": 1441, "score": 3.265625, "int_score": 3 }
The Kokoda Track or Trail is a rough and narrow foot track through the Owen Stanley Ranges which the Japanese hoped to use as a route to capture Port Moresby, the capital of Papua, in World War II. The Japanese reached Kokoda, about 100 kilometers from Port Moresby, in July 1942. Between Kokoda and Imita Ridge, about 30 kilometers from Port Moresby, they met stiff resistance from Australian troops supported by Papuan carriers, stretcher-bearers and armed police. The rugged terrain, cold and rain made fighting extraordinarily difficult. There were heavy casualties on both sides and many died from disease. The Japanese failed to take Imita Ridge and between September and December 1942 were forced to retreat to the northeast coast. Tom Cunningham, winner of the Do Kokoda and High Sierra Ultimate Adventurer competition, trekked the Kokoda Trail in September 2016. The first person to ever take a drone on the trek, Tom captured the beauty of the landscape and shared his Kokoda experience.
<urn:uuid:4f0c8ca5-493f-496d-b380-e2bc3fa57a39>
{ "dump": "CC-MAIN-2021-17", "url": "https://www.tokpisin.info/kokoda-track/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038061562.11/warc/CC-MAIN-20210411055903-20210411085903-00483.warc.gz", "language": "en", "language_score": 0.9639023542404175, "token_count": 223, "score": 3.21875, "int_score": 3 }
According to the National Institute on Drug Abuse (NIDA), patients stabilized on adequate, sustained doses of methadone or buprenorphine can keep their jobs, avoid crime and violence, and reduce their exposure to HIV and Hepatitis C by stopping or reducing injection drug use and drug-related high risk sexual behavior. Naltrexone is a long-acting opioid antagonist with few side effects. It is usually prescribed in outpatient medical conditions. Naltrexone blocks the euphoric effects of alcohol and opiates. Naltrexone cuts relapse risk during the first 3 months by about 36%. However, it is far less effective in helping patients maintain abstinence or retaining them in the drug-treatment system (retention rates average 12% at 90 days for naltrexone, average 57% at 90 days for buprenorphine, average 61% at 90 days for methadone). 3 Stages of Drug Alcohol Rehab-How It Works The first step in treatment is brief intervention. The physician states unequivocally that the patient has a problem with alcohol and emphasizes that this determination stems from the consequences of alcohol in that patient's life, not from the quantity of alcohol consumed. Emphasizing the effects on family, friends, and occupation, as well as any physical manifestations, is important. Pointing out that loss of control and compulsive use indicate alcohol dependence also is important. Crucially, DBT is also collaborative: it relies upon the ability of the addict and therapist to work things out together interactively. DBT is broken down into four modules – Mindfulness, Distress Tolerance, Emotion Regulation, and Interpersonal Effectiveness – which is an approach which allows addicts to focus on one particular task or aspect of themselves at once, and enables the therapy to be targeted more acutely at the individual addict and their own particular situation. The risk of relapse in drug addiction recovery is substantial, and that makes outpatient aftercare programs vitally important for newly-sober individuals, as well as for those working to maintain their recovery. Regular therapy sessions and 12-step (or alternative) peer group meetings can provide much-needed guidance and moral support to people in the midst of making major lifestyle changes, and family participation in ongoing relapse prevention programs can boost their effectiveness even further. While aftercare programs don’t guarantee permanent wellness, they can significantly decrease the likelihood of relapse and make it easier for recovering addicts to get back on track if and when they slip. Living on a limited income is challenging enough; having to deal with recovery from a drug or alcohol addiction on a limited income is even more so. Finding help with treatment can make ease some of this burden, and it can help those struggling with addiction to get their lives back. Once recovery is in progress, it can help to be surrounded by others who understand and who can help the recovering individual through the process, such as by participating in self-help groups and other counseling programs. Opioid Addiction and Treatment Marital and Family Counseling incorporates spouses and other family members in the treatment process and can play an important role in repairing and improving family relationships. Studies show that strong family support through family therapy increases the chances of maintaining abstinence (stopping drinking), compared with patients undergoing individual counseling. Recovery from alcohol addiction is a lifelong journey. 
You may face relapses and temptations for most of your life. It's not uncommon to slip in and out of sobriety as you work your way through your addiction. Some people beat addiction the first time they try to become sober. Others battle alcohol dependence for many years. The more you try, the higher your chances of success. Withdrawal is medically supervised and supported by our on-site nurses. In certain cases, we make use of medical aids to render the process much easier and safer. For opiate withdrawal we use Suboxone, and for benzodiazepine withdrawal we follow a modified version of the Ashton protocol. Alcohol withdrawal is medically supervised and medication is given to eliminate the risk of seizure and stroke. We take every measure to ensure that this first, important stage towards drug addiction recovery is a comfortable and safe one. To find out more about the detox program at Searidge, please call us at 1-866-777-9614. Alcoholism can also be categorized into 2 types: early-onset (biological predisposition to the disease) or late-onset (brought on by environmental or psychosocial triggers). Understanding and studying the difference between early- and late-onset alcoholism facilitates the selection of the appropriate therapy. Drugs that affect the rewarding behavior of neural activities, such as ondansetron, naltrexone, topiramate, and baclofen, have been shown to alter drinking behavior. Finally, supportive social services – during this final step of alcohol rehabilitation, rehab staff help empower a patient by connecting her/him with services outside the treatment facility in order to maintain abstinence from alcohol and begin to create a network of supportive people in the patient's life. These services can include housing, health care, social services, child care, or financial and vocational counseling. That's why we are here for you. Getting treatment for your alcohol addiction is the first step on your journey to health and recovery, but it's a big step and not an easy one to make. We understand that. Whatever your questions and concerns are, there is a solution and an answer. Call us for information on alcohol treatment. We can also answer your questions about Dual Diagnosis treatment for those who are suffering from a mental health issue in conjunction with substance abuse. Your first step is to call our Patient Access Team for a confidential phone assessment. You will talk with a recovery expert who will determine whether drug or alcohol treatment is needed and, if it is, will recommend the appropriate level of care and work with you to coordinate insurance benefits. If alcohol or drug addiction is not clearly indicated or if you're not ready to commit to an inpatient stay, you can learn more about your situation and possible next steps by participating in one of our residential evaluation programs. Residential evaluations typically involve a four-day stay at one of our treatment centers where a number of screenings and assessments will help to identify your particular needs and challenges.
The purpose of seeking rehab is ultimately to overcome alcohol abuse or addiction. Rehab is the ideal way to attack an alcohol abuse problem because treatment uses the latest methodologies and practices that address every aspect of alcohol misuse. Patients are treated in mind, body, and spirit rather than focusing only on the body. Pharmaceutical opiates are now considered to be a more serious threat to public health than illicit drugs like heroin or cocaine. The widespread popularity of prescription analgesics like Vicodin (a combination of hydrocodone and acetaminophen), oxycodone (OxyContin), and Percocet (a combination of oxycodone and acetaminophen) has made these drugs much more accessible to Americans, many of whom obtain the drugs without a prescription. The journal Pain Physician reports that out of the 5 million Americans who admitted to abusing pain relievers in 2010, only 17 percent obtained the drugs through a legitimate prescription. Withdrawal: Medications and devices can help suppress withdrawal symptoms during detoxification. Detoxification is not in itself "treatment," but only the first step in the process. Patients who do not receive any further treatment after detoxification usually resume their drug use. One study of treatment facilities found that medications were used in almost 80 percent of detoxifications (SAMHSA, 2014). In November 2017, the Food and Drug Administration (FDA) granted a new indication to an electronic stimulation device, NSS-2 Bridge, for use in helping reduce opioid withdrawal symptoms. This device is placed behind the ear and sends electrical pulses to stimulate certain brain nerves. Also, in May 2018, the FDA approved lofexidine, a non-opioid medicine designed to reduce opioid withdrawal symptoms. Environment: Environmental factors, such as your access to healthcare, exposure to a peer group that tolerates or encourages drug abuse, your educational opportunities, the presence of drugs in your home, your beliefs and attitudes, and your family's use of drugs are factors in the first use of drugs for most people, and in whether that use escalates into addiction. Substance abuse therapy: Used as a part of many inpatient and outpatient programs, therapy is one of the cornerstones of drug addiction treatment. Individual, group and family therapy help patients and their loved ones understand the nature and causes of addiction. Therapy teaches coping strategies and life skills needed to live a productive, sober life in the community. For individuals with a co-occurring mental illness, intensive psychotherapy can also address psychiatric symptoms and find the underlying issues that contribute to addiction.
<urn:uuid:b53da965-e5a0-43c8-a049-89495baac4e3>
{ "dump": "CC-MAIN-2019-18", "url": "https://drug-rehab-program-centers.com/drug-rehab-in-corbett-terwilliger-lair-hill-portland-or.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578732961.78/warc/CC-MAIN-20190425173951-20190425195951-00164.warc.gz", "language": "en", "language_score": 0.9377628564834595, "token_count": 1947, "score": 2.59375, "int_score": 3 }
Heartworm disease (HWD) is caused by an infection of the filarial (threadlike) worm, Dirofilaria immitis. The dog is the primary host for this large parasite; however, other species – including cats and ferrets – also can be affected. Mature heartworms live predominately within the pulmonary arteries of the dog (the arteries of the lungs). In fact, the name "heartworm" is a bit misleading. Only in heavy or advanced infections do the worms actually reside in the heart. Nonetheless, the heartworm causes intense reaction of the pulmonary blood vessels. There is also injury to the lungs that can lead to shortness of breath and coughing. The lung injury places a strain on the heart leading to exercise intolerance and eventually heart failure. Advanced cases of HWD cause severe symptoms and can be fatal. Canine HWD is common throughout many parts of the world. The infection is spread through numerous species of mosquitoes; indeed, this insect is essential to the life cycle of the heartworm. For this reason, HWD is most common where climatic conditions of warmth and moisture are ideal for mosquito development. The heartworm life cycle is complicated and typically involves these stages: 1) mature heartworms living within a host dog produce thousands of microscopic offspring – microfilaria – that circulate in the blood; 2) a mosquito bites a dog (or wild canine) that is infected with mature heartworms; 3) the mosquito ingests microfilaria during the blood meal; 4) the microfilaria enter the mosquito and develop into infective larvae; 5) the mosquito bites another dog and transfers the infective larvae into a new canine host; 6) the larvae develop and migrate into the arteries of the lungs where they mature. The entire life cycle takes about 185 days! Where mosquitoes are active year round, HWD is spread throughout the year. This is characteristic of subtropical climates, including Florida and the Gulf coast states in the USA. Heartworm infection is also very common in geographic locations that experience four seasons. While the infection cannot be spread during the cold winter, dogs in temperate climates are at high risk for infection between the late spring and mid-autumn mosquito season. What determines your dog's risk to heartworm infection? Clearly mosquitoes are required to spread this disease, and local mosquito activity is a prime risk factor for your pet. Male dogs and dogs spending a great deal of time out-of-doors are at greater statistical risk. However, the most important predisposing factor for heartworm infection is the failure of a dog to receive appropriate heartworm prevention on a regular basis. Fortunately, the pet owner can control this risk factor with INTERCEPTOR (milbemycin oxime) Flavor Tabs from Novartis Animal Health. INTERCEPTOR (milbemycin oxime) Flavor Tabs prevent deadly heartworm disease, while they protect your dog against roundworms, hookworms (A. caninum) and whipworms. And INTERCEPTOR Flavor Tabs are approved for puppies as young as four weeks, weighing two pounds or more. INTERCEPTOR Flavor Tabs are clean, safe, and effective. Safe INTERCEPTOR has been tested to satisfy FDA requirements (NADA #141-915, Approved by the FDA). It has been used around the globe for over 10 years. Millions of dog owners and their veterinarians trust INTERCEPTOR. As with other heartworm preventives, dogs must be tested for heartworm prior to using INTERCEPTOR Flavor Tabs. In a small percentage of treated dogs, digestive and neurological side effects may occur. 
Effective INTERCEPTOR® (milbemycin oxime) Flavor Tabs® are 100% effective in preventing heartworm disease, and over 97% effective against roundworms, hookworms (A. caninum) and whipworms. Where other products leave your pet vulnerable, INTERCEPTOR provides proven, effective protection against the labeled parasites.
<urn:uuid:9a5b2b8b-a6d4-4075-84de-59ff76cb0378>
{ "dump": "CC-MAIN-2014-42", "url": "http://www.petplace.com/dogs/interceptor-milbemycin-oxime-flavor-tabs/page1.aspx", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507450097.39/warc/CC-MAIN-20141017005730-00126-ip-10-16-133-185.ec2.internal.warc.gz", "language": "en", "language_score": 0.9304018616676331, "token_count": 816, "score": 3.6875, "int_score": 4 }
John Metcalfe was CityLab’s Bay Area bureau chief, covering climate change and the science of cities. Researchers say roof racks are “responsible for almost 1 percent of national fuel consumption.” Having a car rack for bikes or kayaks is a good way to show the world that you’re an outdoorsy, planet-loving person. But those contraptions might be taking their own toll on the environment by devastating your vehicle’s fuel consumption. Last year alone, roof racks ate up some 100 million gallons of gasoline throughout the U.S. due to their aerodynamic drag, according to a new study in Energy Policy. That represents about 0.8 percent of national fuel consumption by light-duty vehicles, say co-authors Yuche Chen of the National Renewable Energy Laboratory and Alan Meier of Berkeley Lab. In fact, they say the simple act of putting a rack on your ride can (depending on how it’s attached) cost you up to 25 percent more in fuel consumption. That’s not just a problem for your wallet but possibly for the climate, too, as the use of racks is estimated to jump 200 percent by 2040. Here’s more from a Berkeley press release: “A national perspective is still needed to justify policy actions,” the authors write. “For comparison, the additional fuel consumption caused by roof racks is about six times larger than anticipated fuel savings from fuel cell vehicles and 40 percent of anticipated fuel savings from battery electric vehicles in 2040.”... [M]anufacturers have found that it is possible to make roof racks with greatly improved aerodynamics. A policy to require energy labeling of roof racks could spur greater changes, the researchers note. Even greater energy savings would come from removing roof racks when not in use. Meier notes that they could be designed so as to be easier to remove. The researchers estimated that a government policy to minimize unloaded roof racks (admittedly extreme) in combination with more energy-efficient designs would result in cumulative savings of the equivalent of 1.2 billion gallons of gasoline over the next 26 years.
<urn:uuid:5738c37d-8cdf-4bfd-b6a8-329acaa38e0c>
{ "dump": "CC-MAIN-2019-47", "url": "https://www.citylab.com/transportation/2016/04/roof-bike-rack-fuel-efficiency-study/480087/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669454.33/warc/CC-MAIN-20191118053441-20191118081441-00119.warc.gz", "language": "en", "language_score": 0.9474515318870544, "token_count": 442, "score": 3.21875, "int_score": 3 }
What is Pediatric Neurology?
Pediatric neurology is the branch of medicine which deals with the prevention, diagnosis, management, and treatment of neurological conditions in newborns, infants, children, and adolescents. The full range of neurological diseases is encompassed in this category, including diseases and disorders of the spinal cord, brain, peripheral nervous system, autonomic nervous system, muscles and blood vessels. The conditions and diseases diagnosed and treated by a pediatric neurologist vary considerably, from simple conditions like a migraine to rare and complex neurological disorders. Pediatric neurologists act as consultants to primary care physicians, who refer children to a specialist for better and more personalized care. Our pediatric neurologists treat children from birth to young adulthood. The core of their medical practice is directed towards the care of children only, and the advanced training they receive makes them highly equipped to meet a child's unique needs. Around 45 percent of the children who come for treatment have epilepsy, while 20 percent have learning disabilities and 20 percent suffer from headaches. The following are the conditions treated by a pediatric neurologist -
- Seizure disorders including seizures in newborns, febrile convulsions, and epilepsy.
- Genetic diseases and congenital metabolic abnormalities of the nervous system, including congenital birth defects that affect the brain and spinal cord, such as spina bifida.
- Weakness in children, including cerebral palsy, muscular dystrophy and nerve-muscle disorders.
- Headaches and migraine.
- Behavioral disorders, including attention-deficit/hyperactivity disorder (ADHD), school failure, autism and sleep problems.
- Developmental disorders, including delayed speech, delayed motor milestones, and coordination issues.
- Intellectual disability.
- Sleep disorders which affect the overall health of children, such as sleep apnea and breathing problems.
- Hospice and palliative medicine.
<urn:uuid:e16106d2-a3d6-40df-bf27-beef6c20b123>
{ "dump": "CC-MAIN-2020-40", "url": "https://www.ibshospitals.com/sub/pediatric-neurology", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400217623.41/warc/CC-MAIN-20200924100829-20200924130829-00765.warc.gz", "language": "en", "language_score": 0.9079351425170898, "token_count": 511, "score": 3.015625, "int_score": 3 }
To learn more about the British Army of the Rhine, Anglo-German relations or British military communities, look at the links, exhibitions and programmes below: Imperial War Museums – https://www.iwm.org.uk/ – a rich archive of material relating to the British in Germany, particularly the initial occupation 1945-1955 and a significant photographic collection. National Army Museum, Chelsea – https://www.nam.ac.uk/ – significant collection of objects and documents relating to the British Army in Germany. Major exhibition series 2020-1: https://www.nam.ac.uk/series/foe-friend AlliiertenMuseum, Berlin – http://www.alliiertenmuseum.de/en/home.html The British in North Rhine-Westphalia: an exhibition in the Landtag Nordrhein-Westfalen, Düsseldorf, 4th May – 2nd June 2019. The British contributed to shaping the State of North Rhine-Westphalia: They founded it, they organized it on a democratic basis and left a huge cultural footprint. From the 4th of May through to the 2nd of June the North Rhine-Westphalia State Parliament will present the British in North Rhine-Westphalia” exhibition in order to generate discussion with a wider public and encourage reflection on NRW’s joint Anglo-German history. The bilingual exhibition, curated by Dr. Bettina Blum, will display approx. 300 exhibits on 150 square meters (photos, various objects, audiovisual media, reproductions of documents). It throws a new light on the numerous and varied Anglo-German relationships, dating from 1945 through to the present day. An array of topics will be covered,ranging from the requisitioning of housing to cooperation at the workplace, the celebration of carnival and sport events, experiences of binational couples and families up to the problems of learning a new language, the military exercises and social debates concerning the relationship between the military and society. Glimpses of life in the British barracks and settlements provide an additional insight into how the soldiers and families lived in and experienced North Rhine-Westphalia. The exhibition ends with the question “What remains?” and will incorporate the views and thoughts from the visitors. For group bookings contact: [email protected] If you have questions concerning the exhibition, the tours, or any special requirements, please contact Dr. Bettina Blum: [email protected] British Forces Germany Legacy Project – https://bfgnet.de/legacy-project.html BBC Radio 4, ‘The British Germans’, 2011, prod. Chris Bowlby –https://www.bbc.co.uk/programmes/b0174h5t – Many thousands of former soldiers who have decided to stay in Germany. In this programme Chris Bowlby goes in search of these ‘ British Germans’, and traces their relationship with Germany and Germans. Roy Bainton, The Long Patrol: The British in Germany Since 1945 (2003) Susan Carruthers, The Good Occupation: American Soldiers and the Hazards of Peace (2016) Charlie Hall, British Exploitation of German Science and Technology, 1943-1949 (2019) Adam Seipp, Strangers in the Wild Place: Refugees, Americans, and a German Town, 1945-1952 (Bloomington, 2013) Christopher Knowles, Winning the Peace: The British in Occupied Germany, 1945-1948 (2017)
<urn:uuid:4376e3db-7be4-4463-beff-90c45e4ed9e5>
{ "dump": "CC-MAIN-2022-33", "url": "https://britishbasesingermany.wordpress.com/resources/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00764.warc.gz", "language": "en", "language_score": 0.8883774280548096, "token_count": 765, "score": 2.515625, "int_score": 3 }
Students in the Charleston County School District are receiving real world STEM experiences outside the classroom for two full days a year at Camp Blackbaud on Daniel Island. Thanks to a partnership with Charleston Promise Neighborhood and the school district, students from participating schools attend a camp for middle school students which aims to get them out of the classroom and into a real world environment geared towards Science, Technology, Engineering and Mathematics (STEM). The camp began as a program focused on fifth grade software coding and programming and expanded last year to include a separate middle school camp focused on robotics. One of the best things that middle school eighth grader Saniyah Rivers likes is the ability to learn about new topics. “When I heard about [the camp] in my mini course in STEM at school and how good Blackbaud was, I knew I wanted to sign up,” said Rivers. “I really like learning about new things, like working with robots. I like working in groups, too… Oh and the glass elevator and the atrium in the building are amazing.” According to Blackbaud Corporate Citizenship Coordinator Gabrielle Sanders the camp not only benefits students but also associates who serve as counselors. “Through camp, our associates are not only able to pass their knowledge and expertise onto the next generation, but also give back to our community through skills-based volunteerism,” said Sanders. “I think my favorite part is just seeing the kids get excited about something they haven’t done before,” said Blackbaud associate and counselor Courtney Grainger. “Coding is just becoming much more pervasive in regards to the school systems and this is really introducing them to concepts that they may have not had exposure to in their classrooms.”
<urn:uuid:2c8e1c63-72ad-465f-82ba-491847d7b1b8>
{ "dump": "CC-MAIN-2023-14", "url": "https://www.howtolearn.com/2017/12/real-world-stem-experiences-outside-the-classroom/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948756.99/warc/CC-MAIN-20230328011555-20230328041555-00541.warc.gz", "language": "en", "language_score": 0.9671552777290344, "token_count": 364, "score": 3.03125, "int_score": 3 }
Future President Herbert Hoover is born on this day in 1874 in West Branch, Iowa. After being tragically orphaned at the age of nine, Hoover lived with his uncle, attended Quaker schools and then graduated from Stanford University with a degree in engineering. Hoover and his wife Lou went to China in 1899, where he worked as an engineering consultant for the Chinese government. The next year, Chinese nationalists rebelled against European colonial control and besieged westerners living in the city of Tientsin. During the siege, Hoover led an enclave of westerners in building barricades near their residential section of Tientsin. One story about this dramatic event has Hoover narrowly escaping with his life while attempting to rescue some Chinese children caught in the crossfire of urban combat one day during the month-long siege. After an international coalition of troops rescued the Hoovers and spirited them and other westerners out of China, Herbert Hoover was made a partner at Bewick, Moreing and Co. He and Lou split their time between residences in California and London and traveled the world between 1901 and 1909. They then returned to the U.S. and, after serving as secretary of commerce under Presidents Warren Harding and Calvin Coolidge from 1921 to 1924, Hoover headed the American Child Health Association and served as chairman of the Federal Street and Highway Safety Commission. During World War I, Lou chaired the American Women’s War Relief Fund and worked on behalf of other war-related charitable organizations. Both Hoovers, inspired by their experience in China, were active in helping refugees and tourists stranded in hostile countries. In 1928, Hoover ran for president and won. Remarkably, it was his first election campaign—he had been appointed to his previous government positions. During the 1928 campaign, Hoover optimistically asserted that America was on the verge of snuffing out poverty and said “the poorhouse is vanishing from among us.” Despite his warnings against market speculation, the great stock market crash of October 1929 occurred less than a year into his presidency. Hoover’s tenure was thereafter associated with his inability to lead the nation out of the Great Depression and the couple’s reputation was soon tarnished by Hoover’s ineffective leadership and Lou’s ostentatious White House social functions, which appeared heartless, frivolous and irresponsible at a time when many Americans could hardly make ends meet. As the Depression deepened, a growing number of shanty towns full of destitute unemployed workers sprang up in city centers; they became known as “Hoovervilles.” Still, Hoover resisted implementing the type of emergency government relief programs advocated by his 1932 presidential campaign opponent, and ultimate successor, Franklin Delano Roosevelt. After his tenure in the White House, Hoover worked as an advisor for economic and humanitarian relief programs. In 1947, he worked closely with then-President Harry Truman to combat worldwide famine and to help European nations with post-World War II reconstruction. Hoover died of heart failure on October 20, 1964, in New York City. His beloved wife Lou had died 20 years before, in 1944.
<urn:uuid:dcc6198e-fa73-4778-aa72-881ca18688ef>
{ "dump": "CC-MAIN-2018-09", "url": "http://www.history.com/this-day-in-history/herbert-hoover-is-born/print", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891811352.60/warc/CC-MAIN-20180218023321-20180218043321-00094.warc.gz", "language": "en", "language_score": 0.9783477783203125, "token_count": 637, "score": 4, "int_score": 4 }
The world’s fisheries are in crisis. Decades of overfishing have severely depleted stocks in 90 percent of fisheries, and many are now facing collapse. Industrial fishing fleets scour the seas with relentless efficiency to meet ever-growing demand. The crisis has become particularly acute on the African coast where heavily subsidized Chinese ships are crushing the local fishing industry. With each ship scooping up as many fish in a week as Senegalese boats can in a year, a growing Chinese presence is costing West African economies $2 billion a year. At the same time, European ships grossly underreport their catch, exploiting weak enforcement regimes, and American diets help fuel Chinese export demand. A global compact to address the crisis of illegal fishing came into effect last year, but until participation is universal, policy geared toward long-term sustainability, and enforcement stringent, extractive economic power will endanger the vitality of our common ocean resources.
<urn:uuid:8f4f1495-70cf-4c8b-ac1e-be86a88ef93d>
{ "dump": "CC-MAIN-2023-23", "url": "https://www.greattransition.org/post/plunder-on-the-high-seas", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646457.49/warc/CC-MAIN-20230531090221-20230531120221-00743.warc.gz", "language": "en", "language_score": 0.9257209897041321, "token_count": 248, "score": 2.53125, "int_score": 3 }
Afghanistan, a Dangerous Place to Be, to Have a Baby “Afghanistan remains one of the most dangerous and most violent, crisis ridden countries in the world,” where one third of the population needs help, according to the United Nations. A recent report by the UN Office for the Coordination of Humanitarian Affairs found that 9.3 million people in Afghanistan are in need of aid due to armed conflict. The population of Afghanistan is more than 32 million. In 2016, every province in Afghanistan was affected by a natural disaster or armed conflict. More than half were affected by both. The fighting killed more than 8,000 civilians in the first nine months of 2016. A half million people lost their homes by November. Jens Laerke is with the UN Office for the Coordination of Humanitarian Affairs. More than half of those displaced were children. The United Nations International Children’s Emergency Fund, or UNICEF, says children and mothers are at great risk. The organization calls Afghanistan one of the most dangerous places in the world to be a baby, a child or mother because of limited access to health care. UNICEF reports that thousands of Afghan women die every year because of problems linked with pregnancy. It says those deaths can be prevented. In 2015, it says, more than one in every 18 Afghan children died before their first birthday. UNICEF spokesman Christophe Boulierac says their poor diet is a silent emergency. He says more than 41 percent of Afghan children under age five are stunted. It is one of the highest rates in the world. "Stunting, as you know, is a sign of chronic undernutrition during the most critical periods of growth and development in early life. Children who suffer from stunting are more likely to contract disease, less likely to access basic health care and do not perform well in school." Boulierac says that the education system in Afghanistan has been destroyed by more than thirty years of conflict. He says three and a half million children do not go to school. An estimated 75 percent of them are girls. I’m Dorothy Gundy. Lisa Schlein reported on this story for TingVOA.com. Dorothy Gundy adapted this story for Learning English. Hai Do was the editor.
<urn:uuid:41fbaaf6-4d4b-442f-9ca5-15d276bc6533>
{ "dump": "CC-MAIN-2019-47", "url": "http://m.iamlk.cn/article/article/id/3757", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671106.83/warc/CC-MAIN-20191122014756-20191122042756-00096.warc.gz", "language": "en", "language_score": 0.9593117237091064, "token_count": 477, "score": 2.609375, "int_score": 3 }
One of the most controversial aspects of Neuroendocrine Neoplasms, in particular low grade Neuroendocrine Tumours (NETs), is the ‘benign vs malignant’ question. It’s been widely debated and it frequently patrols the various patient forums and other social media platforms. It raises emotions and it triggers many responses ….. at least from those willing to engage in the conversation. At best, this issue can cause confusion, at worst, it might contradict what new patients have been told by their physicians (….or not been told). I don’t believe it’s an exact science and can be challenging for a NET specialist let alone a doctor who is not familiar with the disease. NANETS Guidance talks about the ‘…heterogeneous clinical presentations and varying degrees of aggressiveness‘ and ‘…there are many aspects to the treatment of neuroendocrine tumours that remain unclear and controversial‘. I’m sure the ‘benign vs malignant’ issue plays a part in these statements. In another example, ENETS Guidance discusses (e.g.) Small Intestine Tumours (Si-NETs) stating that they ‘derive from serotonin-producing enterochromaffin cells. The biology of these tumors is different from other NENs of the digestive tract, characterized by a low proliferation rate [the vast majority are grade 1 (G1) and G2], they are often indolent’. However, they then go on to say that ‘Si-NETs are often discovered at an advanced disease stage – regional disease (36%) and distant metastasis (48%) are present‘. It follows that the term ‘indolent‘ does not mean they are not dangerous and can be ignored and written off as ‘benign’. This presents a huge challenge to physicians when deciding whether to cut or not to cut. To fully understand this issue, I studied some basic (but very widely accepted) definitions of cancer. I also need to bring the ‘C’ word into the equation (Carcinoid), because the history of these tumours is frequently where a lot of the confusion lies. The use of the out of date term by both patients, patient advocates and doctors exacerbates the issue given that it decodes to ‘carcinoma like‘ which infers it is not a proper cancer. See more below. Let’s look at these definitions provided by the National Cancer Institute. Please note I could have selected a number of organisations but in general, they all tend to agree with these definitions give or take a few words. These definitions help with understanding as there can be an associated ‘tumour’ vs ‘cancer’ debate too. Cancer – Cancer is the name given to a collection of related diseases. In all types of cancer, some of the body’s cells begin to divide without stopping and spread into surrounding tissues. There are more than 100 types of cancer which are usually named for the organs or tissues where the cancers form. However, they also may be described by the type of cell that formed them. Author’s note: The last sentence is important for Neuroendocrine Tumour awareness (i.e. Neuroendocrine Tumour of the Pancreas rather than Pancreatic Cancer). Carcinoma – Carcinomas are the most common grouping of cancer types. They are formed by epithelial cells, which are the cells that cover the inside and outside surfaces of the body. There are many types of epithelial cells, which often have a column-like shape when viewed under a microscope. Author’s note: By definition, Carcinomas are malignant, i.e. they are without question malignant cancers. 
Poorly differentiated Neuroendocrine Neoplasms are deemed to be a ‘Neuroendocrine Carcinoma’ according to the most recent World Health Organisation (WHO) classification of Neuroendocrine Tumours (2017) and ENETS 2016 Guidance. You will have heard of some of the types of Carcinoma such as ‘Adenocarcinoma’ (incidentally, the term ‘Adeno’ simply means ‘gland’). It follows that Grade 3 Neuroendocrine Carcinomas (NEC) are beyond the scope of this discussion. Malignant – Cancerous. Malignant cells can invade and destroy nearby tissue and spread to other parts of the body. Benign – Not cancerous. Benign tumors may grow larger but do not spread to other parts of the body. Author’s Note: This is a key definition because there are people out there who think that low grade NETs are not cancer. Tumour (Tumor) – An abnormal mass of tissue that results when cells divide more than they should or do not die when they should. Tumors may be benign (not cancerous), or malignant (cancerous). Also called Neoplasm. Author’s Note: Neoplasm is an interesting term as this is what is frequently used by ENETS and NANETS in their technical documentation, sometimes to cover all Neuroendocrine types of cancer (Tumor and Carcinoma). It follows that a malignant tumour is Cancer. The term “Malignant Neuroendocrine Tumour” is the same as saying “Neuroendocrine Cancer”. Neuroendocrine Tumours – Benign or Malignant? Definitions out of the way, I have studied the ENETS, UKINETS and NANETS guidance, all of which are based on internationally recognised classification schemes (i.e. the World Health Organisation (WHO)). In older versions of the WHO classification schemes (1980 and 2000), the words ‘benign’ and ‘uncertain behaviour’ were used for Grades 1 and 2. However, in the 2010 edition, the classification is fundamentally different (as is the recent 2017 publication). Firstly, it separated out grade and stage for the first time (stage would now be covered by internationally accepted staging systems such as TNM – Tumour, (Lymph) Nodes, Metastasis). Additionally, and this is key to the benign vs malignant discussion, the WHO 2010 classification is based on the concept that all NETs have malignant potential. Here’s a quote from the UKINETS 2011 Guidelines (Ramage, Caplin, Meyer, Grossman, et al.): Tumours should be classified according to the WHO 2010 classification (Bosman FT, Carneiro F, Hruban RH, et al. WHO Classification of Tumours of the Digestive System. Lyon: IARC, 2010). This classification is fundamentally different from the WHO 2000 classification scheme, as it no longer combines stage related information with the two-tiered system of well and poorly differentiated NETs. The WHO 2010 classification is based on the concept that all NETs have malignant potential, and has therefore abandoned the division into benign and malignant NETs and tumours of uncertain malignant potential. The guidance in WHO 2017 reinforces this statement to include endocrine organs, including the pancreas and adrenal glands. The C Word (Carcinoid) – part of the problem? History lesson – Carcinoid tumours were first identified as a specific, distinct type of growth in the mid-1800s, and the name “karzinoide” was first applied in 1907 by German pathologist Siegfried Oberndorfer in Europe in an attempt to designate these tumors as midway between carcinomas (cancers) and adenomas (benign tumors). The word ‘Carcinoid’ originates from the term ‘Carcinoma-like’. ‘CARCIN’ is a truncation of Carcinoma.
‘OID’ is a suffix used in medical parlance meaning ‘resembling’ or ‘like’. This is why many people think that Carcinoid is not a proper cancer. The situation is made even more confusing by those who use the term “Carcinoid and Neuroendocrine Tumors” inferring that it is a separate disease from the widely accepted and correct term ‘Neuroendocrine Tumor’ or Neuroendocrine Neoplasm. A separate discussion on this subject can be found in this post here. I encourage you to stop using the term ‘Carcinoid’ which is just perpetuating the problem. How are NENs Classified? If you read any NET support website it will normally begin by stating that Neuroendocrine Tumours constitute a heterogeneous group of tumours. This means they are a wide-ranging group of different types of tumours. However, the latest WHO classification scheme uses the terms ‘Neuroendocrine Tumour’ for well differentiated Grade 1 (low-grade), Grade 2 (Intermediate Grade) and Grade 3 (High Grade) NET; and ‘Neuroendocrine Carcinoma’ (NEC) for poorly differentiated tumours which are by default grade 3 or high grade. They also use the term ‘Neoplasm’ to encompass all types of NET and NEC. So Grade 1 is a low-grade malignancy and so on (i.e any grade of NET is a malignant tumour). You may benefit from reading my blog article on Staging and Grading of NETs as this is also a poorly understood area. Can some NETs be Benign? By any accepted definition of cancer terms, a tumour can be non-cancerous (benign) or cancerous (malignant). This is correct for any cancer type. For example, the word is used in the 2016 version of Inter Science Institute publication on Neuroendocrine Tumors, a document I frequently reference in my blog. For example, I’ve seen statements such as “These tumors are most commonly benign (90%)” in relation to Insulinoma (a type of Pancreatic NET or pNET). Ditto for Pheochromocytoma (an adrenal gland NET). Adrenal and Pituitary ‘adenomas’ are by definition benign (adenoma is the benign version of Adenocarcinoma). And I note that there is a ‘benign’ code option for every single NET listed in the WHO International Classification of Diseases (ICD) system. The ‘BUT‘ is this – all WHO classification systems are based on the concept that NETs always have malignant potential. The WHO 2017 classification update confirmed this thinking by adding endocrine organs including the pancreas and adrenal glands. Can Tumours be Malignant or become Malignant? Using the definition above, if a tumour invades and destroy nearby tissue and spread to other parts of the body, then it’s malignant (i.e Cancer). However, there’s a reason why the WHO declared in 2010 that all NETs have malignant potential (as amplified in WHO 2017). It may not happen or it may happen slowly over time but as Dr Richard Warner says, “they don’t all fulfill their malignant potential, but they all have that possible outcome”. Thus why ongoing surveillance is important after any diagnosis of Neuroendocrine Tumour of any grade or at any stage. Dr Lowell Anthony, a NET Specialist from the University of Kentucky explains this much better than I can – CLICK HERE to hear his two-minute video clip. This issue even caused confusion with doctors, some of whom still think a Stage 4 NET is still benign. Not only is this very insensitive to the patient concerned but it also goes against all the definitions of ‘benign’ and ‘malignant’ that exist in authoritative texts. This was a difficult piece of research. 
I do believe there are scenarios where NETs will be benign and probably never cause the person any real harm (e.g. many are found on autopsies). I suspect this is the same for many cancers. However, based on the above text and the stories of people who have presented for a second time but with metastatic disease, use of the word ‘benign’ is probably best used with great care. I would certainly (at least) raise an eyebrow if someone said to anyone with any NET tumour, “you don’t need any treatment or surveillance for a NET”; or “it has been cured and no further treatment or surveillance is required”. Particularly if they are not a NET specialist or a recognised NET Centre. Remember, I’m not a medical professional, so if you are in any doubt as to the status of your NET, you should discuss this directly with your specialist. A good place to start is evidence of your Grade, Differentiation, Primary Site Location and Stage. You may be interested in reading these associated posts: Thanks for reading
<urn:uuid:f35f0bfe-5fe6-41dc-9b0f-ce3e42d5c8af>
{ "dump": "CC-MAIN-2019-39", "url": "https://ronnyallan.net/2016/11/28/neuroendocrine-tumours-benign-vs-malignant/?shared=email&msg=fail", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573071.65/warc/CC-MAIN-20190917121048-20190917143048-00418.warc.gz", "language": "en", "language_score": 0.9319804310798645, "token_count": 2747, "score": 2.65625, "int_score": 3 }
There is no standard protocol for evaluating teaching approaches in higher education, meaning faculty must look elsewhere to determine what works and what doesn't, Angela Carbone writes for The Conversation. Even the most skilled instructors face uncertainty when designing a curriculum. They wonder whether adding online learning will improve student outcomes, whether to allow students to bring their own devices to class, or whether the lecture model is really as outdated as everyone says it is. Instructors need to know which methods will improve several metrics of student success, including: - Motivation and inspiration; - Study approaches; and - Employability. Unfortunately, says Carbone, tens of thousands of studies attempt to assess teaching approaches, but their results are often limited to specific circumstances—and may even appear to contradict each other entirely. "For every study that says a change is better—for example, introducing social media into the classroom—there will be another that argues the opposite," writes Carbone. There's also no set standard of methodology, which can make it difficult to draw broad conclusions from narrow studies. Research in the physical sciences, for instance, often focuses on facts and data, while studies in the social sciences often focus on developing an understanding over a long period of time. So where do instructors get their answers about student success? Because of the discrepancies in the field, Carbone says the closest an instructor will come to determining best practices is by looking at a meta-study, which presents a systematic review of research synthesized into main conclusions. Carbone offers several examples of meta-studies that synthesize broad research into key conclusions. One such synthesis is the Blended Learning in Higher Education study from the Association for Information Systems (AIS), which looked at the best practices for blending online and face-to-face components. AIS found that the most effective mix depends on the following criteria: - Professional development support available at the institution; - The instructor's experience using technology for work; - How willing the instructor is to test new approaches; - Technical support available at the institution; - The types of students enrolled in the class; - The students' technology accessibility; - The students' outside commitments; - The students' campus accessibility; and - The type of course being taught. An additional meta-analysis, which speaks most directly to the question of best practices for teaching, is the Australian Government's Office for Learning and Teaching's (OLT) review of teaching quality. That meta-analysis found that the following dimensions determine teaching quality: - How personable the teacher is; - Whether the teacher is performative; - How well the teacher creates interactions; - How much the teacher motivates students; - Whether the teacher's demands are realistic; - If the teacher has an international perspective; - How well the teacher helps students find meaning; - Whether the teacher uses effective assessment processes; and - Whether the teacher develops autonomy among students. (Carbone, The Conversation, 10/17).
<urn:uuid:d718493d-5ccd-4bd7-8597-fc8567897ac1>
{ "dump": "CC-MAIN-2019-13", "url": "https://www.eab.com/daily-briefing/2016/10/31/why-its-so-hard-to-evaluate-teaching-methods", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912207618.95/warc/CC-MAIN-20190327020750-20190327042750-00278.warc.gz", "language": "en", "language_score": 0.9349152445793152, "token_count": 669, "score": 2.78125, "int_score": 3 }
Kerala or Keralam is an Indian state, located at the southern end of the country's west coast. It was created on 1 November 1956, by the States Reorganisation Act, combining various Malayalam-speaking regions. The state has an area of 38,863 km2 (15,005 sq mi) and is bordered by Karnataka to the north and northeast, Tamil Nadu to the south and southeast and the Arabian Sea towards the west. Thiruvananthapuram is the capital city. Kochi and Kozhikode are other major cities. Kerala is also known for its many small towns that are scattered across the state, thus creating a higher density of population. Kerala is a popular tourist destination famous for its backwaters, Ayurvedic treatments and tropical greenery. Kerala has the highest Human Development Index of all Indian states. The state has a literacy rate of 94.59 percent, also the highest in India. A survey conducted in 2005 by Transparency International ranked Kerala as the least corrupt state in the country. Kerala has witnessed significant migration of its people, especially to the Persian Gulf countries during the Kerala Gulf boom, and is heavily dependent on remittances from its large Malayali expatriate community. History of Kerala: The spices from the Malabar coast may have landed initially at the Gulf of Aden and were eventually transported to the East African trading ports in and around the city known in Greco-Roman literature as Rhapta. Merchants then moved the commodities northward along the coast. In Roman times, they traveled to Muza in Yemen and finally to Berenice in Egypt. From Egypt they made their way to all the markets of Europe and West Asia. The beginning of the trade is hinted at in Egyptian hieroglyphic inscriptions during the New Kingdom period about 3,600 years ago. The Pharaohs of Egypt opened up special relationships with the kingdom of Punt to the south. Although the Egyptians knew of Punt long before this period, it was during the New Kingdom that we really start hearing of important trade missions to that country that included large cargoes of spices. Particularly noteworthy are the marvelous reliefs depicting the trade mission of Queen Hatshepsut of the 18th Dynasty. Culture of Kerala: Kerala's culture is derived from both a Tamil-heritage region known as Tamilakam and southern coastal Karnataka. Later, Kerala's culture was elaborated upon through centuries of contact with neighboring and overseas cultures. Native performing arts include koodiyattom (a 2000-year-old Sanskrit theatre tradition, officially recognised by UNESCO as a Masterpiece of the Oral and Intangible Heritage of Humanity), kathakali—from katha ("story") and kali ("performance")—and its offshoot Kerala natanam, Kaliyattam (a North Malabar speciality), koothu (akin to stand-up comedy), mohiniaattam ("dance of the enchantress"), Theyyam, thullal and padayani. Kathakali and Mohiniattam are widely recognized Indian Classical Dance traditions from Kerala. Elephants are an integral part of daily life in Kerala. Indian elephants are loved, revered, groomed and given a prestigious place in the state's culture. They are often referred to as the 'sons of the sahya.' The ana (elephant) is the state animal of Kerala and is featured on the emblem of the Government of Kerala. Tourism in Kerala: Kerala, a state situated on the tropical Malabar Coast of southwestern India, is one of the most popular tourist destinations in the country. 
Named as one of the ten paradises of the world by the National Geographic Traveler, Kerala is famous especially for its ecotourism initiatives.Its unique culture and traditions, coupled with its varied demography, has made Kerala one of the most popular tourist destinations in the world. Growing at a rate of 13.31%, the tourism industry is a major contributor to the state's economy. Until the early 1980s, Kerala was a hitherto unknown destination, with most tourism circuits concentrated around the north of the country.Aggressive marketing campaigns launched by the Kerala Tourism Development Corporation—the government agency that oversees tourism prospects of the state—laid the foundation for the growth of the tourism industry. In the decades that followed, Kerala Tourism was able to transform itself into one of the niche holiday destinations in India. The tag line Kerala- God's Own Country was adopted in its tourism promotions and became synonymous with the state. Today, Kerala Tourism is a global superbrand and regarded as one of the destinations with the highest brand recall. In 2006, Kerala attracted 8.5 million tourists–an increase of 23.68% in foreign tourist arrivals compared to the previous year, thus making it one of the fastest growing tourism destination in the world. The state's tourism agenda promotes ecologically sustained tourism, which focuses on the local culture, wilderness adventures, volunteering and personal growth of the local population. Efforts are taken to minimise the adverse effects of traditional tourism on the natural environment, and enhance the cultural integrity of local people. tourist places in Kerala: Flanked on the western coast by the Arabian Sea, Kerala has a long coastline of 580 km (360.39 miles); all of which is virtually dotted with sandy beaches. Kovalam beach near Thiruvananthapuram was among the first beaches in Kerala to attract tourists. Rediscovered by back-packers and tan-seekers in the sixties and followed by hordes of hippies in the seventies, Kovalam is today the most visited tourist destination in the state. Other popularly visited beaches in the state include those at Alappuzha Beach,Nattika beach[Thrissur], Vadanappilly beach[Thrissur], Cherai Beach, Kappad, Kovalam, Marari beach, Fort Kochi and Varkala. The Muzhappilangad Beach beach at Kannur is the only drive-in beach in India. The backwaters in Kerala are a chain of brackish lagoons and lakes lying parallel to the Arabian Sea coast (known as the Malabar Coast). Kettuvallam (Kerala houseboats) in the backwaters are one of the prominent tourist attractions in Kerala. Alleppey, known as the "Venice of the East" has a large network of canals that meander through the town. The Vallam Kali (the Snake Boat Race) held every year in August is a major sporting attraction. The backwater network includes five large lakes (including Ashtamudi Kayal and Vembanad Kayal) linked by 1500 km of canals, both manmade and natural, fed by 38 rivers, and extending virtually the entire length of Kerala state. The backwaters were formed by the action of waves and shore currents creating low barrier islands across the mouths of the many rivers flowing down from the Western Ghats range. Eastern Kerala consists of land encroached upon by the Western Ghats; the region thus includes high mountains, gorges, and deep-cut valleys. 
The wildest lands are covered with dense forests, while other regions lie under tea and coffee plantations (established mainly in the 19th and 20th centuries) or other forms of cultivation. The Western Ghats rises on average to 1500 m elevation above sea level. Certain peaks may reach to 2500 m. Popular hill stations in the region include Devikulam, Munnar, Nelliyampathi, Peermade, Ponmudi, Vagamon, Wayanad and Kottanchery Hills. Most of Kerala, whose native habitat consists of wet evergreen rainforests at lower elevations and highland deciduous and semi-evergreen forests in the east, is subject to a humid tropical climate. however, significant variations in terrain and elevation have resulted in a land whose biodiversity registers as among the world’s most significant. Most of Kerala's significantly biodiverse tracts of wilderness lie in the evergreen forests of its easternmost districts. Kerala also hosts two of the world’s Ramsar Convention-listed wetlands: Lake Sasthamkotta and the Vembanad-Kol wetlands are noted as being wetlands of international importance. There are also numerous protected conservation areas, including 1455.4 km² of the vast Nilgiri Biosphere Reserve. In turn, the forests play host to such major fauna as Asian Elephant (Elephas maximus), Bengal Tiger (Panthera tigris tigris), Leopard (Panthera pardus), and Nilgiri Tahr (Nilgiritragus hylocrius), and Grizzled Giant Squirrel (Ratufa macroura). More remote preserves, including Silent Valley National Park in the Kundali Hills, harbor endangered species such as Lion-tailed Macaque (Macaca silenus), Indian Sloth Bear (Melursus (Ursus) ursinus ursinus), and Gaur (the so-called "Indian Bison" — Bos gaurus). More common species include Indian Porcupine (Hystrix indica), Chital (Axis axis), Sambar (Cervus unicolor), Gray Langur, Flying Squirrel, Swamp Lynx (Felis chaus kutas), Boar (Sus scrofa), a variety of catarrhine Old World monkey species, Gray Wolf (Canis lupus), Common Palm Civet (Paradoxurus hermaphroditus). Many reptiles, such as king cobra, viper, python, various turtles and crocodiles are to be found in Kerala — again, disproportionately in the east. Kerala's avifauna include endemics like the Sri Lanka Frogmouth (Batrachostomus moniliger), Oriental Bay Owl, large frugivores like the Great Hornbill (Buceros bicornis) and Indian Grey Hornbill, as well as the more widespread birds such as Peafowl, Indian Cormorant, Jungle and Hill Myna, Oriental Darter, Black-hooded Oriole, Greater Racket-tailed and Black Drongoes, bulbul (Pycnonotidae), species of Kingfisher and Woodpecker, Jungle Fowl, Alexandrine Parakeet, and assorted ducks and migratory birds. Additionally, freshwater fish such as kadu (stinging catfish — Heteropneustes fossilis) and brackishwater species such as Choottachi (orange chromide — Etroplus maculatus; valued as an aquarium specimen) also are native to Kerala's lakes and waterways. The major festival in Kerala is Onam. Kerala has a number of religious festivals. Thrissur Pooram and Chettikulangara Bharani are the major temple festivals in Kerala. The Thrissur Pooram is conducted at the Vadakumnathan temple, Thrissur. The Chettikulangara Bharani is another major attraction. The festival is conducted at the Chettikulangara temple near Mavelikkara. The Sivarathri is also an important festival in Kerala. This festival is mainly celebrated in Aluva Temple and Padanilam Parabrahma Temple. Padanilam Temple is situated in Alappuzha district of Kerala, about 16 km from Mavelikkara town. 
Parumala Perunnal, Manarkadu Perunnal are the major festivals of Christians. Muslims also have many important festivals. Cuisine of Kerala: The cuisine of Kerala is linked in all its richness to the history, geography, demography and culture of the land. Kerala cuisine has a multitude of both vegetarian and non-vegetarian dishes prepared using fish, poultry and meat. The staple food of Kerala, like most South-Indian states, is rice. Unlike other states, however, many people in Kerala prefer parboiled rice (Choru) (rice made nutritious by boiling it with rice husk). Kanji (rice congee), a kind of rice porridge, is also popular. Tapioca, called kappa in Kerala, is popular in central Kerala and in the highlands, and is frequently eaten with fish curry. Common non-vegetarian dishes include stew (using chicken, beef, lamb, or fish), traditional or chicken curry (Nadan Kozhi Curry), chicken fry (Kozhi Porichathu/Varuthathu), fish/chicken/mutton molly(fish or meat in light gravy), fish curry (Meen Curry), fish fry (Karimeen Porichathu/Varuthathu), lobster fry (Konchu Varuthathu), Spicy Beef Fry (Beef Ularthiyathu), Spicy Steamed Fish (Meen Pollichathu) etc. Biriyani, a Mughal dish consists of rice cooked along with meat, onions, chillies and other spices.
<urn:uuid:6888fbe5-031c-4912-9457-bbc7d4d1f39d>
{ "dump": "CC-MAIN-2016-36", "url": "http://travelkeralatouristplaces.blogspot.com/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982974951.92/warc/CC-MAIN-20160823200934-00215-ip-10-153-172-175.ec2.internal.warc.gz", "language": "en", "language_score": 0.9396032691001892, "token_count": 2704, "score": 3.125, "int_score": 3 }
The CNC Machining Service: Detail Down To The Final Millimeter Ever since the industrial revolution, companies have been trying to make components with as few errors as possible. The issue was that handmade parts typically had minor faults that could cause machines to break down. For instance, a tiny error in the groove of a screw can make an alarm clock’s hands run slower, and as a result make the clock late. A CNC machining service normally comes into conversations like these. But few people have talked about what it actually is. When anyone bothered to define precision machining, the explanation frequently came off as too technical for a layman to understand. Defining Precision Machining To understand it better, let’s split precision machining into two words. Precision is all about exactness and detail. Machining, on the other hand, is the process of using equipment to make parts out of a raw material. Therefore, precision machining involves producing parts that match the actual blueprints or plans as closely as possible. Equipment used in precision machining Machinists no longer rely on manual approaches. Instead, they rely on fast, accurate machines that stick to the design they have in mind. Jets that spray water at extremely high pressure can cut through metal with ease. Most machines rely on computers to guide them through the process. Machinists only need to input the blueprints into the computers, and the machines produce the outputs. The typical program used is called Computer-Aided Design (CAD). More recently, lasers have become the new norm for precision machining. Lasers are quicker and a lot more precise compared to traditional machines. The standards of precision Defining a CNC machining service is easy, but another question arises. How precise should machining be? In the past, manufacturers treated precision of 0.1 millimeters as the standard. But now, machinists can guarantee accuracy to within 0.005 millimeters. This is a huge leap for the manufacturing business, given that products have become smaller. (Moving components for watches are so small they leave little room for error.) Why use a CNC machining service? If you are in the manufacturing industry, you want support from professionals who provide precision machining services. There are a number of firms that specialize in machining services; however, they are differentiated by their methods. Some use lasers, others use water, while others have different operating methods. In this world of high safety requirements, your machining approach can determine how effective or safe your product will be. The next time you look at a car, a refrigerator or a watch, stop and think about the CNC machining service that made them.
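The tolerance figure above lends itself to a quick illustration. The sketch below is a minimal, hypothetical Python example of checking measured dimensions against a ±0.005 mm limit; the part names and dimensions are invented for illustration and are not taken from any real CNC vendor's software.

```python
# Hypothetical quality-control check: flag measured dimensions that fall
# outside a +/-0.005 mm machining tolerance (the figure cited above).
TOLERANCE_MM = 0.005

# (feature_name, nominal_mm, measured_mm) triples for an imaginary machined part.
measurements = [
    ("screw_groove_depth", 1.200, 1.2004),
    ("bore_diameter",      6.350, 6.3561),
    ("flange_thickness",   2.000, 1.9997),
]

def within_tolerance(nominal: float, measured: float, tol: float = TOLERANCE_MM) -> bool:
    """Return True if the measured dimension deviates from nominal by at most tol."""
    return abs(measured - nominal) <= tol

for name, nominal, measured in measurements:
    status = "OK" if within_tolerance(nominal, measured) else "REJECT"
    print(f"{name}: nominal {nominal:.4f} mm, measured {measured:.4f} mm -> {status}")
```

Run as written, the second part would be rejected because its 0.0061 mm deviation exceeds the stated 0.005 mm limit, while the other two pass.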
<urn:uuid:65e8396e-4340-4479-beb4-d468943a9106>
{ "dump": "CC-MAIN-2017-51", "url": "http://www.cncmachinings.com/the-cnc-machining-service-detail-down-to-the-last-millimeter/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948585297.58/warc/CC-MAIN-20171216065121-20171216091121-00428.warc.gz", "language": "en", "language_score": 0.9249526858329773, "token_count": 566, "score": 2.578125, "int_score": 3 }
Here we are going to discuss 3 bad habits associated with having acid reflux, namely snoring, burping and bad breath. These bad habits affect our social well-being, so it is worthwhile to understand the condition and find out ways to reduce or eliminate acid reflux. How Does Acid Reflux Relate To Snoring? Acid reflux can cause many problems, and many people do not realize that there is actually a strong connection between snoring and reflux. Snoring is just one problem that can be caused by acid reflux and can be an indication that a problem with reflux is occurring. Both snoring and reflux are problems that can reduce the amount of quality sleep and have a negative effect on a person’s lifestyle. The problem with suffering from both snoring and reflux is usually that the suffering is continuous. Acid reflux affects every part of your day, while snoring leaves you weary and cross and unable to deal with the pain and discomfort caused by reflux. How Does Acid Reflux Relate To Burping? The basic symptom that is felt when an individual has reflux is a burning sensation of the esophagus, with the chance of the acid entering the mouth. Another frequent symptom that is sometimes experienced by individuals is burping with acid reflux. Recognizing Burping With Acid Reflux It is important to know that gas in the stomach, or burping, is a common occurrence for most individuals. The gas in the stomach can be caused either by swallowing air or by eating certain types of food. Therefore, it is important to differentiate between non-reflux burping and burping with acid reflux. Burping associated with reflux happens when the acid in a person’s stomach travels backwards into the esophagus, which causes a burning feeling in the chest area. If this excess acid travels into the mouth, it is then released as gas. This is the way to identify whether an individual’s burping is normal or associated with reflux. Moreover, the gas that is released through this process can smell extremely strong and very bad. How Does Reflux Relate To Bad Breath? It is usual for a person to suffer both bad breath and reflux. This is because the stomach acid backing up into the esophagus often carries with it the odor of the stomach acids that travel their way into the esophagus or all the way up into the throat. A person’s breath is affected by what they eat, so it follows that some of the same foods may be causing bad breath and reflux. However, a change of diet can help with both problems. When a person takes in hot and spicy foods, the chemicals in the food that make it spicy will enter the blood stream. When the blood passes through the lungs and is exchanged for oxygenated air, the person’s breath will reflect the smell leaving the blood stream. Chronic and persistent indigestion can cause damage beyond bad breath and reflux, as the hydrochloric acid in the stomach is highly corrosive. It can erode the lining of the esophagus and, if allowed to work upwards, can be painful in the ears and cause damage to the voice box and upper larynx. If a person feels there is acid rising in their throat, even when not followed by a burning sensation, it is a sign that reflux is occurring. Even once the symptoms seem to fade away, they may still suffer bad breath, and acid reflux may be the cause of it. 
The Common Treatment For Snoring, Burping And Bad Breath If, as a result of reflux, you suffer from snoring, burping or bad breath, the following measures can help: 1) Get rid of your excess weight. Excess weight is often the starting point of health problems. Achieving and maintaining your ideal weight is always best. One way to lose those kilos is to eat more frequently but in smaller portions at your meals. Your stomach will be able to cope with the smaller amount, enabling it to digest most of the food taken in. 2) Avoid eating foods which trigger reflux. Foods that may cause acid reflux are foods that add to the acidity of the digestive process. These include carbonated beverages, tomatoes, onions, certain types of fruit, etc. 3) Raise the torso area when you sleep. By doing so, the contents of the stomach will be less likely to seep up into the esophagus. 4) Treat the reflux with medication. This can mean over-the-counter products such as antacids, or prescription medicines that help to reduce the production of acid. 5) Avoid the use of tobacco. Smoking has also been proven to cause other, more serious illnesses such as cancer. These solutions will most probably reduce your occurrences of acid reflux and at the same time address the bad habits of snoring, burping and bad breath. Having acid reflux is hazardous and uncomfortable enough without going through other problems. However, for a truly effective cure, it is best to consult a doctor in the matter.
<urn:uuid:8bf53e28-a398-4808-94d5-763eb7eed14e>
{ "dump": "CC-MAIN-2018-26", "url": "http://medxr.com/what-you-should-know-about-bad-habits-through-having-acid-reflux/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864391.61/warc/CC-MAIN-20180622104200-20180622124200-00213.warc.gz", "language": "en", "language_score": 0.9467687606811523, "token_count": 1071, "score": 3.0625, "int_score": 3 }
In another of an ongoing series of attacks on developer resources, last week The Bleeping Computer reported that the PHP Git repository was hacked in an attempt to add a backdoor to the PHP source code. Yesterday two maliciously tainted files were uploaded to git.php.net and attributed to two actual PHP developers (Rasmus Lerdorf and Nikita Popov) to make them appear legitimate. "The first commit was found a couple of hours after it was made, as part of routine post-commit code review. The changes were rather obviously malicious and reverted right away." How Has PHP Responded? An investigation showed that the hackers had not compromised the developer accounts but the server itself, which was alarming. As a result, PHP has decided to migrate official PHP code to GitHub and decommission their server altogether. PHP officials commented that "While the investigation is still underway, we have decided that maintaining our own git infrastructure is an unnecessary security risk and that we will discontinue the git.php.net server." Popov further commented that "Instead, the repositories on GitHub, which were previously only mirrors, will become canonical." They also noted that all changes and updates going forward will be pushed to GitHub directly and that any contributing developers must be added to the PHP organization on GitHub. Anyone interested can read the full security announcement here. Those who will be joining must have two-factor authentication turned on in their GitHub account. The project is examining all code committed to the server to look for any additional compromised files. Because of the quick discovery and response, PHP does not believe that the malicious code made it into "any tags or release artifacts." The affected files were part of a development version of PHP 8.1 that won't be released to the public until later this year. What is PHP? PHP is a server-side programming language used for building websites and web applications. It was developed in 1994 by Rasmus Lerdorf, a Danish-Canadian programmer. The acronym originally stood for "Personal Home Page" but was later changed to "PHP: Hypertext Preprocessor." PHP is the backbone of many content management systems (WordPress, Drupal, Joomla, etc.). PHP only works on servers with it installed. Most hosting companies support PHP. PHP is open-source and free to use. PHP is relatively easy to learn compared to other programming languages. It is regularly updated and well supported, which makes it a popular choice among new developers. PHP works seamlessly with MySQL, and you can also use it with other databases like Postgres, Oracle, MS SQL Server, and ODBC, among others. It can easily be integrated within HTML code, making it light and easy to use. Roughly 20 million websites and applications use PHP code. Due to its wide use and growing popularity, the attack on the PHP source code is extremely alarming. Had the new version been rolled out with the malicious code included, millions of potential victims could have been affected. Thankfully, PHP has a process in place for new commits that checks every line of code for anything suspicious. This time, the crisis was averted.
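The closing point, that new commits are reviewed for anything suspicious, can be illustrated with a toy sketch. The Python snippet below is purely hypothetical: it is not PHP's actual review tooling, and the "suspicious" patterns are invented for illustration only. It just shows the general idea of scanning the added lines of a diff for red-flag constructs.

```python
import re

# Hypothetical red-flag patterns a reviewer or bot might grep for in added lines.
# These are illustrative only and are not PHP's real review criteria.
SUSPICIOUS_PATTERNS = [
    r"\beval\s*\(",          # dynamic code execution
    r"\bexec\s*\(",
    r"base64_decode\s*\(",   # common obfuscation helper
    r"HTTP_USER_AGENT",      # backdoors sometimes key off request headers
]

def flag_suspicious_lines(diff_text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs for added lines matching any pattern."""
    hits = []
    for number, line in enumerate(diff_text.splitlines(), start=1):
        if not line.startswith("+"):          # only inspect lines added by the commit
            continue
        if any(re.search(pattern, line) for pattern in SUSPICIOUS_PATTERNS):
            hits.append((number, line))
    return hits

# Invented sample diff for demonstration purposes.
sample_diff = """\
+    // minor typo fix in comment
+    $payload = base64_decode($_SERVER['HTTP_USER_AGENT']);
+    eval($payload);
"""

for number, line in flag_suspicious_lines(sample_diff):
    print(f"line {number}: {line.strip()}")
```

A real review pipeline would of course combine automated checks like this with the human post-commit review that actually caught the malicious commits.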
<urn:uuid:ae4e7dcb-d96e-4801-a6d7-3d97a6045811>
{ "dump": "CC-MAIN-2021-31", "url": "https://www.idstrong.com/sentinel/git-internal-servers-breached-hackers-add-backdoor-to-source-code-on-php/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154320.56/warc/CC-MAIN-20210802110046-20210802140046-00310.warc.gz", "language": "en", "language_score": 0.9625006914138794, "token_count": 659, "score": 2.609375, "int_score": 3 }
Girish Kulkarni, a graduate student in the Electrical Engineering Program, received a U-M Rackham Predoctoral Fellowship to support his dissertation research on Carbon Nanoelectronic Heterodyne Sensors for Chemical and Biological Detection. This fellowship is awarded to outstanding doctoral candidates in the final stages of their program who are unusually creative, ambitious and risk-taking. |Graphene heterodyne vapor sensor| By taking a novel approach to sensing, Girish’s research has resulted in a new paradigm in sensor technology that promises both high-speed and highly-sensitive detection, which are critical for practical applications. He has built sensors that can be used not only to detect hazardous chemical leaks in a lab or chemical attacks on a battlefield, but also in point-of-care diagnostics for example measuring PSA levels and other biomarkers in blood; he believes that one day these sensors can detect health irregularities by breath alone. "And our devices are so small they can be put almost anywhere," Girish added. "Nanoelectronic sensors typically depend on detecting charge transfer between the sensor and a molecule in air or in solution," explained Girish. "It is well known that charge transfer is a rate limiting step for molecular detection leading to extremely slow response and recovery. Moreover, conventional charge-based detection techniques fail in solution due to ionic screening effect, which can be overcome only through time-consuming steps like desalting, making them impractical for real-time sensing." "We use a technique called heterodyne mixing," Girish continued. "Instead of detecting molecular charge, we look at the interaction between the dipoles associated with these molecules and the nanosensor at high frequencies." Kulkarni’s approach gives sub-second response times and parts-per-billion level sensitivity simultaneously. The technique has been demonstrated on carbon nanotubes and graphene, though it can also be used on any nanoelectronic sensor platform and works both for solution and gas phase detection. Microfluidic setup for real-time detection of ligand receptor binding by single-walled carbon nanotube biosensor And because his devices are less than a micron-by-micron is size, they are conducive for multiplexed detection. The graphene heterodyne sensor has been shown to detect 20 different volatile organic compounds on one small device, with many more possible. Girish Kulkarni is advised by Prof. Zhaohui Zhong, who has done pioneering work in graphene. Girish came to Michigan in 2008, the same year as his advisor; he was attracted by the opportunities Prof. Zhong offered to new students joining his group. Those opportunities have only increased for Girish, who plans to continue investigating and perhaps one day commercializing this technology.
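For readers unfamiliar with the term, heterodyne mixing in general means multiplying two oscillating signals so that components at their sum and difference frequencies appear at the output; the low-frequency difference component is then easy to measure. The sketch below is a generic numerical illustration of that principle in Python/NumPy with arbitrary, made-up frequencies; it is not a model of the actual nanotube or graphene device physics.

```python
import numpy as np

# Generic heterodyne-mixing illustration: multiplying two tones at f1 and f2
# produces components at (f1 - f2) and (f1 + f2). Frequencies are arbitrary.
fs = 1_000_000                    # sample rate (Hz)
t = np.arange(0, 0.02, 1 / fs)    # 20 ms of samples

f1, f2 = 100_000, 99_000          # "carrier" and "local oscillator" tones (Hz)
mixed = np.cos(2 * np.pi * f1 * t) * np.cos(2 * np.pi * f2 * t)

# Inspect the spectrum of the mixer output.
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)

# Report the two strongest spectral peaks; expect roughly 1 kHz and 199 kHz.
peak_indices = spectrum.argsort()[-2:]
for i in sorted(peak_indices):
    print(f"peak near {freqs[i]:.0f} Hz")
```

In the sensor context it is the slow difference-frequency component that carries the molecular signal down to a band where it can be read out, which is how the approach sidesteps the slow charge-transfer and ionic-screening problems described above; the numbers here only demonstrate the arithmetic.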
<urn:uuid:baa565ad-f598-4547-8239-af0ccf880757>
{ "dump": "CC-MAIN-2016-30", "url": "http://www.eecs.umich.edu/eecs/about/articles/2014/Girish-Kulkarni-paradigm-shifting-advances-in-biosensor-technology.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824133.26/warc/CC-MAIN-20160723071024-00147-ip-10-185-27-174.ec2.internal.warc.gz", "language": "en", "language_score": 0.9606804251670837, "token_count": 583, "score": 2.71875, "int_score": 3 }
Did flowing water carve the well-known channels on the face of Mars? Or was molten lava perhaps the instigator? This debate has raged for years, and the answer is important, because if there was a lot of surface water, that increases the chances that life may once have existed. Comparison of images of Martian channels to lava flows in Hawaii indicates that lava, not water, may have been the creative force behind at least some of the channels. So said NASA researcher Jacob Bleacher, speaking before the 41st Lunar and Planetary Science Conference last week. "To understand if life — as we know it — ever existed on Mars, we need to understand where water is or was," Bleacher said. The notion that water once flowed freely on Mars derives from images showing details resembling fluvial (water-based) erosion. Fine, delicate terrain features usually aren't considered products that lava flows can create. "The common image (of lava) is of the big, open channels in Hawaii," Bleacher explains. More detailed view A single channel on the southwest flank of Mars' Ascraeus Mons volcano, one of three volcanoes collectively known as the Tharsis Montes, formed the basis of Bleacher and his colleagues' research. The team pieced together images covering more than 168 miles of this channel utilizing high-resolution pictures from three cameras: the Thermal Emission Imaging System (THEMIS) on board Mars Odyssey spacecraft, the Context Imager (CTX) on Mars Reconnaissance Orbiter and the High/Super Resolution Stereo Color (HRSC) imager on Mars Express, as well as older data from the Mars Orbiter Laser Altimeter (MOLA) on Mars Global Surveyor (MGS). These data gave a more detailed view of the surface than previously available. Time has obliterated the fluid that created the Ascraeus Mons channels, but visual clues at the source of the channel in question seem to indicate that water is the culprit. Clues include small islands, secondary channels that branch off and rejoin the main one and eroded bars on the insides of the curves of the channels. But new close examination of the channel's other end by Bleacher and colleagues revealed a ridge that appears to have lava flows coming out of it. In some areas, "the channel is actually roofed over, as if it were a lava tube, and lined up along this, we see several rootless vents," or openings where lava is forced out of the tube and creates small structures, he explained. Water-carved channels don't typically form these types of features, he notes. Bleacher argues that one end of the channel forming by water and the other end by lava is an "exotic" combination. More likely, he thinks, lava formed the entire channel. To compare the Mars features to those created by lava, Bleacher, along with W. Brent Garry and Jim Zimbelman at the Smithsonian Institution in Washington, examined the 32-mile lava flow from the 1859 eruption of Mauna Loa on the Big Island of Hawaii. They focused on a mid-channel island almost a kilometer long. Bleacher says this is much larger than islands typically identified within lava flows. To survey the island, the team used differential GPS, which provides location information to within about 1.1 to 1.9 inches, more accurate than a car's GPS can offer. "We found terraced walls on the insides of these channels, channels that go out and just disappear, channels that cut back into the main one, and vertical walls 9 meters (about 29 feet) high," Bleacher said. 
"So, right here, in something that we know was formed only by flowing lava, we found most of the features that were considered to be diagnostic of water-carved channels on Mars." The new results make "a strong case that fluid lava can produce channels that look very much like water-generated features," Zimbelman said. "So, we should not jump to a water-related conclusion when we see such channels on other planets, particularly in volcanic terrain such as that around the Tharsis Montes volcanoes." Lunar evidence, too Further, researchers discovered more evidence from the moon by studying a detailed image of channels in the Mare Imbrium, a large crater filled with ancient lava rock. In this image, as well, they found channels with terraced walls and branching secondary channels. The conclusion that lava probably made the channel on Mars "not only has implications for the geological evolution of the Ascraeus Mons but also the whole Tharsis Bulge (volcanic region)," said Andy de Wet, a co-author of the study at Franklin & Marshall College, Lancaster, Penn. "It may also have some implications for the supposed widespread involvement of water in the geological evolution of Mars." Bleacher notes that the team's conclusions do not preclude the possibility of flowing water on Mars, nor of other channels carved by water. "But one thing I've learned is not to underestimate the way that liquid rock will flow," he said. "It really can produce a lot of things that we might not think it would." © 2013 Space.com. All rights reserved. More from Space.com.
<urn:uuid:f1705290-7f8d-4dba-9480-980ccc4405e4>
{ "dump": "CC-MAIN-2017-09", "url": "http://www.nbcnews.com/id/35843970/ns/technology_and_science/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174167.41/warc/CC-MAIN-20170219104614-00480-ip-10-171-10-108.ec2.internal.warc.gz", "language": "en", "language_score": 0.9607247114181519, "token_count": 1090, "score": 3.875, "int_score": 4 }
Three Days That Changed Modern Life Forever Events on this date in 1945, 1974 and 1995 shaped the world we live in today. Bloomberg, August 9, 2017 I take it as my mission today to connect the dots among three events — in reverse chronological order — that occurred on this date, Aug. 9, in the not-too-distant past. 1 First, on this day in 1995, the internet stock sector was born, when Netscape Communications Corp., a company that just 16 months earlier was founded with the goal of creating the first commercial web browser, had its initial public offering. The shares, priced at $28 each, exploded on the first day of trading. Morgan Stanley, the lead underwriter, had wildly underpriced the IPO. 2 The shares opened at $71 and ended the day at $58.25. 3 So this marks the 22nd anniversary of modern society’s tech revolution; almost every aspect of life has been radically affected by the fallout from the rise of the graphical internet. Consider just these a few things that unfolded as a result of Netscape: It set off a Cambrian explosion of new technological life. Freed from the domineering monopoly that was Microsoft Corp., a robust internet unleashed a wave of online innovation and creativity. Before that, the Redmond regnant dominated the desktop.Every venture-capital funding meeting before 1995 ended with the question of what would happen if Gates & Co. decided to add this feature/software/widget as part of its operating system? It may not have been obvious at the time, but Netscape changed all that. The dot-com boom created a huge internet stock bubble that eventually collapsed, driving the Nasdaq Composite Index down about 80 percent from March 2000 to October 2002. There were many side effects of the dot-com collapse, but let’s consider a few of - The huge build-out in fiber-optic bandwidth was done with lots of venture capital and money from the investing public. Thousands of miles off fiber-optic lines to replace aging copper cables were laid at huge expense by companies such as Metromedia Fiber, Global Crossing, Covad, WorldCom and many others. As Dan Gross taught us in “Pop!: Why Bubbles Are Great For The Economy,” what was built for billions of dollars was bought out of bankruptcy for pennies on the dollar. All of those big fat cheap pipes arguably led to such bandwidth-intensive services as YouTube, Google Maps, Facebook, Netflix and many others. - The dot-com collapse set the stage for Federal Reserve Chairman Alan Greenspan’s ultralow rates, which turbocharged the credit expansion that fueled the market in everything from subprime mortgages to credit default swaps, both of which contributed to the financial crisis. If you think about it, you can trace a clear path from Netscape to the election of Donald Trump. - The internet goes mobile in the early 2000s, first with email, then with web access. This tees up the opportunity for Apple Inc. to introduce the iPhone, which is changing the world in ways that we are still trying to grasp. - Amazon.Com Inc., and the destruction of modern retail, easily traces back to that same set of events via Netscape. * * * On this date, 21 years before Netscape’s IPO, was another historical thunderclap. Amid the burgeoning Watergate scandal that began with what appeared to be a minor burglary, President Richard M. Nixon opted to resign from office 4 after it became clear that Congress was prepared to impeach him and remove him from office. Today, the parallels with America’s current president are almost impossible to avoid. 
Just this morning we learned that FBI agents last month raided the home of President Trump’s former campaign chairman, Paul Manafort, apparently amid a probe into his dealings with foreign governments. General Michael Flynn, Trump’s former national security adviser, is thought to be under similar scrutiny. Others close to the Trump campaign and the president himself are also potential subjects of former FBI director and independent counsel Robert Mueller’s investigation into Russia interference in last year’s presidential election. * * * Did Trump’s invocation of “fire and fury” in reference to North Korea’s nuclear program disturb you? Well, we have one more anniversary to tie together, for Aug. 9, 1945, was the day that the U.S. dropped the second atomic bomb on Nagasaki, resulting in Japan’s unconditional surrender. The good news is that nuclear weapons have not been used during war since then. The bad news is the rhetoric between the U.S. and North Korea is getting hotter, increasing the risks of a nuclear conflict the likes of which we probably haven’t seen since the Cuban Missile Crisis. * * * Thus, we can see some of the connections between technology, politics and international affairs. As investors, citizens, consumers, workers, voters and savers, there is no aspect of our daily lives where we can escape their relentless grip. And yet, here again, technology has a solution. If the stress of thinking about all this seems like too much, try setting aside a few minutes for quiet meditation, using one of the apps designed to help you cope. And if that feels like technology overload, there’s one last way to de-stress. 1. Special thanks to Jason Zweig’s “This Day in Financial History.” 2. Pricing IPOs has been a perennial problem, as it is as much art as science, fraught with the potential for error. Price it too low, and you leave lots of money on the table, forgoing capital that could be used by the company selling itself to the public; price it too high and you discourage buyers. 3. See the book “1995: The Year the Future Began” for a much more detailed history. 4. Zweig reminds us that the Dow Jones Industrial Average dropped by a mere 1 percent to 777.30. Perhaps the takeway here is that if you have a long-term perspective, post-presidential resignations and impeachments (think 1998) create good market entry points. Originally: Three Days That Changed Modern Life Forever
<urn:uuid:dabc5621-a154-4d46-9e66-c3af26d91792>
{ "dump": "CC-MAIN-2018-30", "url": "http://ritholtz.com/2017/08/august-9th-day-live-infamy/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676596204.93/warc/CC-MAIN-20180723090751-20180723110751-00516.warc.gz", "language": "en", "language_score": 0.954091489315033, "token_count": 1300, "score": 2.609375, "int_score": 3 }
Intro to Permaculture Permaculture is literally permanent culture. It started out literally as permanent agriculture, but it is now being applied to other areas besides agriculture. Bill Mollison wrote, “Permaculture is the conscious design and maintenance of agriculturally productive ecosystems which have diversity, stability, and resilience of natural ecosystems. It is the harmonious integration of landscape and people providing their food, energy, shelter, and other material and non-material needs in a sustainable way.” He continues, “Permaculture design is a system of assembling conceptual, material, and strategic components in a pattern which functions to benefit life in all its forms. The philosophy behind permaculture is one of working with, rather than against, nature; of protracted and thoughtful observation rather than protracted and thoughtless action; of looking at systems in all their functions, rather than asking only one yield of them; and of allowing systems to demonstrate their own evolutions. Basically, permaculture is a system of design with the goal of designing self-sufficient systems aligned with nature, and many times mimicking natural systems. A mature permaculture garden should ultimately be more productive, require less energy, sustainable, and great for the environment. People can label just about anything as being “permaculture”, but does that make it so? The Permaculture Research Institute is pretty lax on the use of the word. If you’ve passed a PDC, you can call a pile of chicken manure permaculture if you want. Having said that, there has been much debate over whether or not something qualifies. For example, Paul Wheaton hates chicken tractors, and thinks a stationary coop and run setup might be as bad as a commercial chicken house, but Geoff Lawton would disagree. They would both say, “It depends” for a lot of things. I think you have to be careful of trying to force elements into a permaculture design. For example, swales are fantastic tree growing systems that can re-hydrate land and build soils. Should you always include swales? It depends. Is the grade too steep for swales? Do you need them? Is the energy expended worth the production? What about hugelkultur? Hugelkultur is great in cold climates a la Sepp Holzer in the Austrian Alps, but not as effective in hot drylands. Double reach raised beds might be great where there is adequate rainfall, but sunken beds are better in hot drylands. Increasing shade is typically a good thing in the tropics, but increasing sun is usually positive in the cold climates. Of course some variation of microclimates can be good for different uses. The first thing to “do” in permaculture is to simply observe the site. Don’t even think about what you’re going to physically do. It is best to stay open to the endless possibilities. You might find connections you would have never thought of. I take notes and make maps as I’m observing. What are the wind flows? Check the surrounding vegetation to see if they are leaning one way or another. Bear in mind that winds typically come in from the north in the winter, and the south in the summer, but every site is different. I get a terrible westerly wind in the summer and a northerly wind in the winter. After making a wind map, I like to make a sun and shade map. A solar pathfinder is a great way to do this at any time of the year at any time of the day. It will give you all the seasonal and hour by hour information that you need. 
Make sure to include your compass headings, add existing structures to the map, and figure your slope and orientation. Contour mapping is important as well. I like to use a laser level to find some of the important contour lines. This can help with swale and pond placements, driveways and walkways, as well as fencing and tree planting, or anything else you may want to put on contour. Identifying microclimates It’s a good idea to note any microclimates that exist on the property. Do you have any sun traps, or sheltered areas, or excessively shady spots etc… I have a terrible microclimate directly behind my house that is 100% shade except for a couple of months in the summer when the sun is directly overhead. I’m still considering my options. Note anything else you observe. Don’t worry too much about the relevance. What kind of wildlife visits your site? Will you have to contend with deer or groundhogs? I’ve got them both. What is the soil type? This is great to know for planting and for earthworks. What climate type do you have? What is the rainfall amount, growing zone? Once you’ve done a healthy amount of observation, you can start to design. Now you’ll be able to determine whether or not you should install swales, or hugelkultur, or a mandala garden, or ponds, or double reach raised beds, or any other type of design element. Permaculture is all these design elements, and none of them. It all depends on how they arranged and whether or not they function properly in your system and are aligned with nature. In the next article, I will get more specific on the design process. I will cover designing with zones, sectors, and stacking functions. Source: Permaculture: A Designer’s Manual, Bill Mollison
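As a small, optional aid to the slope and contour-mapping steps described above, here is a rough Python sketch; it is my own illustration rather than anything from Mollison's manual, and the stake readings and distances are invented numbers:

```python
import math

# Hypothetical laser-level readings (meters) at two stakes on the slope.
elevation_upper = 101.8
elevation_lower = 100.0
horizontal_distance = 30.0  # level distance between the stakes, in meters

rise = elevation_upper - elevation_lower
percent_slope = rise / horizontal_distance * 100
slope_angle = math.degrees(math.atan2(rise, horizontal_distance))

print(f"Rise: {rise:.2f} m over {horizontal_distance:.0f} m")
print(f"Slope: {percent_slope:.1f}% ({slope_angle:.1f} degrees)")

# If you choose a vertical drop between contour lines (for marking swales,
# paths, or fencing on contour), the horizontal spacing follows directly.
vertical_interval = 1.0  # chosen drop in meters between marked contours
horizontal_spacing = vertical_interval * horizontal_distance / rise
print(f"A {vertical_interval:.1f} m vertical interval puts contour lines "
      f"about {horizontal_spacing:.0f} m apart on this slope.")
```

Whether swales or any other element actually belong on that slope still "depends," as the article stresses; the arithmetic only supports the observation stage.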
<urn:uuid:f027a79c-2b81-487c-9813-0a2bebcf3337>
{ "dump": "CC-MAIN-2015-14", "url": "http://www.foodproduction101.com/blog/what-is-permaculture.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298660.78/warc/CC-MAIN-20150323172138-00258-ip-10-168-14-71.ec2.internal.warc.gz", "language": "en", "language_score": 0.9476040601730347, "token_count": 1162, "score": 2.78125, "int_score": 3 }
We round out this week’s book posts with a new biography of William Howard Taft, who managed to serve both as President and Chief Justice of the United States and who was, incidentally, the last American President to deny the divinity of Jesus Christ. (It’s true. He was a Unitarian. You could look it up). The publisher is Macmillan and the author is law professor Jeffrey Rosen. Here’s the description from the publisher’s website: The only man to serve as president and chief justice, who approached every decision in constitutional terms, defending the Founders’ vision against new populist threats to American democracy William Howard Taft never wanted to be president and yearned instead to serve as chief justice of the United States. But despite his ambivalence about politics, the former federal judge found success in the executive branch as governor of the Philippines and secretary of war, and he won a resounding victory in the presidential election of 1908 as Theodore Roosevelt’s handpicked successor. In this provocative assessment, Jeffrey Rosen reveals Taft’s crucial role in shaping how America balances populism against the rule of law. Taft approached each decision as president by asking whether it comported with the Constitution, seeking to put Roosevelt’s activist executive orders on firm legal grounds. But unlike Roosevelt, who thought the president could do anything the Constitution didn’t forbid, Taft insisted he could do only what the Constitution explicitly allowed. This led to a dramatic breach with Roosevelt in the historic election of 1912, which Taft viewed as a crusade to defend the Constitution against the demagogic populism of Roosevelt and Woodrow Wilson. Nine years later, Taft achieved his lifelong dream when President Warren Harding appointed him chief justice, and during his years on the Court he promoted consensus among the justices and transformed the judiciary into a modern, fully equal branch. Though he had chafed in the White House as a judicial president, he thrived as a presidential chief justice.
<urn:uuid:baa4d78d-cc82-4f55-8d19-a43b6ab3ebe2>
{ "dump": "CC-MAIN-2021-31", "url": "https://lawandreligionforum.org/2018/04/20/rosen-william-howard-taft/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154163.9/warc/CC-MAIN-20210801061513-20210801091513-00548.warc.gz", "language": "en", "language_score": 0.9727808833122253, "token_count": 410, "score": 3.109375, "int_score": 3 }
There are several ways of ornamenting a woven cloth: (1) real tapestry, (2) carpet-weaving, (3) mechanical weaving, (4) printing or painting, and (5) embroidery. There has been no improvement (indeed, as to the main processes, no change) in the manufacture of the wares in all these branches since the fourteenth century, as far as the wares themselves are concerned; whatever improvements have been introduced have been purely commercial, and have had to do merely with reducing the cost of production; nay, more, the commercial improvements have on the whole been decidedly injurious to the quality of the wares themselves. The noblest of the weaving arts is Tapestry, in which there is nothing mechanical: it may be looked upon as a mosaic of pieces of colour made up of dyed threads, and is capable of producing wall ornament of any degree of elaboration within the proper limits of duly considered decorative work. As in all wall-decoration, the first thing to be considered in the designing of Tapestry is the force, purity, and elegance of the silhouette of the objects represented, and nothing vague or indeterminate is admissible. But special excellences can be expected from it. Depth of tone, richness of colour, and exquisite gradation of tints are easily to be obtained in Tapestry; and it also demands that crispness and abundance of beautiful detail which was the especial characteristic of fully developed Mediæval Art. The style of even the best period of the Renaissance is wholly unfit for Tapestry: accordingly we find that Tapestry retained its Gothic character longer than any other of the pictorial arts. A comparison of the wall-hangings in the Great Hall at Hampton Court with those in the Solar or Drawing-room, will make this superiority of the earlier design for its purpose clear to any one not lacking in artistic perception: and the comparison is all the fairer, as both the Gothic tapestries of the Solar and the post-Gothic hangings of the Hall are pre-eminently good of their kinds. Not to go into a description of the process of weaving tapestry, which would be futile without illustrations, I may say that in contradistinction to mechanical weaving, the warp is quite hidden, with the result that the colours are as solid as they can be made in painting. Carpet-weaving is somewhat of the nature of Tapestry: it also is wholly unmechanical, but its use as a floorcloth somewhat degrades it, especially in our northern or western countries, where people come out of the muddy streets into rooms without taking off their shoes. Carpet-weaving undoubtedly arose among peoples living a tent life, and for such a dwelling as a tent, carpets are the best possible ornaments. Carpets form a mosaic of small squares of worsted, or hair, or silk threads, tied into a coarse canvas, which is made as the work progresses. Owing to the comparative coarseness of the work, the designs should always be very elementary in form, and suggestive merely of forms of leafage, flowers, beasts and birds, etc. The soft gradations of tint to which Tapestry lends itself are unfit for Carpet-weaving; beauty and variety of colour must be attained by harmonious juxtaposition or tints, bounded by judiciously chosen outlines; and the pattern should lie absolutely flat upon the ground. 
On the whole, in designing carpets the method of contrast is the best one to employ, and blue and red, quite frankly used, with white or very light outlines on a dark ground, and black or some very dark colour on a light ground, are the main colours on which the designer should depend. In making the above remarks I have been thinking only of the genuine or hand-made carpets. The mechanically-made carpets of to-day must be looked upon as makeshifts for cheapness’ sake. Of these, the velvet pile and Brussels are simply coarse worsted velvets woven over wires like other velvet, and cut, in the case of the velvet pile; and Kidderminster carpets are stout cloths, in which abundance of warp (a warp to each weft) is used for the sake of wear and tear. The velvet carpets need the same kind of design as to colour and quality as the real carpets; only, as the colours are necessarily limited in number, and the pattern must repeat at certain distances, the design should be simpler and smaller than in a real carpet. A Kidderminster carpet calls for a small design in which the different planes, or plies, as they are called, are well interlocked. Mechanical weaving has to repeat the pattern on the cloth within comparatively narrow limits; the number of colours also is limited in most cases to four or five. In most cloths so woven, therefore, the best plan seems to be to choose a pleasant ground colour and to superimpose a pattern mainly composed of either a lighter shade of that colour, or a colour in no very strong contrast to the ground; and then, if you are using several colours, to light up this general arrangement either with a more forcible outline, or by spots of stronger colour carefully disposed. Often the lighter shade on the darker suffices, and hardly calls for anything else: some very beautiful cloths are merely damasks, in which the warp and weft are of the same colour, but a different tone is obtained by the figure and the ground being woven with a longer or shorter twill: the tabby being tied by the warp very often, the satin much more rarely. In any case, the patterned webs produced by mechanical weaving, if the ornament is to be effective and worth the doing, require that same Gothic crispness and clearness of detail which has been spoken of before: the geometrical structure of the pattern, which is a necessity in all recurring patterns, should be boldly insisted upon, so as to draw the eye from accidental figures, which the recurrence of the pattern is apt to produce. The meaningless stripes and spots and other tormentings of the simple twill of the web, which are so common in the woven ornament of the eighteenth century and in our own times, should be carefully avoided: all these things are the last resource of a jaded invention and a contempt of the simple and fresh beauty that comes of a sympathetic suggestion of natural forms: if the pattern be vigorously and firmly drawn with a true feeling for the beauty of line and silhouette, the play of light and shade on the material of the simple twill will give all the necessary variety.
I invite my readers to make another comparison: to go to the South Kensington Museum and study the invaluable fragments of the stuffs of the thirteenth and fourteenth centuries of Syrian and Sicilian manufacture, or the almost equally beautiful webs of Persian design, which are later in date, but instinct with the purest and best Eastern feeling; they may also note the splendid stuffs produced mostly in Italy in the later Middle Ages, which are unsurpassed for richness and effect of design, and when they have impressed their minds with the productions of this great historic school, let them contrast with them the work of the vile Pompadour period, passing by the early seventeenth century as a period of transition into corruption. They will then (if, once more, they have real artistic perception) see at once the difference between the results of irrepressible imagination and love of beauty, on the one hand, and, on the other, of restless and weary vacuity of mind, forced by the exigencies of fashion to do something or other to the innocent surface of the cloth in order to distinguish it in the market from other cloths; between the handiwork of the free craftsman doing as he pleased with his work, and the drudgery of the “operative” set to his task by the tradesman competing for the custom of a frivolous public, which had forgotten that there was such a thing as art. The next method of ornamenting cloth is by painting it or printing on it with dyes. As to the painting of cloths with dyes by hand, which is no doubt a very old and widely practised art, it has now quite disappeared (modern society not being rich enough to pay the necessary price for such work), and its place has now been taken by printing by block or cylinder-machine. The remarks made on the design for mechanically woven cloths apply pretty much to these printed stuffs: only, in the first place, more play of delicate and pretty colour is possible, and more variety of colour also; and in the second, much more use can be made of hatching and dotting, which are obviously suitable to the method of block- printing. In the many-coloured printed cloths, frank red and blue are again the mainstays of the colour arrangement; these colours, softened by the paler shades of red, outlined with black and made more tender by the addition of yellow in small quantities, mostly forming part of brightish greens, make up the colouring of the old Persian prints, which carry the art as far as it can be carried. It must be added that no textile ornament has suffered so much as cloth-printing from those above-mentioned commercial inventions. A hundred years ago the processes for printing on cloth differed little from those used by the Indians and Persians; and even up to within forty years ago they produced colours that were in themselves good enough, however inartistically they might be used. Then came one of the most wonderful and most useless of the inventions of modern Chemistry, that of the dyes made from coal-tar, producing a series of hideous colours, crude, livid — and cheap, — which every person of taste loathes, but which nevertheless we can by no means get rid of until we are able to struggle successfully against the doom of cheap and nasty which has overtaken us. Last of the methods of ornamenting cloth comes Embroidery: of the design for which it must be said that one of its aims should be the exhibition of beautiful material. Furthermore, it is not worth doing unless it is either very copious and rich, or very delicate — or both. 
For such an art nothing patchy or scrappy, or half-starved, should be done: there is no excuse for doing anything which is not strikingly beautiful; and that more especially as the exuberance of beauty of the work of the East and of Mediæval Europe, and even of the time of the Renaissance, is at hand to reproach us. It may be well here to warn those occupied in Embroidery against the feeble imitations of Japanese art which are so disastrously common amongst us. The Japanese are admirable naturalists, wonderfully skilful draughtsmen, deft beyond all others in mere execution of whatever they take in hand; and also great masters of style within certain narrow limitations. But with all this, a Japanese design is absolutely worthless unless it is executed with Japanese skill. In truth, with all their brilliant qualities as handicraftsmen, which have so dazzled us, the Japanese have no architectural, and therefore no decorative, instinct. Their works of art are isolated and blankly individualistic, and in consequence, unless where they rise, as they sometimes do, to the dignity of a suggestion for a picture (always devoid of human interest), they remain mere wonderful toys, things quite outside the pale of the evolution of art, which, I repeat, cannot be carried on without the architectural sense that connects it with the history of mankind. To conclude with some general remarks about designing for textiles: the aim should be to combine clearness of form and firmness of structure with the mystery which comes of abundance and richness of detail; and this is easier of attainment in woven goods than in flat painted decoration and paper-hangings; because in the former the stuffs usually hang in folds and the pattern is broken more or less, while in the latter it is spread out flat against the wall. Do not introduce any lines or objects which cannot be explained by the structure of the pattern; it is just this logical sequence of form, this growth which looks as if, under the circumstances, it could not have been otherwise, which prevents the eye wearying of the repetition of the pattern. Never introduce any shading for the purpose of making an object look round; whatever shading you use should be used for explanation only, to show what you mean by such and such a piece of drawing; and even that you had better be sparing of. Do not be afraid of large patterns; if properly designed they are more restful to the eye than small ones: on the whole, a pattern where the structure is large and the details much broken up is the most useful. Large patterns are not necessarily startling; this comes more of violent relief of the figure from the ground, or inharmonious colouring: beautiful and logical form relieved from the ground by well-managed contrast or gradation, and lying flat on the ground, will never weary the eye. Very small rooms, as well as very large ones, look best ornamented with large patterns, whatever you do with the middling-sized ones. As final maxims: never forget the material you are working with, and try always to use it for doing what it can do best: if you feel yourself hampered by the material in which you are working, instead of being helped by it, you have so far not learned your business, any more than a would-be poet has, who complains of the hardship of writing in measure and rhyme. 
The special limitations of the material should be a pleasure to you, not a hindrance: a designer, therefore, should always thoroughly understand the processes of the special manufacture he is dealing with, or the result will be a mere tour de force. On the other hand, it is the pleasure in understanding the capabilities of a special material, and using them for suggesting (not imitating) natural beauty and incident, that gives the raison d’être of decorative art.
<urn:uuid:3b87fe59-d3c3-4482-8354-127de9fe8256>
{ "dump": "CC-MAIN-2017-30", "url": "https://louizeharries.wordpress.com/2013/02/04/on-textiles-by-william-morris/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424961.59/warc/CC-MAIN-20170725042318-20170725062318-00012.warc.gz", "language": "en", "language_score": 0.9662574529647827, "token_count": 2934, "score": 3, "int_score": 3 }
The Health Benefits of Pokeweed Herb Pokeweed is a perennial herb that is native to eastern North America and cultivated throughout the world. It can grow to a height of more than ten feet during the summer and dies back to the root each winter. The berries and dried roots are used in herbal remedies. Pokeweed supplements are available as liquid extracts, tinctures, powders, and poultices. There is no standard dose for pokeweed. Pokeweed should not be used by people who are taking antidepressants, disulfiram (Antabuse), oral contraceptives, or fertility drugs. Other potential interactions between pokeweed and other drugs and herbs should be considered. Always tell your doctor and pharmacist about any herbs you are taking. Although pokeweed is a poisonous plant, do not let that hinder you from harvesting its leaves and roots to be eaten and used in herbal medicines. If it is done right, there is nothing to fear. Pokeweed root has been used for achy muscles and joints (rheumatism); swelling of the nose, throat, and chest; tonsillitis; hoarse throat (laryngitis); swelling of lymph glands (adenitis); and swollen and tender breasts. Traditionally, however, pokeweed root was very rarely, if at all, consumed. The most common use of pokeweed root in Native American medicine was as a laxative or to induce vomiting. The berry was also used as a colouring agent for food and is, in fact, still used today in the food industry. When eaten raw, certain proteins produced by the plant, such as lectins, can cause red blood cells to clump together. In fact, lectins are the active ingredient in many of the most toxic plant poisons known to man. Poke acts as an anodyne, antibiotic, anti-inflammatory, anti-rheumatic, anti-scorbutic, anti-syphilitic, anti-tumour, cathartic, emetic and parasiticide. The parts of the plant that are used in herbal healing are the berries and roots. Poke Root is a pungent, bitter herb that stimulates the immune and lymphatic systems.
<urn:uuid:0b73e7d6-1e3f-4960-84e7-8ef107877511>
{ "dump": "CC-MAIN-2016-50", "url": "http://healthybenefits.info/the-health-benefits-of-pokeweed-herb/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542714.38/warc/CC-MAIN-20161202170902-00186-ip-10-31-129-80.ec2.internal.warc.gz", "language": "en", "language_score": 0.9617292881011963, "token_count": 446, "score": 3.203125, "int_score": 3 }
The White Rabbit is a fictional character in Lewis Carroll’s book Alice’s Adventures in Wonderland. He appears at the very beginning of the book, in chapter one, wearing a waistcoat, and muttering “Oh dear! Oh dear! I shall be too late!” Alice follows him down the rabbit hole into Wonderland. Alice encounters him again when he mistakes her for his housemaid Mary Ann and she becomes trapped in his house after growing too large. The Rabbit shows up again in the last few chapters, as a herald-like servant of the King and Queen of Hearts. Neo is told to follow the “White Rabbit” in The Matrix in one of many metaphysical “waking up” metaphors. Seconds later, his doorbell rings, and when he opens the door he finds a woman with a tattoo of a white rabbit on her shoulder. Later in the film, right before he meets the Oracle, one can see Night of the Lepus playing on a nearby television, symbolizing Neo’s decision to “follow the white rabbit” and to disturb the order of the Matrix. Jefferson Airplane recorded a song called “White Rabbit”, with references to this character and the Wonderland saga in general as metaphors for drug-induced experiences. The Mayan god Chac was the ancient god of rain and lightning. He was one of the earliest and most worshiped gods among all the people of Mesoamerica and was a benevolent god for the Mayans, who often sought his help for their crops. Chac was often depicted with a serpentine axe in his hand as a metaphor for lightning, and his body was scaled and reptilian. He was worshiped at sacred wells and was associated with the life-giving rain needed for agriculture. At the dawn of time, Chac split apart a sacred stone with his axe, from which sprung the first ear of maize. When he was not among the clouds, the god could be found near falling waters. Chac was associated with creation and life. Chac was also considered to be divided into four equal entities. Each division represented the North, South, East and West. Chac was also apparently associated with the wind god Kukulcan. Some debate persists as to whether or not Kukulcan was just a variation of Chac.
<urn:uuid:e32a47d0-2009-4dab-af3c-cd88c14de514>
{ "dump": "CC-MAIN-2017-39", "url": "http://samodaj.org/?cat=11&paged=3", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687906.71/warc/CC-MAIN-20170921205832-20170921225832-00622.warc.gz", "language": "en", "language_score": 0.9747251868247986, "token_count": 551, "score": 2.625, "int_score": 3 }
(Note: This essay was originally published as “Origins of the Tomb of the Unknown Soldier,” in the Newport Daily News on November 13, 2021.) This November 11 is the 100th anniversary of the dedication of the Tomb of the Unknown Soldier in Arlington National Cemetery, Arlington, Virginia. The idea of honoring the remains of unidentified American soldiers we borrowed from France and the United Kingdom. The dedication of the Tomb came three years after the end of World War I, “the Great War,” 1914-1918, in which close to two dozen countries actively participated, killing some 9 million combatants, with an additional seven to ten million civilian deaths. Four empires fell: the German, the Austro-Hungarian, the Russian, and the Ottoman Empire. Our Veterans Day was originally called Armistice Day, the day the guns fell silent ending WW I. It occurred at the 11th hour, on the 11th day, in the 11th month of 1918. In 1954, President Dwight Eisenhower signed the bill changing it to Veterans Day. Three years after the end of WWI, the Tomb of the Unknown Soldier was dedicated. Its mission: to ensure that all service-members who make the ultimate sacrifice for their country are never forgotten. The Tomb was proposed through a joint resolution of Congress, sponsored by Rep. Hamilton Fish III (R-NY), who had served as an officer in the 369th Infantry Regiment. He believed that this “unknown soldier” should not be taken from any particular battlefield “but should be chosen that nobody would know his identification or the battlefield he comes from.” President Woodrow Wilson approved the legislation supporting the resolution on March 4, 1921. The remains were selected from an American cemetery in France and brought to the nation’s capital. With the casket lying in state in the rotunda of the U.S. Capitol, an estimated 90,000 people paid their respects. On the morning of November 11, 1921, President Warren Harding, Gen. John Pershing, Commander of the American Expeditionary Forces, and other American and foreign dignitaries assembled at Arlington National Cemetery and were greeted by an estimated crowd of 100,000. Pres. Harding was the main speaker. “Hundreds of mothers are wondering today, finding a touch of solace in the possibility that the nation bows in grief over the body of the one she bore to live and die, if need be, for the Republic.” “We do not know the eminence of his birth, but we do know the glory of his death. He died for his country, and greater devotion hath no man than this. He died unquestioning, uncomplaining, with faith in his heart and hope on his lips, that his country should triumph and its civilization survive.” “This American soldier went forth to battle with no hatred for any people of the world, but hating war and hating the purpose of every war for conquest.” “Today’s ceremonies proclaim that the hero unknown is not un-honored.” It began as an unguarded site. Civilian guards were added in 1925, changing to military guards the following year. In 1937, the guard was upgraded to a ceremonial honor guard that operates day and night, rain or shine, 24/7. The number 21 plays an important role in the sentinel’s guard service, tied to the highest honor the military can give: a 21-gun salute. The sentinel marches 21 steps to the south, turns to the east and holds that position for 21 seconds. The sentinel then turns north, waits for 21 seconds, and marches with precision 21 steps to the north. The sentinel then turns to the west and repeats the process.
Women were added to the prestigious guard detail in 1996. Each time he or she changes direction, an arms movement is conducted with the M-14 rifle. This is meant to keep the weapon on the shoulder closest to the visitors, indicating the guard’s intent to protect the tomb from any outside threat. October through March, there is a changing of the guard on the hour. April through September, this occurs every half-hour. I visited the tomb and small museum this past summer and watched the very solemn and impressive changing of the guard. There are actually four tombs. Three contain the remains of an unknown soldier from WW I, from WW II, and from the Korean Conflict. The remains of the unknown soldier from the Vietnam War were exhumed in 1998 and identified using modern technology. Since then, the crypt for an unknown Vietnam War soldier has remained empty. A retired Army officer, Fred Zilian (zilianblog.com; Twitter: @FredZilian) is an adjunct professor of history and politics at Salve Regina University and a regular columnist. Apple, Charles. “At Rest and On Guard.” Military Officer Magazine, November 2021. Maze, Rick. “Honoring the Unknown.” Army Magazine, November 2021. Vaughan, Don. “A Century of Honor.” Military Officer Magazine, November 2021.
<urn:uuid:f02abb12-9ae9-4d6f-b1ca-f14d1862e19b>
{ "dump": "CC-MAIN-2022-40", "url": "https://zilianblog.com/2021/12/07/tomb-of-the-unknown-soldier-gives-honor-to-our-unidentified-fallen/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00441.warc.gz", "language": "en", "language_score": 0.9627669453620911, "token_count": 1072, "score": 3.21875, "int_score": 3 }
1. Tell how God cares for birds. 2. Make a simple bird feeder or pinecone feeder. 3. Be able to recognize 10 different birds. 4. Play a bird game. 5. Draw and/or color pictures of the following: a. two water birds b. two seed eaters c. one predator 6. Be able to make five bird sounds. 7. Make a Christmas tree or an Easter basket for birds. 8. Observe some live birds, imitate their movements, and collect feathers whenever possible. Keep in mind that keeping the feathers of migratory birds is illegal in some, if not all, places in the U.S.A.
<urn:uuid:42a0df8a-cdcd-4dfc-b2f3-71812852840b>
{ "dump": "CC-MAIN-2021-21", "url": "https://pfclub.co.uk/products/feathered-friends", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991224.58/warc/CC-MAIN-20210516140441-20210516170441-00296.warc.gz", "language": "en", "language_score": 0.8735401630401611, "token_count": 476, "score": 3.859375, "int_score": 4 }
By 1915, any list of the world’s greatest living mathematicians included the name David Hilbert. And though Hilbert previously devoted his career to logic and pure mathematics, he, like many other critical thinkers at the time, eventually became obsessed with a bit of theoretical physics. With World War I raging on throughout Europe, Hilbert could be found sitting in his office at the great university at Göttingen trying and trying again to understand one idea—Einstein’s new theory of gravity. Göttingen served as the center of mathematics for the Western world by this point, and Hilbert stood as one of its most notorious thinkers. He was a prominent leader for the minority of mathematicians who preferred a symbolic, axiomatic development in contrast to a more concrete style that emphasized the construction of particular solutions. Many of his peers recoiled from these modern methods, one even calling them “theology.” But Hilbert eventually won over most critics through the power and fruitfulness of his research. For Hilbert, his rigorous approach to mathematics stood out quite a bit from the common practice of scientists, causing him some consternation. “Physics is much too hard for physicists,” he famously quipped. So wanting to know more, he invited Einstein to Göttingen to lecture about gravity for a week. Before the year ended, both men would submit papers deriving the complete equations of general relativity. But naturally, the papers differed entirely when it came to their methods. When it came to Einstein’s theory, Hilbert and his Göttingen colleagues simply couldn’t wrap their minds around a peculiarity having to do with energy. All other physical theories—including electromagnetism, hydrodynamics, and the classical theory of gravity—obeyed local energy conservation. With Einstein's theory, one of the many paradoxical consequences of this failure of energy conservation was that an object could speed up as it lost energy by emitting gravity waves, whereas clearly it should slow down. Unable to make progress, Hilbert turned to the only person he believed might have the specialized knowledge and insight to help. This would-be-savior wasn’t even allowed to be a student at Göttingen once upon a time, but Hilbert had long become a fan of this mathematician’s highly "abstract" approach (which Hilbert considered similar to his own style). He managed to recruit this soon-to-be partner to Göttingen about the same time Einstein showed up. And that’s when a woman—one Emmy Noether—created what may be the most important single theoretical result in modern physics. Emmy (officially Amalie Emmy) Noether, born 1882, did not stand out in any particular way as a child, although she did, on occasion, attract some notice for her astonishing quickness in providing accurate answers to puzzles or problems in logic or mathematics. Her father, Max, was a fairly prominent mathematician, and one of her brothers eventually attained a doctorate in math. In retrospect, perhaps the Noethers may be another historical example of a family with a math gene. Germany in the early years of the 20th century was not a convenient place for a woman who wanted to pursue mathematics, or for that matter, any academic field outside of a few considered appropriate for the sex. Luckily for Noether, she had a facility with languages and was allowed to become certified as a language teacher. 
But Noether recognized her passion was in mathematics, and she decided to chase her dream and find a way to study the subject at the university level. While women were not permitted to be official students at most German universities then, they were able to audit courses with the permission of the professor. Noether started this way, sitting in on classes at the University of Erlangen. But she also spent a semester in 1903−1904 auditing courses at Göttingen, where she first encountered Hilbert. Rules surrounding enrollment eventually relaxed, and Noether later matriculated at Erlangen to earn her doctorate in mathematics (summa cum laude) in 1907. However, women were still not accepted as teachers in German universities at the time. Emmy took her fresh doctorate and became an unofficial assistant to her ailing and increasingly frail father, a professor at Erlangen. She also vigorously attacked her own research, forging a personal and original path through abstract algebra. Just a year after her doctorate, Noether's papers and the doctoral research that she was unofficially supervising gained her election to several academic societies, which prompted invitations to speak around Europe. Among those wanting her around, Hilbert reached out to bring Noether to Göttingen in order to tackle Einstein’s theory. The problem with Einstein’s theory No one denied it—Einstein’s Theory of General Relativity was undoubtedly beautiful. It was unlike any theory of nature yet imagined by humankind, more surprising and radical even than the special theory of relativity that Einstein had laid out in his revolutionary paper ten years before. Newton described gravity simply as a force acting over a distance attracting any two masses, whether planets or apples, to each other. The force was proportional to the product of the two masses and inversely proportional to the square of the distance between them. That’s the entire story, and it worked well for over two hundred years. But there was a mystery embedded in this description of gravity that physicists lived with for those two centuries. This coincidence was impossible to ignore, yet seemingly impossible to explain. The mass that determined the strength of the gravitational force was the same mass that appeared in Newton’s second law of motion, F = ma; gravitational mass was the same as the “inertial mass.” There was no apparent reason this had to be true, it simply was. Einstein didn’t think this was mere coincidence. He formulated a “principle of equivalence” that can be described in several ways. One way is to insist that the two types of mass are identical because of a fundamental symmetry in nature; that the laws of physics must take the same form whether one is in a gravitational field or in a region of space with no gravity (say, in a spaceship undergoing an equivalent acceleration). Carrying this principle to its logical conclusions eventually led to the equations of general relativity, the theory considered by many (including the great theoretical physicist Lev Landau) to be “probably the most beautiful of all existing physical theories.” Although Hilbert recognized that general relativity was a tremendous accomplishment, the energy conservation conundrum struck him as unacceptable. To illustrate this idea, let’s draw a circle around a region of space, as in the diagram here. The circle might contain electric and magnetic fields, water in motion, or something else. 
If we keep track of the energy flowing out of (and into) the perimeter of the circle (Ef) during a certain interval of time, then that total transfer of energy is equal to the amount that the total energy inside the circle (Ev) has changed. This is local energy conservation. In simple terms, energy is not created or destroyed, just moved around.
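In symbols (my own notation, since the article states the relation only in words), with E_V the total energy inside the region and E_f the net energy flowing out through its boundary over some time interval, local energy conservation can be written as:

```latex
% E_V : total energy contained in the region V
% \mathbf{S} : energy flux density through the boundary \partial V
% E_f : net energy that flows out through \partial V during the interval
\frac{dE_V}{dt} \;=\; -\oint_{\partial V} \mathbf{S}\cdot d\mathbf{A}
\qquad\Longrightarrow\qquad
\Delta E_V \;=\; -\,E_f .
```

Electromagnetism, hydrodynamics, and Newtonian gravity all satisfy a relation of this form; it was the apparent failure of Einstein's new theory to do so that troubled Hilbert and his colleagues.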
<urn:uuid:5991e471-d903-47a0-b263-962c1bac2b60>
{ "dump": "CC-MAIN-2022-21", "url": "https://arstechnica.com/science/2015/05/the-female-mathematician-who-changed-the-course-of-physics-but-couldnt-get-a-job/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558015.52/warc/CC-MAIN-20220523101705-20220523131705-00427.warc.gz", "language": "en", "language_score": 0.9717293381690979, "token_count": 1498, "score": 3.21875, "int_score": 3 }
Backcountry Advocates To Study Aviation Impacts When leaders of the Recreational Aviation Foundation work to protect access to backcountry airstrips, one objection they often hear from park officials is that the noise disturbs wildlife, says RAF executive director John McKenna -- a claim he hopes to dispute, thanks to a $10,000 research grant from AOPA's new "Giving Back" grant program. "We don't know for sure if the noise disturbs the animals or if it doesn't," he said. "But with this study, we'll be able to get some data." The RAF grant application was written by four Ph.Ds from various universities, said McKenna, who have volunteered to do the study and plan to submit their research to a peer-reviewed publication to provide scholarly credibility to their results. McKenna said that when his group is advocating to protect access, disturbance to wildlife often is raised as a concern. "This is just one box on a list," he said. "But it's close to the top of the list. It's not the only factor, but it's a big one." He said the scientists who will conduct the study hope to use the AOPA grant as seed money to attract more funding. They will try to determine the stress effects by measuring hormone levels in blood samples and scat from wildlife in noisy areas, and compare it to similar tests done in quiet areas. McKenna added that since airplanes don't require roads, they actually have a lower impact than many other modes of transportation into parks. AOPA awarded $10,000 grants to nine other nonprofit groups to support their work in the aviation community. The winners were chosen from a pool of more than 80 applicants.
<urn:uuid:6f7063c2-6162-463d-bb09-885238c1d9c5>
{ "dump": "CC-MAIN-2014-42", "url": "http://www.avweb.com/avwebflash/news/Backcountry-Advocates-To-Study-Aviation-Impacts220759-1.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898894.2/warc/CC-MAIN-20141030025818-00083-ip-10-16-133-185.ec2.internal.warc.gz", "language": "en", "language_score": 0.9798590540885925, "token_count": 346, "score": 2.78125, "int_score": 3 }
European crisis germany and the role Germany’s power is polarizing europe: the continent’s most powerful country is grappling with its leadership role—and other nations are, too. Global european banks and the financial capital flows with a focus on the role of european global european banks and the financial crisis. Before discussing europe's role in this crisis, i think it is prudent for everyone to remember a few things firstly, this is not the first time countries. Increasingly, the european debt crisis is ceasing to be a greek or italian crisis it is a crisis in the future role germany will play in europe. German role in steering euro crisis could lead to disaster of the crisis by brussels and european role that falls to germany today is not. As with the greek debt crisis, germany once again finds itself compelled or condemned to lead by its wealth and size, and by the lack of leadership elsewhere. The eurozone debt crisis is because many countries in the european union took on too much debt the european union, led by germany and france. Germany says it will take in 800,000 syrian refugees this year and while many germans have welcomed the new europe's migration crisis in 25 photos. Poland's jarosław kaczyński says it was germany's decision to open europe's germany created the migrant crisis and germany should pay the consequences. As concern continues to grow about the deepening financial crisis in europe, germany's crucial role in the economy of the region is becoming more and more apparent. The imf’s role in the euro-area crisis: financial sector aspects an important role in the genesis of europe’s to european crisis. As a result of changes in the european union's functioning resulting from both the lisbon treaty and the effects of the sovereign debt crisis, germany has become the. The european debt crisis and controversial role in the current european bond european powers such as france and germany for pushing for the. The european trust crisis and the an important policy implication from the european economic crisis is unemployment in germany fell to pre-crisis. The extent to which it was driven by the global financial crisis and by factors internal to europe crisis – what role european countries, germany is. The european debt crisis is the shorthand term for europe’s struggle to pay the debts it has built up in recent decades five of the region’s countries – greece. Germany's role in the eu the eurozone crisis had meant germany could not dodge her responsibilities as the biggest and and germany was firmly european. A ceo summary of the article boom and (deep) crisis in the spanish economy: the role of the eu in its evolution by miren etxezarreta, francisco navarro, ramón ribera. Faced with a united front of france, germany, and the european institutions germany and the euro crisis: is the powerhouse really so pure john rosenthal. Following world war ii, a german return to dominance in europe seemed an impossibility but the euro crisis has transformed the country into a reluctant hegemon and. European crisis germany and the role Europe: the process of change continues the emerging crisis in europe by geopolitical futures founder george it has played a prominent role in the. What is germany's current role within the eu and in the global economic system what game is it playing these are questions which cannot be answered in an. The european debt crisis refers to the struggle faced by the european sovereign debt crisis started in 2008 with discover the role that junk bonds. 
- The european migrant crisis austria has taken on the role of regulator after the development of the migrant crisis germany decided to use the. - Migrant crisis: “high time for europe to reclaim a leading role in human rights” – un experts. - “leading from the centre: germany’s role in europe” argues that both berlin and its partners have to decide how to crisis and cohesion in the european union. Investigating germany’s new role in europe: the fourth reich, really greek crisis: germany has a problem with europe. How is the migrant crisis dividing eu countries as did germany on its border with austria the european commission has protested to austria. 2the refugee crisis concerns many people across europe political orientation plays a large role in views of the pew research center does not take policy. Don't blame germany for greece's debt crisis subscribe the european union is a monument to germany’s atonement for its past and germany’s role in it. Both nations were the motors for the process of european integration and both are key decision makers in solving this crisis germany’s role in the european.
<urn:uuid:71e331e0-5d9f-42fb-8a61-77773dbdd863>
{ "dump": "CC-MAIN-2018-26", "url": "http://dgcourseworkuabv.jazmineearlyforcouncil.us/european-crisis-germany-and-the-role.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866965.84/warc/CC-MAIN-20180624141349-20180624161349-00053.warc.gz", "language": "en", "language_score": 0.9457890391349792, "token_count": 1099, "score": 2.953125, "int_score": 3 }
What We're Learning About Deaths from Unintentional Injuries Archived Deaths from unintentional injuries account for approximately two thirds of deaths from all injuries in the United States. From 1999 to 2004, overall in the United States, the rate of deaths caused by unintentional injuries increased 7 percent. This report describes the leading causes of death from unintentional injuries and discusses how raising awareness about the causes of these injuries is key to preventing unintentional injuries and reducing the number of deaths that result. Created: 11/2/2007 by MMWR. Date Released: 11/21/2007. Series Name: A Minute of Health with CDC. A MINUTE OF HEALTH WITH CDC What We're Learning About Deaths from State-specific Unintentional Injury Deaths — United States, 1999–2004 November 21, 2007 This program is presented by the Centers for Disease Control and Prevention. CDC – safer, Scientists prefer the term “unintentional injuries,” instead of “accidents,” because most of these injuries can be prevented or avoided. The causes of these injuries range from slipping and falling to crashes and fires. Unintentional injuries are the leading cause of death among people between 1 and 44 years old. Motor vehicle crashes are the leading cause of unintentional injury deaths. Twice as many males as females die as a result of unintentional injuries. Deaths reported from unintentional injuries have increased since 1999. Raising awareness about the causes of these injuries is key to preventing them and reducing the number deaths that result. Thank you for joining us on A Minute of Health with CDC. To access the most accurate and relevant health information that affects you, your family and your community, please visit www.cdc.gov.
<urn:uuid:bf3b3057-48c4-4645-8387-c3ebae57b9ff>
{ "dump": "CC-MAIN-2014-23", "url": "http://www2c.cdc.gov/podcasts/player.asp?f=7301", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997877881.80/warc/CC-MAIN-20140722025757-00158-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9270704388618469, "token_count": 363, "score": 2.9375, "int_score": 3 }
This Common Drug Destroys Your Liver (Doctors Warn) The inadequate functioning of the liver is known as acute liver failure. It can develop in a matter of days or weeks, and it often occurs in people with no previous liver disease. Acute liver failure may lead to more serious health complications, such as increased pressure in the brain or excessive bleeding. What is the main cause of this? The main reason for acute liver failure is an overdose of a drug that most of us have at home. ACUTE LIVER FAILURE AND ACETAMINOPHEN A great number of drugs, like Excedrin, Tylenol, Theraflu, and Nyquil, contain acetaminophen. Acetaminophen, which is present in many painkillers and cold medicines, is considered the main reason for over 56,000 emergency room visits, 2,600 hospitalizations, and about 460 deaths per year. Long-term consumption of acetaminophen can be very harmful to the human body, even when it is taken in low doses. Acetaminophen overdoses are usually caused by taking more than one drug that contains acetaminophen, such as taking a painkiller and a cold medicine at the same time. Above all, acetaminophen is harmful to the liver. A study published in the Journal of the American Medical Association has shown that acetaminophen can stress the liver even when it is taken as prescribed. In the study, 145 volunteers were classified into three groups: the first group took an acetaminophen/opioid combination, the second group took only acetaminophen, and the third group took a placebo. Each of the volunteers took the prescribed dose of acetaminophen, and the groups were examined over a period of two weeks. The results showed that in the two groups that took acetaminophen, 31% to 44% of participants had elevated liver enzyme levels, a warning sign of liver injury.
<urn:uuid:357c4cf5-df78-4129-a14e-8411588e0fec>
{ "dump": "CC-MAIN-2018-51", "url": "http://healthydefinition.com/drug-destroys-your-liver/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823303.28/warc/CC-MAIN-20181210034333-20181210055833-00607.warc.gz", "language": "en", "language_score": 0.9641825556755066, "token_count": 786, "score": 3.125, "int_score": 3 }
Calculus is a branch of Mathematics that deals with the study of limits, functions, derivatives, integrals and infinite series. The subject comes under the most important branches of applied Mathematics, and it serves as the basis for all advanced mathematical calculations and engineering applications. There are two major categories of Calculus: differential calculus and integral calculus. In this content, we will focus majorly on different solving techniques of Calculus and will also throw some light on a wide range of concepts associated with the subject. Before we jump into the detailed study of the subject, we must be familiar with some basic terms that are associated with the course. A good understanding of Calculus requires you to have a basic knowledge of functions. These functions are further characterized as polynomial, rational, logarithmic, exponential, and trigonometric functions. Throughout this course, we will be making use of these terms frequently, so it is better if you have a good understanding of the terms listed above. These are not very difficult-to-understand concepts. You may study them on your own before you proceed further into learning concepts of Calculus. Next we move to the core concepts and examples of Calculus. A polynomial function has the form `f(x)=a_n x^n+a_(n-1) x^(n-1)+...+a_1 x+a_0`, where `a_n, a_(n-1),...,a_0` are real numbers and n is a nonnegative integer. In other words, a polynomial is the sum of one or more monomials with real coefficients and non-negative integer exponents. The degree of the polynomial function is the highest value of n for which `a_n` is not equal to 0. Polynomial functions of only one term are called monomials or power functions. A power function has the form `f(x)=ax^n`. For a polynomial function f, any number r for which `f(r)=0` is called a root of the function f. When a polynomial function is completely factored, each of the factors helps identify zeros of the function. "Rational function" is the name given to a function which can be represented as the quotient of polynomials, just as a rational number is a number which can be expressed as a quotient of whole numbers. Rational functions supply important examples and occur naturally in many contexts. All polynomials are rational functions. Logarithmic functions are used to simplify complex calculations in many fields, including statistics, engineering, chemistry, physics, and music. For example, the identities `log(xy)=log x + log y` and `log(x/y)=log x - log y` essentially simplify multiplication to addition and division to subtraction. Logarithmic functions are the inverse of their exponential counterparts. An exponential function is a mathematical function of the following form: `f(x) = a^x`, where x is a variable, and a is a constant called the base of the function. The most commonly encountered exponential-function base is the transcendental number e, which is equal to approximately 2.71828. Thus, the above expression becomes: `f(x) = e^x`. When the exponent in this function increases by 1, the value of the function increases by a factor of e. When the exponent decreases by 1, the value of the function decreases by this same factor (it is divided by e). A trigonometric function is a function of an angle expressed as the ratio of two of the sides of a right triangle that contains that angle; the six trigonometric functions are the sine, cosine, tangent, cotangent, secant, and cosecant. They are also called circular functions.
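As a concrete, purely illustrative example of the limit idea behind derivatives, the short Python sketch below approximates the slope of a sample polynomial and of `e^x` with a difference quotient; the particular cubic and the sample point are my own choices, not part of the original lesson:

```python
import math

# A sample cubic polynomial, f(x) = 2x^3 - 4x + 1, chosen only for illustration.
def f(x):
    return 2 * x**3 - 4 * x + 1

def difference_quotient(func, x, h):
    """(f(x+h) - f(x)) / h, which approaches f'(x) as h shrinks toward 0."""
    return (func(x + h) - func(x)) / h

x = 1.5
exact = 6 * x**2 - 4  # derivative of 2x^3 - 4x + 1 is 6x^2 - 4, so 9.5 at x = 1.5

for h in (0.1, 0.01, 0.001, 0.0001):
    print(f"h = {h:<7} difference quotient = {difference_quotient(f, x, h):.5f} (exact: {exact})")

# The same limiting process applied to the exponential function e^x at x = 0
# homes in on 1, reflecting the fact that e^x is its own derivative.
for h in (0.1, 0.001, 0.00001):
    print(f"h = {h:<7} slope of e^x at 0 = {difference_quotient(math.exp, 0.0, h):.5f} (exact: 1)")
```

Watching the difference quotient settle toward a single value as h shrinks is exactly the limit process that the rest of the course builds on.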
<urn:uuid:2df5846d-fcf0-4e56-ae21-5f3086d707dd>
{ "dump": "CC-MAIN-2019-13", "url": "https://www.ipracticemath.com/learn/calculus", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202131.54/warc/CC-MAIN-20190319203912-20190319225912-00460.warc.gz", "language": "en", "language_score": 0.9267752170562744, "token_count": 766, "score": 3.765625, "int_score": 4 }
Tip #228: Debunking Myths About Lesson Plans Some trainers feel that if they create a lesson plan, they will lose the option to be flexible and spontaneous. However, this is entirely untrue. As any trainer who has ever used a lesson plan can tell you, we rarely, if ever, deliver the training exactly as we planned it out. Each training group has different needs, learning styles and paces, issues, and questions- and a good trainer/facilitator alters the training experience (and sometimes even the training content) to meet those needs. The major benefit in using a lesson plan is that it ensures that key content will be covered. The sequence may be changed and additional content and/or learning activities may be added, but that is done still keeping in mind the major information and activities that need to be retained. A well written lesson plan not only identifies the content and learning activities for each module, but also the duration of each activity. This enables a trainer to make informed decisions quickly and effectively in order to adjust to the learners’ interests and needs. For example, an important issue may be raised by a participant that needs to be addressed. Let’s say that this new content requires thirty minutes to handle that was not originally anticipated in the lesson plan. If the next learning activity is a questionnaire that is allocated 50 minutes, with small group discussions and report outs- the trainer will need to revise how the questionnaire is facilitated. Since there is not enough time for the small group discussions and report outs, the trainer will have to quickly select a different way to facilitate the activity in the 20 minutes that remain. The trainer may read each question and have participants indicate whether they agree or disagree by a thumbs up or thumbs down gesture. The trainer can then call on volunteers who voted differently to provide their rationale. The content will still be covered and the original learning activity will still be facilitated, just in a different fashion and for a shorter period of time. Without a lesson plan, there is no guarantee of consistency or quality control on either the content or the learning activities. The learning experience becomes a hit or miss proposition, depending on the mood of the trainer and the interests of the learners. With a lesson plan, the trainer is better able to adjust to the learners while still ensuring that key content is covered and the desired levels of learning are achieved through planned learning activities. July 28, 2008 Last week, we debunked the myth that lesson plans take the flexibility and spontaneity out of training. Ross Thomas had this to say: “I got this latest learning tip, and I can’t agree with you more! My training class typically runs for about 12 days. Each day I have a lesson plan with the high level content summary along with the activities that I plan to do. I also have things on the plan that I have listed that can be moved to the next day if necessary. I can’t imagine trying to conduct this training without the lesson plan that I have developed. It is such an invaluable tool and gives me the confidence and knowledge that I am covering all the topics that I need to cover.” Thanks so much, Ross. I feel the very same way! This week, we debunk the myth that learning = retention.
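The timing adjustment in the questionnaire example above is simple arithmetic, but if it helps to see it spelled out, here is a tiny illustrative Python sketch (not from the original tip; the function name and numbers simply restate that example):

```python
def time_left(allocated_minutes, unplanned_minutes):
    """Minutes remaining for a planned activity after an unplanned segment."""
    return max(allocated_minutes - unplanned_minutes, 0)

# The scenario from the tip: a 50-minute questionnaire activity,
# squeezed by a 30-minute unplanned issue raised by a participant.
print(time_left(50, 30))  # 20 -> switch to a faster format (e.g., thumbs up/down voting)
```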
<urn:uuid:8303635a-d124-47ba-8179-a24653df01d9>
{ "dump": "CC-MAIN-2018-39", "url": "http://laurelandassociates.com/tip-228-debunking-myths-about-lesson-plans/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158429.55/warc/CC-MAIN-20180922123228-20180922143628-00131.warc.gz", "language": "en", "language_score": 0.9502502083778381, "token_count": 676, "score": 3.0625, "int_score": 3 }
Urinary system stones
Urinary system stones are deposits of salts in the urine that form lumps and grow over time. The stones can be found in the kidneys, ureters, and bladder. Their presence causes problems that many people suffer from, such as urine retention, bloody urine, wounds, and bleeding in the organs of the urinary system.
Causes of their formation
- Lack of urination due to high temperatures or strenuous work, which causes fluids to evaporate from the body while not enough water is drunk. These two factors contribute to the formation of stones.
- An increase in the alkalinity or acidity of the urine due to some foods, such as meat, that contain substances which increase the acidity of the urine; as a result, some components and salts such as phosphates accumulate and stones form.
- Pus-forming infections in the kidneys, which may result in stones in the organs of the urinary system.
- A high intake of calcium-rich foods, such as milk and dairy products. The treatment here is not a complete ban, but moderation in eating foods that contain calcium.
- Increased excretion of calcium in the urine due to a disease, an abnormality in the parathyroid gland, or possibly genetic reasons.
- A high level of uric acid in the blood, due to nutrition, disease, or genetic reasons, which also encourages stones to form.
Symptoms of stones
Pain in the area between the last rib and the back muscles, accompanied by vomiting or nausea, usually suggests kidney stones. Renal colic together with an interrupted urine stream or a decrease in urine output indicates the possibility of stones in the ureter. As for the bladder, pain in the urethra, a sensation at the opening of the bladder, bloody urine, an interrupted urine stream, and severe sensitivity in the bladder indicate the possibility of stones in the bladder.
Urinary stones treatment
There are many ways to treat these stones, whether with drugs, surgery, lasers, or other methods, and you can call and book with Dr. Salah Zedan to get rid of this painful condition forever.
<urn:uuid:49583c6f-9f47-401a-88ee-73c8da6c1f34>
{ "dump": "CC-MAIN-2021-17", "url": "https://salahzedan.com/en/blog/specialization/urinary-system-stones/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057476.6/warc/CC-MAIN-20210410181215-20210410211215-00604.warc.gz", "language": "en", "language_score": 0.9413864016532898, "token_count": 461, "score": 3.046875, "int_score": 3 }
Fertilize Container Plantings Plants confined to containers can't send out roots in search of nutrients, so we have to supply them regularly. Adding a slow-release fertilizer at planting time provides a steady supply of nutrients for several months. Or you can mix and apply a liquid fertilizer and use it weekly. Be sure to dilute the fertilizer properly -- more is not better. If plants dry out completely, rehydrate them first, then use the fertilizer solution the next time you water. Replace Flagging Annuals Pansies, lobelia, petunias, snapdragons, and other cool-season annuals can go into a slump in midsummer. Either replace them with heat-lovers, or cut them back in the hopes that they'll survive until fall, then bloom again as the weather cools. Spend Half Hour a Day Pulling Weeds Spend just a half hour ... surely you can find a half hour. Weeds really do sprout overnight and by next week they'll be towering over your zinnias. A visit to the garden before work or as soon as you get home can keep the weeds in check. Keep Tomatoes Evenly Watered Dark leathery spots on the blossom end of tomatoes is likely to be a condition called "blossom end rot" that's caused by uneven moisture. Mulch will help moderate the fluctuating moisture levels that nature provides, and it's not too late to spread some around your plants. Avoid Using Sprinklers in Midday Sun The best time to water the garden with a sprinkler is in the morning, second-best is late afternoon, worst is during the hottest part of the day. So much water is lost when using an overhead sprinkler on a sunny afternoon that you may as well pour it down the drain. Even better than a sprinkler are drip irrigation systems and soaker hoses, which slowly apply water directly to the soil and don't wet the foliage. If you do use a sprinkler, be sure to keep it on long enough to wet the soil to the root zones. This can take much longer than you think.
<urn:uuid:7f87dd06-bcca-4f38-98eb-632dbf3fafe4>
{ "dump": "CC-MAIN-2016-07", "url": "http://www.garden.org/regional/report/arch/reminders/2823", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701160950.71/warc/CC-MAIN-20160205193920-00021-ip-10-236-182-209.ec2.internal.warc.gz", "language": "en", "language_score": 0.9317216873168945, "token_count": 443, "score": 2.5625, "int_score": 3 }
Italy, the Cold War, and the Nuclear Dilemma: The Struggle over the NPT Washington History Seminar Historical Perspectives on International and National Affairs Italy, the Cold War, and the Nuclear Dilemma: The Struggle over the NPT UNIVERSITY OF ROMA TRE Why do nuclear weapons matter? Italy‘s military nuclear policy throughout the Cold War was an attempt to achieve a position of parity with the major European powers. The Non-Proliferation Treaty, however, challenged this basic goal, and both the signature and the ratification of the treaty became two of the most controversial choices that postwar Italy had to face. Leopoldo Nuti is Director of the Machiavelli Center for Cold War Studies and professor of history of international relations and coordinator of the international studies section of the doctoral school in political science at the University of Roma Tre. He is the Co-Director of the Nuclear Proliferation International History Project. Nuti has been a Fulbright student, NATO Research Fellow, Jean Monnet Fellow at the European University Institute, Research Fellow at the CSIA, Harvard University, Research Fellow for the Nuclear History Program, Senior Research Fellow at the Norwegian Nobel Institute, and Visiting Professor at the Institut d'Etudes Politiques in Paris. He has published extensively in Italian, English, and French on U.S.-Italian relations and Italian foreign and security policy. His latest book is a history of nuclear weapons in Italy during the Cold War, La sfida nucleare. La politica estera italiana e le armi nucleari, 1945-1991. Report from the Field: David Nickles, US Department of State Office of the Historian Monday November 25, 2013 Woodrow Wilson Center, 6th Floor Moynihan Board Room Ronald Reagan Building, Federal Triangle Metro Stop Reservations requested because of limited seating: [email protected] or 202-450-3209 The seminar is sponsored jointly by the National History Center of the American Historical Association and the Wilson Center. It meets weekly during the academic year. See www.nationalhistorycenter.org for the schedule, speakers, topics, and dates as well as webcasts and podcasts. The seminar thanks the Society for Historians of American Foreign Relations for its support.
<urn:uuid:bd6367c9-384f-40f3-9795-b82e1d8be686>
{ "dump": "CC-MAIN-2015-22", "url": "http://www.wilsoncenter.org/event/italy-the-cold-war-and-the-nuclear-dilemma-the-struggle-over-the-npt", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929171.55/warc/CC-MAIN-20150521113209-00339-ip-10-180-206-219.ec2.internal.warc.gz", "language": "en", "language_score": 0.905278205871582, "token_count": 479, "score": 2.53125, "int_score": 3 }
Are you planning a trip to Japan and considering visiting an onsen? Look no further, as this post contains everything you need to know as a traveler about enjoying a hot spring in Japan. As someone who has personally visited a ryokan with a private onsen and fully enjoyed it, I can attest to the amazing experience that awaits you. Yes, it’s true guys, there might be affiliate links in this awesome, free post. This means that if you decide to buy something that you find here, and you use one of my links to do so, I will earn a small commission at no extra cost to you. I plan to use this money on ice cream, chocolate, and to travel more so I can write these useful guides for you. As an Amazon Associate, I earn from qualifying purchases. Table of Contents What are Onsens? In Japan, onsens are hot springs and the bathing facilities and traditional inns around them. These hot springs are popular for their health benefits and have been a part of Japanese culture for centuries. There are approximately 25,000 hot spring sources throughout Japan, and approximately 3,000 onsen establishments use naturally hot water from these geothermally heated springs. Onsen may be either outdoor baths or indoor baths, each with its own strong points and benefits. Traditionally, onsens were located outdoors, although many inns have now built indoor bathing facilities as well. Nowadays, as most households have their own baths, the number of traditional public baths has decreased, but the popularity of sightseeing hot spring towns has increased. Baths may be either publicly run by a municipality or privately, often connecting to a lodging establishment such as a hotel, ryokan, or minshuku. The presence of an onsen is often indicated on signs and maps by the symbol ♨, the kanji 湯 (yu, meaning “hot water”), or the simpler phonetic hiragana character ゆ (yu). When onsen water contains distinctive minerals or chemicals, establishments often display what type of water it is, in part because the specific minerals found in the water have been thought to provide health benefits. They can contain sulfur, sodium chloride, hydrogen carbonate, or iron, each having different benefits for your health. Want to have a helpful resource to make your planning efforts not only easier but also more enjoyable? Check out my Japan Travel Guide from the shop! The History of Onsens in Japan The history of onsens in Japan dates back to the 6th century, as noted in old books of Japanese history. These hot springs were used for purifying rituals in the Shinto religion and for the enjoyment of the emperors. Eventually, the bathing culture in onsens spread throughout the country, becoming accessible to all citizens. Interestingly, it is said that people began to gather around onsens to hunt wild animals that came to drink the hot spring water to absorb minerals. The Health Benefits of Onsens Visiting a traditional onsen in Japan is not just a great way to relax but also a unique way to experience Japanese culture. Many onsens are steeped in history, with stories of samurai warriors soaking in the waters to heal their wounds after a battle. Nowadays, soaking in a hot spring is considered a therapeutic experience for people suffering from various ailments. In fact, there is scientific evidence that suggests that bathing in an onsen can have a number of health benefits. - Increase Blood Circulation: Onsen water is rich in natural minerals, such as sodium bicarbonate and calcium, that get absorbed into our bodies as we bathe. 
These minerals help increase blood flow and the amount of oxygen in our blood. - Reduce Stress and Sleep Better: The hot spring water can relieve tense muscles, and the natural surroundings of most Japanese hot springs can help clear your mind. Your body quickly cools after leaving the hot spring, which encourages your body to relax and puts you into a deeper sleep. - Relieve Pain: A recent study in the Journal of Rheumatology studied the effects of hot springs on pain. The conclusion was that the intense heat of the bathing experience somewhat dulled our perception of pain. The onsen water also acts as buoyancy for aching joints. The combination of temperature, minerals, mental state, and ease of movement in the water helps relieve different kinds of pain. - Treat Skin Problems: Onsen in Japan have different mineral qualities. Many onsens have been known to beautify the skin or have names like “Beautiful Skin” or “Princess Bath.” Some hot springs contain silica, which can smooth or soften dry and rough skin. Onsens containing sulfur have been recommended for people suffering from eczema and psoriasis. - Reduce Stress: Soaking in hot springs is not only a relaxing experience but also a great way to improve your overall well-being. The warm water of the onsen can help reduce stress and tension in your body, which can lead to better sleep, improved digestion, and increased energy levels. Taking the time to indulge in a hot spring soak can have numerous benefits for your mind and body. It’s no wonder why onsens have become such an integral part of Japanese culture! Tattoos and Onsens in Japan As the tourism industry in Japan grows, more and more foreigners are visiting the country’s onsens. Some onsens that previously banned tattoos are now loosening their rules to allow guests with small tattoos to enter. However, they require guests to cover their tattoos with a patch or sticking plaster to be allowed in. Best Onsen Areas in Japan Japan is known for its numerous hot springs, or onsens, which can be found all over the country. Here are some of the best onsen areas you can choose from: - Kusatsu Onsen: A popular onsen resort town in Gunma Prefecture, known for its high-quality, mineral-rich water. - Hakone Onsen: Located in Kanagawa Prefecture, this onsen area offers stunning views of Mt. Fuji and is easily accessible from Tokyo. - Beppu Onsen: Located in Oita Prefecture on the island of Kyushu, Beppu is known for its many different types of hot springs, including mud baths and sand baths. - Yufuin Onsen: A small onsen town in Oita Prefecture, known for its picturesque rural scenery and relaxing atmosphere. - Kurokawa Onsen: A hidden gem in Kumamoto Prefecture, known for its traditional, rustic atmosphere and beautiful natural surroundings. - Noboribetsu Onsen: Located in Hokkaido, this onsen area is known for its unique, sulfuric water and stunning, volcanic landscape. - Kinosaki Onsen: Located in Hyogo Prefecture, this historic onsen town is known for its charming atmosphere and traditional architecture. - Dogo Onsen: One of Japan’s oldest and most famous onsen, located in Ehime Prefecture on the island of Shikoku. - Fuji Kawaguchiko Onsen: Located near Mt. Fuji, this onsen area offers stunning views of the iconic mountain and is a popular spot for outdoor activities. - Ibusuki Onsen: Located in Kagoshima Prefecture, this onsen area is unique for its sand baths, which are said to have therapeutic benefits. 
Other honorable mentions include Kusatsu Onsen, Nozawa Onsen, Atami Onsen, Yumoto Onsen, Niwa no Yu, and Ginzan Onsen. No matter where you go, experiencing an onsen in Japan is a must-do activity for any traveler seeking relaxation and rejuvenation. How to get to enjoy Japan’s hot springs? Some public onsens can be visited straight from the city, so if you’re really in a hurry and can only invest a few hours in enjoying this experience, this is a good starting point. Please note though that most public onsens in major cities are not actual hot springs, but pools with hot water. The natural onsens have mineral water that is heated by the volcanic activity of the area, while the unnatural ones have normal water heated by usual means like you would enjoy when showering at home. One of the best ideas though is to visit a special area known for its hot springs. I have offered you a few examples above, and plan to write a few more blog posts about each of them. To get to these areas, I suggest you check out the JR Pass, as it’s most probably your easiest solution to travel between cities in Japan anyway. You can check out my detailed guide about using a train in Japan to see how you can find out if you need the JR Pass for your trip or not. What to Pack for an Onsen Trip If you’re planning to visit an onsen in Japan, it’s important to pack a few essential items to ensure a comfortable and enjoyable experience. First and foremost, bring a bathing suit, as some onsens require all visitors to wear one, and you might go to a different kind of spa where you will need one. Additionally, pack a big towel to dry off after soaking, as well as waterproof patches if you have a tattoo that may not be allowed in some onsens. Don’t forget to bring your favorite cosmetics to use after soaking, as the minerals in the water can be harsh on the skin. Lastly, bring something to hold your hair up, like a hair tie or clip, to keep it out of the water and prevent any discomfort. Choosing the Right Onsen for You If you’re planning to visit an onsen in Japan, it’s important to choose the right one that suits your needs and preferences. For instance, if you have a tattoo, it’s best to look for a tattoo-friendly onsen or book a room with a private hot spring to avoid any issues. On the other hand, if you’re not comfortable bathing naked in front of others, you can opt for a private onsen or a mixed-gender one where that requires the use of bathing suits. Additionally, it’s recommended to check the chemical composition of the onsen to ensure that you don’t have any health issues that could prevent you from enjoying the hot spring. The Best Time to Visit an Onsen Onsens can be enjoyed all year round, whether in the winter or in the summer. During winter, there is nothing more relaxing than soaking in a hot spring while the snow falls around you. The cold weather also makes the hot water even more inviting. In the summer, on the other hand, it is a great way to unwind after a day of exploring under the sun. Whatever the season, a trip to an onsen is a must-do experience in Japan. Who should avoid onsens? As much as we like to believe that hot springs are for everyone, nothing in this world is for everyone so let’s just embrace this. Of course, if you have various health issues that you know you should take care of, please talk to your doctor about this. They will be able to advise you better than any other blogger you will find online. As a female, you should avoid onsens if you’re menstruating, or if you’re pregnant. 
While pregnant, you can discuss more with your doctor, but when I was pregnant they indeed recommended me not to use any hot springs. Anyone else should avoid onsens if they have an issue with any of the minerals in the water, if they are unwell in any way (especially if having a fever), or if staying in extreme heat is problematic to them in any way (like it is the case in some heart conditions). Tips when visiting hot springs So, you’re ready to take a dip in a hot spring in Japan? Great choice! Onsens are a unique cultural experience that you won’t want to miss. To help you make the most of your onsen experience, here are some tips to keep in mind: - Make sure to wash thoroughly before entering the hot spring. Onsens are meant to be used to relax, not to wash yourself. - Drink plenty of water and you might also need some fruit juice after, as you can feel drained after a hot, steaming soak. - Always look straight ahead to not seem like you’re staring at other people. Being in a room full of naked people might feel weird at the beginning, but you’ll warm up to it rather soon (see what I did there?) - Be mindful of others and keep noise to a minimum. Also, try to relax as well. This is what hot springs are meant for. - Do not submerge your head in the hot spring. Your hair should never touch the water, and I wouldn’t like your nose and eyes in the water either. - If you feel too hot or dizzy, exit the hot spring immediately. Being in such a hot environment can make some people feel a bit lightheaded. If you feel a bit strange, exit right away and take a cold shower to try to feel better. - Do not drink alcohol before or during your time in the hot spring. This can be very dangerous as alcohol and heat do not make a nice combo for your brain, so you shouldn’t have any Pina Coladas in this type of pool. - Follow any additional rules or etiquette specific to the hot spring you are visiting. This should go without saying but yes, some places might have specific rules, and you should be mindful of them. The Dos and Don’ts of Onsen Etiquette When visiting an onsen in Japan, it’s important to understand and follow the etiquette rules to ensure a comfortable and respectful experience for everyone. From bathing procedures to wearing appropriate attire, these guidelines help maintain the cleanliness and tranquility of the hot springs for all who visit. Let’s dive in and explore the do’s and don’ts of onsen etiquette. - Wash Yourself Well: Before entering the onsen, make sure to clean yourself thoroughly. Don’t just take a quick shower, use the small towel you were given to scrub yourself really well. - No Tattoos: Except for the cases when you know the onsen allows tattoos, or if you’re in a private onsen, do not use them if you have tattoos or cover them up with a patch. - Keep Your Hair Out: Your hair should never touch the water. This is to keep it as clean as possible, so do not submerge your head in the water, no matter how cool it seems on TV. - Be Completely Naked: I know, I know, it feels weird to be completely naked in what is essentially a big bathtub with strangers. If it makes you feel better, this is for hygiene reasons, and it happens in Europe as well in some spas. - Use the Towel Wisely: The small towel provided should not touch the water either. So, where do you leave it, you ask? Well, you fold it nicely and put it on your head. Bonus tip: soak it in cold water to get a nice cold patch on your soon-to-be very hot skin. 
- Avoid Open Wounds: If you have open wounds of any kind, do not enter the onsen. This is due to the risk of infection that you can transmit to other people, but also for your safety, as in the water there are potential things you might not want in your blood. - Keep Quiet: Be mindful of other people and keep the noise level down. Everyone comes here to relax, not to hear all about the party you went to last night, so please allow them to enjoy themselves as well. - No Splashing: Do not jump, dive, splash, or make loud noises in the onsen. This is not a pool where you can run around, pushing each other or a place for your kids to practice their sick jumps. - Mind Your Showering: Be aware when showering as it’s a common area, try to not bother other people. Try not to splash people around you. Wouldn’t you feel disgusted if someone else would throw some foam from their body to yours? - Shower Between Onsens: If you use more than one hot spring, it’s best to shower between them or after using the sauna. As a rule of thumb, before you enter any onsen, you have to wash yourself. - Pat Yourself Dry: After exiting the hot spring, pat yourself dry before entering the changing area. It might feel weird to do so with the already wet and tiny towel you have been wearing on your head until now, but it’s better than going in the changing area with water dripping all over. - Clean Up After Yourself: Clean up any benches or buckets you have used. You have just a tiny space for yourself here, so leave it clean for the people coming after you to ensure that everyone has a good experience. - Use a Towel to Sit On: If you need to sit, put a towel on the benches you use. It would anyway feel weird to just sit with your bare skin on a bench where you have no idea who sat before you, am I right? - No Alcohol: Do not drink alcohol in the onsen for health and safety reasons. Alcohol and heat are not a good combination for your body. Plus, any glass containers in the hot spring can be a safety issue if it gets broken. - Avoid During Your Period: Unfortunately for all people with periods, you should not enter an onsen if you are on your period, even if you’re wearing a tampon or a menstrual cup. Not sure how good you’d feel to be in that heat during your period, so keep it in mind in case you’re as unlucky as I am (I routinely joke about having to buy a plane ticket for my period as well, as it always finds a way to come with us on all vacations!) During my visit to Japan I had the pleasure of staying at a ryokan with a private onsen. The whole experience was amazing, from sleeping on a futon laid out on tatami mat floors to enjoying the delicious food with plenty of options to choose from. The highlight of my stay was, of course, the onsen. Although it was separated into male and female areas, I found the experience incredibly relaxing after a long day of walking over 20,000 steps. The ryokan had both an indoor and outdoor onsen, each with its own charm. However, the only downside was that without my husband, I felt a bit lonely and bored, so I didn’t stay in the onsen for too long. Nonetheless, I would highly recommend visiting a ryokan with an onsen for anyone traveling to Japan. As we didn’t want to go to Hakone for this experience since we knew that’s the place where most tourists go, we decided to go to Gero, Gifu instead. This area is a bit further away from the main Tokyo-Osaka-Kyoto “avenue”, so it’s mostly visited by locals, which is exactly what we wanted. We choose the Yukai Resort and loved it! 
The food was amazing, the room was nice and they had two onsens, one inside, and one on the roof. The only downside was that it had a lot of stairs at the entrance and no elevator, so we had to carry our pretty heavy luggage up the stairs. Apart from this, we fully recommend the place, check it out below! FAQ about onsens in Japan What are onsens in Japan? Onsens or hot springs are sources of water that are naturally hot and contain minerals. These springs are known for their relaxing and therapeutic benefits. Can foreigners go to onsens in Japan? Yes, of course, as long as they abide by the local rules and customs and follow the correct onsen etiquette. Do you wear clothes in an onsen? No, you are forbidden from entering the onsen while wearing anything. This is a way to keep the spring water clean of any impurities you can carry on your swimsuit. Where are onsens in Japan? You can find onsens in plenty of areas of the country, and there are usually entire towns where hot springs are the main focus. Are onsens separated by gender? Usually yes, onsens are mostly same-sex only. There are a few exceptions though where you can visit a free-for-all onsen, or you can book a private onsen that you will enjoy straight from your room. Do you shower after visiting an onsen? No, you shouldn’t shower really soon, as you’ll wash off all the minerals. But you should shower if switching hot springs or at least get a quick cold shower if you feel uncomfortable upon exiting the onsen. Who should avoid onsen? You should avoid onsens if you are menstruating or pregnant, suffer from various diseases that make you extremely sensitive to heat (or having a fever) or if you’re allergic to any of the minerals present in the hot spring’s water. How long should you sit in an onsen? If you feel alright, you can stay up to 10 minutes at a time in an onsen, and you can take cold feet baths in between onsens. Of course, if you feel dizzy, get out of the hot spring at any time. Do onsens smell bad? Some yes, especially the ones that contain sulfur. You will recognize them immediately by the smell of old eggs. Onsens in Japan – The takeaway Experiencing an onsen in Japan is truly an art form. The history, culture, and tradition behind it make it a unique and must-do activity for anyone visiting Japan. From the healing properties to the stunning natural surroundings, an onsen experience is one that will leave you feeling relaxed, rejuvenated, and in awe of the beauty that Japan has to offer. So, be sure to add visiting an onsen to your Japan travel itinerary. It’s an experience that will not disappoint! And don’t forget to get your PDF with entry fees to various Japanese attractions. It will help you budget plan like no other!
<urn:uuid:6a35f928-1203-4d59-8942-4e8a13ec2238>
{ "dump": "CC-MAIN-2023-40", "url": "https://honesttravelstories.com/onsens-in-japan-rules/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506028.36/warc/CC-MAIN-20230921141907-20230921171907-00754.warc.gz", "language": "en", "language_score": 0.948902428150177, "token_count": 4745, "score": 2.515625, "int_score": 3 }
Caribbean Flamingos are bright pink or salmon in color, a result of eating shrimp. Chilean flamingos are pale pink with bright tips on their feathers. Caribbean Flamingos call North, Central, and South America their home — as well as the Galapagos regions. Chilean flamingos reside farther south, in the muddy wetlands of temperate South America. Did you know a flamingo’s signature pink color comes from its diet? In the wild, flamingos eat tiny algae and brine shrimp that live in the water. The brine shrimp and algae contain carotenoid pigments, the same nutrients that make carrots orange and beets red. We’ve added a new and memorable experience that will tickle you pink! Thanks to a gift from The Family of Dan and Rhonda Hall, a new flamingo feed allows guests to come up close and personal with our flamingos by feeding them one of their favorite foods, krill! You can feed our flocks during one of three scheduled feeding times daily — 11am and 1 & 3pm. You need a feed ticket, which is $4. Look no further. Connect with our amazing animals and learn about the wild places they come from. The Animal Amigo program helps care for all of the animals at the Zoo by funding food, medical treatment, equipment, enrichment toys, and habitat improvement for the animals in our care. For a donation of $100 or more, you can sponsor a flamingo at the Indianapolis Zoo. You will receive a plush, collector card, certificate and recognition on the Animal Amigo donor board!
<urn:uuid:7273108a-95b2-466b-a814-6ce127fc2dd2>
{ "dump": "CC-MAIN-2024-10", "url": "https://www.indianapoliszoo.com/exhibits/flights-of-fancy/flamingos/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474581.68/warc/CC-MAIN-20240225035809-20240225065809-00226.warc.gz", "language": "en", "language_score": 0.9243133068084717, "token_count": 332, "score": 2.796875, "int_score": 3 }
Doesn’t look like it, but this was the site of the king (or emperor) of big explosions. BTW, Wikipedia disagrees on the location compared to the sources we used. The difference is fairly small, but if you feel that you missed out on 2 points because you looked where Wikipedia’s location was and it didn’t match the contest image, make your case and we *might* go ahead and award you the extra point. 6 Replies to “Contest #709 – Hint” The site of Tsar Bomba, on Severny Island To be more specific: 73°32’39.6″N+54°42’16.6″E (site of Tsar Bomba according to https://virtualglobetrotting.com/map/tsar-bomba-crater-largest-nuke-detonated/view/google/, but not according to wikipedia or Google Maps) “BTW, Wikipedia disagrees on the location…” english and russian and french wiki do not agree indeed french wiki gives your coordinates! I’m happy to take the lesser amount, because I’d picked the right landform but never actually found the spot. I mean, tundra lava plains with glacial tongues in the far north … but yes, the Tsar Bomba was one hell of a bang. And seriously, what difference is a couple of km going to make? There’s also this spot listed: 73°43’04.5″N+54°11’29.8″E The Tsar Bomb Virtual Globetrotting shows the same location as yours at Test site of tsar bomba Comments are closed.
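For anyone curious how far apart the competing coordinates quoted in this thread actually are, here is a rough Python sketch (my own addition, not from the original post; the decimal values are approximate conversions of the DMS strings above) using the haversine formula:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometres
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# 73 deg 32' 39.6" N, 54 deg 42' 16.6" E (the virtualglobetrotting.com location quoted above)
site_a = (73 + 32 / 60 + 39.6 / 3600, 54 + 42 / 60 + 16.6 / 3600)
# 73 deg 43' 04.5" N, 54 deg 11' 29.8" E (the other spot listed in the comments)
site_b = (73 + 43 / 60 + 4.5 / 3600, 54 + 11 / 60 + 29.8 / 3600)

print(haversine_km(*site_a, *site_b))  # roughly 25 km under this sketch's assumptions
```

If that estimate is right, the two candidate spots are separated by considerably more than a couple of kilometres, which could explain why some searchers picked the right landform but never found the exact spot.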
<urn:uuid:a8766355-1536-488c-8a91-891edae15b66>
{ "dump": "CC-MAIN-2023-23", "url": "http://whereongoogleearth.net/2021/06/02/contest-709-hint/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648000.54/warc/CC-MAIN-20230601175345-20230601205345-00272.warc.gz", "language": "en", "language_score": 0.8725084662437439, "token_count": 400, "score": 2.640625, "int_score": 3 }
Story: Geology – overview This map is a reconstruction of the New Zealand coastline 20,000 to 18,000 years ago, during the last glacial period. The sea level was about 100 metres lower than at present, so the shallower part of the continental shelf was exposed. Cook Strait did not exist, and it would have been possible to walk the length of the country. Glacier ice was more extensive that it is now, and large glaciers extended out beyond the present coastline on the western side of the South Island. Major rivers carried huge loads of sediment all the way to the edge of the continental shelf. The Waikato River originally flowed north and entered the sea on the eastern side of the North Island (shown by a dashed line). About 20,000 years ago it changed to its present course. About this item This item has been provided for private study purposes (such as school projects, family and local history research) and any published reproduction (print or electronic) may infringe copyright law. It is the responsibility of the user of any material to obtain clearance from the copyright holder. Source: GNS Science
<urn:uuid:695e0817-5659-4bea-b7a0-bce7aff6e256>
{ "dump": "CC-MAIN-2013-48", "url": "http://www.teara.govt.nz/en/map/8388/shoreline-during-the-last-glaciation", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345762908/warc/CC-MAIN-20131218054922-00063-ip-10-33-133-15.ec2.internal.warc.gz", "language": "en", "language_score": 0.9557424783706665, "token_count": 229, "score": 3.796875, "int_score": 4 }
Addition within 20 and beyond
EDpaX Math – Addition within 20 and beyond – Grade 2 – 51+ interactive and engaging teaching pages with associated student activities where the students will learn to:
- use addition words
- record addition sums in number sentences
- add by counting steps along a number line
- add three numbers
- recall the addition facts for 20
- to recognize that symbols can stand for unknown numbers in sums
Assessment included. Core Curriculum aligned. EDpaX Math lessons are written in ActivInspire for the Promethean board and in Notebook for the SMART Board.
Single User License
<urn:uuid:7a1cf8f3-7da5-4248-96ce-6d1df75f7d82>
{ "dump": "CC-MAIN-2017-51", "url": "http://www.edpax.com/mathematics/math-usa/grade-2/addition-within-20-and-beyond-grade-2/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948541253.29/warc/CC-MAIN-20171214055056-20171214075056-00460.warc.gz", "language": "en", "language_score": 0.9037238955497742, "token_count": 132, "score": 3.796875, "int_score": 4 }
Water vole by Brett Lewis Famed as Ratty in Kenneth Grahame’s novel The Wind in the Willows the water vole was once a familiar waterside creature regularly seen in our rivers and ponds. Their chubby whiskery faces, blunt noses and small ears readily distinguish them from the brown rat, which also inhabits waterways. They are almost wholly vegetarian, feeding on a wide range of plants, roots and tubers. They don’t hibernate but retreat underground where they store food but will emerge when the sun shines in the winter months. They need luxurious bankside vegetation, particularly grasses, rushes and sedges, to provide food and cover from predators and they also favour steep banks to allow them to excavate extensive burrow systems. In fenland situations they are known to burrow into tussock sedge which keeps them safely above fluctuating water levels. Over the last twenty years the species suffered the most rapid and catastrophic decline in numbers of any British mammal and this was due to habitat loss and predation by the non-native American mink. A national water vole survey carried out in the 1990’s revealed the devastating news that water vole was on the point of extinction in several counties. This galvanised the Trust into action and water vole conservation became a high priority. Between 2003-2005 we surveyed all Suffolk river catchments and the story was even more depressing – further dramatic decline on all rivers - in 2003 the river Alde had no water vole on its main channel! However, the 2005 survey gave us hope indicating a healthy water vole population at key coastal sites and in 2007 there was great news regarding the Alde. Water vole site occupancy on the main channel had increased from 0% to 55%! In 2010 the picture was even brighter with water vole recovery in the River Blyth catchment the most successful of any catchment in Suffolk that has been re-surveyed in the last five years. The Trust’s Water for Wildlife Project has been working for eight years to reverse the decline in water vole numbers by liaising with landowners to improve wetland habitat management, and by setting up a mink control project throughout all Suffolk’s rivers. By carrying out regular water vole surveys of our rivers, the Trust has established that where sustained mink control is carried out, water vole are successfully re-colonising our rivers, ponds and lakes and are now widespread and once again becoming a familiar and delightful sight along our riverbanks.
<urn:uuid:127d98cf-1942-4ce1-88b7-9400b61fd9fd>
{ "dump": "CC-MAIN-2017-09", "url": "http://www.suffolkwildlifetrust.org/node/9022", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00233-ip-10-171-10-108.ec2.internal.warc.gz", "language": "en", "language_score": 0.963898777961731, "token_count": 525, "score": 2.984375, "int_score": 3 }
Multiple sclerosis, often called MS, is a disease that gradually destroys the protective covering of nerve cells in the brain and spinal cord. This can cause problems with muscle control and strength, vision, balance, feeling, and thinking. MS has no cure, but medicines may help lower the number of attacks and make them less severe.
Primary Medical Reviewer: Adam Husney, MD - Family Medicine
Primary Medical Reviewer: Anne C. Poinier, MD - Internal Medicine
Specialist Medical Reviewer: Barrie J. Hurwitz, MD - Neurology
Last Revised: October 9, 2012
WebMD Medical Reference from Healthwise
<urn:uuid:d00083be-7060-48e1-ba21-04df884ed447>
{ "dump": "CC-MAIN-2014-23", "url": "http://www.webmd.com/hw-popup/multiple-sclerosis", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510270528.34/warc/CC-MAIN-20140728011750-00373-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.8861705660820007, "token_count": 139, "score": 3.046875, "int_score": 3 }
PORTOLA STATE PARK PORTOLA STATE PARK The park has a rugged, natural basin forested with coast redwoods, Douglas fir and live oak. Eighteen miles of trails crisscross the canyon and its two streams, Peters Creek and Pescadero Creek. A short nature trail along Pescadero Creek introduces visitors to the natural history of the area. Visitors can see clam shells and other marine deposits from the time when the area was once covered by the ocean. The park has one of the tallest redwoods (300 feet high) in the Santa Cruz Mountains. FACILITIES AND ACTIVITIES OVERVIEW to this park: Iverson, Summit, Slate Creek Trails 6 miles round trip; longer and shorter options possibleYou could call this tranquil park, perched on the opposite side of the Santa Cruz Mountains from Big Basin Redwoods State Park, "Little Basin Redwoods State Park." Like its well-known cousin, this park is a natural basin forested with coast redwoods. Portola Redwoods State Park it is, however, its name honoring explorer Don Gaspar de Portola, who led an expedition in search of Monterey Bay in 1769. The California landscape has changed immeasurably since Portola's time, but places like this park still evoke the feeling of wild California. This wild feeling begins outside the park boundaries as you travel Alpine Road. The view is of wide-open spaces, of uncluttered valleys and ridges topped with nothing more than grass and cows. The park centers around two creeks-Peters and Pescadero-which meander through a basin. Douglas fir and oaks cloak the ridges while redwoods, accompanied by huckleberry and ferns, cluster in cooler bottomlands. Most redwoods in the area are second-growth trees; this land, like most in the Santa Cruz Mountains, was logged during the 19th century. However, most of "logging" at Portola was for shingle production; trees needed a very straight grain and were selectively cut. Thus, many large trees escaped the ax and may be seen today inside the park. The Islam Temple Shrine of San Francisco used the property as a summer retreat for its members from 1924 until 1945, when the state acquired the land. During the 1960s, Portola had an amusement park-feeling. Pescadero Creek was dammed, providing a large fishing and swimming area. One year, 150,000 people poured into the small park. In 1974, the dam was removed and Portola reverted to quieter pursuits-camping, hiking, nature study. Rangers sometime refer to Portola as a "neighborhood park," meaning thus far only locals have discovered this ideal-for-a-family outing small redwood forest. My favorite day hike is a six mile "walkabout" that utilizes five different trails. Drop in at the park visitor center to view the nature and history exhibits. Interpretive programs are conducted during the summer and on some weekends. Visitor Comments, Memories and Reviews From Highway 35, turn west onto Alpine Road, go 3 miles, and turn onto Portola State Park Road. The road dead ends in the park. Use low gear as both these roads are steep and winding. Due to mountain roads, expect a 1 1/2 to 2 hour drive from most Bay Area locations. There is no gasoline available at or near the park. There is no store in or near the park.
<urn:uuid:e0c61cf4-01f3-40af-b548-241c185cd9df>
{ "dump": "CC-MAIN-2014-42", "url": "http://www.stateparks.com/portola.html?detailed_information=driving_directions", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507450097.39/warc/CC-MAIN-20141017005730-00090-ip-10-16-133-185.ec2.internal.warc.gz", "language": "en", "language_score": 0.9470301866531372, "token_count": 729, "score": 2.671875, "int_score": 3 }
3D viewing won't trigger epilepsy6th December 2011 Anyone watching a 3D movie runs a very slight risk of seizure, but that risk does not appear to be higher in epileptics, according to a recent German study. The researchers found, however, other unpleasant reactions were quite common. Study author Herbert Plischke, executive director of the University of Munich's Generation Research Programme, said his study of a group of children who had epilepsy had no increased seizure risk from watching movies in 3D. When watching 3D movies, about 20% of children seemed to experience nausea, headaches, or dizziness. Plischke said all children seemed to have a higher than usual vulnerability to seizure while watching 3D movies. The actual vulnerability, however, would depend upon the content of what was being shown on television, and not the technology itself. While the idea behind 3D movies is not new, the upsurge of consumer interest in the idea has spawned countless movies that require the wearing of 3D glasses in recent years. Now, with televisions that also make use of 3D glasses becoming more and more common, some medical professionals and researchers have raised concerns about how the technology might affect people. One study recently found that, for various reasons related to eye coordination, about 30% of all 3D viewers may experience headaches or nausea when watching such movies. The sensation experienced by such people is similar to feeling seasick. For the study, the researchers tested 100 children, all of whom were about 12 years old, for sensitivity to bright lights. All of the children were epileptic. Each of the children then watched 3D television, sitting just under seven feet away from the screen. One of the children had a seizure after 15 minutes of viewing. That child had an unusually high frequency of seizures, and normally had between three and four seizures a day. During the 3D TV test, one fifth of the children said they experienced nausea and headaches. A slightly smaller number of children said they experienced nausea and headaches during the light sensitivity test. The researchers also used EEG readings of the children's brains during both tests. Orrin Devinsky, director of NYU Langone Medical Center's Epilepsy Centre, who was not involved in the study, said that the finding sounded perfectly in line with what he might expect. He said that, if there was to be a problem, it would be with the content, namely flashing imagery, which would be a concern in 2D or 3D. Share this page There are no comments for this article, be the first to comment! Post your comment Only registered users can comment. Fill in your e-mail address for quick registration. Title: 3D viewing won't trigger epilepsy Author: Luisetta Mudie Article Id: 20528 Date Added: 6th Dec 2011
<urn:uuid:bfdecf75-88d3-4724-b8f9-ffde3ca4d5ad>
{ "dump": "CC-MAIN-2017-13", "url": "http://www.healthcare-today.co.uk/news/3d-viewing-wont-trigger-epilepsy/20528/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218203536.73/warc/CC-MAIN-20170322213003-00485-ip-10-233-31-227.ec2.internal.warc.gz", "language": "en", "language_score": 0.9678449034690857, "token_count": 593, "score": 2.671875, "int_score": 3 }
At the enormous Plymouth Aquarium, located in the centre for Marine Biology in Britain, amongst the more mundane fish of the Plymouth Ocean exhibit there lies a mighty monster of a fish, a huge Conger Eel. This colossal tail is only half of the eel’s actual size, and while it is hard to see in perspective, the largest of these species can reach lengths of up to two metres and 100kg. This European Conger, or Conger Conger in Latin, are therefore the largest type of eel in the world, although the moray eels are nearly as long. Normally grey in colour with a white underbelly and darker snouts, the females will tend to be far larger than the males, often up to half a metre longer. They live across the east Atlantic from Scandinavia to North Africa and will also live in most of the Mediterranean ocean, where they are commonly seen in the shallows. They can live up to enormous depths such as 1000m but are also seen making their pits in shallower waters, such as the Plymouth Harbour. Like the eel seen above, Congers live much in the same way as any other eel, nesting in eel ‘pits’ in crevasses in the rocks, often with groups of other eels. It has even been known for morays and congers to share the same pit. Being mainly nocturnal, they well stay in their pit for most of the day, coming out at night to hunt and scavenge. Congers will eat large fish, crabs, lobsters and octopuses that they catch, as well as eating any decaying carcasses that they may find. They have sharp teeth to grab their food with, but are not considered especially aggressive, despite their intimidating size. During the breeding season, the European Conger’s body goes through a massive change, the skin becoming softer and the teeth dropping out of the mouth. Then, a huge migration begins, taking the eels out of cold European waters and into the sub-tropics of the Atlantic, such as the famous Sargasso Sea. The Sargasso is a large gyre by the Gulf of Mexico which is known for its role in the lives of European and American Eels. The sea is sometimes said to be thick with eels, almost as though you could walk out onto them. While this is somewhat an exaggeration, it is certainly true that a huge number of eels exist at one time in that ocean, with each female Conger producing several million larvae. Large and astounding, and yet so close to home, I hope that I can see some of these eels in the Atlantic and Mediterranean in the future. It was a shame that the eel at Plymouth Aquarium didn’t feel inclined to show themselves while I was there, but then again, it was during the day. They were probably quite sleepy.
<urn:uuid:5d1d5f69-7f72-4314-a1c5-863d105e6ebd>
{ "dump": "CC-MAIN-2021-21", "url": "https://benology.me/2015/07/29/conger-eel/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990929.24/warc/CC-MAIN-20210512131604-20210512161604-00387.warc.gz", "language": "en", "language_score": 0.978578507900238, "token_count": 597, "score": 3.515625, "int_score": 4 }
Cosmic Disclosure Season 3 - Episode 1: Inner Earth - Summary and Analysis | Corey Goode and David Wilcock The notion of a hollow or inner Earth has captivated imaginations throughout history. Considering that existing societies only occupy the surface, and there is a vast amount of caverns already well documented, it stands to reason the Earth could be filled with hollow cavities capable of supporting life. Corey Goode clarifies that the secret space program documented what is described as a honeycomb Earth. Unlike many popular hollow Earth theories, depicting a balloon-like planet with a central sun and inner surface, Goode describes a porous makeup, wherein large caverns are formed as a result of geophysical development. Much like how the oceans have been largely unexplored, these inner regions of the Earth are home to entire ecosystems unseen by modern humans. However, within the secret space program, and related groups, there have been expeditions into these areas of our planet. The primary mission of these expeditions was to recover Ancient Builder race technology, left presumably eons ago. Continue Reading at ...... http://sitsshow.blogspot.com/2016/01/cosmic-disclosure.html
<urn:uuid:54065321-6dfd-4f49-807c-2430284453f8>
{ "dump": "CC-MAIN-2017-17", "url": "http://www.ascensionwithearth.com/2016/01/cosmic-disclosure-season-3-episode-1.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118477.15/warc/CC-MAIN-20170423031158-00120-ip-10-145-167-34.ec2.internal.warc.gz", "language": "en", "language_score": 0.9283977150917053, "token_count": 243, "score": 2.6875, "int_score": 3 }
This is a comprehensive tutorial for researching the 1870 U.S. federal census. You will be introduced to what I have used and shared with thousands to successfully find generations of family. Begin by learning how to use the census as a foundation to effective research, identify, map, and follow family through generations. The tutorial will expand your knowledge and skills of how to conduct an exhaustive search to find genealogical and Family History records, repositories, resolve research problems and connect with resources researching similar lines.
The tutorial is divided into the following sections:
- 1790-1940 U.S. federal census resources
- Introduction to 1870 U.S. federal census
- How to effectively use the 1870 U.S. federal census
- Search the 1870 census schedules
- Expand your census research with military records
- Defining the U.S. federal census
- How to use the 1870 U.S. federal census
- Questions asked on the 1870 census
- Download 1870 U.S. census research aids. Download and print the following resources to aid your census research.
- U.S. census learning aids. Throughout the 1870 U.S. federal census tutorial find links to resources that I have specifically prepared to help you.
In addition, I have written and assembled 190+ articles and resource aids to provide you a more in-depth understanding of the census research process. I have tried to cover every possible question and angle that you are likely to face in your U.S. census research. I would encourage you to use the resources often. The category headings are as follows:
- 190+ U.S. federal census articles and resource aids
- U.S. federal census tutorials
- Census and genealogy forms
- Census research skills
- Follow ancestors through the census
- Researching names in the census
- Defining ancestor age
- Expanding census research to other resources
- Expand your census research with military records
- Census research best practices
190+ U.S. federal census articles and resource aids
Number of persons included in the 1870 census: 39,818,449 people were enumerated in the United States.
1870 census day: June 1, 1870
1870 census duration: 5 months
1870 census geography:
- States and territories enumerated: 37 states and nine territories were included in the census.
- New states: The newest states included in the 1870 census were West Virginia, Nebraska, Kansas, and Nevada.
- Territories included: Kansas, Nebraska, New Mexico, Colorado, Idaho, Arizona, Utah, Wyoming, Montana, Washington, Dakota and Indian.
- The available states include: Alabama, Arkansas, California, Connecticut, Delaware, District of Columbia, Florida, Georgia, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Nebraska, Nevada, New Hampshire, New Jersey, New York, North Carolina, Ohio, Oregon, Pennsylvania, Rhode Island, South Carolina, Tennessee, Texas, Vermont, Virginia, West Virginia, Wisconsin.
- The missing states: All census records survived.
A few important facts about the 1870 census include:
- President during census. Ulysses S. Grant was the president.
- Three copies of the census. Enumerators were to make two extra copies of the original census, so that schedules went to: 1) the county clerk, 2) the state/territory, and 3) the Census Office. This simply means that you will be looking at either the original or a copy of the schedules.
- Five schedules. Five schedules were prepared for the 1870 census. They included:
- Schedule 1: General Population
- Schedule 2: Mortality
- Schedule 3: Agriculture
- Schedule 4: Products of Industry
- Schedule 5: Social Statistics
- Emancipation Proclamation. President Abraham Lincoln's Emancipation Proclamation declared slaves in the Confederate states free as of Jan. 1, 1863, and by the end of the war slavery had been abolished. This is the first census where those nameless persons on the slave schedules are free and listed by name and age. The war had liberated nearly four million slaves and at the same time created the challenge of establishing a new social order based on freedom and racial equality.
- The Freedmen's Bureau. Established in 1865, the Freedmen's Bureau (the Bureau of Refugees, Freedmen, and Abandoned Lands) provided assistance to thousands of former slaves and impoverished whites, mostly in the Southern states and the District of Columbia. The records left by the bureau between 1865 and 1872 are the most extensive source for investigating the African American experience in the post-Civil War era. Among the records you will find, for example, the names, ages, and former occupations of freedmen and the names and residences of former owners, as well as marriage registers that provide the names, addresses, ages, and complexions of husbands and wives and their children. There are three sets of records for you to search: 1) Commissioner's records, 2) Superintendent of Education records, 3) Field office records, which are the most valuable for genealogy. To find records, do a Google search on "The Freedmen's Bureau Records."
- See the video tutorial series: 25+ Introduction to African-American Genealogy Tutorials
Learn how to use the census to effectively find generations of family. Use the information and clues provided to build out your family tree and expand your research. I have provided a comprehensive review of each question that includes research insights, tips and tricks, and must-know information to aid your research.
- Location and Dwelling number (Col. 1-2). We are provided with the city/town/village/borough, county, and state where the family resides. This can help in defining geographic areas to search for family and records.
- Counted in order. The dwelling number reflects the order in which houses were counted.
- Societies. Search for a historical and/or genealogical society in the county to learn about the community and the records created at the time your family lived in the area, to connect with other genealogists who are researching the same surname, and to learn about groups (e.g., churches) to which your family belonged.
- Modern-day repositories. Use the location to identify modern-day record repositories near the place your family lived (e.g., historical societies, genealogical societies, libraries, archives, courthouses).
- Family Number (Col. 3). This was the actual number of families counted.
- Order of household visited. The census is recorded in the order in which households were visited. Take special note of the dwelling number versus the family number. For example, you could have dwelling 1 and dwelling 2, but within dwelling 2 you could have families 2, 3, and 4.
This could be an apartment building or several families living in the same home. If you find people living in the same building, ask how they might be related.
- Circle of influence. You can begin to build the circle of influence for your ancestors by seeing who the neighbors were.
- Search the neighbors. Often neighbors move with neighbors. Are they the same family? Members of the same congregation? Friends? If you can't follow or find the family in the census, see if you can follow the neighbors. Are the given names similar among the neighbors and your family? Similar names run in families, and this might be a clue that they are more than just neighbors. It has been my experience that neighbors, even when they don't share the same name, are often related. Look for neighbors who are the wife's parents, a sister of the husband, siblings of the wife, aunts and uncles, and so forth. When I couldn't find my ancestors in a location, I have searched on the names of known neighbors to find my family. Make sure you include the names of neighbors in your family profile.
- Composition of the family (Col. 4). Provides the members of the household by name.
- As of June 1, 1870. List the persons who lived in the home as of June 1, 1870.
- Individual names. Individual names are given for those in the household.
- Important enumerator instructions. The enumerator was given the following instructions: "The names are to be written, beginning with the father and mother; or if either, or both, be dead, begin with some other ostensible head of the family; to be followed, as far as practicable, with the name of the oldest child residing at home, then the next oldest, and so on to the youngest, then the other inmates, lodgers and borders, laborers, domestics, and servants."
- Relationships not provided. You will need to use other records to help make associations. Do not make assumptions about the relationships.
- Death or birth after June 1, 1870. Individuals who died before June 1, 1870, and children born after June 1, 1870, were not included.
- First census after Civil War. This is the first census after the U.S. Civil War, which helps define who lived and who survived the war.
- New family scenarios. Because of the extensive destruction of the war and the migration of people, you will find families scattered and redefined. For example, I have seen families that 1) have not changed and are living in the same place, 2) are a mix of extended family (e.g., grandma, grandpa, wife, and children) because the husband died, 3) are a mix of friends and neighbors helping each other (two widows with children living together), 4) are new families formed by single soldiers marrying, and 5) are remarriages where a single mother marries and combines her family with another man's.
- First census after the migration of the 1860s. Special notes about families during the 1860s.
- Migration. Migration during the war resulted from people moving to live with family or relatives because it was safer and far from the front lines, or because there was no male to farm the land or the farms were destroyed; some even followed their husbands from war zone to war zone.
- Fresh start. You will also see many surviving soldiers wanting a fresh start. Keep in mind that during the war the soldiers, many of whom had never left the county in which they lived, had chances to see new states and were willing to make a new life in the places they had seen. So don't be surprised to see your family in places you have never seen them before.
- Daughters. It was common for families to send their daughters to live with other family members, while married women tried to manage the farms.
- Moving in with family. If a woman lost her husband during the war, it was common for her to move back home with her father and mother.
- Missing male. If a male is present in the 1860 census and not in the 1870 census, it may be a clue that he was a casualty of the Civil War. See the category "Search military records," and click on the article "Civil War 1861-1865" to learn how to research and find records available for the war.
- Remarriages. Because many spouses died during the 1860s, you will want to be on the lookout for remarriages between the 1860 and 1870 censuses. You might pick this up in a state census.
- Search all lines. In the 1870 census I have made it a practice to search all direct and related family lines (e.g., siblings, aunts/uncles, friends, and neighbors) to reconnect families that were separated by the war.
- Searching lost families. Important clues for searching lost families that you can't find in the 1870 census:
- See where the individual was born. Look at the 1860 census to see where the family members were born. This is a good place to start your search, since many families (mother and children) moved back home to be with mom and dad or grandfather and grandmother.
- Search female name and age. Search on the female's name and age rather than the known husband's first and last name.
- Remarriage. Remember, if her husband died and the woman remarried, she will have a different last name. Start your search in the county where they resided prior to the war. Search every line of the census in that area.
- Search for guardianship records. Make sure you also search court records for guardianship papers. If a father and/or husband was killed and the woman didn't remarry, there would most likely be guardianship papers filed, which can include notes on remarriages and moves.
- Search for neighbors you saw in the 1860 census. It is rare that I have not found the same neighbors present from one census to the next.
- Not included. I have had the chance to speak with several genealogists who focus on Southern states research, and they have shared with me that many persons were omitted simply because they were on the move during the migration.
- Courthouses burned. Many courthouses were burned during the Civil War, losing many records forever. This makes the census records even more valuable for this time period. A word of caution: if you hear that the courthouse where your ancestor lived was burned, you still need to check to see if records survive. On two occasions I have found that my ancestor's records were among the few that were saved.
- African American research. This is the first census where those nameless persons on the slave schedules are free and listed by name and age. The war had liberated nearly four million slaves and at the same time created the challenge of establishing a new social order based on freedom and racial equality.
- See the video series: 25+ Introduction to African-American Genealogy Tutorials
- Courthouse documents. Make it a practice to extensively search every available document in the courthouse, from purchases and transfers of slaves to wills and so forth, to help reconstruct the family unit. I have seen "carpetbaggers," persons from the Northern states who moved to the South to take advantage of the instability there.
Many of these Northerners went out of their way to help freed slaves register their real names and record land deeds.
- Finding African American families when they changed surnames. Between 1865 and 1875, I have found that it was common for African American families to choose a different surname. If you suspect this happened to your family, try searching on first names and ages to locate the family, or search on the neighbors' surnames.
- Check the Freedmen's Bureau. Established in 1865, the Freedmen's Bureau (the Bureau of Refugees, Freedmen, and Abandoned Lands) provided assistance to thousands of former slaves and impoverished whites, mostly in the Southern states and the District of Columbia. The records left by the bureau between 1865 and 1872 are the most extensive source for investigating the African American experience in the post-Civil War era. Among the records you will find, for example, the names, ages, and former occupations of freedmen and the names and residences of former owners, as well as marriage registers that provide the names, addresses, ages, and complexions of husbands and wives and their children. There are three sets of records for you to search: 1) Commissioner's records, 2) Superintendent of Education records, 3) Field office records, which are the most valuable for genealogy. To learn more about how to find and research these records, do a Google search on "The Freedmen's Bureau Records."
- Follow family through the census. Make it a priority to follow your family through the censuses taken during their lifetimes (e.g., federal, state, territorial, and local censuses) as well as the census schedules, if they exist (e.g., population, agriculture, manufacturing, social statistics, crime, mortality, veterans, slave). The following articles provide a detailed example of following a family through the census. See the articles:
- 1930 U.S. Census example, John I. Stewart 1860-1950
- 1920 U.S. Census example, John I. Stewart 1850-1950
- 1910 U.S. Census example, John I. Stewart 1850-1930
- 1900 U.S. Census example, John I. Stewart 1850-1930
- 1880 U.S. Census example, John I. Stewart 1850-1930
- 1870 U.S. Census example, John I. Stewart 1850-1930
- 1860 U.S. Census example, John I. Stewart 1850-1930
- 1850 U.S. Census example, John I. Stewart 1850-1930
- Searching the 1860 census. If you are having a hard time finding your family in the 1860 census, remember that the family you see in 1870 had just experienced the destruction of the Civil War. Many families were rebuilt through remarriages, combining of families, moving, and so forth. Try searching for the neighbors you see in the 1870 census. It is rare that I haven't found the same people moving or living alongside my family when they picked up and moved hundreds of miles. Also make sure you check other records from the time period, such as court, land, will, and probate records.
- Check the original census. Always seek out the images of the original census to compare against the transcription.
- Search the same surname. Look closely at persons with the same surname. Could they be relatives? Does the individual show up as a child in an earlier census? Search other records, such as deeds and wills, to see if the person shows up. Could the individuals be in-laws? Check the county's marriages for husbands of sisters, aunts, and mothers.
- Research all persons in the household. It will not be uncommon to find individuals living in the same household who have different surnames (last names).
As a practice, research all persons living in the household with your ancestors or in the homes of siblings. There is usually a family connection. I have found it important to search for the surname several pages before and after the page where you find your family. This can also help in suggesting relationships between neighbors. Look for added clues such as given first names, occupations, and places of origin. When I contact genealogical/historical societies, I have often sought out the genealogists who are researching these surnames to compare research.
- Extract all with the same surname. Make it a practice to extract all the persons with the same surname living in the same county. Are they family? They could be a family connection or a related connection, such as sharing the place they came from. If your family lives near a state or county border, go ahead and extract the persons with the same surnames from neighboring counties. I have usually found important clues and connections among those with the same surname that have enhanced my research.
- Age (Col. 5). This is not an exact date of birth, but it provides a "ballpark" number that you can use to help track the person in the next census and to search for birth records of the time period. (A short worked sketch of this age-to-birth-year arithmetic appears at the end of this tutorial.)
- Children under the age of 1. Children under the age of 1 were recorded in months as fractions, such as 1/12.
- Search other records. Few states during this period had vital records, but there is a good chance that you may be able to look for church records. Start your search for these types of records at the genealogical/historical society.
- Age gaps. Look at the age gaps between children. Is the age gap normal, for example, every two years? Are the age gaps larger than expected? This could be a clue that there was another child or a spouse who had passed away. Look at the ages of the husband and wife. Are they about the same age? Is one spouse much older than the other? If yes, this could be a clue that there was a second marriage. Look at the ages of the children and their places of birth. This might provide clues about where the parents were married or from where the family migrated.
- Color (Col. 7). In this census there are more indications of color: White (W), Indian (I) for American Indian, Black (B), Chinese (C), which included all east Asians, and Mulatto (M). This information may be helpful in determining a person's origins.
- Occupation (Col. 8). This indicates the person's occupation, and the related information can help you search for employment records. Look carefully at the person's occupation/trade and define what types of records might exist. I had an ancestor who was a merchant, which led me to look for a business license, business/professional directories, ads for his business in the newspaper, and related documents, all of which I found. Another genealogist had an ancestor who was a member of the clergy, which led them to search for and find church records. If the person was a farmer, make sure you look at the Agriculture schedule for more information about the family.
- Value of real estate (Col. 9-10). This will help identify records you can locate at the county recorder's office or equivalent agency for deeds, mortgages, and property tax records.
- Value of personal estate. Take note of the "value of personal estate" question. There is evidence that when this question was asked, people may have hesitated to provide an exact answer for fear that they would be taxed based on it.
- Courthouse records.
If your ancestor lived in the South in 1870, make sure you check court records carefully. Many court battles took place to reclaim land that was confiscated during the war. There is a good chance you will find where the family resided at the time if they moved away and were fighting to get their property back.
- Place of birth (Col. 11). If the person was born in the United States, the enumerator was to enter the state where they were born. If they were born outside the United States, the enumerator would enter the native country. If the person was born within the state where they were being enumerated, the census taker might include the county or township. For the specific country of Germany, the enumerator was to be more specific: Baden, Prussia, Bavaria, Wurttemberg, Hessen-Darmstadt.
- Narrow your search. Use this information to narrow your search for records to a geographic area, even a town. It is also very helpful for clues to immigration and/or migration.
- Foreign-born Parents (Col. 12-13). This is the first census to ask if the parents were foreign-born. The mark (y) means yes, they were foreign-born. Even though we are not given the actual birthplace, we do have a clue that they were immigrants. Other records to check would include ships' passenger lists, immigration lists, and so forth. Also be on the lookout for naturalization records.
- Marital Status (Col. 15). This denotes whether the individual was married within the year (i.e., June 2, 1869 to June 1, 1870). It provides clues for finding marriage records of the time period. Because the person could have been married at any time during that year, make it a practice to search marriage records for both 1869 and 1870.
- Education (Col. 16). This identifies whether the person had gone to school within the year (i.e., June 2, 1869 to June 1, 1870). This provides clues to look for school records that can associate children with parents. Look for records such as a school census.
- Read and write (Col. 17). Use this information to confirm that you have the right person when searching other records. For example, suppose you are searching the wills of individuals with the same name as your ancestor. The census record said that your ancestor could read and write. You find the wills of two persons with the same name. One marked his will with an X; the other signed his name on the will. The person using the X most likely couldn't read or write. Since you are looking for a person who could read and write, the X should raise caution flags that this may not be the person you are looking for.
- Whether deaf & dumb, blind, insane, idiotic, pauper or convict (Col. 18). Do not overlook this category. "Insane" could lead to institutional and/or guardianship records; "convict" could lead to court and/or jail records.
- Male Citizenship over 21 (Col. 19). This category, asked only of men, denotes whether the person had his right to vote denied, or didn't know if he had the right because he had never voted.
- Reasons for denial. Usually a person was denied the vote due to insanity, mental defect, etc. In the South a person could no longer be denied the vote based on race. However, some states were establishing laws that could deny a person the right to vote based on the ability to pass a literacy test, and many of the former slaves were not able to read or write.
- Naturalized by 1870. If the person was a foreign-born citizen, then they had become a naturalized citizen by 1870. This may lead to finding naturalization papers.
- See the articles:
- 48 detailed profiles of immigrating peoples to North America
- 40+ Genealogy Tutorials for Immigration and Migration Research
- 30+ Records and resources genealogists use to find immigrant ancestors
- Certificates of Naturalization and where to find Immigration records
- Using Federal Census records in researching immigrant ancestors
- Male Citizenship over 21 with right to vote (Col. 20). This question, asked only of men, is the first census question about whether the person has the right to vote. Use this clue to research other records such as voter rolls, deed records, and so forth.
The 1870 census included the population schedule and several other schedules, usually taken at the same time. There are resources online and in print that provide more detail on these schedules and how to use them in genealogy research. I always suggest that you check these schedules. They include:
- Industry/Manufacturing Schedule. Provides information on businesses and industries for the year (i.e., June 2, 1869 to June 1, 1870). Household-based manufacturing was not included. The information collected focused on the products of industry such as mining, fisheries, mercantile, commercial, and trading businesses. The census taker included the name of the company/owner, kind of business, amount invested, quantity and value of materials, labor, machinery, and products. These schedules are valuable because they may document businessmen and merchants who do not appear in the land records.
- Mortality Schedule. Provides information about persons who died during the twelve months prior to the census (i.e., June 2, 1869 to June 1, 1870). It collected the following information: name, age, sex, color, place of birth, marital status, profession/occupation/trade, month of death, disease or cause of death, number of days ill, and remarks. In 1870 a place for the parents' birthplaces was added. In 1880, the place where the disease was contracted and how long the deceased had been a citizen or resident were added, and ages under one year were recorded as fractions (e.g., 1/12). Use the information to research other records such as obituaries, mortuary records, cemeteries, and probate records.
- Agricultural Schedule. Provides data on farms and the names of the farmers for the year (i.e., June 2, 1869 to June 1, 1870). Farm information focused on agricultural production. In 1870 and 1880, farms of less than three acres or which produced less than $500 worth of products were not included. Use the information to
- Fill in gaps where land and tax records are missing
- Distinguish between individuals with the same surnames
- Document land ownership and search related records such as deeds, mortgages, tax rolls, and probate inventories.
- Verify and document sharecroppers (e.g., African American) and their overseers not listed in any other records.
- Identify free men of color and their property holdings.
- Trace migration and economic growth.
- Social Statistics Schedule. Includes information about the following topics: valuation of real estate; annual taxes; colleges, academies, and schools; seasons and crops; libraries; newspapers and periodicals; religion; pauperism; crime; and wages. These schedules are valuable because they may document businessmen and merchants who do not appear in the land records. For example,
- Cemeteries. You will have a listing of the cemeteries (i.e., names, addresses, descriptions, procedures for interment) within the city boundaries, along with maps pinpointing their locations.
You will also find lists of cemeteries that are no longer open, and why.
- Trade societies, lodges, and clubs. You will find their names, addresses, and officers.
- Churches. You will find a brief history, an overview of doctrine and policies, and a statistical list of members.
- See the article: Census Records—There is more than population schedules
Even though there is no information in the 1870 census that identifies veterans of war, there were still men living who had served in one or more military wars and conflicts. The records available for these men vary but can yield important clues and knowledge about the veteran and his family.
- Pension applications. Search for pension applications and records of pension payments for veterans, their widows, and other heirs. The pension applications usually provide the most information and can include supporting documents such as marriage, birth, and death records/certificates, pages from family Bibles, family letters, depositions of witnesses, affidavits, discharge papers, and other supporting documents. Even if your ancestor did not receive a pension, look to see if his pension request was denied.
- Bounty lands. Bounty land applications are also related to wartime service. The federal government provided bounty land for those who served in the Revolutionary War, the War of 1812, the Mexican War, and Indian wars between 1775 and 1855. Bounty lands were offered as an incentive to serve and as a reward for service. Bounty land was claimed by veterans or their heirs.
Search for these military records:
Early Indian Wars 1815-1858. Look for military records of men who served in the early Indian Wars and who are between the ages of 35 and 90+ in the 1870 census. These men would have been born before 1835. See the article:
Mexican War 1846-1848. Look for military records of men who served in the Mexican War and who are between the ages of 37 and 85+ in the 1870 census. These men would have been born before 1832. See the article:
Civil War 1861-1865. Look for military records of men who served in the U.S. Civil War and who are between the ages of 20 and 70 in the 1870 census. These men would have been born in 1850 and earlier. Keep in mind that many young men lied about their age and served with their fathers, brothers, or other family members. Do not assume that because your ancestor lived in a Union or Confederate state he served in that side's army: many men who lived in the Union states served in the Confederacy, and many men from the South served in the Union Army. Make sure that you search for all male members of the family (i.e., father, sons, brothers, uncles, and nephews). The Civil War enlistment card will give you clues to your ancestor's location and place of residence. See the articles:
- Civil War 1861-1865, Researching and finding military records
- U.S. Civil War 1861-1865—Search the cemetery for information
- U.S. Civil War 1861-1865, Develop a search profile for military records
- U.S. Civil War 1861-1865, Find records on the internet
Researching military headstones. Military headstones have evolved through time. See the following articles for details:
- Anatomy of a military headstone
- Symbolism on U.S. military headstones
- Emblems of belief on U.S. military headstones
A census is a government-sponsored enumeration of the population in a particular area and contains a variety of information — names, heads of household (or all household members), ages, citizenship status, ethnic background, and so on. Here are some different types of census records you are likely to come across in your research. The U.S. federal census is also called a population schedule. Federal census records provide the building blocks of your research, allowing you both to confirm information and to learn more. Compiled in the United States for every decade since 1790, census population schedules are comprehensive, detailed records of the federal government's decennial survey of American households. Information from the schedules is used by the federal government for demographic analysis. The schedules themselves, of interest primarily to genealogists, contain the personal information of the survey respondents. To protect the privacy of the people whose names appear in each schedule, census records are restricted for 72 years after the census is taken and are not available to researchers during that time.
Use the 1870 census to:
- Identify members of the household by name
- Identify ages of individuals by name
- Begin to establish family relationships (e.g., spouse, children, siblings, parents)
- Identify who is missing (perhaps a Civil War casualty)
- Identify people of color: White (W), Indian (I) American Indian, Black (B), Chinese (C) included all east Asians, Mulatto (M)
- Build a first family scenario for freedmen of color
- Begin to identify possible remarriages and step relationships
- Identify parents of foreign birth
- Locate and identify birthplaces
- Identify occupations
- Locate and identify real estate
- Find information in the various schedules, which include: population, agriculture, industry, and mortality
- Locate and identify family who are neighbors
- Identify spelling variations
- Locate and identify family in other census substitute records (e.g., probate inventories, tax lists)
- Locate and identify children not yet known
- Locate and identify possible parents
- Locate and identify possible children not listed in later censuses
- Differentiate between families of the same name
- Locate and identify possible neighbors who might be family
Questions asked on the 1870 census:
Col. 1: Line No. on Page
Col. 2: Dwelling house No.
Col. 3: Family No.
Col. 4: Name of every person whose usual place of abode on the first day of June, 1870, was in this family
Col. 5: Age last birthday
Col. 6: Sex
Col. 7: Color
- White (W)
- Black (B)
- Mulatto (M)
- Chinese (C)
- Indian (I)
Col. 8: Profession, Occupation, or Trade of each person, male or female
Col. 9: Value of Real Estate owned
Col. 10: Value of Personal Estate
PLACE OF BIRTH
Col. 11: Place of birth
Col. 12: Father was Foreign born
Col. 13: Mother was Foreign born
ADDITIONAL PERSONAL DESCRIPTION
Col. 14: Month if born within census year
Col. 15: Month if married within census year
Col. 16: Attended School within the year
Col. 17: Cannot read/Cannot write
Col. 18: Whether deaf and dumb, blind, insane, idiotic
Col. 19: Male citizen 21 years & up
Col. 20: Male citizen 21 with right to vote
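To make the age arithmetic used throughout this tutorial concrete (for example, a man aged 35 to 90+ in the 1870 census would have been born about 1835 or earlier), here is a minimal worked sketch of the calculation. This is an illustration only: the Python language, the function names, and the sample values are my own choices for demonstration, not part of any genealogy software. It assumes the census day of June 1, 1870, whole-number ages reported as age at last birthday, and infant ages recorded as twelfths of a year, as described above.

# Estimate a birth-date window from an age reported in the 1870 census.
# Assumptions (mine, for illustration): census day is June 1, 1870; whole-number
# ages are "age at last birthday"; ages under 1 were recorded as twelfths (e.g., 3/12).
from datetime import date
from fractions import Fraction

CENSUS_DAY = date(1870, 6, 1)

def birth_year_range(reported_age):
    """Return (earliest, latest) possible birth year for a whole-number age."""
    # A person listed at this age had already had that birthday by June 1, 1870,
    # but not the next one, so the birth date falls between June 2 of the earlier
    # year and June 1 of the later year.
    latest = CENSUS_DAY.year - reported_age
    earliest = latest - 1
    return earliest, latest

def infant_birth_month(fraction_str):
    """Rough birth month and year for an infant age given as a fraction like '3/12'."""
    months_old = int(Fraction(fraction_str) * 12)
    year, month = CENSUS_DAY.year, CENSUS_DAY.month - months_old
    while month < 1:
        month += 12
        year -= 1
    return year, month

# A veteran aged 35 in the 1870 census was born in 1834 or 1835:
print(birth_year_range(35))        # -> (1834, 1835)
# A child listed as 3/12 was born around March 1870:
print(infant_birth_month("3/12"))  # -> (1870, 3)

Used this way, a reported age narrows a birth event to roughly a two-year window, which is usually enough to match the same person in the 1860 and 1880 censuses and to target birth and church records.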
<urn:uuid:9cccae14-1601-415b-920d-e72d79eaea5d>
{ "dump": "CC-MAIN-2017-30", "url": "http://genealogybybarry.com/1870-u-s-federal-census-tutorial/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424623.68/warc/CC-MAIN-20170723222438-20170724002438-00207.warc.gz", "language": "en", "language_score": 0.9613746404647827, "token_count": 7577, "score": 3.671875, "int_score": 4 }
The Temptation to Defy the Law Laundry detergent and bags of ice—products of industries that seem pretty mundane, maybe even boring. Hardly! Both have been the center of clandestine meetings and secret deals worthy of a spy novel. In France, between 1997 and 2004, the top four laundry detergent producers (Proctor & Gamble, Henkel, Unilever, and Colgate-Palmolive) controlled about 90 percent of the French soap market. Officials from the soap firms were meeting secretly, in out-of-the-way, small cafés around Paris. Their goals: Stamp out competition and set prices. Around the same time, the top five Midwest ice makers (Home City Ice, Lang Ice, Tinley Ice, Sisler’s Dairy, and Products of Ohio) had similar goals in mind when they secretly agreed to divide up the bagged ice market. If both groups could meet their goals, it would enable each to act as though they were a single firm—in essence, a monopoly—and enjoy monopoly-size profits. The problem? In many parts of the world, including the European Union and the United States, it is illegal for firms to divide markets and set prices collaboratively. These two cases provide examples of markets that are characterized neither as perfect competition nor monopoly. Instead, these firms are competing in market structures that lie between the extremes of monopoly and perfect competition. How do they behave? Why do they exist? We will revisit this case later, to find out what happened. In this chapter, you will learn about: - Monopolistic Competition Perfect competition and monopoly are at opposite ends of the competition spectrum. A perfectly competitive market has many firms selling identical products, who all act as price takers in the face of the competition. If you recall, price takers are firms that have no market power. They simply have to take the market price as given. Monopoly arises when a single firm sells a product for which there are no close substitutes. We consider Microsoft, for instance, as a monopoly because it dominates the operating systems market. What about the vast majority of real world firms and organizations that fall between these extremes, firms that we could describe as imperfectly competitive? What determines their behavior? They have more influence over the price they charge than perfectly competitive firms, but not as much as a monopoly. What will they do? One type of imperfectly competitive market is monopolistic competition. Monopolistically competitive markets feature a large number of competing firms, but the products that they sell are not identical. Consider, as an example, the Mall of America in Minnesota, the largest shopping mall in the United States. In 2010, the Mall of America had 24 stores that sold women’s “ready-to-wear” clothing (like Ann Taylor and Urban Outfitters), another 50 stores that sold clothing for both men and women (like Banana Republic, J. Crew, and Nordstrom’s), plus 14 more stores that sold women’s specialty clothing (like Motherhood Maternity and Victoria’s Secret). Most of the markets that consumers encounter at the retail level are monopolistically competitive. The other type of imperfectly competitive market is oligopoly. Oligopolistic markets are those which a small number of firms dominate. Commercial aircraft provides a good example: Boeing and Airbus each produce slightly less than 50% of the large commercial aircraft in the world. Another example is the U.S. soft drink industry, which Coca-Cola and Pepsi dominate. 
We characterize oligopolies by high barriers to entry with firms choosing output, pricing, and other decisions strategically based on the decisions of the other firms in the market. In this chapter, we first explore how monopolistically competitive firms will choose their profit-maximizing level of output. We will then discuss oligopolistic firms, which face two conflicting temptations: to collaborate as if they were a single monopoly, or to individually compete to gain profits by expanding output levels and cutting prices. Oligopolistic markets and firms can also take on elements of monopoly and of perfect competition.
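The market-share figures quoted above (four detergent makers holding about 90 percent of the French market, and Boeing and Airbus together accounting for most large commercial aircraft) can be summarized with a simple concentration measure. The short sketch below is only an illustration: it is written in Python, the numbers are hypothetical stand-ins rather than official market data, and the four-firm concentration ratio it computes is just one rough gauge of how far a market sits from perfect competition toward oligopoly.

# Four-firm concentration ratio (CR4): the combined market share of the four
# largest firms, a rough indicator of how concentrated a market is.
# The share numbers below are hypothetical, for illustration only.

def concentration_ratio(shares, n=4):
    """Sum the n largest market shares (shares given in percent of total sales)."""
    return sum(sorted(shares, reverse=True)[:n])

detergent_shares = [30, 25, 20, 15, 5, 3, 2]   # a made-up detergent market
aircraft_shares = [48, 47, 5]                  # a made-up large-aircraft market

print(concentration_ratio(detergent_shares))    # 90 -> four firms hold about 90 percent
print(concentration_ratio(aircraft_shares, 2))  # 95 -> two firms dominate: an oligopoly

A high concentration ratio does not by itself prove collusion, but it signals the kind of market structure in which the temptations described in this chapter arise.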
<urn:uuid:ec3ef579-76e8-4945-9c13-9c8703f5dcd4>
{ "dump": "CC-MAIN-2020-50", "url": "https://openstax.org/books/principles-microeconomics-ap-courses-2e/pages/10-introduction-to-monopolistic-competition-and-oligopoly", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188947.19/warc/CC-MAIN-20201126200910-20201126230910-00713.warc.gz", "language": "en", "language_score": 0.9572281837463379, "token_count": 848, "score": 2.921875, "int_score": 3 }
This is the second re-release in my Project Logicality series; it was posted in its original form in April of 2011. I've corrected and re-written it and reposted it here, hopefully clearer and more to the point. May all your arguments be rational and all your disputes be resolved. ~ Troythulu
We persuade others through our arguments, getting them to accept the statements and claims we make as likely true of their own free choice, justified on the basis of the reasons we give rather than proven absolutely. Argumentation contributes to healthy discussion and debate, letting those arguing find common ground and making a willingness to compromise easier. People argue daily, though seldom with skill, and in my view argumentation as a well-honed tool of a functional democratic republic is needed more than ever, given the increasing decay of social discourse, political polarization, and interpersonal conflicts that are ever more seen as irreconcilable.
In this post, I'll describe the basic assumptions and conditions that go into any attempt at constructive argument. Before I do, I'll note as before that good argument is intellectual in force, not coercive or deceptive. It is an ethical means of influencing others, limiting their freedom of action without imposing on their freedom of will.
First, argument is carried out under conditions of uncertainty: We generally don't argue about things we think certain, though that doesn't prevent us from talking about them. We argue about things because we think it important enough to convince others of them, and things may well turn out to be otherwise. If things were absolutely self-evident, they would be so to all, and there would be no need to convince anyone of them. These differences may be implied and apparent to an analyst, concealed in the context of an argument, or explicit and obvious to an audience. Bear in mind that even the concept of certainty can depend on the audience addressed and the assumptions they bring to the table as to what it means.
Second, argumentation must consider the needs of an audience. People argue about things that matter to them, attempting to resolve what they think are conflicting positions that cannot simply be settled by non-argumentative means such as appeals to common knowledge or widely shared empirical methods: things they consider non-trivial, matters important enough to need resolution. This is not to pander to their biases, or to say that one claim is just as good as any other; it's just that in being ethical, we must consider what is likely to persuade a given audience exercising its critical judgment on the merits of the arguments we give and the soundness of the justifications we offer for our claims. The audience is the final judge of whether an argument is strong or weak, justified or not, assenting to it if it is strong or justified, rejecting it if not.
Third, argumentation is both adversarial and cooperative. We make choices in arguments, choices in what arguments to select and how to arrange and present them, based upon the audience we are addressing. The adversarial components of argumentation help the rigor of the discussion; they help us avoid hasty generalizations; they reduce the omission of important details. Skilled arguers seek first to find common ground, which is itself the bedrock upon which they can meaningfully discuss their disagreement. Ultimately, these enhance our confidence in the outcome, a confidence pending better arguments to be made in the future.
Fourth, argumentation involves restrained partisanship. It requires a cooperative effort between arguer and audience, despite the contentiousness often associated with everyday argument. Arguers must share a common system of terms, assumptions, and meanings. This allows resolution of the dispute, and it is needed to permit any meaningful argument at all.
Fifth, and finally, argument involves elements of risk. This is the risk of losing the argument, the risk of being shown wrong, the risk of having to alter one's views and position, and in either case the emotional disruption of wounded self-esteem or losing face with others. But the cooperative aspect of argument means that in willingly accepting these risks, each arguer is respecting the rights and personhood of the other, and in so doing, claiming that same privilege of respect from the other for him or herself.
I think that these are good situational benchmarks, and that they are the optimal conditions, I would argue even necessary conditions, under which any serious attempt to argue constructively can be made, for the purpose of reaching the best possible conclusions given the means at hand.
I feel up to blogging this morning, and during this day and the next I'll be reading up on SF approaches to zero-point energy production for a friend of mine, which should be fun. *waves at @Ravenpenny* Especially important in looking into zero-point energy is avoiding any use of blatant pseudoscience from so-called "free energy" machine sellers… Rubber science is acceptable within the context of fiction; implausible technological quackery is NOT!
So far, I've got two reference pages out of five candidates open in separate browser tabs. The other three candidate pages are all crank sites, with obvious red flags. I won't sully my reputation, such as it is as a relative no-name in the skeptical community, by using those last as sources.
This raises a question… Out of the arguments of both proponents and critics of any claim, how do I decide which claimant is more credible? There is a set of steps I use that makes for a useful start to any inquiry, and I'll put these into three groups of related questions:
- First: Which side in a given controversy, genuine or manufactroversy, commits the fewest logical fallacies? Which side has the most valid or cogent arguments and makes the fewest errors in reasoning? Once these are compared and an answer obtained, I then choose the side with the best arguments and go to step two. Remember, though, to take care to see the fallacious arguments that are actually there, and not those that are the result of wishful seeing. And so…
- Secondly: Which side has the better factual support for its claims? Do the respective claims add up under adequate fact-checking using reliable sources? Do credible sources support or reject the claims made? Which sources have the better track record and reputation as valid and reliable? Next…
- Thirdly: Related to the second, but worth its own step: Which factual statements, when checked, even if and when true, are actually relevant to the claims and counterclaims made? Does the alleged factual support of a given claim actually have anything to do with it?
These three points are a basic rundown of the steps I use. Answering these questions on science and science-relevant news is one reason I tend to support climate scientists over so-called climate sceptics, and professional biologists over the various species of creationists found online and in religion and politics.
They are the reason that I tend to give more credence to the statements of astronomers than to those of astrologers, of physicists and psychologists more than psychic claimants, of chemists over alchemists, and of neuroscientists over phrenologists. These questions are the reasons I don't get my science from clergymen, religious apologists, allegedly fair and balanced media outlets, politicians, or radio talk-show propagandists. Those are not what I would call credible sources. I get my science from scientists, and from science writers with a real background in the field, thank you, not preachers, partisan bloggers, or people who loudly decry government and taxation while also running for public office so they can get paid a rather handsome salary, with kickbacks and bribes paid by lobbyists, otherwise funded by my taxes.
- Top 10 Fallacies of Internet Trolls (americanlivewire.com)
- Conservative media's attacks on climate science effectively erode viewers' belief in scientists (rawstory.com)
- 2013 SkS Weekly News Roundup #32A (skepticalscience.com)
- The Appeal to Authority (ethicalrealism.wordpress.com)
- The Prodigy Effect (ketyov.com)
- 5 Ways Right-Wing Media Make Their Fans Fear Science (alternet.org)
- Anti-science arguments: How do we respond? (newanthropocene.wordpress.com)
- Moving science communication beyond the standard argument (nrelscience.org)
This course from the Teaching Company, taught by Northwestern University professor David Zarefsky, has long been one of my favorites for home study, for when life situation, tuition, textbook, and travel expenses make college study cost-prohibitive. This set of twenty-four thirty-minute lectures, on four DVDs, is a good introduction to both the fundamentals and the finer points of argumentation, the use of reason to gain the willing adherence of an audience to whatever case you wish to argue. The point made in the very first lecture is that argumentation is not mere bickering and quarreling, and it is far better than a verbal fight: it is the noble art of negotiation and deliberation through the process of offering reasons, acceptable and sound ones, for the claims we make. This course, as Zarefsky tells you from the start, is not about winning more arguments with your spouse, convincing an atheist that God exists, nor about convincing a theist that there is no god. There's a selection of suggested textbooks for the course, though I've found the lectures do perfectly fine on their own with the study guide booklet that is included. For my own purposes, I've gotten some of the textbooks because of the usefulness of delving deeper into the subject matter, and I have taken written study notes from each lecture on the most important points of the lessons.
Some criticisms (otherwise I'd be a poor critic), though I'll keep them constructive: Zarefsky uses many examples and illustrations of the main points of each lecture, and most of these are helpful, though some are a bit overused, and a couple of times I had to improvise once I got his point by coming up with my own. In one lecture (#13, Reasoning from Parts to Whole) he uses hypothetical emails from Teaching Company customers to clarify a discussion of arguing from general to specific and from specific to general and how either can be inductive or deductive. Once I was sure I got it, I translated it into a discussion of generalizing and classifying about sand-worn stones found on a beach, used in an old post of mine (Here).
All in all, though, Professor Zarefsky is a top-notch instructor, and I would be very pleased to study under him in a classroom environment now that I'm used to his style. The course as a whole is extremely information-dense, and that's a good thing, though it's spaced out nicely in the format of the twenty-four lessons it's recorded in. I recommend having a pen and notebook or the digital equivalent handy while watching or listening to these — there's a lot to take in, even as spaced out as they are, and you may want to get the more subtle but vital points of each lesson as well as the well-illustrated ones. I recommend this course for anyone interested in developing their skill in rational deliberation and decision-making in a world where we are all too often divided and polarized in our positions, a world in which the climate of debate is poisoned by the forces of unreason and dogmatic bullheadedness.
<urn:uuid:2175738c-52dc-407a-8bde-6f8cfd52c7fd>
{ "dump": "CC-MAIN-2014-41", "url": "http://kestalusrealm.wordpress.com/tag/logic/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657136896.39/warc/CC-MAIN-20140914011216-00235-ip-10-234-18-248.ec2.internal.warc.gz", "language": "en", "language_score": 0.9431671500205994, "token_count": 2471, "score": 2.5625, "int_score": 3 }
A language still spoken by the descendants of Jews expelled from Spain 500 years ago is getting a helping hand from the Spanish Royal Academy to keep it alive. Ladino, a language taken abroad by Spanish Jews expelled from Spain in the late 15th century, uniquely preserves many elements of medieval Spanish, but some fear it is dying out. The Spanish Royal Academy has given it a lift by taking a first step toward creating a distinct academy for Ladino that will nurture the archaic language. “I feel this is a very important moment, a historic moment,” said Tamar Alexander-Frizer, president of the Autoridad Nasionala del Ladino i su Kultura (National Authority of Ladino and its Culture), established by Israel in 1996 to support and foster the language. Alexander-Frizer spoke Tuesday at the Madrid headquarters of the Spanish Royal Academy, where Ladino experts signed an agreement to set up a new institution that will become part of the 23-member Association of Spanish Language Academies. Sephardic Jews is the term commonly used for those who once lived in the Iberian peninsula. They fled to other countries in Europe, the Middle East, Africa and Latin America. The largest community is in Israel. Granting Ladino the distinction of its own academy and locking it into an international support network aims to secure its future. Only a few thousand people are thought to still speak the language fluently. At least 250,000 people in Israel are believed to have some knowledge of Ladino, according to Shmuel Refael Vivante of the National Authority of Ladino and its Culture. But outside Israel, the number is “a mystery,” he says. UNESCO, the U.N.’s educational, scientific and cultural agency, classifies Ladino as a language that is “severely endangered.” Jacobo Sefami, a Sephardi born in Mexico and now a professor at the University of California, Irvine, is pessimistic. “The truth is that no children are speaking it anymore and its progress toward extinction seems irreversible,” he wrote in an email to the Associated Press. Others, like Maria Cherro de Azar, a specialist at the Buenos Aires-based Center for the Research and Spread of Sephardic Culture, are less gloomy. “There has been talk of the language dying for more than 100 years,” she said by telephone. Ladino is still used by the Rhodes community in Seattle
<urn:uuid:15ee3db0-757a-442c-be05-e7536b4f266e>
{ "dump": "CC-MAIN-2023-14", "url": "https://www.theyeshivaworld.com/news/general/1474055/spain-helps-keep-alive-archaic-language-of-sephardic-jews.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948965.80/warc/CC-MAIN-20230329085436-20230329115436-00533.warc.gz", "language": "en", "language_score": 0.9575467109680176, "token_count": 523, "score": 3.375, "int_score": 3 }
In December 2015, OSU Associate Professor and local artist Amy Youngs borrowed specimens from the Tetrapod Collection for her art installation in a BioPresence exhibition at OSU. The word "STRIKE" was spelled out with 116 bird specimens from our collection to commemorate the bird deaths resulting from collisions with human-made structures that occur every year. Amy describes her motivation for the project:
"The project comes from my desire to see the world from the perspectives of other animals. As a human animal, I can never fully understand the experience of a bird, but as an artist I try to translate that effort in ways that speak to other humans and perhaps have some positive effect for birds. I began thinking about the window strike issue when I saw Angelika Nelson collecting a dead bird that had hit a window at the Heffner Building at the Olentangy Wetlands Research Park. I began asking questions about what birds see and don't see and what is known about preventing the problem of building collisions. I thought about how many of the dead birds in the collection of the Museum of Biological Diversity could attest to the tragedy of human-built structures. What if the birds went on strike? What if we saw our buildings like birds did? Perhaps we would learn to build in ways that would allow us to become better citizens of the ecosystem."
Collaborations between Art and Science like this one are an innovative way to raise awareness of environmental issues. In this case we focused attention on bird strikes. Artists and scientists can work towards creating unique ways to both increase building visibility for migrating birds and raise public awareness of the problem. Check out this project at Temple University in Philadelphia, PA for some inspiration. For now, we will keep using the bird collision study skins as outreach tools in education events on this pressing matter.
About the Author: Stephanie Malinich is Collection Manager of the OSU Tetrapod Collection.
<urn:uuid:b868b4f3-4cc4-4f83-b136-fd76fd896f73>
{ "dump": "CC-MAIN-2019-51", "url": "https://u.osu.edu/biomuseum/tag/bird-collision/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540503656.42/warc/CC-MAIN-20191207233943-20191208021943-00328.warc.gz", "language": "en", "language_score": 0.954071581363678, "token_count": 393, "score": 3.265625, "int_score": 3 }
Planetary systems which have several moons in the same orbit, with equal periods, are examples of an astronomical formation known as Klemperer Rosettes. The moons remain in fixed positions relative to each other, and balance out their respective gravitational influences on each other. This is much the same phenomenon described in the Ringworld books by Larry Niven as the home system of the Pierson's Puppeteers; he, however, calls the system a Kemplerer Rosette. It differs from the description above in that there is no planet at the center of the formation; the five homeworlds of the Puppeteers balance each other and orbit a centerpoint. The proper name is probably a Klemperer Rosette; the phenomenon was described by W.B. Klemperer in The Astronomical Journal, vol. 67, number 3 (April, 1962), on pages 162-7, "Some Properties of Rosette Configurations of Gravitating Bodies in Homographic Equilibrium."
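To see why the members of such a rosette can stay in equilibrium, note the symmetry: with equal masses spaced evenly around a circle, the sideways pulls on any one body cancel, and the net gravitational force points straight at the center, which is exactly what a circular orbit requires. The sketch below is a numerical check of that claim in Python; the unit masses, unit radius, and five bodies are my own illustrative choices (echoing the five Puppeteer homeworlds), not figures taken from Klemperer's paper.

# Numerical check that in an N-body rosette of equal masses evenly spaced on a
# circle, the net gravitational force on each body points toward the center.
import math

G, M, R, N = 1.0, 1.0, 1.0, 5   # unit gravitational constant, mass, radius; five bodies

positions = [(R * math.cos(2 * math.pi * k / N),
              R * math.sin(2 * math.pi * k / N)) for k in range(N)]

def net_force(i):
    """Vector sum of the gravitational pulls on body i from the other bodies."""
    fx = fy = 0.0
    xi, yi = positions[i]
    for j, (xj, yj) in enumerate(positions):
        if j == i:
            continue
        dx, dy = xj - xi, yj - yi
        r = math.hypot(dx, dy)
        f = G * M * M / r ** 2      # magnitude of the pull from body j
        fx += f * dx / r            # resolve along x
        fy += f * dy / r            # resolve along y
    return fx, fy

fx, fy = net_force(0)
# Body 0 sits at (R, 0): the tangential (y) component cancels by symmetry, and the
# radial (x) component is negative, i.e. directed toward the center of the rosette.
print(fx, fy)   # roughly (-1.38, 0.0) with these unit values

The same cancellation holds for every body in the ring, which is why the configuration can, in principle, orbit a common center with all members keeping station relative to one another.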
<urn:uuid:7d4b0e2e-6d75-483c-bbf9-c5c7e50a7d1e>
{ "dump": "CC-MAIN-2013-48", "url": "http://everything2.com/title/Klemperer+Rosette", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164974477/warc/CC-MAIN-20131204134934-00035-ip-10-33-133-15.ec2.internal.warc.gz", "language": "en", "language_score": 0.9012917876243591, "token_count": 213, "score": 3.328125, "int_score": 3 }
NASA's Ocean Currents Study Confirms Providential Care
by Brian Thomas, M.S. *
Virtually everybody knows that the world's oceans have currents. But few know who first discovered them, why they are important, or what can be gained by mapping them in greater detail. NASA's Aquarius satellite is collecting related data from the world's oceans, and a recent NASA video highlighted the vital importance of its currents.
Life on earth depends on the continuous movement of ocean water to mix nutrients both horizontally and vertically, otherwise ocean life—and, by extension, life elsewhere on earth—would not survive. The vast majority of the planet's breathable oxygen is generated by marine algae, which are fueled by this mixing.
Ocean currents have complicated causes. The video, while highlighting NASA's Aquarius satellite project, also explained that "at the ocean surface, currents are primarily driven by winds. Deep below the surface, however, currents are controlled by water density, which depends on the temperature and salinity of the water."1 Aquarius detects sea surface salinity data from the world's oceans—and not just from shipping lanes, from which most data have historically been obtained.2 Water becomes denser as it cools and as it becomes saltier, and as a result it sinks below less dense waters. The NASA video stated, "This globally interconnected process of overturning circulation occurs in all ocean basins and helps to regulate earth's climate."1
Thus, ocean currents feed vital organisms and help govern climate, including the distribution of vital rain. But these things were known in the 1800s. Matthew Maury, the father of modern oceanography, was the first to establish the causes and courses of marine currents. He rejected contemporary but lesser theories of what causes currents, including the idea that they are forced by river runoff. In 1855, Maury wrote, "Hence we say we know that the sea has its system of circulation, for it transports materials for the coral rock from one part of the world to another."3
Maury also recognized the importance of ocean circulation to earth's climate. Concerning animals that can build solid shells out of minuscule materials in solution, he wrote: "For to them, probably, has been allotted the important office of assisting in giving circulation to the ocean, of helping to regulate the climates of the earth, and of preserving the purity of the sea."4
Unlike NASA's Aquarius project, Maury rightly credited God, not nature, for establishing the ocean's currents. For example, he wrote that "reason assures us that they move in obedience to some law of Nature, be it recorded down in the depths below, never so far beyond the reach of human ken; and being a law of Nature, we know who gave it, and that neither chance nor accident had any thing to do with its enactment."5
His life's work was inspired by Psalm 8:8, which refers to "the paths of the seas."6 Thus, the ocean currents illustrated in the recent NASA video clearly testify to the paths of the seas with which Matthew Maury and the psalm's writer, King David, were acquainted. God still oversees His earth.
- Aquarius Ocean Circulation. NASA/Goddard Space Flight Center video. Posted on nasa.gov, accessed September 15, 2011.
- Overview: Mission Basics. Aquarius Sea Surface Salinity from Space fact sheet. NASA/Goddard Space Flight Center. Posted on aquarius.nasa.gov, accessed September 15, 2011.
- Maury, M. F. 1855. The Physical Geography of the Sea, 2nd ed. New York: Harper & Brothers, 153.
- Ibid, 165.
- Ibid, 124-125.
- Gish, D. 1991. Modern Scientific Discoveries Verify the Scriptures. Acts & Facts. 20 (9). [link: ] Image credit: NASA/Goddard Space Flight Center * Mr. Thomas is Science Writer at the Institute for Creation Research. Article posted on September 22, 2011.
<urn:uuid:c65a29ea-5b10-4a53-b968-0db305e94928>
{ "dump": "CC-MAIN-2015-32", "url": "http://www.icr.org/articles/view/6385/278/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986357.49/warc/CC-MAIN-20150728002306-00335-ip-10-236-191-2.ec2.internal.warc.gz", "language": "en", "language_score": 0.9411922097206116, "token_count": 839, "score": 3.84375, "int_score": 4 }
Since I was a young child Mars held a special fascination for me. It was so close and yet so faraway. I have never doubted that it once had advanced life and still has remnants of that life now. I am a dedicated member of the Mars Society,Norcal Mars Society National Space Society, Planetary Society, And the SETI Institute. I am a supporter of Explore Mars, Inc. I'm a great admirer of Elon Musk and SpaceX. I have a strong feeling that Space X will send a human to Mars first. Rocks Rich in Silica Present Puzzles for Mars Rover Team This May 22, 2015, view from the Mast Camera (Mastcam) in NASA's Curiosity Mars rover shows the "Marias Pass" area where a lower and older geological unit of mudstone -- the pale zone in the center of the image -- lies in contact with an overlying geological unit of sandstone. Credit: NASA/JPL-Caltech/MSSS › Full image and caption In detective stories, as the plot thickens, an unexpected clue often delivers more questions than answers. In this case, the scene is a mountain on Mars. The clue: the chemical compound silica. Lots of silica. The sleuths: a savvy band of Earthbound researchers whose agent on Mars is NASA's laser-flashing, one-armed mobile laboratory, Curiosity. NASA's Curiosity rover has found much higher concentrations of silica at some sites it has investigated in the past seven months than anywhere else it has visited since landing on Mars 40 months ago. Silica makes up nine-tenths of the composition of some of the rocks. It is a rock-forming chemical combining the elements silicon and oxygen, commonly seen on Earth as quartz, but also in many other minerals. "These high-silica compositions are a puzzle. You can boost the concentration of silica either by leaching away other ingredients while leaving the silica behind, or by bringing in silica from somewhere else," said Albert Yen, a Curiosity science team member at NASA's Jet Propulsion Laboratory, Pasadena, California. "Either of those processes involve water. If we can determine which happened, we'll learn more about other conditions in those ancient wet environments." Water that is acidic would tend to carry other ingredients away and leave silica behind. Alkaline or neutral water could bring in dissolved silica that would be deposited from the solution. Apart from presenting a puzzle about the history of the region where Curiosity is working, the recent findings on Mount Sharp have intriguing threads linked to what an earlier NASA rover, Spirit, found halfway around Mars. There, signs of sulfuric acidity were observed, but Curiosity's science team is still considering both scenarios -- and others -- to explain the findings on Mount Sharp. Adding to the puzzle, some silica at one rock Curiosity drilled, called "Buckskin," is in a mineral named tridymite, rare on Earth and never seen before on Mars. The usual origin of tridymite on Earth involves high temperatures in igneous or metamorphic rocks, but the finely layered sedimentary rocks examined by Curiosity have been interpreted as lakebed deposits. Furthermore, tridymite is found in volcanic deposits with high silica content. Rocks on Mars' surface generally have less silica, like basalts in Hawaii, though some silica-rich (silicic) rocks have been found by Mars rovers and orbiters. Magma, the molten source material of volcanoes, can evolve on Earth to become silicic. Tridymite found at Buckskin may be evidence for magmatic evolution on Mars. 
Curiosity has been studying geological layers of Mount Sharp, going uphill, since 2014, after two years of productive work on the plains surrounding the mountain. The mission delivered evidence in its first year that lakes in the area billions of years ago offered favorable conditions for life, if microbes ever lived on Mars. As Curiosity reaches successively younger layers up Mount Sharp's slopes, the mission is investigating how ancient environmental conditions evolved from lakes, rivers and deltas to the harsh aridity of today's Mars. Seven months ago, Curiosity approached "Marias Pass," where two geological layers are exposed in contact with each other. The rover's laser-firing instrument for examining compositions from a distance, Chemistry and Camera (ChemCam), detected bountiful silica in some targets the rover passed on its way to the contact zone. The rover's Dynamic Albedo of Neutrons instrument simultaneously detected that the rock composition was unique in this area. "The high silica was a surprise -- so interesting that we backtracked to investigate it with more of Curiosity's instruments," said Jens Frydenvang of Los Alamos National Laboratory in New Mexico and the University of Copenhagen, Denmark. Gathering clues about silica was a major emphasis in rover operations over a span of four months and a distance of about one-third of a mile (half a kilometer). The investigations included many more readings from ChemCam, plus elemental composition measurements by the Alpha Particle X-ray Spectrometer (APXS) on the rover's arm and mineral identification of rock-powder samples by the Chemistry and Mineralogy (CheMin) instrument inside the rover. Buckskin was the first of three rocks where drilled samples were collected during that period. The CheMin identification of tridymite prompted the team to look at possible explanations: "We could solve this by determining whether trydymite in the sediment comes from a volcanic source or has another origin," said Liz Rampe, of Aerodyne Industries at NASA's Johnson Space Center, Houston. "A lot of us are in our labs trying to see if there's a way to make tridymite without such a high temperature." Beyond Marias Pass, ChemCam and APXS found a pattern of high silica in pale zones along fractures in the bedrock, linking the silica enrichment there to alteration by fluids that flowed through the fractures and permeated into bedrock. CheMin analyzed drilled material from a target called "Big Sky" in bedrock away from a fracture and from a fracture-zone target called "Greenhorn." Greenhorn indeed has much more silica, but not any in the form of tridymite. Much of it is in the form of noncrystalline opal, which can form in many types of environments, including soils, sediments, hot spring deposits and acid-leached rocks. "What we're seeing on Mount Sharp is dramatically different from what we saw in the first two years of the mission," said Curiosity Project Scientist Ashwin Vasavada of JPL. "There's so much variability within relatively short distances. The silica is one indicator of how the chemistry changed. It's such a multifaceted and curious discovery, we're going to take a while figuring it out." For more about Curiosity, which is examining sand dunes this month, visit:
<urn:uuid:b09fc1ff-8ede-4683-aba3-53400939a899>
{ "dump": "CC-MAIN-2018-22", "url": "http://jacksmars.blogspot.com/2015/12/rocks-rich-in-silica-present-puzzles.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867220.74/warc/CC-MAIN-20180525215613-20180525235613-00249.warc.gz", "language": "en", "language_score": 0.9417611956596375, "token_count": 1438, "score": 2.515625, "int_score": 3 }
Where you live may be putting you at risk for foodborne illness, researcher finds Thursday, Aug. 28, 2014 MANHATTAN — Improving education about risky food handling behaviors would reduce the amount of foodborne illness and help improve food security around the world, according to Kansas State University research. For their study, the university's Kadri Koppel, assistant professor of human nutrition, and Edgar Chambers IV, university distinguished professor and director of the Sensory Analysis Center, worked with around 100 consumers from India, Korea, Thailand, Russia, Estonia, Italy, Spain and two cities in the United States. The consumers completed questionnaires about their purchase, storage, handling and preparation practices of poultry and eggs. It is one of the only studies to use the same questionnaire to collect data between different countries and is part of a larger project to develop science-based messages for consumers about food safety practices. The study produced the article "Eggs and Poultry: Purchase, Storage and Preparation Practices of Consumers in Selected Asian Countries," which was published in the journal Foods. "We really wanted to know how consumers in different countries are actually handling raw eggs and poultry because these products are the source of two main bacteria: salmonella and campylobacter,” Koppel said. "These bacteria lead to many cases of foodborne illness and we need a better understanding of food handling practices to find the risky behaviors that may lead to contamination." Food safety regulations vary by country. The research found that most consumers purchase their eggs from the supermarket, with the exception of Argentina, where consumers get their eggs from the regular open-air market. However, the way the eggs were stored at the supermarkets varied. While some countries kept the eggs refrigerated, most eggs in Thailand, India, Spain, Italy and Colombia were stored at room temperature. "When you think about the range of countries that we had and you compare the annual average temperatures in those countries, they can vary by about 50-degrees Fahrenheit — and that's a pretty big range," Koppel said. "A lot can happen to eggs if they're stored at room temperature in a country where the climate may be somewhat tropical." The researchers found the majority of consumers store their eggs in the refrigerator once they brought them home. Another similar finding was that the majority of consumers in these countries buy raw poultry and meats, but how they store those meats varies. Fifty percent or more of the consumers in Russia, India, Thailand, Colombia and the U.S. would freeze the meat right away, although these consumers often would improperly store the meat. "If you think about the typical refrigerator and the air movement within the fridge, warmer air typically rises higher," Koppel said. "If you put the meat in a place where the temperature is warmer, then it's more likely to spoil. Raw meats also may have juices that leak and there is a possibility that the juices may cross-contaminate ingredients on lower shelves." The safest place to store raw meat in the refrigerator is on the bottom shelf. The research found mixed results on this, with most of the consumers in Argentina and Colombia storing meat on higher shelves, putting them at a higher risk for contamination. The riskiest behavior was exhibited in preparing the eggs and poultry. 
About 90 percent of consumers in Colombia and 70 percent of consumers in India washed these products in the sink before preparation. In the U.S., about 40 percent did. "If you think about washing something in the sink, typically water splatters on the surface around the sink," Koppel said. "If you have some other ingredients near the sink that you're about to use for your meal, all that water splattering around the sink could cross-contaminate the other ingredients you are about to use." The researchers found consumers also need to improve their cutting board cleanliness. About 40 percent of Colombian consumers reported using the same cutting board for multiple ingredients without washing or wiping it down between each use. While most other consumers reported cleaning the board between ingredients, Koppel said that not all forms of cleaning are effective. "This may seem like a safe behavior, but it really depends on the wiping agent," Koppel said. "If you're using a kitchen towel, it may not remove a lot of the material that's come into contact with the cutting board. If you use the sponge that you use to wash dishes, research has shown that those sponges actually contain a lot of other bacteria and that may contaminate your other ingredients in addition to what's already on the cutting board." The safest practice is to use a different cutting board for different ingredients, she said.photo credit: JaBB via photopincc
<urn:uuid:5a196075-0ea8-4b9f-8dda-e9c6aa09ed40>
{ "dump": "CC-MAIN-2016-36", "url": "http://www.k-state.edu/media/newsreleases/aug14/worldfood82714.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982293922.13/warc/CC-MAIN-20160823195813-00273-ip-10-153-172-175.ec2.internal.warc.gz", "language": "en", "language_score": 0.9615572094917297, "token_count": 972, "score": 2.703125, "int_score": 3 }
Weather conditions in the St. Johns area can change swiftly, so monitor conditions closely throughout the day. If you see anyone spraying pesticides in today's conditions, please report the activity. Elm span outbreaks are often caused by low populations of T. Droozi, a small beneficial insect about the size of a fruit fly. It is a great predator of the elm span worm, killing the elm span in the egg stage before it ever hatches. T. Droozi is almost undetectable by humans and animals, but it is very sensitive to many pesticides and can be killed off quite easily; therefore the use of pesticides can worsen the elm span situation, because you are killing its best-known predator. Penn State University has used T. Droozi to destroy 80% of elm span worm in one season. Elm span outbreaks mostly occur year after year in urban areas where pesticides are used. The best fight against elm span would be for the entire community to band together and stop using pesticides, and perhaps work with local students and entomologists to bring the ecosystem back into balance by re-introducing the beneficial insects. For the time being, you may also want to scrape off any eggs that you can before they hatch. Buy a pressure washer, add some liquid soap such as Dr. Bronner's, and blast your trees. Disrupting the ecosystem with chemicals NEVER solves the problem; it only causes stronger outbreaks of more resistant insects.
Current provincial weather regulations:
-Wind must be between 2 and 15 km/hr for ground applications
-Winds must be between 2 and 10 km/hr for trees taller than 3 meters
-Air temperature must be below 25°C
-It must not be raining, nor is rain anticipated over the next 2-hour period
-The relative humidity must be above 50%
Where to report a violation
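Below is a minimal illustrative sketch, in Python, of how the provincial weather conditions listed above could be combined into a single check. The threshold values come from the list; the function name, parameter names and the example call are hypothetical and are not part of any official reporting tool.

```python
# Illustrative only: encodes the provincial spray-condition thresholds listed above.
# Function and parameter names are hypothetical, not part of any official system.

def spraying_allowed(wind_kmh: float, tree_taller_than_3m: bool,
                     temp_c: float, raining_or_expected: bool,
                     humidity_pct: float) -> bool:
    """Return True only if every listed weather condition is satisfied."""
    max_wind = 10 if tree_taller_than_3m else 15   # km/hr limit depends on tree height
    return (2 <= wind_kmh <= max_wind
            and temp_c < 25                        # air temperature below 25 °C
            and not raining_or_expected            # no rain now or expected in the next 2 hours
            and humidity_pct > 50)                 # relative humidity above 50%

# Example: a 12 km/hr wind on a tree taller than 3 m fails the 10 km/hr limit.
print(spraying_allowed(12, True, 22, False, 60))   # False
```

Any real decision should of course rely on the official regulations and on current local readings, not on this sketch.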
<urn:uuid:737b806e-c0a0-46d4-a079-33ed98063777>
{ "dump": "CC-MAIN-2018-09", "url": "http://sprayadvisory.blogspot.com/2011/07/no-go-for-spray-activities-today_05.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813602.12/warc/CC-MAIN-20180221083833-20180221103833-00361.warc.gz", "language": "en", "language_score": 0.9439048767089844, "token_count": 383, "score": 3.265625, "int_score": 3 }
Currently, less than 3% of the food that Americans eat is grown within 100 to 200 miles of where they live. And many people in poorer neighborhoods simply do not have ready access to affordable local produce. A fascinating new project, the Food Commons, aspires to radically change this reality. It seeks to reinvent the entire “value-chain” of food production and distribution through a series of regional experiments to invent local food economies as commons. By owning many elements of a local food system infrastructure – farms, distribution, retail and more – but operating them as a trust governed by stakeholders, the Food Commons believes it can be economically practical to build a new type of food system that is labor-friendly, ecologically responsible, hospitable to a variety of small enterprises, and able to grow high-quality food for local consumption. Food Commons explains its orientation to the world by quoting economist Herman Daly: “If economics is reconceived in the service of community, it will begin with a concern for agriculture and specifically for the production of food. This is because a healthy community will be a relatively self-sufficient one. A community’s complete dependency on outsiders for its mere survival weakens it….The most fundamental requirement for survival is food. Hence, how and where food is grown is foundational to an economics for community.” Food Commons is a nonprofit project that was officially begun in 2010 by Larry Yee and James Cochran. Yee is a former academic with the University of California Cooperative Extension who has been involved in sustainable agriculture for years. Cochran is the founder and president of Swanton Berry Farms, a mid-scale organic farming enterprise near Santa Cruz, California. In 2012, Larry Yee told me in a phone conversation, the leaders of Fresno’s business, academic and social justice communities invited the Food Commons to develop its first prototype/proof of concept in Fresno. Fresno leaders see the idea as a way to foster economic development, create jobs and provide access to healthy foods — this in a region that has the most impoverished congressional district in the nation, along with all the nutritional deficiencies that this entails. Last November, the Food Commons Trust in Fresno finished its business plan; it plans to launch the first phase of Food Commons business operations by 2014. Strictly speaking, Food Commons is not a commons – it is a project that seeks to launch and support regional food commons, which it defines as an integrated regional structure of production, governance and distribution benefits everyone. As the project’s website puts it, “Food Commons is developing a new physical, financial and organizational infrastructure for localized food economies that are fair, just and sustainable for the health and well-being of our people, our communities and the planet.” The project consists of three components: Food Commons Trusts is a nonprofit “quasi-public entity to acquire and steward critical foodshed assets such as land and physical infrastructure. It holds those assets in perpetual trust, which are then used to benefit everyone. The Trust would lease land and facilities to participating small farms and businesses at affordable rates, giving entrepreneurs opportunities that they might not have in more concentrated markets. Food Commons Banks are community-owned financial institutions that provides capital and financial services to all parties in the regional food chain. 
This would allow eco-minded farmers and specialty agriculture to obtain the financing that they might need to succeed. Food Commons Hubs are locally owned, cooperatively integrated businesses that help deal with the complex logistics of aggregating and distributing food and with the various players in the regional food system. The Hubs would also help small food businesses “achieve economies of scale in their administrative, marketing, and human resources and other business functions” and provide technical assistance and specialized vocational training. The stated goal of the project is to build “a networked system of physical, financial and organizational infrastructure that allows new local and regional markets to operate efficiently, and small to mid-sized food enterprises – from farms to processors, distributors, and retailers – to compete and thrive according to principles of sustainability, fairness, and public accountability.” As a sign of its values and ambitions, Food Commons invokes the democratic and cooperative models of the Mondragon Co-operative network in Spain, the Organic Valley Co-op in the U.S., and the VISA International financial services network. To fulfill its vision, Food Commons has set up a governance structure that revolves around two core principles: Preservation of common benefit along the value chain. The governing boards of entities within the Food Commons system will be tasked with balancing the needs of the whole system, from the environment, to workers, to farmers and fishers, to aggregators/processors, to retailers, and to consumers. Sustainable, steady-state profitability. The governing boards will establish goals, incentive structures, and checks and balances that drive efficient use of resources and sustainable positive economic value creation, not unlimited growth and maximization of shareholder profit at the expense of other stakeholders, including future generations. The watchwords of the new system are “accountability, economic viability and social equity.” The Fresno project aims to be "a proof of concept and as an engine for economic development, job creation, and healthy food access in a region characterized by the paradox of great wealth and agricultural resources existing side by side with entrenched poverty, food insecurity, and diet-related chronic disease.” Besides Fresno, another regional Food Commons project is underway in Atlanta, Georgia, both at the largest regional scale as well as in neighborhood-scale community food systems, which the project calls “Fertile Crescent.” In Auckland, New Zealand, Food Commons has been developing a third project – an online marketplace “where food growers and producers of any scale can sell directly to customers online via a super low cost distribution system. The idea is to short circuit the standard long supply chains so that the growers are paid more and the customers pay less.” One cannot help but be impressed by the ambition, rigor and scope of the Food Commons project. If you’d like to learn more, download a pdf file of its 2011 annual report. Photo credit: Southwest Atlanta Fertile Crescent Facebook page
<urn:uuid:4d1e6458-5d1e-4cf3-9a18-3bea196dd415>
{ "dump": "CC-MAIN-2020-29", "url": "https://www.resilience.org/stories/2014-03-18/the-power-of-a-regional-food-commons/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655929376.49/warc/CC-MAIN-20200711095334-20200711125334-00241.warc.gz", "language": "en", "language_score": 0.9374567866325378, "token_count": 1285, "score": 2.6875, "int_score": 3 }
Lipstick is the most widely used cosmetic all over the world. It enjoyed a global market of $5,760 million in 2016, which is expected to reach $8,670 million in 2021. To provide a safe, non-toxic and eco-friendly substitute for synthetic colors, scientists of the Council of Scientific and Industrial Research – Institute of Himalayan Bioresource Technology (CSIR-IHBT) have extracted natural colors and dyes from naturally occurring vegetable and plant sources. The main concern with natural colors is stability. To overcome this issue, the natural colors were stabilized by natural methods and used for the preparation of a beauty-enhancing and health-protecting cosmetic composition, i.e., herbal lipstick. It is prepared in different shades like cherry red, pink, purple, and orange by the use of natural colors derived from vegetable and plant sources, blended with various essential oils in cosmetically suitable base materials. These herbal lipsticks have the potential to beautify the texture and shade of the lips and to provide health-promoting and protective effects. According to Dr. Sanjay Kumar, Director, CSIR-IHBT, "The developed technology provides a process for the preparation of herbal lipstick and has great market potential with additional health-promoting effects." People of different classes worldwide have used cosmetics for beautification since ancient times. However, during the last few decades, there has been a tremendous increase in the use of cosmetics. The daily use of cosmetics may lead to localized skin problems, and harmful effects can be caused by skin or oral absorption of some chemical substances. The toxic elements are related to mineral pigments, which are used as coloring agents. Numerous cosmetics used daily are applied to sensitive areas like the lips, where the absorption of toxic material is very high. Lipstick is a common cosmetic item worn by women in their day-to-day life. It is a product containing primary ingredients like waxes, pigments, and oils that impart shading, texture, and softness to the lips. Fragrances and preservatives are additionally included to prevent lipstick from becoming rancid. Synthetic colors and dyes used in lipstick might be responsible for various allergies, skin irritation, skin discoloration, dermatitis, neurotoxicity, and cancers. However, due to increased awareness among consumers, concern about the quality of products has been amplified. "Nowadays, natural colors and dyes become important commodities in today's global forethought because of the hazardous effects of synthetic dyes on humans, animals, as well as to the environment. These lipsticks may provide a solution to all these problems," said Dr. Kumar.
<urn:uuid:b286eba2-0ca5-442f-bdeb-7f7a431870ef>
{ "dump": "CC-MAIN-2022-05", "url": "https://www.techexplorist.com/scientists-created-organic-herbal-lipsticks-lipstick-lovers/28951/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301720.45/warc/CC-MAIN-20220120035934-20220120065934-00048.warc.gz", "language": "en", "language_score": 0.9507483839988708, "token_count": 546, "score": 3.234375, "int_score": 3 }
Cover of Skywatchers: A Revised and Updated Version of Skywatchers of Ancient Mexico, by Anthony Aveni This event has expired. Since their archaeological and artistic remains were first studied by Western scholars about a century and a half ago, we have begun to appreciate that the ancient Maya rulers of Central America were possessed by the study of time, the calendar, and astronomy. This lecture mainly examines the evidence suggesting that Maya priest-astronomers carefully watched the planet Venus, clocking its motion to an accuracy of better than two hours in five centuries, all without the advantage of the technologies we have today. What drove them to such precision? What was the observational methodology employed to follow the planet? Why was Venus, above all other celestial objects, so important to Maya astronomers? What other celestial bodies were given attention? These questions will be discussed in some detail along with Maya calendar documents, hieroglyphic writing, and the role of astronomical orientations in standing Maya architecture. Dr. Anthony F. Aveni has been a professor of Astronomy, Anthropology, and Native American Studies at Colgate University since 1963. In 1988, he was named the Russell B. Colgate Distinguished University Professor. Notably, he led the development of the field of archaeoastronomy. As an author, he has published research publications, academic articles, and numerous books throughout his career.
<urn:uuid:36013df6-cd96-4440-8f33-cdabe17bf593>
{ "dump": "CC-MAIN-2016-22", "url": "http://deyoung.famsf.org/calendar/guest-lecture-skywatchers-ancient-mexico-dr-anthony-aveni?mini=2014-08", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274994.48/warc/CC-MAIN-20160524002114-00204-ip-10-185-217-139.ec2.internal.warc.gz", "language": "en", "language_score": 0.9693512916564941, "token_count": 281, "score": 3.09375, "int_score": 3 }
What Is the Chemical Formula of Bleach? The chemical formula of household bleach is NaClO. Its chemical name is sodium hypochlorite. Though household bleach generally has the same formula, in chemistry bleaching can be done with a number of different substances, including hydrogen peroxide. According to Info Please, the process of bleaching means to whiten certain fibers, such as cloth or paper, with chemicals. Sunlight can also bleach fibers. According to How Stuff Works, household bleach is created by passing an electric current through saltwater. Since salt is NaCl (sodium and chlorine), when it is dissolved in water (hydrogen and oxygen) and given a charge, the solution produces chlorine gas and sodium hydroxide, which combine to form bleach. Bleach works both as a disinfectant and a stain remover.
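As a rough sketch of the chemistry described above, the commonly cited textbook route (electrolysis of brine, followed by reaction of the chlorine with the sodium hydroxide) can be written as follows; these are standard chemistry equations, not quotations from Info Please or How Stuff Works:

```latex
% Illustrative textbook equations for the chloralkali route to household bleach.
\begin{align}
2\,\mathrm{NaCl} + 2\,\mathrm{H_2O} &\xrightarrow{\text{electrolysis}} \mathrm{Cl_2} + \mathrm{H_2} + 2\,\mathrm{NaOH}\\
\mathrm{Cl_2} + 2\,\mathrm{NaOH} &\longrightarrow \mathrm{NaClO} + \mathrm{NaCl} + \mathrm{H_2O}
\end{align}
```

The hypochlorite (NaClO) formed in the second reaction is the active ingredient named in the article; the hydrogen gas from the first step is a by-product.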
<urn:uuid:c77e77bc-e27f-4019-bc76-d911b030483e>
{ "dump": "CC-MAIN-2022-49", "url": "https://www.reference.com/world-view/chemical-formula-bleach-69832f84e0c7315c", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710765.76/warc/CC-MAIN-20221130160457-20221130190457-00672.warc.gz", "language": "en", "language_score": 0.9407680630683899, "token_count": 172, "score": 3.65625, "int_score": 4 }
Snoring is a common breathing disorder during sleep. It is hypothesized that head posture during sleep could change the bending angle and the cross-sectional area of the airway, which could cause changes in airflow and aerodynamic pressure during sleep. In this work, an experiment-driven computational study was conducted to examine the aerodynamics and pressure behavior in human upper airway during snoring. An anatomically accurate human upper airway model associated with a dynamic uvula was reconstructed from human magnetic resonance image (MRI) and high-speed photography. The airway bending at different head posture and the corresponding change in airway cross-sectional area are modeled based on measurements from literature. An immersed-boundary-method (IBM)-based direct numerical simulation (DNS) flow solver was adopted to simulate the corresponding unsteady flows of the bent airway model in all their complexity. Analyses were performed on vortex dynamics and pressure fluctuations in the pharyngeal airway. It was found that the vortex formation and aerodynamic pressure were significantly affected by the airway bending. A head-neck junction extension posture tends to facilitate the airflow through the upper human airway. Fast Fourier transformation (FFT) analysis of the pressure time history revealed the existence of higher order harmonics of base frequency with significant pressure amplitudes and energy intensities. The results of this study help better understand the pathology of snoring under the influence of head posture from an aerodynamic perspective.
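As an illustration of the spectral analysis mentioned in the abstract, the sketch below shows how a pressure time history can be decomposed to reveal a base frequency and its higher harmonics. It is a generic Python example run on a synthetic signal; it is not the authors' code, data, sampling rate, or solver output.

```python
# Generic FFT sketch for a pressure time history; the signal here is synthetic
# (a base tone plus harmonics and noise), not data from the cited study.
import numpy as np

fs = 2000.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)      # 2 s of samples
f0 = 30.0                            # hypothetical base frequency of the pressure fluctuation
p = (np.sin(2 * np.pi * f0 * t)
     + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)     # 2nd harmonic
     + 0.25 * np.sin(2 * np.pi * 3 * f0 * t)    # 3rd harmonic
     + 0.1 * np.random.randn(t.size))           # measurement-like noise

spectrum = np.abs(np.fft.rfft(p)) / t.size      # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

# Report the strongest spectral peaks, which should sit near f0, 2*f0, 3*f0.
peaks = freqs[np.argsort(spectrum)[-3:]]
print(sorted(peaks.round(1)))
```

In a study like this one, the same transform could be applied to simulated pressure signals sampled at points along the pharyngeal airway to identify the base frequency and its harmonics.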
<urn:uuid:55c313d2-c10b-481c-8a0e-a512cbd87723>
{ "dump": "CC-MAIN-2022-49", "url": "https://asmedigitalcollection.asme.org/FEDSM/proceedings-abstract/FEDSM2020/83723/V002T03A019/1088020", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711162.52/warc/CC-MAIN-20221207121241-20221207151241-00634.warc.gz", "language": "en", "language_score": 0.9575662016868591, "token_count": 299, "score": 2.625, "int_score": 3 }
Why is mold such a big problem? Because it can cause serious health issues. There are numerous types of mold, and not all molds are toxic, but all can cause allergic reactions, asthma attacks, and other respiratory problems. More dangerous molds can lead to brain damage, cancer, and even death! Mold can make your home or commercial property unsafe, so it's important to get it cleaned up ASAP. Our certified inspectors can determine whether a mold problem exists in your home. Inspections uncover moisture intrusions that can lead to mold growth if left unattended. A testing scenario is formed by the inspector after a visual inspection of the property. Inspectors can carefully explain the problem and recommend ways to correct it. We are certified mold detectives and remediators. However, we do not test. That is something that needs to be done independently, as we feel that it is a conflict of interest. We can detect mold – so can you. Mold remediation services (through MAS Labs) can be performed if required.
Mold Remediation Services
A HEPA vacuum is very useful for removing mold spores from contaminated surfaces. An air scrubber is a device that is used to remove particles and gases, such as those created by mold spores, from the air within a given area.
Dry Ice Blasting
Dry ice blasting uses compressed air to propel dry ice pellets (solid CO2) through a dispersal gun aimed at the surface that needs to be cleaned. The ice is delivered to the surface at supersonic speed, allowing energy transfer to knock off the contaminant. The cold temperature of the ice (-79°C or -109°F) creates thermal shock that breaks the bond between the contaminant and the substrate. As the ice penetrates the surface it turns back to gas and expands, which helps push the contaminant off the substrate as well. The ice sublimates back to gas, leaving only the dead mold to be HEPA vacuumed.
Negative Pressure Containment
These systems contain and capture indoor air particles, mold spores and other contaminants and odors using a time-proven technique known as negative pressure particle containment.
<urn:uuid:d08cfb52-7c73-44bc-8667-4c0c5d586e1f>
{ "dump": "CC-MAIN-2018-17", "url": "http://valuedry.com/mold-remediation/location/Malvern/PA", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945584.75/warc/CC-MAIN-20180422100104-20180422120104-00363.warc.gz", "language": "en", "language_score": 0.9233590960502625, "token_count": 442, "score": 2.578125, "int_score": 3 }
Want to start sharing your mind and have your voice heard? Join our community of awesome contributing writers and start publishing now. Emma Woodhouse, Jane Austen’s titular main character, came as a surprise to many readers when Emma stepped forth into the world. Infuriatingly prideful, vain, and obstinate, she was distinct from the general body of female protagonists in 19th-century England. In other words, Emma’s character wasn’t dictated as a means to preach the virtues of womanhood, but rather, written in a way that is consistent with the story being told. She views her surroundings and the people in it through the lens of her own wants, not taking into account the wills of others. A defining example of this is when she takes the reins of her friend Harriet Smith’s love life and manipulates her into declining Robert Martin, a man whom Harriet loves but falls short of Emma’s set of expectations. This controlling behavior doesn’t just pollute the inner lives of Harriet and Martin, but also herself; her evils of presumption all come down to her false perception of her own grandiosity. This creates a character with a central moral flaw that allows for a fascinating and believable journey toward maturity. She is depicted as human with her own thoughts, emotions, and desires—even if it means being wrong and foolish—as opposed to a stereotype based on how a patriarchal society believes women should be. This would sound super obvious to the modern reader, but it’s important to remember that this type of creative license hasn’t always been encouraged. It is a universal truth that literature is impacted by the social and cultural prejudices of the times, for better or for worse. Best case scenarios produce harrowing, yet necessary, art, such as Moll Flanders, The Handmaid’s Tale, and The Color Purple; other cases use art to reflect ingrained ideals of womanhood, such as War and Peace and A Tale of Two Cities. Not to dispute that the latter two are masterpieces worthy of study, but it is still important to acknowledge that Tolstoy and Dickens, respectively, based Natasha Rostova and Lucie Manette off patriarchal influences of femininity. The history of fictional women is, unfortunately, brimming with that kind of influence. While it is true that the first English novel by Samuel Richardson, Pamela; or, Virtue Rewarded, does set the stage for a female character telling her own story—it was done at the expense of personal creativity. Pamela, an epistolary tale of a maidservant and her torrid, one-sided affair with her employer, is very much dictated by the dynamics of Richardson’s mid-1700s climate. Two conflicting forces: the feminist movement that was taking shape, and the traditionalists concerned about matters of morality—each of which impacted the story of Pamela from clashing angles. Richardson focused on character as much as the sensibilities of his peers, even surrounding himself with a female advisory group. Perhaps that’s why Pamela is seen as feminist by some; the character is presented as more than a figment of the male fantasy, but rather as a thinking, feeling individual who is strong in her own right. However, the questions of morality and virtue that confront her in the course of her relationship with Mr. B (the employer who would’ve faced many sexual harassment lawsuits in the modern world) aren’t a matter of character conflict, but rather a reflection of the moral concerns from the reading masses. The virtue “rewarded,” in this case, is marriage between Pamela and Mr. B. 
Never mind that he repeatedly manipulated her through lies and disguise, dismissed and exploited her boundaries, and essentially treated her as a prize to be won. The patriarchal nature of the society of Pamela (both of the novel and character) demands that a symbolic union be made between the two, lest the feminine virtue attached to her character be marred by Mr. B’s sexual advances. This decision, no matter how much catharsis it brings to the contemporary reader of Richardson, doesn’t ring true to the feminist narrative of the story itself. Even more shameful, it limits the potential complexity of Pamela—while her search for independence may be seen as morally ambiguous back then, at least it does her character and her story justice. But, at the end of the day, I believe that art outlasts everything, including the stones that cynics throw at it. After all, the master of the English language himself is still celebrated for the scope of his female characters. The women in Shakespeare’s plays come alive with the same vivid and scorching fire as the male characters: leading rich inner lives, confronting trials and tribulations, inhabiting spaces that allow for moral greyness. Lady Macbeth is shameless in her unwavering ambition; Juliet is unapologetic in experiencing a world-defying desire; Beatrice is delightful in her scorching wit; Rosalind is literally given the free range of a man; even Kate, who is stuck in a patriarchal narrative, has a fire inside her that can never truly be burned out. These ladies defied the Elizabethan model of women being the weaker sex, forever confined to the shadows while their husbands take the spotlight. They are allowed to be just as lovesick and impetuous, witty and clever, terrible and ruthless—in other words, human. Because, in order for Shakespeare to frame the narrative in a way that fully brings his message to life, the female characters have to be treated with the same creative focus as the male characters. The character doesn’t follow the calling of external forces, but only that of the story. And, as a result, he became the greatest playwright (and arguably, storyteller) in the history of English literature. In fact, some of the most acclaimed novels in literature famously involve female characters that remain morally complex. Charlotte Brontë’s Jane Eyre doesn’t follow the Gothic archetype of the helpless and innocent young woman; on the contrary, Jane is emotionally stronger than Rochester (her Byronic, tormented lover who comes with his own demons), inhabiting a clear-eyed wisdom that is very much conscious of her own happiness and well-being. When she discovers the existence of Rochester’s mad wife in the attic, she asserts her own will by leaving him despite his protests. She returns his love on her own terms, holds him accountable for his lies and mistakes, and decides to marry him once she’s in a self-determined place to do so (a sharp contrast to Pamela). Now, it is true that Jane is complicated and easy to root for since the reader understands her tragic history through the confidential honesty of her narration. However, there are numerous fictional woman who aren’t as easy to support, and who are excellently written in their own right. Daisy of The Great Gatsby is a famous example; she is the driving force of the novel, bringing with her the same gilded carelessness that drove a former lover to his death and left others to clean up the blood. 
She is surrounded by others whose selfishness is on par with, or even exceeds, her own—the key difference being that she is female, and the best thing a girl can be in this world is a “beautiful little fool.” So she ducks behind white-lace femininity to hide the snake inside, a fictional expression of the real-world barriers that societies have placed on women when it comes to showing their true colors. The popularity of a character and her moral strengths or weaknesses are commonly connected, but not always. Emma may not be vastly understood, but she is rich in her moral conflicts; Pamela was popular among Richardson’s fans, but her complexity is arguably limited by societal expectations. The point is, women in fiction don’t have to be likable (or even respectable) to deserve a place at the table; after all, we don’t enter the realm of fiction with hopes of going to tea or making polite small talk with its characters. We want to feel, to burn, to die, and to be born again through the pages. Unsurprisingly, moral conflicts are critical ingredients in this process. What we don’t begrudge morally subjective male characters for, such as Brutus and Oedipus, shouldn’t be held against the likes of, say, Medea and Merricat Blackwood. The ability to be blinded by one’s own narrative goes both ways.
<urn:uuid:5511202a-f4eb-48de-83b9-286d8c99b39f>
{ "dump": "CC-MAIN-2022-27", "url": "https://mindfray.com/debate/patriarchal-society-case-morally-complex-women-literature/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103324665.17/warc/CC-MAIN-20220627012807-20220627042807-00111.warc.gz", "language": "en", "language_score": 0.9698188304901123, "token_count": 1822, "score": 3.3125, "int_score": 3 }
E-learning holds the image of being more cost-effective and convenient than classroom or lecture-based training. However, is it as effective when it comes to changing behavior and enhancing the learning experience? That's a question we hear a lot. The answer is that it differs from person to person. However, one of the reasons my corporate clients consider e-learning especially effective for retention is that learners are able to revisit the courses as much as they need, at any time, for refresher purposes. In addition, learners can also choose the time that is ideal for them to take an online session, as per their commitments and timeline. In the past 30 years, as first televisions and now mobile devices and computers have spread around the world, learning by video has become one of the most rapidly growing fields in education and training. The market for training video for learning and skill development is huge and runs into billions of dollars on an annual basis. It makes sense that video learning is an innovative and modern model for revolutionizing training and education. However, video learning is not yet universally acclaimed. Many learners and trainers still prefer conventional classroom learning. Others see technical problems holding up the speed of progress. It is a complicated area with a lot of potential and a subject we will be covering more in the future. In a debate between video or digital modes of training and education, and the face-to-face interaction that characterizes in-person courses, we might lose the way to much better and truly effective training opportunities in the future. It becomes necessary to determine the pros and cons of both sides, so as to establish the cost we might bear as a result of ignorance. The benefits of telecommunication can never be denied, since it makes communication cheaper, more immediate and easier. It is still important to remember that before all these advances in science and technology were invented, people used to interact and communicate more. Why is this so? It is because of the basic reality that all humans are social beings. The young and the growing number of adults now find themselves quite active on different social media platforms; however, their search for connections shows that people crave human interaction. One of the great contributing writers for Psychology Today, Ray Williams, says that human interaction is fundamental to one's life and is one of the defining human traits that distinguish us from the rest of the species in the world. Physical interaction is still the best way to communicate, learn and gather memories, opponents of video learning say. It is a fact that all meaningful relationships are built through personal interactions, with the firmest connections made when time is spent together. When it comes to training, virtual, digital or online training has become an increasingly common replacement for classroom-based learning. With increasing demand, more universities and organizations are adopting online training and modern learning models for their students and employees, respectively. The digital learning setting, specifically video learning, has been proven to have a number of benefits, like minimizing spatial barriers and increasing flexibility. However, at the same time, there is a cost in terms of less face-to-face learning, which, no matter what, still carries at least some unbeatable benefits.
Critics have argued that though digital models are readily accessible to all, they may not be an ideal option for everyone. Researchers in this regard have revealed that among the most successful and reputed businesses globally, the majority still prefer face-to-face forms of training delivery. In addition, group learning, which is one of the most prominent components of in-person courses, facilitates problem-solving and develops collaborative skills and teamwork, which are quite critical in life. Video learning thus doesn't seem a good medium when it comes to teamwork and group learning. However, trainers are still finding ways to minimize the gap as much as possible. For instance, video learning is being delivered in two-way webcam settings, enabling a teacher or trainer to be in a central location and reach learners who are spread globally. Such innovations might greatly reduce the cost of video learning versus in-person courses. Copyright 2016 Bryant Nielson. All Rights Reserved. Bryant Nielson – Managing Director of CapitalWave Inc. – Being a big believer in Technology Enabled Learning, Bryant seeks to create awareness, motivate adoption and engage organizations and people in the changing business of education. Bryant is an entrepreneur, trainer, and strategic training adviser for many organizations. Bryant's business career has been based on his results-oriented style of empowering the individual. Learn more about Bryant at LinkedIn: www.linkedin.com/in/bryantnielson
<urn:uuid:8bee4aed-e70d-4b4f-a55b-68cf8bce53d8>
{ "dump": "CC-MAIN-2017-47", "url": "http://www.yourtrainingedge.com/the-cost-of-video-learning-vs-in-person-courses/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806615.74/warc/CC-MAIN-20171122160645-20171122180645-00374.warc.gz", "language": "en", "language_score": 0.9504615664482117, "token_count": 928, "score": 2.703125, "int_score": 3 }
Westerners read from left to right. Yet web pages are not necessarily read from left to right. There are many theories out there about how web pages are read. Some are true, but only in very specific circumstances. Most are false. Here is a selection of the best known ones. Popular misconception n°1: web pages are read from left to right This is the most widespread misconception, because it is the most logical. If the text is written from left to right and the reader is a Westerner, why wouldn’t web pages be read from left to right? This is false in almost all circumstances. The only time a user reads from left to right is when reading a text with the intention of reading it in its entirety and understanding its content. This is the case when reading a newspaper article online. Text is indeed read from left to right. Even in this case, though, the eyes don’t follow a strictly linear path. The eyes have a tendency to skip from word to word and make small backward movements. But overall we can say that the eyes follow a general left to right path. Another example of left-to-right reading can be found on the site of our client, Correctmot. When we skim and scan a text quickly to find information, however, the left-to-right reading path no longer applies. In this case, the eyes skip randomly from sentence to sentence and from paragraph to paragraph to find the information sought. We call this “skim reading”, because the eyes skim over the page. Likewise, left-to-right reading no longer applies when the page contains multimedia elements. Photos, films, animations and interactive elements change the reading direction. This is the case for most web pages. On most sites, the eyes don’t follow a predefined pathway. A number of factors influence where a user’s eyes travel on a page. Popular misconception n°2: web pages are read in a triangular shape Reading in a triangular shape was observed on Google pages. We actually discussed these eye tracking results in a previous newsletter about the Google study. The eyes scan through the pages to find interesting information. The organic results at the top are more relevant and logically receive more attention than those at the bottom. Finally, the sum of all gazes forms a triangle on the page. This is the famous “Golden Triangle”. The triangle is formed by the total number of gazes of several users. But the eye movements of each individual user don’t follow a triangular pattern. What’s more, this result cannot be generalized. As soon as you move away from the predefined and well-known organic results scenario, you don’t see a triangle. For example, merely introducing images on the page is enough to change the scan path. Popular misconception n°3: web pages are read in an F-shape Jacob Nielsen recently presented the concept of F-shaped reading in his newsletter F-Shaped Pattern For Reading Web Content. This scenario only holds true when a page of text has distinct paragraphs. The first paragraph is read in full, attentively. The second is read more rapidly. Then the reader’s attention is lost. In the end, the accumulated eye movements on the page form a sort of F shape. This F pattern is consistent with how we theoretically read a text: we scan the page from top to bottom looking for information and we register information by reading from left to right. Just like the first two scenarios, this one is no longer valid once multimedia elements are introduced. Popular misconceptions n°4, 5, 6, 7, etc.: web pages are read in the shape of a Gamma, Z, C, etc. 
These theories are totally false. They appeared in the 1980′s for billboard design. Then they spread like urban legends. We know now that they are not based on any real evidence. People don’t read in Gamma or Z patterns. - We do read text from left to right - When a text is organized into paragraphs, eye movements can form an F - On a Google results page, the accumulated eye activity of all users forms a triangle Otherwise, our way of viewing web pages cannot be reduced to one unique path. The next newsletter will list the factors that determine how a website is viewed.
<urn:uuid:cf3fade9-b1f3-4851-8a37-eddb8e2bd3ab>
{ "dump": "CC-MAIN-2023-40", "url": "http://miratech.com/blog/eye-tracking-lecture-web.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510994.61/warc/CC-MAIN-20231002100910-20231002130910-00367.warc.gz", "language": "en", "language_score": 0.9066247940063477, "token_count": 918, "score": 3.03125, "int_score": 3 }
Replica based on the Seedorf shield Up until the turn of the millenium shields were either painted in a single colour or had simple geometric designs such as quadrants. There was no need to decorate them extensively as they were disposable items which would be unlikely to last a battle, rather than valued items which the owner customised. The design shown in the shield below is one of the most elaborate patterns we have evidence for (from an early eleventh century manuscript). From roughly 1040 onwards, teardrop shaped shields (kite shields – shown above, between two round shields) start to appear, which offer greater body protection than the earlier round shields and can more easily be used from horseback. Our first records of kite shields come from some Northern French bibles, showing kite shields painted in a single colour or with two colour crosses. In the Bayeux tapestry most shields continue to be painted in this way, but some of the high status Normans are depicted with simple zoomorphic designs such as birds and dragons. During the twelfth century kite shields become flat-topped as a result of changes in fighting style and fashion, and during the thirteenth century they also become smaller, because the increasing use of plate armour means you don’t need as big a shield. By the end of the thirteenth century shields have become what we today call a “heater” shield. The early zoomorphic designs had also become a key part of the heraldric system and shields were decorated as a way for rich individuals to ostentatiously display wealth, and to show the allegiances of their retainers. One of the only surviving examples of a shield from this period is the Seedorf shield (1180-1225), shown below. This particular shield has clearly been cut down from its original size, possibly so it is in the most fashionable shape. It is decorated with a silver-gilded raised lion design. One of my recent commissions has been to make a planked shield based on this design for novelist Elizabeth Chadwick. The shield itself is constructed as discussed in my post on the construction of planked shields. The relief design is formed using one hundred and forty individual leather strips to create raised areas: The completed design before covering and painting: Once the design was complete, I covered the entire shield in leather and painted it blue. In the medieval period, blue was most commonly obtained using lapis lazuli, but due to its high cost only the very rich could afford to use it in large quantities. As lapis is just as expensive today, this particular shield has been painted with a modern equivalent. The relief design was then gilded in gold leaf (the original is gilded in silver):
<urn:uuid:6743c865-8100-4f6f-9280-e42ba971e26f>
{ "dump": "CC-MAIN-2022-27", "url": "https://tokimedieval.com/2012/06/24/replica-based-on-the-seedorf-shield/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104141372.60/warc/CC-MAIN-20220702131941-20220702161941-00458.warc.gz", "language": "en", "language_score": 0.9731767773628235, "token_count": 570, "score": 3.03125, "int_score": 3 }
The growing approach to communication technology and digital information has played a colossal role in the economic,social,financial and political development of our world today.The recognition of this significance of access to technology by the policy makers,has finally adapted in means to empower women and ensure their safety. In a recent development,Punjab Safe Cities Authority(PSCA) in collaboration with the Punjab Commission on Status of Women has designed an Android Application to prevent the crime cases related to violence against women. About the App: -The App will issue a warning signal that will allow victims,under possible threat, to inform the Police Integrated Command,Control and Communiction (PPIC3) officials about their location. -Once notified,the initial response team,comprising of the Dolphin Force,Police Response Unit and Police Stations Beat Officers will be dispatched to the crime scene. The Application will be used for two purposes:To create awareness about harasment and to help victims of it.It will consist of two to three modes including a panic button for emergency situations. Phone calls generated from the application will be directed to the 15 helpline where a special desk has been entitled to answer the calls.These operators have undergone pecial training to effectively deal with cases of various forms of harassment. Shamsher Haider,who is the deputy chief of System Integration at Punjab Safe Cities Authority(PSCA )said in a statement “Definitions of what constitutes harassment and what does not will be built into the application, so apart from helping out the citizens we can also educate them.” Awareness on sexual harrassment The objective of the program is not only to provide prompt assistance to possible victims,but also to raise public awareness of harassment as a condemn-able issue and to ensure women rights and honor at the workplace.Therefore,it is important that alongside making the essential technological advancements,intolerent behaviour on part of harassers is checked and rectified so ensure that the opressed can actually count on the system that was made to safeguard them.
<urn:uuid:55f54052-1da8-4e51-9ceb-9c39a3e17f68>
{ "dump": "CC-MAIN-2017-30", "url": "http://penduproduction.com/punjab-curb-women-harassment-technology/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426639.7/warc/CC-MAIN-20170726222036-20170727002036-00293.warc.gz", "language": "en", "language_score": 0.9399471879005432, "token_count": 421, "score": 2.609375, "int_score": 3 }
El Capitan in Yosemite National Park is considered the greatest granite rock wall in the world. Rising nearly 3,000 feet above the Merced River, El Capitan's legendary walls draw climbers from around the world to battle against the wages of gravity.

The climbing history on El Capitan is relatively short. In 1958 Warren Harding led an epic battle that lasted weeks and became the first to aid-climb the 3,000 feet of vertical rock. Harding's monumental route up the prow became known as The Nose and started a new revolution in climbing.

Three years later, in 1961, Royal Robbins, Chuck Pratt and Tom Frost set their eyes on another part of the wall called the Salathe Wall. The Salathe Wall was named by Yvon Chouinard in honor of John Salathe, one of Yosemite's early pioneers. Robbins, Pratt and Frost's climb up the Salathe Wall firmly established the Golden Age of Yosemite climbing and set the fundamental framework for a generation of climbers that persists today.

In 1988, Todd Skinner and Paul Piana came to Yosemite Valley with the hope of free climbing the Salathe's face. By using modern-day climbing techniques they realized that another new age in climbing was emerging, and they were ready to meet the challenge. Standing on the shoulders of the early Yosemite pioneers, Skinner and Piana became the first to free climb El Capitan's Salathe Wall and usher in a new dawn of climbing.

All photos: Bill Hatcher.
<urn:uuid:3edbe608-3ec3-43c1-89f6-774b2a226aae>
{ "dump": "CC-MAIN-2017-30", "url": "http://toddskinner.com/Gallery_Salathe/index.htm", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424564.72/warc/CC-MAIN-20170723142634-20170723162634-00141.warc.gz", "language": "en", "language_score": 0.9392011165618896, "token_count": 326, "score": 3.09375, "int_score": 3 }
Tuesday, Aug. 29, 2017

Court rejects bid to declare Sonoran Desert bald eagle endangered

WASHINGTON – A federal court has rejected a bid to declare the Sonoran Desert bald eagle an endangered species, saying the U.S. Fish and Wildlife Service acted properly when it determined the birds were no different than other bald eagles.

The ruling Monday by a panel of the 9th U.S. Circuit Court of Appeals is the latest turn in a years-long fight by environmental groups to gain protection for the desert eagles separate from other bald eagles in the U.S. Bald eagles were declared endangered in 1967 but were removed from the endangered species list in 2007 after making a remarkable recovery.

An official with the Center for Biological Diversity, which pushed for the protection, said the desert eagles should be listed as a distinct population because they "really are unique in that they are adapted to the desert." He said the center is considering its next steps in the case.

Justin Augustine, the center's counsel, said that without the protection the bald eagles in the Sonoran Desert are in grave danger. Despite the government's finding, he said the desert eagles are clearly a separate population segment.

In several different reports, the Fish and Wildlife Service agreed that the desert eagles had "a number of unusual characteristics" such as a "preference for cliff nests," and that they "are smaller than, and breed earlier than, other bald eagles." But those reports also said that bald eagles as a whole "are highly adaptable, wide-ranging habitat generalists … capable of inhabiting areas throughout North America, so long as a sufficient food source persists."

The population under dispute is defined as "all bald eagle territories within Arizona, the Copper Basin breeding area in California near the Colorado River and the territories of interior Sonora, Mexico, that occur within the Sonoran Desert."

The center first tried to have the desert eagles declared a distinct population in 2004, when the government began talking about "delisting" bald eagles as a whole. After study, the government rejected that request in 2006 – and again in 2010 and 2012 after the center went to court to challenge those decisions.

In one of the agency's reports, it said the desert eagle's unique characteristics did not require a conclusion in and of themselves that the birds were "ecologically or biologically significant for the bald eagle taxon as a whole." A three-judge panel of the 9th Circuit agreed Monday.

In his opinion for the panel, Judge William A. Fletcher cited the agency's finding that there was "no evidence of distinctive traits or genetic variations among the Sonoran Desert Area population that suggest that loss of the population would have a negative effect on the bald eagle as a whole."

Augustine challenged that reasoning. "They are saying that if this population didn't exist anymore it wouldn't matter to the whole population," he said. Augustine said he didn't understand the agency's "shoddy" and "stingy" reasoning on the birds, adding that he is "deeply disappointed in the agency."

The center is considering its next step, which could be anything from an appeal to the Supreme Court to starting at square one with another petition to Fish and Wildlife. In the meantime, he said, other efforts to help the birds will go on.

"There are still people voluntarily taking their own efforts to sustain the population," he said. "Things are very concerning in the long term; I want to remain optimistic. We may be required to get on the Endangered Species Act through other means somewhere down the road."
<urn:uuid:4c63157d-8a33-4ae3-91e4-9ef8be35ac40>
{ "dump": "CC-MAIN-2017-47", "url": "http://justsaynews.com/court-rejects-bid-to-declare-sonoran-desert-bald-eagle-endangered/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809746.91/warc/CC-MAIN-20171125090503-20171125110503-00323.warc.gz", "language": "en", "language_score": 0.9591185450553894, "token_count": 802, "score": 2.953125, "int_score": 3 }
Funded by the Economic and Social Research Council and supported by academics at the University of Strathclyde and Oxford Brookes University, the research examined the links between how 11-13 year olds use different styles of humour and the problem of bullying in schools.

The findings reveal that children who use self-defeating forms of humour – e.g. self-disparaging language or putting themselves down to make other people laugh – are more likely to be bullied than those who use more positive forms of humour.* The study also found that peer victimisation led to an increase in the use of self-defeating humour over time, showing that victims of bullying are often trapped in a vicious cycle, where being bullied deprives them of the opportunities to practise positive humour with peers and leads them to rely on self-defeating humour, perhaps as a way to get others to like them.

Dr Claire Fox, lead researcher from Keele University, said: "What our study shows is that humour clearly plays an important role in how children interact with one another and that children who use humour to make fun of themselves are at more risk of being bullied. We know that this negative use of humour is a nurtured behaviour, influenced by a child's social environment rather than genetics. This makes the behaviour easier to change, so we hope the next step for this study is to see whether it is possible to 'teach' children how to use humour to enhance their resilience and encourage them to not use negative forms of humour."

The two-year study involved 1,234 children who were questioned at the beginning and end of each school year. Researchers measured three types of bullying and victimisation – verbal, physical and relational/indirect (e.g. social exclusion, spreading nasty rumours) – and used self-reports and peer nominations to draw their conclusions. Each child was also assessed in relation to their number of friends, humour styles, symptoms of depression and loneliness, and self-esteem.

Notes to editors

For more information please visit http://esrcbullyingandhumourproject.wordpress.com/ or contact Kate Dawson on 0121 713 3878/07909 993 197 [email protected]

Six schools across Staffordshire, Derbyshire and Shropshire were involved in the study.

*Four types of humour

Positive humour
• Self-enhancing humour, e.g. 'If I am feeling scared I find that it helps to laugh'.
• Affiliative, e.g. 'I often make other people laugh by telling jokes and funny stories'.

Negative humour
• Self-defeating, e.g. 'I often try to get other people to like me more by saying something funny about things that are wrong with me or mistakes that I make'.
• Aggressive, e.g. 'If someone makes a mistake I will often tease them about it'.
<urn:uuid:7d5b9d2d-02af-471d-a018-df08c1854c27>
{ "dump": "CC-MAIN-2019-04", "url": "https://www.healthcanal.com/mental-health-behavior/38183-humour-styles-and-bullying-in-schools-not-a-laughing-matter.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584334618.80/warc/CC-MAIN-20190123151455-20190123173455-00617.warc.gz", "language": "en", "language_score": 0.9619433283805847, "token_count": 652, "score": 3.65625, "int_score": 4 }
Any kind of restriction caused by stress, physical trauma, immobility, or bad posture affects the body's complex network of nerves, tissues, muscles, fascia, bones, tendons and ligaments.

Reduces Muscle Tension - Muscle tension reduces the circulation of blood and the movement of lymph in an area. Neuro-Myofascial Chirotherapy relieves contracted, shortened, and hardened muscles.

Improves Blood Circulation - Neuro-Myofascial Chirotherapy aids in stimulating the nerves that supply the blood vessels, dilating them and allowing a greater blood supply to the tissues.

Induces Better Lymph Movement - The lymph drains impurities and wastes away from the tissue cells. Neuro-Myofascial Chirotherapy helps to move lymph, thereby eliminating toxins from the body.

Increases Mobility and Range of Motion of Joints - Muscles and connective tissues surround and support many other parts of the body. Neuro-Myofascial Chirotherapy's method of gentle stretching lets tissues and muscles regain elasticity for movement.

Stimulates or Soothes the Nervous System - Neuro-Myofascial Chirotherapy balances the nervous system by soothing or stimulating it, depending on which effect is needed by the individual at the time of treatment.
- Neuro-Myofascial Chirotherapy resets and adjusts nerve signal conduction in order to restore proper function of organs and systems, and promotes overall mobility.

"myo" - muscles
"fascia" - tissues, soft ligaments, tendons, joints
"chiro" - spine

The fascia is a thin layer of connective tissue in the body that resembles a spider web and extends without interruption from the top of the head to the tips of the toes. It covers and interpenetrates every muscle, bone, nerve, artery, vein, organ and cell in the body. When it becomes tight, restricted or unbalanced in any way, it can place incredible stress on any bone, organ, nerve, or other system in the body.

Conditions addressed include:
- Acute and chronic pain
- Movement restriction
- Unexplained headaches
- Temporomandibular Joint (TMJ) dysfunction
- Chronic fatigue
- Sports injuries
- Slipped disc
- Nerve impingement
- Neurological dysfunction
- and many more
<urn:uuid:28c570b4-270a-4826-8539-a57b05dd2cfc>
{ "dump": "CC-MAIN-2018-30", "url": "http://medicalinfraredthermographyphilippines.com/?q=sports-medicine/myofascial-chirotherapy", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591718.31/warc/CC-MAIN-20180720154756-20180720174756-00519.warc.gz", "language": "en", "language_score": 0.8194871544837952, "token_count": 506, "score": 2.71875, "int_score": 3 }
- Teachers need to remember that the very core of their job is helping students, making the development of a classroom where pupils feel supported and respected crucial, educator Beth Pandolpho writes for Edutopia.
- Pandolpho advocates that educators listen to students, ask what they're thinking about during class discussions, and then refer to details from previous comments they've made so they know their teacher has heard them.
- While some students are easy to talk with and eager to connect, Pandolpho also makes a point of reaching out to those who may be a bit more withdrawn, so that every child is heard, building a more inclusive classroom.

While teachers are the navigators of a classroom, children are hardly the workers below deck. The trend today is to empower students to find their own voice and give them some autonomy in the way they learn or how their school day is run. Students at Greece Central School District near Rochester, NY, for example, voiced their discomfort with changing into gym clothes every day, and the administration ended that requirement, as Education Dive reported last year.

While not every student who speaks up will grow into a leader, every child can be taught to find their voice through the curriculum, as The Hechinger Report recently wrote, describing a leadership program for elementary school children called the Bonstingl Leaders for the Future. The program helps students learn to be active listeners while looking for diplomatic solutions and ways of resolving issues that work for the betterment of all.
<urn:uuid:a09f8d61-d6d5-4d2e-9dfd-9209a82d56f8>
{ "dump": "CC-MAIN-2020-40", "url": "https://www.educationdive.com/news/strong-curriculum-empowers-students-to-develop-their-voices/527864/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198213.25/warc/CC-MAIN-20200920125718-20200920155718-00523.warc.gz", "language": "en", "language_score": 0.9773582220077515, "token_count": 315, "score": 3.71875, "int_score": 4 }
Registered nurses (RNs) make up the majority of workers in our healthcare system, holding 2.6 million jobs. RNs collaborate with physicians in treating and examining patients, administering medication, and providing rehabilitation. They may also be involved in educating the public about medical conditions and promoting health. The specific duties of an RN vary depending on the work setting and patient population. RNs may specialize in specific health conditions (eg, diabetes management nurse), settings (eg, perioperative nurse), populations (eg, pediatric nurse) or organs/body systems (eg, cardiovascular nurse).

RN Education and Degree Requirements

There are many educational paths that you can pursue to become an RN. RN education degree requirements range from a diploma to a bachelor's degree. Diploma programs are offered at hospitals and typically last three years. Associate's degrees in nursing are offered at community colleges and take two to three years to complete. Bachelor's degrees in nursing are offered at colleges and universities and take four years to complete.

Associate's Degree vs. Bachelor's Degree

Diploma programs and associate's degree programs prepare graduates for entry-level nursing positions in hospitals and other healthcare settings. Many nurses with a diploma or associate's degree later enter bachelor's degree programs so they can prepare to take on a broader range of roles. RNs with bachelor's degrees are also qualified to work in community health promotion and disease prevention.

All nursing education programs combine classroom instruction with supervised clinical experience. Courses that nursing students may be required to take include anatomy, physiology, microbiology, chemistry, nutrition, and psychology. Nursing students may also be required to take courses in liberal arts subjects. Students gain clinical experience in nursing homes, public health departments, and hospital departments. Graduates of nursing education programs must pass a national licensing examination called the National Council Licensure Examination (NCLEX-RN) to obtain a nursing license. Further requirements for licensing vary by state.

Career Outlook for Becoming an RN

Around 60% of RNs work in hospitals, but many nurses also work in nursing homes, schools, offices, and community centers. Patients in hospitals and nursing homes require care around the clock, which means nurses often have to work nights, weekends, and holidays. Tasks that an RN may be responsible for on any given day include:
- Taking a patient's medical history and symptoms
- Assisting physicians during surgery or treatment
- Establishing a care plan or contributing to an existing care plan
- Explaining home care procedures
- Providing emotional support to family members
- Performing diagnostic tests and analyzing results
- Operating medical machinery
- Helping with patient follow-up

Employment Projections and Salary

Registered nursing is a fast-growing career field, so RN career prospects are projected to be excellent. According to the Bureau of Labor Statistics, the employment of RNs is expected to grow at a rate of 26% from 2010 to 2020. The highest growth rate for RN jobs will be in doctor's offices and home healthcare services. Employment growth at hospitals will be slower. Opportunities will be best for nurses with advanced education and training. The median annual wage for RNs and advanced practice nurses was $65,950 in 2011.
Beginning a Career in Nursing

Nurses work to promote health, prevent disease, and provide the best possible care for patients. If you are a caring, responsible, and detail-oriented person who is capable of directing or supervising others, a career in nursing may be right for you. Learn more about becoming a nurse today to establish a lasting career in this challenging and rewarding field.
<urn:uuid:0edfac30-081c-4804-8403-74098b2a65d2>
{ "dump": "CC-MAIN-2017-09", "url": "http://www.collegequest.com/how-to-become-an-rn.aspx", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172902.42/warc/CC-MAIN-20170219104612-00642-ip-10-171-10-108.ec2.internal.warc.gz", "language": "en", "language_score": 0.9429280161857605, "token_count": 832, "score": 3.3125, "int_score": 3 }