Parents are a child’s first and most important teachers. Interacting and reading with a child is crucial to the development of many skills used throughout life. Children are amazing. They are born with the innate capacity to learn from everything they feel, see and hear around them. Much like a tiny seed has the ability to become a beautiful plant if given the right conditions, children have unlimited potential if a solid foundation of literacy is laid – the earlier, the better. For children to become successful in today’s society, they must be literate – able to read, comprehend and write. Children are ready to learn from the moment they are born, and the first five years of life are the most crucial for establishing a foundation for future literacy. To achieve and develop fundamental language skills, children need interactions with adults, including being talked to and read to. Children naturally crave this contact. It is also during this time that children become familiar with the connection between the spoken and written word. Family literacy activities are a primary part of the process, which optimally begins long before a child enters the formal educational system. Until children are old enough to enter the school system, usually around age five, their parents are the first and most influential teachers in their lives. Children model themselves after the examples of what they see around them, so if their parents are readers who enjoy books and value education, the child will be a reader and have a more positive attitude toward school as well. Studies indicate that the literacy level of parents, especially that of the mother, directly influences a child’s literacy level and eventual educational attainment. ‘Three decades of research have shown that parental participation improves student learning. This is true whether the child is in preschool or the upper grades, whether the family is rich or poor, whether the parents finished high school.’ – Strong Families, Strong Schools, US Department of Education, 1994. It is not difficult to encourage children to become readers. The natural curiosity of children will lead them in the right direction with the addition of a good parental example and reading materials that are readily available. By spending just a few minutes a day engaged in literacy-based activities, parents can instill a love of books and learning as well as create a time for bonding with their children. It doesn’t take specialized training or education for parents to begin the process of raising a reader. Storytelling is a great way to interest younger children in reading and is one of the simplest activities to start with. Many believe that it takes a special talent to tell a story, but by breaking the process down into its basic parts, storytelling can be done without expensive learning toys or, for that matter, any equipment at all. It only takes a bit of imagination and a dash of silliness, and it can be done anywhere and anytime a parent has a few moments to share. Telling and sharing stories with children will certainly entertain them, but it can also teach them traditions and values, facilitate healing, and give a sense of caring and security. There are three basic parts to telling a story:
- Who is the main character? The hero or heroine can be a person or even an inanimate object.
- What is the situation or conflict? This can be funny or serious.
- How will the story end or the situation be resolved?
There are a couple of other types of stories that are simple to use as early literacy activities, building upon the three basic story parts. Family stories, usually about family gatherings, odd relatives and family foibles, can fascinate children, especially when they describe funny moments from when the children themselves were little or when their parents were children. For example, stories can be told about the crazy things grandpa did to decorate the house for the holidays. This is a great way to relate family history to children. Moral stories or parables are great ways to communicate family values, traditions and virtues while opening a dialogue with the child about caring, sharing, celebrating differences and a host of other values. Some important points to remember about early literacy activities:
- Read to or with your child at least ten minutes per day. Reading aloud develops listening skills and provides bonding time for parent and child.
- Encourage curiosity. Discuss what you read with your child. Help your child understand the story. Ask questions about the story and answer your child’s questions.
- Give your child your undivided attention during this time. Turn off the TV, radio and other distractions. Let the answering machine answer the telephone.
- If your child gets fussy, limit the reading time or pick another book. The idea is to keep the experience a pleasant one.
- Reinforce the value of your child and of reading. Let your child see you reading and writing. Give your child books as gifts. Reward reading – give stickers and other incentives, and commend your child for reading.
- Visit the library frequently with your family. Make an event out of it. Get your child his or her own library card. It’s free!
According to literacy experts, children exposed to a rich variety of literacy experiences at home are more likely to enter school ready to learn, read and write. Parents who have books in the home and read to their children raise children who are good readers and better students. Well-developed reading and writing skills are critical to a child’s future success in school and in life because they are the foundation upon which all other skills depend. Parents do not need years of schooling or specialized knowledge to contribute to the successful development of their child through simple literacy activities. All parents have something to contribute to the process and can make a positive difference in their child’s academic readiness and future success by instilling a love of reading and learning.
<urn:uuid:9a21ebd8-d280-4469-afd4-6cd8a3e57468>
{ "dump": "CC-MAIN-2020-16", "url": "http://www.no2six6.com/foundation/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371805747.72/warc/CC-MAIN-20200407183818-20200407214318-00189.warc.gz", "language": "en", "language_score": 0.9609991312026978, "token_count": 1164, "score": 3.984375, "int_score": 4 }
The Ironic Rise in Vaccine Hesitancy
The science is irrefutable: vaccines work. Hundreds of studies over many decades show overwhelming evidence that vaccines are safe, effective and have helped to improve the overall health of entire populations, prevent unnecessary illness and save millions of lives. Not to mention the untold billions in health care cost savings that go along with preventing devastating, debilitating disease. My fellow physicians and other medical experts agree, and most of the general population understands, that vaccines are safe and effective. But ironically, the very success of vaccination is now contributing to a rise in vaccine hesitancy – a behavior influenced by lack of trust in the medical community or concerns about vaccine safety, efficacy, necessity or convenience. And hesitancy has contributed to undervaccination through parental decisions to delay or refuse vaccines for their children, putting more and more children and communities at risk. While vaccine-hesitant parents make up a small group, it appears to be growing as anti-vaxxers – those who vocally question and refuse vaccines – are trying to gain traction again in a political climate that is allowing them to amplify their voices. That’s why we at PolicyLab published a new Evidence to Action brief to explore the true causes of vaccine hesitancy and propose ways to help parents understand the real risks to their children’s health. We all want to do what is best for our kids. We want them to be safe, happy and healthy. Unfortunately, there is a lot of misinformation out there about how to do that, and it can be really difficult to separate truly evidence-based facts from unfounded opinions. Once you hear a claim – from anyone or anywhere – that something might be dangerous for your child, it’s incredibly hard to unhear it. We’ve even heard rehashed concerns about debunked (but prevalent) myths regarding vaccine safety from the highest levels of our government as recently as just a few weeks ago – messages that further fuel fear and anxiety as they reach well-meaning parents across the country. In general, we’re fortunate to live in a time when most of us have never seen a case of polio, or smallpox, or even measles or rubella because strong vaccination programs have all but eliminated them from our country and much of the world. However, not witnessing these diseases firsthand has desensitized us to their severity, and collective memory of their crippling effects has faded. As a result, in recent years we’ve seen a rise in the U.S. of vaccine-preventable diseases like measles and pertussis as more parents intentionally choose to delay or refuse vaccines for their kids. These types of refusals tend to happen in clusters – in school districts or geographic regions where many parents share their views with one another. And these areas, where vaccination rates drop below levels that protect populations from the spread of disease, are where we see spikes in disease outbreaks. That’s why it’s on us – as health care providers, as policymakers, as educators, as fellow parents – to do what we can, compassionately and without judgment, to help more parents understand that any perceived risk they might have about vaccinating their kids is far outweighed by the risk of not doing everything we can to protect them.
Please take a look at our new brief, “Addressing Vaccine Hesitancy to Protect Children and Communities Against Preventable Diseases,” which proposes policy changes and actions we can take at all levels to improve health care providers’ ability to make strong vaccine recommendations, strengthen immunization requirements for school enrollment and improve the public’s trust in vaccines. Combining and scaling up these efforts can encourage greater vaccine acceptance, increase vaccination rates and work with parents to protect the current and future health of our children.
<urn:uuid:fe5d2ecf-4912-4b2e-a95c-9641309bfb56>
{ "dump": "CC-MAIN-2018-26", "url": "https://policylab.chop.edu/blog/ironic-rise-vaccine-hesitancy", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860041.64/warc/CC-MAIN-20180618031628-20180618051628-00356.warc.gz", "language": "en", "language_score": 0.9579682350158691, "token_count": 780, "score": 2.921875, "int_score": 3 }
The humanities program at MTS gives students an opportunity to discover the cultural richness of our world while promoting values of personal and social responsibility. By celebrating differences and recognizing similarities, students learn to look beyond themselves and value the contributions and perspectives of others. In the younger grades, we focus on the individual, familial, and local community, and gradually expand to the study of California, America, world history and geography. As students mature, the curriculum evolves into a broader view of global culture and more nuanced historical perspective. Critical thinking and study skills are emphasized to better understand the past, present, and future. Across all grade levels, we use a variety of methods to introduce, explore, enhance, and reinforce content. These include discussion, role-play, reading source material and related literature, debate, lecture, art and music activities, and field trips. Through individual and small-group work, students are encouraged to take initiative and responsibility for their own learning. Scroll down to read about our humanities program by grade, and explore project highlights from Lower School and Middle School. In Lower School, Humanities is built around sharing your story and learning from the stories of others. MTS utilizes Teachers College Reading and Writing Workshop, a research-based curriculum that focuses on exploration and expression in an authentic environment. We “aim to prepare students for any reading and writing task they will face and to turn kids into life-long, confident readers and writers who display agency and independence.” Mini lessons pull from mentor texts that support skill development and understanding of author’s craft. Then, students are given ample time to engage as readers and writers, practicing newly learned skills and techniques. During the workshop, teachers work with small groups or individual students, tailoring their support and feedback to enhance each student’s work. Readers learn to consider their purpose for choosing books, filling their reading material with a combination of “just right texts” and those that align with interests, content focus areas, and more. Students learn that "readers are thinkers" and learn a variety of “Stop & Jot” and notebooking techniques to keep track of their ideas, wonderings, important details, and more. As writers, students in Lower School are building a solid foundation to engage in their written communications. Students learn that each phase of the writing process (thinking & planning, drafting, revising, editing, publishing) are vital and important. Teachers confer with students throughout the writing process to establish purpose and goals for each phase. In thinking and planning, teachers and mentor texts help students identify the stories they want to tell. Then, students spend time drafting voraciously, focusing on getting their ideas down on paper and letting ideas flow freely. Next, students begin the revising and editing process. Within these phases, teachers shift their mini-lessons to support elaboration techniques, vocabulary and word choice, organization and structure. They learn that spelling, grammar, and handwriting are components of their work that add strength to their communication. Throughout both readers and writers workshop, students also engage in mid-lesson shares, either to identify and share aspects of their work they are proud of, or to seek support and ideas. 
These opportunities enhance the cycle of learning from student to student as they inspire and learn from each other, facilitated through the expert guidance of our teachers. Finally, students celebrate their work as writers with a published piece. Social Studies, Science, Math, and SEL are often integrated into the reading and writing work. Students are given opportunities to engage independently and with peers. Through the use of technology integration and PBL, students connect lessons and books across disciplines, in authentic and creative ways. Kindergarten and 1st grade students begin developing their identities as readers and writers, discovering the joy, connection, and wonder provided through experiences with a wide variety of literature. Students learn and utilize decoding strategies as a toolbox of “reading super powers” and respond to text through text-to-self and text-to-text connections. In Kindergarten and Grade 1, students have ample opportunities to read words, read pictures, listen to reading, and retell stories. They engage both independently and with peers. Teachers read with students in a combination of whole group, small group, and with individuals. Opportunities to play with words and language through games, and bring stories and ideas to life through readers theater are beloved components you might witness in these classrooms. Students also use art and imagination to connect to stories. As writers, students begin learning the power of telling a story and taking their rich vocabulary from spoken word to print. They learn to write descriptively to share about topics close to their hearts. As the year continues, Kindergarten and Grade 1 students explore a variety of genres and learn specific techniques from Mentor Texts authors use to share within these genres. To support Readers and Writers workshop, Kindergarten and Grade 1 students utilize the Fundations Phonics program. At the end of these two years, the excitement is palpable as eager, confident students celebrate learning to read and expanding on these important foundational skills and strategies developed in their beginning Lower School years. Continuing Readers & Writers workshop in 2nd and 3rd grade, students move from the “learning to read” phase to “reading to learn.” Mini lessons and focus activities begin to expand student’s use of comprehension strategies. 2nd and 3rd grade students begin making deeper connections between texts, themselves and the world around them. They begin to “Stop & Jot” about the details within texts that support their ideas or statements. Students expand on their inferring and visualization strategies as the complexities in their books also expand. They build off their knowledge and skills acquired in K and 1, confidently applying the tools they’ve learned to tackle new complexities in text and written craft. 2nd and 3rd grade readers also begin to engage in book clubs, reading series of books, and apply skills and strategies towards projects they are passionate about, often stretching themselves beyond what would typically be considered grade-level tasks. 2nd and 3rd grade readers and writers also become more keenly aware of the connection between being an accomplished writer and a voracious reader. They begin reading for enjoyment and studying and applying mentor texts to enhance their own work. 
Having explored a variety of genres in Kindergarten and Grade 1, 2nd and 3rd grade students demonstrate ownership of using a variety of genres to share their ideas and thinking - from writing songs about concepts in social studies to researching, writing, and directing a play about science content - and begin internalizing their identities as powerful storytellers with an array of ways to share with others. As writers, students begin to understand and use the Writing Process more deeply, focusing on expanding vocabulary and sensory details within their writing to bring stories to life. Students in Grades 4 & 5 focus on “readers as researchers.” They are able to apply the skills and strategies they have learned in previous years to read for information and learn to synthesize across a variety of complex texts. Students read similar topics from multiple perspectives and genres, and practice using textual evidence to support their claims and analysis. In Grades 4 & 5, students read to learn about others, our country, our history, and our world. They connect threads from past to present and use their written work not only to communicate their understanding but also to inspire action and change. Social Studies and current events are woven into students’ reading and writing experiences, with an emphasis on promoting and empowering students to utilize their skills to communicate their ideas in authentic, inspiring, and impactful ways as seen through the eyes and hearts of these double-digit aged students. Texts in Grades 4 & 5 support deeper conversations and questions, enabling students to better understand themselves and others. Students are challenged to reflect on their own experiences and those of others, and to seek justice. In Grades 4 & 5, the questions of “Whose story is being told?” “Whose story is being left out?” and “Is there justice and equity?” are directly tied to daily considerations when reading about history and current events - whether digging into California History to accurately share from the perspectives of indigenous Californians, or learning about the lives of influential women in history to share stories often left untold. Grades 4 & 5 students are empowered by their reading and writing skills to share their voices and ideas to promote action and change. Additionally, a deeper dive into grammar and vocabulary enhances the Grades 4 & 5 Readers and Writers workshop. The power of the LS Readers & Writers workshop from K-5 is evidenced in the wide variety of reading and writing discussions, projects, and student work and action. As students leave Lower School, the experiences they have accumulated throughout Humanities support confidence in their ability to access and analyze information, think critically, and communicate effectively and creatively to share their voices and stories with others and for others. Our Middle School Language Arts program instills a strong academic foundation through our English, Geography, Humanities, and Social Studies curriculums. In these courses, our students learn to think critically, effectively communicate their ideas, and develop an independent voice. We strive to create lifelong learners through inquiry-based instruction that is student-centered and comprises collaborative and individual tasks. Students learn to take intellectual risks, approach problems thoughtfully, and develop an active appreciation for all people's experiences.
Middle School English celebrates the art of language: how to process and interpret the communications we receive as well as how to package our own messages with clarity, persuasion, and beauty. Critical thinking is another primary focus: how to think through texts and big ideas synthetically, how to engage with other perspectives openly, and how to read the word, and the world, through various critical lenses. To achieve these goals, we read a variety of texts from diverse voices, traditions, and genres, teaching students how to unpack literature through active reading strategies and how to discuss interpretations through inquisitive Socratic dialogue. We also continue to honor and cultivate the childhood love of reading through a robust independent reading program. Advances in vocabulary are encouraged through the computer adaptive platform Membean. Writing instruction prioritizes analytical, argumentative composition through the writing process. Students also practice writing in informational, descriptive, and narrative modes. Students utilize personalized, regular feedback and conferencing to develop their writing confidence, precision, and voice. Additionally, grammar instruction helps students elevate the structural complexity of their sentences through sentence deconstruction and combination exercises. 21st Century technology skills receive attention in Middle School English through regular projects involving slideshow design, public speaking, video and audio editing, and various apps and online platforms. Digital literacy -- assessing the trustworthiness of sources on the internet -- receives special focus during research projects. In sixth grade Humanities, we want our students to think critically, ask questions, and learn how to make their voice heard. To that end, our students focus on reading and writing to form and inform ideas, develop and practice critical thinking skills, learn how to present, collaborate, develop effective research skills, and to recognize bias. We teach concrete skills through a mixture of small and large group discussion-based, mini-lessons focused on reading and interpreting. The goal of these group activities is for students to internalize new skills and then use them later in a new context. The curriculum includes hands-on, project based learning where students are actively engaged in analyzing information and developing problem-solving strategies. Throughout the curriculum we strive to create a space where students feel comfortable talking about topics and ideas that may be uncomfortable. We want students to become comfortable making mistakes, asking questions, engaging in debate, and being themselves. We aim to provide an environment where students are able to form relationships with both their peers and teachers. Reading: The class reads texts written by a diverse group of authors (racial, socioeconomic, & family structure) to provide windows and mirrors and to learn about people who are different from them, with a focus on understanding and celebrating differences. Writing: Students write a thesis driven, analytical essay with a focus on developing writing processes, brainstorming, drafting, revising, and continuing the drafting loop. We place specific emphasis on the process and peer editing. In our geography coursework, we focus on two areas: Skill and Knowledge units. Skill units: This includes maps, globes, direction, legend reading, latitude and longitude. Knowledge Units: Here we focus on continents - major things on the map. 
We talk in broad terms about demographics and culture. Geography classes are hands-on! We work with globes and maps, do a lot of drawing, and collaborate whenever possible. Our students complete one big project per trimester: in the first trimester, a pamphlet and oral presentation on a South American country; in the second, a group project focused on debunking an African stereotype; and in the third, a group presentation on an Asian country with a choice of medium. Highlights of the curriculum include the National Geography Bee and a group project which utilizes the blacktop outside our classroom to draw a chalk map. Sixth grade students learn how to learn – it’s not about memorization – it’s about teaching students how to form ideas with an authentic interest. We want our students to enjoy the story of the world and its diversity, to learn about new cultures worldwide – examining both our cultural differences and our similarities. We read proverbs and stories across cultures and explore the universal truths and wisdom woven through them all. We examine driving questions that promote discussion on how the past influences where we find ourselves today. We teach our students that looking at history through a critical lens may engender a shift in our own views – and that is okay. Highlight projects in past years for sixth graders have included the “Big Dig” archaeology project and the “Greek Festival.” Under the theme of “Identity,” 7th Grade English explores issues of culture and virtue – the external and internal forces that shape who we are. It is also an important year for solidifying writing skills. A writer’s workshop model takes students step-by-step through the crafting of sentences, paragraphs, and essays. To aid their persuasive compositions, students study rhetorical strategies, leading to more convincing and nuanced arguments. In descriptive writing, students refine their "show-don’t-tell" fundamentals, using sensory details and figurative language to make their writing more evocative. Grammar studies focus on shoring up sentence writing fundamentals: writing complete sentences, compound sentences, and complex sentences. Finally, students continue to improve their public speaking skills with several multimedia presentations and regular class discussion. Texts: A Good Kind of Trouble, March: Book 1, A Midsummer Night's Dream, American Born Chinese, Underground America, The House on Mango Street, and selected short stories, non-fiction, poetry, and journalism. In our Middle School social studies curriculum, students learn about ancient history (different parts of the world, historically and culturally, including arts and religion) and American history and government. In seventh grade, students gain an understanding of different cultures and how they impact our country. 8th grade English prepares students for success in high school reading, writing, speaking, and critical thinking. Building on the 7th grade focus on Identity, 8th grade literature is themed around “The Self in Society.” Texts such as Persepolis and Julius Caesar delve into the social contract, government, and civics, leading students to engage in frequent small and large group discussions exploring the tension of individual liberty within societal constraints. Literary analysis skills are also refined through regular analytical prompts, for example reading into the postcolonial symbolism at work in Orwell’s Shooting an Elephant.
In their grammatical studies, students focus on achieving sentence variety, using modifying clauses and phrases with proper comma usage in order to construct more sophisticated and stylistic sentences. Students also practice group work in several multimedia presentations, such as an Animal Farm propaganda ad campaign project. Texts: All American Boys, Animal Farm, Persepolis, Julius Caesar, Part-Time Indian, A Christmas Carol, and selected short stories, non-fiction, poetry, and journalism. In our Middle School social studies curriculum, students learn about ancient history (different parts of the world, historically and culturally, including arts and religion) and American history and government. Humanities Project Highlights Every year students engage in project-based learning and field trips connected to their interests and that year’s school-wide theme, providing students opportunities to infuse the curriculum with their own voice and choice, as well as demonstrate learning in ways that are creative, connected, and meaningful to them. In the past, experiences have included a third grade Water Cycle Play, Green Screen projects, a 4th grade overnight on one of California’s historic ships, a 5th grade science and nature trip to the Marin Headlands, 6th grade's "Big Dig" archaeology project and "Greek Festival," 7th grade's “Race to Damascus,” and the great “Geography Bee” contest for all sixth through eighth grade students. During the second grade vocabulary project-based learning unit, Whitney O’Keefe and Anastassia Radeva joined forces and asked students the driving question: “How can you use everyday materials to bring words to life in an extraordinary way?” As part of the first grade social studies unit on communities and as a way to explore what it is like to be a member of a town/city community, first graders are given the opportunity to open their own “stores.” As part of the fourth grade humanities Native American cultures unit, Megan Kukendall and Rachael Olmanson asked fourth grade students the driving question: “What would a modern museum of California and Native American history look like and why?” Animal Farm introduces 8th grade students to the concept of propaganda: how the farm’s rulers, the pigs, use misleading and manipulative information to support their rule. After reading and discussing the novel, students explore the difference between propaganda and advertising. In this time-honored 6th grade social studies ancient world history project, students develop an understanding of different cultures and the way that humans construct history from incomplete data by creating and deciphering their own invented civilizations.
<urn:uuid:1d3c94b1-7a31-4319-9a67-ec68f723abe5>
{ "dump": "CC-MAIN-2021-04", "url": "https://www.mttam.org/program/humanities", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703587074.70/warc/CC-MAIN-20210125154534-20210125184534-00657.warc.gz", "language": "en", "language_score": 0.9475317597389221, "token_count": 3820, "score": 3.046875, "int_score": 3 }
FIGURE 12.-The Lunar Rover. Both astronauts sit in seats with safety belts. About 7 minutes are required to fully deploy the Rover. The capacity of the Rover is about 1000 pounds. The vehicle travels about 10 miles per hour on level ground. The steps necessary to remove it from the LM and to ready it for use are shown in Figure 15.
<urn:uuid:e332597b-d786-46d0-83bb-6ebfeea5b641>
{ "dump": "CC-MAIN-2013-20", "url": "http://history.nasa.gov/EP-95/p15.htm", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699138006/warc/CC-MAIN-20130516101218-00035-ip-10-60-113-184.ec2.internal.warc.gz", "language": "en", "language_score": 0.8472168445587158, "token_count": 95, "score": 2.8125, "int_score": 3 }
Wednesday, 7 April 2010
Bowel cancer is the second highest cancer killer in Ireland. This is the second year that the Irish Cancer Society has launched April as Bowel Cancer Awareness Month because most of us don't know the symptoms, or are too embarrassed to go to the doctor.
Dr. John Ball
Prof O'Moran - Gastroenterologist - Irish Cancer Society
About Bowel Cancer (also known as colorectal cancer)
Cancer of the bowel is when cells in the bowel change and affect how the bowel works normally. The main symptoms of bowel cancer are a change in your normal bowel motion, blood in your stools, pain or discomfort in your tummy, and weight loss. Bowel cancer can be diagnosed by tests such as a rectal exam, colonoscopy, barium enema, or CT colonography. The main treatment for bowel cancer is surgery. Chemotherapy, radiotherapy or biological therapy may be used as well.
Dr. John Ball
What is bowel cancer?
Bowel cancer happens when cells in the bowel change and start to grow quickly. They can form a tumour. A malignant tumour is also known as cancer. If a malignant tumour is not treated, it will affect how the bowel works. Most bowel cancers occur in the large bowel. Bowel cancer is also known as colorectal cancer or cancer of the colon and rectum.
How common is bowel cancer?
Bowel cancer can occur in men and in women. In Ireland it is the second most common cancer. In 2005, there were 2184 people diagnosed with it. It is also the second most common cause of cancer death in Ireland.
Why is it the second highest cause of death by cancer?
Over half of patients diagnosed will have caught it in the later stages of the disease.
1. Early detection is key. However, a recent survey has shown that 36% of people don't know the symptoms of bowel cancer (Irish Cancer Society) and 25% don't know the risk factors.
2. People can be embarrassed discussing bowel issues and movements with the doctor, which puts them off being tested.
Are these not conditions that you would see regularly?
These symptoms can also be due to complaints other than bowel cancer. But do get them checked out by your doctor, especially if they go on for more than 4 to 6 weeks.
Bowel Cancer Risk Factors
- Having a family history of bowel cancer
- Having a family history of polyps (abnormal growths of tissue in the lining of the bowel)
- Having a diet which is high in fat and low in fruit, vegetables and fibre
- Lack of physical activity
How is bowel cancer diagnosed?
First visit your family doctor (GP) if you are worried about any symptoms. If your doctor has concerns about you, he/she will refer you to a hospital. There you will see a specialist who may arrange more tests. You may need some of the following tests:
- Rectal exam
- Stool sample to check for hidden blood
- Special tests to look inside your bowel
Source: Irish Cancer Society
Irish Cancer Society Helpline 1 800 200 700
<urn:uuid:5a72a98f-e6d6-445a-a367-7c39e94496d1>
{ "dump": "CC-MAIN-2016-22", "url": "http://www.rte.ie/tv/theafternoonshow/2010/0407/bowelcancer898.html", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275328.0/warc/CC-MAIN-20160524002115-00188-ip-10-185-217-139.ec2.internal.warc.gz", "language": "en", "language_score": 0.955996572971344, "token_count": 659, "score": 2.875, "int_score": 3 }
When he appeared in rear of Savannah and, capturing Fort McAllister by a coup de main, communicated with the naval squadron, the transports were sent round to the mouths of the Ogeechee and Savannah Rivers, and light-draft steamers, fitted for river and bay service, which had been dispatched upon the first news of his approach, arrived in time to transfer to the landings the clothing, camp and garrison equipage, quartermaster's stores, and forage and provision which had been of necessity sent in seagoing vessels, both sail and steam, and which were of too heavy draft to enter the Ogeechee or pass through the opening first made in the artificial obstructions of the Savannah. The army was quickly reclothed, reshod, and refitted; its wagons filled with rations and forage. A large portion of the army was transferred by steamers from the Savannah to Beaufort, S. C., or Port Royal Harbor, at which place the vessels of heavy draft could land their stores without the labor of transshipment. After a short and much-needed rest, the army, re-equipped, left the coast, and the transports and fleet of light-draft steamers repaired to the harbor of Morehead City, where they awaited the arrival of the troops, who, after a march of 500 miles through a hostile country, without communication with their base of supplies, depending solely upon the stores in their wagons and the resources of the enemy's country for their subsistence, were certain to arrive in a condition to require an entire renewal of their clothing and shoes and a new supply of provisions. When I parted with General Sherman at Savannah on the 19th of January he told me to look out for him at Kinston, and also to be prepared for him lower down the coast should the rebel Army of Virginia, abandoning Richmond, unite with the troops in the Carolinas and succeed in preventing his passage of the Santee. During the month of December, also, an expedition was embarked at City Point and Fortress Monroe, which made an unsuccessful attempt, in co-operation with the navy, upon Fort Fisher, at the mouth of Cape Fear River. The troops, failing to attack, were re-embarked and returned to Hampton Roads. The transportation by sea, the landing and return, were successfully performed. In January the expedition was re-embarked with a large force and successfully landed above Fort Fisher, which place, with the aid of a naval bombardment unexampled in severity, they carried by assault. The troops of the Twenty-third Army Corps, under General Schofield, having borne their part in the campaign in Georgia and Tennessee, after the battle of Nashville, which took place on the 15th and 16th of December, and the termination of the pursuit of the rebel army on the Tennessee, were moved by rail and river to Washington and Baltimore, where, amid many difficulties from the severity of the season, ice entirely suspending for a time the navigation of the Potomac, they were embarked on ocean steamers and dispatched to the Cape Fear River and to Beaufort, N. C., to move, in co-operation with the victors of Fort Fisher, upon Wilmington and Kinston, N. C. In anticipation of the arrival of General Sherman's army, I had ordered to Savannah a portion of the Military Railroad Construction Corps. Two divisions of the corps, as organized, with tools and materials and officers, were brought from Nashville to Baltimore by railroad. At Baltimore they were re-enforced and embarked on ocean steamers and were promptly at the rendezvous.
<urn:uuid:723e14b2-356b-4ef7-b914-4627e4529bf5>
{ "dump": "CC-MAIN-2020-40", "url": "https://ehistory.osu.edu/books/official-records/126/0226", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401641638.83/warc/CC-MAIN-20200929091913-20200929121913-00197.warc.gz", "language": "en", "language_score": 0.9819803833961487, "token_count": 741, "score": 2.796875, "int_score": 3 }
Now let us cover virtual memory, or VM. With Mac OS 9 and its predecessors, virtual memory unfortunately meant slower execution, terrible results with certain multimedia applications, and tons of other system burps and glitches. With Mac OS X, VM is much more efficiently managed, enabling the OS to distribute automatically the exact amount of required memory to applications. In earlier versions of the Mac OS you could control the amount of VM used, but the system still always suffered some lags. With Mac OS X, VM is automatic and cannot be controlled or disabled. In Mac OS X, VM is referred to as Advanced Virtual Memory, or Persistent Virtual Memory, but no matter what the name, VM is Mac OS X's great enabler. We have both protected memory and pre-emptive multitasking thanks in large part to the UNIX foundations for Mac OS X. The true performance of schedulers and kernels in a UNIX environment has been well studied and documented since the birth of the OS.
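The paragraph above describes behaviour rather than a programming interface, so the following is purely an illustration of the underlying idea, not Apple or CertiGuide code. A minimal Python sketch, assuming any Unix-like system (including Mac OS X), shows the demand-paging behaviour that makes automatic VM cheap: reserving a large block of address space costs almost nothing, and physical memory is only committed as the pages are actually touched.

```python
import mmap
import resource

def max_rss():
    # ru_maxrss is reported in bytes on macOS and in kilobytes on Linux;
    # only the relative growth matters for this demonstration.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

SIZE = 256 * 1024 * 1024  # ask for 256 MB of address space

before = max_rss()
region = mmap.mmap(-1, SIZE)   # anonymous mapping: address space is reserved,
after_reserve = max_rss()      # but almost no physical memory is used yet

# Touch every page; the kernel faults in physical frames on demand.
for offset in range(0, SIZE, mmap.PAGESIZE):
    region[offset] = 1
after_touch = max_rss()

print("max RSS before mapping :", before)
print("max RSS after reserving:", after_reserve)
print("max RSS after touching :", after_touch)
region.close()
```

Typically the resident set barely moves when the mapping is created and only grows once the loop touches the pages, which is the behaviour that lets the OS hand each application exactly the memory it ends up using.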
<urn:uuid:c62a3481-5ed2-4c3f-b899-448c6c39ec43>
{ "dump": "CC-MAIN-2017-30", "url": "http://www.certiguide.com/apfr/cg_apfr_VirtualMemory.htm", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424931.1/warc/CC-MAIN-20170724222306-20170725002306-00239.warc.gz", "language": "en", "language_score": 0.8701662421226501, "token_count": 309, "score": 2.65625, "int_score": 3 }
A new study shows one in four high school students drink soda every day — a sign fewer teens are downing the sugary drinks… That’s less than in the past. In the 1990s and early 2000s, more than three-quarters of teens were having a sugary drink each day, according to earlier research. The CDC reported the figures Thursday, based on a national survey last year of more than 11,000 high school students. They appear in one of the federal agency’s publications, Morbidity and Mortality Weekly Report. Consumption of sugary drinks is considered a big public health problem, and has been linked to the U.S. explosion in childhood obesity. One study of Massachusetts schoolchildren found that for each additional sweet drink per day, the odds of obesity increased 60 percent. As a result, many schools have stopped selling soda or artificial juice to students. Indeed, CDC data suggests that the proportion of teens who drink soda each day dropped from 29 percent in 2009 to 24 percent in 2010, at least partly as a result. “It looks like total consumption is going down,” said Kelly Brownell, director of Yale University’s Rudd Center for Food Policy and Obesity. But the results of the new CDC study are still a bit depressing, said Brownell, who has advocated for higher taxes on sodas. “These beverages are the kinds of things that should be consumed once in a while as treat — not every day,” he said. “That’s a lot of calories.” [continues at AP via Yahoo News]
<urn:uuid:979eb7ed-c8a0-4745-8b7e-e73d23db668a>
{ "dump": "CC-MAIN-2015-35", "url": "http://disinfo.com/2011/06/one-quarter-of-american-teens-drink-soda-every-day/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644062327.4/warc/CC-MAIN-20150827025422-00083-ip-10-171-96-226.ec2.internal.warc.gz", "language": "en", "language_score": 0.9597888588905334, "token_count": 336, "score": 2.578125, "int_score": 3 }
This infographic by Bill Marsh in the New York Times did the rounds a while ago. 1980 saw the election of Ronald Reagan as US president. On taking office at the start of 1981 he ushered in a package of right-wing economic policies that soon picked up the nickname “Reaganomics”. How did that work out for workers and for inequality? Bill Marsh/The New York Times Sources: Robert B. Reich, University of California, Berkeley; “The State of Working America” by the Economic Policy Institute; Thomas Piketty, Paris School of Economics, and Emmanuel Saez, University of California, Berkeley; Census Bureau; Bureau of Labor Statistics; Federal Reserve
<urn:uuid:02ee4ce4-e737-4dba-a8e7-83104547dcbb>
{ "dump": "CC-MAIN-2016-07", "url": "http://thestandard.org.nz/the-impact-of-right-wing-economics/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701157443.43/warc/CC-MAIN-20160205193917-00243-ip-10-236-182-209.ec2.internal.warc.gz", "language": "en", "language_score": 0.902923583984375, "token_count": 143, "score": 2.5625, "int_score": 3 }
Sensor Fusion: The future of intelligent devices Combining the data from multiple sensors can tell us a great deal more about the application environment than each sensor could on its own. Sensor fusion is an intriguing idea: if data from more than one sensor can be combined in the right way, the combined data can be more accurate, more reliable or simply provide a better understanding of the context in which the data was gathered. It is perfectly possible to combine data from two or more (or in fact, lots of) sensors to produce extremely rich, context aware data that eliminates the limitations in range or accuracy of the individual sensors. The whole effectively exceeds the sum of the parts, using only standard sensor technologies as all the analysis is done in the software. For today’s remote, small sensor nodes in the internet of things, this can easily be done using the ample computational power of the cloud. At the simplest end of the scale, a classic example of sensor fusion is combining the data from a moisture sensor with a temperature sensor to calculate relative humidity; the amount of water vapour present in the air expressed as a percentage of the amount needed for saturation at the same temperature. Relative humidity is an important parameter in heating, ventilation and air conditioning (HVAC) systems, as well as metrology equipment. Relative humidity is also important in the painting and coating industries, as processes can be very sensitive to the environment’s dew point. For example, ALPS has a humidity and temperature sensor module mounted on a small PCB, the HSHCAL series (right, top), which enables a lot of design freedom as it can be mounted in the optimal location within the system. Meanwhile, TE Connectivity’s HTU21D humidity and temperature sensor module (right, bottom) comes in a reflow solderable DFN package for automated assembly, which measures just 3 x 3 x 0.9mm for compact applications. Both these relative humidity sensor examples operate across the full range of 0 to 100% relative humidity, and provide a digital (I2C) output of the data for direct interface with a microcontroller. Another classic application of sensor fusion is determining the orientation of a system in three-dimensional space. A gyroscope can be used to measure the angular velocities of the system in all three dimensions, then the result from each axis can be mathematically integrated to get a position, but even with today’s technology this data is not always very accurate. Gyroscopes are prone to bias error, which produces a non-zero reading even when the sensor is stationary, and this error varies with temperature and the sensor’s age. An accelerometer could be used to detect linear motion in three dimensions, but these sensors are susceptible to vibration. A three-dimensional magnetometer could be used, which detects the Earth’s magnetic field to give an idea of orientation, but these sensors are not always accurate in the face of interference from devices nearby. The best approach is to combine the data from all three sensors, and ideally a temperature sensor too, to eliminate the inaccuracies of any individual sensor. Complex filtering algorithms (such as the widely used Kalman filtering technique) are used to combine the data. 9-DoF (nine degrees of freedom) sensor systems, which include 3D accelerometers, gyros and magnetometers plus a microcontroller with a filtering algorithm, are available combined into one package that outputs easy to deal with position data. 
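A full Kalman filter of the kind mentioned above is too long for a short example, so the sketch below uses a complementary filter, a much simpler fusion technique built on the same idea: let the gyroscope dominate over short timescales and let the accelerometer's gravity reading correct the long-term drift. This is an illustrative Python sketch with assumed units and sample rate, not code for any particular 9-DoF device.

```python
import math

ALPHA = 0.98   # weight given to the gyro path; the remainder corrects drift using the accelerometer
DT = 0.01      # assumed sample period (100 Hz)

def fuse_pitch(prev_pitch_deg, gyro_rate_dps, accel_g):
    """One complementary-filter step returning a fused pitch estimate in degrees.

    prev_pitch_deg : previous fused estimate
    gyro_rate_dps  : pitch-axis angular rate from the gyroscope, degrees per second
    accel_g        : (ax, ay, az) accelerometer reading in g
    """
    ax, ay, az = accel_g
    # Pitch implied by gravity alone; only reliable when linear acceleration is small.
    accel_pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    # Integrate the gyro over one sample, then blend the two estimates.
    gyro_pitch = prev_pitch_deg + gyro_rate_dps * DT
    return ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch

# Example: a board held still at roughly 10 degrees of pitch,
# with the gyro reporting a small constant bias of 0.3 deg/s.
pitch = 0.0
for _ in range(500):
    pitch = fuse_pitch(pitch, gyro_rate_dps=0.3, accel_g=(0.17, 0.0, 0.985))
print(f"fused pitch estimate: {pitch:.1f} degrees")
```

Because the accelerometer term continually pulls the estimate back towards the gravity reading, the gyro's bias does not accumulate the way it would if the angular rate were simply integrated on its own.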
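Returning to the simpler humidity example at the start of this section: once a combined module of the HTU21D or HSHCAL type has delivered a temperature and relative-humidity pair, the fused values can be converted into a dew point, the figure that coating and HVAC processes actually watch. The sketch below uses the widely quoted Magnus approximation; the sensor readings are made-up example values rather than output from real hardware.

```python
import math

# Magnus-formula constants, a common choice valid roughly from -45 to +60 degrees C over water.
A = 17.62
B = 243.12  # degrees C

def dew_point_c(temp_c, rh_percent):
    """Dew point in degrees C from air temperature (degrees C) and relative humidity (%)."""
    gamma = math.log(rh_percent / 100.0) + (A * temp_c) / (B + temp_c)
    return (B * gamma) / (A - gamma)

# Hypothetical fused readings from a humidity/temperature module.
temperature_c = 23.4
humidity_pct = 48.0

print(f"dew point: {dew_point_c(temperature_c, humidity_pct):.1f} degrees C")
# A paint line, for instance, might pause spraying when the workpiece temperature
# comes within a few degrees of this value to avoid condensation under the coating.
```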
Different combinations of accelerometers, gyros and magnetometers are also available to suit various applications. For example, Murata has a combined single axis gyro and 3D accelerometer solution, the SCC2000 series (right), which comes in x or z axis gyro configurations, allowing six degrees of freedom to be implemented on a single application board. This component is intended for sensor fusion in harsh environment applications, offering high reliability and performance in the most demanding systems. This might include industrial machine control, where robotic arms are used to complete manufacturing tasks such as welding, painting and material handling. These robots can move quickly in any direction so they need to have accurate knowledge of the position of their “hand” (actually, end of arm tooling, or EOAT) in order to complete their tasks. These robots often operate in areas that are deemed too dangerous for humans to work in, so it’s important that the sensors used to gather the data for fusion can withstand extreme temperatures, shock and vibration. Another harsh environment that uses sensor fusion extensively is the world of automotive. In this case, the SCC2000 series may be used for applications such as electronic stability control (ESC) which detects skidding using a number of different sensors. Data is input from sensors including a steering wheel angle sensor, which determines the direction the driver intends to go, a gyroscope to measure yaw rate and possibly roll rate, an accelerometer to measure linear acceleration and a wheel speed sensor which detects changes in speed due to loss of traction. This data is fed into a control algorithm; if the result indicates a skid, an intervention such as braking individual wheels may be implemented for safety. Healthcare and medical electronics is another area where sensor fusion from accelerometers and gyroscopes is enabling exciting new systems. Advances in the miniaturisation and power consumption of MEMS sensor devices and microcontrollers are enabling wearable sensor systems that can be used in a variety of medical environments. For example, body-worn systems which monitor the movement of limbs can be helpful for physiotherapy, to ensure exercises are being done correctly. Wearable activity trackers, already popular in the consumer wellness market, may in the future have their data fused with data from wearable heart rate monitors, temperature sensors, etc.
as part of telehealth services or remote monitoring of patient conditions. Uploading and analysing this data in the cloud means it can be accessed and reviewed by doctors at any time. Intelligent sensor fusion of vital signs data gathered by body-worn sensors can even make it possible for electronic systems to diagnose common diseases without seeing doctors at all. Behind these new sensor fusion applications are many, many innovations in both hardware and software. On the hardware side, MEMS sensors can be integrated together in any number of different combinations, into tiny, power-efficient packages. MEMS have also vastly reduced in price in recent years due to miniaturisation and new automatic calibration techniques, while their limitations in terms of accuracy have been offset with advanced sensor fusion techniques. These innovations are set to bring sensor fusion to more varied applications than ever before. Using other technology advances such as digital signal processing, huge amounts of data can now be fused very quickly to allow a system response to be provided in real time, while wireless internet access provides sensor systems with access to huge computing power in the cloud. The eventual aim is to emulate with electronics the ultimate in sensor fusion hardware – the human body – which uses the brain as a processor to fuse data from the nervous system, visual systems and other sensory inputs to allow people to perform incredibly complex tasks.
<urn:uuid:598d9f54-215a-4c99-aa41-897102918aab>
{ "dump": "CC-MAIN-2019-13", "url": "https://www.avnet.com/wps/portal/abacus/resources/engineers-insight/article/sensor-fusion-the-future-of-intelligent-devices/!ut/p/z0/lY7BasMwEER_xT3kKFYpweSqhNLS4BaCKa4uZa3I9iZCsqW12_x91X5AILcZmHkzoKEB7XGhHpmCR5f9py6_XuS-XKutfHt-qtdS7T7qevcuD0e1gVfQtwOZQOdp0gq0CZ7tD0ODLZo5Ff_ec-GojRivK8nWDJ4MugIjk3E2rWSyPoUoujnlR4IHmyXP0YrQCcp956jPEHGyCxmb_vYeY7WvetAj8pAzXYDmXsp40e31Wz38Au8_o44!/?urile=wcm%3Apath%3A%2Fabacus%20content%20library%2Ftechnical%20articles%2Fsensor-fusion-the-future-of-intelligent-devices", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203123.91/warc/CC-MAIN-20190324002035-20190324024035-00134.warc.gz", "language": "en", "language_score": 0.9227311015129089, "token_count": 1792, "score": 2.984375, "int_score": 3 }
North Brunswick Education Association (Middlesex County) member Danita Guarino understands that space is becoming a rare commodity in many communities and that future generations will need to create new and innovative ways to grow food. Funded through a $5,510 grant from the NJEA Frederick L. Hipp Foundation for Excellence in Education, Guarino's students will learn how to develop the hardiest, fastest growing plants using soil-less technology. This project is designed to help 105-115 middle school students learn about plants, solve realistic problems, and grow high quality food in a limited space while conserving water. Groups of children will be challenged to grow the best produce using some form of hydroponic gardening. Monitoring of the hydroponic farms will reinforce practical uses of chemistry, mathematics, physics, economics, and engineering. Assessment of student progress will include how well they work as a team, solve problems, design, and carry out experiments. With assistance from a mechanical engineer acting as design mentor, and a Rutgers University professor of plant sciences, students will produce a product and provide information on how they developed and cultivated said plant(s). To better understand the economics of running a small plant research business, each team will be provided a fictitious bank account with a given amount of money. Teams will draw upon their funds to establish their businesses and buy materials. They will learn that experimenting on plants does not always run as expected and that they must learn from their mistakes. All materials purchased for this project will be utilized year after year to continue the program. For further information, contact:
<urn:uuid:b8debf77-0be4-4b17-87ae-538d0a47aa30>
{ "dump": "CC-MAIN-2016-44", "url": "http://www.njea.org/about/njea-hipp/hipp-recipients-2002-2003/hydroponic-farming", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988722653.96/warc/CC-MAIN-20161020183842-00268-ip-10-171-6-4.ec2.internal.warc.gz", "language": "en", "language_score": 0.9460760951042175, "token_count": 323, "score": 3.203125, "int_score": 3 }
Looking in the mirror, have you ever wondered how your teeth got so stained? If the answer is yes, then you are not alone! Stains develop gradually and often go overlooked. Teeth stains do not typically indicate a severe condition, but they can be disheartening. If you are facing this issue, there is no need to worry; you can still get your bright smile back. What Is The Normal Color Of Your Teeth? Not all teeth are naturally white in color. They may come in various shades of white, grey, and yellow, affected by different elements such as the way your brain and eye perceive color. What Can Cause Teeth Stains? There are a few different reasons for the discoloration of your teeth: Foods and drinks Extrinsic stains, which appear on the outermost layer of the teeth, are the result of certain foods and beverages. Coffee, tea, soda, chocolate, red sauce, and red wine can stain your teeth. Dark-colored foods and drinks may have chemicals known as chromogens which can cause permanent stains, especially if you have poor dental hygiene. Foods and drinks with artificial colors can also cause considerable tooth stains. Nicotine and tobacco products A higher prevalence of stains has been reported in smokers. Nicotine and tobacco contain elements that stick to the pores in your tooth enamel and cause stains that tend to darken over time. Intrinsic stains happen when the dentin darkens over time. The reason may be ingesting too much fluoride or taking tetracycline in early childhood, suffering from dentinogenesis imperfecta, internal bleeding inside the tooth, or traumas. Other medications associated with stains are glibenclamide and chlorhexidine. Natural Staining Natural staining typically occurs due to aging. Dentin naturally gets discolored, and enamel wears down with age, allowing stains to appear. Tooth injuries play an important part in these cases. Genetics is another important factor that can lead to dental staining. Your natural tooth color is different from that of other people, and the strength of tooth enamel, the enamel reactions to pigments, hereditary conditions, and developmental disorders can all vary from person to person. Previous dental procedures Your dental history can sometimes provide clues about the discoloration of your teeth. Your dental fillings, crowns, and bridges all lose their color over time and can lead to stains. Plaque and tartar bacteria produce acids and weaken enamel, making your tooth’s yellowish layers more apparent. Too much fluoride can cause stains that appear as white or greyish lines across your teeth. In severe cases, it may cause dark brown spots as well. Yellow Stains on Teeth You may notice yellow stains on your teeth if you smoke tobacco. Other factors may also lead to yellow discolorations: drinking tea, coffee, or red wine on a regular basis, an improper diet high in sugars, certain drugs, and chronic dry mouth. - Brown Stains On Teeth Brown spots can appear as a result of specific foods and beverages, tartar, tooth decay, and smoking. - White Stains On Teeth There are several possible reasons for white stains on your teeth. A cavity can form a white spot on your tooth which will get darker as the situation advances. Another common cause is fluorosis. Using too much fluoride can cause stains to develop even before the teeth break through the gums. Enamel hypoplasia is another factor, and refers to the situation in which your enamel does not form properly. Other causes of white teeth stains are poor dental hygiene and improper lifestyle. 
- Black Stains On Teeth
Black teeth stains can be caused by underlying cavities or tooth decay. Foods and drinks that leave behind pigment, liquid iron supplements, fillings and crowns containing silver sulfide, and dental cavities can all make your teeth black. However, your teeth will not change from their whitish hue to black overnight, and you will notice other signs before they turn black. For example, your teeth may become sensitive, resulting in pain when eating or drinking. Small dots may also start to appear near the gum line. Black stains on the teeth usually need professional treatment.
How Can You Prevent Discoloration?
Despite all that was said, teeth stains can still be controlled or prevented, at least to some degree.
- Dental care
Caring for your teeth after consuming pigmented foods and drinks can help prevent teeth stains. If you are in a situation where dental hygiene is not possible, at least swish with water. It can help get rid of some of the particles that can stain your teeth.
- Proper oral hygiene
Practicing good dental hygiene could improve the situation immensely:
– Brush your teeth at least three times a day.
– Floss daily.
– Use a water flosser or mouth rinse.
- Healthy lifestyle
You should quit chewing tobacco and cut back on foods and beverages that could potentially stain your teeth.
What Can Help To Get Rid Of Teeth Stains?
Dealing with stained teeth may seem like a hassle, but there are specific procedures that can whiten your teeth and prevent the formation of new stains. Most of the approaches you can take to get rid of teeth stains are safe, and you can even practice some of them at home. Methods of treating this condition generally fall into three categories:
- In-office treatments
Your dentist will typically use a higher concentration of hydrogen peroxide to whiten your teeth. This solution works quickly and offers long-lasting results. If you face a severe condition, restorative treatment can be the best solution, especially for teeth stained due to tooth decay. This form of dentistry may involve white fillings, tooth extractions, and root canal procedures.
- At-home treatments
You can benefit from custom trays and whitening gels at home. You may need to use the trays for a few weeks to accomplish your ideal results.
- Over-the-counter products
Whitening toothpaste and strips can decrease surface stains, but they are not very successful in dealing with intrinsic stains inside your teeth. Some of these products may even lead to tooth sensitivity or gum irritation. Chewing gum and mints can also be helpful in removing stains, giving you fresher breath as a bonus.
If you notice a stain on your teeth that you cannot get rid of, feel free to schedule an appointment with our professional dentists at Ariadental. We can help prevent your dental issues from becoming complicated.
<urn:uuid:8a8366aa-c8ba-4388-871b-a44680b862d3>
{ "dump": "CC-MAIN-2023-14", "url": "https://ariadentalcare.com/blog/get-back-your-smile-from-the-teeth-stains/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943746.73/warc/CC-MAIN-20230321193811-20230321223811-00666.warc.gz", "language": "en", "language_score": 0.9344227313995361, "token_count": 1399, "score": 3.546875, "int_score": 4 }
Marguerite Bourgeoys, the foundress, was born at Troyes, France, 17 April, 1620. She was the third child of Abraham Bourgeois, a merchant, and Guillemette Garnier, his wife. In 1653 Paul Chomedey de Maisonneuve, the founder of Ville Marie (Montreal), visited Troyes and invited her to go to Canada to teach; she set out in June of that year, arrived at Ville Marie, and devoted herself to every form of works of mercy. She opened her first school on 30 April, 1657, but soon had to return to France for recruits, where four companions joined her. A boarding school and an industrial school were opened, and sodalities were founded. In 1670 the foundress went back to France and returned in 1672 with letters from King Louis XIV and also with six new companions. In 1675 she built a chapel dedicated to Notre Dame de Bon Secours. To insure greater freedom of action Mother Bourgeoys founded an uncloistered community, its members bound only by simple vows. They had chosen 2 July as their patronal feast-day. Modelling their lives on that of Our Lady after the Ascension of Our Lord, they aided the pastors in the various parishes where convents of the order had been established, by instructing children. Although the community had received the approbation of the Bishop of Quebec, the foundress became very desirous of having the conditions of non-enclosure and simple vows embodied in a rule. To confer with the bishop, who was then in France, she undertook a third journey to Europe. She returned the next year, and resisted the many attempts made in the next few years to merge the new order into that of the Ursulines, or otherwise to change its original character. In 1683 a mission on Mount Royal was opened for the instruction of Indian girls. This mission, under the auspices of the priests of St. Sulpice, was removed in 1701 to Sault-au-Récollet, and in 1720 to the Lake of Two Mountains. It still exists. The two towers still standing on the grounds of Montreal College were part of a stone fort built to protect the colony from the attacks of its enemies; they were expressly erected for the sisters of that mission: one for their residence, the other for their classes. The sisters continued their labours in the schools of Ville Marie, and also prepared a number of young women as Christian teachers. Houses were opened at Pointe-aux-Trembles, near Montreal, at Lachine, at Champlain, and at Château Richer. In 1685 a mission was established at Sainte Famille on the Island of Orléans and was so successful that Mgr de St. Vallier, Bishop of Quebec, invited the sisters to open houses in that settlement, which was done. In 1689 he desired to confer with Mother Bourgeoys in regard to a project of foundation. Though sixty-nine years of age, she set out at once on the long and perilous journey on foot to Quebec, and had to suffer all the inconveniences of an April thaw. Acceding to the demands of the bishop for the new foundation, she had the double consolation of obedience to her superior, and of keeping her sisters in their true vocation when, only four years later, the bishop himself became convinced that such was necessary. Mother Bourgeoys asked repeatedly to be discharged from the superiorship, but not until 1693 did the bishop accede to her petition.
Eventually, on 24 June, 1698, the rule and constitution of the congregation, based upon those which the foundress had gathered from various sources, were formally accepted by the members. The next day they made their vows. The superior at the time was Mother of the Assumption (Barbier). Mother Bourgeoys devoted the remainder of her life to the preparation of points of advice for the guidance of her sisterhood. She died on 12 January, 1700. On 7 Dec., 1878, she was declared venerable. The proclamation of the heroicity of the virtues of the Venerable Marguerite Bourgeoys was officially made in Rome, 19 June, 1910. In 1701 the community numbered fifty-four members. The nuns were self-supporting and, on this consideration, the number of subjects was not limited by the French Government, as was the case with all the other existing communities. The conflagration which ravaged Montreal in 1768 destroyed the mother-house, which had been erected eighty-five years before. The chapel of Bon Secours, built by Mother Bourgeoys, was destroyed by fire in 1754, and rebuilt by the Seminary of St. Sulpice in 1771. During the latter half of the nineteenth century, missions were established in various parishes of the Provinces of Quebec, Ontario, Nova Scotia, New Brunswick, and Prince Edward Island, and in the United States; many academies and schools were also opened in the city of Montreal. The normal school in Montreal, under the direction of the congregation, begun in 1899, has worthily realized the hopes founded upon it. Of its three hundred and eighteen graduates, authorized to teach in the schools of Quebec, one hundred and eighty-four are actually employed there. The house, built after the fire of 1768, was demolished in 1844 to give place to a larger building. A still more commodious one was erected in 1880. This was burned down in 1893, obliging the community to return to the house on St. Jean-Baptiste Street. A new building was erected on Sherbrooke Street, and here the Sisters have been installed since 1908. The Notre Dame Ladies College was inaugurated in 1908. Today the institute, whose rules have been definitively approved by the Holy See, counts 131 convents in 21 dioceses, 1479 professed sisters, over 200 novices, 36 postulants, and upwards of 35,000 pupils. The school system of the Congregation of Notre Dame de Montreal has always comprised day-schools and boarding-schools. The pioneers of Canada had to clear the forest, cultivate the land, and prepare homes for their families. They were all of an intelligent class of farmers and artisans, who felt that a Christian education was the best legacy they could leave their children; therefore they seized the opportunity afforded them by the nascent Congregation of Notre Dame to place their daughters in boarding-schools. The work inaugurated in Canada led to demands for houses of the congregation in many totally English parishes of the United States. The schools of the Congregation of Notre Dame everywhere give instruction in all fundamental branches. The real advantages developed by the systematic study of psychology and pedagogy have been fully turned to account. The system begins with the kindergarten, and the courses are afterwards graded as elementary, model, commercial, academic, and collegiate. The first college opened was in Nova Scotia at Antigonish, affiliated with the university for young men in the same place; since the early years of its foundation it has annually seen a number of Bachelors of Arts among its graduating students.
In 1909 the Notre Dame Ladies' College, in affiliation with Laval, was inaugurated in Montreal. The fine arts are taught in all the secondary schools and academies, while in the larger and more central houses these branches are carried to greater perfection by competent professors. The teaching from the very elements is in conformity with the best methods of the day.
<urn:uuid:f6f25972-a8a2-401d-abef-980772866145>
{ "dump": "CC-MAIN-2016-22", "url": "http://[email protected]/cathen/11127a.htm", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049270798.25/warc/CC-MAIN-20160524002110-00060-ip-10-185-217-139.ec2.internal.warc.gz", "language": "en", "language_score": 0.97154700756073, "token_count": 1987, "score": 2.828125, "int_score": 3 }
- 1 Physical Description of the Victrola - 2 Material Affordances of the Portable Victrola - 3 Precursors to the Victrola - 4 The Uniqueness of the Victrola - 5 Historical Context of Mass Reproduction - 6 Acoustic Realism and the Marketing of High Fidelity - 7 The Successors to the Portable Victrola Physical Description of the Victrola The portable Victrola is designed as a container for the playback of sound. It is contained in a hard suitcase for the purpose of mobility and the protection from possible damages that would arise from being in transit. There is a handle which is semiotically associated with a brief case, and affords the ability to carry it in motion. The exterior of the portable Victrola cannot be differentiated from any other briefcase. The crank is located on the side of the machine that juts out when in use. The crank can be taken out, and stored in the case. On the corner of the case there is a container for storing needles. There is a place to store the records in order to carry around many at a time. The horn is not visible, and cannot be accessed. Material Affordances of the Portable Victrola The portable Victrola has a partially predetermined volume. There is no tool for the mechanical adjustment of volume within the device itself. However, if one were to close the container slightly, the sound would diminish. Although this is ostensibly an inconvenience of the device, in other competing phonographs, where the user is given control over the volume, the clarity of the recording is simultaneously altered. The fixed high volume level of the portable Victrola suggests that it was meant be used outdoors. In competing formats such as the gramophone, the adjusting spring could be tightened or loosened allowing variation in volume and clarity and the tone would become less clear as the volume grew louder. (Jonathan Sterne, The Audible Past, 215) However, in the Victrola, the user can increase the volume by replacing a sharp needle with a dull one. This creates a louder volume, accompanied by a less articulated tone. Thus it is up to the user to decide how s/he would prefer to hear the recording. In this manner pre-recorded sound could be expressed in a multiplicity of ways, and afforded consumer preference and active involvement in the perception of the recording. In a sense, there was a collaboration between the studio engineers, and the listeners. By eliminating certain aspects of the original recording event, and augmenting others, the user could partially manipulate the way in which the recording was heard. For example, Jonathan Sterne notes that if one were to point the machine towards the wall, the bass sound would increase. The portable model of the Victrola afforded it to be positioned according to the user’s preference, and placed in an infinite variety of acoustic situations. The user of the Victrola had to mechanically wind the machine in order for it to work. The more that the machine was wound, the longer it would play. However, the duration of the recording itself determined the length of the playback, and therefore the length of the sonic expression was mechanized.The portable Victrola allowed the user to hear the record being played back at different intervals of speed. Therefore it was up to the listener to determine how s/he wanted to hear the recording. However, the device symbolically suggests that the middle level would be the most accurate and truthful expression of the original sound recording event. 
Material Substrate and Sonic Representation If one were to listen to the early phonographic recordings, one would notice that they sound somewhat flat or “tinny” and contain “pops and hisses.” This is due to the process and materials used to record the sound. The early recordings on wax surface were unsuccessful in capturing the heavy bass notes, and notes from the top end of the spectrum. “Loud sounds would force the stylus to the edge of the groove and sometimes beyond it, ruining the record.” (Sterne, 200) In the earliest recordings, music was captured by the phonograph’s horn and then channeled to the recording stylus. Sometimes the elements captured the sound accurately, and other times the material used in the construction of the instrument, the room’s size and dimensions, noise from outside, would all get incorporated into the recording. The earliest phonographic recordings were impractical, in that they could only hold two minutes worth of sound. (Steffen, 27) Wax was used as the material substrate for recording, because it allowed for a more defined and sharper recording, because the grooving was closer together than that of tinfoil. However, recordings on wax were significantly softer then tin foil. (ibid, 28) This phenomenon leaves it up to the listener to imagine the music as if it were without the "pops and hisses," as a smooth coherent whole. Precursors to the Victrola The Victrola is situated in a technological history of automated devices that mediated sonic performance, from the the music box, to the piano roll. The mediation comes from machines meant to read inscriptions and codes and thus to mechanize a sonic event. The automation would produce “live music” and could only be repeated using the machines that were present. The Musical Box was one of the first automatic instruments. It was manufactured in Switzerland in 1814 and was the first use of the cylinder in the production of music. There was a winding lever, and a “fan-shaped governor” to regulate the speed. With one winding, the instrument could run by itself for two hours. (Science > Vol. 12, No. 306 (Dec., 1888), pp. 286-288) The Victrola is a remediation of the Musical Box, using much of the same technology in its process of automation. The winding lever is present in the mechanical Victrolas, as well as the ability to regulate speed. The telegraph is another invention that directly preceded the creation of the phonograph. It was invented by Samuel Morse in 1844 and was the first invention to provide instantaneous communication of information through space. The telegraph created the possibility of disembodied communication, where face-to-face interaction was no longer necessary for interaction. It is possible to see the first instantiation of disembodied performance as the telegraph operator could make patterns that were broadcasted elsewhere. Stenography and the Encoding and Decoding of Messages Stenography can be seen as a precursor to the earlier forms of audio/visual representation, as well as a cultural paradigm from which Edison’s initial intention for the phonograph developed. Stenography, also known as “short hand,” was a system of representation used prior to the phonograph, and was replaced by the phonograph. Its function was to capture live testimonials, and transcribe them verbatim. Eventually stenography evolved into phonography, which was a more advanced version of the former. Stenography was taught to students, and used widely by teachers. 
It was taught to students because it was known to utilize intellect as well as mechanical dexterity. (Headline: One of the Marvels of the Nineteenth Century. Sound Recording Itself; Article Type: News/Opinion Paper: Porcupine's Gazette, published as The Pittsfield Sun; Date: 10-31-1861; Volume: LXII; Issue: 3189; Page: ; Location: Philadelphia, Pennsylvania) In combination with still photographs, stenography was used as a method of recording history. (Gitelman, 25) It attempted to capture and edify aspects of oral expression, and evolved to focus on the vocal element as opposed to the spelling of the word. (Gitelman, 25) The logic of capturing oral expression was extended into the subsequent invention of the Phonoautograph. This device was invented by M.L. Scott and purported to visibally fix sound on a tablet. In the representation of sound on a tablet, each sound could be seen as visually distinct. Human voice was able to be distinguished from the sound produced by musical instruments. The acoustic vocabulary of tones and frequencies were employed to show how different sounds were represented visually on the tablet. This is a remediation of stenography as it attempted to capture the ephemeral sonic event, and to represent this event visually in a semiotic system. However, stenography employed writing, and the language used was the alphabet. It relied almost exclusively on the human labor of the stenographer. The Phonoautograph relied on the interface between sound and machinery much like the camera did with visual material substrate. There was an attempt to transcribe the graphic representation of sonic waves back into words, thus mimicking the process of stenography, however, this proved to be impossible, as all words looked visually different. “It is difficult to imagine the importance of the discovery, whether in respect it be in respect to the unimpeachable accuracy of the process, the entire absence or trouble and expense in reporting any articulate sounds, or the great saving of time, and exhausting labors of parliamentary reporters.” (Porcupine’s Gazette)The Phonograph extended the notion of converting oral experience into evidence. (26) The notion of short-hand used in order to capture oral experience is seen in the 18th. Century. “..The fallies of imagination and the falutary advise of wisdom and experience would die with their professors, and be unavailing to posterity.” “By it we can make the copious effusions of animated oratory our own, catch the beautiful or the sublime, from the lips of the speaker we admire.”Headline: On the Art of Common, and That of Stenography or Short Writing; Article Type: News/Opinion Paper: State Gazette of South-Carolina; Date: 12-17-1793; Volume: LV; Issue: 4296; Page: ;) The Phonograph solved the problem of stenography in that it got rid of the anxiety of inaccuracy due to fallibility of the stenographers. Stenography evolved into another form of writing called Phonography, which was intended for teachers. The use of the word Phonography first appeared in 1845. The art of Phonography was “intended to benefit mankind.” (Headline: Pursuit of Knowledge; Article Type: News/Opinion Paper: Sun, published as The Pittsfield Sun.; Date: 07-15-1847; Volume: XLVII; Issue: 2443; Page: ; Location: Pittsfield, Massachusetts) The Early Phonograph In the late 19th century, there were many competing formats for sound recording and reproduction. 
The Victrola took the successful elements from the technology employed by both the early phonograph and the gramophone. The phonograph was invented by Edison in 1877. Edison came up with the idea for the phonograph while thinking of ways to speed up the telegraph. He wanted to speed up the process of the telegraph in order to increase the number of messages and the speed at which they could be sent. In doing so he realized that he had to rid the process of the human component, as people could only work at a finite speed. In order to automate the process, he saw the necessity of recording the coded messages. (Steffen: From Edison to Marconi) The initial phonograph was used in order to permanently record the human voice and other sounds, from which the sounds could then be reproduced and rendered audible at another time. (phonograph patent) Like the photograph, the phonograph engaged in the freezing of a moment of time, displacing it into the future.
The Phonograph vs. the Gramophone
The original patent shows the process for mechanical reproduction of audio: "the record, if it be upon tinfoil, may be stereotyped by means of the 'plaster of paris' process, and from the stereotype multiple copies may be made expeditiously and cheaply by casting or by pressing tin-foil or other material on it." (Patent 200521) According to information from the patents, Edison's phonograph used a stylus attached to a vibratory diaphragm to indent a traveling sheet of tinfoil. The stylus would indent the substrate (tinfoil or other substrates that could be indented), and its imprints were to a depth that depended on the amplitude of the sound waves. Emile Berliner, who invented the gramophone, saw Edison's phonograph as defective and sought to improve upon the method of mechanical reproduction. He argued that this method was defective because of the weak force of the vibrations therein, and saw the weakness as a defect for its lack of volume. Additionally, he believed that the vibrations had to overcome the resistance of the material substrate, which would lead to a modification of the imprints and a less accurate reproduction. He argued that Edison's invention, due to issues with the material substrate, would not be able to pick up the voice of a loud speaker. Berliner overcame this in his gramophone by positioning the stylus parallel to the substrate, so that there would be minimal friction, and the substrate used would have minimal resistance. "The vibrations were then engraved onto metal and could stand an infinite number of reproductions without altering its accuracy." (Patent 372786) At this point, the cylinder was still being used to support the surface that was recorded on. Berliner switched to the flat disc because the surface had to be straightened in the photo-engraving copy process. (Patent 564586) He used the side-to-side cut method as opposed to the vertical cut, where the needle moved up and down in order to reproduce the signal. Edison's original invention of the phonograph was used both for recording and for playback. He employed the cylinder as opposed to the disc found in the Victrolas and the gramophones. In 1889 Edison created a phonograph that separated the instruments used for recording and playback. He claimed to have done this because he feared that the stylus used to record onto the surface would "obliterate" the record if used for playback. He made the reproducing point thinner than the recording point.
As the phonograph was reproduced by tracing a groove in the surface of a wax cylinder, this produced a considerably low volume, but was ostensibly more pleasing to the ear. The gramophone used a rubber record which was reproduced by scratching a tack in the granulated groove. This had a higher volume. Volume was framed as a class issue where higher volume was deemed as more suitable for teenagers. (Sterne, 279) Johnson and Berliner developed a way to mass produce the disk. This was done by stamping discs from a malleable material such as rubber or wax. (Steffen, 48) In the 1880s, Berliner’s gramophone was distinguished from the phonograph in its loud volume. There was a competing aesthetic that was mixed in corporate interests: some people criticized the gramophone for its loudness. The Uniqueness of the Victrola The consolidated Talking Machine Company was the precursor to Victor Talking Machine Company, which later became RCA Victor. The Victor Talking Machine Company began in 1901. There was no standardization of equipment or software and the companies as well as the consumers had to choose between cylinder or disk. In 1901, Eldridge Johnson of the Talking Machine Company created the Victrola. The Victrola is a type of early phonograph that used an internal horn. It was patented by the Victor Talking Machine Company, and only refers to internal horned phonographs. The Victrola is a direct descendant of the gramophone in that it uses the same technology of a flat record disk. (Patent, 781429) These disks revolved at 78 RPM (rotations per minute) and used the lateral side to side cut method. The Victrola produced sound into the environment through a reverse process of its inscription: “The needle would track the groove, the vibration is coupled into the sound-box, which holds the diaphragm. The diaphragm vibrates the air molecules into a hollow “tonearm” and mechanical energy is converted into acoustical energy.” (official Victrola website) The tonearm then routes the sound into the horn, which is hidden in the box. The horn then directs the waves into the listening environment. (official Victrola website) While the record spins, the stylus tracks the groove which contains the acoustical signal. The stylus vibrates and the vibrations hold the frequency and amplitude information of the audio signal. The diaphragm then converts the mechanical energy into acoustical energy, which excites the column of air in the tonearm. The company saw the gramophones as inconvenient due to the large horn that would just out in order to amplify sound. The prevailing attitude was that it had a dominating presence and was unsuitable for the middle class family’s parlors, which the company sought to market to. (Official Victrola Website) In 1905 the company invented the cabinet sound-producing machine. This consisted of an internal horn folded downward into a large floor standing cabinet. The horn opening was placed below the turntable. (Patent, 1159978) The cabinet was used to block the visual of the horn, and as a “crude volume control,” where if closed made the volume louder, and if opened, softer. At first, the unique model of the Victrola was its design as a “cabinet sound-producing machine.” These were expensive, and the company designed tabletop versions in 1909 for the average American home. The Cabinet Producing Machine and Home Entertainment The first Victor-Victrola was advertised for the wealthy as it was costly to produce, and sold at $200. 
The Cabinet Producing machine is described in an advertisement created in 1908 as follows: "The Horn and all moving parts are entirely concealed in a handsome mahogany cabinet, and the music is made loud or soft by opening or closing the small doors." (Advertisement 3--No title, Outlook (1893-1924); Nov 7, 1908; 90, 10; APS online) Advertising suggested the value of the talking machine for home entertainment. The Phonograph replaced the piano as the dominant form of entertainment in the household. Playing the piano was time consuming and an active process. With the advent of the Phonograph, entertainment could now be a passive process, and more efficient, as almost no energy had to be emitted to hear pre-recorded music. It was redefined as a parlor instrument to bring culture, education, and social status into the household. The phonograph suited the middle class household, and was targeted specifically towards women. Thus the first designs of the Victrola’s “cabinet sound reproducing machines,” were designed in order to blend in with household furniture. The phonograph initially was associated with low-brow culture, because it was only seen in nickelodeons and archades. It was viewed as a novelty. Advertisers sought to transform its image into one that would be acceptable for an in-home market. In doing so, the industry had to promote its ability to play “high class music.” Early advertisements mimicked campaigns for home products such as the piano to cleaning supplies, hoping to change the image of the phonograph. The ads promised to provide culture, education, and upward social mobility. (Creating a home culture for the phonograph: Women and the rise of sound recordings in the United States,1877--1913, Bowers, Nathan David. Proquest Dissertations And Theses 2007. Section 0178, Part 0323 321 pages; [Ph.D. dissertation].United States -- Pennsylvania: University of Pittsburgh; 2007. Publication Number: AAT 3270123., 72) The Orthophonic Victrola The Mechanical Victrola evolved into the Orthophonic Victrola (1924) This was a device designed to sound more like radio. Radio used electricity to transmit sound across space. The Orthophonic Victrola used an electric recording process with an electric speaker which could pick up more treble and base. In 1929 the company was bought by the RCA corporation. The Orthophonic Victrola eventually was replaced by Vinyl record players. Historical Context of Mass Reproduction Sound media was part of an emerging field of mass communication and culture oriented towards the middle class. (Benjamin, “Art in the Age of Mechanical Reproduction) “The technique of reproduction detaches the reproduced object from the domain of tradition. By making many reproductions it substitutes a plurality of copies for a unique existence.” (ibid) The 19th Century saw a trend towards mass culture which was characterized by high volume, low unit-cost production. This was reflected in the high-speed printing press, newspapers, and books that could be sold cheaply. Content was popularized, and became more democratic. (Hoover) At the same time, there was a need for the democratization of culture. 
The salient attitude was that "music was a powerful cultural and moral force, and that Americans sadly lacked access to it…" The American ideal was that all members of society should have access to "the highest forms of human culture." (Making America More Musical through the Phonograph, 1900-1930, by Mark Katz, American Music © 1998 University of Illinois Press)
The Phonograph and Public Taste and Popular Culture
The phonograph allowed the working-class American to access "high culture" by providing them with mass-produced music. In the beginning of the mass production of records, "records were sold by volume and not by subject, much less by title or artist. The customer bought a dozen mixed records instead of choosing the songs he or she preferred." (Sterne, 33) Thus it could be said that the record companies became the taste-makers of audio culture. Advertisements would speak to the fact that people could now hear "The world's greatest artists" in their own home.
Acoustic Realism and the Marketing of High Fidelity
The Original Use of the Phonograph
Music was not Edison's preferred application of the phonograph. Edison wanted "to preserve for future generations the voices as well as the words of our Washingtons, our Lincolns, our Gladstones, etc.. their utterances transmitted to posterity, centuries afterwards, as freshly and forcibly as if those later generations heard his living accents." (As quoted by Steffen) "The Sun says nothing could be more incredible than the likelihood of hearing the voice of the dead, yet the invention of the new instrument is said to render this possible hereafter." The interest in hearing the voices of the dead also came from a desire to preserve cultures that the U.S. government wished to destroy. (Sterne) Edison initially intended the invention to be used to record voices, and as a business machine. He stated that it could be used for stenography, teaching and preservation of language, recording of lectures and instructions from teachers and professors, capturing the dying words of family and friends, voice clocks that would announce the hour, an attachment to Bell's telephone, talking books for the blind, talking dolls, and music boxes. (Steffen, 26) The phonograph was first considered by the public as an improvement on the telephone. Not only did it receive sound but kept it "corked up in a coil of electric wire until it is wanted." "The state ought to order thousands with which to bottle up the eloquence of our legislators this winter for the edification of posterity." (Headline: [State]; Article Type: News/Opinion; Paper: New Orleans Times, published as The New Orleans Times; Date: 11-14-1877; Volume: XIV; Issue: 7464; Page: 4; Location: New Orleans, Louisiana) The phonograph was seen by Edison as a device for the storage of public knowledge. (Gitelman, 13) Thus it can be seen that the initial usage of the phonograph was for posterity and the common historical knowledge of public affairs. At the same time, it was also seen as a device for personal and private affairs, where dying family members could record their voices and leave messages for their future family members. In this sense the recording of voices allowed, for the first time, the voice to be separated from the body while still containing a trace of bodily identity.
The Pursuit of High Fidelity
Advertisements were geared toward fidelity: whether or not a person could tell the difference between the real and its reproduction.
The relations between sounds made by people and those made by machines were called into question. (Sterne, 216) This notion comes from the perspective that reproduced sound is a mediation of live sounds that occur in face-to-face presence and live musical performance. (ibid., 218) The recording technology was supposed to be a "vanishing mediator," as if it were not there. This nostalgia for capturing the original voice speaks directly to Benjamin's essay "Art in the Age of Mechanical Reproduction." There is a sense of nostalgia that accompanies reproduction. "When speech fails to protect presence, writing becomes necessary." (Derrida, Of Grammatology, 144) Writing is commonly understood as supplementing speech. Edison accepted the illusion of presence, believing that the phonograph would transmit to our perception the sonic event exactly as if it were present at the instant of its perception. This thought process is also mimicked in the belief in the fidelity of the photograph, as if the photograph captured objective reality. Edison believed in transparent communication. The phonograph was a way of preserving the illusion of presence, and exemplifies Derrida's concept of logocentrism. The voice was supposed to be the marker of origin and identity.
Sound and Time in the 19th Century
The bourgeois conception of time was that it was something that could be measured, objectified, repeated, and saved. (Sterne, 300) Sound recordings dealt with time in this manner. According to Kittler, in the 19th century the "real" took the place of the symbolic. Kittler is referring to time as the primary mode of measurement as opposed to length. With the advent of the phonograph the notion of frequency was changed. "The measure of length is replaced by time as an independent variable. It is a physical time removed from the meters and rhythms of music." (Kittler, 24) What was objectified were sonic vibrations and the length of time. In this sense, the recording of sound can be viewed as repeatable, but within a physically bounded frame that could only exist in a unique space. "Phonographic time was the outgrowth of a culture that had learned to can, to embalm, in order to 'protect' itself from seemingly inevitable decay." (Sterne, 311) The act of recording sounds is a process of extraction. The extraction of a particular spatio-temporal moment can then be stored and repeated any number of times. However, there would always be qualitative differences in the playback of the original sounds, contingent upon the particular situational context of the playback. Although there was a creative act of recording (as in the selection of the event and the artificial conditions that were constructed), the creative act extended to the domain of the listener and did not stop with the original performance. Sterne argues that sound reproduction was inherently a studio art and therefore bound up with the reproduction technologies. He argues that the main point of Benjamin's essay is that reproduction precedes originality. People performed for the machines, capturing reality suitable for reproduction. "Considered as a product, reproduced sound might appear mobile, de-contextualized, disembodied. Considered as a technology, sound reproduction might appear mobile, dehumanized, and mechanical." (Sterne, 236) The listener who was enmeshed in a discourse of fidelity and authenticity saw the noises made by the machines as exterior to the sound.
There was an effort on the part of the listener to ignore the pops and hisses that came from the recording. There was a discourse of realism surrounding reproduction devices. The record was supposed to reflect sound as opposed to shaping it. Analogue v.s. Digital The groove of a phonograph record is a spatial analogue of the sound emitted by the original performing medium. In early automatic instruments, there is no spatial representation, but rather an indirect representation of the live performance. Analogue recording technologies have an authentic relation to the original because there is a causal relation between the sound and the analog recording. This is in contradistinction to digital recording that converts sound into zeros and ones, to be reconstructed as sound at the moment of production. Therefore, Sterne argues that analog and digital are ontologically different in terms of live recording. The Successors to the Portable Victrola The first successor to the portable Victrola was the portable Stereophonic Record Player patented in 1964. This record player had an automatic record changer and delivered sound waves through independent amplifiers. This was a remediation of the portable Victrola because of its design. "It is another object of the present invention to provide an assemblage of speakers and automatic record changer whereby they can be quickly and easily assembled together into a compact portable unit and can at the same time be very quickly and easily set up in proper physical separation for stereophonic reproduction."(Patent 3135837) This is a successor because of its design and its affordances. Another successor is the "Magnetic Recording and Reproducing Machine" or the tape recorder. The advantage of the tape recorder was its ability to be reused indefinitely. Another successor to the portable Victrola is the portable stereo system, also known as the "boombox." This was a system used in the 1980s that played at a high volume either cassette tapes or compact discs. As it was designed for public outdoor use, it can be said to be a current incarnation of the the portable Victrola. Portability: From Group to Individual Listening Although the portable Victrola was easily transportable by design, its intended use remained open-air and group playback. Portability was a value not yet associated with individualized listening, as would become customary with later portable technologies that employed headphones. The Victrola was instead marketed as a device which could be easily taken from place to place for open-air playback, not a device which could offer playback en route. This was a function of both the issues of volume control mentioned earlier as well as the obvious difficulty in handling rather clumsy and large material recordings. Indeed, the problem of analogue, carved inscription (wax records), as compared to analogue, magnetic recording (cassettes), rendered portable playback on one's person illusory for some time. Radio technology lent itself more immediately to portable use (in the 1950's car, for instance). The Victrola and subsequent record players, on the other hand, required a stable platform upon which to sit during use, lest the needle jump. Stability was key. That said, a number of ill-fated devices did emerge which attempted to solve this problem. Portability would thus become synonymous with individualized and isolated playback: the walkman, the discman, and, more recently, the iPod. 
An early forerunner to such technologies, introduced well after the advent of both the eight-track and subsequent audiocassettes, presented itself in 1983 as the "personal portable phono system." This was the Mister Disc.
Jonathan Sterne, "The Audible Past: Cultural Origins of Sound Reproduction," Duke University Press, 2003
Lisa Gitelman, "Scripts, Grooves, and Writing Machines: Representing Technology in the Edison Era," Stanford University Press, 1999
Friedrich Kittler, "Gramophone, Film, Typewriter," Stanford University Press, 1986
S. Hoover, "Religion in the Media Age," Routledge, 2006
David Steffen, "From Edison to Marconi: The First 30 Years of Recorded Music," Minnesota, 2005
<urn:uuid:50a2be6e-9721-4587-acd4-a3abb171f5b5>
{ "dump": "CC-MAIN-2017-09", "url": "http://cultureandcommunication.org/deadmedia/index.php/The_Victrola", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00627-ip-10-171-10-108.ec2.internal.warc.gz", "language": "en", "language_score": 0.966298520565033, "token_count": 6789, "score": 3, "int_score": 3 }
Alphabet Coloring In Pages.
Preschool phonics skills include letter recognition and letter sounds. The theme of each letter is from our popular alphabet flash cards. With these A to Z alphabet coloring sheets, your children can slowly learn how to write letters while having fun! We also have interactive jigsaw puzzles. Alphabet coloring sheets bring much fun for young children.
These Alphabet Coloring Pages Support Kids To Improve Their Hand And Eye Coordination, Fine And Gross Motor Skills By Letting Them Be Creative In Their Own Ways.
Are your children working with the alphabet? Alphabet coloring pages help with letter recognition and letter sounds. These will keep the kids busy for a while and get them practicing their letters of the alphabet!
Preschool Phonics Skills Include Letter Recognition And Letter Sounds.
Letter recognition is the first step to learning the letters of the alphabet, and alphabet coloring pages are a wonderful letter activity for that. We have coloring pages for each letter of the alphabet. Alphabet coloring pages are those pages that help children learn to recognize letters before they start reading and writing. Grab the PDF alphabet worksheets for a no-prep alphabet activity kids will love. This will reinforce letter recognition. M is for magnet, mail, mushroom, moose, mouse, map, mouth, moon, muffin, and mug.
Kids Will Enjoy Printable Activities Like Find The Letter, Color By Letter, Alphabet Maze, Animal Pictures, Food Pictures, Practicing Printing Upper And Lower Case Letters, And More.
Alphabet coloring pages will help your child focus on details and develop creativity, concentration, motor skills, and color recognition. Your children will absolutely love you for it, and we guarantee they'll be asking for more. Alphabet coloring pages are the perfect educational tools for children.
<urn:uuid:687e23fd-4ea4-4599-b7d3-459d0405292c>
{ "dump": "CC-MAIN-2022-40", "url": "https://howtogetbigger.info/alphabet-coloring/alphabet-coloring-in-pages/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00743.warc.gz", "language": "en", "language_score": 0.8307445645332336, "token_count": 405, "score": 3.828125, "int_score": 4 }
PHP & MySQL Training PHP is a server-side scripting language designed for web development but also used as a general-purpose programming language. Originally created by Rasmus Lerdorf in 1994, the PHP reference implementation is now produced by The PHP Group. PHP originally stood for Personal Home Page, but it now stands for the recursive acronym PHP: Hypertext Preprocessor. PHP code may be embedded into HTML code, or it can be used in combination with various web template systems, web content management systems, and web frameworks. PHP code is usually processed by a PHP interpreter implemented as a module in the web server or as a Common Gateway Interface (CGI) executable. The web server combines the results of the interpreted and executed PHP code, which may be any type of data, including images, with the generated web page. PHP code may also be executed with a command-line interface (CLI) and can be used to implement standalone graphical applications. PHP is a great option for many reasons, here are some reasons why the language may be right for you or your project: Fast Load Time – PHP results in faster site loading speeds. PHP codes runs much faster than ASP because it runs in its own memory space while ASP uses an overhead server and a COM based architecture. Less Expensive Software – In working with PHP, most tools associated with the program are open source software, such as WordPress, so you need not pay for them. As for ASP, you might need to buy additional tools to work with its programs. Less Expensive Hosting – ASP programs need to run on Windows servers with IIS installed. Hosting companies need to purchase both of these components in order for ASP to work, this often results in a more expensive cost for monthly hosting services. On the other hand, a PHP would only require running on a Linux server, which is available through a hosting provider at no additional cost. Database Flexibility – PHP is flexible for database connectivity. It can connect to several databases the most commonly used is the MySQL. MySQL can be used for free. If ASP is used, MS-SQL, a Microsoft product must be purchased. Increased Available Programming Talent – PHP is used more often creating a larger pool of talent to choose from for modifications and building and lowering the cost per hour for those services. And making it easier to find someone to update your site in the future if you choose to hire a staff member for the task or work with an alternate provider than the one who built your site. Php has good career opportunity, before few years ago big MNC companies like, Infosys, Wipro, TCS prefer to hire Java programmers and there was very low recruitment of people with profile of php. But now lot’s of big companies come with php projects and increase the recruitment in this field as compare to Java and Dot net programmers. You may also check the growth of php When it comes to variety, php developers works on Most of popular are based on ti’s frameworks and CMS and they all are very valuable and most demanded from eCommerce to blogging industry. PHP Developer Salary Focus Training Services Provides Python Training with highly experienced trainers. We are focusing to fill the gap of industry requirement and available resources. at Focus Training Services, we give hands on practice training in PHP and MySQL and motivate students to do a project based on PHP, which helps them to get the best jobs based in PHP development . 
Focus Training Services is offering a PHP & MySQL technology package which enables our students to get the best technical exposure and career opportunities. This program will enable you to get opportunities as a PHP Developer.
Our Technology Experts
Our Technology Expert
- More than 10 years of experience in training and development.
- Worked with Google and HCL.
- 15+ corporate trainings
- Trained 5000+ candidates
Our Technology Expert
- More than 5 years of experience in PHP development.
- Worked with Cognizant.
- Trained 500+ candidates
- 20+ projects of experience
<urn:uuid:7df2b542-33c2-4f2d-991c-b95d600efd06>
{ "dump": "CC-MAIN-2019-43", "url": "http://focustech.in/courses/php-mysql-training-in-hyderabad/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668994.39/warc/CC-MAIN-20191016135759-20191016163259-00080.warc.gz", "language": "en", "language_score": 0.9310950040817261, "token_count": 828, "score": 3.09375, "int_score": 3 }
You’ll love the brand-new and improved National Green Building Standards. Here’s why. In April 2016, the American National Standards Institute (ANSI) approved the most recent version of the ICC/ASHRAE 700-2015 National Green Building Standard™ (NGBS). First published in 2009, the NGBS sets the bar for sustainable and high-performance residential construction and provides a pathway by which builders, remodelers, designers, and developers may seek third-party certification of their work. This can include new homes, multifamily buildings, land developments, remodeling projects as well as hotels and dormitories. Although voluntary, the NGBS serves as the basis for many federal, state, and local green building programs. In addition, those whose projects are NGBS Green Certified may be eligible for federal, state, or local incentives—like tax credits, permit streamlining, or density bonuses. Forecasts suggest that the green single-family housing market will represent 26 to 33 percent of the market in 2016. Home buyers have identified green building standards—including energy efficiency, low maintenance, resale value, and healthy indoor environment—as the most influential of all factors in their purchasing decisions, and builders are responding. As green building becomes more common, these standards become even more critical to the industry because they allow consumers and municipalities alike to understand appropriate green building criteria. The NGBS is the only ANSI-approved green building standard specifically designed for residential projects. It covers everything from converting raw land into finished lots, to single-family new home design and construction, to high-rise multifamily development. The standard also delivers stand-alone chapters for development sites, home remodeling, and additions to and renovations of apartments and condos. In short, the NGBS outlines a variety of green practices and materials that can be used to minimize a project’s environmental footprint and create a higher quality home. These practices and materials also provide consumer benefits, such as lower utility bills, improved indoor air quality, and increased home value. Recognizing that what is considered green construction will vary according to local climate, geography, and market preferences, the standard’s flexibility allows those who use it to integrate green features at the appropriate level for their individual clients, businesses, and housing markets. The 2015 edition incorporates changes that better align the NGBS with the 2015 family of ICC building codes. In addition, it expands the application of innovative practices, and builds upon the knowledge gained from years of designing, building, operating, and certifying to the green building standard. 
Some notable updates include: - Substantial revisions to the Energy Efficiency chapter, which now has more stringent rating levels based upon whole-house energy savings that are above the 2015 International Energy Conservation Code - A new energy compliance path for the HERS Index - Mandatory Grade I insulation installation - A comprehensive update of exterior and interior lighting provisions, including common areas in multifamily buildings - Mandatory installation of carbon monoxide alarms for all buildings constructed following the International Residential Code (IRC), regardless of level of certification or local code - Revamped stormwater management options that encourage low-impact development practices, such as swales and rain gardens - Greater emphasis on and recognition of multi-modal transportation options including bicycle parking, pedestrian connectivity, proximity to transit, and electric-vehicle charging - New references for Environmental Product Declarations for both specific and industry-wide products - Expanded provisions for Universal Design Projects can be certified to the Bronze, Silver, Gold, or Emerald levels by reaching progressively more rigorous goals for energy, water and resource efficiency, indoor environmental quality, lot design and development, and home owner education. The NGBS is the only national rating system that requires this kind of comprehensive rigor for all the attributes that contribute to a home’s “green-ness.” To date, more than 100,000 NGBS Green Certified homes have been built nationwide, and many industry professionals are discovering the value that increased sustainability can bring to their homes and businesses. For residential building professionals who need a credible and comprehensive definition of green that allows for regional and market-based flexibility, the NGBS provides a superior option. It allows builders to set themselves apart by responding to the growing consumer demand for more responsible, higher-performing housing options without creating unnecessary administrative burdens or adding significant cost. Whether an industry professional is new to high-performance building or a seasoned veteran, the NGBS makes it easy to become part of the growing green market. The full article, written by Susan Asmus, was published in the Summer 2016 Issue of Best in American Living.
<urn:uuid:b3fc313f-fbc0-49b1-8909-48ebd9b6a9c1>
{ "dump": "CC-MAIN-2021-49", "url": "https://bestinamericanliving.com/2017/05/green-building-standards-are-keeping-up-with-the-times/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358903.73/warc/CC-MAIN-20211130015517-20211130045517-00464.warc.gz", "language": "en", "language_score": 0.9296474456787109, "token_count": 954, "score": 2.578125, "int_score": 3 }
Google Earth reveals ancient stories

"Any sufficiently advanced technology is indistinguishable from magic," science fiction author Arthur Clarke once suggested. Well, how about Google Earth instead? Like a friendly genie, that modern technology has started answering archeologists' wishes with its worldwide catalog of satellite views of the Earth. A pair of studies in the Journal of Archaeological Science this year suggest these views are revealing a vast and ancient story, one only starting to emerge from the fabled desert of Arabia. "(W)e are on the brink of an explosion of knowledge," writes archeologist David Kennedy of the University of Western Australia in Perth, in a report in the current edition of the journal. Aerial photography and satellite images from Syria to Yemen are "revealing hundreds of thousands of collapsed structures, often barely (19 to 30 inches) in height and virtually invisible at ground level," he writes.
In Ch. 2 Volf writes of the meaning of labor, of work. Volf ties work in with the existential matters of life's meaning and purpose. "There are many possible ways of construing the meaning of work. One purpose that immediately comes to mind is to put bread on the table—and a car into the garage or an art object into the living room, some may add. Put more abstractly, the purpose of work is to take care of the needs of the person who does it... But when we consider taking care of ourselves as the main purpose of work, we unwittingly get stuck on the spinning wheel of dissatisfaction. What we possess always lags behind what we desire, and so we become victims of Lewis Carroll’s curse, “Here, you see, it takes all the running you can do to keep in the same place.” In our quiet moments, we know that we want our lives to have weight and substance and to grow toward some kind of fullness that lies beyond ourselves. Our own selves, and especially the pleasures of our own selves, are insufficient to give meaning to our lives. When the meaning of work is reduced to the well-being of the working self, the result is a feeling of melancholy and unfulfillment, even in the midst of apparent success." (Kindle Location 639) The antidote to the "rat race" and boredom of work is to live for "some kind of fullness that lies beyond ourselves." For example, live for this cause.
by Jayson Hawkins

Certain human rights are inalienable, even for incarcerated individuals. When Joshua Davis received a shot of insulin in 2018 that was tainted with other prisoners' blood, exposing him to the risk of a host of deadly diseases, the resulting lawsuit against the institution should have been a slam dunk. Yet an unusual issue prevented the suit. According to Rhode Island law, Davis was already dead. An archaic state statute defines anyone sentenced to life as "civilly dead," which renders their civil rights null and void. Not only do they lack the ability to sue, they cannot be lawfully married or divorced, nor can they hold title to any property. This holds true even if they eventually regain freedom. The idea of civil death traces its roots to the Classical Greeks and Romans. Criminals facing execution were barred from military service, voting, and other civic privileges. The Germanic tribes utilized "outlawry," a similar concept wherein those guilty of certain crimes lost all rights and protections within the community. Much later, the English incorporated civil death into the common law, and it made its way to America with the colonists. An 1871 court ruling in Virginia declared convicted felons had no rights and existed solely as "slave[s] of the state." English common law only applied civil death to felony convictions, but almost all felonies used to be capital crimes. Lengthy prison terms began to be substituted for the death penalty over the years for many crimes, but rather than also eliminating civil death, many U.S. states opted to widen its definition to include life sentences. Problems inevitably arose when some lifers were paroled yet still found themselves legal non-persons. A New York man declared civilly dead while serving a life term discovered after his release that he was still legally wed even though his wife had remarried during his incarceration. A judge refused to annul the original union simply because of his temporary "death," thus leaving his (ex) wife married to both men. A Missouri court ruled the legal paradoxes of civil death unconstitutional in 1976, agreeing that it was "an outdated and inscrutable common law precept." Civil death nonetheless remains on the books of New York and Rhode Island, though only the latter enforces it. Other states that long ago overturned their own statutes have found ways to retain several effects of civil death. Rights such as working in certain occupations, voting, and running for office remain forbidden for many felons even after serving their time. Sonja Deyoe, an attorney representing Joshua Davis and another Rhode Island lifer burned by a steam pipe at the same facility, has brought suit against the state on their behalf. The Civil Death Act, she claims, has denied her clients "basic civil, statutory, and common law rights and access to the courts, [and] imposes an excessive and outmoded punishment contrary to evolving standards of decency." Her argument echoes rulings of other states' courts and alleges numerous Constitutional violations. "The state could choose not to feed these individuals, deny them medical care, torture them, or do anything short of execute them," she said. A number of the more than 200 civilly dead prisoners currently held in Rhode Island will have parole opportunities, but courts have not determined if freedom will include legal resurrection. The few lifers who have already been paroled have been hesitant to question their status.
The state's General Assembly agreed on legislation to overturn civil death in 2007, only to have it vetoed by the governor at that time. Multiple bills that would achieve the same end have since been proposed, but not one has passed.
What Is the Linear Model of Communication? The linear model of communication is an early conceptual model that describes the process of information being transferred in one direction only, from the sender to the receiver. The model applies to mass communication, such as television, radio and newspapers. The linear model of communication was first proposed in 1949 by information theorists Claude Shannon and Warren Weaver. Shannon and Weaver use seven terms to define the model: sender, encoding, decoding, message, channel, receiver and noise, according to Communication Studies. The sender is the message creator, such as the writer of a newspaper article. The sender encodes the message by writing it as an article and then sends it to a specialized channel, such as a printed newspaper. The receiver collects the message by reading the newspaper and decoding, or interpreting, the message so the receiver can understand it. Noise includes distractions that interfere with the message being transferred and received, such as music playing so loudly that the receiver cannot concentrate on the newspaper article. The linear model describes communication as a one-way process. It doesn’t allow for feedback, which is the receiver’s response to the message. The linear model doesn’t apply to a conversation, because a conversation involves an exchange of messages between sender and receiver. Each participant provides verbal and nonverbal feedback to the other person as the conversation continues.
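The one-way flow of the model can also be sketched in a few lines of code. The snippet below is only an illustrative analogy; the function names and the way noise is simulated are assumptions made for this example, not part of Shannon and Weaver's formal theory.

import random

def encode(idea):
    # The sender turns an idea into a message, e.g. a written article.
    return idea.upper()

def transmit(message, noise_level=0.1):
    # The channel carries the message one way; noise obscures parts of it.
    return "".join("#" if random.random() < noise_level else ch for ch in message)

def decode(received):
    # The receiver interprets whatever arrives at the other end.
    return received.lower()

# Linear, one-way flow: sender -> encode -> channel (+ noise) -> decode -> receiver.
# There is no return path, which is why the model cannot describe a conversation.
print(decode(transmit(encode("Storm expected tomorrow"))))

Note that nothing flows back from the receiver to the sender, which mirrors the model's lack of feedback described above.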
Researchers from Tasmania and Finland have found that passive smoking can cause irreversible damage to the structure of children's arteries. The study published in the European Heart Journal found that children whose parents both smoke had thicker arteries later in life. The University of Tasmania's Dr Seana Gall says that can add more than three years to the age of the blood vessels by the time they reach adulthood. "What that suggests to us is that the effect of the passive smoke exposure on the arteries is something that happens in childhood," she said. "And it remains throughout life as an affect on the arteries so it's really important in terms of getting the message through that children should not be exposed to passive smoke."
Bounded at its western end by the church of St Peter Hungate and at the eastern end by that of St Simon and St Jude, this lovely street dates back to medieval times and is one of the finest and prettiest of its kind in England. Elm Hill takes its name from a succession of elm trees which have stood in the small square to the top of the hill. The last of these was felled in 1979, having succumbed to Dutch elm disease, and was replaced by a London plane, which is thriving nicely. The first evidence for the existence of the street dates from the early thirteenth century, at which time it continued in a straight line westward beyond the present bend at the top of the hill for a further quarter of a mile or so. It was diverted to its present line in the fifteenth century to accommodate the building of the St Andrew's and Blackfriars' halls for the Dominican friars. Today Elm Hill is lined on both sides mostly by timber framed merchants' houses which were built in the early sixteenth century following a fire in 1507 which devastated the earlier medieval street and much of the surrounding area. The only building in the street to survive the fire was what is now the Britons Arms, towards the top of the hill and adjacent to the camera position for the photograph, above. It has been suggested that in the early fifteenth century the building housed a béguinage, a lay sisterhood of pious women, but this cannot be confirmed. By the late eighteenth century the building had become an alehouse, and remained as such until the early 1940s. Today it is home to a coffee house and restaurant. For much of its length Elm Hill runs parallel to the river Wensum, once an important means of importing and exporting goods to and from Norwich via the port of Great Yarmouth, some twenty miles downstream. Many of the properties on the river side of the street had their own quays, with warehouses, workshops and workers' dwellings in the spaces between the houses and the river. Walking down the street today it is hard to believe that by the late nineteenth century this once prosperous area had fallen into decline and had become a rat infested slum. In the 1920s the city council proposed a plan to demolish Elm Hill and replace it with an area of light industrial premises. This was opposed by the newly formed Norwich Society and at a subsequent council meeting the plan was defeated by just one vote. A programme of renovation and restoration commenced in 1927, and today many of the houses are home to art galleries and book and antiques shops. Wrights Court, off Elm Hill. Tudor brick nogging in herringbone pattern, at Paston House, Elm Hill.
The characters of Alice's Adventures in Wonderland are having a Parade! All players are producers of this parade. Characters from Lewis Carroll's books such as Alice, The White Rabbit, and The Hatter are steadily invited to join this weird procession. On your turn, you play a card (from your hand of five) to the end of the parade. Unfortunately, that card might cause other cards to walk off the parade. These cards count as negative points in the end. The length of the parade line is important. If the number of the card you just played is less than the line length, you may receive the excess cards (counting from the last played to the first of the line). But you don't take all the relevant cards, only the cards that meet one of these requirements: (1) the color is the same as the color of the card just played, or (2) the number is the same as or lower than that of the card just played. The game ends when the draw deck is exhausted or when one player has collected all six colors in their point piles. Then everyone plays one last card. From the four cards remaining in their hand, players choose two cards to add to their point piles. The player who has the least negative points after this is the winner. Normally, negative points equal the number on the card. But if you have the most cards in a certain color, each of your cards of that color counts as only 1 negative point! Thus, play your cards well!
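For readers who want to see the card-taking step spelled out precisely, here is a minimal sketch of that rule as described above. The card representation, the function name, and the example parade are invented for illustration only; this is not an official implementation of the game.

from dataclasses import dataclass

@dataclass
class Card:
    color: str
    number: int

def cards_taken(parade, played):
    # The last `played.number` cards of the parade are safe; only the cards
    # in front of them are vulnerable. A played 0 leaves the whole line vulnerable.
    vulnerable = parade[:-played.number] if played.number > 0 else parade[:]
    # From the vulnerable cards, take those matching the played card's color
    # or showing a number less than or equal to the played card's number.
    return [c for c in vulnerable
            if c.color == played.color or c.number <= played.number]

# Example: a five-card parade, then a purple 2 is played.
parade = [Card("red", 5), Card("purple", 7), Card("blue", 1),
          Card("green", 6), Card("red", 3)]
print(cards_taken(parade, Card("purple", 2)))
# -> [Card(color='purple', number=7), Card(color='blue', number=1)]

In this sketch the green 6 and red 3 are protected, because the played 2 shields the last two cards in the line.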
What is glaucoma? Glaucoma is a group of diseases that result in degeneration of the optic nerve; many are caused by increased pressure inside the eye. Glaucoma can lead to vision loss and even blindness. In most patients, glaucoma occurs when pressure inside the eye is at a level sufficient to damage the optic nerve. The optic nerve is a bundle of more than 1 million nerve fibers. It connects the retina to the brain. (See diagram below.) The retina is the light-sensitive tissue at the back of the eye. A healthy optic nerve is necessary for good vision. Glaucoma is one of the leading causes of blindness. Glaucoma causes blindness in a little over 12 percent of people with the condition. However, early treatment can often prevent serious vision loss. What causes optic nerve damage in glaucoma? Glaucoma is often associated with increased pressure inside the eye. In the front of the eye is a space called the anterior chamber. A clear fluid flows continuously in and out of the chamber and nourishes nearby tissues. In normal eyes, fluid leaves the chamber through the open angle, the area where the cornea and iris meet. When the fluid reaches the angle, it flows through a spongy meshwork, like a drain, and leaves the eye. In eyes with glaucoma, the drainage of fluid is either slowed down or blocked. The pressure inside the eye then rises to a level that can damage the optic nerve. When the optic nerve is damaged from increased pressure, it can cause vision loss. This is why controlling pressure inside the eye is important. What are the types of glaucoma? There are three main types of glaucoma: - Open-angle glaucoma – In this type, the angle in the eye is open, but it does not function properly. This prevents the fluid inside the eye from draining and causes the pressure in the eye to rise. - Closed-angle glaucoma – In this type, the angle in the eye is closed, or blocked. This prevents the fluid inside the eye from draining and causes the pressure in the eye to rise. In some people, the blockage happens very suddenly and causes severe pain and vision loss. This is called "acute closed-angle glaucoma." In other people, it happens slowly over time, and might cause periods of headaches. This is called "chronic closed-angle glaucoma." Closed-angle glaucoma is a serious condition and needs to be treated immediately. - Congenital glaucoma – This happens when a child is born with a defect in the angle of the eye that slows the normal drainage of fluid. These children usually have obvious symptoms, such as cloudy eyes, sensitivity to light, and excessive tearing. Other types of glaucoma include: Low-tension or normal-tension glaucoma – In this type, the optic nerve is damaged without an increase in eye pressure. People with this condition might also have problems with side vision. In some people with this type of glaucoma, medications or surgery can lower the eye pressure and slow the disease. In other people, lowering the eye pressure will not stop the glaucoma from getting worse. If you have low-tension or normal-tension glaucoma, make sure your eye doctor knows your complete medical history. This can help him or her identify other risk factors that might be contributing to your condition, such as low blood pressure. In people with no other risk factors, treatment options for low-tension or normal-tension glaucoma are the same as those for open-angle glaucoma. Secondary glaucoma – These are types of open-angle glaucoma that are caused by medication or other medical conditions.
They include: - Pseudoexfoliative glaucoma – People with this type have deposits of a protein-like material in their eye. - Pigmentary glaucoma – In this type, pigment from the iris flakes off and blocks the meshwork of the angle. This slows fluid drainage. - Neovascular glaucoma – This is a severe form of glaucoma that can happen in people with diabetes. - Uveitic glaucoma – This type of glaucoma can happen from eye inflammation. In some people, corticosteroid drugs used to treat eye inflammations and other diseases can also cause secondary glaucoma. Does increased eye pressure mean that I have glaucoma? Not necessarily. Increased eye pressure means you are at risk for glaucoma, but does not mean you have the disease. A person has glaucoma only if the optic nerve is damaged. If you have increased eye pressure but no damage to the optic nerve, you do not have glaucoma. However, you are at risk. Your eye doctor can help you understand whether or not you have glaucoma. Will I develop glaucoma if I have increased eye pressure? Not necessarily. Not every person with increased eye pressure will develop glaucoma. Some people can tolerate higher eye pressure better than others. Also, a certain level of eye pressure might be too high for one person but normal for another. Whether or not you develop glaucoma depends on the level of pressure your optic nerve can tolerate. That's why it is important to have a comprehensive dilated eye exam. It can help your eye doctor determine what level of eye pressure is right for you. Can I develop glaucoma without increased eye pressure? Yes. This form of glaucoma is called low-tension or normal-tension glaucoma. It is not as common as open-angle glaucoma. Who is at risk for glaucoma? Anyone can develop glaucoma, but a person's risk increases with age. In addition, some people are at higher risk for developing the condition than others. They include: - People with a family history of glaucoma - African Americans Among African Americans, research shows that glaucoma is: - Five times more likely to occur than in Caucasians - About four times more likely to cause blindness than in Caucasians - Fifteen times more likely to cause blindness between the ages of 45 and 64 than in Caucasians of the same age group What can I do to protect my vision? Studies have shown that the early detection and treatment of glaucoma, before it causes major vision loss, is the best way to control the disease. If any of the risk factors above apply to you, make sure to see an eye doctor for a comprehensive dilated eye exam at least once every two years. If you are being treated for glaucoma, be sure to take your glaucoma medicine as directed and see your eye doctor regularly. You also can help protect the vision of family members and friends who might be at high risk for glaucoma. Encourage friends and family at risk to have a comprehensive dilated eye exam at least once every two years. Remember that lowering eye pressure in glaucoma's early stages slows progression of the disease and helps save vision. What are the symptoms of glaucoma? When it first starts, glaucoma has no symptoms. Vision stays normal, and there is no pain most of the time. However, as the disease progresses, a person with glaucoma might notice his or her side vision gradually failing. Objects in front might still be clear, but objects to the side might be missed. Glaucoma can develop in one or both eyes.
A person whose glaucoma remains untreated might routinely miss objects to the side and out of the corner of his or her eye. Without treatment, the person will slowly lose his or her peripheral (side) vision. Vision appears as if it is through a tunnel. Over time, straight-ahead vision might also decrease, leaving no vision remaining. In acute closed-angle glaucoma, symptoms include severe pain and nausea, as well as redness of the eye and blurred vision. In chronic closed-angle glaucoma, symptoms include periods of headaches or eye pain. If you experience any of these symptoms, you should seek treatment immediately. Closed-angle glaucoma is a medical emergency. If you have symptoms of closed-angle glaucoma and your doctor is unavailable, go to the nearest hospital or clinic. Without treatment to improve the flow of fluid, the eye can become blind in a short period of time. Usually, prompt laser surgery and medicines are needed to clear the blockage and protect sight. How is glaucoma detected? Glaucoma is detected through the following tests: - Comprehensive eye exam – This includes: - Visual acuity test – This test uses an eye chart to measure how well a person sees at various distances. - Tonometry – For this test, an instrument measures the pressure inside the eye. Numbing drops might be applied to the eye beforehand. - Pachymetry – For this test, the eye doctor uses an ultrasonic wave instrument to measure the thickness of the cornea. A numbing drop is applied to the eye beforehand. - Dilated eye exam – For this exam, drops are placed in the eyes to widen, or dilate, the pupils. The eye doctor then uses a special magnifying lens to examine the retina and optic nerve for signs of damage and other eye problems. After the exam, the person’s close-up vision might be blurred for several hours. - Visual field test – This computerized test measures peripheral vision. It helps the eye doctor determine if a person has had loss of peripheral (side) vision, which can be a sign of glaucoma. - Optical coherence tomography – This test measures optic nerve thickness. In glaucoma, the nerve fiber layers (the fibers that make up the optic nerve) can get thinner as the disease progresses. - Pictures – The eye doctor might take a picture of the optic nerve. This picture can serve as a baseline to see if changes happen over time. Can glaucoma be treated? Yes. Immediate treatment for early stage, open-angle glaucoma can slow progression of the disease. That’s why early diagnosis is very important. There are many different treatments for glaucoma. However, while these treatments might save remaining vision, they do not restore sight already lost from glaucoma: Medication – Medication, in the form of eyedrops or pills, is the most common early treatment for glaucoma. Some medications cause the eye to make less fluid. Others lower pressure by helping fluid to drain from the eye. Glaucoma medications might need to be taken several times a day. Before you begin glaucoma treatment, tell your eye doctor about other medications you are taking. Sometimes the drops can cause problems with the way other medications work. Most people have no problems with these medications, but in some people they can cause side effects such as fatigue, low heart rate and drowsiness. For example, drops can cause stinging, burning, and redness in the eyes. Many medications are available to treat glaucoma. If you have problems with one medication, tell your eye doctor. 
Treatment with a different dose or a new medication might be possible. Because glaucoma often has no symptoms, people might be tempted to stop taking, or might forget to take, their medication. But regular and correct use is very important to control eye pressure. Your eye doctor can show you how to put drops into your eye. Laser surgery – There are different kinds of laser surgery doctors can use to treat glaucoma. They include: Laser trabeculoplasty – This can help fluid drain out of the eye in open-angle glaucoma. It is a surgery that the eye doctor performs in his or her office. If you have open-angle glaucoma, your eye doctor might suggest this as an option at any point in your treatment. In many cases, a person needs to keep taking glaucoma medication after this procedure. Before your surgery, your doctor will apply numbing drops to the eye. You will sit facing the laser machine and the doctor will hold a special lens to your eye. The doctor will then use the laser to make several evenly-spaced spots in the meshwork inside your eye. This will allow the fluid to drain better. You might see flashes of bright green or red light during this part of the surgery. Laser iridotomy – This procedure uses the laser to make a small hole in the iris so that fluid can drain, and is used mainly to treat closed-angle glaucoma. Like any surgery, laser surgery can cause side effects, such as inflammation. The doctor might give you drops to take home for soreness or inflammation inside the eye. Several follow-up appointments might be needed to monitor your eye pressure. Usually, if a person has glaucoma in both eyes, one eye will be treated at a time. But some people will have both eyes treated at the same time. If you have your eyes treated separately, the laser treatments for each eye will be scheduled several days to weeks apart. Studies show that laser surgery is very good at reducing eye pressure in some people. However, its effects can wear off over time. If this happens, further treatment might be needed. Conventional surgery – Most eye doctors will only do conventional surgery for glaucoma after medications or laser surgery have failed to control eye pressure. Surgery is about 80 to 95 percent effective at lowering eye pressure. The types of conventional surgeries for glaucoma include: - Trabeculectomy – In this surgery, the eye doctor makes a new opening for the fluid to leave the eye. (See diagram.) Your eye doctor might suggest this treatment at any time. - Glaucoma drainage implant – For this surgery, the eye doctor places an implant inside the eye to help fluid drain. There are different types of implants eye doctors can use for this treatment. Your eye doctor can help you decide which type is best for you. - Trabectome – For this surgery, the eye doctor removes the meshwork inside the eye to help fluid drain. Conventional surgery is performed in the hospital. Before the surgery, you will be given medicine to help you relax. Your doctor will also numb your eye. If you need surgery in both eyes, your doctor will perform surgery on one eye at a time. For several weeks after your surgery, you will have to put drops in your eye to fight infection and inflammation. These drops will be different from those you might have used before surgery. You will also need to avoid strenuous activity, such as bending, lifting, and straining. You will be able to read and watch TV normally after surgery. If you have severe pain or vision loss after surgery, contact your eye doctor immediately. This might be a sign of infection, which can have major consequences for your vision. How should I use my glaucoma eyedrops?
If your eye doctor gives you eyedrops for treating your glaucoma, you must use them exactly as instructed. Correct use of your glaucoma medication can improve its effectiveness and reduce your risk of side effects. To use your eyedrops, follow these steps: - First, wash your hands. - Hold the bottle upside down. - Tilt your head back. - Hold the bottle in one hand and place it as close as possible to the eye. Do not touch your eye with the bottle, as this can contaminate the bottle. - With the other hand, pull down your lower eyelid. This forms a pocket. - Place the prescribed number of drops into the lower eyelid pocket. If you are using more than one eyedrop, be sure to wait at least five minutes before applying the second eyedrop. - Close your eye or press the corner of the eye that is nearest to your nose lightly with your finger for at least one minute. Either of these steps keeps the drops in the eye and helps prevent the drops from draining into the tear duct and through the nose, which can increase your risk of side effects. What can I do if I already have lost some vision from glaucoma? If you have lost some sight from glaucoma, first see your eye doctor. He or she can help prevent further loss of vision by controlling your glaucoma with treatment. You can also talk to your eye doctor about low vision services and devices that can help you make the most of your remaining vision. Ask him or her to refer you to a specialist in low vision. Many community organizations and agencies offer information about low vision counseling, training, and other special services for people with visual impairments. A nearby school of medicine or optometry might provide low vision services.
Welcome to Year 1! In the first half of the Summer Term we will be learning about endangered animals. We will be writing non-fiction texts and visiting Paradise Wildlife Park. In the second half of the term we will be reading and writing traditional tales. In Maths we will continue to add and subtract as well as looking at shapes and measures. In Science we will be learning how to group animals and finding similarities and differences between the different groups. In Geography we will be learning about our local area, using maps and learning about the points of a compass. In PE we will be using our outdoor area to learn how to play games and focus on our athletics skills. Sports day will be on Friday June 30th. We are always available if you have any concerns or queries; feel free to come to the classroom door at the beginning or end of the day to arrange a suitable time to talk. Mrs Woodier and Miss Atakora.
Roughly half the world's population speaks languages derived from a shared linguistic source known as Proto-Indo-European. But who were the early speakers of this ancient mother tongue, and how did they manage to spread it around the globe? Until now their identity has remained a tantalizing mystery to linguists, archaeologists, and even Nazis seeking the roots of the Aryan race. The Horse, the Wheel, and Language lifts the veil that has long shrouded these original Indo-European speakers, and reveals how their domestication of horses and use of the wheel spread language and transformed civilization. Linking prehistoric archaeological remains with the development of language, David Anthony identifies the prehistoric peoples of central Eurasia's steppe grasslands as the original speakers of Proto-Indo-European, and shows how their innovative use of the ox wagon, horseback riding, and the warrior's chariot turned the Eurasian steppes into a thriving transcontinental corridor of communication, commerce, and cultural exchange. He explains how they spread their traditions and gave rise to important advances in copper mining, warfare, and patron-client political institutions, thereby ushering in an era of vibrant social change. Anthony also describes his fascinating discovery of how the wear from bits on ancient horse teeth reveals the origins of horseback riding. The Horse, the Wheel, and Language solves a puzzle that has vexed scholars for two centuries--the source of the Indo-European languages and English--and recovers a magnificent and influential civilization from the past. "David W. Anthony argues that we speak English not just because our parents taught it to us but because wild horses used to roam the steppes of central Eurasia, because steppe dwellers invented the spoked wheel and because poetry once had real power. . . . Anthony is not the first scholar to make the case that Proto-Indo-European came from this region [Ukraine/Russia], but given the immense array of evidence he presents, he may be the last one who has to.... The Horse, the Wheel, and Language brings together the work of historical linguists and archaeologists, researchers who have traditionally been suspicious of each other's methods. [The book] lays out in intricate detail the complicated genealogy of history's most successful language."--Christine Kenneally, The New York Times Book Review "[A]uthoritative . . . "--John Noble Wilford, New York Times "A thorough look at the cutting edge of anthropology, Anthony's book is a fascinating look into the origins of modern man."--Publishers Weekly (Online Reviews Annex) "In the age of Borat it may come as a surprise to learn that the grasslands between Ukraine and Kazakhstan were once regarded as an early crucible of civilisation. This idea is revisited in a major new study by David Anthony."--Times Higher Education
In 1954, Brown v. Board of Education of Topeka (347 U.S. 483) launched a revolution that changed the world. The Supreme Court decision not only outlawed school segregation, it also inspired an era of civil and human rights progress for all Americans. A unanimous Court in Brown described the importance of education in terms that are as relevant now as they were nearly six decades ago: Today, education is perhaps the most important function of state and local governments. Compulsory school attendance laws and the great expenditures for education both demonstrate our recognition of the importance of education to our democratic society. It is required in the performance of our most basic public responsibilities, even service in the armed forces. It is the very foundation of good citizenship. Today it is a principal instrument in awakening the child to cultural values, in preparing him for later professional training, and in helping him to adjust normally to his environment. In these days, it is doubtful that any child may reasonably be expected to succeed in life if he is denied the opportunity of an education. Such an opportunity, where the state has undertaken to provide it, is a right which must be made available to all on equal terms. Yet our nation continues to be plagued by conditions of inequality and deprivation in schools the minority poor are required by law to attend. Today, the dream of Brown—equal educational opportunity for all American children—remains a dream deferred. Though many Americans may consider ensuring quality education for all children to be an insurmountable challenge, this is not the case. In this article, the authors pose some alternative ways of thinking about and enforcing the right to education through the dual and related lenses of the “disparate impact” theory of liability under Title VI of the Civil Rights Act of 1964 and international human rights law. Applying an international human rights framework to promote an affirmative right to education, together with bolder enforcement of civil rights laws that address disparate impact, will shift the discussion from educational inputs to educational outcomes. Using international treaties as a legal underpinning emphasizes the need for government to eliminate discrimination and specifically provide access to quality education for all children—the vision and promise of Brown.

What Happened on the Road from Brown to Obama?

The concept of a federal right to education has been steadily eroded to the point where federal court litigation is no longer a reliable tool to achieve educational justice for minority students. Any hope that Brown and its progeny would be used to require states to equalize their educational systems based on wealth and class was erased by the Supreme Court in its decision in San Antonio Independent School District v. Rodriguez, 411 U.S. 1 (1973), where the Court upheld the constitutionality of Texas's state system of school finance. Texas, like many other states, relied heavily on local property taxes to fund its public schools. The Rodriguez Court held that the system did not violate the Equal Protection Clause of the Fourteenth Amendment and that wealth would not be subject to the heightened scrutiny reserved for race and national origin classifications. The Court also decided that education was not a “fundamental right” under the federal Constitution.
Rodriguez forced students and education officials in under-resourced school districts—often enrolling high proportions of minority students—to mount legal and political advocacy on a state-by-state basis. Predictably, however, in the nearly forty years since Rodriguez, we now have a confusing patchwork of state court rulings. In those states where courts have ordered improvements in resource allocation, there has been widespread noncompliance. In the 1990s, the Supreme Court further curtailed federal education rights in a trilogy of cases from Oklahoma City; DeKalb County, Georgia; and Kansas City, Missouri. The Court signaled to states and districts that had maintained de jure systems that far less than complete elimination of all vestiges of segregation and its effects would be required of them. The impact of these decisions, in the aggregate, was a watering down of the standards districts are required to meet in order to attain unitary status. The cases also provided a quick and easy exit strategy for districts and states seeking to avoid further compliance with federal court orders and desegregation agreements. While the Court later articulated standards for voluntary integration plans in Parents Involved in Community Schools v. Seattle School District No. 1, 551 U.S. 701 (2007), it is not likely to recognize an affirmative right to education in the foreseeable future. The constitutional right to education that has survived into the Roberts Court is a limited one: the right of children residing in the United States to attend a public school free of intentional discrimination on the basis of race and other protected categories. While Brown and its progeny clearly established a federal right for children to be free from government-sponsored segregation (and other invidious discrimination), the federal courts in the main did not require more than a minimal set of educational inputs. Moreover, administrative enforcement of Title VI by the Department of Education's (the Department's) Office for Civil Rights (OCR) has proven insufficient to bring about systemic improvement in reducing resegregation, excessive discipline rates experienced by minority students, or inequitable resource allocation to schools.

Disparities in Achievement, Resources, and Student Discipline

Nearly six decades after Brown, gross disparities in academic achievement, resource allocation, and student discipline persist. High-quality public education is not “available to all on equal terms,” as the Supreme Court mandated in Brown. Simply put, the public school system in this country is failing millions of children—especially children of color, poor children, English learners, and those with disabilities. (Due to space, however, the focus of this article is on students protected from discrimination on the basis of race, color, or national origin.) How bad is it? Only last year, the National Assessment of Educational Progress (NAEP) reported that half of African-American and Latino fourth-graders lacked even a basic level of reading and literacy skills (compared to 22 percent of whites). In mathematics, the United States continues to lag behind our international competition. While we have seen some remarkable improvement in progress from below-basic to basic achievement, only 12 percent of African-American and 18 percent of Latino students have reached the levels of “proficient” or “advanced” (compared to 33 percent of whites).
These achievement disparities are exacerbated by the disproportionate dropout rates for these student populations. In 2009, African-American students dropped out of high school at an annual rate of 9.3 percent and Latino students at a rate of 17.6 percent, while their white counterparts exited school prematurely at a rate of 5.2 percent. Race-based achievement gaps often correlate with significant shortfalls in the resources allocated for underprivileged communities. A 2011 Department report confirmed that school districts habitually underfund schools enrolling higher proportions of low-income and minority students. Based on 2008–09 school year data, the report found that “from 42 percent to 46 percent of Title I schools (depending on school grade level) had per-pupil expenditure levels that were below their district's average for non-Title I schools at the same grade level, and from 19 percent to 24 percent were more than 10 percent below the non-Title I school average.” More recently, following a trend in state court litigation, a state trial court in Colorado determined that the state's school finance system was both inadequate and unequal, violating the state's constitutional guarantee for a “thorough and uniform system” of public education. Another factor related to the achievement gap is the persistence of race-based disparities in school disciplinary actions. According to the Department's Civil Rights Data Collection, in the 2008–09 school year, black students were suspended nearly three times more frequently than white students. In 2010, OCR opened compliance reviews in two school districts that reported suspending two-thirds of their African-American male students in a year. Latino students were suspended more than two times as often as whites. Students with disabilities, especially those of color, experience higher rates of suspension and are far more often subjected to physical seclusion or restraint. Although school discipline codes are facially neutral, their impact on these student groups has been injurious.

Changing the Paradigm

All of these conditions are associated with significant disparities in educational outcomes for low-income and minority students. A campaign by the administration and advocates to challenge educational policies and practices that result in a disparate impact would emphasize the need for positive student outcomes—for example, staying in school, academic achievement, college-readiness. And public officials might be required to finally begin to address the patterns of policy—systemic discrimination that adversely impacts students of color. Viewed this way, substantial outcome disparities between student groups would be treated as legal violations, wrongs that would trigger positive remedies. So, too, could policies and practices like inequitable systems of resource allocation (including qualified, effective teachers) and disciplinary rules that have a disparate impact on children of color, students with disabilities, and English language learners. While the Fourteenth Amendment and the Title VI statute require proof of intent to discriminate, the Title VI regulation does not and includes an effects standard. The Supreme Court determined in Alexander v. Sandoval, 532 U.S. 275 (2001), however, that there is no private right of action to enforce this provision. This decision severely limited the ability of private plaintiffs to pursue legal remedies for policies and practices with an adverse, disparate impact.
Fortunately, the disparate impact provision can still be invoked by federal agencies on behalf of people who experience unintentional but demonstrable discrimination. The Department has the ability to resolve complaints and to conduct compliance reviews against states, districts, and schools for practices that create a disparate impact on students. By accelerating investigation of egregious cases of disparate impact, the Obama administration can take significant steps toward enforcing the law, protecting students' right to education, and guaranteeing that all students are actually afforded a quality education. This approach, with its emphasis on addressing outcomes, is consistent with international human rights norms and standards. The international human rights framework, which the United States helped to develop when the United Nations was founded, focuses on realizing affirmative rights as well as protection from denial of such rights. Within this framework, the government's role is clear: to respect and ensure the rights of individuals. At the very basic level, respect involves not violating one's rights, while to ensure is an affirmative obligation to protect rights, investigate and punish rights violations, and promote and fulfill rights. This holistic view considers rights to be indivisible, interrelated, and interdependent and acknowledges that all must be considered in order to effectively address social ills. It places the onus on governments to create policies based on human rights principles that effectively combat discrimination in all of its forms and to take affirmative steps to implement and monitor human rights obligations domestically.

Education As a Human Right

As enumerated in the Universal Declaration of Human Rights (UDHR), and further expanded in the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ESCR), human rights are those that are essential to live as human beings—basic standards without which one cannot enjoy equality and dignity. These treaties serve as the blueprint for all rights and the foundation for the development of the human rights framework, universal norms, and standards. The right to education appears specifically in several human rights instruments, including the UDHR (Article 26), the ESCR (Articles 13 and 14), the Convention on the Rights of the Child (CRC) (Articles 23(3), 28, 29, and 33), the International Convention on the Elimination of Racial Discrimination (ICERD) (Articles 2(2), 5(e)(iv), and 33), and the Convention on the Elimination of Discrimination Against Women (CEDAW) (Articles 10 and 14(2)). (While the United States has endorsed the UDHR, which is comprehensive and inclusive of the right to education, the only one of these treaties the United States has ratified is CERD, which includes a binding commitment on the nation to implement its provisions.) The right to education, when it was first recognized internationally, focused on access and established an entitlement to free, compulsory primary education for all children; an obligation to develop secondary education, supported by measures to render it accessible to all children, as well as equitable access to higher education; and a responsibility to provide basic education for individuals who have not completed primary education. Unquestionably, progress in that regard has been made.
However, achieving the goal of assuring every child a quality education that respects and promotes his or her dignity and optimum development has necessitated a broader focus. A recent report by the United Nations Educational, Scientific and Cultural Organization entitled A Human Rights Based Approach to Education for All describes the rights-based approach to education for all as a holistic one, encompassing access to education, educational quality (based on human rights values and principles), and the environment in which the education is provided. This approach integrates the norms, standards, and principles of international human rights into the entire education process from development to programming, including plans, strategies, and policies. It specifically considers the effect that the policy will have rather than focusing on its intent. And in doing so, it enables us to reevaluate our current systems and assess those inputs that directly affect a child’s ability to receive a high-quality education. As applied, it seeks to create greater awareness among governments and other relevant institutions of their obligations to fulfill, respect, and protect human rights and to support and empower individuals and communities to claim their rights. The Committee on ESCR describes it best in its General Comment No. 13, which states that “education is the primary vehicle by which economically and socially marginalized adults and children can lift themselves out of poverty and obtain the means to participate fully in their communities.” Stated another way, without the right to education, realization of all other rights becomes impracticable. This is the very foundation of Brown. Although no affirmative constitutional right to education has been recognized in this country, it is important to note that the United States is accountable for moving toward the realization of the right to education in the context of its international treaty obligations. In particular, as a party to ICERD and the ICCPR, the United States is required to file a periodic report to each committee, detailing how it has successfully implemented each provision. Responding to those reports with respect to education, both ICERD and the ICCPR noted with great concern “the persistence of de facto racial segregation in public schools; the persistent ‘achievement gap’ between students belonging to racial, ethnic or national minorities, including English Language Learner (ELL) students, and white students; and the alleged racial disparities in suspension, expulsion, and arrest rates in schools [that] contribute to exacerbat[ing] the high drop-out rate and the referral to the justice system of students belonging to racial, ethnic or national minorities.” Further, the committee recommended that the United States take steps to adopt all appropriate measures to “elaborate effective strategies aimed at promoting school desegregation and providing equal educational opportunity in integrated settings for all students and enact legislation to restore the possibility for school districts to voluntarily promote school integration in accordance with article 2, paragraph 2 of the convention.” Also, the committee urged the United States to take special measures to reduce the achievement gap, improve the quality of education for all students, and encourage school districts to review “zero-tolerance” school discipline policies. 
It is no surprise that these recommendations by the committee are directly related to the pervasive problems within our education system as stated above.

American Public Education Through the Human Rights Lens

There is broad agreement that the current approaches employed to achieve the goal of quality education and eliminate discrimination are inadequate. Need-based and service-delivery approaches fail to acknowledge or address the complex barriers that impede children's access to school, attendance, completion, and attainment and, in so doing, inhibit progress in closing the gap among underserved communities. By contrast, a human rights framework for confronting systemic inequities in the American public education system would emphasize outcomes rather than inputs or access. Under this framework, neutral policies crafted as an attempt to eliminate discrimination and ensure all students have an equal opportunity would be considered ineffectual given the persistence of wide disparities in educational outcomes. What would it mean to apply a human rights framework to education? In the context of school discipline, for example, while our current practice to address discipline issues is often to remove the student from the class, under a human rights approach, one would conclude that both out-of-school and in-school suspensions prohibit students from participating in the daily activities inherent to quality schooling and arguably violate their right to education. A human rights approach would involve intervening in an effective and holistic way, determining the child's needs, and attempting to meet them. Utilizing disparate impact theory is one way to begin to implement the human rights framework through domestic laws. As a result of bringing disparate impact cases against states, districts, and schools, there would need to be new solutions to improve education for all that may exist beyond the current educational paradigm. As disparate impact cases highlight discrimination in our public education system, adopting the human rights framework will help to develop new, holistic solutions. It is premature to speculate on exactly what proposals could grow out of the implementation of a human rights framework; however, it is certain that they will be more comprehensive and cross-cutting rather than isolated; reflective of all student needs; cognizant of the results of the policy; and cognizant of all students' right to education. The United States is a world leader in advancing human rights and promoting basic civil and political rights and equality around the globe. Yet, application of the international human rights framework has generally not occurred domestically; rather, the pursuit of civil rights and social justice in the United States has rested primarily on rights guaranteed by the Constitution and our domestic laws. Unquestionably, there have been substantial improvements in domestic law prohibiting discrimination with the passage of the Civil Rights Act, the Voting Rights Act, the Americans with Disabilities Act, and many others. Yet we still fall short in successfully eliminating discrimination at its root, a failure that may be attributed in part to our focus on proving intent. Through the human rights framework, we have an opportunity to define a clear mandate for our government, the private sector, and our nation to dramatically improve public education in America.
Although the Islamic State (aka IS, ISIS, and ISIL) was considered defeated in Syria and Iraq in recent years, the jihadists have become increasingly active again. The resurgence has primarily been made possible by the COVID-19 pandemic and the United States' quasi withdrawal. For over a year, reports have been surfacing about ISIL's return. January 22 marked yet another stark reminder that the group remains a deadly threat. Thirty-two people were killed and dozens injured when members of the group carried out a suicide bombing in a bazaar in Baghdad. The attack was aimed at Shiite Muslims, the terrorist militia announced on its propaganda channel on Telegram, which also provided the names of the two attackers.

ISIL in Syria

ISIL's activity is not limited to Iraq. Twenty-one of the terrorist militia's fighters were killed in Russian airstrikes in northeast Syria on February 22. ISIL has been very active in the country and, considering its weakened position, has revealed astonishing logistical capacities, maintaining a presence in all regions of the country. And whether offensives such as Russia's are an effective remedy against the organization remains doubtful. Half of Syria consists of desert, and the Badia desert, in particular, has become ISIL's new base. The Russian airstrikes took place in an area that is 40,000 square kilometres wide, which the group knows very well, including its countless caves and hiding spots. Defeating ISIL in the desert is thus almost impossible, particularly without sufficient boots on the ground. As a result, ISIL is currently reorganizing itself in the Badia desert, while the international community lacks the will to really solve the problem, especially in northeast Syria. To be sure, in this region, as in western Iraq, ISIL no longer dominates a large contiguous area. However, according to the United Nations, there are still approximately 10,000 fighters on both sides of the Iraqi-Syrian border, hiding in desert areas that are difficult to access. From these places, ISIL terrorizes the population in neighboring settlements, for example, near the northeast Syrian city of Deir az-Zor. The extremists take hostages to extort ransom and carry out attacks. In August 2020 alone, 100 attacks reportedly occurred in northeast Syria.

ISIL in Iraq

The situation is similar in Iraq. The Iraqi army fought for three years to recapture a third of the country from ISIL in a bloody, grueling war. Until the conquest by the US-led coalition in spring 2017, Mosul was the world's most important hub for ISIL, where its central institutions were located and where many leadership cadres were staying. Its fall marked a turning point for the group. However, it only took ISIL a year to regroup in the region. Smaller cells conducted attacks via land mines and raided checkpoints at night. Iraq's army could hardly do anything against it. It was the first serious sign that while ISIL was defeated, it was not eradicated and thus remained a threat. The rural areas of Iraq, in particular, were hit hard by ISIL attacks, including large-scale assaults on police stations and military bases, causing dozens of deaths. The Pentagon put the number of attacks conducted by ISIL in Iraq in 2019 at 139 per month and 1,669 for the year. These include targeted executions, kidnappings, and road bombs.
Moreover, in early May 2020, ISIL targeted the country's infrastructure and paralyzed the electricity supply for the entire Diyala Province. For months now, ISIL has been successful in conducting attacks in Iraq, and its focus is no longer only on rural areas but is shifting back to the cities, as the attack in the bazaar in Baghdad – the worst in three years – indicates.

What Made the Return Possible?

There are multiple reasons why the Islamic State was able to regain strength despite the destruction of the caliphate. Even if the Iraqi anti-terrorist units have repeatedly been able to eliminate ISIL cells in the past, they are simply overwhelmed without sufficient support. They are dependent on reconnaissance by American drones and airstrikes against ISIL hideouts. However, the number of American forces in Iraq was reduced by more than half in 2020, to 3,000 men. And the fight against ISIL has already suffered from the US withdrawal. Since then, questions around the capacity of the local security forces to eradicate ISIL have arisen and been further complicated by accusations of corruption and government abuses.

The COVID-19 pandemic has also played a significant role in these developments. The outbreak of the virus in early 2020 forced the international coalition to suspend its combat operations and cease its training for Iraqi soldiers to prevent the virus from spreading among the troops. Even though the Iraqi armed forces have restarted their training, countries like Germany have also drastically reduced their number of training personnel. ISIL has taken advantage of this new vulnerability for the past year; it has emboldened the group and led it to increase attacks in April 2020 compared with March of that same year.

Loss of Influence and Power but Still a Threat

Despite the renewed strength of ISIL, it appears that the group neither organizationally nor symbolically carries the "dynamism" that it had during its early years. The caliphate is gone, and ISIL is clearly weakened. This has resulted in difficulties in obtaining new, heavy weaponry and recruiting new members. The group's withdrawal to the desert has also stopped ISIL from conducting sophisticated operations. It can therefore be assumed that the Islamic State has largely lost its network of relationships, and although it continues to rely on ample financial assets by still generating millions through smuggling and extortion, it has lost its financial sources worldwide as the group is being closely monitored.

That said, ISIL clearly remains a viable danger to security around the region, and the previous pandemic year has only amplified its momentum. Iraqi Foreign Minister Fuad Hussein recently stated that ISIL continues to pose a threat. He also emphasized that his country needed the support of the region and the international community in the ongoing fight against the group, warning all parties must "take ISIL seriously." But with the US withdrawal and the COVID-19 situation still not solved, Syria, as well as Iraq, appear to be destined for yet another long fight against the Islamic State.
In this chapter, you learned the basics of a network. The various components and the areas where information is entered to make the computer work on a network were covered. The basic protocols in use on the Internet were explained, along with how to configure Internet Explorer and recent issues such as spam and spyware. There is much more to learn about networking than is covered in this chapter. We encourage you to continue your studies and pursue the Network+ certification to learn even more.
Over at TBD.com/weather, John Metcalfe takes a look at the University of Arizona’s new interactive model of sea level rise. What would current estimates of 2 meters of sea level rise by 2100 mean for the DC area? The Jefferson Memorial will not just be by the water, it may be underwater. The northeast part of Roosevelt Island will gain more marshland, as well as the bit of Rock Creek where it meets the Potomac, which should please the old-timers who hunt catfish there. It’s hard to see the upside of Bolling Air Force Base becoming submerged, but the military has solid engineers – can’t they build a bigger sea wall? And Old Navy’s name will finally make a little sense as the creeping water moves inland over the Potomac Yard Shopping Center. I’m sure the chain’s marketing whizzes can figure out something about shopping with gondolas. If those were the worst effects of runaway global warming, that would be expensive to deal with, but not necessarily devastating. However, scientific modeling has seriously UNDERestimated sea level rise to this point — seas are rising faster than scientists have predicted & so far they’re not sure why. And just a few minutes of tinkering with the University of Arizona’s model reveals just how much is at stake for the DC area if scientists have even slightly lowballed sea level rise. What if it’s 3 meters instead of 2? Bye bye, National Airport & Tidal Basin: And what if scientists are dramatically underestimating sea level rise? There’s a reason this model includes 6 meters of sea level rise by 2100 — while it’s unlikely, it’s possible. And what would that mean? Might be time to relocate the nation’s capital to higher ground: The takeaway of all this — especially for places like Hampton Roads where smaller degrees of sea level rise would be much more devastating — is that so far, America is rolling the dice with the above scenarios. Congress has done nothing to address global warming, leaving our fate — be it 2 meters, 3 meters or 6 meters of sea level rise — to chance. Wouldn’t it be better to gradually reduce our carbon pollution now & reduce our chances of a worst-case scenario? Isn’t that the conservative thing to do? And the same solutions that protect our climate can also cut energy bills, strengthen our national security, and create millions of jobs. Or hey, we could follow Jim Webb’s lead & do nothing. Maybe there are big untapped economic opportunities I’m missing. Scuba diving expeditions through the underwater remains of Old Town Alexandria?
Researchers may have found a new way for people to achieve fat loss without dieting or exercising. When mice received an experimental injection in their abdominal fat, they lost about 20 percent of it. The shots contained small capsules of genetically engineered cells that release signals to nearby fat tissue. This causes it to act like brown fat, which burns excess calories, rather than storing them, in order to generate heat. (The mice's body temperature increased slightly.) The capsules are made of a polymer that has been implanted in people before, but human tests of this procedure are at least a few years away, says study author Ouliana Ziouzenkova, an assistant professor of human nutrition at the Ohio State University in Columbus.
An Emory project studying schizophrenia genetics is a good example of how geneticists are shifting from examining small, common mutations to “rare variants” when studying complex diseases. From studies of twins, doctors have known for a long time that heredity plays a big role in causing schizophrenia. But dissecting out which genes are the most important has been a challenge. Three landmark studies on schizophrenia genetics published this summer illustrate the limitations of “genome wide association” studies. New York Times science reporter Nicholas Wade summarized the results in this way: “The principal news from the three studies is that schizophrenia is caused by a very large number of errant genes, not a manageable and meaningful handful.” The limitations from this type of study comes from the type of markers geneticists are looking at, says Steve Warren, chair of the human genetics department at Emory. Genome wide association studies usually follow SNPs — single nucleotide polymorphisms. This is a one-letter change somewhere in the genetic code that is found in a fraction of the population. It’s not a big change in the genome, and in many cases, it will have a small effect on disease risk. Researchers looking for the genes behind complex diseases such as schizophrenia and autism are starting to shift their efforts away from genome wide association studies, Warren says. Think of a SNP like a misspelling of a word in a certain place in a book, he says. In contrast, the “rare variants” geneticists are starting to study more intensively are more like printers’ errors or missing pages. The rapid sequencing technology that allows scientists to investigate these changes easily is just now coming on line, he says. One example of these rare variants is DiGeorge syndrome, a deletion that gets rid of dozens of genes on one copy of chromosome 22. Children who have this chromosomal alteration often have anatomical changes to their heart and palate. But it also substantially increases the risk of schizophrenia – to about 25% lifetime risk. That’s a lot more than any of the SNPs identified this summer. Working with several Emory colleagues, researcher Brad Pearce is planning to examine the genes missing in DiGeorge syndrome in several groups of patients: people with DiGeorge, patients with “typical” schizophrenia and people at high risk of developing schizophrenia. An article in this spring’s Emory Health describes genetic research on autism. Several of the researchers mentioned there, such as geneticist Joe Cubells and psychiatrist Opal Ousley, are involved in this schizophrenia project as well, because deletions on chromosome 22 also lead to an increased risk of autism. Pearce’s project is funded through American Recovery and Reinvestment Act money from the NIH.
- This book helps to measure one's progress based on results from online training, statistics, and test questions.
- A rigorous learning strategy prepared by our internal experts.
- Thorough tutorials in the GRE sections of Quantitative (Mathematical), Verbal, and Analytical Writing Assessment (AWA), along with more than 130 topic-specific practice questions and explanations from our GRE expert instructors.

| Title | Magoosh Present 100 GRE Vocabulary |
| Author | Magoosh, Chris Lele, McGarry |
| Number Of Pages | |
9. The time of James Monroe's presidency when the Democratic-Republican party was unchallenged by a major political rival.
10. A group of U.S. Congressmen from the western states who urged that the U.S. declare war against Britain in 1812.
11. This northern Federalist expression of frustration at no longer controlling the "Virginia dynasty" was a meeting in which legitimate worries about future embargoes, expansionism, declarations of war, etc., were expressed.
12. The negotiated sale of Spain's territories in eastern and western Florida to the U.S. for $5 million.
13. It prohibited slavery north of the 36°30′ line.
14. Henry Clay's alleged shifting of electoral votes in the House to John Quincy Adams in the 1824 election in exchange for his appointment as Secretary of State.
Fun-loving, amiable, clever, and eager to please, the German Shorthaired Pointer (GSP) is a member of the American Kennel Club's (AKC) Sporting Group. Originally bred for hunting, this breed today also serves as a lovable human companion. Of the 193 breeds that are registered with the AKC, the GSP ranks #11. The German Shorthaired Pointer originated in Germany. Though the exact origin of the GSP is unknown, it's believed that the breed was the descendant of the German Bird Dog. In an effort to create an even more versatile hunting dog, Germans crossbred various different breeds of canines with the German Bird Dog, including English Pointers, Old Spanish Pointers, Arkwright Pointers, and various other types of tracking dogs, water dogs, and scent hounds. This crossbreeding resulted in the German Shorthaired Pointer, which was highly successful at locating various types of game on land and in water. The name for this breed was likely derived from the stance that he takes when he has spotted prey: a straight back with the snout pointing directly at the location of the animal it is looking for. Though the first records of the GSP were documented in the Klub Kurzhaar Stud Book in the 1870s, it is believed that variations of this breed existed prior to that. The first documented GSPs were heavy and relatively slow. In an effort to enhance the breed's agility, speed, and intelligence, breeders continued to refine these hunting dogs, and between the latter part of the 19th century and the first part of the 20th century, the German Shorthaired Pointer that we know today was created: a highly intelligent, agile, fast dog with a strong sense of smell that tracks equally as well on land as it does on water. This new breed became extremely popular with hunters throughout Germany and the rest of Europe, and their popularity spread to the United States. American sportsmen began using the GSP for hunting in the early part of the 1900s. The breed was used to track various types of game, such as partridge, quail, grouse, woodcock, rabbit, duck, and even possums and raccoons. They could also locate larger game, such as deer. In 1930, the GSP was officially registered with the AKC. It is still used for hunting today, but due to its attractive looks, lovable disposition, and desire to please, the German Shorthaired Pointer is also a beloved pet.

Characteristics of the German Shorthaired Pointer

Next, we'll go into the appearance and temperament of the German Shorthaired Pointer. The German Shorthaired Pointer is a lean, muscular, medium-sized dog. Females can stand between 21 and 23 inches tall and weigh between 45 and 60 pounds, while males can stand between 23 and 25 inches high and weigh between 55 and 70 pounds. Both genders feature a long muzzle with highly expressive eyes, floppy ears, and a large nose. German Shorthaired Pointers have a short, dense, water-repellent coat. The color of the coat is either a solid reddish brown (liver) or a combination of liver and white, with the two colors creating unique patterns, which, according to breeders and the AKC, are referred to as either patched, roan, or ticked. The GSP has webbed feet, making this breed excel at swimming. They are fast, powerful, and have high endurance, and they do just as well in water as they do on land. The distinctive physique and coat coupled with its incredible hunting capabilities make this dog a highly versatile hunter. GSPs are often referred to as "noble" or "aristocratic" dogs.
The German Shorthaired Pointer is beloved for his good-natured disposition. This breed loves to be surrounded by his human companions and generally does well with other animals. Because the GSP was bred for hunting and remains one of the best hunting dogs, this breed is highly active and prefers to spend his time exploring and playing. German Shorthaired Pointers also have a high endurance level and can keep up with the most rigorous activities. After a day of playing with plush dog toys, swimming, and scouting for anything that he catches the scent of, the GSP likes to lounge in his dog bed or curl up with human pack members. This breed is very good with young children. It can keep up with the most active child and enjoys rigorous play; however, because they are so exuberant, play with small children should be supervised, as the GSP may become over-zealous and unintentionally knock down his human playmates. The GSP is fiercely protective of his pack and will notify his human companions with an alerting, non-aggressive bark when someone unexpected is approaching, making this breed a great watchdog. The German Shorthaired Pointer does have a few traits that some may find burdensome. For example, they tend to be overly exuberant and rowdy, particularly when they are not properly exercised. As such, this breed may not be well-suited for someone who cannot dedicate the time to providing this dog with the activity that he needs. If not properly trained, the GSP may be aggressive toward small animals, such as cats, as a result of their strong instinct to track and chase. While this breed is very people-friendly, if left alone for prolonged periods of time, the German Shorthaired Pointer can suffer from separation anxiety and may incessantly bark and become destructive. The use of dog calming aids and homeopathic remedies, such as CBD oil for dogs, can help to prevent anxiety; for example, offering your pet a calming aid before leaving for work may help to ease his anxiousness. The GSP is best-suited for families that are active and enjoy spending time outside. With proper care and plenty of activity, this breed will offer plenty of love, affection, and fun.

Caring for a German Shorthaired Pointer

Like all breeds, German Shorthaired Pointers have specific care needs. It's important to understand these needs and ensure that they are being met in order to provide your pet with a happy, healthy, and fulfilling life. In order to maintain optimal health, a German Shorthaired Pointer should be fed a well-balanced, highly nutritious diet. Since the GSP was bred for hunting and they are highly active, dog food that is made specifically for active canines is recommended. Generally, they require a high caloric intake; an average of 3,000 to 3,500 calories per day is recommended to ensure that they maintain a healthy weight and have the energy they need to thrive. The dog's age and activity level should be considered when determining calorie intake; younger and more active canines will require more calories, while seniors and those that are less active will need less. Animal nutritionists recommend feeding twice a day; once in the morning and once in the afternoon or evening. Typically, German Shorthaired Pointers aren't picky eaters; they enjoy both dry dog food and wet dog food. It should be noted, however, that whichever option you offer your pet, it should contain a large percentage of protein, so a high protein dog food should be considered.
Since the GSP is so active and because they are primarily carnivorous, this breed does best on recipes that contain premium quality sources of protein. They should not be fed starches or complex carbohydrates, such as wheat, corn, soy, or white potatoes. Therefore, a grain free dog food is the best option for this breed. Avoid recipes that contain gluten-based ingredients and other fillers, such as animal byproducts. Additionally, foods that are made with artificial flavors, colors, and preservatives should be avoided. Dog food brands that feature a high percentage of healthy animal proteins, such as salmon, beef, chicken, bison, lamb, duck, and tuna, are ideal for the GSP. While this breed is naturally carnivorous, the German Shorthaired Pointer can benefit from the nutritional content of wholesome fruits and vegetables, such as carrots, blueberries, cranberries, apples, peppers, peas, and sweet potatoes. There are several commercial dog food brands that will meet the needs of a German Shorthaired Pointer.

In addition to the pleasant, lovable, and generally easy-going disposition of the German Shorthaired Pointer, another desirable trait is the minimal grooming needs of this breed. Shedding is minimal and their hair is short, so tangles and mats are not a concern. Brushing once a week with a firm-bristled dog brush is all that is needed to remove any spent hair and keep the coat and skin healthy. The German Shorthaired Pointer should only be bathed on an as-needed basis. The coat is naturally water-repellent, and excessive bathing can strip the skin and coat of natural oils, which can lead to dry skin and damaged hair. Generally, this breed should only require a bath once every 6 to 8 weeks or when visibly dirty. When bathing, select a high quality dog shampoo that is free of harsh ingredients, such as dyes and perfumes. Since this breed loves the water, bathing should be hassle-free. Baths should be given in a secure location and lukewarm water should be used. Since the GSP is such an active breed, their nails wear down naturally; however, if you can hear your pet's nails clicking when he walks, use a sturdy pair of nail clippers for dogs to trim them. This breed has webbed toes, so it's important to check and cleanse their pads on a regular basis. Once a week, apply a dog paw wax to the pads to keep them pliable and to prevent drying and cracking. The floppy ears of the German Shorthaired Pointer should be cleaned regularly, as built-up dirt, debris, and wax can lead to infection. Use a clean, damp cloth to gently wipe out the underside of the ears. A gentle dog ear cleaner can be helpful, too. To keep the teeth and gums healthy and prevent bad breath, consider brushing on a weekly basis. If your pet does not tolerate tooth brushing, dental chews can be offered to remove plaque and tartar buildup and dog breath freshener can keep bad breath at bay.
GSPs are avid swimmers, so your pet should be offered plenty of opportunities to get in the water; playing fetch with water toys for dogs and simply diving and swimming will be welcomed activities. German Shorthaired Pointers enjoy playing on land as much as they do in the water. Fetching rope and tug toys, running in a secured yard, and playing with automatic fetch machines will keep your GSP physically and mentally stimulated and ensure he is happy and healthy. German Shorthaired Pointers are very intelligent and aim to please; however, due to this breed's innate desire to track, the GSP does have a tendency to become easily distracted. As such, your pet is highly capable of being trained, but patience and persistence are essential. By keeping training sessions highly engaging and offering plenty of positive reinforcement, your GSP will be able to learn various commands. Though the GSP does have an amiable personality, it can become aggressive with other dogs and small animals if not properly trained; therefore, socialization is a must. Attending training classes and play sessions at dog parks is a great way to introduce your pet to other animals and humans. When training, start with the basic commands, and as your pet succeeds with these, you can begin introducing more complex commands and partaking in more rigorous training exercises. Because of their agility, speed, and swimming skills, the GSP excels in various types of activities, ranging from field and agility events to dock diving. Housetraining should begin as early as possible. Puppies can be trained as soon as they have weaned off their mother's milk, and older GSPs can be housebroken as soon as they enter the home. Crate training is the most effective housebreaking strategy. Canines do not like to "mess" in their dens, and since they view a dog crate as a den, this method of housetraining is highly successful for virtually all breeds, including the GSP. Make sure that the crate is properly sized. Your pet should have enough space to stand and turn around without banging into the sides or top; but he should not have excess room, as additional space can render crate training ineffective. Until completely housebroken, confine your pet in the crate when you cannot respond to his need to eliminate; however, do not keep him confined in the space too long, as doing so can lead to separation anxiety and aggression. As soon as your pet exits the crate, direct him to the proper location to use the bathroom. Establish a feeding schedule and take your pet outside 15 minutes after eating. Be sure to offer positive reinforcement and never scold your dog if he has an accident; it is a learning process and scolding is detrimental to the process. On average, the lifespan of a German Shorthaired Pointer is between 10 and 12 years. Understanding the health concerns that are associated with this breed is essential so prevention and treatment can be offered. Like all breeds, the GSP is genetically predisposed to a few different health conditions, including:
- Diabetes – Like humans, canines can also develop diabetes, and the GSP is prone to this condition. Diabetic dogs either do not produce enough insulin or do not produce any at all, and as such, they are not able to break down glucose. This condition can be life-threatening and requires medical care.
- Hyperthyroidism – This condition occurs when the thyroid produces too much hormone, which can lead to weight loss, lethargy, anxiety, and depression. Medications are available to treat hyperthyroidism.
- Canine gastric dilatation (bloat) – A gastrointestinal condition that affects canines. It occurs when a dog consumes large amounts of food and the stomach dilates, trapping food and gas so that they cannot be expelled. Symptoms include distension of the abdomen, weakness, drooling, and difficulty breathing. Emergency medical care is required, as this condition can be life-threatening.

Related pointer breeds include the German Wirehaired Pointer, Portuguese Pointer, German Longhaired Pointer, and Slovakian Wirehaired Pointer.
(Medical Xpress)—Having the seasonal flu jab could reduce the risk of suffering a stroke by almost a quarter, researchers have found. Academics from the University of Lincoln and The University of Nottingham in the UK discovered that patients who had been vaccinated against influenza were 24% less likely to suffer a stroke in the same flu season. Their findings are reported in the scientific journal Vaccine. In 2010, the same research team showed a similar link between flu vaccination and reduced risk of heart attack.

Lead investigator Professor Niro Siriwardena, who is Professor of Primary and Pre-hospital Healthcare in the School of Health and Social Care at the University of Lincoln and also a GP and Research Lead with Lincolnshire Community Health Services NHS Trust, said: "The causes of stroke are not fully understood. Classical risk factors like age, smoking and high blood pressure can account for just over half of all cases. "We know that cardiovascular diseases tend to hit during winter and that the risks may be heightened by respiratory infections such as flu. "Our study showed a highly significant association between flu vaccination and reduced risk of stroke within the same flu season. The results were consistent with our previous research into heart attack risk."

Dr Zahid Asghar, statistician on the project, supported by Dr Carol Coupland (University of Nottingham), analysed records of more than 47,000 patients who had suffered a stroke or TIA (transient ischaemic attack, or "mini stroke") between 2001 and 2009. Data were drawn from the UK's national General Practice Research Database (now the Clinical Practice Research Datalink). Alongside flu vaccine take-up, they also looked at take-up of pneumococcal vaccination, which protects against infections like pneumonia. They found flu vaccination was associated with a 24% reduction in risk of stroke. The reduction was strongest if the vaccination was given early in the flu season. There was no statistically significant change in risk of TIA with flu vaccination. Pneumococcal vaccination did not appear to reduce risks of either stroke or TIA.

The study, called IPVASTIA, used a matched case-control design. Actual cases of stroke were compared against 'control' patients, adjusted for other factors that might explain the differences in risk associated with flu vaccination such as age, existing diseases and treatment history. This type of analysis is widely used in health research to identify risk factors in large samples, although it cannot prove direct cause-and-effect relationships. Professor Siriwardena added: "Further experimental studies would be needed to better understand the relationship between flu vaccination and stroke risk. However, these findings reinforce the value of the UK's national flu vaccination programme with reduced risk of stroke appearing to be an added health benefit." In the UK the seasonal flu vaccination is recommended for everyone over 65 years of age and other at-risk groups, such as those with disabilities or chronic illnesses. Take-up of the vaccine across England is lower than national targets at 74% for over-65s in 2011/12 and around 52% for under-65s in at-risk groups.

Reference: A. Niroshan Siriwardena, Zahid Asghar, Carol C.A. Coupland, "Influenza and pneumococcal vaccination and risk of stroke or transient ischaemic attack—Matched case control study," Vaccine, available online 28 January 2014, ISSN 0264-410X, dx.doi.org/10.1016/j.vaccine.2014.01.029.
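As a rough illustration of how a case-control result turns into a headline figure like "24% less likely," here is a short Python sketch. The counts are invented for demonstration only; the actual IPVASTIA analysis matched cases to controls and adjusted for age, existing disease, and treatment history rather than relying on a single unadjusted table.

```python
# Illustrative only: computing an odds ratio from a simple case-control table.
# All counts below are made up to produce a result near the reported figure.

vaccinated_cases, unvaccinated_cases = 300, 700        # people who had a stroke
vaccinated_controls, unvaccinated_controls = 370, 654  # comparison patients who did not

odds_cases = vaccinated_cases / unvaccinated_cases         # odds of exposure among cases
odds_controls = vaccinated_controls / unvaccinated_controls # odds of exposure among controls
odds_ratio = odds_cases / odds_controls

print(round(odds_ratio, 2))                  # ~0.76
print(f"{(1 - odds_ratio):.0%} lower odds")  # ~24% lower odds of stroke among the vaccinated
```

An odds ratio below 1 means the exposure (here, vaccination) appears protective; as the article notes, association alone cannot prove cause and effect.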
Ian Breen: Notes on Past Conventions

In "Presidential Nominations" (April 1884), Oliver T. Morton addressed what he perceived to be a serious problem in the American electoral process: the tendency for political conventions to produce candidates that are neither the most capable leaders nor the choice of the majority of voters. Morton was writing at a time when conventions played a much more central role in selecting candidates. Until state primaries were instituted early in the twentieth century, delegates at the conventions made nominations and debated among themselves until a candidate was chosen. While this may sound to us like democracy in action, in fact the delegates were typically handpicked by state party bosses to ensure ahead of time that certain people would receive votes. The common voter only got to weigh in after the candidates had been chosen by the convention insiders. "In truth," Morton wrote, "the people of this country have very little to do with the choice of the supreme magistrate, their option being restricted to two men, the creatures of two practically irresponsible conventions." Not only were nominating conventions exclusive, Morton argued, they also typically produced mediocre candidates. He quoted John Stuart Mill, who had written of America's flawed candidate selection process, In the United States...the strongest party never dares put forward any of its strongest men, because every one of these, from the mere fact that he has been long in the public eye, has made himself objectionable to some portion or other of the party, and is therefore not so sure a card for rallying all their votes as a person who has never been heard of by the public at all until he is produced as the candidate. Morton thus felt that great Presidents such as Abraham Lincoln and Ulysses S. Grant had come to power in spite of the conventions, rather than because of them. Their nominations, he believed, were due to "exceptional causes." He quoted a contemporary English economist, Walter Bagehot, who argued that Lincoln's nomination for the Presidency had been a matter of luck rather than an example of the nominating process functioning effectively. It was government by an unknown quantity. Hardly any one in America had any living idea what Mr. Lincoln was like, or any definite notion what he would do...Mr. Lincoln, it is true, happened to be a man, if not of eminent ability, yet of eminent justness. But success in a lottery is no argument for lotteries. In Morton's view, then, the exclusionary nature of the nominating conventions was an impediment to true democracy. Correcting the problem, he wrote, "necessitates a transfer of power from that body to the people." To that end, he outlined a series of measures designed to put power into the hands of the voters, many of which were similar to those that were eventually adopted in the state primary system....
Before there were programmable computers, humans were the only computers. Processes were broken down into sequential steps, which a pool of people worked on, hour-after-hour, day-after-day. This process was labor-intensive and prone to error. Mathematicians sought to find a more efficient means of simulating the human computer. During this period, our world of computing progressed, as sequential tasks were finally captured into a form that a machine could process. This period brought the world relay-logic, the precursor to modern day computer circuits. People worked as programmers, converting sequential instructions into the form machines and circuits could execute. The first commercial computer was delivered in 1951. This period was a major turning point in our computing history—one which shaped our field and roles.

Founder of Computer Science and Modern Computing – Alan Turing

This discussion begins with the founder of computer science and modern computing, Alan Turing. Every computer science and software engineering student is required to learn about Turing, as computing began here. In 1936, Turing invented the Turing machine. You can read his paper titled "On Computable Numbers, with an Application to the Entscheidungsproblem". All modern-day computers are based upon the Turing machine. Therefore, we need to spend some time discussing it.

What is the Entscheidungsproblem?

In the early 1920s, German mathematician David Hilbert challenged the world to convert the human computer into provable, repetitive, reliable, and consistent mathematical expressions. He essentially wanted to arrive at a true or false state. A true statement equates to a numeric value of one. In electricity, a true value equates to a state of on. Conversely, a false value, which is a numeric value of zero, equates to an off state in electrical circuits. Think about this challenge. How can you capture deductive reasoning into discernible proofs in maths? Can every mathematical problem be solved in this manner? Could we capture the logical steps to problem-solving into the form of math? He called this challenge the Entscheidungsproblem, which is the "decision problem."

In Turing's paper, he set off to tackle computable numbers and the Entscheidungsproblem. He disproved Hilbert's challenge by showing there is no absolute method which can prove or disprove a problem in all cases. What changed our world was his proof, i.e. the Turing machine.

What is the Turing Machine?

The Turing machine solves any computing problem which can be translated into sequential and logical steps. Stop and think about the impact of this discovery. If you can describe how to solve a problem, then his machine can solve it. The Turing machine converted the human machine into a mechanical machine. How does it achieve this? Think of his machine as a very simple robot with the ability to move from side-to-side to specific points on a long paper tape. Now imagine this tape having small boxes in a row all the way down the length of the tape. Within each box it can have a single 1, 0, or nothing. That's it. The robot slides along to a specific box (or state), reads the value, fetches the instructions for that box location (code), and then does what it says. It can leave the value as is or change its state (just like in memory). Then, it would move to the position that particular code said to go to for the next state. This process continues until it receives a halt command.
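To make that read–fetch–write–move–halt loop concrete, here is a minimal sketch of a Turing-style machine in Python. The state names, the rule table, and the tiny example program (flip the last bit on the tape) are invented here for illustration; Turing's paper uses a different formalism, but the loop is the same idea.

```python
# A toy Turing-style machine: read the current cell, look up the rule for
# (state, symbol), write a symbol, move the head, and switch state until "halt".

def run_turing_machine(tape, rules, start_state="scan", blank=None, max_steps=1000):
    cells = dict(enumerate(tape))          # position -> symbol (1, 0, or blank)
    head, state = 0, start_state
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)            # read the current box
        write, move, state = rules[(state, symbol)]  # fetch the instruction for it
        cells[head] = write                          # write (or re-write) the cell
        head += {"L": -1, "R": 1, "N": 0}[move]      # move the head
    return [v for _, v in sorted(cells.items()) if v is not None]

# Example rules: slide right to the end of the tape, then flip the last bit.
rules = {
    ("scan", 0):    (0, "R", "scan"),
    ("scan", 1):    (1, "R", "scan"),
    ("scan", None): (None, "L", "flip"),   # ran off the end: back up one cell
    ("flip", 0):    (1, "N", "halt"),
    ("flip", 1):    (0, "N", "halt"),
}

print(run_turing_machine([1, 0, 1, 1], rules))   # [1, 0, 1, 0]
```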
We will go into more depth about Turing's machine and how it works in CS 0100 Big Picture of the Computer and the Web. For now, remember that Turing gave us a fundamental understanding that code and data can be represented the same way in a machine.

Meet John von Neumann

In 1945, John von Neumann took Turing's ideas and developed the machine's architecture. He designed a central core to fetch both the data and code out of memory, execute the code (perform the maths), store the results, and then repeat the process until the halt command was given. This may not sound amazing now, but in its day, it was revolutionary. His architecture included what he called "conditional control transfer," or subroutines in today's terms. Think about Turing's machine. Each time it came to a box, it fetched the instructions for that location, i.e. it went into another chunk of code or subroutine. This concept led to branching and conditional statements, such as IF (instruction) THEN (instruction), as well as looping with a FOR command. This idea of "conditional control transfer" led to the concept of "libraries" and "reuse," each of which is a cornerstone of software engineering principles and quality code.

Building off of Turing

In 1938, Konrad Zuse built the first mechanical binary programmable computer. Then, in 1943, Thomas Flowers built Colossus, which many consider to be the first all-programmable digital computer. It was used to decipher encrypted messages between Adolf Hitler and his generals during World War II. Machines had not yet become capable of being general purpose; rather, they did a specific function for a specific purpose. The first commercial computer was delivered to the U.S. Bureau of the Census in 1951. It was called UNIVAC I and it was the first computer used to predict the presidential election outcome.

Early Programming Languages

Early programming languages were written in machine language, where every instruction or task was written in an on or off state, which a machine can understand and execute. Recall that on is the same as a decimal value 1, while off is a decimal value of 0. These simple on and off states are the essence of digital circuitry. Think of a light switch on your wall. You flip the switch in one direction and the light comes on. Flip it the other direction and the light goes off. This switch's circuit is similar to the simple electrical circuit diagram in Figure 1. The switch on your wall opens and closes the circuit. When closed, the electricity flows freely through the power source (a battery in this case) to the switch on your wall and then to the lamp in your room, which turns the light on. When the switch is open (as it is shown in the diagram), the circuit is broken and no electricity flows, which turns the light off.

Now think of the on state as a value of 1. When the switch is closed (picture pushing the switch down until it touches wires on both sides), power flows, the light comes on, and the value is 1 or on. Invert that thought process. Open the switch. What happens? The switch is open, the circuit is broken, power stops flowing, the light goes off, and the value is 0 or off. Within a machine, past and present, power flowing through circuits is represented by 1s and 0s. Machine language uses these 1 and 0 values to turn on and off states within the machine.
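If you want to check any of the binary values used in the walkthrough that follows, the on/off-to-decimal conversion is a one-liner in most languages. Here is a small Python sketch; Python is simply the language chosen for illustration, since the lab itself does not prescribe one.

```python
# Quick sanity checks on the 1/0 <-> on/off <-> decimal idea.
on, off = 1, 0                 # a closed circuit reads as 1, an open one as 0

print(int("1011", 2))          # 11 -- the binary string 1011 as a decimal value
print(int("1101", 2))          # 13
print(format(11, "04b"))       # '1011' -- and back again, padded to 4 bits
print(format(2, "04b"))        # '0010'
```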
Combining these 1s and 0s allows the programmer to get the machine to do what is needed. We will cover machine logic and language later. The takeaway for now is:
- An "on" state is represented by a decimal value of 1.
- An "off" state is represented by a decimal value of 0.

This combination of 1s and 0s in the first programming languages was manually converted into a binary representation. This is machine code. Imagine programming in just 1s and 0s. You would need to code each and every step to tell the machine to do this, then that, and so on. Let's see what steps you would walk through to compute the following equation: A = B + C.

Steps to compute A = B + C in Machine Code

In machine code, the first part of the binary code represents the task to be done. It is a code in itself. Therefore, a load instruction may be 0010, an add instruction may be 1010, and a store may be 1100. Using these instructions, let's break down the steps required to tell the computer how to solve the equation A = B + C.

Note: In this next section, don't worry about understanding binary code, as we will explain the binary numbering system later in this course. The memory locations and instructions are arbitrary. If you want to see what the decimal value converts to, you can use a converter such as this one.

Step 1: Load B

First, we need to load the data stored in B into a working memory location for us to manipulate and use. In machine code, this means we are doing a load instruction. We tell the computer to go and grab the location where B is stored in memory (e.g. memory location 1) and then put it (load it) into the working memory (e.g. at location 11). In binary code, this task becomes:

0010 1011 0001

- 0010 is the Load instruction
- 1011 (decimal value 11) is the working memory location of where to store B
- 0001 (decimal value 1) is the storage memory location of B, i.e. where it is stored.

Let's do the next step.

Step 2: Load C

Just like in Step 1, we need to load the data for C into memory in order to work on it. Therefore, it is a load instruction again, where C is in memory location 2 and we put it into the working memory location of 12.

0010 1100 0010

- 0010 is the Load instruction
- 1100 (decimal value 12) is the working memory location of where to store C
- 0010 (decimal value 2) is the storage memory location of C, i.e. where it is stored.

At this point, the computer has both B and C loaded into the working memory. Now we can do the next step.

Step 3: Add the numbers in working memory

In this step we add the two numbers together. Therefore, we need to tell the computer to do an add instruction, store the result into a working memory location (e.g. 13), using the memory location for B (which is 11 from above) and C (which is 12 from above).

1010 1101 1011 1100

- 1010 is the add instruction
- 1101 (or decimal 13) is the memory location where to store the result.
- 1011 (or decimal 11) is the memory location where B is stored in working memory.
- 1100 (or decimal 12) is the memory location where C is stored in working memory.

Now we have a result for A. Next we need to store it.

Step 4: Store the result

It's time to store the result into a stored memory location. We need to tell the computer to do a store instruction from memory location 13 into the memory location (3).

1100 0011 1101

- 1100 is the store instruction
- 0011 (or decimal 3) is the memory location where to store the result.
- 1101 (or decimal 13) is the memory location where result is stored in working memory.

Congratulations, you just wrote machine code.
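To see those four instructions actually execute, here is a toy simulator sketch in Python. This is illustrative only: the opcodes and memory locations come from the steps above, but the function, the tuple-based program format, and the starting values chosen for B and C are assumptions made for this example, not part of the lab or of any real CPU.

```python
LOAD, ADD, STORE = "0010", "1010", "1100"    # opcodes from the walkthrough above

def run(program, memory):
    """Execute a list of (opcode, operand, ...) instructions against a memory dict."""
    for instr in program:
        op, operands = instr[0], [int(x, 2) for x in instr[1:]]
        if op in (LOAD, STORE):              # both copy memory[src] into memory[dest]
            dest, src = operands
            memory[dest] = memory[src]
        elif op == ADD:                      # memory[dest] = memory[a] + memory[b]
            dest, a, b = operands
            memory[dest] = memory[a] + memory[b]
    return memory

memory = {1: 4, 2: 5}                        # pretend B = 4 (location 1) and C = 5 (location 2)
program = [
    (LOAD,  "1011", "0001"),                 # Step 1: load B into working location 11
    (LOAD,  "1100", "0010"),                 # Step 2: load C into working location 12
    (ADD,   "1101", "1011", "1100"),         # Step 3: add locations 11 and 12, result into 13
    (STORE, "0011", "1101"),                 # Step 4: store location 13 into location 3 (A)
]
print(run(program, memory)[3])               # prints 9, i.e. A = B + C
```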
Complete machine code

The complete machine code for A = B + C is:

0010 1011 0001
0010 1100 0010
1010 1101 1011 1100
1100 0011 1101

Wrap it Up

Looking at the steps and having to think in binary, remember the instruction in binary code, and then ensure you don't make a mistake would be very tedious and time-consuming. It's no wonder that the early programmers were mathematicians and engineers.

Your Code to Machine Code

The binary code you just stepped through is the code your human-readable code is eventually converted into after it goes through the parsing, translation, and final conversion processes. Working hour after hour in numbers is not efficient. As we will see in the next section, our role expands as translators were invented, thereby allowing our role to abstract away the machine code into a form we could use and understand.

Code. Eat. Code. Sleep. Dream about Code. Code.

Total Lab Runtime: 01:07:56
When I toured a public elementary school last spring, one question in particular seemed to make the principal squirm. Do the kindergartners get homework, I asked? Yes, he replied, explaining that it can help to solidify concepts—but he quickly conceded that some parents weren’t at all happy about it. The debate over elementary school homework is not new, but the tirades against it just keep coming. This fall, the Atlantic published a story titled “When Homework Is Useless”; you might have also seen the Texas second-grade teacher’s no-homework policy that went viral on Facebook around the same time. “Research has been unable to prove that homework improves student performances,” the teacher wrote to class parents. OK, but I had questions. If the issue really is this black-and-white, why do elementary school teachers still assign homework? How much homework are elementary kids getting, how much is too much, and how is “too much” even determined? What should parents do if they want to put an end to it? What I discovered, after lots of digging, is a more complex issue than you’d expect. Young students are indeed getting more homework than they used to. But what’s not clear is exactly how this heavier workload is affecting their well-being. Homework has only been evaluated through the myopic lens of how it influences academic performance (spoiler: in elementary school, it doesn’t seem to). And while researchers have all sorts of ideas about how it might affect kids more generally, these possibilities haven’t been tested rigorously. The upshot, then, is that we really don’t know what homework in elementary school is doing to our kids—but there’s reason to think it can do more harm than good, particularly among disadvantaged students. First, let’s take a close look at the science on how homework affects school performance. By far the most comprehensive analysis was published in 2006 by Duke University neuroscientist and social psychologist Harris Cooper, author of The Battle Over Homework, and his colleagues. Combing through previous studies, they compared whether homework itself, as well as the amount of homework kids did, correlates with academic achievement (grades as well as scores on standardized tests), finding that for elementary school kids, there is no significant relationship between the two. In other words, elementary kids who do homework fare no better in school than kids who do not. (Their analysis did, however, find that homework in middle school and high school correlates with higher achievement but that there is a threshold in middle school: Achievement does not continue to increase when kids do more than an hour of homework each night.) Cooper doesn’t interpret the elementary school findings to mean that homework at this age is useless. For one thing, he says, we can’t make causal conclusions based on correlational studies, because things like homework and achievement can easily be influenced by other variables, such as student characteristics. If a kid is really struggling in school, he might spend twice as long on his homework compared with other students yet get worse grades. No one would interpret this to mean that the increased time he is spending on homework is causing him to get worse grades, because both outcomes are driven by whatever is giving him academic trouble. 
Likewise, a really motivated student may be more likely to finish all of his homework and get higher grades, but we wouldn’t say the homework caused him to get better grades if his motivation was the main driver. Correlations can give us hints about causal relationships (or in this case, a lack of causal relationship), but they don’t prove them. (It’s worth mentioning that Cooper’s analysis also included a few small interventional studies that tracked outcomes between kids who had been randomly assigned to receive homework each night and those who had not; these studies did suggest that homework provides benefits, but these studies, Cooper and his colleagues noted, “were all flawed in some way that compromised their ability to draw strong causal inference.”) There are, of course, many other ways that homework could affect a young child—in both good ways and bad. Cooper points out that regular, brief homework assignments might help young kids learn better time management and self-regulation skills, which could help them down the line. Regular homework also lets parents see what their kids are working on and how well they’re doing, which could tip them off to academic problems or disabilities. “For a 6-year-old to bring home 10 minutes of homework is almost nothing, but it does get them to sit down and think about it, talk to Mom and Dad, and so on,” Cooper says. On the other hand, homework can also be a source of stress and family tension. For kids from low-income families, especially, homework can be tough because kids may not have a quiet place to work, high-speed internet (or computers for that matter), or parents who are available or knowledgeable enough to help. A 2015 study surveyed parents in Providence, Rhode Island, and found that the less comfortable parents were with their kids’ homework material, the more stress the homework caused at home. “I’ve talked to parents—a lot of parents, actually—who feel very burdened by the fact that kids have to do homework at night, and the parents feel responsible for getting it done, and that starts to dominate the home life,” says Nancy Carlsson-Paige, an early-childhood education specialist at Lesley University in Cambridge, Massachusetts, and the author of Taking Back Childhood. Homework could also take kids away from other enriching activities like music, sports, free play, or family time. “It’s sort of an opportunity cost issue,” says Etta Kralovec, a teacher educator at the University of Arizona South and the co-author of The End of Homework. “I’m a fifth-grader, and I either can go play with my friend or hang out with my grandmother—or I can go home and do a worksheet for math. Those are the kinds of choices that kids have to make.” One eighth-grader told me that when he was in sixth grade, he had so much homework he couldn’t participate in the sports or music classes he wanted to. Cooper points out, however, that homework could also take the place of television or video games, which might be a good thing (but is yet another complicated topic). Then there’s the argument that as elementary school has become more rigorous in recent years—a result, many say, of No Child Left Behind and the Race to the Top Fund, both of which made schools much more accountable for low test scores—the last thing overworked, exhausted young students need is more work when they get home. 
“We’re seeing rates of school phobia and unhappiness and angst about school among young children at higher rates than ever before,” says Carol Burris, a former high school principal who is now the executive director of the nonprofit Network for Public Education. “I think that giving them a break after 3 o’clock in the afternoon is an awfully good idea.” But the crux of the problem is that, while all of these points are potentially legitimate, no one has studied how homework affects children’s well-being in general—all we’ve got are those achievement findings, which don’t tell us much of anything for elementary school. How likely is it that regular homework will help first-graders manage their time? Will it do so to a degree that offsets the added family stress or the loss of much-loved soccer practice? Is 20 minutes of homework OK, but 30 minutes too much? This research hasn’t been done, so we don’t know. The other big question—also tough to answer—is how much homework elementary school kids are actually getting. There are some highly publicized estimates of average homework time derived from a standardized test called the National Assessment of Educational Progress, which is given annually to most American students. It includes the following question for 9-, 13-, and 17-year-old test takers: “How much time did you spend on homework yesterday?” Compared over time, the answers suggest that 9-year-olds have more homework today than they used to, but not by a ton. Yet many researchers question the validity of these answers, because, they say, students aren’t typically given much homework the night before a standardized test anyway. And the data from this questionnaire—along with the data from a 2007 MetLife survey of third- to 12th-graders that is also frequently quoted as evidence that homework levels remain flat—don’t tell us what’s happening with young elementary school kids. But in the 2015 study in Providence I mentioned earlier, researchers did attempt to answer this question. They had 1,173 parents fill out a homework-related survey at pediatricians’ offices and found that the homework burden in early grades is quite high: Kindergarten and first-grade students do about three times as much homework as is recommended by the “10-minute rule.” What’s the 10-minute rule, you ask? It’s a standard, adopted by most public schools around the country (more on this later), recommending that students spend roughly 10 times their grade level in minutes on homework each night—so first-graders should be spending 10 minutes on homework and fifth-graders 50. (By this rule, kindergarteners shouldn’t be getting any homework.) Considering these numbers in combination with their findings on how homework can increase family stress, the researchers concluded, “the disproportionate homework load for K–3 found in our study calls into question whether primary school children are being exposed to a positive learning experience or to a scenario that may promote negative attitudes toward learning.” That’s just one study, conducted in one city, so it’s hard to generalize from it; clearly, we need more data. But another national online survey suggests that homework time for the younger grades has been increasing over the past three years. Annual teacher surveys conducted by the University of Phoenix reported that in 2013, only 2 percent of elementary teachers assigned more than 10 hours of homework per week. This figure quadrupled to 8 percent in 2015. 
On the bright side, though, several elementary schools in recent months announced that they have stopped assigning homework entirely. Let’s now revisit that 10-minute rule. It is a recommendation backed by the National Education Association and the National Parent Teacher Association that teachers have been using for a long time—but it is not based on any research. When teachers saw Cooper’s analysis of the homework data and noticed that the amounts of homework that correlated with the highest achievements in middle school and high school were similar to their rule, they used it as evidence that their rule was appropriate. But here’s the thing: While the 10-minute rule implies that 10 minutes of homework a night per grade is appropriate even starting in elementary school, Cooper’s data do not support this conclusion. In a nutshell, then, we don’t have evidence that homework is beneficial for young kids, yet studies suggest that they are doing more homework than even the pro-homework organizations recommend, and the amounts they’re getting also seem to be increasing. So, if you’re a parent of a first-grader who’s getting 30 minutes of homework a night, what should you do? “The first thing you should do is talk to the teacher and let the teacher know how long it’s taking the child to do homework,” Burris says. It’s best not to be confrontational—sometimes the teacher really has no idea that it’s taking so long and will make adjustments. Laura Bowman, the Virginia chapter leader at Parents Across America, a nonprofit organization for parents who want to strengthen public schools, explains: “I always feel that the initial conversation with the teacher is so important, and at that point a lot of teachers will say, ‘I did not realize how long it was taking, and if it’s going to take your child more than 10 minutes, then just do it for 10 minutes.’ ” Also, in early grades, homework should be really easy. “The assignments should be short, they should be simple, and they should lead to success,” Cooper says. “We want these kids to have a successful experience doing schoolwork on their own in another environment.” If the teacher isn’t responsive, try the principal next, Burris suggests. Connect with other parents first to see if their kids are having similar experiences. “Go up the chain of command—if you have to go to a school board meeting, then you do, and you bring a few other parents with you, because there’s strength in numbers,” Bowman says. “The parent voice is a powerful one, and we all have to do what’s in the best interest of our own children.” Parents Across America has a handy toolkit for parents who want to organize other parents around a particular issue. If you still can’t make headway, you can also tell the teacher that your child simply won’t be doing homework, or won’t be doing more than a certain amount. I know several parents who have done this without suffering any consequences other than a little side-eye from the teacher at school events. If this kind of confrontation makes you squeamish, get a letter from a pediatrician or psychologist that says it for you. Bottom line is this: You’re the best judge of how homework is affecting your child. If you’ve got a second-grader who whizzes through his worksheets, then stick with the status quo, no harm done. But if your first-grader is struggling for an hour each night, or the homework is taking him away from other activities you feel are more important, take the above steps to remedy the problem. 
You want your kid’s earliest education experiences to be as positive as they can be; what happens in elementary school will forever shape his relationship with the classroom and his motivation to learn. We, as parents, have more power than we realize, and we should not feel ashamed to wield it for the sake of our children.
Loss creates a void — a physical and emotional space that needs to be filled. In the case of the September 11, 2001 and February 26, 1993 attacks, the void is particularly severe as it affects families, a nation and the lot once occupied by the World Trade Center. Architect Michael Arad took it upon himself to fill the latter with a memorial that preserves the memory of the 3,000 people who lost their lives in these tragedies. Arad’s memorial design, which was selected from a 2003 international design competition, stood out from 5,201 applications from 63 countries. Although the original design is uniquely Arad’s, ambitious plans called for cooperative relationships between Arad, the Lower Manhattan Development Corporation and Peter Walker and Partners, a Berkeley, Calif.-based architecture firm. Challenging plans to execute construction and sky-high estimated costs grated nerves and caused relations to thin. Despite myriad difficulties, however, the disparate teams struck a collaborative balance in the end. The National September 11 Memorial is now completed and rises out of Ground Zero, open just in time for the 10-year anniversary since the tragedy. Some changes suggested by Walker were made to the design Arad submitted to the competition, but the essence of Arad’s original vision for “Reflecting Absence,” as he titled his project, has been maintained. The proposed plan offered a space in which people could fall into the hush of private and shared grief; the final creation does exactly that. Protected by trees whose leaves quietly track the passage of time with seasonal changes of color, the memorial offers a haven away from shrieking traffic and overbearing pressure of the city environment. Pools swell 30 feet below ground in the square acre prints where the Twin Towers originally stood, inviting visitors to reflect. Waterfalls hem in the pools; the water’s perpetual flow is reminiscent of a collective whisper that persists in the mists of memory. For individual commemoration, names have been engraved in bronze around the pools with the intention of visitors being able to find the names of their loved ones and create rubbings and impressions of the inscribed letters. To facilitate tracking a specific person on the memorial walls, the 9/11 Memorial website features a search bar in which one can enter a name, birthplace, residence, employer, affiliation, first responder unit or flight. Additionally, names can be searched for in an on-site museum that complements Arad’s outdoor space for remembrance. Designed by the Norwegian architecture firm Snohetta, the museum presents educational opportunities through interactive exhibits. Children who are too young to remember the events or adults who are just now ready to confront them will have the chance to learn about the victims, the events that led to the attacks and how 9/11 continues to affect us. Inside the museum and outside the plaza surrounding the pools there is space. Rather than creating emptiness, the space is filled with respect and the knowledge that, despite our checkered patterns of personal backgrounds, shared memory brings us a common American identity.
With so many advances in medical technologies and treatments, many of us are living longer and healthier lives. Too often, however, how long we live and what we die of are largely determined by the color of our skin, our gender and where we live. Despite all the progress minorities have made over the past few decades in areas of employment, education and politics — health remains an area of significant disparity. As one looks at data in the United States for nearly any major health issue, one sees huge differences in who is affected most. Right here in Washington, D.C., are some of the starkest differences: African-American children have asthma at a rate of 24 percent versus 12 percent among white children. The District reports the highest breast cancer mortality rate in the country. White women are 14 percent more likely to get breast cancer than African-American women, but African-American women have a 41 percent higher rate of death from it. The rate of new HIV infections among African-American men is more than six times the rate among white men. 12 percent of white residents are obese, while 36 percent of African-Americans are obese. There is some good news. The Affordable Care Act was strategically crafted to begin eliminating health disparities. One way that will take place is through greater access — 94 percent of Americans will be covered under the comprehensive program. Access alone, however, is not sufficient, so let’s not think we don’t need to develop other strategies. Three steps we need to take in conjunction with increased access include: 1. Raise awareness. More than 40 percent of Americans are unaware of racial and ethnic disparities in health. For ethnic minorities, it’s even more prominent; nearly half of all African-Americans and almost 80 percent of Hispanics and Latinos are unaware about the differences in health care between their groups and whites. This lack of awareness causes inaction. If people are not aware of these disparities or do not believe there is a problem, significant change will not occur. To truly raise awareness, we need to work with community groups, churches, barbershops and hair salons — the places where people gather and chat about a range of issues, including their health conditions. Make these places of information about disparity, and we’ll see people clamoring for change. 2. Confront unconscious racial bias. The majority of the medical profession remains white and male. Numerous studies have shown patients with the exact same medical presentation are referred for treatment differently depending on the color of their skin. This is despite the fact that physicians often state they treat everyone equally — evidence is often to the contrary. These biases are often unconscious and occur despite the best intentions. This creates different and disparate diagnosis and treatment. Information technology can help by providing clinical decision support tools, making some diagnostic and treatments decisions more standardized. More important, we need to get more minorities to seek careers in the health care profession. African-Americans represent less than 3 percent of U.S. doctors. This figure has not changed significantly in 100 years! Let’s profile more doctors of color in television shows, movies, books and videos. We need more mentorship programs for minority youth. And quite frankly, it is about time that academia starts promoting minorities. There are few African-American or Hispanic deans of medical schools or department chairs. 
Society made a conscious and deliberate decision to increase the number of women in medical school, and now, after nearly 25 years of trying, half of medical students are women. No one expects half of medical school students or doctors to be people of color, but it certainly should be more than 3 percent. Let’s develop a specific plan to get it to 15 percent over the next 15 years. 3. Recognize the importance of lifestyle, especially food. There are also many other determinants of good health besides access to the health care delivery system. Many people are aware of the relationship of education and income with respect to health, yet forget about lifestyle, especially food choices. Unfortunately, even when minorities have such knowledge, they can’t do much about it because of “food deserts” in minority communities — areas that lack groceries and therefore reduced ability to purchase fresh fruit and vegetables, lean meats and low-fat dairy items. An example can be seen in Ward 8. There is one full-size grocery in Ward 8 serving 70,000 people, compared with Ward 3, where there are six full-size groceries serving 74,000 people. One way to fix this is to develop tax policies to encourage grocery chains to open in minority areas to make healthful options available, which would most likely have both short-term and long-term effects on health. Along with increased access, these three steps will put us on the road to truly eliminating disparities. Tom Daschle, a Democrat from South Dakota, is a former Senate majority leader and a senior policy adviser at DLA Piper; John Whyte is currently the chief medical expert and vice president of health and medical education at Discovery Channel.
What is an apprentice?
An apprentice is a person who works for an employer in a chosen occupation and gains the necessary skills, knowledge and attitudes to become a qualified craftsperson.

What are the trades covered by the Standards-Based Apprenticeship?
- Brick & Stonelaying
- Carpentry & Joinery
- Construction Plant Fitting*
- Electronic Security Systems*
- Floor & Wall Tiling*
- Heavy Vehicle Mechanics*
- Painting & Decorating*
- Refrigeration & Air Conditioning*
- Vehicle Body Repairs*
- Wood Manufacturing and Finishing

What are the educational qualifications required to become an apprentice?
The minimum educational qualifications necessary to become an apprentice are 5 D grades in the Junior Certificate examination or equivalent, or successful completion of an approved Pre-Apprenticeship course, or being over 16 years of age with at least three years of relevant work experience approved by SOLAS. However, some employers may specify higher educational qualifications.

Are there any other requirements?
Yes – a person wishing to become an apprentice in one of the crafts marked with * above must pass the "Ishihara" Colour Vision Test 24 Plate Edition.

How do I become an apprentice?
You must obtain employment as an apprentice in your chosen trade. Your employer must be approved to train apprentices and must register you as an apprentice within 2 weeks of recruitment. Possible options include:
- A relative, neighbour or friend who works in the trade
- Local or regionally-based companies operating in the trade
- Registering with your local Employment Office and indicating your interest in becoming an apprentice. Local Employment Office staff try to match job vacancies with registered individuals where possible.

What is the minimum age?
The minimum age is 16.

How long does it take?
A minimum of 4 years. The only exception is the trade of Print Media, which is a minimum of 3 years.

How is an apprenticeship structured?
An apprenticeship generally comprises seven phases, three off-the-job and four on-the-job. The only exceptions are the Floor/Wall Tiling and Print Media apprenticeships, which have five phases, three on-the-job and two off-the-job training phases. Off-the-job training is provided as follows:
- Phase 2: Training Centre
- Phase 4: Institute of Technology or College of Further Education
- Phase 6: Institute of Technology or College of Further Education
Employers have responsibility for providing on-the-job training in Phases 1, 3, 5 and 7.

How much will I get paid during my apprenticeship?
You will be paid an apprentice rate. The actual rates paid may vary depending on the occupation and the sector of industry in which you are employed. You should seek details of apprentice rates of pay from your prospective employer.

Can I select which Training Centre or Institute of Technology I attend?
No. While every effort is made to accommodate apprentices close to home, this may not always be the case. Apprentices are scheduled on a longest-waiting basis on the day of scheduling to the nearest available training location to their home address.

What happens if I don't attend a Training Centre or Institute of Technology or College of Further Education when I am called?
SOLAS strongly recommends that apprentices accept whatever offers are made to them, as failure to accept such offers will extend the period of apprenticeship training and may have financial repercussions throughout. Failure to attend after three calls will result in your apprenticeship being automatically suspended.
Can I take leave while on off-the-job training?
You are not entitled to take holidays during your off-the-job training phases.

Do I get paid when I am in a Training Centre or Institute of Technology or College of Further Education?
During off-the-job phases, all qualifying apprentices are paid an Apprentice Training Allowance by SOLAS/ETB and, where appropriate, a contribution towards travel or accommodation costs.

If I fail my exams, can I repeat them?
If you are referred (i.e. have not met the required minimum standard in your assessments) you may repeat your exams on two more occasions, if necessary. However, failure in the final attempt will result in the termination of your apprenticeship. You may appeal the termination of your apprenticeship and, if your appeal is successful, you will be granted a final attempt.

What happens if my employer goes out of business or if I am made redundant during my apprenticeship?
Every effort will be made by SOLAS/ETB to help you progress through your apprenticeship if you find yourself in this situation. You should contact the Training Adviser assigned to you when you were registered as an apprentice, who will be able to offer you advice and assistance.

If I am made redundant and can't get work in Ireland, can I continue my apprenticeship abroad?
No; however, work experience gained abroad may be considered for accreditation towards time served.

What are my career prospects?
On successful completion of your apprenticeship, a Level 6 Advanced Certificate – Craft is awarded. This is recognised nationally and internationally as the requirement for craftsperson status. You may further develop your career, for example through company-based progression, cross-skilling, up-skilling, management or self-employment.

Further Study – Progression
On successful completion of your apprenticeship you are eligible for consideration for entry into related degree programmes provided by the Institutes of Technology, provided you also meet other special entry requirements. Details of the higher education institutes offering progression from Advanced Certificate – Craft to levels 7 and 8 are available on the FETAC website.

I don't want to attend the Awards Ceremony, so can I get my cert without going?
Yes, you may apply to the local SOLAS/ETB office in your employment location.

If I lose my card/cert can I get a replacement?
SOLAS can provide replacement cards. Application for a replacement card should be made to Apprenticeship Services. Please phone: 01 6070966. A fee will be charged for all replacements. QQI will issue a record of award to those who have misplaced their original certificate. The forms can be obtained from Apprenticeship Services at the above number.

I've worked in a trade for a number of years but wasn't a registered apprentice, can this time be considered as part of my apprenticeship?
SOLAS will consider applications for exemptions from the Standards-Based Apprenticeship programme for trade-related experience. However, applicants must be registered as an apprentice before an application can be considered.

What if I have Special Requirements or require Learning Support?
If you have any special requirements and need support (e.g. dyslexia, numeracy, literacy, physical disabilities or medical conditions), it is your responsibility to inform SOLAS/ETB at the time of registration so that these needs can be catered for when you attend off-the-job training.
If you require additional learning supports (in learning techniques, maths/science/drawing, or practical work), you should log on to www.solas.ie and go to eCollege, or contact your local Training Centre, an Institute of Technology or your local VEC, attend evening classes, or seek private tuition.
Written by Lori Heikkila

World War I had ended and a social revolution was under way! Customs and values of previous generations were rejected. Life was to be lived and enjoyed to the fullest. This was the era of the "lost generation", and the "flapper" with her rolled stockings, short skirts, and straight up-and-down look. They scandalized their elders in the cabarets, night clubs, and speakeasies that replaced the ballrooms of pre-war days. Dancing became more informal - close embraces and frequent changes of partners were now socially acceptable. Only one kind of music suited this generation - jazz, the vehicle for dancing the fox-trot, shimmy, rag, Charleston, black bottom, and various other steps of the period.

Jazz originated at the close of the nineteenth century in the seamy dance halls and brothels of the South and Midwest, where the word "jazz" commonly referred to sexual intercourse. Southern blacks, delivered from slavery a few decades before, started playing European music with Afro modifications. Jazz has many birthplaces: New Orleans, St. Louis, Memphis and Kansas City are just a few. But New Orleans was and still remains an important jazz center. The ethnic rainbow of people who gravitated to the bars and brothels was a major factor in the development of jazz. The city had been under Spanish and French rule prior to the Louisiana Purchase. By 1900, it was a blend of Spanish, French, English, German, Italian, Slavic and countless blacks originally brought in as slaves.

The first jazz bands contained a "rhythm section" consisting of a string bass, drums, and a guitar or banjo, and a "melodic section" with one or two cornets, a trombone, a clarinet, and sometimes even a violin. Years later, jazz was taken over by large orchestras; a "society jazz band" contained fifteen or more musicians. Today, there is a renewed interest in the "big band" era, even though the music has very little to do with real jazz.

True jazz is characterized by certain essential features. The first is a tendency to stress the weak beats of the bar (2nd and 4th), in contrast to traditional music, which stressed the first and third beats. The second feature is syncopation, through an extensive repetition of short and strongly rhythmic phrases or "riffs". The third feature of jazz is swing (a regular but subtle pulsation which animates 4/4 time). Swing must be present in every good jazz performance.
This chapter, Joshua 5, portrays two different responses to the knowledge of God. These two responses serve as bookends to the chapter. The chapter ends with Joshua's encounter with the commander of the Lord's army (verses 13–15). Bible scholars tell us that the Lord's army commander is Jesus Christ, the "visible revealer of the invisible God."[i] Joshua does not recognize that it is God who is standing before him (verse 13). But God reveals Himself, telling Joshua to remove his shoes because he is standing on holy ground (verse 15). Joshua obeys and removes his shoes while worshipping God.

We read about another side at the beginning of the chapter (verse 1). The local kings and the people in their kingdoms have now heard about how God stopped the water from the Jordan river by piling up the water far upstream in the city called Adam (Joshua 3:16). They saw how the Israelites crossed the Jordan walking through a dry riverbed in the middle of seasonal floods. They recognize that God is coming against them. Now, they are afraid (verse 1). They are so scared that their "fight or flight" responses kick in; the rulers and the people could not breathe! But when they recognize that God is fighting the battle, instead of turning to God (like the people of Nineveh), they continue to think that they, in their human strength, can defeat the Almighty God who gave them their humanity and strength.

The question for us today is: where do we stand when we recognize the activity of God and the presence of God in our lives? Do we chalk it up to random coincidence or something else and continue to rely on our own strength, or do we turn to God and worship Him?

[i] Keil and Delitzsch
Chandler House / Rocklyn National Historic Site of Canada
Maison Chandler / Rocklyn

Statement of Significance

Description of Historic Place
Chandler House / Rocklyn National Historic Site of Canada is located in the town of Dorchester, New Brunswick. Built in 1831 in the Classical Revival style, this well-proportioned, two-storey, five-bay house has a worked stone exterior and a low, hipped roof flanked by high stone chimneys. The front door is approached through an open porch with a pediment and columns. Official recognition refers to the building on its legal lot at the time of designation.

Chandler House / Rocklyn was designated a national historic site of Canada in 1971 because:
- this Classical Revival house was built for Father of Confederation Edward Barron Chandler and remained his property throughout his long career in public office.

The heritage value resides in the Classical Revival style of the house. Fine touches in the design of the building include the considered proportions, the manner in which the pediment on the porch repeats the angle of the hipped slate roof, and the rusticated walls on the ground floor, which contrast with the smooth ashlar facing above. Triglyphs and fluted columns enrich the handsome wooden portico, set on a stone base. This classically inspired design, with its fine detailing and use of durable materials, reflects the social and economic position of Edward Barron Chandler, a leading figure in mid-nineteenth-century Atlantic Canada.

Sources: Historic Sites and Monuments Board of Canada, Minutes, May 1971; Plaque Text, June 1976.

Key elements that contribute to the heritage character of the site include:
- its location in Dorchester, east of the Petitcodiac River;
- its landscaped setting on the Chandler property with lawns and trees;
- the rectangular massing under a low hipped roof;
- the masonry construction material detailed on the main façade with rustication on the ground floor and smooth ashlar above;
- the five-bay, symmetrically arranged façade with classically detailed and pedimented portico;
- the typical Classical Revival fenestration with double-hung six-over-six windows flanking the central bay with its tripartite upper window and door with sidelights and fanlight;
- the centre-hall plan;
- the fine interior detailing, including the interior casement shutters, recessed shelved cupboards, original fireplace mantles, and fine mouldings of the reception rooms and hallway.

Government of Canada
Historic Sites and Monuments Act
National Historic Site of Canada
1831/01/01 to 1890/01/01
Theme - Category and Type: Governing Canada - Politics and Political Processes; Expressing Intellectual and Cultural Life - Architecture and Design
Function - Category and Type: Single Dwelling
Architect / Designer: Edward Barron Chandler
Location of Supporting Documentation: National Historic Sites Directorate, Documentation Centre, 5th Floor, Room 89, 25 Eddy Street, Gatineau, Québec.
The oldest hominid skeleton ever found has been taken out of Ethiopia for a controversial tour of American museums.

Archaeologists say the 3.2-million-year-old remains - known as Lucy - are far too fragile to be moved around. But Ethiopia said it would use cash raised from the six-year tour to fund museums back home and build new ones.

The discovery of Lucy in north-eastern Ethiopia in 1974 led scientists to rethink existing theories about early human evolution. The fossilised partial skeleton of what was once a 3.5ft (1m) tall adult belonged to the earliest known hominid species, Australopithecus afarensis.

The real Lucy remains have only been exhibited publicly in Ethiopia twice. A replica is on display at the Natural History Museum in the capital Addis Ababa.

Zelalem Assefa, an Ethiopian who works at the Smithsonian Institution, a prestigious US research institute, said in Addis Ababa he disapproved of the tour. He told the Associated Press news agency: "These are original, irreplaceable materials. These are things you don't gamble with."

The BBC's Elizabeth Blunt in Addis Ababa said Ethiopian exiles in the US have mounted a vociferous campaign against the exhibition.

The skeleton will go on display first at the Houston Museum of Natural Science. Ethiopian officials said New York, Denver and Chicago were among the other US tour stops.
Lithium is the first member of the alkali metal family. It is the lightest of all the metals and the least dense of all solids. The major use of lithium compounds is in rechargeable and non-rechargeable batteries. It is used in a wide range of products, including heat-resistant glass, ceramics, and high-strength/low-weight alloys which are principally utilized in aircraft. Compounds of lithium have also been widely used in the medical industry to treat bipolar disorder.

The lithium compounds market is driven by various end-user industries such as li-ion batteries, glass & ceramics and others. The increase in use of portable devices using li-ion batteries, the increasing focus of consumers & government agencies on environmental concerns, and the shift from fuel-burning cars to electric vehicles are driving the growth of the market. Global demand for lithium compounds could possibly rise to about XX tons by 2021. Lithium carbonate is one such compound from which lithium is extracted; about xx tons are produced every year. Lithium chloride is highly hygroscopic and is used in air conditioning, while lithium stearate is used as an all-purpose, high-temperature lubricant. Lithium carbonate is used in drugs to treat manic depression. Lithium hydride is used as a means of storing hydrogen, which is used as a fuel.

The market is segmented by derivative, by application, by end user and by geography. Non-rechargeable batteries include coin or cylindrical batteries used in calculators and digital cameras. Lithium batteries have a higher energy density compared to alkaline batteries and are of low weight. Rechargeable batteries are used in powering cell phones, laptops, other hand-held electronic devices and power tools. The advantages of the lithium secondary battery are its higher energy density and lighter weight compared to nickel-cadmium and nickel-metal hydride batteries.

Asia-Pacific is the global leader in the consumption of lithium compounds, and is expected to dominate in the coming years. China, Japan, and South Korea are the key countries in this region. The Asia-Pacific market is expected to grow at the highest CAGR from 2016 to 2021. North America is an emerging market for lithium compounds, due to the increasing demand from end-user industries. This region constituted the second-largest market for lithium compounds consumption in 2015.

Sample Companies Profiled in this Report are:
- SQM (Chile)
- Shanghai China Lithium Industrial Co., Ltd. (China)
- Albemarle Corporation (U.S.)
- Sichuan Tianqui lithium (China)
- FMC Corporation (U.S.)
Accidental Discovery: (1) Finding something good or useful without looking for it; could also be called "Serendipity." (2) Innovation as a by-product of research and development in a random field.

Adaptable Products: (1) Products that are able to adapt themselves to new environments, new states, or new user-defined tasks. (2) The characteristic of a product that lends its usefulness to a variety of tasks.

Adoption Curve: (1) A bell-shaped curve that arranges innovators, early adopters, early majority, late majority, and laggards in reference to marketing. (2) A model that classifies adopters of innovations into various categories, based on the idea that certain individuals are inevitably more open to adoption than others.

Affinity Charting: Involves each member of a group writing their ideas about a subject, then all of the ideas being clustered together under appropriate headings. Helps groups understand each other's qualitative data and opinions and come to a consensus view on the subject at hand.

Alliance: An agreement between two entities (for example, two companies), motivated by mutual gain such as cost reduction and improved service for the customer.

Alpha Test: (1) The first test of newly developed hardware or software in a laboratory setting. (2) Testing by either potential customers or an independent test team at the developers' site.

Analogical Thinking: Using the solutions from similar problems as reference to solve a current problem; taking ideas from one context and applying them to another context to produce a new idea.

Analytical Hierarchy Process (AHP): A structured technique which helps organize complex decisions in order to find the best solution for the goal; it is useful for group decision making and long-term goals. There are three main elements: the goal, criteria, and alternatives. Top in the hierarchy is the goal, given a value of 1, and it is broken down into criteria, then subcriteria, and so on as necessary. The criteria are given a numerical value that is weighted by importance. The alternatives are the "answers" to the decision and can be ranked by their numerical value given by how they interact with the criteria (a simplified scoring sketch follows this group of entries).

Analyzer: A device or person that performs an analysis.

Ansoff Matrix (Product/Market Matrix): A tool that helps businesses decide their product and market growth strategy.

Anticipatory Failure Determination (AFD): A method of failure prediction or failure analysis which intentionally seeks to find all of the ways a system can fail. For failure analysis, one looks to intentionally create the failure that occurred. For failure prediction, one looks to intentionally create all possible failures. This method transforms a low-knowledge area, "how did this/can this fail?", into a high-knowledge area, "these methods are how it fails." Once this knowledge is acquired, solutions can be found for the problems so that a high-quality product is produced.

Applications Development: (1) The development of a software product in a planned and structured process to perform a task, such as keeping inventory and billing customers. (2) The creation of programs that perform micro-tasks or functions for software and hardware. These are useful for managing large quantities of data quickly and automatically.

Architecture: A structural design of shared environments; methods of organizing and labeling websites, intranets, and online communities. Ways of bringing the principles of design and architecture to the digital landscape.
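To make the weighted-scoring idea in the Analytical Hierarchy Process entry above concrete, here is a minimal Python sketch. It is a simplified illustration only: a full AHP implementation derives the weights from pairwise-comparison matrices and consistency checks, whereas this sketch assumes the criterion weights and alternative scores are already given, and all names and numbers are hypothetical.

    # Simplified AHP-style scoring: rank alternatives by weighted criterion scores.
    # Criterion weights should sum to 1 (representing the goal at the top of the hierarchy).
    criteria_weights = {"cost": 0.5, "quality": 0.3, "time_to_market": 0.2}

    # Each alternative is scored against every criterion (here on a 0-1 scale).
    alternatives = {
        "concept_a": {"cost": 0.9, "quality": 0.6, "time_to_market": 0.4},
        "concept_b": {"cost": 0.5, "quality": 0.9, "time_to_market": 0.7},
    }

    def overall_score(scores, weights):
        # Weighted sum of the criterion scores for one alternative.
        return sum(weights[c] * scores[c] for c in weights)

    ranked = sorted(alternatives.items(),
                    key=lambda item: overall_score(item[1], criteria_weights),
                    reverse=True)
    for name, scores in ranked:
        print(name, round(overall_score(scores, criteria_weights), 3))

The alternative with the highest weighted total is the preferred "answer" to the decision; changing the weights shows how sensitive the ranking is to what the group considers important.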
Asynchronous Groupware: A system or process that allows groups to interact and collaborate at different times through email, writing systems, or other electronic means.

Audit: An overall analysis of a person, business, organization or process. The purpose is to evaluate a particular area such as energy conservation, project management, or financial accountability.

Augmented Product: The non-physical part of a product, such as a warranty.

Autonomous Products: Products which make decisions and work with little to no user interaction. Many autonomous products have sensors which supply constant data about the environment to enable the decision-making process.

Autonomous Team (also: autonomous work group): A self-contained group which works towards specific goals without the influence of an outside party like a manager. Instead, the group itself determines goals, timelines, and work practices.

Back-Up: Evidence that an innovation is successful and worth investing in.

Balanced Scorecard: A way of measuring performance in order to provide feedback to an organization so that management can better implement strategies.

Benchmarking: The method of searching for new or better procedures by comparing your own procedures to those of the very best. It involves quantitative and qualitative data and can apply to both services and products.

Benefit: Advantage, desirable outcome, desirable attribute of a service or good; what a customer receives from making a purchase.

Best Practice: A technique that has continually shown superior results compared to other methods, used as a benchmark.

Best Practice Study: The procedure of figuring out which method results in the best practice, or best way to do something.

Beta Test: Pilot test of a product before commercial production.

Brainstorming: Can be a spiral, a model, a web; any means by which companies solve problems through creative ideation.

Brand: Unique combination of symbols, words, sounds or images that identify a product and separate it from competitors.

Brand Development Index (BDI): The percentage of a brand's sales in a particular area in relation to that area's share of the country's population (a state, city, county, etc.). The BDI is derived by dividing an area's percent of total U.S. sales by that area's percent of population (a worked example of this and of the break-even calculation follows this group of entries).

Breadboard: A thin, blank, often white board on which a prototype circuit with numerous connections for circuit elements is constructed.

Break-even Point: The volume of sales at which a company's net sales just equals its costs. No profit is made at the break-even point.

Business Analysis: The practice of enabling change in an organizational context by defining needs and recommending solutions that deliver value to stakeholders.

Business Case: A document that facilitates a decision to start or continue a new project, major product development or feature enhancement. It should contain the information necessary for the business to make a decision.

Business Model Innovation: Centers on taking an existing business model and adapting it to a new direction in order to occupy its own market niche.

Business-to-Business: Transactions between businesses, such as between a manufacturer and a wholesaler or between a wholesaler and a retailer.

Buyer: Party which acquires, or agrees to acquire, ownership (in the case of goods), or benefit or usage (in the case of services), in exchange for money or other consideration under a contract of sale. Also referred to as a purchaser: the one who bought the product.
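A brief worked example of the Brand Development Index and break-even arithmetic defined above, sketched in Python. The sales, population and cost figures are hypothetical and chosen only to show the calculations; the BDI is expressed here as an index multiplied by 100, a common convention on top of the quotient described in the entry.

    # Brand Development Index: area's share of total sales divided by its share of population.
    area_pct_of_sales = 6.0        # area accounts for 6% of total U.S. sales (hypothetical)
    area_pct_of_population = 4.0   # area holds 4% of the U.S. population (hypothetical)
    bdi = (area_pct_of_sales / area_pct_of_population) * 100
    print("BDI:", bdi)             # 150.0 -> the brand over-performs in this area

    # Break-even point: unit sales at which total revenue equals total costs.
    fixed_costs = 50_000.0         # per period (hypothetical)
    price_per_unit = 25.0
    variable_cost_per_unit = 15.0
    break_even_units = fixed_costs / (price_per_unit - variable_cost_per_unit)
    print("Break-even units:", break_even_units)   # 5000.0 units before any profit is made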
Buyer Concentration: Measures the extent to which a large percentage of a given product is purchased by relatively few buyers.

Cannibalization: A phenomenon that results when a firm develops a new product or service that steals business or market share from one or more of its existing products and services.

Capacity Planning: The process of determining the production capacity needed by an organization to meet changing demands for its products. In the context of capacity planning, "capacity" is the maximum amount of work that an organization is capable of completing in a given period.

Centers of Excellence: Most often unique to the organization or business unit that creates them. They are usually composed of elements that make them a centralized collection of subject matter, research, standards and policy design, educational opportunities and success criteria in the given subject.

Certification: Refers to the confirmation of certain characteristics of an object, person, or organization. This confirmation is often, but not always, provided by some form of external review, education, assessment, or audit.

Champion: An individual who is intensely interested and involved with the overall objectives and goals of the project and who plays a dominant role in many of the research-engineering interaction events through some of the stages, overcoming technical and organizational obstacles and pulling the effort through to its final achievement by the sheer force of his will and energy.

Charter: A document outlining the principles, functions, and organization of a corporate body.

Checklist: A type of informational job aid used to reduce failure by compensating for potential limits of human memory and attention. It helps to ensure consistency and completeness in carrying out a task.

Chunks: Small parts, or tasks, of a problem. They form one solution when pulled together.

Clock Speed: The time it takes for innovation cycles to pass.

Co-location: A term used to describe a central shared network or server utilized by all members of a facility or organization, such as a central server in an office used by all employees. This helps keep individual employees easily connected within their own business community.

Cognitive Dissonance: The state of having inconsistent thoughts, beliefs, or attitudes, especially as relating to behavioral decisions and attitude change.

Cognitive Model: An approximation to animal cognitive processes (predominantly human) for the purposes of comprehension and prediction.

Cognitive Walk Through: A usability inspection method used to identify usability issues in a design or product, focusing on how easy it is for new users to accomplish tasks with the system.

Collaborative Product Development (CPD): A business strategy or technique involving the use of specialized software applications and work processes to allow multiple organizations to easily cooperate on the development of a product. This cooperation may utilize resources such as email, chat/video software, desktop sharing, and visualization tools like computer-aided design (CAD).

Commercialization: The process or cycle of introducing a product to the market. Similarly: bringing a product out of its development stage to make it available for mass-market use and distribution.

Competitive Intelligence: The process of gathering, analyzing, and distributing information about products, competitors, and customers to support the strategic decision-making ability of organization executives.
Computer-Aided Design (CAD), also known as Computer-Aided Drafting and Design (CADD): The use of computer technology and specialized software to develop precise drawings and computer models that can be easily stored, shared, and modified.

Computer-Aided Engineering (CAE): The use of computers and specialized software to perform automated analysis of engineering designs to calculate their performance and properties.

Computer-Enhanced Creativity: The phenomenon by which the utilization of computer technology and software enhances and eases the creative abilities of the individual; the use of computers to expand personal imagination and its translation into new or innovative concepts.

Concept Generation: The means by which new ideas, designs, and models are created; sometimes used to define the second phase of the innovation process, also called idea generation or ideation.

Concept Statement: A brief official summary of the purpose of a project.

Concept Study Activity: An activity in which a concept can be tested to find flaws in the concept.

Concept Test: The process of using quantitative and qualitative methods to evaluate consumer response to a product or idea prior to the introduction of the product to the market; a preemptive survey of the consumer market for a product or idea.

Concurrency: Doing multiple steps of a project at once instead of tackling one at a time.

Concurrent Engineering: Designing and manufacturing a product at the same time.

Conjoint Analysis: A method in which different features of a product or service are evaluated by having consumers test them.

Consensus: Agreement in judgment or opinion reached by the entirety of a group or organization; general agreement; a form of decision making requiring all parties to agree completely on issues.

Consumer: The target demographic(s) or group(s) of people that a product or service is targeting.

Consumer Market: The purchasing of products or services by consumers.

Consumer Need: A problem that consumers would like fixed by a service or product.

Consumer Panels: A sample of consumers in a consumer market whose buying behavior is assumed to represent the entire consumer market.

Contextual Inquiry: An open-ended form of research-based interview, in which a researcher observes a subject during the course of two hours of normal activity, and then provides feedback regarding the activity. It is highly flexible in its methodology, allowing it to be adapted to almost any activity or workplace. It provides detailed and relevant information regarding the nature of the activity, and is useful in uncovering tacit knowledge often unknown even to the subjects themselves.

Contextual Market Research: A hands-on inquiry into the information, segmentation, and trends of a potential market. It distinguishes the need, market size, and competition, and provides an edge on the competition.

Contingency Plan: A preemptive course of action if a desired behavior or procedure goes awry with potentially catastrophic consequences.

Continuous Improvement: An ongoing effort to improve a product, services, or goods. It can be incremental or breakthrough, and delivery processes are constantly under scrutiny in order to improve their efficacy.

Continuous Learning Activity: An individual-based form of ongoing work activity that keeps employees brushed up in areas of their field, such as disciplines, policy initiatives, leadership skills, and career broadening. A CLA is essential in keeping an organization mobile and growing in a changing environment.
Contract Developer: A supplier who provides a certain good or service (for example, elements of a software system) under a mutual obligation, and is entitled to obtain a fee from a client for providing such services.

Controlled Store Testing: Provides a statistically reliable set of results based on a closely monitored, small-scale, real-world test environment. Useful for product testing, retail, and price points, and is used to help make a go/no-go decision on a capital investment.

Convergent Thinking: Seeking a single, well-established answer to a problem, as in standardized tests in public school. Emphasizes logic and repetition, and does not encourage unconventional solutions.

Cooperation (Team-Based): A group of more than one person working towards a common goal cohesively and productively.

Crowdsourcing: Outsourcing tasks to a broad and loosely defined group of people outside of the company.

Customer Value Added Ratio: A comparison of what a company's products are worth (WWPF, or worth-what-paid-for) to the WWPF of a competitor's products.

Customer-based Success: Success that is based on customers and the owners' consideration of the value that a firm can provide to a customer and the value that a customer can provide to the firm.

Cycle Time: The period required to complete one cycle of an operation, or to complete a function, job, or task from start to finish. Cycle time is used in differentiating the total duration of a process from its run time.

Dashboard: Shows a user useful files and other objects that they have recently accessed or that are relevant to what they are doing.

Database: A systematically organized or structured repository of indexed information (usually as a group of linked data files) that allows easy retrieval, updating, analysis, and output of data. Usually stored in a computer, this data could be in the form of graphics, reports, scripts, tables, text, etc., representing almost every kind of information. Most computer applications (including antivirus software, spreadsheets, word-processors) are databases at their core.

Decision Screens: A screen that shows the decisions that are typically made during the processing rounds.

Decision Tree: A type of tree diagram used in determining the optimum course of action in situations having several possible alternatives with uncertain outcomes. The resulting chart or diagram (which looks like a cluster of tree branches) displays the structure of a particular decision, and the interrelationships and interplay between different alternatives, decisions, and possible outcomes (a small expected-value sketch follows this group of entries).

Defenders: A group of individuals who protect rights, individuals, or information.

Deliverable: A report or item that must be completed and delivered under the terms of an agreement or contract.

Delphi Processes: A method of structuring communication and interaction to focus effectively on a large task. They target emerging trends rather than current statuses. A group is given a poll of anonymous assessments from which a correct answer is gradually derived by feedback and response.

Demographic: Recent statistical evaluations of populations. They illustrate the large masses of individuals to be targeted and catered to in big business.

Derivative: Work which relies on previously copyrighted material. Derivative work is protected by copyright law only if it displays creativity and deviation from simply copying.
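As a small illustration of how the branches of a decision tree (defined above) are often evaluated numerically, the Python sketch below compares two alternatives by expected value. The alternatives, probabilities and payoffs are hypothetical, and real trees may have many more branches and stages; this only shows the basic probability-weighted arithmetic.

    # Expected value of each alternative = sum of (probability x payoff) over its outcomes.
    decision_tree = {
        "launch_now":   [(0.6, 120_000), (0.4, -40_000)],   # (probability, payoff)
        "delay_launch": [(0.8, 70_000), (0.2, -10_000)],
    }

    def expected_value(outcomes):
        return sum(p * payoff for p, payoff in outcomes)

    for alternative, outcomes in decision_tree.items():
        print(alternative, expected_value(outcomes))

    best = max(decision_tree, key=lambda alt: expected_value(decision_tree[alt]))
    print("Best alternative by expected value:", best)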
Design for Excellence (DFX): Designing for excellence is a label which encompasses the general attitude of improving various parts of a product including: development, production, utilization, and disposal. Design guidelines aim to improve and invent better components and methods. Design for Maintainability (DFMt): The process of designing a product with the intention of decreasing the requirements for maintainability. This includes decreasing maintenance difficulty, decreasing frequency of maintenance requirements, increasing availability to consumers, and decreasing logistics costs. Design for Manufacturability (DFM): The engineering practice of defining a product by the methods of most efficient, least expensive, and most quickly repeatable production. Design for the Environment (DFE): Is the new “Green” movement. Focuses on carbon emission, expenditure of non-replenishable resources, and impact on natural ecosystems and aims to reduce these effects. Design of Experiments (DOE): Essentially the process of conducting experiments in order to determine the root cause of a desired (or undesired) outcome. This involves the establishment of independent variables, an adequate selection of subjects, and a controlled manipulation of variables. Design to Cost: Aims to create a product with the lowest cost possible with an acceptable product. Usually a maximum cost is established and trade-offs and optimizations are analyzed. Cost is treated as a limitation rather than a variable. Design Validation: The process of verifying that the product meets the requirements of the target demographic. Designs may be validated through experimentation, but the ultimate test it the marketplace. Development Change Order (DCO): A component of the change management process whereby changes in the scope of work agreed too. Development: The act or process of developing; a product, concept or idea. Diffusion: The process by which a new idea or a new product is accepted by the market. Digital Mock-Up: A concept that allows the description of a product usually in 3D, for its entire life cycle. Discontinuous Innovation: Innovation that, if adopted, requires a significant change in behavior. Discounted Cash-Flow (DCF) Analysis: A valuation method used to estimate the attractiveness of an investment opportunity. Uses future free cash flow projections and discounts them (most often using the weighted average cost of capital) to arrive at a present value, which is used to evaluate the potential for investment. Discrete Choice Experiment: Widely used for the analysis of individual choice behavior and can be applied to choice problems in many fields such as economics, environmental management, urban planning, etc. Dispersed Teams: A group of individuals who work across time, space, and organizational boundaries with links strengthened by webs of communication technology. Distribution: One of the four elements of the marketing mix. An organization or set of organizations (go-between) involved in the process of making a product or service available for use or consumption by a consumer or business user. Divergent Thinking: To generate many different ideas about a topic in a short period of time. Diversification: Refers to when a company decides to focus on creating new products and services to sell rather than improving on existing ideas. This often means going into territory the company is not familiar with and can be very risky yet very rewarding. 
Dynamically Continuous Innovation: The introduction of new products with an element of significant innovation that could require major reassessment of the product within customer’s buying behavior. Early Adopters: A person who starts using a product or technology as soon as it becomes available. Economic Value Added (EVA): A measurement of shareholder value as a company’s operating profits after tax, less an appropriate charge for the capital used in creating the profits. Empathic Design: A user-centered design approach that pays attention to the user’s feelings toward a product. Engineering Design: The systematic and creative application of scientific and mathematical principles to practical ends such as the design, manufacture, and operation of efficient and economical structures, machines, processes, and systems. Engineering Model – (Model-Driven Engineering – MDE): A software development methodology which focuses on creating and exploiting domain models (that is, abstract representations of the knowledge and activities that govern a particular application domain), rather than on the computing (or algorithmic) concepts. The MDE approach is meant to increase productivity by maximizing compatibility between systems (via reuse of standardized models), simplifying the process of design (via models of recurring design patterns in the application domain), and promoting communication between individuals and teams working on the system (via a standardization of the terminology and the best practices used in the application domain). Enhanced New Product: A new product that has enhanced features that enable it to claim superiority over competitors on the basis of a common ground. In addition to unique features, it heavily advertises that it offers something that other brands lack. Entrance Requirement: Something essential to the existence or occurrence of something else prior to embarking on an opportunity. Entrepreneur: One who organizes, manages, and assumes the risks of a business or enterprise. Environmental Sustainability: Addressing the economic, environmental and social responsibilities and managing them accordingly towards the attainment of a desired level of extracting natural resources without destroying the ecological balance of an area. Ergonomic Design: The application of human factor data to the design of products and spaces to improve function and efficiency. Ethnocentric Approach/Company: When persons of the parent country of a business or company fill key positions within the company at home and abroad. This technique is useful when introducing a new product or technology to another country. Ethnography: A detailed, often scientific, description of a particular society or culture. Evaluative Market Research: Research done in order to gain a more through understanding of a particular market in order to use resources and sell products more efficiently. Also performed when determining critical market issues or problems. Event: A turning point in the process of innovation, usually a critical or dramatic occurring, for the better or worse of the company, innovation, etc. Event Map: Organizing the when, where, benefits, losses, costs, and/or overall results of events in sequential order. Excursion: A trip taken by company workers in order to benefit or advertise a particular product or idea. Exit Requirement: a.k.a. Exit Criteria: A specific set of requirements for businesses, investors, and/or companies to leave a particular business. 
Those requirements include time; profit margins, product checkpoints met, etc. Exit Strategy: A long-term plan for the future of the company. Typically, companies are passed down through families, bought by other companies, or traded within the stock market. Experience Curve: The experience curve shows the relationship between production cost and cumulative production quantity for a given product. New technologies also affect the curve. Explicit Customer Requirement: A requirement of the customer that is voiced through feedback and other sources that dictates the success of the product. Without such a requirement, the product will fail. Factory Cost: The cost in money to produce a product, usually measured by currency and excluding any blood, sweat, or tears that went into its creation. Failure Mode Effects Analysis (FMEA): A process in which you systematically analyze a product’s faults and rate them based on their severity, occurrence, noticeability, and reveals many new avenues for more innovation to occur. Failure Rate: See (Success Rate) Feasibility Determination: A measurement of how humanly possible an idea is, and whether or not it is something to be pursued. Feature: (1) A facet of a given product. (2) An unintended fault that becomes preferable. Feature Creep: Innovation at its most common pace. Features added on or improved in increments. Field Testing: The process of which one discovers many, many new faults to improve on. Financial Success: An unpredictable situation given up to chance that only has one constant: innovation. Firefighting: Patching up faults, without ever fixing them. May lead to actual firefighting in some cases. Firm-Level Success: Success measured by a company’s ability to thrive. First-to-Market: When a company thinks they have a hot new idea or product, it will try to operate at hyper-speed in order to seize first-mover advantage and gain market share. Focus Groups: A focus group is a form of qualitative research in which a group of people are asked about their perceptions, opinions, beliefs, and attitudes towards a product, service, concept, advertisement, idea, or packaging. Forecast: The outlook and prediction of a market. Function: Specific process, action or task that a system is able to perform. Functional Elements: Parts of a product that serve a specific purpose. Functional Pipeline Management: Determine whether, and how a set of projects in the portfolio can be executed by a company in a specified time, given finite development resources in the company. Functional Schematic: A presentation of the element-by-element relationship of all parts of a system. Functional Testing: Test done as quality control, performed to ensure the product meets all of its specifications. Fuzzy Front End: The organization formulates a concept of the product to be developed and decides whether or not to invest resources in the further development of an idea. Fuzzy Gates: One of the various stage gates in discussing process versus time to market. Gamma Test: Measures the strength of association of the cross tabulated data when both variables are measured at the ordinal level. Gantt Chart: Bar chart that illustrates a project schedule. Gap Analysis: A tool that helps companies compare actual performance with potential performance. Gate: A set of values that serve to isolate a specific group of cytometrics events from a large set. Gatekeepers: The hiring of agent to pursue the principal’s interests. Generative Market Research: Market research (focus groups, surveys, etc.) 
used to produce results of how to attract certain markets. Geocentric Approach/ Company: Company with offices in multiple nations that attempts to achieve goals on a local/international level. Globalization: A product/service that is developed to be distributed globally, but is also fashionable to accommodate the user/consumer in a local market. Graceful Degradation: Also known as fault-tolerance, is the property that enables a system to continue operating properly in the failure of some of its components. Green Architecture: Environmentally conscious design techniques in architecture to minimize the negative environmental impact of buildings. Green New Product Development (NPD): A new product whose greenness is significantly better than conventional or competitive products. Gross Rating Points (GRPs): Term used in advertising to measure the size of an audience reached by specific media vehicle. Calculated by multiplying the percentage of target audience reached by the frequency the audience saw the advertisement. Groupware: Computer software designed to help the way documents and rich media are shared to enable more effective team collaboration. Growth Stage: Third stage in a product’s life cycle where sales revenue increases rapidly, and profits reach a peak. Heavyweight Team: Developmental project team consisting of specialized experts led by a project manager. The project manager has direct access to, and responsibility for, the work of all those involved in the project. Hunting Ground: New business opportunity. Hunting for Hunting Grounds: Discovering fundamentally new opportunities and exploring new areas of growth for a business. Hurdle Rate: Minimum acceptable rate of return a project manager or company is willing to accept before starting a project, given its rick and the opportunity cost of foregoing other projects. Idea: An idea can be a “plan of action” or an “intention” to do or create something. Idea Exchange: Incorporating different people with their different ideas and perspectives through conversing. Idea Generation (Ideation): To form an idea; imagine or conceive. Idea Merit Index: Systematic way to understand the particular merit of an idea. Implementation Team: A cross-functional executive team representing various areas of the company, project, or idea. Implicit Product Requirement: In explicit requirements that are not directly expressed or captured but are essential to meet a company’s goal. They are things that are assumed to be “there”. In-Licensing: A partnership between two companies to share the benefits and risks of licensing a new product due to shared interests, goals, or intentions. Incremental Improvement: Improvements that are implemented on a continual basis. Incremental Innovation: “Sustaining innovation” by using existing forms or technologies as a starting point. Then by using incremental improvements or reconfiguration to serve some other process. Industrial Design (ID): The use or combination of applied art and applied science to improve aesthetics, ergonomics, and usability, of a product; may also be used to improve the products marketability and production. Information: Knowledge communicated, gained, or received concerning a particular idea, fact, or circumstance. Information Acceleration: Refers to the staggering rate at which information is created globally. 
This concept is sometimes used to create models, or virtual buying environments, for new or developing products that simulate the environment available at the time the consumer will be purchasing the product. Informed Intuition: The process of gathering as much information about your consumers as possible and developing a deep understanding of the customer base (who are they, how do they perceive you, what marketing techniques work and what doesn’t, the overall economical situation, etc.) in order to make an informed gut decision on how to move forward. Initial Screening: The elimination of product concepts and ideas early on in the process of new product development before irrationally devoting resources and funds to an idea that may not be further developed. Innovation: The creation of new or improved products, ideas, processes, or services, which are developed and marketed. Innovation differs from invention in that invention focuses on the manufacturing of a product whereas innovation is based around new ideas and creativity. Innovation Engine: A set of social practices and organizational structures that promote ongoing stimulation of new ideas, combined with mechanisms that can reliably and effectively channel those ideas into a flourishing network of collaborative projects. Innovation Strategy: A road map containing clear goals and tactics put together by a company or an individual to facilitate innovation. Innovative Problem Solving: Combines rigorous problem definition, pattern-breaking generation of ideas, and action planning which results in new, unique, and unexpected solutions. Stages of innovative problem solving include: framing the problem, diagnosis, generating solutions, making choices, and taking action. Integrated Architecture: The convergence of components to simplify and optimize a process or structure. Integrated Product Development (IPD): Involves the integration of both the design of a new product and the design of the manufacturing process in order to achieve fast, low cost development and production while still providing a high quality product. Intellectual Property: An idea, design or concept that doesn’t manifest itself in a physical form. This could include copyrights, trademarks, patents, trade secrets, etc. Interlocking Teams: The collaboration of different specialized groups, all with the task of accomplishing the same objective. These teams could specialize in business, innovation, engineering, design, art, marketing, etc. Internal Rate of Return (IRR): Used in capital budgeting to measure the profitability of investments. The IRR makes the net present value of all cash flows from a particular project equal to zero. Generally speaking, the higher a project’s internal rate of return, the more desirable it is to undertake the project. Intrapreneur: A member of a business (most likely a manager) whose task is to encourage risk taking and innovative thinking among fellow coworkers. Introduction Stage: One of the first stages in any products life cycle. In this stage the product is being introduced onto the market, and sales numbers a generally not very high. There is a responsibility for the business to promote the product and “educate the public” about the products existence. Invention: Taking already existing ideas, concepts, and designs and utilizing them in a new way that achieves a new task. ISO-9000: A quality management system that was designed to help organizations ensure that they meet the needs of customers and other stakeholders. 
Issue: A problem or dilemma that needs to be addressed. Journal of Product Innovation Management: The leading academic journal devoted to the latest research, theory, and practice in new product and service development. It is published six times a year, and is one of the important benefits of being a member of PDMA (Product Development and Management Association). Kaizen: A Japanese word for “improvement.” The word comes from Japan following World War II. It comes from the Japanese words 改 (“kai”) which means “change” or “to correct” and 善 (“zen”) which means “good”. Launch: The date of and events surrounding the release of a product. Lead User: A specific type of user of a product or service that is on the leading edge of significant market trends. There is a strong incentive to find solutions to meet the needs of these users. When development of products or services are completed, they often become important commercially as user needs become the mainstream. The term was coined in 1986 by Professor Eric von Hippel. Lean NPD: Product development which maximizes output with a minimum of input or resources. Learning Curve: The learning curve of a product is the average rate for a user to come to understand a product and use it with ease. a high curve is a product that’s difficult to learn to work with and is considered less consumer friendly. Learning Organization: An organic group/company which always seeks to evolve by educating its staff. Life Cycle Cost: The projected total costs of owning a property from inception to end use. Lightweight Team: A lightweight team pares down to about 3-5 members each with strong expertise in a key area. the lightweight team if ideologically aligned can provide strong security. Line Extension: A line extension is the addition of available options which don’t drastically change the basic product but offer a new flavor or color for example. Long-term Success: This is the ability for a company or product to last over time and through changing climates. M-Shaped Curve: A curve that demonstrates the relationship between good ideas and innovative ideas over time. Maintenance Activity: This can be seen as a large scale gear-oiling in a sense. maintenance activity is work done to promote harmony and therefore productivity in the workforce. Manufacturability: Extent to which a good can be manufactured with relative ease at minimum cost and maximum reliability. Manufacturing Assembly Procedure: The way in which a product is assembled and readied for the market. Manufacturing Design: The general engineering art of designing products in such a way that they are easy to manufacture. Manufacturing Test Specification and Procedure: The testing of a manufactured prototype in order to gauge the general preparedness for mass production. Market Conditions: Characteristic of a market in which a new product will be introduced such as the number of competitors, level of intensity or competitiveness, and the market’s growth rate. Market Development: The expansion of the total market for a product or company by entering the new segments of the market, converting non-users into users, and/or increasing usage by users. Market Innovation: Can be in the form of technology or ideas. These are positive changes in the market environment that can result in companies changing to adapt or may have been caused by companies themselves. Market Penetration: When a company or someone with a product introduces it to a market. 
Often the product or service is tailored specifically to survive and be profitable in the market. Market Research: Component of marketing research whereby a specific market is identified and its size and characteristics are measured. Market Segmentation: The process of defining and subdividing a large homogenous market into clearly identifiable segments having similar needs, wants, or demand characteristics. Its objective is to design a marketing mix that precisely matches the expectations of customers in the target segment. Market Share: A percentage of total sales volume in a market captured by a brand, product, or company. Market Testing: An examination to see if a sample of product will sell in a market. Market-Driven: A strategy where the firms allow the marketplace decide its own product innovation and the primary users of this strategy tends to be the consumer product firms. Maturity Stage: It is the third stage of product life cycle which demonstrates the sales and profit margins decreasing. Metrics: The measurement of quantifiable components on the company’s performance. Mindmapping: A visualization method used to organize thoughts, ideas and tasks around a central idea. These diagrams can aid in studying, solving problems, or making decisions by creating a structure in which ideas are linked by different categories or classifications. Modular Architecture: System of any design which contains any separate components that can be connected together. Monitoring Frequency: The scale of an acceptable range within which the operations’ will be conducted by next monitoring inspection. It can be determined by an assessment within the risk factors at the operation. Morphological Analysis: A creative problem-solving strategy that uses a matrix to organize and moreover forces the associations between the parts of a problem in an order to attempt and produce a novel solution idea. Multifunctional Team: A group of members from at least two or more departments that comes together to solve a problem. They also handle the situation where it requires three parts; capabilities, knowledge, and training where it cannot be found from any one source. Needs Analysis: A process in user centered design that focuses on the user’s satisfaction with the new product or system. It concentrates on requirements related to the goals and needs of the community and the user in order to define the requirements of the system being developed. Needs Statement: A statement which provides the detail descriptions of the functional specifications, technical requirements, and security standards that determines the selection for the technology solution. Net Present Value (NPV): a finance term (also Net Present Worth (NPW)) that refers to the incoming an outgoing series of cash flow and is defines as the sum of all present values. Network Diagram: An interconnected groups or systems that is made up from network and is use in computer telecommunication to draw the graphical chart of a network. New Concept Development Model: A model which provides a common language and terminology necessary to optimize the “Front End of Innovation.” New Product: Products that are new to a company or to a market. They may include existing products which have been improved or revised, brand extensions, additions to existing lines, repositioned products targeted to a new market and new to the world products. New Product Development (NPD): The creation of new products needed for growth or to replace those in the decline stage of their life cycle. 
New Product Development Process (NPD Process): A seven step process describing how a new product comes to the market. The steps in the process include idea generation, idea screening, concept development and testing, business analysis, beta testing and marketing, technical implementation, commercialization and new product pricing. New Product Development Professional (NPDP): A certification which confirms mastery of new product development principles and practices. New Product Idea: Generating new ideas about products using basic research, SWOT analysis, market and consumer trends, competitors, and focus groups. New Product Introduction (NPI): The introduction of a new product or product line, usually by an advertising campaign. New-to-the-World Product: Products which serve a purpose for which no product has previously existed. Nominal Group Process: A controlled variant of brainstorming used in problem solving sessions to encourage creative thinking, without group interaction at idea-generation stage. Non-commercialized Concept Statement: A text description of a new product presented to the consumer which is used to get the customers attention and gauge initial interest, not convince them to buy the product. Non-Destructive Test: Testing that is comprised of test methods used to examine an object, material or system without impairing its future usefulness. Non-Product Advantage: Competitive advantage gained not from product attributes but through identity and image of the firm, widespread distribution, effective communications, customer service, and technical support. Not-Invented-Here-Syndrome: Used to describe persistent social, corporate or institutional culture that avoids using or buying already existing products, research, standards, or knowledge because of their external origins. Off-Shoring: A type of outsourcing that involves moving a business’ functions to foreign country. This is usually done to reduce business costs seeing that the foreign country has more favorable economic conditions. Open Innovation: The use of purposive inflows and outflows of knowledge to accelerate innovation. With knowledge now widely distributed, companies cannot rely entirely on their own research, but should acquire inventions or intellectual property (such as patents) from other companies when it advances the business model. Operations: Specific processes or sets of functions of practical or mechanical nature in some form of work or production. Operator’s Manual: A guide that communicates and assists the operator in using a particular system. Most operator manuals are associated with electronic goods and software; manuals also contain written guides and associated images. Opportunity: A good position, chance, or prospect that is favorable for attainment of a goal or other advancement. Organizational Innovation: Improvements on a company’s organizational methods that make it more reliable or efficient or easy to manage. Outsourcing: Any task, operation, or job that could be performed by employees within an organization, but is instead contracted by a third party for a significant period of time. Outstanding Corporate Innovator Award: The Product Development and Management Association’s Outstanding Corporate Innovator (OCI) Award is the only innovation award that recognizes sustained (five or more years) quantifiable business results from new products and services. 
The OCI Selection Committee uses a rigorous process to evaluate each year’s nominees and to select those companies that have proven themselves exceptionally capable of integrating strategy, culture, process and technology to consistently create and capture value through successful product and service innovation. Pareto Chart: A type of chart that contains both bars and a line graph, where individual values are represented in descending order by bars, and the cumulative total is represented by the line. The left vertical axis is the frequency of occurrence, but it can also represent cost or another important unit of measure. The right vertical axis is the cumulative percentage of the total number of occurrences, total cost, or total of the particular unit of measure. Participatory Design: Also known as “Cooperative Design”, an approach to design attempting to actively involve all employees and end users in the design process in order to help ensure the product designed meets everyone’s needs and is usable. It is used in a wide array of fields to ensure a way of creating environments that are more responsive and appropriate to their inhabitants’ and users’ cultural, emotional, spiritual and practical needs. Patent: A form intellectual property consisting of a set of exclusive rights granted by a sovereign state to an inventor for a limited period of time in exchange for public disclosure of the invention. Payback: To reap the rewards, financially and otherwise of innovation. Payout: The expected financial return from an investment over a given period of time. Perceptual Mapping: A graphical technique used by marketers that attempts to visually display the perceptions of current and potential customers. Performance Indicators: A measurement used by an organization to evaluate its success in various endeavors. These indicators can be represented in terms of specific strategic goals. Performance Measurement System: A process for collecting and reporting information regarding the performance of an individual, group, or organization. Phase Review Process: A review conducted at the end of each phase in a stage-gate process, used to review the work conducted in each phase. Physical Elements: Components, parts, and assemblies that make up an object, which do not vary with time. Pilot Gate Meeting: A trial gate meeting usually held at the launch of a stage-gate process to test the design of a process and familiarize participants. Pipeline (product pipeline): A tool that incorporates multiple stages in the systematic application of innovation in the product development process. Pipeline Alignment: Ensuring that organizational objectives match pipeline inputs. Pipeline Inventory: The existing process components of each phase that collectively make up the pipeline. Pipeline Management Enabling Tools: Supporting tools which management uses to improve the overall effectiveness of the pipeline process. Pipeline Management Process: Process of establishing effective methods of evaluation throughout the pipeline phases. Pipeline Management Teams: Groups that act as support function in the overall goal of the project. Pipeline Management: The process management of all activities associated with creating a product. Platform Product: Strategically designed product that is meant to enable a company to use the same components in different product offerings. Polycentric Approach/ Company: An organizational structure that incorporates cross-functional methods to achieve the company’s objectives. 
Portfolio: A collection of current and potential project/product ideas that an organization possesses. Portfolio Criteria: The requirements a portfolio to be considered safe. Some might include diversified risk and structure decision making. Portfolio Management: Process of making investment decisions using money other people have placed under his or her control or a person who manages a financial institution’s asset and liability (loan and deposit) portfolios. Pre-Production Unit: Between a prototype and the production model, still has some kinks to work out. Process Champion: Champions have the lead role within the business units. They define how business processes are to be executed. Process Innovation: Changing the way something is done. Whether it’s how a product is made or business is conducted. Process Managers: Oversee the ensemble of activities of planning and monitoring the performance of a process. Process Map: Refers to activities involved in defining exactly what a business entity does, who is responsible, to what standard a process should be completed and how the success of a business process can be determined. Process Mapping: A workflow diagram to bring forth a clearer understanding of a process or series of parallel processes. Process Maturity Level: How established and stable a process is within a business. Process Owner: The person who is responsible to design the processes necessary to achieve the objectives of the business plans that are created by the Business Leaders. The process owner is responsible for the creation, update and approval of documents (procedures, work instructions/protocols) to support the process. Process Re-Engineering: The analysis and design of workflows and processes within an organization. Product: An object produced by a particular action or process; the result of mental or physical work or effort. Product and Process Performance Success: The measurement of how effectively the product development process is. Product Approval Committee (PAC): A group of people who follow certain procedures to ensure the product can go further into development or to a new stage of the process. Product Architecture: Description of the way(s) in which functional elements of a product or system are assigned to its constituent sections or subsystems, and of the way(s) in which they interact. Product Definition: Producer’s view of a product that includes product concept, design requirements and specifications, features, target market, pricing points, positioning strategy, etc. See also product description. Product Development: A process many companies use to come up with new things or services to bring to the market. Product development usually uses the stage gate process. Product Development & Management Association (PDMA): The Product Development and Management Association (PDMA) is the premier global advocate for product development and management professionals. Product Development Check List: A tool used in order to organize the development process and visualize the methods used to create the product. Product Development Portfolio: An exhibit of progress made through the project. Product Development Process: System of defined steps and tasks such as strategy, organization, concept generation, marketing plan creation, evaluation, and commercialization of a new product. It is a cycle by means of which an innovative firm routinely converts ideas into commercially viable goods or services. 
Product Development Strategy: The plan by which products are developed or innovated and offered to new or existing customers. Product Development Team: A team composed of people specializing in multiple fields and professions who work together to develop and execute production of a new product, or innovate an existing product. Product Discontinuation: When a company stops offering a given product in a given market because it is obsolete, uncompetitive, or other reasons. Product Discontinuation Timeline: A plan which dictates when and how a product is to be discontinued. Product Failure: When a product does not meet the requirements and expectations set by the company which produces it. Product Family: A group of products with either a similar use and purpose, or similar consumer base. Product Innovation: Making improvements on an existing product in order to make it function better or adapt to a changing market. Product Innovation Charter (PIC): A definition of a product development team including its product idea, its goals and plan for turning that idea into a product, and how the team is to be structured. Product Interfaces: A product as an end-user sees it, devoid of its internal workings. Product Life Cycle: The typical product life as seen in four stages – birth, growth, maturity, and decline. Product Life-Cycle Management: The management of production, sale, and identity of a product through its life. Product Line: A group of related products manufactured by a single company. The marketing strategy of offering for sale several related products. Product Management: An organizational lifecycle function within a company dealing with the planning, forecasting, or marketing of a product or products at all stages of the product lifecycle. The role consists of Product development and product marketing, which are different efforts, with the objective of maximizing sales revenues, market share, and profit margins. Product Manager: The product manager is often responsible for analyzing market conditions and defining features or functions of a product. The role of product management spans many activities from strategic to tactical and varies based on the organizational structure of the company. A product manager considers numerous factors such as intended demographic, the products offered by the competition, and how well the product fits with the company’s business model. Product Plan: The process of creating a product idea and following through on it until the product is introduced to the market. Additionally, a small company must have an exit strategy for its product in case the product does not sell. A product plan entails managing the product throughout its life using various marketing strategies, including product extensions or improvements, increased distribution, price changes and promotions. Product Platforms: A set of subsystems and interfaces that form a common structure from which a stream of derivative products can be efficiently developed and produced. Common design, formula, or a versatile product, based on which a family (line) of products is built over time. Product Portfolio: A combination of two or more product families. The range of products a company has in development or available for consumers at any one time. Managing product portfolio is important for cash flow. The set of different products that an organization produces, ideally balanced so that some products are mature, some are still in their growth stage while others are waiting to be introduced. 
Product Rejuvenation: Creating a large surge in sales for a product through intense marketing effort. Product Requirements Document: A document written by a company that defines a product they are making, or the requirements for one or more new features for an existing product. A PRD is often created after a marketing requirements document (MRD) has been written and been given approval by management, and is usually written before a technical requirements document. It is designed to allow people within a company to understand what a product should do and how it should work. PRDs are most frequently written for software products, but can be used for any type of product. A PRD should generally define the problems that a product (or product feature) must solve, but should avoid defining the technical solution to those problems. This distinction allows engineers to use their expertise to provide the optimal solution to the requirements defined in the PRD. A PRD sometimes serves as a marketing requirements document as well, particularly if the product is small or uncomplicated. Product Superiority: Product has the best quality or concept than any other product that maybe similar. Program Evaluation and Review Technique (PERT): A strategic tool designed to analyze and represent the tasks involved in completing a given project. Program Manager: Has an oversight of the purpose and status of all projects in a Program and can use this oversight to support project-level activity to ensure the overall program goals are likely to be met, possibly by providing a decision-making capacity that cannot be achieved at project level or by providing the Project Manager with a program perspective when required, or as a sounding board for ideas and approaches to solving project issues that have program impacts. Typically in a program there is a need to identify and manage cross-project dependencies and often the PMO (Program or Project Management Office) may not have sufficient insight of the risk, issues, requirements, design or solution to be able to usefully manage these. The Program manager may be well placed to provide this insight by actively seeking out such information from the Project Managers although in large and/or complex projects, a specific role may be required. However this insight arises, the Program Manager needs this in order to be comfortable that the overall program goals are achievable. Resource Plan: Detailed summary of all forms of resources required to complete a product development project, including personnel, equipment, time, and finances. Responsibility Matrix: An arrangement that shows the percentage of how each non-managerial person’s time that is to be devoted to each of the current projects in the firm’s portfolio. Return on Investment (ROI): A standard measure of project profitability, this is the discounted profits over the life of the project expressed as a percentage of initial investment. Rigid Gate: A review point in a Stage-Gate™ process at which all the prior stage’s work and deliverables must be complete before work in the next stage can commence. Risk Acceptance: An uncertain event or condition for which the project team has decided not to change the project plan. A team may be forced to accept an identified risk when they are unable to identify any other suitable response to the risk. Risk Avoidance: Changing a project plan to eliminate a risk or to protect the project objectives from any potential impact due to the risk. 
Risk Management: The process of identifying, measuring, and mitigating the business risk in a product development project. Risk Mitigation: Actions taken to reduce the probability and/or impact of a risk to below some threshold of acceptability. Risk Tolerance: The level of risk that a project stakeholder is willing to accept. Tolerance levels are context specific. That is, stakeholders may be willing to accept different levels of risk for different types of risk, such as risks of project delay, price realization, and technical potential. Risk Transference: Risk transference refers to the transfer or risk, or the burden of loss due to uncertainty, failure, accident, etc, from one party to another. There is a variety of technique on how to execute this process through initial assessment and subsequent maneuver to reduce risk i.e. legislation, insurance, etc. Risk: An event or condition that may or may not occur, but if it does occur will impact the ability to achieve a project’s objectives. In new product development, risks may take the form of market, technical, or organizational issues. For more on managing product development risks, see Chapters 8 and 15 in the PDMA ToolBook 1 and Chapter 28 in The PDMA HandBook 2nd Edition. Roadmapping: Roadmapping is the planning process that applies goals, both short term and long term, to specific proposed ‘mapped’ solutions, generally in a flow design. Robust Design: Robust design, also referred to as the Taguchi method, seeks to maximize engineering productivity by accounting for noise such as environmental and manufacturing variation, etc. “S” Curve: The S-Curve, or Sigmoid Curve, is used to represent the various expenditures of resources over the life of a project. Scrum Process: An agile software development method for managing software projects and product or application development. S-Curve (Technology S-Curve): The technology s-curve is the idea that technology evolves through an initial slow period of growth, then a fast growth period, followed by a final general plateau. Scanner Test Markets: Scanner test markets are specially designed to provide scanner data from consumer panels to analyze a product’s performance. Scenario Analysis: Scenario analysis is the process of future analysis through consideration of all alternative possible outcomes, which is a type of projection that is strengthened by concurrently considering all possible scenarios. Screening: Screening is the process of analyzing many subjects to identify those subjects with a particular set of characteristics, or the targeted demographic. Segmentation: Segmentation is the segmented grouping of potential customer groups by characterizing their needs, demand, and recognized qualities of the proposed product. An ideal market segment will be internally homogeneous, externally heterogeneous, and cost efficiently reached. Senior Management: Senior management is the highest level of management in an organization, sometimes a management team of individuals with experience in the field. This level of management is generally responsible for corporate governance and the stakeholders. Sensitivity Analysis: Sensitivity analysis is the technique used to analyze how different independent variable values will affect a dependent variable under a certain set of conditions, and is a prediction of a decision’s outcome. Serendipity: Unexpected advantages or benefits incurred due to positive synergy effects of the merger. Unforeseen good fortune. Service: Something intangible that is paid for. 
Short-Term Success: Success occurring over or involving relatively a short period of time. Should-Be Map: A chart or graph that plans and shows how one imagines things should be. Simulated Test Market: A marketing research technique where consumers are subjected to engineered advertising and purchase decisions to examine their response to a new product or service. Six Sigma: A disciplined, data-driven approach and methodology for eliminating defects. Slip Rate: The rate at which something slips past its predicted value. Social Sustainability: The idea that future generations should have the same or greater access to social resources as the current generation while there should also be equal access to social resources within the current generation. Specification: Describing or identifying something precisely or of stating a precise requirement. Speed to Market: The elapsed time from order placement to arrival on the retail sales floor. Sponsor: Individual or entity that organizes and is committed to the development of a product, program, or project. Stage: Is a specific part or section of a project development or a task. Stage-Gate™ Process: A map for getting the product from an idea into a development to launch. Start at a gate, the idea, then a stage of scoping and modeling is done, step through the development gate, then arrive at a stage of development, etc. Staged Product Development Activity: A preliminary step to developing a new product for the market. Standard Cost: An estimation of the cost of an operation or a good. Stop-Light Voting: Voting by assigning ideas a color from those used in a stop-light. Strategic Balance: Reaching an equilibrium by combining long or short term objectives or factors. Strategic Bucket: Strategic Planning for business success broken into three buckets - Bucket 1: process improvements- doing what we do today, better - Bucket 2: new initiatives- growing new stuff for tomorrow - Bucket 3: stop doing – yesterday’s good ideas that are no longer a priority Strategic New Product Development (SNPD): A preliminary step to developing a new product for the market. Strategic Partnering: To complete a common objective, forming an alliance between others to share intellect and resources. Strategic Pipeline Management: The distributing of sales by new channel, value, or geography. Strategic Plan: To plan aimed to obtain desired results. Strategy: A plan of action designed to achieve a vision. generalship. Sub-assembly: A unit built separately, but meant to be used as a part to a larger product. Success: Favorable or desired outcome. Supply Chain Innovation: Changes in the way a product gets from production to distributors. Innovation in the supply chain can make a product more available or cheaper to ship. Supply Chain Innovation: Advances in technology and socioeconomic development that businesses coordinate on a global scale to provide goods and services. Support Service: Activity or function required for the successful completion of a project. Sustainable NPD: Product Development that can be continued at a rate that does not supersede K. System Hierarchy Diagram: A diagram similar to a food chain. There is the highest position, usually the CEO/Founder. Positions branch out from under him/her to provide specific roles due to their specific places in the chain. Someone is always on top. 
Systems and Practices Team: A team developed to study and establish standard activities and methods used with a company to target different factors that might hinder the company strategy. Systems and Practices: Standard activities and methods established to handle day to day and routinely occurring events, depending on their usefulness to the company. Target Market: A particular segment of the market, targeted by by ads and a general marketing campaign. Task Target Cost: Determining the highest allowable pricing for a New product once it hits the market. Team Leader: Someone (or more than one) who provides guidance, instruction, direction, to a group of individuals (See team, above) focus is not only on the goal to be a achieved but on the team itself and their well-being. Often the person who speaks or represents the group. Team Spotter’s Guide: A questionnaire used by a team leader (or team members) to diagnose the quality of the team’s functioning. Team: Multiple people (with multiple skill sets) working together to achieve a common goal. Technology Road Map: Plan that matched short-term and long-term goals with specific technology solutions help meet these goals. Technology Stage Gate: A process of managing technology development; especially with high uncertainty and risk. Technology Transfer: Refers to a process within governments, universities and other institutions that helps make technological developments and knowledge available to a wider range of users. These departments work to transfer the skills, knowledge, technology, and manufacturing methods in order to allow for the development of new products, applications, materials, and services. Technology-Driven: The concept that breakthrough innovations are made at firms that have technical abilities and push for the development of new goods and services that are not necessarily in high market demand. Technopreneurship: The combination of technology and entrepreneurial skills by someone who is savvy, creative, innovative and passionate about their work in trying to make the world a more technologically connected place. Test Markets: In business and marketing, it is the geographic region or demographic group used to test a product or service before officially releasing it to market. Think Links: A question that encourages and challenges students to think creatively; may be used as a springboard for classroom discussions or left for students to work out on their own. Think Tank: Mostly non-profit organizations who engage in research and advocacy of a social policy, political strategies, economics, military, as well as technology fields. Thought Organizers: Computer program that aids in grouping ideas in varying hierarchies to form a rational plan for a project or task. Three R’s: (1) Reduce, Reuse, and Recycle—thinking green when considering a company’s sustainability. (2) Record, Recall, and Reconstruct—the creative process for coming up with ideas for new products. Threshold Criteria: Standard for the performance targets of any proposed product development project. Thumbnail: Small sketch, usually in pencil, of an approximated image proposal used by graphic artists. Time-to-Market: Critical amount of time a product idea takes to get from the brainstorming phase to the final product available for the market. Tone: Feelings and emotions consumers associate with a product. Total Quality Management (TQM): Continual team building and management improvement. 
Tracking Studies: Consumer studies conducted after a product’s launch that track the quality of the product’s reception. Trade Secret: Confidential intellectual property that gives the property owner a competitive advantage. Trade-Off Analysis: The process of understanding how focusing on a certain aspect of a project will necessarily affect other aspects; the goal of this process is to predict whether re-allocating resources will yield a better result. Trademark: A unique word, sign, phrase, or sound used by a person or company to distinguish a product or service. Trialability: A measure of how easily a new product or service can be preliminarily evaluated by the target consumer. TRIZ: A systematic method for problem solving developed based on an analysis of patent literature to understand how problems are solved. This method starts with contradictions in needs, for example a strong but light weight vehicle, and then identifies possible avenues to solve said problem. Uncertainty Range: An interval within which a value is expected to lie based on unknown factors or factors known to have some variation. User: The person or persons that a product or service is intended for. Utility: A measure of how important a product or service is to a person or persons. If a product has increased utility it will command a higher demand. Value Analysis: A process to understand and hopefully increase the value of a new product or service. Value Chain: A description of a process undergone by a product during which raw materials undergo processing and their value is increased through each step. Throughout the “value chain” each step adds value to the product. Value Proposition: An expected benefit that the customer will experience from a new product or service. Vision: An aspirational description of what an organization would like to achieve or accomplish in the mid-term or long-term future. It is intended to serves as a clear guide for choosing current and future courses of action. See also mission statement. Voice of the Customer (VOC): A term used in business and Information Technology to describe the in-depth process of capturing a customer’s expectations, preferences and aversions. Voice of the Engineer (VOE): A term used in designing and implementing a project to describe the in-depth process of the creation and expectations. We-ness: A language that companies use to better able to resolve conflicts. Workflow Design Team: Progression of steps (tasks, events, interactions) that comprise a work process, involve two or more persons, and create or add value to the organization’s activities. In a sequential workflow, each step is dependent on occurrence of the previous step; in a parallel workflow, two or more steps can occur concurrently. Worth What Paid For (WWPF): Customer’s judgment on the satisfaction derived from a purchase. It is determined more by the item’s perceived value than by the price of contents or materials.
<urn:uuid:9872f571-a918-4bf2-8100-c7854d379582>
{ "dump": "CC-MAIN-2014-49", "url": "http://www.davincicenter.vcu.edu/glossary/", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400382386.21/warc/CC-MAIN-20141119123302-00145-ip-10-235-23-156.ec2.internal.warc.gz", "language": "en", "language_score": 0.9302175641059875, "token_count": 14307, "score": 2.71875, "int_score": 3 }
Scientists believe they have found the long-sought “missing link” in black hole evolution — intermediate-mass black holes (IMBHs), which are believed to lead to the creation of supermassive black holes at the centers of galaxies. Researchers have spotted an object called NGC-2276-3c, which is about 100 million light-years from Earth in the spiral galaxy NGC-2276, according to a Space.com report. Astronomers believe that IMBHs have the mass of a few hundred to a fewhundred thousand suns — tiny compared to supermassive black holes, which can be the size of billions of solar masses. They are larger, however, than stellar-mass black holes, which form when a massive star collapses, and usually have a size of five to tens of solar masses. While scientists have not had any trouble finding collapsed stars and supermassive black holes, it hasn’t been so easy to track down IMBH, which are often on their way to becoming supermassive black holes. Although scientists have long believed them to exist, the intermediate masses have proved very elusive. However, study co-author Tim Roberts of the University of Durham in the UK said in a statement that he and his colleagues at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts had used NASA’s CHandra X-ray Observatory and the European Very Long Baseline Interferometry Network to spot NGC-2276-3c, which is believe to be the long-lost IMBH, according to the report. Using their observations and the known relationship between black hole mass and X-ray and radio wavelength luminosity, the team was able to calculate that NGC-2276-3c had a mass of 50,000 suns, placing it squarely in the IMBH category. It’s an exciting find for scientists, as it “helps tie the whole black hole family together,” said study co-author Andrei Lobanov of the Max Planck Institute for Radio Astronomy in Bonn, Germany, as quoted by the report. And that’s not all: the black hole is blasting a radio jet 2,000 light-years into space, clearing a 1,000 light-year path where no young stars exist, indicating that it blasted away the gas clouds that would normally collapse into stars. A separate study looks into the origin of NGC-2276-3c. Scientists are have found that the equivalent of about five to 15 solar masses are created each year in the NGC-2276 galaxy, a rapid rate that suggests a dwarf galaxy collided with NGC-2276 at some point, and NGC-2276-3c may actually have been near the core of the dwarf galaxy.
Condensation is caused simply by too much humidity being created and not enough ventilation to take it away. Other factors, such as sufficient heating, are also important. Condensation is the most common form of dampness found in properties, and the method of successful control is to reduce the relative humidity to an acceptable level. Mould can only grow in properties with a relative humidity level persistently above 71%. The problem is that in some cases negative ventilation can cause as much condensation as no ventilation at all.

Condensation is temperature driven: when warm humid air comes into contact with a cold surface, the water in the air will condense on that surface. If you are ventilating an already cold room, the temperature will be lowered, which raises the relative humidity and leads to more condensation and mould. This is known as negative ventilation. You have two options to control this:
1. Install humidity-controlled fans, but only in the water-making areas: the kitchen and bathroom. These fans are preset at around 60% RH, which means that they will activate automatically when this level of humidity is reached.
2. Fit a positive-pressure fan to the loft area. These appliances have shown remarkable results in the control of condensation.
The information on the fans can be found on the Nuaire website. Just search for Dri-master for the positive-pressure fans, or Genie for the wall-mounted extractors. You will need an electrician to fit either appliance, so you will need to post your job into that category. I hope that this helps. Peter Barber CSSW, CRDS. London Waterproofing Solutions.

First source the problem: it may be that the room has little ventilation. You may have an external issue where damp is coming in, but that would only happen on outside walls. A dehumidifier would extract the moisture from the atmosphere as a temporary fix. Next address the affected area: rub away the mould and paint with a mould-resistant paint, then finish with the top coat.
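To put rough numbers on the "temperature driven" point above: the surface temperature at which moisture starts to condense is the dew point, and one common approximation for it is the Magnus formula (the constants below are one standard parameterisation, not the only one):

    \gamma = \ln(RH/100) + \frac{17.62\,T}{243.12 + T}, \qquad T_d = \frac{243.12\,\gamma}{17.62 - \gamma}

where T is the room air temperature in degrees Celsius and RH is the relative humidity in percent. For example, air at 20 °C and 65% RH has a dew point of roughly 13 °C, so any wall or window surface colder than about 13 °C in that room will collect condensation, which is why cold, poorly heated rooms suffer first.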
Project Mercury was a NASA program. It launched the first Americans into space. Astronauts made six flights during the Mercury project. Two of those went to space and came right back down. Four of them went into orbit and circled Earth. The first of the six flights was in 1961. The last flight was in 1963.

What Spacecraft Was Used for Project Mercury?
The Mercury capsule was small. It only held one person. The capsule had very little room inside. The astronaut had to stay in his seat. Two types of rockets were used for Mercury launches. The first two of the six flights with an astronaut on board used a Redstone rocket. The other four manned flights used an Atlas rocket. Both rockets were first built as missiles for the military. The project was named Mercury after a Roman god who was very fast. Each astronaut named his spacecraft. Alan Shepard included a 7 in the name of his capsule. This was because it was the seventh one made. The other astronauts included a 7 also. This was in honor of the seven astronauts chosen for the project.

Who Were the Mercury Astronauts?
NASA chose seven astronauts for Project Mercury in 1959. It was one of the first things NASA did. NASA was only six months old. Alan Shepard made the first Mercury flight. He was the first American in space. He named his spacecraft Freedom 7. The 15-minute flight went into space and came back down. Shepard later walked on the moon during the Apollo 14 mission. Gus Grissom was the second astronaut to fly in Project Mercury. Grissom named his capsule Liberty Bell 7. The third person to fly was John Glenn. In 1962, he was the first American to orbit Earth. His capsule was Friendship 7. The second American to orbit Earth was Scott Carpenter. He flew on Aurora 7. Wally Schirra (Shuh-RAH) was next, on Sigma 7. Gordon Cooper flew on the last Mercury mission. He spent 34 hours circling Earth. His capsule was Faith 7. Deke Slayton was also one of the "Mercury Seven" astronauts. A health problem stopped him from flying a Mercury mission. He flew into space in 1975 on a different mission.

How Did NASA Make Sure Mercury Was Safe?
Before astronauts flew, NASA had test flights. People were not on these launches. The flights let NASA find and fix problems. The first Atlas rocket that launched with a Mercury capsule exploded. The first Mercury-Redstone launch only went about four inches off the ground. NASA learned from these problems. NASA learned how to fix them. NASA made the rockets safer. Three other "astronauts" also helped make Mercury safer. A rhesus monkey, Sam, and two chimpanzees, Ham and Enos, flew in Mercury capsules. Enos even made two orbits around Earth. Since the monkey and the chimpanzees made it back safely, NASA knew it was safe for astronauts.

Why Was Project Mercury Important?
NASA learned a lot from Project Mercury. NASA learned how to put people in orbit. It learned how people could live and work in space. NASA learned how to fly a spacecraft. These lessons were very important. NASA used them in later space projects. After Mercury came the Gemini program. The Gemini spacecraft had room for two astronauts. NASA learned even more with Gemini. Together, Mercury and Gemini prepared NASA for the Apollo program. During Apollo, NASA landed human beings on the moon for the first time.
How seriously should we take Darwinian hypotheses in formulating ideas about the mind? The old nature-nurture arguments have gained added heft and new twists with our sophisticated brain-mapping and genetic technologies. In the US, the NIMH has recently put funds into mapping disorder in the brain, which, it is claimed, can be linked to schizophrenia as well as bipolar depression. Prediction and preventive medication may soon be available for major disorders. Other scientists have argued that early childhood environments help to shape and create the infant brain: resulting disorders may have more to do with environmental triggers than with biological inheritance. Anthony David, Professor of Psychiatry at King's College, probes the disordered mind with Professor Peter Hobson, author of The Cradle of Thought, and prize-winning novelist A.S. Byatt. Philosopher David Papineau, of the King's Centre for the Humanities and Health, chairs.
This vegetation tells a story of stoical fortitude, of life on the edge. It is sparse but enduring, rich in understated achievement but impressive nevertheless. Exposed sheets of sedimentary sandstone are being and have been slowly eroded by the elements for a very long time. Successive generations of plant roots find and found their ways into the slow-growing cracks and crevices to eke out a life. Penetration by offspring exceeds that of ancestors, the drilling and cracking a continual to continuous endeavour.

Rock destruction is the building exercise of a plant home, a vegetation domain, pioneered by the plants' early ancestors, although not by them alone. Apart from the contribution by the elements, nobody will ever know how many disappeared, anonymous, predecessor plant species with leaves, flowers and fruits no longer seen here, have contributed to leaving the collective mark on the land. Although extinct, much of their bodily leftovers, reused in atom and molecule form by later plants many times, are present in the plants in picture and the soil that feeds them. Coarse grains of quartz and several other minerals mix with the plant detritus and microorganism inhabitants to establish a particular soil type. This accumulates in the lowdown gaps where the rock gives way, moistened occasionally by winter rain.

Collective action by countless living and dead contributors resulted in this matrix-like ecological phenomenon. With its narrow tentacles hemmed in and shaped by hard rock patches, it has become a remarkable Cederberg veld type. The photo was taken at Kagga Kamma. What flowered here in recent seasons, albeit modest, left seeds that responded and will respond to bring about a brave, enduring plant community. Defiant though sparse, resilient though delicate, there is no flinching in the face of the extremes sure to challenge (Norman and Whitfield, 2006).
The renaissance of Britain's rivers was underlined on Tuesday when waterways once considered polluted to death were revealed as teeming with life. Among the 10 most improved rivers listed by the Environment Agency was the River Wandle, a tributary of the Thames which runs through southwest London. It was declared a sewer in the 1960s but is now one of the best urban fisheries in the country.

In the northeast, the River Wear in Northumberland and its better-known sibling the Tyne are now the top two rivers in England to catch salmon, and recent surveys show that more fish are present on the Wear than ever before. The Environment Agency said the most remarkable turnaround was made by the River Taff in south Wales, which the agency said once ran black with coal dust but is now a leading site for fishing competitions.

The agency attributed the improvement in the rivers' state of health to thousands of habitat improvement projects and tighter regulation of polluting industries. Ian Barker, Head of Land and Water at the Environment Agency, said: "Work that we have done with farmers, businesses and water companies to reduce the amount of water taken from rivers, minimise pollution and improve water quality is really paying off -- as these rivers show. Britain's rivers are the healthiest for over 20 years and otters, salmon and other wildlife are returning for the first time since the industrial revolution."

The recovery of the Thames itself was recognised last year when it was awarded the International Theiss Riverprize, which celebrates outstanding achievement in river management and restoration.
How are homeschooled K-12 students faring compared to their peers in traditional education? Quite well, as a matter of fact! Check the stats.

If you like facts and figures, here are some studies on home education. It's encouraging to see that financial status, being formally educated as a teacher, race, etc., do not negate the parent's ability to "provide a very successful academic environment."

Dr. Brian Ray has been studying the home education movement for over thirty years. His findings are published by the National Home Education Research Institute (NHERI). Some of the most pertinent are:

- Research Facts on Homeschooling – March 2020
Click here for detail on the following topics:
• General Facts, Statistics & Trends
• Reasons and Motivations for Home Educating
• Academic Performance
• Social, Emotional, and Psychological Development (Socialization)
• Gender Differences in Children and Youth Respected?
• Success in the "Real World" of Adulthood
• General Interpretation of Research on Homeschool Success or Failure

- Homeschool SAT Scores for 2014 Higher Than National Average – June 7, 2016
"The SAT 2014 test scores of college-bound homeschool students were higher than the national average of all college-bound seniors that same year… The homeschool students' SAT scores were 0.61 standard deviation higher in reading, 0.26 standard deviation higher in mathematics, and 0.42 standard deviation higher in writing than those of all college-bound seniors taking the SAT, and these are notably large differences." Click here for more detailed information.

- Academic Achievement and Demographic Traits of Homeschool Students
"Most children of about ages 6 through 17 have been placed in institutional schools with formally trained teachers and administrators for the past several generations. Homeschool parents, on the other hand, provide the majority of their children's academic and social and emotional instruction and training in and based out of their homes without sending their children away to a place called school. Therefore, policymakers, educators, school administrators, judges, and parents often wonder whether ordinary mothers and fathers, who are not government-certified teachers, are capable of effectively teaching and rearing their children after age five." Click here to read this extensive study from 2010.

The National Center for Education Statistics is another source of studies on homeschooling. One of them shows details of the increase in homeschoolers. Click here to read the report.
A mature all-white male orca, the only one of its kind known, has been spotted in the North Pacific off the east coast of Russia, scientists announced April 23. After seeing its towering white dorsal fin breaking through the water's surface, the team named the distinctive beast "Iceberg."

Researchers first spotted the mature killer whale with his pod of 13 relatives in August 2010 in waters around the Commander Islands; he was seen twice that month, and photographed. When the researchers, part of the Far East Russia Orca Project (FEROP), returned during the summer of 2011, they couldn't find him. His pod is one of 61 identified social orca units, according to data collected by FEROP during their 12 years studying the killer whales.

"In many ways, Iceberg is a symbol of all that is pure, wild and extraordinarily exciting about what is out there in the ocean waiting to be discovered," said Erich Hoyt, FEROP co-director and research fellow from the Whale and Dolphin Conservation Society. "The challenge is to keep the ocean healthy so that such surprises are always possible."

In fact, the researchers suggest the area around the Commander Islands where Iceberg was first seen should form part of a network of reserves that would protect habitat for whale, dolphin and porpoise species off eastern Russia. Their call for such reserves, they say, is in response to overfishing and increased oil and gas exploration, which threatens marine mammals due to high noise levels, ship traffic and potential oil spills.

Iceberg appears very healthy at his ripe old age of 16, an age determined by the size of his dorsal fin, which extends nearly 6.6 feet (2 meters) high, Hoyt said, adding that he seems to be doing OK socially as well. But the team had wondered about Iceberg's health, particularly because of the only other record of a white orca. A 2-year-old female, all-white orca was captured at Sealand of the Pacific in Victoria, British Columbia, in 1970. Named Chimo, the whale died at age 4. Scientists discovered she had a rare genetic disorder called Chediak-Higashi Syndrome, which leads to partial albinism and a compromised immune system.

"But there is no evidence that Iceberg has anything like this," Hoyt told LiveScience. "We don't even know if he is a true albino."

From Iceberg's morphology, researchers think Iceberg is a fish-eater, unlike some groups of killer whales that primarily eat other marine mammals. "His pod's sounds are different than most of the orcas around that area, so they could be from an area closer to the Arctic or in the Aleutians, we don't know," Hoyt said. In fact, one of the key elements of the orca project is to learn about the animals' unique dialects.

The team is leaving next month to begin their 13th year in the region studying orcas. "We are hoping to find his pod again," Hoyt said.
Type D Personalities Should Be Very Concerned About Their Health

You may not have paid much attention to your 'personality type,' those rubrics of A, B, and C developed by psychologists in the 1950s. A comprehensive study has now examined Type D personality (a type added to the group in the 1990s) and its relationship to heart disease, and the findings will no doubt be disappointing to those in that group. This will make them really grumpy.

Briefly, Type A personality (which we probably hear about more than other types) is the driven type that works and plays excessively and almost compulsively; they also tend towards perfectionism. Type B personalities are more laid back, easy-going types, patient and relaxed. Type A's become impatient with Type B's and Type B's try to stay away from Type A's. Type C personalities are thorough and perfectionist. They get to the bottom of every challenge they face and are very detail-oriented. Unfortunately Type C's have difficulty expressing their emotions and are often unaware of having them. Accountants and engineers are often Type C personalities.

Now, Type D's have been described as anxious, irritable, and pessimistic. They are the grumps and the grouches of the world. They are not necessarily clinically depressed; negativity is just an overwhelming feature of their personalities.

Though some early studies had shown that Type A's were at the greatest risk for cardiovascular disease, later research found fault with that data. Now an in-depth review of 49 previous studies involving 6,000 patients has found that it's Type D personalities who are the most likely to have cardiovascular disease and are those most likely to have another cardiac incident after experiencing the first one. They are three times more likely than other personality types, according to this research, to experience peripheral artery disease, heart failure, and heart attack.

The research was led by Johan Denollet, a medical psychology professor at Tilburg University in the Netherlands, and is published as "A General Propensity to Psychological Distress Affects Cardiovascular Outcomes: Evidence From Research on the Type D (Distressed) Personality Profile" in the journal Circulation: Cardiovascular Quality and Outcomes.

Denollet strongly suggests that cardiologists learn more about their patients' personalities and give them advice about changing their normal thinking habits. Grouchy just has to go!
Intergovernmental partnership advances hydropower development

A report about developments in hydropower was recently released by the Department of the Interior, the U.S. Army Corps of Engineers, and the Department of Energy. In 2010, the Department of the Army and the Departments of the Interior and Energy signed a Memorandum of Understanding (MOU) committing the agencies to the development of clean, reliable, cost-effective, and sustainable hydropower. The report highlights the collaborative accomplishments the three departments have made since the MOU was signed, and details progress made toward achieving the 13 goals included in the MOU.

Assistant Secretary of the Interior for Water and Science, Anne Castle, presented the report during the National Hydropower Association conference in April, saying that "Through collaboration and partnerships among federal agencies, the hydropower industry, the research community, and numerous stakeholders, we are succeeding in advancing the development of hydropower as a clean, reliable, cost-effective and sustainable energy source."

The cooperation among the three agencies has enabled them to coordinate their hydropower research and development efforts, which has led to advances in hydropower technology, assessment of the potential for adding hydropower generation at existing facilities, and the development of a database of all existing U.S. hydropower infrastructure.

For more information about the MOU on hydropower, please follow the links below.
Memorandum of Understanding for Hydropower: Two Year Progress Report, Department of the Interior
Partnerships Generate Support for US Hydro Resources, International Water Power and Dam Construction Magazine
During an impending hurricane, people are focused on survival and evacuation. However, once the storm has passed, there is a critical need for hygiene items. Following a natural disaster, proper sanitation and hygiene are critical to preventing the spread of disease. Distributing hygiene kits is one way we help hurricane survivors maintain good hygiene and avoid illness.

Following Hurricane Fiona, the Clean the World Foundation team in the Dominican Republic partnered with Hilton Supply Management and the Quimocaribe Company to distribute over 120 of our hygiene kits to orphanages and people affected by Hurricane Fiona in the province of La Altagracia. The contents of Clean the World's hygiene kits vary and may include items such as a bar of our recycled soap, a hand towel, shampoo, conditioner, a toothbrush, and a razor. At the time of the hygiene kit distribution our team also distributed an additional 600 bars of soap!

We also partnered with the Rotary Club to distribute soap and bags of food to families in six different communities in the easternmost part of the island. A total of 150 bags of food and 600 bars of soap were distributed to families in need.

Excessive rainfall during a hurricane often leads to flooding. Floodwaters, in turn, become breeding grounds for mosquitoes, leading to a surge in mosquito populations and mosquito-borne viruses such as dengue. In response, Clean the World Foundation donated 4,349 mosquito nets to the Director of Risk Management for the Ministry of Public Health for further distribution.

With your support, we are reaching 1,000 in-need families in different regions throughout the Dominican Republic with hygiene products, meals and mosquito nets to combat dengue and malaria.
Basketball is a game of ten players from two teams, each of which has five players. Athletes play with limited contact on a rectangular court where a team can score by putting the ball through a raised basket that the opposing team defends at its end of the court. Unlike most sports, whose origins are obscure, basketball has a completely known and exact origin. The inventor of basketball wrote an account of the game, recording that the first basketball game was played on December 21, 1891.

Origin of basketball
Dr. James Naismith started basketball in 1891 at Springfield College. Dr. Luther Gulick, the director of physical education, asked James to develop a new game that his students could play during the winter periods when they were indoors. The game was also meant to help keep the track athletes fit and in shape between outdoor seasons. James invented basketball as a development of a game from his childhood, "Duck on a Rock". Dr. Naismith chose to use a soccer ball in place of the rock because it was quite safe when thrown and could not cause many injuries when it hit a player. The object of the game was to throw this ball into a peach basket set at a certain height on the wall. James positioned the basket high on the wall to minimize the injuries he had mostly observed near the goals of other sports, where the defenders and players of the opposing team became more aggressive.

The "Conspiracy Theory" Origin of basketball
The "conspiracy theory" was born in 1950 and has become the alternative explanation for the origin of basketball. The theory states that the inventor of the game was Lambert G. Will. He was the head of Herkimer's YMCA, in New York, and the theory claims he invented basketball almost a year before Dr. Naismith's first game was announced. The main piece of evidence for the theory is a photograph of a team with "91-92" written on it, which probably (but not necessarily) implies that a team had been formed before Dr. Naismith's game.

Developments in basketball over time
An interesting fact about early basketball is that the game's peach baskets had no openings on the bottom. This meant that every time a team scored a goal (getting the ball into the basket), someone had to recover the ball by climbing a ladder to the basket. The process of recovering the ball was time-consuming, since the game had to be suspended. As a result, a small hole was made in the bottom of the basket so that the ball could be pushed out with a long wooden rod that could reach up into the basket. However, the ball could not come out of the basket by itself because the hole was not big enough to allow it to fall, but at least it was more efficient than when there was no hole. Fortunately, in today's basketball, the hole in the lower part of the basket has been made even bigger to allow the ball to fall through it every time a player scores a goal. Also, in early basketball, dribbling was forbidden; only passing was allowed, and a player with the ball had to stay in place. A significant development in modern basketball is that dribbling is now part of the game.
Written by Peter Hoare, Library Volunteer at Salisbury Cathedral.

In the Battle of Cape Finisterre off the coast of Galicia, on 22 July 1805 (a bare three months before Trafalgar), a British fleet under Sir Robert Calder attacked a combined French and Spanish fleet which was returning from the West Indies. The battle was indecisive, but it persuaded the French admiral Villeneuve not to sail to Brest to join the rest of the French fleet, which might have led to an invasion of England.

The rearmost ship in the British fleet, HMS Malta, was surrounded by five Spanish ships, but by using her 84 guns to fire devastating broadsides to port and starboard she fought off the enemy ships. (Ironically she had originally been a French ship, the Guillaume Tell, captured in 1800 and re-commissioned into the Royal Navy.) By 8pm Malta had forced two Spanish ships to surrender. One was the 74-gun Firme, and the other the larger 80-gun San Rafael - and it was this warship that yielded up a book which for the last hundred years has been in Salisbury Cathedral Library.

This stout but battered volume, in a late 18th-century black leather binding, is a Roman missal, printed in Madrid in 1784. It probably belonged to the chaplain of the San Rafael, since it bears the signature of "Fr. Fran. Morales", who must have been taken prisoner and taken his treasured missal with him into captivity. A series of inscriptions records the book's capture, and shows that it was acquired by Captain Edward Buller, captain of HMS Malta, who died in 1824 as Sir Edward Buller, a baronet and a vice-admiral. The book later passed to another member of his family, the Revd. John Buller (died 1847), who was vicar of St Just-in-Penwith, in Cornwall, and wrote a history of the parish in 1842. His bookplate, showing the family arms, is also in the Spanish missal. It may have been John Buller who added notes about the battle and the capture of the San Rafael.

It is not clear how the missal eventually came into the hands of John Wordsworth, Bishop of Salisbury 1885-1911, but it has a purple ink-stamp recording his ownership. Many of his books passed into the Cathedral Library after his death, and this volume, as well as having a decidedly interesting back story, is an important addition to the Library's strong collection of liturgical works, going back to the Sarum Rite established by St Osmund, which by the time of the Reformation was the predominant liturgy used in England.

This volume is one of several Roman missals in the Library, dating from the 16th to the 20th century, but appears to be rare - no other copy of this particular edition seems to be in a UK library. It has another missal, of the special festivals celebrated in Spain, bound at the back. Both the missals in the volume were printed by Antonio de Sancha, a well-known Madrid printer, in 1784 and 1787 - so the volume was less than 20 years old when it was captured. Like many liturgies of the period they are printed in red and black, with printed music and some striking engravings. The dramatic print on the title-page of the second missal commemorates the Spanish victory over the Muslims in the mythical battle of Clavijo in the ninth century, in the person of St James the Moor-Slayer, one of Spain's patron saints.
The holiday we celebrate now has its roots in Massachusetts Governor William Bradford's proclamation of the first Thanksgiving Day on November 29, 1623. He ordered a public ceremony to "render thanksgiving to ye Almighty God for all His blessings."

However, that was not the first thanksgiving on our nation's soil. Long before, on September 8, 1565, Spanish explorers celebrated Mass in gratitude for their safe arrival at what is now St. Augustine, Florida. Another Mass of thanksgiving was celebrated by Spanish explorers in present-day Texas on April 30, 1598. The first official Thanksgiving ceremony in the American colonies was December 4, 1619, when English settlers arrived at the Berkeley Hundred settlement in Virginia. So Bradford was only following honorable suit.

The infant nation continued the practice. Thomas Jefferson introduced a resolution in the Virginia Assembly in 1774 calling for a Day of Fasting and Prayer, as did Richard Henry Lee in 1777. The governor of New Hampshire, John Langdon, proclaimed official days of thanksgiving, fasting, and prayer in 1785 and 1786. Massachusetts Governor John Hancock issued A Proclamation for a Day of Thanksgiving on November 8, 1783, to celebrate victory of the colonies in the Revolutionary War because, "the Interposition of Divine Providence in our Favor hath been most abundantly and most graciously manifested, and the Citizens of these United States have every Reason for Praise and Gratitude to the God of their salvation."

Congress also acted nobly in those days. The Continental Congress issued the First National Proclamation of Thanksgiving to all colonies to thank God for victory at Saratoga during the Revolutionary War. It established Thursday, December 18, 1777 as a day of "solemn thanksgiving and praise," to show gratitude to Almighty God, reminding Americans that "it is the indispensable duty of all men to adore the superintending Providence of Almighty God [and] to acknowledge with gratitude their obligation to Him for benefits received." The proclamation called upon citizens to "consecrate themselves to the service of their Divine Benefactor" and to confess their "manifold sins" and beg God to "mercifully forgive and blot them out of remembrance." It also asked God's continued blessings and recommended that everyone abstain from servile labor, as would be "unbecoming ... on so solemn an occasion." Three years later, on October 18, 1780, the Continental Congress issued a similar proclamation after Benedict Arnold's traitorous plans were exposed.

Congress unanimously approved yet another National Day of Thanksgiving on September 25, 1789, calling for public prayer. President George Washington also signed a "National Thanksgiving Proclamation" on January 1, 1795, pointing to, "Our duty as a people, with devout reverence and affectionate gratitude, to acknowledge our many and great obligations to Almighty God, and implore Him to continue and confirm the blessings we experienced."

Thanksgiving Day was officially set on the last Thursday of November in 1863 when Congress passed an act signed by President Abraham Lincoln, who said, "[It is] announced in Holy Scripture and proven by all history, that those nations are blessed whose God is the Lord... It has seemed to me fit and proper that God should be solemnly, reverently and gratefully acknowledged, as with one heart and one voice, by the whole American people."

By 1939, times had changed. Thanksgiving was no longer about giving thanks but about ringing in the Christmas shopping season.
Retailers pressured President Franklin Delano Roosevelt to change Thanksgiving from the last Thursday of November (which fell on November 30 that year) to the fourth Thursday so they could have an extra week to peddle their Christmas wares. He did so, to the confusion of many, since calendars, and school and vacation schedules, were already set according to tradition. About half of the country celebrated one week before the other half, while Texas and Colorado celebrated both dates.

The next year was just as confusing, so Congress stepped in to clear things up. It approved a 1941 joint resolution that Thanksgiving Day would always be observed on the fourth Thursday of November. The resolution became law in December 1941 (Public Law 77-379), declaring "the fourth Thursday of every November: A National Day of Thanksgiving."

What a pitiful contrast to the inspiring tributes made by our founders in regard to what should be a God-centered holiday. The tombstone of Governor William Bradford is inscribed in Latin: "What our fathers with so much difficulty attained do not basely relinquish." That is sound advice! We should not basely relinquish our Christian heritage to materialism or to the modern idea that a nation belongs to the people who live here now rather than those who helped build her. On this Thanksgiving Day, as we thank God for our great nation, let us beg Him to convert our leaders into models of those godly statesmen of yore.

This New American article was originally published on Nov. 26, 2009.

Sources used by the author: Federer, William J., America's God and Country: Encyclopedia of Quotations, Amerisearch, Inc. 1999; Walls, Lt. Col. Timothy. "Giving Thanks is Tradition that Began with Pioneers," Hawaii Army Weekly, December 5, 2008.
Hurricane Katrina, which hit the Gulf Coast in 2005, was one of the deadliest hurricanes in the US, and the costliest disaster in the world so far. The entire district of the Lower Ninth Ward in New Orleans lies below sea level and was completely destroyed and its population displaced. A year after the catastrophe, nothing had been done and residents could not return to rebuild their homes and neighbourhoods. Brad Pitt visited the area while shooting a movie and decided something had to be done.

In 2007, GRAFT started the "Make It Right Foundation" (MIR) together with Brad Pitt, Bill McDonough and the Cherokee Foundation, to rebuild the Lower 9th Ward. In order to raise funds and create awareness, the foundation designed the Pink Project to officially launch the rebuilding efforts and prepare the first round of financing after Brad Pitt's initial donation. Using an approach that blends set design and architecture, 150 pink scaffolding structures were erected on the site that was still empty 2 years after the floods. These structures served as placeholders for future buildings and building parts and acted as a compelling tool to raise awareness. Over time, as monetary donations came in, the pink placeholders were reassembled to resurrect the urban pattern of the community before the disaster.

Over the course of many months Pitt, GRAFT and the other parties involved met with former homeowners and local community leaders. GRAFT took on the role of an architectural curator on the board of MIR and invited a group of 21 architects to create a range of architectural solutions. GRAFT also contributed two designs for affordable, sustainable and safe houses based on popular dwelling typologies and the rich traditions of New Orleans. Sustainability strategies follow the cradle-to-cradle principle established by William McDonough and Michael Braungart. This and other efforts contributed to reducing energy demand, lowering monthly utility costs from up to US$ 300 before Katrina to only US$ 25 per month.

Make It Right designed a catalogue of houses with diverse styles but similar floor spaces and prices. The residents could then choose the house they wanted. All architects volunteered their time and designs for the community to create a sustainable, climate-responsive and diverse neighbourhood. To date, more than 100 homes have been built, all of which have earned LEED Platinum status, making it the largest community in the USA with this certificate. More than 350 people now live in Make It Right homes.

GRAFT's proposal for housing merges metaphorical abstractions of traditional and modern architecture, drawing on the more successful components of each to create a new, robust whole. Our proposal began with a traditional New Orleans house type, the shotgun house, which is abstractly represented through an expressive, almost exaggerated gable roof and generous front porch. The fluidity of the relationship between home and community, and the provision of areas designed for interaction with neighbours and friends, is one of the things that makes the Lower Ninth so incredibly special. We felt it important to pay homage to this. These traditional typological elements are coupled with modern affordable sustainable amenities. The cross-section of the house transitions progressively towards the rear of the house, beginning as a traditional frontage facing the street and ending as a flat-roofed modern, rectilinear building at the back.
This flat roof also doubles as a safe haven that the residents can flee to in the event of another flood. The building's sustainable features include: solar panels, a water catchment system, a geothermal system with heat pump, tankless water heating, high ceilings for stack ventilation, operable windows which assist stack ventilation and cross ventilation, highly insulated hurricane-resistant windows, high R-value insulation, no off-gassing paints and finishing materials, permeable paving, Energy Star appliances, ceiling fans, and low-flow toilets.

By April 2009 a total of six houses had been finished as part of the Make It Right program in the Lower 9th Ward, and the owners were able to move back and enjoy the benefits of their new homes. Two of these houses were designed by GRAFT and chosen by the homeowners, as the process at Make It Right is popular vote. Nine more houses are currently under construction, one of them also designed by GRAFT, and ten houses are in the permit process. The houses designed by GRAFT are inspired by the Cradle to Cradle philosophy and received LEED Platinum certification. They are prefabricated modular units, constructed off-site.

After the huge success of the first round of designs for the Lower 9th Ward a new group of architects was invited to design dwellings. GRAFT contributed another design for free, this time with a larger building for up to two families. The Round 2 house deploys a similar formal strategy of blending as does GRAFT's Round 1 shotgun house. A strong visual connection to the Round 1 house was maintained in order to bring consistency of character to the New Orleans' Lower 9th Ward, which will continue to be populated by these types of dwellings. Here, we have additionally drawn inspiration from the camelback shotgun typology. Historically, camelbacks emerged as a way for residents to add a partial second story to a residence, whether simply to gain more space for a single-family home or to add a rental unit at the rear of a structure as an additional source of income in a traditionally low income neighborhood. In our design, we utilize the camelback strategy to stack a second efficiency unit above a first floor shotgun house.

A critical programmatic goal within the design is to establish a strong connection between the private interior zone of the house and the shared public space of the street. The primary challenge in achieving this goal lies in negotiating the 8'-0" first floor height, required to make the houses safer from future flooding, with the street level. The broad and spacious deck located in the front yard mediates the relationship between public and private by raising the deck 5'-0" above grade. This offers a welcoming gesture to the street while at the same time creating a semi-private space for the inhabitants of the house to enjoy.

Residents may enter the house from the side porch landing, leading them into a large open space containing living, dining and kitchen functions. The lower unit has a flexible three bedroom layout that can be converted into a two bedroom and office layout if desired. The master suite at the rear of the house contains an en-suite bathroom that shares a common wet wall with the unit's other bathroom and kitchen, making a cost-efficient plumbing core. An exterior stair carries the inhabitants of the efficiency unit up to a rooftop terrace entry deck. This secondary deck level may be utilized as a private deck for the upper dwelling.
It provides a generous outdoor living space, views of the neighborhood, space for a small vegetable or herb garden, and easy access to the solar panel array for maintenance. The upper unit itself is designed to be a simple one bedroom dwelling with a living room and dining area facing the backyard. Here, the efficiency kitchen shares a wall with the bath to form a cost-efficient plumbing core. The kitchen forms an ‘L’ at the perimeter of the living and dining area in order to create an open and inviting space.
Happy #WomensEqualityDay! Despite the gender inequalities that still exist in today's society, we are happy to highlight the intellectual and capable women that inspire us. One of them is Dr. Emily Stowe, Canada's first female doctor and a tireless fighter for women's rights.

Born in 1831, Emily Stowe (born Emily Jennings) was raised in Ontario. Her mother, an American-educated woman, was so unhappy with the quality of education for girls that she decided to take matters into her own hands: she taught all of her children herself. At 15, Emily followed in her mother's teaching footsteps and taught in a one-room schoolhouse for seven years.

When Emily was 22, she applied for admission to Victoria College and was rejected simply because she was female. It was this incident that sparked her passion to strive for equal rights. Despite her rejection from Victoria College, Emily was accepted to Toronto's Normal School for Upper Canada, one of the few schools open to women then. Here she thrived, graduating with first-class honours two years later.

In 1856, she married her husband John Stowe and gave birth to three children over the course of the next seven years. After the birth of her third child, John contracted the respiratory disease tuberculosis. Thus began Emily's interest in homeopathic medicine, an area of medicine her mother had also studied. Combined with her belief that women doctors were urgently needed, Emily decided to become a physician. Go Emily!

At 34, Emily decided once again to apply for admission, this time to the Toronto School of Medicine. Once again, she suffered the same fate as when she applied at 22: rejection. The reason? "The doors of the University are not open to women," in the words of the school's vice-president. Rightfully enraged, Emily did not let this stop her from pursuing her dream of becoming a physician. Instead, she went to the New York Medical College for Women in the United States and obtained her degree in 1867.

Once her degree was completed, Emily came back to Canada and set up a practice in Toronto, becoming Canada's first practising female physician in the process.

Dr. Emily Stowe's story is one of grit, perseverance, and passion. She inspires the Think Dirty team to never give up on our dreams, no matter the obstacle. Happy Women's Equality Day to all the great women in the world. Please take a moment to thank all of the great women who have paved the way for equal opportunities today.
Chemical weapons did not become true weapons of mass destruction (WMD) until they were introduced in their modern form in World War I (1914–18). The German army initiated modern chemical warfare by launching a chlorine attack at Ypres, Belgium, on April 22, 1915, killing 5,000 French and Algerian troops and momentarily breaching their lines of defense. German use of gas and mustard was soon countered by similar tactics from the Allies. By war's end, both sides had used massive quantities of chemical weapons, causing an estimated 1,300,000 casualties, including 91,000 fatalities. The Russian army suffered about 500,000 of these casualties, and the British had 180,000 wounded or killed by chemical arms. One-third of all U.S. casualties in World War I were from mustard and other chemical gases, roughly the ratio for all participants combined. By the war's end, all the great powers involved had developed not only offensive chemical arms but also crude gas masks and protective overgarments to defend themselves against chemical weapon attacks.

Altogether, the warring states employed more than two dozen different chemical agents during World War I, including mustard gas, which caused perhaps as many as 90 percent of all chemical casualties (though very few of these casualties were fatal) from that conflict. Other choking gas agents used included chlorine, phosgene, diphosgene, and chloropicrin. The blood agents included hydrogen cyanide, cyanogen chloride, and cyanogen bromide. Arsenic-laced sneeze agents were also used, as were tear gases like ethyl bromoacetate, bromoacetone, and bromobenzyl cyanide.

The horrific casualties of World War I helped persuade many world leaders of the need to ban the use of chemical weapons. A number of proposals were made during the 1920s, and at the 1925 Geneva Conference for the Supervision of the International Traffic in Arms (see Geneva Conventions) a protocol was approved and signed by most of the world's states. The 1925 Geneva Protocol made it illegal to employ chemical or biological weapons, though the ban extended only to those who signed the treaty. The Geneva Protocol did not ban the production, acquisition, stockpiling, or transfer of such arms, and, critically, it did not contain any verification procedure to ensure compliance.

Despite the popular reaction against this form of warfare and the international agreement banning the use of chemical weapons, chemical arms were used a number of times in the years between the two World Wars. For example, chemical weapons were employed by British forces in the Russian Civil War (1919), Spanish forces in Morocco (1923–26), Italian forces in Libya (1930), Soviet troops in Xinjiang (1934), and Italian forces in Ethiopia (1935–40). During the Sino-Japanese War (1937–45), Japanese forces employed riot-control agents, phosgene, hydrogen cyanide, lewisite, and mustard agents extensively against Chinese targets.

There is no record of chemical warfare among World War II belligerents other than that of the Japanese. The Axis forces in Europe and the Allied forces adopted no-first-use policies, though each side was ready to respond in kind if the other acted first. Indeed, all the major powers developed extensive chemical warfare capabilities as a deterrent to their use.
After World War II, chemical weapons were employed on a number of occasions. Egyptian military forces, participating in Yemen’s civil war between royalists and republicans, used chemical weapons, such as nerve and mustard agents, in 1963, 1965, and 1967. During the Soviet intervention into the Afghan War (1978–92), chemical arms, such as mustard and incapacitating agents, were used against the mujahideen rebels. In 1987 Libya used mustard munitions against rebels in Chad. The most extensive post-World War II use of chemical weapons occurred during the Iran-Iraq War (1980–88), in which Iraq used the nerve agents sarin and tabun, as well as riot-control agents and blister agents like sulfur mustard, resulting in tens of thousands of Iranian casualties. Chemical weapons enabled Iraq to avoid defeat, though not obtain victory, against the more numerous Iranian forces. In response to Iraq’s use of chemical weapons, Iran made efforts to develop chemical weapons and may have used them against Iraq, a contention that Iran has denied. Furthermore, Iran claims to have ended its program when it signed (1993) and ratified (1997) the CWC. Iraq also used chemical weapons (thought to be hydrogen cyanide, sarin, or sulfur mustard gas) against Iraqi Kurds who were considered unfriendly to the regime of Ṣaddām Ḥussein. The most notorious such attack was the killing of 5,000 Kurds, including many civilians, in the city of Halabjah in 1988.
This tutorial will teach you how to use PHP (PHP: Hypertext Preprocessor) to create dynamic, more interactive websites.

History of PHP

Here is a brief history of the PHP language:
- Hypertext Preprocessor, commonly known as PHP, is a widely used open-source scripting language that is executed on a server. PHP is used mainly for web development and is free to download.
- PHP is a server-side scripting language and runs on various platforms such as Windows and Linux. It is compatible with all the major web servers (Apache, IIS). PHP can access all kinds of databases: it can collect information from tables, alter the database, and fetch the contents of a database table.
- PHP is capable of generating dynamic page content, sending and receiving cookies, and encrypting data.
- PHP started as a home project by Rasmus Lerdorf, who wrote a set of Common Gateway Interface (CGI) programs for it in 1994. At that time it had the ability to work with forms and to communicate with a database. In 1995, Lerdorf decided to release PHP (Personal Home Page) Tools version 1.0 publicly in order to identify bugs and improve the code. It could be embedded in HTML, and its syntax resembled Perl, though in a simpler form.
- PHP version 2.0 was released in 1997 after long beta testing. Later, Zeev Suraski and Andi Gutmans started working on PHP, rewrote the parser, and released PHP 3 in 1998.
- PHP 4, powered by the Zend Engine, was introduced in 2000, and PHP 5, powered by the new Zend Engine II, followed in 2004. PHP 5 includes new features such as object-oriented programming, the PHP Data Objects (PDO) extension, and major performance enhancements.

PHP embeds easily in HTML. Fig. 1 shows an example of PHP embedded in HTML: instead of the lengthy commands needed in C or Perl, it uses a single statement to print "Hello World" (a minimal sketch of such a page follows at the end of this introduction). In that example, the PHP code starts with "<?php" and ends with "?>", which tells the PHP engine where to switch into PHP mode and back out of it. Anything between the start and end tags is treated as PHP script and processed on the server.

Each chapter has an attached interactive example that can be easily modified and tested as desired. All examples can be found at the end of the tutorial, where you will also find a complete reference list with data types, operators, statements, functions, etc.
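As a concrete illustration of the embedding described above, here is a minimal sketch of a PHP page of the kind Fig. 1 refers to. The file contents are illustrative rather than a reproduction of the tutorial's own figure; the page title and text are assumptions, not taken from the original.

```php
<!DOCTYPE html>
<html>
  <head>
    <title>My first PHP page</title>
  </head>
  <body>
    <?php
      // The server switches into PHP mode at the opening tag,
      // runs this single statement, and switches back to plain
      // HTML output at the closing tag.
      echo "Hello World";
    ?>
  </body>
</html>
```

Saved as a .php file and served by a PHP-enabled web server, the browser receives ordinary HTML with "Hello World" in the body; the PHP tags themselves never reach the client.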
Creating the next generation of space innovators

Canadian Space Agency announces winning student satellite projects

May 4, 2018, Winnipeg, Manitoba

Young Canadians are the innovators who will take the Canadian Space Program into the future. What better way to learn about space engineering than to design, build, launch and operate your own satellite? Post-secondary students from each province and territory have won the chance to design, build and launch their own CubeSat into space through the Canadian CubeSat Project. Today, Canadian Space Agency (CSA) astronaut Jenni Sidey unveiled the teams selected to participate in this new national student space initiative.

The opportunity to work on a real space mission from start to finish, including operating the satellites and conducting science experiments in space, will help students learn about science and engineering. It will also give them useful experience and skills in project management, leadership, marketing and communications. This will equip them well for the jobs of the future. CSA experts, as well as representatives from the Canadian space industry, will guide the teams throughout the Canadian CubeSat Project, to optimize the success of each mission. This initiative is part of the Government of Canada's Innovation and Skills Plan, a multi-year strategy to create well-paying jobs for the middle class.

"The CubeSat project is training Canada's next generation of innovators, engineers and astronauts. Congratulations to all the winning teams and their professors. These students are learning critical skills that will help them get the middle-class jobs of tomorrow. We can't wait to see these satellites launched!" - The Honourable Navdeep Bains, Minister of Innovation, Science and Economic Development

"The Canadian CubeSat Project invites Canadian students to rise to the challenge of space, and after reading the winning proposals, I can say that they are ready to take it on. What better way to engage Canadian students in STEM activities than to give them an opportunity to take part in a real space mission!" - Dr. Jenni Sidey, Astronaut, Canadian Space Agency

The Canadian Space Agency is awarding a total of 15 grants, ranging from $200,000 to $250,000. The CSA will also cover costs associated with the CubeSat launches. A total of 15 teams composed of 37 organizations will participate in the Canadian CubeSat Project, thanks to several inter-regional, inter-provincial and international collaborations that even include universities from Europe, Australia and the USA. Students must be at the post-secondary level, although several teams will be engaging younger students in their communities through outreach activities. It is expected that 532 students in all will work on this initiative.

Once tested and ready for space, the 15 CubeSats will be launched and deployed from the International Space Station in 2020–2021. A CubeSat is a tiny, cube-shaped satellite measuring 10 cm × 10 cm × 10 cm.
World Water Day 2016

Theme: Water for people, Water by people

World Water Day occurs each year on March 22, as designated by United Nations General Assembly resolution.

Starting on World Water Day, Martin Strel will try to swim the entire circumference of Earth. Mr. Strel, a 61-year-old Slovene, plans to swim about 25,000 miles, passing through 107 countries, in about 450 days. That means he would finish around July 2017. That's a great deal farther than his 3,278-mile Amazon swim, which was chronicled in the 2009 documentary "Big River Man." Boats escorting him down the river poured blood over the side to distract piranhas.

Mr. Strel appears to be different from other people. His team says that his body cannot develop lactic acid, which is produced during exercise and causes muscle fatigue. His past swims have promoted environmental awareness, and this time will be no different. Over the next 15 months, Mr. Strel hopes to draw attention to water pollution. It's an issue he'll literally jump into when he swims in waters like the Nile, the Yangtze and the Ganges.
Argentina and the evolution of the sauropod hind foot. 2016. Nature.

Notocolossus gonzalezparejasi has a humerus 1.76 m (5 ft 9 in) in length and an estimated mass of 40,000–60,000 kg. It is one of the largest dinosaurs, and indeed one of the largest land animals, yet known to science. One of the two known Notocolossus specimens preserves the hind foot in its entirety, providing the first complete look at this important part of the skeleton in a super-massive dinosaur. The Notocolossus hind foot shows distinctive features, not found in any other sauropod, that appear to constitute adaptations for supporting extremely great weight.

Congrats to Matt Lamanna and his colleagues on this great new paper!
By: Ralph Rudolph
R. Rudolph Consulting LLC @ www.temperatureconsultant.com

A technique called the Wedge Method or Roll Nip method is finding increased use in measuring strip temperatures in the metal production and processing industries, as it is touted as providing two advantages: it appears to be independent of the material emissivity and of any ambient reflected radiation.

Basically, the concept is quite simple. Picture a horizontal steel strip that contacts and at least partially wraps around a large roll, usually a deflector roll used to change strip direction or a bridle roll used to set strip tension. Aim a radiation thermometer almost parallel to the strip into the gap formed between the roll and the strip tangent point, as deep as you can go. (Viewing at an angle from the side is fine.) This gap, as the claims state, can be treated as a blackbody with an emissivity of 1.0 (see Figure 1). Hence, you don't have to worry about ambient radiation, since the reflectivity is 0.0, and you don't have to worry about changing material emissivity. This is partly true and partly wishful thinking.

Blackbody conditions exist for a cavity if and only if all sides of the cavity are at the same temperature. If the roll being used has a very low thermal mass (heats up easily), and there is a large wrap around the roll and sufficient strip tension to allow heat transfer between the strip and the roll, then the roll will heat up to near strip temperature over time. But because the roll has natural convection, conduction and radiation losses, it can never quite reach the strip temperature: the effective emissivity never reaches 1.0. It should also be obvious that if the strip abruptly changes temperature, which can happen with strip thickness or furnace temperature changes, it will take time for the roll to change temperature. Heat transfer between the two can take quite a while, during which time the temperature reading from the wedge system will be quite inaccurate.

So, if a system is designed well, with a major roll wrap, a low thermal mass roll, sufficient strip tension and steady long-term operation (no major changes in strip temperature), this method can work as claimed (except that emissivity must be set somewhat lower than 1.0 to compensate for the roll being at a slightly lower temperature than the strip). Given human nature, however, I've seen numerous instances where folks have not understood why the wedge method can work and have misapplied it. Believe it or not, I've seen an instance where a so-called wedge method was applied with zero roll wrap, with the strip simply passing over a support roll. And this system was (unfortunately) designed by the equipment provider, who should have known better. I would guess that a majority of the wedge method applications I've seen have been poorly designed, with little attention paid to the amount of roll wrap or the roll material, and with little understanding of what occurs during changes in strip temperature.

There is a modification to the wedge method that can provide a significant improvement: mount a second radiation thermometer to monitor roll temperature and compare this reading to that of the wedge RT. Using a PC with input and output cards (and most any older PC will work), abrupt deviations between the two readings, which occur as strip temperature changes, can be used to correct for errors. If accuracy is desired, it's well worth the extra expense. You get what you pay for.
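To make the two-thermometer correction described in the last paragraph concrete, here is a hedged sketch of the comparison logic. PHP is used only because it is the one programming language that appears elsewhere in this document; the deviation threshold and blending factor are illustrative assumptions, not values from the article, and a real installation would tune them against calibration runs on its own line.

```php
<?php
// Illustrative sketch (not from the article): combine the wedge reading with a
// second radiation thermometer aimed at the roll to flag and correct transients.

function correctedStripTemp(float $wedgeTemp, float $rollTemp): float
{
    // When the roll has caught up with the strip, the gap behaves almost like a
    // blackbody cavity, so the wedge reading can be trusted directly.
    $deviation = $wedgeTemp - $rollTemp;          // degrees C
    if (abs($deviation) < 5.0) {                  // illustrative threshold
        return $wedgeTemp;
    }

    // During a strip-temperature change the roll lags behind, the cavity is no
    // longer isothermal, and the wedge reads low. Push the estimate back toward
    // the strip by weighting the deviation between the two thermometers.
    $blend = 0.3;                                 // illustrative weighting factor
    return $wedgeTemp + $blend * $deviation;
}

// Example: after a furnace change the wedge reads 640 C while the roll reads 600 C.
echo correctedStripTemp(640.0, 600.0);            // prints 652
```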
Native Americans and African Americans
- LAST MODIFIED: 28 June 2016
- DOI: 10.1093/obo/9780190280024-0038

Since the mid-20th century, a significant literature has developed at the intersection of Native American and African American lives in North American history. Much of this work has emerged within regional literatures: the American West, South, and Northeast. Some of the earliest studies emerged within African American scholarship and the literature on the black West and the long-standing presence of African Americans in Indian country. At the same time, scholars of Native Americans and African Americans in the US South have increasingly engaged and debated the historical development and legacies of racial slavery in Indian country. Since the early 21st century, scholarship has been increasingly engaged with the intersection of African American and Native American lives in early America, including especially the colonial South and Northeast. Numerous scholars of Native American history have highlighted the history of Indian captivity, exploring the intersection of the Indian and African slave trades. Within and beyond regional studies, scholars of American history and expansion have documented and theorized the historical intersection of slavery and colonization. Finally, scholarship in this field has included numerous biographies, family histories, and other microhistorical approaches at the intersection of Native American and African American history.

General Overviews and Documents

A number of syntheses have emerged at the intersection of African American and Native American histories. Horsman 1981 and Rothman 2007 engage and theorize the relationship between slavery and colonization in North American history, while Snyder 2010 (cited under Lower South) provides an overview of the historical relationship between Indian captivity and southern slavery. Katz 2012 considers the broad history of black Indian peoples in North America. Forbes 1993 engages this intersection in the precolonial era, including analysis of the Caribbean and Europe. A growing number of scholars, including the author of Bennett 2009, have engaged such intersecting histories within the context of Latin America and the Caribbean. Minges 2004 compiles 20th-century interviews with ex-slaves of Native American slaveholders and those of Native American descent. Finally, TallBear 2013 and Tayac 2009 explore early-21st-century legacies of the historical relationship between Native Americans and African Americans.

Bennett, Herman L. Colonial Blackness: A History of Afro-Mexico. Blacks in the Diaspora. Bloomington: University of Indiana Press, 2009.
This pathbreaking study narrates the history of Mexico through the experiences of Africans and Afro-Mexicans.

Forbes, Jack D. Africans and Native Americans: The Language of Race and the Evolution of Red-Black Peoples. 2d ed. Urbana: University of Illinois Press, 1993.
This study offers important discussions of precolonial contact between Native Americans and Africans in North America, the Caribbean, and Europe, as well as racial formation and classifications in the Americas.

Horsman, Reginald. Race and Manifest Destiny: The Origins of American Racial Anglo-Saxonism. Cambridge, MA: Harvard University Press, 1981.
This groundbreaking study documents the centrality of race to American nationalism, including the racialization of African Americans, Native Americans, and Mexican Americans.

Katz, William Loren. Black Indians: A Hidden Heritage. Rev. ed. New York: Atheneum, 2012.
First published in 1986, this accessible study brought popular attention to the "hidden heritage" of black Indians. The 2012 edition includes updated chapters.

Minges, Patrick, ed. Black Indian Slave Narratives. Real Voices, Real History. Winston-Salem, NC: John F. Blair, 2004.
This is a collection of Works Progress Administration (WPA) interviews with former slaves who reference Native American slaveholders, Native American descent, and Native American relations.

Rothman, Adam. Slave Country: American Expansion and the Origins of the Deep South. Cambridge, MA: Harvard University Press, 2007.
This compelling study, first published in 2005, illustrates the expansion of racial slavery in the wake of the American Revolution. One chapter especially engages the intersection of slavery and Indian removal, and the racialization of Native American slaveholding.

TallBear, Kim. Native American DNA: Tribal Belonging and the False Promise of Genetic Science. Minneapolis: University of Minnesota Press, 2013.
Groundbreaking work engaged in discussion of race, nation, sovereignty, science, and "blood" politics.

Tayac, Gabrielle, ed. IndiVisible: African-Native American Lives in the Americas. Washington, DC: Smithsonian, 2009.
This edited collection originated with the pathbreaking 2009 IndiVisible exhibition at the National Museum of the American Indian, intended to make visible African-native lives in North America. The collection includes the work of twenty-seven scholars and includes significant discussion of early-21st-century policies, communities, and aesthetic traditions.
Perfectly Preserved Woolly Mammoth Provides New Hope Of Resurrecting The Mighty Mammal

Resurrecting prehistoric animals that roamed the Earth's surface thousands of years ago is something scientists have been fantasizing about for many years. In most cases, however, it is just not possible, either because the DNA is not available or because, as with animals like the dinosaurs, it is simply too old. The one exception may be the Woolly Mammoth, which lived during the Ice Age, about 8,000 years ago. Thanks to their relatively recent demise and the ice-cold climate they lived in, many well-preserved specimens have been discovered, especially in the Arctic North.

In the last few years, scientists have gone as far as piecing together the mammal's genetic code with the help of frozen hair and also recreating its blood using DNA found inside fossilized bones. However, to actually recreate a living specimen, scientists first have to find at least one living cell of the mighty mammal, a quest which so far has not met with much success.

Now a recent mammoth fossil discovery in Yakutia in Eastern Russia is getting scientists all excited again. Found under a 100-meter layer of frozen ground by members of the international expedition Yana 2012, which took place between August 9th and September 5th, the ancient animal is so well preserved that it still has soft fatty tissue, hair and bone marrow. However, the team of scientists led by Semyon Grigoryev, Director of the Mammoth Museum at North-Eastern Federal University in Yakutsk, is not sure if any of these perfectly intact cells are still living. That is currently being tested by a team of South Korean scientists.

Even if they do discover a live cell, which most experts think is unlikely, you are not going to find the mighty mammals roaming around your neighborhood any time soon. Resurrecting the animal is a lengthy procedure that begins with injecting the DNA sample from the live cell into an empty elephant egg, the elephant being the mammoth's closest present-day relative. Then, by zapping an electric current into it, the scientists will try to trick the egg into growing and dividing just like any other embryo. After it has matured for a few days, the researchers will implant it inside the womb of a female elephant, who will act as a surrogate mother. After that begins the waiting game, for it will take about 600 days for the Woolly Mammoth baby to fully mature. That is, if everything goes well and the surrogate mother doesn't reject the implanted egg.

Even if completely successful at the first attempt, which given past experience is highly unlikely, the scientists are not sure if and how they would breed more, or if they would even display the one specimen to the public. But if by some miracle we do have a baby Mammoth in our midst, scientists are hoping they will be able to study it and answer the age-old question of how they became extinct in the first place: were they hunted down by humans, or did they die because of climate change?

Will the mighty Woolly Mammoth ever come back to life? Check back with us in a few years to find out!

Resources: rt.com, news.yahoo.com
Glaucoma is a group of eye diseases that gradually steals sight, often without warning and without symptoms. This loss of vision is caused by damage to the optic nerve. The nerve acts like an electric cable with over a million wires and is responsible for carrying the images we see to the brain. It was once thought that high eye pressure was the main cause of this optic nerve damage. Although eye pressure is clearly a risk factor, we now know that other factors must also be involved, because even people with "normal" intraocular pressure (IOP) can experience vision loss from glaucoma.

What Are The Risk Factors For Glaucoma?
Glaucoma can occur in anyone. The chance of developing glaucoma increases if you are African-American or Hispanic; have a relative with glaucoma; are very nearsighted; are over 35 years of age; or have diabetes, hypertension and/or other vascular diseases.

What Are The Different Types Of Glaucoma?
The most common types of glaucoma include primary open-angle glaucoma, angle-closure glaucoma, secondary glaucoma, normal-tension glaucoma, pigmentary glaucoma, and glaucoma occurring together with cataracts. The most common type of glaucoma is primary open-angle glaucoma, affecting about three million Americans. It happens when the eye's drainage canals become clogged over time. The IOP rises because the correct amount of fluid can't drain out of the eye. Most people have no symptoms and no early warning signs. If open-angle glaucoma is not diagnosed and treated, it can cause a gradual loss of vision. This type of glaucoma develops slowly and sometimes without noticeable sight loss for many years. It usually responds well to medication, especially if caught early and treated.

How Is Glaucoma Diagnosed And Monitored?
Tonometry – used to measure eye pressure. A technician will use a special device that measures the eye's pressure.
Ophthalmoscopy – used to examine the inside of the eye, especially the optic nerve. In a darkened room, the doctor will use a magnifying lens to look at the shape and color of the optic nerve.
Perimetry – During this test, you will be asked to look straight ahead and then indicate when a moving light passes your peripheral (or side) vision. This helps draw a "map" of your vision.
Gonioscopy – a painless eye test that checks for open-angle or closed-angle glaucoma.
Nerve Fiber Layer Analyzer – uses a computerized machine that takes pictures of your nerve fiber layer. This test helps diagnose and monitor treatment.

How Is Glaucoma Treated?
Medicines – Glaucoma is generally treated with eye drops and/or pills if necessary. To be effective, glaucoma medications must be taken as prescribed. Side effects will be discussed when the medications are given.
Laser Surgery – Laser surgery is being employed more and more as a first-line and adjunctive form of treatment. After the eye is numbed with drops, the laser beam is applied to the trabecular meshwork in the doctor's office. The procedure takes only a few minutes and results in an improved rate of drainage. If the laser surgery is successful, it may reduce the need for additional eye drops or possibly even reduce the need for current eye drops.
Filtration Surgery – Generally performed when medications and laser fail to control the eye pressure. During this procedure, a new drainage channel is formed to allow fluid to drain from the eye. Your ophthalmologist will thoroughly discuss the surgery with you.

Is There A Cure?
Currently, there is no cure for glaucoma.
Glaucoma is a chronic disease that must be treated for life; however, much is happening in research that makes us hopeful a cure may be realized in our lifetime. Persons Over Age 50 Should See An Eye Care Professional Every 2 years. Schedule your appointment today. Drs. Campbell, Cunningham, Taylor and Haun are standing by ready to offer personal care and state-of-the-art technology. For an appointment, call (865) 584-0905.
How to cite quotes within an essay

How to cite in an essay: what MLA and APA are

Citing in an essay: getting familiar with MLA and APA

It doesn't matter whether you are just a high school student or a professional writer: you should be able to cite the sources you use with a particular style of formatting. The two most common styles used for citing are MLA and APA. They make sure there is no plagiarism in the text and give the readers the reference at which they can find more information about the source. As citation is an extremely important part of any essay, you must follow the guidelines closely.

MLA basic rules
MLA is the abbreviation of the Modern Language Association. It is a formatting style mainly used in the humanities, such as English studies, literature, comparative literature or cultural studies. So how do you cite a book in an essay according to MLA?
- Write the last name of the author followed by a comma, and the first name followed by a period.
- Then put the essay title in quotation marks (the period should go inside the last one) and capitalize the first letters of the main words.
- Write the book title in italics (if you hand write, then just underline it). Before you write the name of the editor, use "Ed.".
- The location of publication should be written, followed by a colon and then the name of the publisher.
You can use the following example to see what we mean:
Harris, Muriel. "Talk to Me: Engaging Reluctant Writers." A Tutor's Guide: Helping Writers One to One. Ed. Ben Rafoth. Portsmouth, NH: Heinemann, 2000.

APA basic rules
APA is the abbreviation of the American Psychological Association, and this formatting style is widely used in business, the social sciences and nursing. How do you cite an article in an essay according to APA?
- Write the last name of the author followed by a comma, and the first initial followed by a period.
- Write the essay title, ending it with a period. You must capitalize only the first letter of the first word, and not all of them as in MLA.
- Write the book title in italics (or underline it) and then end it with a period.
You can use the following example to see what we mean:
Bjork, R. A. (1989). Retrieval inhibition as an adaptive mechanism in human memory. In H. L. Roediger III & F. I. M. Craik (Eds.), Varieties of memory & consciousness (pp. 309-330). Hillsdale, NJ: Erlbaum.

How to cite a website in an essay
MLA formatting style does not require including the URL. However, it does require you to include the author of the page or its sponsor (and it can be a company, not an individual). For instance:
Last, First M. "Article Title." Website Title. Website Publisher, Date Month Year Published. Web. Date Month Year Accessed.
In APA you simply cite the web page along with its author. By the way, to make sure your citation is formatted the proper way, you can use citation generators, which are able to help with formatting styles.

How to cite a quote in an essay
According to MLA, when citing a long quote you should omit the quotation marks, start it on a new line, use double spacing and include the citation after the closing punctuation. And in APA you just include the last name of the author, the year and also the page. That is it 🙂
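The bullet lists above amount to a small formatting recipe, so here is a hedged sketch of how the MLA book-chapter pattern could be assembled programmatically. PHP is used only because it is the one programming language that appears elsewhere in this document; the function name and field names are illustrative assumptions, and any real citation tool should follow the current MLA handbook rather than this sketch.

```php
<?php
// Illustrative helper: assemble an MLA-style citation for an essay in an edited book.

function mlaBookChapterCitation(array $f): string
{
    // Author last, first. "Essay title." Book title. Ed. Editor. City: Publisher, year.
    return sprintf(
        '%s, %s. "%s." %s. Ed. %s. %s: %s, %d.',
        $f['authorLast'], $f['authorFirst'], $f['essayTitle'], $f['bookTitle'],
        $f['editor'], $f['city'], $f['publisher'], $f['year']
    );
}

echo mlaBookChapterCitation([
    'authorLast'  => 'Harris',
    'authorFirst' => 'Muriel',
    'essayTitle'  => 'Talk to Me: Engaging Reluctant Writers',
    'bookTitle'   => "A Tutor's Guide: Helping Writers One to One",
    'editor'      => 'Ben Rafoth',
    'city'        => 'Portsmouth, NH',
    'publisher'   => 'Heinemann',
    'year'        => 2000,
]);
// Output matches the MLA example given above (italics cannot be shown in plain text).
```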
Embargo: 00:01 Tuesday 11th August 2009

Today GeneWatch UK published a new report exploring the use of genetic technologies for the production of agrofuels (industrial-scale biofuels). The report questions whether the substantial investment being made in a new generation of agrofuels, often being developed using genetically modified (GM) organisms and new GM crops, will solve the problems now acknowledged with the current generation. The concerns include:
- Over-reliance on claims that difficult technical problems will be overcome, particularly that new GM micro-organisms will be able to convert cellulose in woody plants to fuel;
- Failure to consider the environmental impacts, including impacts on biodiversity and the creation of potentially hazardous waste streams of GM micro-organisms.

Existing agrofuels have been widely criticised because greenhouse gas savings over fossil fuels may be low or non-existent, and because the diversion of land and food crops to fuel production contributed to rising food prices and consequent riots in some countries in recent years. However, the EU and the UK Government are set to continue to increase their use, assuming that a new generation of agrofuels will solve these problems, as outlined in the recent UK Renewable Energy Strategy (1) and the Carbon Reduction Strategy for Transport (2). In January this year the BBSRC (Biotechnology and Biological Sciences Research Council) launched its £27m Sustainable Bioenergy Centre, which it says will focus on widening the range of raw materials and altering crops to be more useful for bioenergy production, including biofuels (3).

GeneWatch UK's new report into the use of genetically modified organisms and new GM crops in agrofuel production includes the following policy recommendations:

1) A more realistic and independent appraisal of the potential impact of second-generation GM agrofuels is needed to inform policy decisions. This should include an assessment of the likely performance against key criteria, including impact on climate, biodiversity, food supply and land use, and technical feasibility. It should be open about uncertainties, economic interests and how different social values (such as how people value biodiversity and impacts on food supplies in poorer countries) are likely to affect policy decisions.

2) Important gaps in research and regulation should be addressed. These include:
- research on environmental impacts, including invasiveness, energy balance and the impact of factory-scale waste streams containing genetically modified micro-organisms;
- consideration of major gaps in regulation, including regulation of waste streams containing genetically modified micro-organisms, and how the possible contamination of food crops with new traits from GM agrofuels will be addressed.

In general, more public involvement and debate is also needed to ensure that policy decisions, including research funding decisions, are not driven by a narrow range of vested interests.

Notes to editors:
1) The Renewable Energy Strategy (RES) July 2009 www.decc.gov.uk/en/content/cms/what_we_do/uk_supply/energy_mix/renewable/res/res.aspx
2) Low Carbon Transport: A Greener Future. A Carbon Reduction Strategy for Transport. Department for Transport (July 2009) http://www.dft.gov.uk/pgr/sustainable/carbonreduction/
3) BBSRC Sustainable Bioenergy Centre http://bsbec.bbsrc.ac.uk/

For further information contact: Becky Price, Mobile: 07949396328
Solar Surpasses Biomass to Become Third-Most Prevalent Renewable Electricity Source

Electricity generation from solar resources in the United States reached 77 million megawatthours (MWh) in 2017, surpassing for the first time annual generation from biomass resources, which generated 64 million MWh in 2017. Among renewable sources, only hydro and wind generated more electricity in 2017, at 300 million MWh and 254 million MWh, respectively.

Biomass generating capacity has remained relatively unchanged in recent years, while solar generating capacity has consistently grown. Annual growth in solar generation often lags annual capacity additions because generating capacity tends to be added late in the year. For example, in 2016, 29% of total utility-scale solar generating capacity additions occurred in December, leaving few days for an installed project to contribute to total annual generation despite being counted in annual generating capacity additions. In 2017, December solar additions accounted for 21% of the annual total.

Overall, solar technologies operate at lower annual capacity factors and experience more seasonal variation than biomass technologies. Biomass electricity generation comes from multiple fuel sources, such as wood solids (68% of total biomass electricity generation in 2017), landfill gas (17%), municipal solid waste (11%), and other biogenic and nonbiogenic materials (4%). These shares of biomass generation have remained relatively constant in recent years.

Solar can be divided into three types: solar thermal, which converts sunlight to steam to produce power; large-scale solar photovoltaic (PV), which uses PV cells to directly produce electricity from sunlight; and small-scale solar, which covers PV installations of 1 megawatt or smaller. Generation from solar thermal sources has remained relatively flat in recent years, at about 3 million MWh. The most recent addition of solar thermal capacity was the Crescent Dunes Solar Energy plant installed in Nevada in 2015, and currently no solar thermal generators are under construction in the United States.

Solar photovoltaic systems, however, have consistently grown in recent years. In 2014, large-scale solar PV systems generated 15 million MWh, and small-scale PV systems generated 11 million MWh. By 2017, annual electricity from those sources had increased to 50 million MWh and 24 million MWh, respectively. By the end of 2018, EIA expects an additional 5,067 MW of large-scale PV to come online, according to EIA's Preliminary Monthly Electric Generator Inventory. Information about planned small-scale PV systems (one megawatt and below) is not collected in that survey.

Principal contributor: Richard Bowers
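As a rough check on the capacity-factor comparison made above, the arithmetic is simply annual generation divided by installed capacity times the 8,760 hours in a year. The sketch below uses the article's 2017 generation figures, but the installed-capacity values are illustrative assumptions, since the article does not report them; it is written in PHP only for consistency with the one language used elsewhere in this document.

```php
<?php
// Capacity factor = annual generation (MWh) / (capacity (MW) * hours in a year).

function capacityFactor(float $annualGenerationMWh, float $capacityMW): float
{
    $hoursPerYear = 8760.0;
    return $annualGenerationMWh / ($capacityMW * $hoursPerYear);
}

// Large-scale solar PV: 50 million MWh in 2017 (from the article) over an assumed
// 22,000 MW of capacity gives a capacity factor of roughly 26%.
printf("Solar PV: %.1f%%\n", 100 * capacityFactor(50000000, 22000));

// Biomass: 64 million MWh in 2017 (from the article) over an assumed 14,000 MW
// gives roughly 52%, illustrating why biomass output is steadier through the year.
printf("Biomass:  %.1f%%\n", 100 * capacityFactor(64000000, 14000));
```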
Do a subject search using the subject headings below to find primary sources in the Peninsula Library System Catalog. These are examples of some of the books we have that collect primary sources. Internet History Sourcebook: Medieval History A collection of primary sources from Forham University. Search by keyword or browse by subject. Monastic Matrix A scholarly resource for the study of women's religious communities from 400 to 1600 CE. Part of Ohio State University. Epistolae: Medieval Women's Letters Epistolae is a collection of letters to and from women dating from the 4th to the 13th century AD. These letters from the Middle Ages, written in Latin, are presented with English translations and are organized by the women participating. Biographical sketches of the women and descriptions of the subject matter or the historic context of the letter is included where available. Part of Columbia Center for New Media Teaching and Learning, Columbia University. World History in Context moves chronologically over 5,000 years from antiquity to the present and geographically around the globe, to ensure that the events, movements and individuals that defined, informed and shaped world history are covered with a sense of balance. In this database you will find: Ask a librarian or watch an introductory tutorial here for questions about using this database. To find primary sources go to "Advanced Search" and do a subject search for "sources" along with your keywords on another line.
This part of the Zagros Mountains, on the border of West Azerbaijan Province with Turkey, owes much to its high altitude and vastness: it stores huge amounts of snow, feeds large rivers on its slopes, and has created pristine nature and a pleasant climate. These rivers eventually lead to a huge lake, the largest inland lake in Iran.

Lake Urmia is located between the two provinces of East and West Azerbaijan, and it is not possible to assign it to either one of them alone. There are access routes to the lake from both provinces, and it can be visited from either side.

One of the important features of Lake Urmia is its abundant salt content. The salinity of this lake is such that many aquatic animals are not able to live in it, and its aquatic life is limited to a variety of bacteria and algae. Many mammals and birds can be seen on the shores and islands of this lake. In addition, a variety of insects can be seen along the water, which is another attraction. Due to the salinity of the water, the environment near the lake is empty of trees and dense vegetation, but at farther distances, in the spring, the whole landscape is filled with green meadows and colorful flowers, which, together with the view of snow-capped mountains, create a unique image.

Visiting Lake Urmia is possible from several locations, but one of the most famous and attractive is an area called "Kazem Dashi" in the north of the lake. From here you can see a beautiful picture of the lake and some beautiful peninsulas. This place can easily be reached by car.

One of the notable features of Lake Urmia is its water color, which tends toward red in some seasons. This color is due to the presence of microscopic organisms in the water, which change the water's color.
Saskatoon's Bicycle Bylaw has been updated to bring cycling rules and regulations in line with national best practices. All revisions can be found within the gallery above, along with other cycling tips and information on where to ride.

Before you hop on your bike, familiarize yourself with Saskatoon's Bicycle Bylaw for guidance on how to bike safely around town. When riding a bike on city streets or pathways, it is important for people to understand and comply with the regulations found within the Bicycle Bylaw and Traffic Bylaw. Below are a few highlights on how to prepare for your ride and safely interact with other road and pathway users.

Before you head out, make sure your bike is equipped with the following:
- Brakes in good working condition.
- A headlight and a red rear reflector or light for riding at night and in poor visibility conditions.
- A horn or bell to warn pedestrians of your presence.
While not required, wearing a helmet is encouraged.

Operating your Bicycle
- Use due care and attention – Be aware of your surroundings and where you and others are positioned on the street or pathway.
- Be courteous – Don't partake in any activity or stunt that is likely to distract, startle, endanger or interfere with pedestrians, vehicles or other street users.
- Indicate your intentions – When changing lanes or turning at intersections, use the appropriate hand signals, then move to the appropriate lane ahead of time to turn safely.
- Control your bicycle – Sit on your bicycle seat and keep at least one hand on your handlebars at all times.
- Respect the bicycle's design – Only carry the number of persons at one time that the bicycle is designed and equipped for.
- Carry cargo safely – If you're transporting cargo such as groceries, ensure that the items don't obstruct your view or interfere with your ability to safely operate your bicycle.
- Respect lane widths – When riding on the street, no more than two bicycles can ride beside one another except when passing.
- Obey traffic controls – Watch for and obey signs and pavement markings. Dismount and walk your bike if signs or pavement markings indicate that cycling is not allowed.

When cycling on a shared-use path:
- Ride your bicycle at a moderate rate of speed and be courteous toward other path users.
- Pedestrians have the right of way; yield to them at all times.
- Ride your bicycle to the right of the centre of the shared-use path except when passing a pedestrian or other path user.
- When you're about to pass another pathway user, use your horn or bell a reasonable amount of time before passing.
- Pay attention to and comply with all traffic signs.

Permitted Places to Ride
- When riding on a street, pathway, or designated cycling facility, travel in the designated direction of travel. For example, in one-way protected bike lanes or raised cycle tracks, be sure to travel in the same direction as traffic.
- When riding on a street with a dedicated cycling facility – such as a painted bike lane, protected bike lane, or shared-use path – you may choose to ride on the street with traffic or in the cycling facility.
- If you're 14 and over, you may only ride on sidewalks for a bridge crossing or on sidewalks that are designated as a shared-use path by a traffic sign.
- When operating a bicycle on a bridge where cycling is permitted (see Schedule A in the Bicycle Bylaw for restricted cycling routes), you may use either the motor vehicle travel lane or the sidewalk portion of the bridge.
- Cycling isn't permitted on all streets; you can find a map and list of restricted streets in Schedule A of the Bicycle Bylaw.
- Driving, stopping and parking are not permitted in designated cycling facilities, unless operating a bicycle. Where the cycling facility pavement marking is dashed, drivers may merge into the cycling facility to make a turn when safe to do so. When the cycling facility is located between the travel lane and the parking lane, drivers may cross to park when it is safe to do so.
- When passing a bicycle while driving on a street with one driving lane in the direction of travel, drivers must leave a distance of at least one metre between a person riding a bicycle and their vehicle and maintain that distance until safely past the bicycle. The one-metre distance is measured between the extreme right side of the vehicle and the extreme left side of the bicycle, including all projections and attachments.

Cyclists have the same rights and responsibilities as a driver of a motor vehicle, including being legally allowed to ride in the centre of any traffic lane. When a cyclist dismounts, they have the same rights and responsibilities as a pedestrian.
The Aquaculture Program has been developing tilapia production systems for the US Virgin Islands since 1979. The initial efforts focused on cage culture of tilapia in watershed ponds that had multiple uses for livestock watering and crop irrigation. Early work in aquaponics began with research by Barnaby Watten and Robert Busch, recorded in the journal article "Tropical Production of Tilapia (Sarotherodon aurea) and Tomatoes (Lycopersicon esculentum) in a small-scale recirculating water system." Aquaculture, 41 (1984) 271-283, Elsevier Science Publishers.

James Rakocy joined the team in 1979 and expanded the Aquaculture Program with the development of research and demonstration facilities for aquaponic and biofloc systems.

Early barrel and wading pool systems developed by Watten and Busch, circa 1980. Expanded facilities, 1985 and now.

UVI Aquaponics Workshops are offered from January to May each year. The Workshop is offered to interested students, entrepreneurs and farmers. This education program makes extensive use of the facilities, with hands-on training in the practical aspects of aquaponic and biofloc systems and tilapia production. Contact the program leadership for more information.

News and Links from Workshop Participants
UVI considering proposal for eco-industrial park on St. Croix
Egypt's agricultural evolution: Paving the way for fish and vegetable production in water-scarce GCC
Egypt aims for revolution in desert farming
From city banker to high tech farmer
Fish-greenhouse system proving efficient, effective
Closed-Loop Aquaponic Growing System Combines Land And Lake
Aquaponics Project at the Cylburn Arboretum
Aquaponics farms awash in promise – and, farmers hope, profits
Virginia Man Creates Elaborate System To Reduce Food Waste
A new source of fertilizer in Richmond – koi fish
Berea College Jackson L. Oldham Aquaponics Facility
Birth Year: 1886. Death Year: 1980.

Oskar Kokoschka was born in Pöchlarn, Austria. He began his course of study at the Vienna School of Arts and Crafts in 1905. At that time, Vienna was not only the capital of an empire but also the center of the new Freudian school of thought. Kokoschka was expelled from the school in 1908 for his Expressionist drawings and plays, which shocked the public. Between 1908 and 1914, Kokoschka worked as a designer and illustrator. He also painted a famous series of portraits, mainly of actors and writers, psychological studies illuminated by his penetrating vision.

Severely wounded in battle in 1916, Kokoschka settled in Dresden in 1917 and taught at the Academy there from 1920 to 1924, working in an Expressionistic style in which brilliant and symbolic color is more important than the figurative content of the work. He then traveled widely throughout Europe, North Africa, and Asia Minor, and returned to Vienna in 1931. After his involvement in the struggle against Nazism, he fled to Prague in 1934, and in 1938 left Central Europe for London, later becoming a British subject. He lived at Villeneuve on Lake Geneva from 1954 until his death in 1980.

Kokoschka believed that "for the creative man the problem is, first, to identify and define what darkens man's intellect; secondly, to set the mind free." His paintings and drawings express the distress of the creative mind faced with the brutalities of the world.
Imagine a world free from material constraints. One in which fixed-function objects and static materials were ancient history. It sounds like science fiction. But believe it or not, researchers at Intel are attempting to create a super substance that will deliver all that and more. Known as Dynamic Physical Rendering, or DPR for short, it's an incredibly ambitious project that might just revolutionise the way we think about and interact with objects and materials. The basic idea involves a shape-shifting substance composed of millions of tiny, semi-autonomous units capable of intelligently reconfiguring themselves on the fly to assume almost any form you can imagine. Currently, of course, Intel's DPR project is at a distinctly embryonic stage. The challenges that the research team faces span fields as diverse as nanotechnology, chip production and complex-system control software. TechRadar caught up with Jason Campbell, one of Intel's leading researchers in the DPR field, out in sunny Santa Clara, California recently. The conversation that followed was fascinating. In simple terms, the project's aim is to build objects capable of changing shape. The shape of things to come "Our idea is to use lots of little parts that can rearrange themselves," Campbell explains. In theory, these individual parts, or nodes, would be tiny, semi-autonomous spheres grouped together into complex systems. The basic concept involves a system, "that can scale down in size to microscopic nodes and scale up in complexity to millions, tens of millions or even hundreds of millions of nodes." Size-wise, Campbell reckons things really start to get interesting in the 1mm down to 1/10th mm range. "We think the most interesting applications for this technology involve interaction with human beings," he says. It's at 1mm and below that the resolution of a material made of spheres becomes convincing for humans, both to the visual and tactile senses. Most of Intel's research since the inception of the DPR project two years ago has involved a two dimensional analogue of the sphere model. "For research purposes we've been building cross sections of those spheres," continues Campbell. "It makes the initial research simpler to conduct and also makes it easier for us to understand what's going on." Each of the salt shaker-sized cylinders have electromagnetic actuators placed around their circumference, providing both locomotion and allowing them to retain contact as they "roll" around the surface of adjacent nodes and reposition themselves. The next step in the development process is already under way. "More recently we've begun building millimetre-scale devices using electrostatic rather than magnetic fields. In the near term, the aim is to integrate circuitry into 1mm tubes, including an array of actuators and a small control chip, allowing multiple tubes roll around each other's surfaces. The next step from there is to go to full spheres," he says. Incredibly, Campbell estimates that those 1mm spheres could be up and running within five years. That's right, a sphere which houses a control chip, communications interface, power source and actuators within a 1mm diameter. Of course, that's assuming he and the DPR team solve the many tough technical challenges. How, for instance, would these tiny sphere's be powered? "The use of a central power source and surface contacts is an option. 
But we think there's also enough energy available from daylight or strong artificial light to power these spheres using solar cells on the surfaces of the individual nodes. "What's more, the nodes could have some translucency to transparency. So, light could penetrate multiple layers deep and power the whole ensemble," Campbell says. Then there's the software challenge. It's one that Campbell believes will be every bit as hard to solve as the hardware. Controlling potentially millions of individual nodes is a problem of incredible complexity.
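To get a feel for the scale Campbell is describing, here is a rough, hypothetical back-of-the-envelope sketch — not Intel code, and assuming a deliberately simplified cubic packing of nodes — of how many nodes it would take to fill a hand-sized object at the 1mm and 1/10th mm sizes mentioned above. The counts land squarely in the millions-to-hundreds-of-millions range that makes the control software such a hard problem.

```python
# Rough, illustrative estimate only (not Intel's code): how many DPR nodes
# would it take to fill a hand-sized object? Assumes simple cubic packing,
# i.e. one spherical node occupies a cube whose side equals its diameter.

def nodes_required(object_volume_cm3: float, node_diameter_mm: float) -> float:
    """Approximate node count for an object of the given volume."""
    cell_volume_cm3 = (node_diameter_mm / 10.0) ** 3  # one packing cell, in cm^3
    return object_volume_cm3 / cell_volume_cm3

if __name__ == "__main__":
    hand_sized_cm3 = 1000.0  # a 10cm x 10cm x 10cm cube
    for diameter_mm in (1.0, 0.1):
        print(f"{diameter_mm} mm nodes: ~{nodes_required(hand_sized_cm3, diameter_mm):,.0f}")
    # 1.0 mm nodes: ~1,000,000
    # 0.1 mm nodes: ~1,000,000,000
```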
Are chemicals shrinking otter penis bones? CARDIFF U. (UK) — Despite a population increase, male otters show negative changes in their reproductive organs, according to a new report that asks if endocrine disruptors are to blame. The authors of the report looked at several indicators of male reproductive health and found several signs of change that give cause for concern: - A decrease in penis bone (baculum) weight over time - An increase in cysts on the tubes that carry sperm during reproduction (vas deferens) - An increase in undescended testicles (cryptorchidism) It is not possible to determine exactly what the causes of these changes are, but various studies, both in the laboratory and in wildlife, have suggested links between hormone-disrupting chemicals (EDCs) and problems with male reproductive health. “The otter is an excellent indicator of the health of the UK environment, particularly aquatic systems,” says Eleanor Kean of Cardiff University’s School of Biosciences. “Our contaminant analyses focused on POPs that were banned in the 1970s, but which are still appearing in otter tissues now—other chemicals, in current usage, are not yet being monitored in wildlife. There is a clear need to regularly revise the suite of contaminants measured—failure to do so may lead to a false sense of security and cause emerging threats to otters and UK wildlife to be missed.” “If we are to protect our wildlife, we need good information on the reproductive health of key species in both the terrestrial and aquatic environments,” adds Gwynne Lyons, director of the Chemicals, Health, and Environment Monitoring Trust (CHEM Trust). “These findings highlight that it is time to end the complacency about the effects of pollutants on male reproductive health, particularly as some of the effects reported in otters may be caused by the same EDCs that are suspected to contribute to the declining trends in men’s reproductive health and cause testicular cancer, undescended testes, and low sperm count.” CHEM Trust is calling for the UK Government and the European Union to urgently identify hormone disruptors to ensure that chemicals suspected of playing a role in male reproductive health problems are substituted with safer alternatives. Kean and Elizabeth Chadwick are researchers with the Cardiff University Otter Project. Source: Cardiff University
6 April 2016

Laws alone will not end child marriage, but they are a necessary and important starting point in de-legitimizing a practice that violates the rights of millions of adolescent girls in Sub-Saharan Africa. This is why the Southern African Development Community (SADC) Model Law on Eradicating Child Marriage and Protecting Children Already in Marriage is so important. The good news is that child marriage is very low in some SADC countries, such as Swaziland and South Africa. But there are differences within and between countries. For example, while the prevalence of child marriage is 6% in Swaziland and 7% in South Africa, other countries, such as Mozambique and Malawi, have a prevalence of 48% and 50% respectively. The decline of child marriage in Sub-Saharan Africa and SADC countries is limited to girls in the richest urban communities and those with secondary and higher educational levels. Poverty, gender inequality, insecurity, and tradition perpetuate child marriage. Millions of girls are denied their fundamental rights, as well as the skills, knowledge and opportunities that would enable them to lift themselves and their families out of poverty.

The consequences are long-lasting and devastating. Child marriage means higher levels of adolescent pregnancies and births that put the lives of the young at risk – death during childbirth, anaemia and the devastating obstetric fistula are common among adolescents. Nine out of 10 adolescent births take place within marriages in Sub-Saharan Africa. The risks to their born and unborn children are also very real – neonates and infants are at higher risk of death, low birth weight, poor health and malnutrition. Eliminating child marriage will have significant benefits for Africa's development. More adolescent girls will be able to stay in school, later have the opportunity to work, and escape inter-generational poverty. In countries where the HIV epidemic is well established, such as Zambia, studies have used biomarkers to confirm HIV infection rates that are 48–65 percent higher among married girls compared to sexually active unmarried girls. In Zimbabwe, the prevalence of HIV is 6.2% among unmarried young women aged between 15 and 24 years, compared with 14.2% among married young women of the same age, according to a UNFPA report on ending child marriage.

All 15 SADC member states have ratified the UN Convention on the Rights of the Child and the African Charter on the Rights and Welfare of the Child. These international and regional treaties define a child as anyone under the age of 18 years. Yet the SADC member states have not yet enacted comprehensive legislation on child marriage, despite many of them having laws in place regarding the minimum age of marriage. The fact remains that girls as young as 14 can marry with parental or judicial consent, and are vulnerable to customary, traditional and religious practices which have effectively created legal loopholes allowing child marriage to flourish. This finally looks set to change. In a bold move by the SADC Parliamentary Forum, a regional Model Law is currently being drafted with the support of UNFPA. Building on regional and international treaties on gender and children, the Model Law aims to be the yardstick of best practice for member states to adopt or adapt in order to end child marriage. It establishes a strong legal and policy framework which cuts across customary, religious and civil marriage systems, as well as suggesting concrete measures and interventions to prevent and mitigate the effects of child marriage.
There is also an emphasis on collating and sharing up-to-date child marriage data through comprehensive monitoring and evaluation. This is undoubtedly a ground-breaking initiative. More than 30 civil society organisations were consulted throughout the process and their messages were succinct, clear, and based on experience. For the new Model Law to carry weight there needs to be political commitment in the form of resources, budgets and monitoring. Girls must have access to quality primary and secondary education and skills development programmes, as well as sexual and reproductive health information and services. The voice of girls, boys, parents, elders and community leaders should be heard. Civil society organisations will play a key role in ensuring this happens, and will promote the Model Law through mock trials and by lobbying national parliamentarians across the region.

The root causes of child marriage must also be addressed, such as the economic and social vulnerabilities faced by young girls, families and communities. Empowering young girls is central to this. Evidence shows that when girls are aware of their rights, they are empowered to make informed choices about their future. This not only transforms their lives, it transforms communities and benefits future generations.

EDITOR'S NOTE – This article is produced jointly by the following individuals:
- Dr Esau Chiviya, Secretary General, Southern African Development Community Parliamentary Forum
- Julitta Onabanjo, Regional Director, UNFPA East and Southern Africa Region
- Heather Hamilton, Interim Executive Director of Girls Not Brides, London
- Mr. Roland Angerer, Director, Plan International, East and Southern Africa
- Nyasha Chingore, Southern Africa Litigation Centre
More than 3.5 percent of the world’s population is on the move, considered international migrants. That’s more than 250 million people living in a country different than their country of birth or nationality. To put that another way, if all migrants lived in a single country, their population would be the 5th largest country in the world! A recent podcast from globalgoalscast.org highlighted the movement of migrants, and we created a visualization to let users see for themselves where migrants are moving. What countries are migrants moving to? Of all the migrants moving to a specific country, where do they come from? Planning the visualization We wanted to allow users to easily explore these questions, so we created an interactive visualization using SAS Visual Analytics. We analyzed migrant data from the United Nations (UN Department of Economic and Social Affairs - Population Division) and started building reports. But what is the best way to display this data? There were many variables in the data, but the key variables were year, country origin, country destination and number of migrants (see example below). Other variables in the analysis, like gender and region, were omitted from the following screen shot due to space considerations. Visualizing movement on a map Since we are dealing with geographic data we wanted to display the visualization on a map. We filtered the map to just a single origin country. Not sure which way might show best, we tried two of the standard ways to display geo maps (shown below). - Bubble Plot – the larger the bubble, the more migrants that moved to the destination country. - Regions – the darker the shading of the country, the more migrants. At a glance, you can see which destination country has more migrants. But where are they coming from? The map doesn’t easily show that. That got us rethinking how to display the movement of migrants in another way. But how? You can find the answer in a place you may not think to look: a network analysis object. If you haven’t used this object before, your first thought might be that they look like a spider web (many of them do). The object below is a network analysis of the same UN data presented on the above maps, filtered to a single origin country. The size of the node represents the number of migrants to the destination country. This initial view makes it difficult to quickly understand what we're looking at. Instead, let’s add a map background to the network analysis. [Note: the map background option only becomes available if both your source and target are set as geographic items. If you are using a network analysis object and not able to add a map background, most likely it is because your source and target are not defined as geographic objects.] Believe it or not, the following map is the same default network analysis object as above, but with a map background. It even looks very similar to the bubble chart shown earlier, except it adds links between the nodes (and a different map service was selected). So why choose this object? After a few quick option changes we get a more useful object: - Change the node size to 0 to eliminate the bubbles. - Add arrows in the destination direction. - Change the link curvature to 50% to avoid straight lines. - Change the map background to add your own style. SAS provides a wide selection of options. - Add a link color – in this case by destination region. The following map reflects these changes. 
It includes the same information as the previous maps, but it tells a much more dynamic story. The network analysis shows the origin of the migrants, the thickness of the lines compares the volume, and the color represents the destination region. At a glance you can tell where migrants are moving from and where they are going! To bring this all together, we added a word cloud based on the migrant destination. Now when you filter by migrant origin, you can quickly see the migrant volume on the map and in the word cloud.

The World is on the Move – Explore for Yourself

I've walked through the thought process of creating a SAS Visual Analytics report using a network analysis object. Now, explore the data yourself. This report is included in GatherIQ, a free crowdsourcing app from SAS with the sole purpose of using data for good. It takes data about some of the most pressing global issues and makes them personally relevant to people through interactive visualizations. The app promotes a more data-driven culture and encourages you to start discussions based on that data. The interactive reports were created in SAS Visual Analytics and embedded into GatherIQ (a custom app made possible by the SAS Mobile SDKs). You can access GatherIQ from your desktop as a web app, or as a mobile app (iOS and Android). In the app, open the migration report titled "The world is on the move." Start exploring what countries migrants are moving to. Or look at it the other way. Pick a country and see where migrants to that country are coming from. These migration reports, as well as other topics, are waiting to be explored in GatherIQ.

What data visualizations will you build? A network analysis object displays the relationships between data item values as a series of linked nodes. While this feature has been available in SAS Visual Analytics for years, it may not be an obvious choice when you think of creating geographic maps. Give it a try. Interact with it in GatherIQ. Think about how network analysis can help tell your story. Explore more visualizations for a cause in GatherIQ.
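The report itself was assembled in SAS Visual Analytics, but the core exploration described above — filter the origin–destination table to a single origin country, then rank destinations by migrant volume — is easy to prototype in any data tool. The sketch below is a minimal, hypothetical Python/pandas version using made-up sample rows shaped like the UN table described earlier (year, origin, destination, number of migrants); the figures and column names are illustrative assumptions, not the actual UN extract or the SAS report logic.

```python
# Minimal prototype of the "filter by origin, rank destinations" exploration.
# The sample rows below are made up; the real UN table has the same shape.
import pandas as pd

migrants = pd.DataFrame(
    {
        "year": [2017, 2017, 2017, 2017, 2017],
        "origin": ["Mexico", "Mexico", "Mexico", "India", "India"],
        "destination": ["United States", "Canada", "Spain", "United States", "United Kingdom"],
        "migrants": [11_500_000, 80_000, 50_000, 2_300_000, 830_000],  # illustrative values
    }
)

def destinations_for(origin: str, frame: pd.DataFrame) -> pd.DataFrame:
    """Total migrants by destination for one origin country, largest first."""
    subset = frame[frame["origin"] == origin]
    return (
        subset.groupby("destination", as_index=False)["migrants"]
        .sum()
        .sort_values("migrants", ascending=False)
    )

print(destinations_for("Mexico", migrants))
```

From there, the same aggregated origin–destination pairs could be drawn as curved links on a base map — which is essentially what the network analysis object with a map background does inside SAS Visual Analytics.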
For years researchers have tried to detect dark matter and failed, and this past year was no exception. Dark matter is made of elusive particles that supposedly make up approximately 27% of all matter and over five times the amount of normal matter in the known universe. The concept of dark matter came about almost fifty years ago when astronomers discovered that the stars and gas in the galaxies they observed all rotated around their galactic centers at pretty much equal velocities regardless of their distance from the centers. This is contrary to what was predicted for galaxies that were supposedly dominated by gravity concentrated in their centers. In this prediction, the rotation speeds of the material in these galaxies should decrease with distance from the centers of the galaxies. It was therefore assumed that it was the gravitational influence of some sort of invisible and interspersed matter that was binding all of this material together and causing the flat rotation curves of the galaxies. Decades have been spent searching for this mysterious "dark" matter using several different techniques and instruments. This past year several dark matter experiments concluded without any of them finding even a single particle.

The aptly named XENON dark matter research experiments attempt to detect scintillation and ionization in liquid xenon caused by collisions with weakly interacting massive particles (WIMPs) within the xenon. WIMPs are one of two types of particles researchers think could make up dark matter. XENON100 was the second phase of the experiments and utilized a detector containing 165 kilograms of liquid xenon located under 1,400 meters of rock at the underground Laboratori Nazionali del Gran Sasso in central Italy. Researchers reported in September last year that after a combination of 477 days of live operation between January 2010 and January 2014 they detected no dark matter collisions. Despite this, another even larger detector, XENON1T, was built and began operation in March of this year. XENON1T utilizes 3,500 kilograms of liquid xenon held in a 10-meter-tall, water-jacketed tank.

PandaX is a liquid xenon detector located at the China Jin-Ping Underground Laboratory (CJPL) in the Sichuan province of south-west China. The CJPL is the world's deepest underground laboratory at more than 2,400 meters (1.5 miles) below ground. The laboratory's depth and location under marble rock make it the world's best shielded from muons, with a flux rate a hundred times lower than the Gran Sasso laboratory. As with all liquid xenon detectors, PandaX is used to try to detect WIMPs that interact with the refrigerated and circulated noble gas. Weighing in at 500 kilograms, it was the largest detector of its kind at the time of its operation. However, the project leader reported in July that after an exposure of 33,000 kg-day of liquid xenon no trace of dark matter was observed. Despite this lack of results, PandaX-III began construction this year and a 20+ ton PandaX-IV is planned for 2020 through 2025.

Large Underground Xenon (LUX)

Another aptly named dark matter research project is the Large Underground Xenon (LUX) experiment, yet another liquid xenon detector featuring a time-projection chamber (TPC) like XENON and PandaX. This detector contains 370 kilograms of liquid xenon and is located 1,510 meters (0.93 miles) below ground in the Sanford Underground Research Facility (SURF) in the former Homestake Mine in the Black Hills of South Dakota, United States.
Researchers reported in July that after a 20-month (332 live days) run from October 2014 to May 2016 they had detected no trace of a dark matter particle. And just as with the other aforementioned liquid xenon detector-based experiments, despite its failure to detect a single dark matter particle, a bigger and more sensitive instrument is being planned. The 7-ton LUX-ZEPLIN (LZ) experiment is under construction and planned to be operational by the year 2020.

Fermi Large Area Telescope

Researchers from the University of Amsterdam's (UvA) GRAPPA Center of Excellence analyzed over six years of gamma-ray background fluctuation data gathered by the Fermi Large Area Telescope between August 2008 and May 2015. The Large Area Telescope (LAT) is the main instrument of the Fermi Gamma-ray Space Telescope spacecraft. The LAT is an imaging high-energy gamma-ray telescope that scans the entire sky every three hours. The Fermi spacecraft was launched into a near-earth orbit on June 11, 2008, and is operated by the National Aeronautics and Space Administration (NASA). The researchers published their results in December of last year. In their paper they describe two different classes of sources that contribute to gamma-ray background fluctuations, but no traces of a contribution of dark matter particles were found in their analysis.

Large Hadron Collider

In December of 2015 the two teams operating the A Toroidal LHC ApparatuS (ATLAS) and Compact Muon Solenoid (CMS) particle detectors on the Large Hadron Collider (LHC) both reported they had detected excessive pairs of high-energy photons (gamma rays) being released during a proton collision experiment. It was speculated that not only could this be an indication of a new elementary particle, it could perhaps be the discovery of dark matter. But this past year, on August 5th at the International Conference on High Energy Physics (ICHEP), representatives from ATLAS and CMS reported that after collecting and analyzing nearly five times as much data this year as they did last year they had come to the conclusion that the "diphoton bump" was nothing more than a statistical fluctuation. Of course many scientists still remain optimistic that one day the LHC will finally detect dark matter particles.

It is this kind of blind optimism that has contributed to what appears to be an endless search for what is essentially a mathematical construct. Not all scientists are convinced that dark matter exists, but many of their alternative theories can be just as esoteric, involving Modified Newtonian Dynamics (MOND) or extra dimensions or quantum field topology defects. Unfortunately very few scientists are willing to consider the simplest explanation for why normal matter in galaxies does not behave the way gravity dictates it should: gravity is not the dominant force in the universe. Over 99.999% of all the matter in the universe is plasma, and plasmas can generate magnetic fields. Many mainstream scientists argue that astrophysical plasmas are overall electrically neutral and therefore have very little influence on surrounding matter. However, considering that the electromagnetic force is 10^39 (or one duodecillion) times stronger than the force of gravity, even if only one quadrillionth of the plasma in galaxies contains a net charge it could still exert an overall potential force a trillion trillion times stronger than gravity.
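Whatever one makes of that argument, the exponent arithmetic behind the "trillion trillion" figure is easy to verify from the article's own numbers: a 10^39 electromagnetic-to-gravitational force ratio multiplied by a charged fraction of one quadrillionth (10^-15) gives 10^24, i.e. a trillion trillion. The snippet below simply restates that multiplication; it says nothing about whether the underlying physical claim is sound.

```python
# Checking only the exponent arithmetic used in the claim above.
em_to_gravity_ratio = 1e39   # the article's figure for electromagnetic vs. gravitational force
charged_fraction = 1e-15     # "one quadrillionth" of the plasma carrying a net charge

effective_ratio = em_to_gravity_ratio * charged_fraction
print(f"effective force ratio: {effective_ratio:.0e}")  # 1e+24 -- a trillion trillion
```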
Such a force can explain why all the stars and most other material in galaxies rotate around their galactic centers together as though suspended in some sort of unseen matter. In fact, an electromagnetically dominated universe can explain many things observed in astronomy and astrophysics, such as the redshifts of stars and galaxies, the lensing of light around galaxies and galactic clusters, and even the existence of the cosmic microwave background (CMB). But scientists have a lot of time and money invested in the belief in a gravity-dominated universe. And understandably, none are eager to admit that they are wrong or need to change the path of the research that some have spent their entire careers pursuing. So, as noted above, they will continue to build bigger, deeper and more sensitive machines through this year and beyond.

Many of these experiments are under tremendous pressure to eventually produce results, particularly those that receive public funding. Because of this pressure, many of these experimental failures are already being played off as successful eliminations of dark matter candidates. But eventually researchers are going to run out of candidates, so I am making a prediction for 2017 based on what happened shortly after I posted my summary of failed gravitational wave experiments at the end of 2015. I predict that sometime this year one or more of the dark matter research teams is going to resort to claiming an unknown signal spike or a statistical anomaly or perhaps even a noise pattern match as the first actual direct observation of dark matter…and finally get away with it.

It has already been attempted by the research team for the DAMA/LIBRA (Dark Matter Large Sodium Iodide Bulk for Rare Processes) experiment, also located with XENON at the Laboratori Nazionali del Gran Sasso in Italy. Since 1998 they have claimed that an annual signal spike in their detector is dark matter being detected as a result of the Earth's movement through our galaxy's halo of dark matter. As the name implies, the DAMA/LIBRA experiment attempts to detect scintillation and ionization in sodium iodide crystals instead of liquid xenon. While there is no doubt an annual signal is being observed, no other dark matter detector can substantiate this signal. So it is assumed to be generated by some source other than dark matter, despite the continued claims of the research team.

But it is my hope that not only the researchers themselves but also the public, which frequently helps fund these experiments, will continue to scrutinize their results, not only in 2017 but throughout the years to come. Perhaps one day we will come to celebrate a New Year where astronomy and cosmology are based on fact and reason, rather than fear and dogma.
Pothole maintenance around the municipality Chatham-Kent Public Works crews have begun to patch and repair potholes. Public Works officials say potholes are caused when moisture enters a crack in the granular base below the surface asphalt, then freezes and expands during cold temperature periods. The expansion puts pressure on the crack, causing the asphalt to break away, resulting in a pothole. As vehicular traffic passes over the pothole, the sides crumble and the pothole size increases. “As the weather continues to warm up, Public Works crews will be cold patching pot holes”, said Miguel Pelletier, Director of Public Works. “Once roadways are dry and clear of winter traction materials (salt and brine), and temperatures remain consistently above freezing, our crews will begin more permanent repair processes.” Motorists are reminded to proceed carefully through standing water, as potholes may be hidden beneath the water. In order to maintain safe working environments, motorists are also asked to reduce speeds and obey traffic control persons when passing road crews. Some facts about potholes in the Municipality of Chatham-Kent: • There are over 1,153 km of paved roadways in the Municipality • Last year, 313 potholes were reported to Customer Service Residents are encouraged to report the location of potholes on municipal roadways by phoning 519.360.1998, or by emailing [email protected].
When most people think back to their childhood, they likely recall many memories that involve their parents, as our parents are often the ones who help us develop our sense of self and provide the resources that allow us to have these memories. Our parents also provide the basic needs that help us to survive, such as food, water, and shelter. For children who have recently lost a parent, it can sometimes be hard to understand that this resource they have counted on for their entire lives is gone. Being able to effectively explain this to these children and provide them with support in physical, emotional, and social domains is absolutely vital to the success of their recovery from this unthinkable event. Grief is often conceptualized using the DABDA model created by Elisabeth Kubler-Ross. This model is certainly helpful when recalling the stages that may exist during a grieving period, but it is important to remember that these stages may not necessarily happen in the order they are presented, and these variabilities may especially exist for children. Children may experience anger and depression at the same time, or they may not display bargaining behaviors in the same way that adults do. Further, children may seem like they have reached the acceptance stage when they actually are still experiencing other feelings that they are simply unable to express. The death of a parent results in a complete shift in the way a child experiences the world. If it’s the loss of both parents, or a solo caregiver, the child may move to another individual’s home or may even enter the Child Welfare System if a suitable caregiver is not found. These abrupt changes in environment can make this time even more complex for a child, and it is important to consider these factors as well when interacting with these children. There is no “perfect mold” that explains how children experience grief, and individual children will display different feelings at different times. It is important for those close to these children to be especially attuned to the way these children express feelings and to encourage them to get in touch with them. The path to recovery from the loss of a parent is a difficult one, but it can be aided by caring and thoughtful support systems. If your child has lost one of their parents, therapy is crucial to helping them. Please contact me today to schedule a time to speak to help your family through this challenging time.
Here's the latest article from the Astronomy site at BellaOnline.com. In the Shadow of the Moon Review What would it be like to leave Earth's protective embrace and journey to an alien world? Only twenty-four men have ever experienced this - Apollo astronauts. "In the Shadow of the Moon" uses original footage & astronaut interviews to tell the story of one of the defining events of human history. On April 12, it will be exactly half a century since the first person circled Earth from space. To celebrate Yuri Gagarin's achievement, a film "First Orbit" recreates what he would have seen. It was made with the cooperation of astronauts on board the International Space Station and includes voice recordings from Gagarin's flight and music by Philip Sheppard who scored "In the Shadow of the Moon." You can find a trailer for it here http://www.firstorbit.org/watch-the-film. There may be a party near you showing the film on its global premiere -- if you're interested, here is the map http://www.firstorbit.org/join-us. *The Mercury Seven* As long as we're in historic space flight mode, I'll mention that on April 9, 1959 NASA selected its first seven astronauts. President Eisenhower said that the astronauts had to be test pilots. The Mercury spacecraft itself set limits too, as it was pretty small. No one could be accepted who was taller than 5 feet 11 inches (180 cm) or who weighed more than 180 lbs (82 kg). They also needed considerable flying time, be under 40 and have a college degree or equivalent. Only two of the seven are still alive: John Glenn, the first American in space, who also has the distinction of being the oldest person to go into space; and Scott Carpenter, the second American in Space. *A Cosmological Fantasy* Here's a treat for you. Burrell Durrant Hifle did the visual effects for Brian Cox's "Wonders of the Universe" series. In this fifteen-minute video he's put together a montage of stunning effects to the music of Timo Baker. It would have been good to have some subtitles so viewers would know what all the effects represented are, but it's still gorgeous. How many things can you recognize? *Global Astronomy Month* April is Global Astronomy Month. Have a look at some of the projects of Astronomers without Borders. http://www.astronomerswithoutborders.org/about-astronomers-without-borders.html That's all for this now. Wishing you clear skies. Please visit astronomy.bellaonline.com for even more great content about Astronomy. To participate in online discussions, this site has a community forum all about Astronomy located here - I hope to hear from you sometime soon, either in the forum or in response to this email message. I welcome your feedback! Do pass this message along to family and friends who might also be interested. Remember it's free and without obligation. Mona Evans, Astronomy Editor
The Plot Against America Dust jacket of first U.S. edition The Plot Against America is a novel by Philip Roth published in 2004. It is an alternative history in which Franklin Delano Roosevelt is defeated in the presidential election of 1940 by Charles Lindbergh. The novel follows the fortunes of the Roth family during the Lindbergh presidency, as antisemitism becomes more accepted in American life and Jewish-American families like the Roths are persecuted on various levels. The narrator and central character in the novel is the young Philip, and the care with which his confusion and terror are rendered makes the novel as much about the mysteries of growing up as about American politics. Roth based his novel on the isolationist ideas espoused by Lindbergh in real life as a spokesman for the America First Committee and his own experiences growing up in Newark, New Jersey. The novel depicts the Weequahic section of Newark which includes Weequahic High School from which Roth graduated. The novel is told from the point of view of Philip Roth as a child. It begins with aviation hero Charles Lindbergh, already criticized for his praise of Hitler's government, joining the America First party. As the party's spokesman, he speaks against American intervention in World War II, and openly criticizes the 'Jewish race' for trying to force American involvement. After making a surprise appearance on the last night of the 1940 Republican National Convention, he is nominated as the Republican Party's candidate for President. Although criticized from the left, and hated by most Jewish-Americans, Lindbergh musters a strong tide of popular support from the South and Midwest, and is endorsed by conservative rabbi Lionel Bengelsdorf. Lindbergh wins the election over incumbent president Franklin D. Roosevelt in a landslide under the slogan 'Vote for Lindbergh, or vote for war.' He nominates Burton K. Wheeler as his vice president, and Henry Ford as Secretary of the Interior. With Lindbergh as president, the Roth family begin increasingly to feel like outsiders in American society. Lindbergh's first act is to sign a treaty with Nazi Germany and Adolf Hitler promising that the United States will not interfere with German expansion in Europe (known as the 'Iceland Understanding' after the place it is signed), and with Imperial Japan, promising non-interference with Japanese expansion in Asia (known as the 'Hawaii Understanding'). The new presidency begins to take a toll on Philip's family. Philip's cousin Alvin joins the Canadian army to fight in Europe. He loses his leg in combat, and comes home with his ideals destroyed. He leaves the family and becomes a racketeer. A new government program begins to take Jewish boys to spend a period of time living with exchange families in the South and Midwest in order to 'Americanize' them. Philip's brother Sandy is one of the boys selected, and after spending time on a farm in Kentucky he comes home showing contempt for his family, calling them 'ghetto Jews.' Philip's aunt Evelyn marries Lionel Bengelsdorf and becomes a frequent guest of the Lindbergh White House, even being invited to a dinner party for German Foreign minister Joachim Von Ribbentrop. This causes further strain in the family. A new government act is instituted relocating whole Jewish families to neighborhoods out west. Many of Philip's neighbors move to Canada. Philip's shy and innocent school friend Seldon Wishnow, an only child, is moved to Kentucky with his mother. 
In protest against the new act, radio broadcaster Walter Winchell openly criticizes the Lindbergh administration and is fired from his station. He then decides to run for President and begins a speaking tour. His candidacy causes anger and antisemitic rioting in southern and Midwestern states, and mobs begin targeting him. While making a speech in Louisville, Kentucky, he is shot to death. Winchell's funeral in New York City is presided over by Mayor Fiorello La Guardia, who praises Winchell for his opposition to fascism, and openly criticizes President Lindbergh for his silence over the riots and Winchell's death. After Lindbergh makes a short speech, his personal plane goes missing. Searches for his body turn up no results and Vice President Wheeler assumes command. The German State Radio discloses 'evidence' that Lindbergh's disappearance, as well as the kidnapping of his son, were part of a major Jewish conspiracy to take control of the American government. This announcement causes further antisemitic rioting. Wheeler and Ford, acting on this evidence, begin arresting prominent Jewish citizens, including Henry Morgenthau, Jr., Herbert Lehman and Bernard Baruch, as well as Mayor LaGuardia.

Seldon calls the Roths when his mother doesn't come home. They later discover that Seldon's mother was killed by Ku Klux Klan members who beat and robbed her before setting fire to her car with her in it. The Roths eventually call Sandy's exchange family in Kentucky and have them keep Seldon safe until Philip's father and brother drive to them and bring him back to Newark. Months later, he is taken in by his mother's sister. The rioting stops when first lady Anne Morrow Lindbergh makes a statement asking for the country to stop the violence and move forward. With the body searches called off, former president Franklin D. Roosevelt runs as an emergency presidential candidate, and is reelected. Months later, the Japanese attack Pearl Harbor, and America enters the war.

As an epilogue, Philip's aunt Evelyn confides a theory of Lindbergh's disappearance. According to her, after Lindbergh's son Charles was kidnapped, his murder was faked, and he was then raised in Germany by the Nazis as a Hitler Youth member. The Nazis' price for the boy's life was Charles Lindbergh's full cooperation with a Nazi-organized Presidential campaign, by which they hoped to bring the Final Solution to America. When Lindbergh informed them that the United States would never permit such a thing, he was kidnapped, and the Jewish conspiracy theory was put forward in the hope of turning America further against the Jewish population. Roth admits that this theory is the most far-fetched, but 'not necessarily the least convincing' explanation for Lindbergh's disappearance.

Inspiration for the novel

Roth has stated that the idea for the novel came to him while reading Arthur Schlesinger, Jr.'s autobiography, in which Schlesinger makes a comment that some of the more radical Republican senators of the day wanted Lindbergh to run against Roosevelt. The title appears to be taken from that of a communist pamphlet published in support of the campaign against Burton K. Wheeler's re-election to the U.S. Senate in 1946. The novel depicts an antisemitic United States in the 1940s. Roth had written in his autobiography, The Facts, of the racial and antisemitic tensions that were a part of his childhood in Newark. Several times in that book he describes children in his neighborhood being set upon simply because they were Jewish.
Literary significance and criticism Roth's novel was generally well received. Jonathan Yardley of The Washington Post, exploring the book's treatment of Lindbergh in some depth, calls the book "painfully moving" and a "genuinely American story." Blake Morrison in The Guardian also offered high praise: "The Plot Against America creates its reality magisterially, in long, fluid sentences that carry you beyond scepticism and with a quotidian attentiveness to sights and sounds, tastes and smells, surnames and nicknames and brandnames — an accumulation of petits faits vrais — that dissolves any residual disbelief." Writer Bill Kauffman in The American Conservative wrote a scathing review of the book, objecting to its criticism of the movement of which Lindbergh was a chief spokesperson, a movement sometimes referred to as isolationist but which Kauffman sees as anti-war, in contrast to Roosevelt's pro-war stance. He also criticizes its portrayal of increasing American antisemitism, in particular among Catholics, and for the nature of its fictional portrayals of real-life characters like Lindbergh, claiming it was "bigoted and libelous of the dead", as well as for its ending, featuring a resolution to the political situation that Kauffman considered a deus ex machina. Many supporters and critics of the book alike took it as something of a roman à clef for or against the Bush administration and its policies, but though Roth was opposed to the Bush administration, he has denied such allegorical interpretations of his novel. In 2005, the novel won the James Fenimore Cooper Prize for Best Historical Fiction given by the Society of American Historians. It was not especially well received by the science fiction community, not being nominated for a Hugo or Nebula and coming in 11th for the 2005 Locus Awards; but did win the Sidewise Award for Alternate History and was a finalist for the John W. Campbell Memorial Award. The Plot Against America depicts or mentions several historical figures: - Bernard Baruch - Louis D. Brandeis - Charles Coughlin - Henry Ford - Adolf Hitler - Edward Flanagan - Robert M. La Follette, Jr. - Leo Frank - Felix Frankfurter - Joseph Goebbels - Hank Greenberg - William Randolph Hearst - J. Edgar Hoover - Harold L. Ickes - Fritz Julius Kuhn - Fiorello H. LaGuardia - Herbert H. Lehman - John L. Lewis - Charles Lindbergh - Anne Morrow Lindbergh - Henry Morgenthau, Jr. - Vincent Murphy - Gerald P. Nye - Westbrook Pegler - Joachim Prinz - Joachim von Ribbentrop - Franklin D. Roosevelt - Leverett Saltonstall - Gerald L. K. Smith - I.F. Stone - Dorothy Thompson - Burton K. Wheeler - David T. Wilentz - Wendell Willkie - Walter Winchell - Abner "Longy" Zwilman - Axis victory in World War II: an extensive list of Wikipedia articles regarding works of Nazi Germany/Axis/World War II alternate history. - Yardley, Jonathan. "Homeland Insecurity". The Washington Post. October 3, 2004. p. BW02 - Berman, Paul (October 3, 2004). "'The Plot Against America'". The New York Times. - Morrison, Blake (October 2, 2004). "The Relentless Unforeseen". The Guardian. Retrieved 2010-07-21. - Kauffman, Bill. "Heil to the Chief". The American Conservative. September 27, 2004. - West, Diana. "The unnerving 'Plot'". Townhall.com. October 11, 2004. - "Best Fiction". The Daily Telegraph. 8 December 2004. Retrieved 3 January 2011. - List of winners of the James Fenimore Cooper Prize - "2005 Locus Awards" Locus Index to SF Awards - List of Sidewise Award Winners - "2005 John W. 
Campbell Memorial Award" Locus Index to SF Awards - Rossi, Umberto. "Philip Roth: Complotto contro l'America o complotto americano?", Pulp Libri #54 (Mar-Apr 2005), 4–7. - Swirski, Peter. "It Can't Happen Here or Politics, Emotions, and Philip Roth's The Plot Against America." American Utopia and Social Engineering in Literature, Social Thought, and Political History. New York, Routledge, 2011. - Stinson, John J. "'I Declare War': A New Street Game and New Grim Realities in Roth's The Plot Against America." ANQ: A Quarterly Journal of Short Articles, Notes and Reviews #22.1 (2009), 42-48. - Charles, Ron. "Lucky Lindy Unfortunate Jews", review in Christian Science Monitor, 28 September 2005. CSMonitor.com, accessed 27 September 2014. - Gessen, Keith. "His Jewish Problem", review in New York Magazine, 27 September 2004. NYMag.com, accessed 27 September 2014. - Kakutani, Michiko. "BOOKS OF THE TIMES; A Pro-Nazi President, A Family Feeling The Effects" review in The New York Times, 21 September 2004. NYTimes.com, accessed 27 September 2014. - Risinger, Jacob. "Imagined History", review in the Oxonian Review, 15 December 2004. OxonianReview.org, accessed 27 September 2014.
Ocean Ecosystem Management (OEM) Project

Ocean Ecosystem Management (OEM) is a component project of Discovery 2010: integrating Southern Ocean ecosystems into the Earth System science research programme, part of the British Antarctic Survey research strategy Global Science in an Antarctic Context (GSAC) 2005–2009.

Maintaining long-term food security in a changing environment is one of the greatest challenges to the sustainable exploitation of the Earth System, including its oceans. The well-publicised collapses of commercial fish stocks are one of the most striking examples of the failure to manage natural resources sustainably. All harvesting, however well managed, will have an effect on the ecosystems that support those fish populations. Nevertheless, these effects have had little influence on traditional approaches to managing fisheries. A recognition of the wider consequences of harvesting on the different components of the ecosystem lies at the heart of the scientific and political initiatives to implement ecosystem-based approaches to the management of fisheries. The development and successful use of ecosystem-based approaches to fisheries management requires a scientific understanding of the fundamental ecosystem processes affected by harvesting and the scales over which these interactions operate. The OEM project will use the Southern Ocean as a model to address two primary objectives that have direct relevance to the global implementation of ecosystem approaches to the management of fisheries:
- The determination of the analytical procedures and feedback mechanisms required to incorporate the results from long-term monitoring of the exploited ecosystem into management processes, and
- The development of a methodology for the implementation of ecosystem-based fisheries management at the space and time scales appropriate to the operation of the ecosystem and the fishery.

Relevance to Global Science
The Commission for the Conservation of Antarctic Marine Living Resources (CCAMLR) incorporates the ecosystem approach to the management of Antarctic fisheries in its Convention, which came into force in 1982. Since that time the general concept of ecosystem approaches to fisheries has gained increased recognition, such that the UN World Summit on Sustainable Development in 2002 suggested that it should be implemented in global fisheries by 2010. Recently there has been a great deal of emphasis placed on the implementation of such approaches with respect to UK/EU fisheries, with NERC identified as one of the bodies that should be involved in providing the scientific basis.

Delivering the Results
The primary role of the OEM project will be to integrate scientific output from the other DISCOVERY 2010 projects, particularly FOODWEBS and FLEXICON, in the most policy-relevant form, including options for how that science can be made operational in a fisheries management context. By delivering this science the project will provide national capability in respect of UK input to fisheries management, especially through CCAMLR, and will deliver international scientific leadership in the field of sustainable resource management, both in the Southern Ocean and in domestic/EU fisheries.
State Rep. Phil Lopes says that with his proposal to provide government-run health insurance to every resident of the state any person could visit a doctor any time, yet no health care provider would be forced to accept lower payments. That's not exactly the way government health programs work. Patients in Canada and the United Kingdom commonly wait up to a year to see a doctor, while providers, especially of primary care, are becoming scarce due to declining revenue. Our Medicare and Medicaid systems are plagued with the same problems. There is no evidence that expanding health insurance coverage would improve the health or longevity of the poor compared to other approaches. Moreover, more health insurance wouldn't improve the quality of medical care. Nor would it reduce health disparities across racial and socioeconomic lines, according to Harvard's Christopher Murray. Meanwhile, the private sector has some good ideas to improve access to health care. Walk-in clinics are sprouting up in Wal-Marts, CVS pharmacies, and other retail outlets, where minor conditions can be treated for $40 to $60. The staff of these clinics refers patients for higher levels of care when needed. To assure continuity of care, reports are e-mailed to personal physicians. Patients report a 90 percent satisfaction rate. Government could do more to make medical care more affordable and accessible, but not by taking over the insurance business. We should all be able to purchase health insurance with the same tax benefits that employers get. We should be able to purchase health insurance across state lines, to avoid the costly mandates imposed by state governments. And health savings accounts should be available to all. Dr. Tom Patterson is the Chairman of the Goldwater Institute, a former state senator, and emergency room physician. A longer version of this article originally appeared in the East Valley Tribune.
Raisins for breakfast ‘not bad for kids’ teeth’

Children who add a handful of raisins to their breakfast cereal will not increase their chances of developing tooth decay, a study has found. Bran cereals with raisins that do not have added sugar are no worse for children's teeth than regular bran flakes. Scientists suggest that eating raisins, which contain a natural source of sugar, with bran cereal does not contribute to tooth cavities in children. The sugar disappears from the tooth rather than fermenting, as it does with fruits such as bananas and apples.

Professor Christine Wu, from the University of Illinois in Chicago, explained: 'Some dentists believe sweet, sticky foods such as raisins cause cavities because they are difficult to clear off the tooth surfaces. Studies have shown that raisins are rapidly cleared from the surface of the teeth.'

The study, from the US university, looked at the acid produced by the plaque bacteria on the surface of children's teeth after they ate raisins, flakes of bran, a high street raisin bran cereal, and a mix of bran flakes with raisins and no added sugar. Plaque which stays on the tooth can ferment sugars such as glucose, fructose and sucrose, which all produce acid that can develop into decay. Raisins do not contain sucrose, which is thought to be the main sugar that forms a sticky barrier, helping bacteria to grow.

The study is published in the journal Pediatric Dentistry.
From the author's Atlantic Neptune. This chart of the southern coasts of Massachusetts and Rhode Island is representative of the level of detail contained in the charts of The Atlantic Neptune. The surveys of this coastline began in 1774, and were performed by engineers working under the supervision of Samuel Holland. With the inclusion of numerous soundings, the chart is able to convey the depth of the sea at various places in a two-dimensional form. Modern nautical charts use the same placement of soundings along coastlines to inform navigators of underwater dangers. It was imperative for 18th century mariners to have such safety information, as navigation of intricate and unknown coasts could be perilous.
Memorial Day - is a federal holiday in the United States for remembering the people who died while serving in the country's armed forces. The holiday, which is observed every year on the last Monday of May, originated as Decoration Day after the American Civil War in 1868, when the Grand Army of the Republic, an organization of Union veterans founded in Decatur, Illinois, established it as a time for the nation to decorate the graves of the Union war dead with flowers. By the 20th century, competing Union and Confederate holiday traditions, celebrated on different days, had merged, and Memorial Day eventually extended to honor all Americans who died while in the military service. It typically marks the start of the summer vacation season, while Labor Day marks its end. (definition found here) Many people mix up Memorial Day with Veteran's Day - and I sometimes get thanked for my service on Memorial Day. Now, the thought is nice - but I wanted to make sure people understood the difference between Memorial Day and Veteran's Day. This is my great-Uncle Gerald. He was killed in action on 23 February, 1945 at Iwo Jima. Mom and I were lucky enough to borrow the letters he wrote to my great-grandparents and inside the box of letters was also the letter from his commander, the telegram of his death, a letter from the chaplain and a letter from his buddy. Along with the letter from the chaplain he inserted a poem written by P1/Sgt. Michael Nuzzola, United States Marine Corp. (I may have gotten the rank wrong - the paper is starting to be hard to read) At Iwo Jimas' Graves Dear God, 'neath Iwo Jima's sky I offer simple prayer, For these Marines who buried lie, Please give your special care. No weaklings, for they all were men, They stormed the creature in his lair; And let the world know once again, That the fighter did his share. In fullest measure they have paid, The price of liberty. For theirs has been a great crusade, To make the whole world free. And guide those men, who peace will make, Whose price of duty's not so high. Be sure this time, there's no mistake, On why, these men did die. So, please remember those who have given their lives for our freedom to enjoy this holiday weekend.
the postmodern world

The world in which we live is changing. For the past three hundred years we have been part of an age called modernity. The modern age is now giving way to a postmodern age. This transformation will change how people view the world, how they understand reality and truth, and how they approach the fundamental questions of life.

This will have a tremendous impact on Christianity. The church has its roots in an ancient pre-modern Mediterranean worldview. Slowly it has accommodated itself to the modern world. But many critics wonder whether it will be able to survive the shift to the postmodern age.

the pre-modern worldview

The pre-modern worldview developed during the time of the ancient temple-state, in which an alliance of king and priesthood closely intertwined religion and political power. Religion's role was to legitimize the king's rule by providing a moral and religious authority for his decrees. The king was viewed as God's representative on earth. He was sometimes spoken of as the "Son of God" (as was ancient Israel's King David), and was sometimes seen as divine himself. To these ancient societies the ruler and the social order reflected the will of God on earth.

The pre-modern worldview is thus characterized by an unquestioning acceptance of authority and a belief in absolute truths. Pre-modern people believe what they are told by authority figures, both religious and secular. They trust religion to provide the answers to life's mysteries.

The Bible is a product of two pre-modern societies. The priests of ancient Israel produced the Hebrew Bible or the Old Testament, and evangelists of early Christian communities produced the New Testament. The pre-modern view of the world represented in these documents was accepted without question by the audiences for which they were written.

the modern worldview

The modern worldview began in the Enlightenment of the eighteenth century. Modernity was founded on the pursuit of objective knowledge and the scientific method. It is characterized by a questioning of authority and tradition. Modernity believes that truth is based on facts. In the modern worldview, people should believe only what they can observe. Modernity trusts the power of reason and critical thinking to solve the world's problems. It looks to science, and not to religion, to provide the answers to life's mysteries. Modern people have often developed an optimistic faith in the progress of humanity through knowledge, scientific inquiry, innovation, invention, and rational thought.

The rise of modernism led to the rise of secularism. The two go hand in hand. Secularism is defined as a system of ideas or practices that rejects the primacy of religion in our corporate life. In its hard form, secularism is atheistic. It denies the reality of God. But in its softer, more widespread form, it accepts God's reality but rejects the church as a controlling force in the life of the national community. It believes that church and state should be separate entities in modern life. This doesn't mean that individual faith cannot inform our politics; it simply means that the state should not sponsor a particular religion and give it preferential treatment and power. In this sense, the founding fathers of the United States were secularists.

As modernity developed and spread, an intense reaction arose among religious traditionalists firmly entrenched in a pre-modern worldview, primarily within Christianity, Judaism, and Islam.
Beginning nearly 300 years ago, European biblical scholars began to question the literal truth of the biblical accounts in both the Old and New Testaments. Nothing was considered sacred. The virgin birth of Jesus, his miracles, and his resurrection were all subjected to scrutiny and question. The doubts posed by modern philosophers, biblical scholars, and theologians threatened traditional religious dogma.

As a result, in the late nineteenth and early twentieth centuries, reactionary religious movements tried to reinforce traditional religious fundamentals and re-establish belief in the literal truth of the biblical stories. If modernity wanted to deal with factuality, the fundamentalists responded in kind. They were not content to simply say that the Bible expressed eternal truths or that its stories were metaphorically true. Now they demanded that Christians accept scripture as factually and literally true. Even texts that for centuries had been regarded as metaphorical now assumed the status of factuality.

By the 1920s, the pre-modern worldview of the fundamentalists came into increasing conflict with modern secular thought. The clash between the two sides created a crisis in the church, particularly over the theory of evolution and the literal acceptance of the creation account in Genesis. The 1925 Scopes "Monkey Trial" was a public battle between these two competing positions and marked the transition point at which modernity became the new majority worldview in American society. Over eighty years later, Christian fundamentalists continue to demand that public school districts teach the parable of creation as "creation science" alongside the scientific theory of evolution.

the church in the global south

The modern worldview is not in the majority everywhere. On a worldwide basis, Christianity continues to embrace a pre-modern worldview. In the Global South (the areas that we often call the Third World), huge and growing Christian populations - currently 480 million in Latin America, 360 million in Africa, and 313 million in Asia, compared with 260 million in North America - now make up what the Catholic scholar Walbert Buhlmann has called the Third Church. It is a form of Christianity as distinct as Protestantism or Orthodoxy, and one that is likely to become the dominant Christian faith on the globe.

There is increasing tension between what one might call a liberal Northern Reformation, in which many U.S. and European churches have embraced modernity, and a conservative Southern Counter-Reformation, in which the Third World churches are staunchly pre-modern. The church in the Global South is unfortunately dominated by the pre-modern institution of patriarchy with all of its negative implications, including the subjugation of women and abhorrence of gays. An enormous rift seems inevitable, and global denominations spend great effort and time calling for unity while seemingly irreconcilable theological differences drive the two factions apart.

In the twenty-first century, Christians are facing a shrinking population in the "Liberal West" and a growing majority of the "Conservative Rest." During the past half-century the critical centers of the Christian world have moved decisively to Africa, to Latin America, and to Asia, and the balance will never shift back. The growth in Africa has been relentless. In 1900, Africa had just 10 million Christians out of a continental population of 107 million - about nine percent.
Today the Christian total stands at 360 million out of 784 million, or 46 percent. And that percentage is likely to continue rising, because Christian African countries have some of the world's most dramatic rates of population growth. Meanwhile, the advanced industrial countries are experiencing a dramatic birth dearth. Within the next twenty-five years, the population of the world's Christians is expected to grow to 2.6 billion (making Christianity by far the world's largest faith). By 2025, 50 percent of the Christian population will be in Africa and Latin America, and another 17 percent will be in Asia. Those proportions will grow steadily. By about 2050 the United States will still have the largest single contingent of Christians, but all the other leading nations will be Southern: Mexico, Brazil, Nigeria, the Democratic Republic of the Congo, Ethiopia, and the Philippines. By then the proportion of non-Latino whites among the world's Christians will have fallen to perhaps one in five.

The vast majority of Christians remain divided into pre-modern and modern camps. Yet, while these two worldviews continue to spar, a new group of people in the industrial West has declared them both irrelevant.

the postmodern worldview

A new historical epoch is unfolding before our eyes. It began about the middle of the twentieth century and is continuing to develop today. For lack of a better designation it is being called postmodernism, the successor of modernism. We are not sure how it will play out in the long term, but some initial observations are being made about its nature.

Postmodernity is a different reaction to modernity. Postmodern people are essentially disenchanted modernists. They are convinced that human reason and cleverness cannot achieve the happiness we seek. They have witnessed the environmental ravages of the industrial revolution, the bloody history of the twentieth century, and continued misery, poverty and hunger around the globe. None of these problems were solved by scientific knowledge. On the contrary, the by-products of science and the industrial revolution exacerbated many of our human problems. Science has provided cures for disease, but it has also created the threat of global warming and nuclear annihilation. In fact, the bombing of Hiroshima and the resulting nuclear arms race may have been the spark that marked the demise of modernity and ignited the rapid rise of a global postmodern culture.

But, unlike fundamentalism, postmodernism does not seek to return to an earlier time. Nor does it see a return to authoritarian religion as the answer. Postmodernism is characterized by the belief that both religion and science have failed us. Neither can be trusted to provide the answers to life's mysteries or to solve life's perplexing problems.

truth and experience

Postmodern people reject the notion of absolute truth. They no longer trust authority, and they reject any institution that claims a monopoly on the truth. They have become highly suspicious of facts. They believe that all truth, even to some extent scientific knowledge, is subjective, biased, and socially constructed. Truth depends on what one's culture regards as truth. Therefore the truth is not really true. In the postmodern worldview, people become their own authority and accept only what they personally experience. There is a sense that feeling is all that counts because, in the end, feeling is all there is.
The postmodern attitude is, "If I can feel it, if I can touch it, then it must be true." Among postmoderns there is a pervasive cultural pessimism that is cynical about the political and ideological grandstanding of authorities and institutions. In a century of bombs, holocausts, and ecological disasters, many people have become disillusioned with their inherited faiths, the institutional church, political parties, and the political process. In the United States, Watergate and the Vietnam War created a pervasive anti-institutional mood among Baby Boomers, and it has spread to their children. As a result, voter apathy is on the rise and church membership is on the decline.

Generation Xers are deeply suspicious of grand claims. They see life as complex and they distrust simple solutions. Churches which claim they have the last and final word on everything will find it very hard to attract this generation, which cannot believe that there is just 'one way for all'. They will look at Christianity as one of many options to be considered in a world in which each person finds his or her own truth and meaning.

In the 1990s, the TV program "The X-Files" contrasted the modern and postmodern paradigms. FBI agent Dana Scully, played by Gillian Anderson, was the epitome of the modern scientific approach to life. Agent Fox Mulder, played by David Duchovny, however, was a postmodern person who cautioned us to "trust no one" in authority and to believe that, although we do not yet fully comprehend it, "the truth is out there." Whereas Scully trusted her head, Mulder trusted only his experience.

the roots of postmodernism

The movement from modernity to postmodernity in America began with the Baby Boomers. Born between 1946 and 1964, this was the first generation raised under the threat of nuclear weapons. Boomers knew in their guts that science had created a demon that could destroy the world. They saw their school and church basements filled with civil defense emergency supplies, they practiced ineffective "duck and cover" drills in classrooms, and they listened to their parents discuss the need for backyard fallout shelters.

In the 1960s, they observed the unmasking of the entrenched racism, sexism and militarism that pervaded American culture, and they reacted with protests and social action. The only authority figures they trusted were assassinated: first John Kennedy in 1963, and then Martin Luther King, Jr. and Bobby Kennedy in 1968. In all of these issues they saw the traditional church as a complicit conspirator with the prevailing societal powers in a culture of rigid moralism, oppression and violence.

The Baby Boom generation began a search for a more authentic faith, away from authoritarian religion and toward experiential spirituality. Their suspicion toward the pre-packaged truths of religious institutions led them to seek spirituality in many new forms: charismatic Christianity, Eastern religions, and New Age spirituality.

When the Baby Boomers had children, their sons and daughters exhibited the same characteristics, but to an even greater degree. The attitudes and traits attributed to Generation X, born between 1965 and 1981, are often precisely those that researchers have identified as typical of the Baby Boomer generation. The difference is that the young men and women of Generation X have held these values from childhood. Generation Y, born after 1982, is carrying these ideas even further.
Observers are discovering that this shift in attitudes is indicative of a fundamental change around the globe. In many respects, Europeans are ahead of Americans in the move to postmodernity. The abandonment of traditional Christianity is certainly much stronger there.

Historical epochs are not neatly separated. They are not lined up end to end. It is possible to continue to live in an era that is essentially over. While one era prevails, its successor is already forming, and its predecessor continues to exert influence for a very long time. These three worldviews (pre-modern, modern, and postmodern) coexist side by side today in all parts of American culture, but the coexistence is particularly apparent in our churches. Some Christians accept what they are told by religious authorities. Others question authority and use reason as a guide. Still others reject institutional religion and trust only their own spiritual experiences. But regardless of generation, culture, or attitude, we are all moving together toward a postmodern world. And the movement is rapidly accelerating.
The Delmarva Peninsula fox squirrel will be removed from the endangered species list next month, the U.S. Fish and Wildlife Service (FWS) announced.

The move has been a long time coming. The squirrel was one of 78 species to be listed under the original Endangered Species Preservation Act of 1967, a forerunner of today's Endangered Species Act, which became law in 1973.

At about 15 inches in body length, not counting the tail, Delmarva fox squirrels are larger than other squirrel species, and unlike more typical squirrels they are not usually seen in urban and suburban environments. Instead, they live on rural, forested lands and in agricultural fields.

The animals once ranged in healthy numbers across the Delmarva (Delaware, Maryland, Virginia) Peninsula. But mid-20th-century forest clearing for timber harvesting, agriculture, and development, along with hunting, nearly wiped out the species. Now, though, its numbers are so robust that the squirrel is no longer considered at risk of extinction. According to the FWS, the squirrel has expanded its range from four to 10 counties since being listed. Its population is now estimated at 20,000, covering nearly 30 percent of the peninsula, primarily in Maryland.

Officials say the Endangered Species Act has put the Delmarva fox squirrel back on the map. "The Act provides flexibility and incentives to build partnerships with states and private landowners to help recover species while supporting local economic activity," said the U.S. Department of the Interior's Principal Deputy Assistant Secretary for Fish and Wildlife and Parks, Michael Bean. "I applaud the states of Maryland, Delaware and Virginia, and the many partners who came together over the years to make this day possible."

The Blackwater (Maryland), Chincoteague (Virginia) and Prime Hook (Delaware) national wildlife refuges are places where nature lovers can see the reinvigorated squirrel, the FWS said.
The following "Romeo and Juliet" essay presents the popular play. "Romeo and Juliet" is a tragedy written by the renowned Shakespeare in the 1590s, and it has remained a popular play ever since. There is always a reason or two for the events in literary works of art. While some students love writing about "Romeo and Juliet" essay topics, others find them problematic, challenging and time-consuming.

"Romeo and Juliet" by William Shakespeare is a play written in the 16th century about a tragic love story between two teenagers who come from rival families. Fate brings them together, and despite the grudge that each family holds for the other, they fall in love.

While courting Juliet, Romeo says, "My lips, two blushing pilgrims, ready stand, to smooth that rough touch with a tender kiss" (1:5:97-98). Prior to this statement Romeo had equated Juliet with a holy shrine, and he then employs the religious concept of … Consider whether or not there is a connection between these references and Friar Laurence's description of Juliet as a "living corse."

Unrestrained emotions lead to dire consequences for the protagonist of Shakespeare's tragic story of love. Everything depended on the lovers' decisions, so they condemned themselves to such a tragic ending. The tragic love story of Romeo and Juliet, as with all the Shakespearean masterpieces, provides a wide selection of essay topics to dwell on. If you are reading this William Shakespeare play and have to write an essay about Romeo and Juliet, there are many Romeo and Juliet essay examples to draw on.

One student writes: "I'm writing an essay for Romeo and Juliet and the topic is 'Romeo and Juliet is a play that celebrates young love'. I've started the introduction and I'm going to argue both sides."