This resource contains 3 shaped booklets for students to show their understanding of the different parts of speech. In each of the booklets, students will write the definition and examples of the targeted parts of speech.

Ways to use this:
*Create example booklets
*Glue into interactive notebooks
*Use in a literacy or writing center
*Use as an assessment tool

I'm planning on using all three of these over three months (Apples in September, Pumpkins in October, and Acorns in November). Please view the preview to see every single page.
This is a lesson about the ‘anti-language’ Polari. Polari is a slang used by gay men (predominantly) in the years before the partial decriminalisation of homosexuality in 1967. This lesson usualises and actualises gay and bisexual men. Learning Objectives: LO1 Understanding and responding to what speakers say in formal and informal contexts. LO2 Recognise different […]

Lesson Title: Poetry/Music lyrics analysis. Key Stage: 4. National Curriculum Key Stage and Targets: (2.2b) Understand how meaning is constructed within sentences and across texts as a whole. (2.2g) Relate texts to their social and historical contexts and to the literary traditions of which they are a part. Cross-Curricular Elements: History – timeline of events […]

Lesson Title: Persuasive/Argumentative devices. Key Stage: 3. National Curriculum Key Stage and Targets: (2.2 j, l+m) How texts are crafted to shape meaning and produce particular effects. How writers’ uses of language and rhetorical, grammatical and literary features influence the reader. How writers present ideas and issues to have an impact […]

Lesson Title: Creating Empathy Through Rhetorical Questions (by studying the poem ‘Quiet Kid’). National Curriculum Key Stage: KS4. National Curriculum Targets: 2.1 h. Listen with sensitivity, judging when intervention is appropriate. 2.1 j. Work purposefully in groups, negotiating and building on the contributions of others to complete tasks or reach consensus. 2.2 c. Recognise […]

Lesson Title: Biography. Key Stage: 3. Barbara Burford 1944–2010. Barbara Burford was a lifelong champion of equality and diversity in the health service and has directly inspired hundreds of health professionals and managers through her work. Throughout her career, Barbara worked closely with the NHS, government departments, minority groups and public sector organisations […]

- KS4 English – Shakespeare, Sonnets and Sexuality
- No Outsiders in Our Schools: Teaching the Equality Act in Primary Schools & Reclaiming Radical Ideas in Schools: Preparing Young Children for Life in Modern Britain, by Andrew Moffat
- PSHE – Alphabet Soup
- KS3 – PSHE – Shark Bait
- KS4 – PSHE – Omar: Young, Gifted and Gay 2 Part 1
It's National Cataract Awareness Month

According to the World Health Organization, cataracts are responsible for 51% of cases of blindness worldwide - although this blindness is preventable with treatment. In fact, research shows that in industrialized countries about 50% of individuals over the age of 70 have had a cataract in at least one eye. This is partially because cataracts are a natural part of the aging process of the eye, so as people in general live longer, the incidence of cataracts continues to increase.

What are Cataracts?

Cataracts occur when the natural lens in the eye begins to cloud, causing blurred vision that progressively gets worse. In addition to age, cataracts can be caused or accelerated by a number of factors including physical trauma or injury to the eye, poor nutrition, smoking, diabetes, certain medications (such as corticosteroids), long-term exposure to radiation and certain eye conditions such as uveitis. Cataracts can also be congenital (present at birth).

The eye's lens is responsible for the passage of light into the eye and for focusing that light onto the retina; in other words, it is responsible for the eye's ability to focus and see clearly. That's why, when the lens is not working effectively, the eye loses its clear focus and objects appear blurred. In addition to increasingly blurred vision, symptoms of cataracts include:

"Washed Out" Vision or Double Vision: People and objects appear hazy, blurred or "washed out" with less definition, depth and color. Many describe this as being similar to looking out of a dirty window. This makes many activities of daily living a challenge, including reading, watching television, driving or doing basic chores.

Increased Glare Sensitivity: This can happen both from outdoor sunlight and from light reflected off of shiny objects indoors. Glare sensitivity causes problems with driving, particularly at night, and with seeing our surroundings clearly and comfortably in general.

Faded Colors: Often colors won't appear as vibrant as they once did, often having a brown undertone. Color distinction may become difficult as well.

Compromised Contrast and Depth Perception: These eye skills are greatly affected by the damage to the lens. Often individuals with cataracts find that they require more light than they used to in order to see clearly and perform basic activities.

Early stage cataracts may be able to be treated with glasses or lifestyle changes, such as using brighter lights, but if they are hindering the ability to function in daily life, it might mean it is time for cataract surgery. Cataract surgery is one of the most common surgeries performed today; it involves removing the natural lens and replacing it with an artificial lens, called an implant or an intraocular lens. Typically the standard implants correct the patient's distance vision, but reading glasses are still needed. However, as technology has become more sophisticated, multifocal implants are now available that can reduce or eliminate the need for glasses altogether. The procedure is usually an outpatient procedure (you will go home the same day) and 95% of patients experience improved vision almost immediately.
While doctors still don't know exactly how much each risk factor contributes to cataracts, there are a few ways you can keep your eyes healthy and reduce your risk:
- Refrain from smoking and high alcohol consumption
- Exercise and eat well, including lots of fruits and vegetables that contain antioxidants
- Protect your eyes from UV radiation, such as sunlight
- Control diabetes and hypertension

Most importantly, see your eye doctor regularly for a comprehensive eye exam. If you are over 40 or at risk, make sure to schedule a yearly eye exam.
Helicobacter pylori (H pylori) is a bacterium that is present in approximately half of the people in the world. However, not everybody infected with H pylori develops the signs and symptoms of an H pylori infection.

H pylori infection is most commonly caused by consuming food or water (or any liquid) that is contaminated with fecal matter containing the H pylori bacteria. It can also spread by mouth-to-mouth contact. However, it is unclear why some people with H pylori develop symptoms while others do not. Some people with H pylori infection develop a variety of digestive problems, such as persistent inflammation of the stomach (chronic gastritis), inflammation of the duodenum (duodenitis), stomach ulcers, and even stomach cancer.

The bacteria infect the body in the following way:
- The bacteria infect the protective tissue (mucosa) that lines the stomach
- Certain enzymes and toxins are released
- The protective mucous barrier is destroyed by the bacteria, exposing cells to toxins
- The immune system gets activated
- Together, these factors may injure the cells of the digestive tract (stomach or duodenum), resulting in digestive system disorders

Who is at risk for H pylori infection?

In developing countries, you have a greater chance of getting an H pylori infection during childhood. You are at risk for H pylori infection if:
- You live in crowded homes.
- You live in places where there is no reliable supply of clean water.
- You live in a developing country (since there are problems with sanitation, most people are infected before age 10).
- You live with someone who has an H pylori infection.
- You are Hispanic or African-American; these groups have a higher rate of infection than Caucasians, which may point to a genetic susceptibility to H pylori in certain people.
In the United States and other developed countries, H pylori infections are more common in adults.

Which tests are used to diagnose H pylori infection?

The most common tests used to diagnose H pylori are:
- Urea breath tests: You will be asked to drink a specialized solution and then blow through a straw into a glass tube. After that, your breath will be tested. The presence of tagged carbon molecules in the breath makes the test positive (H pylori infection is present).
- Stool tests: Stool antigen tests are used to detect H pylori proteins in the stool.
- Blood tests: Certain blood tests can help identify whether you have an active infection or have been previously infected.

What is the best treatment for H pylori infection?

The most effective treatment for H pylori often involves taking several medications for 2 weeks and is known as triple therapy. These medications include:
- A proton pump inhibitor: Some examples include lansoprazole (Prevacid), omeprazole (Prilosec), pantoprazole (Protonix), and rabeprazole (AcipHex)
- Two different antibiotics (metronidazole and tetracycline, or metronidazole and amoxicillin)
- Bismuth subsalicylate (Pepto-Bismol)

Successful treatment of H pylori can help in the healing of the ulcer, prevent the recurrence of ulcers, and reduce the risk of ulcer complications (like bleeding). To get maximum results from H pylori treatment, it is important to take the entire course of all medications.

Reference: Patient education: Helicobacter pylori infection and treatment (Beyond the Basics). Available at: https://www.uptodate.com/contents/helicobacter-pylori-infection-and-treatment-beyond-the-basics
We can graph the terms of a sequence and find functions of a real variable that coincide with sequences on their common domains. The reader may notice a connection between this explicit formula and the function $f$ whose domain consists of all real nonzero $x$-values. If we evaluate the function at $x = n$ for any positive integer $n$, we will find that the result matches $a_n$. Let's plot the terms in the sequence and the function on the same axes. The sequence and the function coincide where they are both defined.

A useful first attempt to associate a function of a real variable to a sequence generated by an explicit formula is to try replacing $n$ in $a_n$ with $x$ and let $f(x)$ be the function obtained this way. Sometimes, we have to work a little harder to find a function of a real variable by which our sequence can be modeled. Note that while two such expressions may look very different, they can generate the same values for each integer $n$; the latter expression is the one needed when finding a function of a real variable that coincides with the sequence. The last example suggests that while there is a connection between sequences and functions of real variables, care must be taken when finding a function of a real variable that coincides with the sequence on their common domains.

Another important point to consider is that many different functions of a real variable can agree when they are evaluated at the integers, as the following example shows. If we use a positive integer $n$ as an input for each function, each of them returns the same value $a_n$. It should not be too surprising that several functions can be used to represent the same sequence, because the sequence is defined only for integers, while the functions above are defined for all real $x$-values in their domains.

While there are many examples of sequences, there are two important types of sequences that arise in applications. The first type of sequence we examine is an arithmetic sequence. By requiring that the starting index be $n = 1$, the terms in the sequence are given explicitly by the formula $a_n = a_1 + 4(n-1)$, or recursively by the starting value $a_1$ together with the rule $a_{n+1} = a_n + 4$. The difference between subsequent elements in this sequence is always four. In general, the terms in an arithmetic sequence in which subsequent terms differ by $d$ can be written as
$$a_n = a_1 + d(n-1).$$
Alternatively, we could describe an arithmetic sequence recursively, by giving a starting value $a_1$ and using the rule that $a_{n+1} = a_n + d$. You should check that this general statement holds for our two examples. Recall that arithmetic sequences are those where the difference between neighboring elements is constant; arithmetic sequences are analogues of lines.

Consider the earlier example. By requiring that the lower index of the sequence be $n = 1$, we can examine a graph of the sequence. Noting that the explicit formula for the $n$-th term in this sequence is linear in $n$, we can try to find a function of a real variable that coincides with it. The simple rule of replacing $n$ with $x$ works here; we can set $f(x)$ equal to the same expression and graph both on the same set of axes. We can also graph the terms of a sequence given recursively: by unwinding the recursion we can find an explicit formula for the $n$-th term, and then we can easily find a function of a real variable that coincides with the sequence on their common domains by replacing $n$ with $x$. We can graph both of these on the same set of axes.
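To make the arithmetic pattern concrete, here is a small worked example; the starting value is an assumption chosen for illustration (the original example's specific numbers did not survive extraction), while the common difference of four matches the description above.

\[
  a_1 = 3,\qquad a_{n+1} = a_n + 4 \quad\Longrightarrow\quad a_n = 3 + 4(n-1) = 4n - 1, \qquad n = 1, 2, 3, \dots
\]
\[
  f(x) = 4x - 1 \quad\text{satisfies}\quad f(n) = a_n \ \text{for every positive integer } n,
\]

so the plotted terms of this sequence lie on the line $y = 4x - 1$, which is the sense in which arithmetic sequences are analogues of lines.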
A second very important family of sequences we consider is that of geometric sequences. A geometric sequence can also decrease as it progresses. In general, a geometric sequence in which the ratio between subsequent terms is $r$ can be written as
$$a_n = a_1 r^{\,n-1}.$$
Alternatively, we could describe a geometric sequence recursively, by giving a starting value $a_1$ and using the rule that $a_{n+1} = r \cdot a_n$. As usual, you should check that these general rules hold for the specific examples we've considered.

Let's look at plots of some of our special sequences. For the first example, we can easily find a function of a real variable that coincides with the sequence on their common domains by replacing $n$ with $x$ in the formula for its terms to obtain $f(x)$, and we can graph them both on a common set of axes. Now let's consider the second example of a geometric sequence from before, which is illustrated below. In this example, the explicit formula for the terms is again exponential in $n$, so we can find a function of a real variable that coincides with the sequence by setting $f(x)$ equal to the same expression with $n$ replaced by $x$. We can plot these on the same set of axes.

Can all geometric sequences be modeled using an exponential? A third example explores this. Consider the sequence modeled below. Replacing $n$ with $x$ as we did in a previous example gives us a function that agrees with the sequence on their common domains. Geometric sequences play an important role in the coming sections, so it is useful to have a good feel for what plots of these sequences do!

"Obvious" is the most dangerous word in mathematics. – E.T. Bell
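As a footnote to the plots described above, here is a minimal Python/matplotlib sketch of graphing a geometric sequence together with a matching function of a real variable. The particular sequence $a_n = 2\cdot(1/2)^{n-1}$ is an assumed stand-in, since the original example's values are not recoverable from the text.

    import numpy as np
    import matplotlib.pyplot as plt

    # Assumed geometric sequence: first term 2, common ratio 1/2 (illustrative only).
    n = np.arange(1, 11)                  # integer indices n = 1, ..., 10
    a_n = 2 * 0.5 ** (n - 1)              # terms a_n = 2 * (1/2)^(n - 1)

    # Function of a real variable obtained by replacing n with x.
    x = np.linspace(1, 10, 400)
    f_x = 2 * 0.5 ** (x - 1)

    plt.plot(x, f_x, label="f(x) = 2*(1/2)^(x-1)")            # continuous curve
    plt.scatter(n, a_n, color="red", zorder=3, label="a_n")   # sequence terms
    plt.xlabel("n (or x)")
    plt.ylabel("value")
    plt.legend()
    plt.show()

The scattered points land exactly on the curve, which is the sense in which the sequence and the function coincide on their common domains.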
Did the South Start the War Between the States? The South is often charged with having started the War Between the States when Confederate forces in South Carolina fired on Fort Sumter. What is not generally known is that South Carolina had freely ceded property in Charleston Harbor to the federal Government in 1805, upon the express condition that "the United States... within three years... repair the fortifications now existing thereon or build such other forts or fortifications as may be deemed most expedient by the Executive of the United States on the same, and keep a garrison or garrisons therein." Failure to comply with this condition on the part of the Government would render "this grant or cession... void and of no effect." The State then appointed commissioners and paid for the land to be surveyed out of its own treasury. Work on Fort Sumter did not begin until 1829 and had still not been completed by 1860. Unfinished and unoccupied for over thirty years, the terms of the cession were clearly not fulfilled. Consequently, the fort was never the property of the United States Government, as Abraham Lincoln claimed in his First Inaugural Address, and, upon secession from the Union, the only duty which South Carolina owed, either legally or morally, to the other States was "adequate compensation... for the value of the works and for any other advantage obtained by the one party, or loss incurred by the other." Such being the case, the occupation of Fort Sumter by U.S. troops was technically an act of invasion and the Confederate forces in Charleston were wholly justified in firing upon them when it became evident that Lincoln intended to use military force against the State. Was the War Between the States a Civil War? A civil war is defined as a conflict between two opposing factions within the same country, and thus the term reflects the position of Abraham Lincoln that the Southern States never lawfully seceded from the American Union. The truth of the matter is that secession was never viewed as unlawful or unconstitutional by the majority of Americans until after the war, and there is therefore no valid reason not to consider the Confederate States of America to have been an independent republic from 1861 to 1865. Thus, it is historically inaccurate to refer to the War Between the States as a civil war.
The government of the State of Hawaii is structured with a system of checks and balances so that power is not confined to just one branch. There are three separate but equal branches whose powers are outlined in the Constitution of the State of Hawaii. Roughly speaking: - The Legislative Branch makes the laws. - The Executive Branch implements the laws. - The Judicial Branch interprets the laws. As a representative democracy, the registered voters in Hawai’i choose the people to serve as their voices in legislative government. The Hawai’i legislature is “bi-cameral” (has two chambers): the Senate (25 elected members who serve staggered four-year terms) and the House of Representatives (51 elected members who serve two-year terms). Every street address in the state is located in a particular Senate District and House District. To find your districts (and the legislators who currently represent you), visit the Legislature’s website linked below. Elections are held in even-numbered years and are overseen by the Office of Elections. Hawaii operates with a biennial (two-year) legislative session and is considered a part-time legislature. Legislators may hold other positions of employment. The legislature convenes in Regular Session from January through late April or early May (a total of sixty session days, which excludes weekends, holidays and recess days). The rest of the year is the interim. While legislators are not in session, their offices remain open as they research issues, help constituents (people who live in their district), discuss proposals, and draft legislation. Bills are introduced by legislators at the beginning of session; if successful, they become acts of law. Legislators may also propose resolutions and constitutional amendments. A calendar of deadlines requires legislation to move quickly through committee hearings (where lawmakers hear testimony from the public) and the mandatory readings (floor votes) in order to survive. Both chambers establish rules governing their procedures, and organize themselves into various leadership roles and standing committees. Such roles and assignments allocate power and responsibility, and anyone attempting to influence legislation is aided by an understanding of these dynamics.
Dark matter is estimated to make up a vast majority of the universe's mass, yet no one has ever definitively detected it before. Now researchers from MIT are working to modify a particle accelerator to allow it to test select theories on what dark matter could actually be like. The work, described on Friday in Physical Review Letters, is primarily going to test if dark matter takes the form of a photon-like particle with one particularly notable difference: it has mass. In the paper, the researchers confirm that the tool they've built will be able to appropriately modify a particle accelerator in order to perform the tests. The tool would use the particle accelerator at Jefferson Lab in Virginia to produce a narrow beam of electrons containing a megawatt of power. "You could not put any material in that path," Richard Milner, a co-author of the paper, says in a statement. Should dark matter take the hypothesized, photon-like form, the beam will allow the researchers to detect two particles that it would decay into. If they can detect those two additional particles, they'll be able to prove the hypothesized particle's existence. The experiment, known as DarkLight, would need another two years of testing followed by two years of data-gathering before it could produce any results, however. And it's far from the only experiment examining dark matter out there: physicists working at the Large Underground Xenon (LUX) detector are actually planning to announce the first results of their dark matter experiments on Wednesday. "It can have enormous consequences for our theories and our understanding," Milner says, were DarkLight to detect dark matter. "It would be absolutely groundbreaking in physics."
Growing up, my teachers always had a fondness for Roald Dahl, and for that, I am grateful. Roald Dahl was a British author famous for such works as Matilda, Charlie and the Chocolate Factory, The BFG, James and the Giant Peach, and The Witches. If you haven't read those books, chances are you've seen the movie adaptations. Dahl has sold more than 250 million books worldwide and continues to be popular among many teachers and students today. This blog post will cover different techniques for teaching writing with Roald Dahl and his marvelous books.

Roald Dahl's birthday was September 13. Therefore, we thought we would go down memory lane and discuss how his books can be used as mentor texts to teach writing. Mentor texts serve as real-life examples of writing that provide a model for students to emulate. Students can learn how to write effectively from Dahl's published works.

Matilda: Forming Ideas and Creating Characters

My personal favorite of Dahl's is Matilda. This novel is appropriate for third grade and up, and it is a wonderful read-aloud for younger grades as well. The story is about a five-and-a-half-year-old girl with remarkable intelligence. With the ability to read advanced works and the knowledge to do upper-level math, Matilda stands out as the oddball in her family of self-centered people. Her family doesn't appreciate her need for learning, and they treat her in borderline neglectful ways. Matilda even possesses some interesting magical gifts. Once Matilda starts school, her precious teacher Miss Honey sees the young girl's remarkableness. However, the mean, oaf-like principal, Miss Trunchbull, does not. This novel is heartwarming and unique.

Matilda is a wonderful mentor text for teaching your students how to write interesting, complex, and multi-faceted characters for their stories. Matilda is chock full of them, from Lavender, Matilda's whimsical pigtail-wearing friend, to the sweet Miss Honey, whose name alludes to her even sweeter demeanor. Students could be inspired by Bruce, the boy who loves chocolate cake, or by Matilda's whacky and tacky used-car-salesman father. Miss Trunchbull, with her scary disciplinary ways, is another example of a captivating character. Analyzing the characters, listing their character traits, and evaluating their motivations (why they do what they do) can help students home in on some spellbinding characters of their own.

Many students struggle to get the ideas flowing that they need for writing. Matilda has magical elements mixed with realistic characters that could inspire children to use their imagination to form ideas. Some enthralling ideas students could explore range from how to intermingle realistic elements with fantasy and magical ones to how to write shocking scenes. Roald Dahl is famous for this. I believe anyone who has read Matilda remembers the scene in which Bruce eats an entire chocolate cake in front of the school, or when Miss Trunchbull spins Lavender over her head and throws her into a field. Shocking scenes make for interested readers, and students can learn that from Roald Dahl.

James and the Giant Peach: Setting and Descriptive Language

James and the Giant Peach is my mother's personal favorite. A four-year-old boy named James becomes an orphan when his parents are eaten by rhinos. (Classic Roald Dahl to throw in the shock factor.) He is sent to live with his Aunt Spiker and Aunt Sponge, who are evil.
Three years later, James is extremely depressed and begs his aunts to take him to the sea where he lived with his parents, yet they say no, as evil aunts will do. James, distraught, runs outside and finds an old man in a green suit. This old man gives James magic green crystals and tells James that if he eats them, amazing things will happen. James is about to do as commanded, but then he trips over the roots of his aunts' peach tree and the crystals burrow into the ground. Immediately a gigantic peach starts to grow. The aunts see this as an opportunity to get rich as crowds form around the spectacle. Next, James is sent to clean up after the crowds and sees a hole in the peach. He crawls in and discovers a whole world of bugs his size, from a spider to a ladybug. From this encounter, the story becomes more and more enthralling and kooky as the giant peach falls off the tree and squashes the aunts. The peach rolls and rolls and finally plunges into the sea, with James floating inside it alongside this kooky crew of bugs. More and more spellbinding scenarios occur, leaving readers intrigued.

Since the majority of the story takes place in a peach, analyzing the setting will help students come up with their own interesting settings. Evaluating how settings impact stories is crucial when teaching writing. James lives on top of the peach as it continues to travel and float in the sea - that is, until lassoed seagulls lift it into the sky. Students learn that if it weren't for the setting of the peach, the story would be entirely different. They can imagine a scenario in which the peach isn't part of the story, and then discuss what elements of the story would change. Dahl emphasizes setting so much that it takes on the life of another character, in a way. Encourage students to think outside of the box in terms of where their story will take place. Settings can range from outer space to even a piece of fruit.

Descriptive language is one of Dahl's strengths; he spins words together in spellbinding fashion. By taking excerpts from James and the Giant Peach and analyzing them, students can become inspired in their own writing. Here is an example of how Dahl uses descriptive language to elicit wonderful imagery:

"Everybody was feeling happy now. The sun was shining brightly out of a soft blue sky and the day was calm. The giant peach, with the sunlight glinting on its side, was like a massive golden ball sailing upon a silver sea."

Teachers can have students analyze what word choices create this imagery. Dahl's writing is full of similes, metaphors, and descriptive language. His works are an amazing example of mentor texts that help students write their own unique works.

Boy: Roald Dahl's Autobiography

Roald Dahl not only wrote amazing fantasy novels, but he also wrote an autobiography. Boy: Tales of Childhood details his childhood into early adulthood, his life in the public schools of England, and his very first job. Boy: Tales of Childhood chronicles how his life experiences shaped him into becoming a writer. His autobiography spans the 1920s-1930s in Wales, which makes for an interesting backdrop. His childhood contains comical mishaps with teachers and mischievous musings with his friends. Dahl was inspired to create Miss Trunchbull from memories of his semi-abusive headmaster at his boarding school.
Students can get a glimpse into Dahl's early life and see how many aspects of his childhood influenced his works, including how the Cadbury company used Dahl and his classmates as focus-group members and how this inspired Charlie and the Chocolate Factory. Boy: Tales of Childhood makes a perfect example to use to teach students how to write a narrative or their life story. Dahl includes many quotes that are perfect tips for writing an autobiography:

"When writing about oneself, one must strive to be truthful. Truth is more important than modesty."

"An autobiography is a book a person writes about his own life and it is usually full of all sorts of boring details." (We admittedly would tell students we don't want all the boring details.)

Boy: Tales of Childhood also has many shock-factor scenes that show students how honesty in narratives will keep readers interested. I first read this book when my Miss Frizzle-like fifth-grade teacher read it aloud to us. The scene I remember the most was when Dahl's boil was lanced. The way he describes it and his feelings minute by minute made for a shocking yet hilarious story. Additionally, Dahl describes how he put a dead mouse in a candy jar at one of his favorite candy stores, which makes for laugh-out-loud moments. Dahl had many interesting adventures, which can inspire students not to hold back when writing about their own life experiences.

Roald Dahl is an author who will always stick with you once you read his works. If you have not introduced your students to him, now is the time for a new generation to experience his writing. By using his works as mentor texts, students can learn how to write unique characters and settings and come up with compelling ideas, all while using descriptive language. Roald Dahl is one of our personal favorites, and we hope he becomes one of yours too.
The influenza virus, most commonly referred to as the flu, sickens roughly 13-20% of the United States population each year. Symptoms can range from uncomfortable but manageable to requiring hospitalization. More than 200,000 people are hospitalized yearly, and complications that can arise from the flu include bacterial pneumonia, dehydration, and worsening of existing medical conditions like congestive heart failure, asthma, or diabetes. In children, sinus and ear problems are also common. The very young, very old, pregnant women, and immune-compromised patients are at higher risk for severe infections and complications. Roughly 2,000 people are estimated to die from the flu each year in this country.

How the Flu Affects Productivity

The CDC estimates that the peak flu season in the US is November through March, when flu cases rise. Data from 2010 estimates that 100 million workdays are lost annually due to flu-related illnesses, amounting to $6.8 billion in lost wages for uncompensated sick time. However, over 60% of employers offer paid sick time, which costs companies $10 billion in lost productivity. Studies also indicate that roughly 80% of those who contract the flu still go to work, potentially exposing co-workers to the virus. Some of those missed workdays are not even for personal illness: 32 million school days are missed due to the flu, causing challenges for school-aged children and their working parents.

Transmission of the Flu Virus

The flu can be spread from person to person from up to six feet away. Experts agree that the flu is spread via droplet aerosolization; secretions expelled in a cough or sneeze, or carried in saliva, can land on the faces of nearby people or be inhaled into the lungs. Less often, the influenza virus is spread through surface contact with contaminated objects. The incubation period for the flu is about one to four days, during which infected but asymptomatic people can spread the virus to others. Symptoms can start as soon as day two, and the virus is contagious up to seven days after initial infection. Children are able to spread the infection past day seven.

The Flu and Other Viruses in HAIs

Recent studies estimate that at least 5% of nosocomial infections are viral. Perhaps not surprisingly, such healthcare-acquired infections (HAIs) generally parallel changes in contagious infections in the local community. Pediatric and geriatric hospital wards are more susceptible to seasonal infections like the flu. Because such patterns are generally related to community illness, as well as the potential for asymptomatic infections and repeat exposure in hospital settings, detecting and monitoring such HAIs can be difficult.

Preventing the Spread of the Flu and Other HAIs

Pedestrian as it sounds, the most crucial aspect of preventing the spread of viral infections is adequate hand-washing. This is especially true of health-care providers and clinic workers, but it remains important for the general public attempting to avoid infection as well. Touching a contaminated surface, like a bed rail or a tablet screen, is an easy way to spread the flu, though it's easily avoidable as well. In the clinical setting, high-touch surfaces include bed rails, supply carts, portable medical equipment, and, more recently, portable devices and tablets. Electronic surfaces should be wiped down with disposable wipes containing appropriate disinfecting solutions, paying particular attention to especially high-touch areas, like keyboards.
Portable devices such as tablets can be disinfected in the same way, or through alternative approaches such as bathing the tablet in UV light. If you're home sick with the flu, chances are your time in bed has you surfing the web on your mobile device - perhaps even more than one device! The importance of keeping device surfaces germ-free applies at home as well. Look for disinfecting cleaning wipes or sprays that kill microbes, and make sure to allow the disinfectant to sit for as long as recommended. If devices are shared with family members with immature or compromised immune systems, be aware that sharing your device could mean sharing your disease too, which can be riskier for the immunocompromised.

About the Author

Dana Carter, PhD is an academically trained experimental neuroscientist. Currently, Dana is a science writer who focuses on different aspects of psychology, physiology, and overall health and wellness. Prior to her current role, she spent a combined seven years researching the genetic components of mental illnesses and the effects of drugs and alcohol on fetal brain development. She received her PhD in Neuroscience from the Texas A&M Institute for Neuroscience and her B.Sc. in Psychology from Texas A&M University. She enjoys traveling, writing, and promoting learning about healthy, active minds and lifestyles.

References

Aitken C and Jeffries DJ. Nosocomial spread of viral disease. Clin Microbiol Rev. 2001;14(3):528-46.

Bures S, Fishbain JT, Uyehara CFT, et al. Computer keyboards and faucet handles as reservoirs of nosocomial pathogens in the intensive care unit. Am J Infect Control. 2000;28:465-70.

Cleaning High Touch Surfaces. Fight Bac. www.fightbac.org/storage/documents/SA3._Cleaning_High-Touch_Surfaces.pdf

CDC. Seasonal Influenza. http://www.cdc.gov/flu/about/qa/disease.htm. 2013.

CDC. Seasonal Influenza: Vaccine effectiveness - how well does the flu vaccine work? http://www.cdc.gov/flu/about/qa/vaccineeffect.htm. 2013.

Newsline. 100 Million Work Days, $7 Billion in Wages Lost to Flu Season. www.worldatwork.org/adimComment?id=56037. 2011.
[4 Steps] Create User Defined Exception in Python with Example

Want to create your own custom exception in Python? By the end of this tutorial, you will be able to define your own exception in Python. Before that, you should know basic exception handling.

Example of simple exception handling in Python: when you divide any number by zero, it raises a 'division by zero' exception.

Python program for exception handling:

    try:
        1/0
    except Exception as err:
        # perform any action on the Exception instance
        print("Error:", err)

Output:

    Error: division by zero

Note: You can also use assert to raise an exception in Python.

If you understand this simple example, creating a user-defined exception is not much more difficult. Let's begin with creating your own custom exception.

Create User Defined Exception in Python

Follow the steps below.

Step 1: Create User Defined Exception Class

- Write a new class (say, YourException) for the custom exception and inherit it from the built-in Exception class.
- Define the function __init__() to initialize the object of the new class. You can add as many instance variables as you want to support your exception. For simplicity, we are creating one instance variable called message.

This is how it looks:

    class YourException(Exception):
        def __init__(self, message):
            self.message = message

You have created a simple user-defined exception class.

Step 2: Raising Exception

Now you can write a try-except block to catch the user-defined exception in Python. For testing, inside the try block we raise the exception using

    raise YourException("Something is fishy")

This creates an instance of the exception class YourException. You can pass any message to your exception class instance.

Step 3: Catching Exception

Now you have to catch the user-defined exception using

    except YourException as err:
        print(err.message)

Here we are catching the user-defined exception called YourException.

Step 4: Write a Program for User-Defined Exception in Python

Let's club all these steps together. Here is how the complete program looks:

    class YourException(Exception):
        def __init__(self, message):
            self.message = message

    try:
        raise YourException("Something is fishy")
    except YourException as err:
        # perform any action on the YourException instance
        print("Message:", err.message)

Output:

    Message: Something is fishy

Congrats, you have defined your own exception in Python! Python makes things very simple, doesn't it?

Here is one more thing you can try for handling a user-defined exception. You can define multiple instance variables for the newly created exception class. With this, you can pass multiple values while raising the exception. In the example below, along with the error message we are also defining the level of difficulty for the exception.

    class YourException(Exception):
        def __init__(self, message, level):
            self.message = message
            self.level = level

    try:
        raise YourException("Something is fishy", "Level 5")
    except YourException as err:
        # perform any action on the YourException instance
        print("Message:", err.message)
        print("Difficulty Level:", err.level)

Output:

    Message: Something is fishy
    Difficulty Level: Level 5

This is the simple way of creating a user-defined exception in Python. If you have any questions, let me know in the comments.
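As a small optional extension (not part of the original tutorial's four steps), the sketch below shows two refinements that are common in practice: calling super().__init__() so the base Exception also stores the message, and overriding __str__() so printing the exception directly produces a readable string. The class name ValidationError and its field argument are hypothetical, chosen only for illustration.

    # Hypothetical example extending the tutorial's pattern.
    class ValidationError(Exception):
        def __init__(self, message, field):
            super().__init__(message)   # keep the standard Exception behaviour
            self.message = message
            self.field = field

        def __str__(self):
            # Used automatically by print(err) and str(err)
            return f"{self.field}: {self.message}"

    try:
        raise ValidationError("value must be positive", field="age")
    except ValidationError as err:
        print(err)          # -> age: value must be positive
        print(err.field)    # -> age

Either style works with the try-except pattern shown above; overriding __str__() simply saves you from reaching into err.message every time.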
The use of information and communication technology is an integral part of the National Curriculum and is a key skill for everyday life. Computers, tablets, programmable robots, digital and video cameras are a few of the tools that can be used to acquire, organise, store, manipulate, interpret, communicate and present information. At Galley Common Infant School we recognise that pupils are entitled to quality hardware and software and a structured and progressive approach to the learning of the skills needed to enable them to use it effectively. The purpose of this policy is to state how the school intends to make this provision. We aim to:

· Provide a relevant, challenging and enjoyable curriculum for IT and computing for all pupils.
· Meet the requirements of the National Curriculum programmes of study for computing.
· Use IT and computing as a tool to enhance learning throughout the curriculum.
· Respond to new developments in technology.
· Equip pupils with the confidence and capability to use IT and computing throughout their later life.
· Enhance learning in other areas of the curriculum using IT and computing.
· Develop the understanding of how to use IT and computing safely and responsibly.

By the end of the Foundation Stage, children should recognise that a range of technology is used in places such as homes and schools, and they should be able to select and use technology for particular purposes. It is important in the Foundation Stage to give children a broad, play-based experience of IT in a range of contexts, including outdoor play. IT is not just about computers: early years learning environments should feature IT scenarios based on experience in the real world, such as in role play. Children gain confidence, control and language skills through opportunities to ‘paint’ on the whiteboard or programme a toy. Recording devices can support children to develop their communication skills; this is particularly useful with children who have English as an additional language.

The KS1 National Curriculum Programme of Study aims to ensure that pupils:
- understand what algorithms are; how they are implemented as programs on digital devices; and that programs execute by following precise and unambiguous instructions
- create and debug simple programs
- use logical reasoning to predict the behaviour of simple programs
- use technology purposefully to create, organise, store, manipulate and retrieve digital content
- recognise common uses of information technology beyond school
- use technology safely and respectfully, keeping personal information private; identify where to go for help and support when they have concerns about content or contact on the internet or other online technologies.
Peregrine Falcon is the common name. The scientific name is Falco peregrinus. The name means "wandering falcon." Peregrines are a species of the order Falconiformes, family Falconidae, which includes 39 species of falcons. The peregrine is one of six falcons found in the United States. The others are gyrfalcon, prairie falcon, merlin, American kestrel, and the aplomado falcon. The peregrine falcon is cosmopolitan, meaning that the species is found around the world, from the Arctic to South America. The subspecies found in the Eastern United States is anatum, and referred to as the American peregrine falcon. For more information about identification, life history and range of the peregrine falcon visit All About Birds - Cornell University
Squirrel Hibernation May Help Stroke Survivors

As winter approaches, squirrels collect nuts and bears gorge on food to gain weight. Animals of all sizes prepare for hibernation, in which they may sleep for up to seven months - something many of us would like to try. This process takes place without eating or drinking, and with greatly reduced breathing. How do animals hibernate, and what can squirrel hibernation teach us about new therapies for stroke patients?

Hibernation: The Long Nap

Winter in the wild isn't a warm fireplace surrounded by hot food and great company. Far from it; winters are harsh periods with little food and an unforgiving environment. But some animals have found a way to improve their chances of survival by skipping winter altogether! Hibernation is the process of passing a period of time inactive, with reduced body temperature and metabolism. True hibernation is observed in endotherms, also known as 'warm-blooded' animals. It is characterized by a suppression of metabolism to conserve energy, which results in decreased body temperature and inactivity. While cold-blooded animals like fish and reptiles may also experience periods of low activity, they cannot actively reduce their body temperature or metabolic rate. In the winter, many fish have to endure temperatures close to freezing, and hence their metabolism naturally decreases.

Smaller animals like squirrels tend to have a higher metabolic rate, which means hibernation benefits them more because they save more energy. A mouse's metabolic rate, for example, is around 20 times higher than that of a sheep1. It is important to note that a hibernating animal is extremely vulnerable; the benefits of hibernation therefore have to be great, and as a result it is rare to find larger animals hibernating. Many species of bears do it anyway, probably because few natural predators have evolved to attack them in their sleep. Unlike bears, squirrels do not have huge fat stores and have to store their food outside their body. Squirrel hibernation is preceded by a frantic search for food, hoarding it away to prepare for the long nap. During hibernation, their metabolism can drop to as little as 1% of their active rate. Hibernation periods vary from just a few days up to several months, depending on the area, temperature and species. Rodents like dormice can hibernate for around 6 months, sometimes even longer1.

More Complicated Than Just Going To Bed

The model animal for studying hibernation has got to be the alpine marmot, which can spend 9 months a year in this state. Its hibernation cycle consists of 4 different stages: entrance into hibernation, maintenance of deep hibernation, arousal, and a euthermic period. In alpine marmots, each cycle lasts around 15 days, and animals undergo 15 to 20 such cycles in a season. Entry into hibernation is characterized by a rapid decrease in metabolic rate, body temperature and heart rate. It is not fully understood how hibernation is triggered, but hibernating animals produce certain 'hibernation chemicals'. Research into these may provide medical benefits - in preserving organs for transplant, for example2. Next comes the maintenance of deep hibernation, which lasts several days. The heart rate decreases to a minimum while ventilation (breathing) also slows, leading to periods of apnea where breathing is 'paused'.
During this time, the alpine marmot's heart rate slows to five beats per minute and its breathing to 1-3 breaths per minute. Oxygen deficiency is not a problem for hibernators, which can take in oxygen through the trachea without actively breathing. Deep hibernation is terminated by the stage of arousal, in which the animal wakes from its torpor. This is followed by the euthermic period, lasting 1 to 2 days, in which the animal slowly warms up and returns its metabolism to normal levels. These stages require a lot of energy and time, and they also leave the animal vulnerable to predation. These four stages are similar in all studied hibernators; however, some animals experience this on a daily cycle, with the hibernation period around 12 hours a day. While still considered hibernation, this is known as daily torpor and is not restricted to the winter months. While hibernating saves around 90% of energy requirements, daily torpor saves 60 to 70%3.

Though the physiological mechanisms and biochemical reactions are only partly known, some theories have already been established. One of the main features of hibernation is the decrease of adenosine triphosphate (ATP) synthesis and the inhibition of activities that require ATP. This has been shown by reduced respiration in the mitochondria, the process that is responsible for the conversion of stored energy to ATP. Cellular processes like transcription and translation require large amounts of ATP to take place and are therefore inhibited to conserve energy3. Glycolysis (the metabolism of the sugar glucose) in the liver is reduced by inhibiting glycogen phosphorylase4.

Squirrel Hibernation and Brain Injuries

One of the most interesting properties of hibernation is the protection of organ function despite a lack of blood and oxygen supply. In studies of squirrels that had suffered head injuries, hibernating squirrels showed less brain damage than non-hibernating ones5. This discovery led to the idea that a hibernation-like state could be brought about for medical benefit. Hypothermia and hibernation are physiologically similar, both characterized by decreased oxygen consumption, glycolysis and activity. This can be exploited, as it reduces the body's need for oxygen and blood supply after an event such as a stroke. There are several techniques to reach a hypothermia-induced hibernation state, such as whole-body surface cooling with ice, or cooling the blood directly by infusing cold saline. Another option being researched is the use of phenothiazine-derived drugs. Many have already been approved by the FDA for the treatment of schizophrenia, such as chlorpromazine (Thorazine). As a side effect, some phenothiazine drugs are able to induce hypothermia, with or without external cooling methods. While their reliability for this purpose has not yet been examined in clinical trials, these drugs definitely show promise6.

In patients who have just suffered a stroke, speed is crucial to prevent damage to brain tissue. Because the brain requires a huge amount of oxygen, it is heavily reliant on a steady blood flow to keep its cells alive. A drug that induces 'hibernation' could be useful in preserving brain function, especially when immediate surgical treatment is not possible. Many stroke survivors find themselves permanently disabled even after medical intervention. By following in the (tiny) footsteps of squirrels, a hibernation-like state could provide a novel way to preserve brain tissue immediately after such an event.

References
- Lyman C. P. (1982). Hibernation and Torpor in Mammals and Birds. Chapter 1, 1-10.
- Bolling, S. F., & Tramontini, N. L. (1997). Use of "natural" hibernation induction triggers for myocardial protection. The Annals of Thoracic Surgery, 64(3), 623-627.
- Heldmaier G., Ortmann S., Elvert R. (2004). Natural hypometabolism during hibernation and daily torpor in mammals. Respiratory Physiology & Neurobiology, 141, 317-329.
- Storey K. B. (1987). Regulation of Liver Metabolism by Enzyme Phosphorylation during Mammalian Hibernation. The Journal of Biological Chemistry, 262(4), 1670-1673.
- Forreider B., Pozivilko D., Kawaji Q., Geng X., Ding Y. (2017). Hibernation-like neuroprotection in stroke by attenuating brain metabolic dysfunction. Progress in Neurobiology, 157, 174-187.
- Hopkin, D. B. (1956). Some Observations on the Use of the Phenothiazine Derivatives in Anæsthesia and Their Mode of Action, with Special Reference to Chlorpromazine. Canadian Medical Association Journal, 75(6), 473.
Making things invisible is a pretty neat trick. In 2006, a team of Duke University scientists bent rays of light around a copper ring (which was still visible, thanks to pesky visible light). Now researchers say they are getting close to bending visible light, too, but along the way they've uncovered a rather odd real-world application for the technology: protecting against the power of the sea. The model you see above is a prototype 10 centimeters across, representative of how the technology would work. Developed at the Fresnel Institute in Marseille, France, the pillars placed along the protective ring form a static maze of sorts that water won't fully penetrate. As water enters the concentric circles of pillars, it'll interact in such a way that the force drives the liquid around in a whirlpool-like motion, moving around the interior of the ring faster and faster — rather than through it. Water will be trapped inside and thrown out — mostly to the south — and will pass by whatever is in the center as if it wasn't there. Depending on the size of the barrier ring employed, a system such as this could protect anything from nature's wrath, such as offshore oil rigs. Larger areas needing protection, such as islands and coastlines, would take a far larger network — maybe even several artificial barrier islands employing the technology. CORRECTION: The water barrier rings do not actually spin, as previously published. Thanks, RampantGnome.
People have believed that the Earth is flat since the beginning of humanity, but the modern Flat Earth hypothesis stemmed from an experiment called the Bedford Level Experiment, conducted in the mid-1800s by a man named Samuel Rowbotham. Rowbotham, who wrote a book named Earth Not a Globe, started the modern movement by debating scientists publicly and accumulating followers. In the experiment, Rowbotham attempted to measure the curvature of the earth by observing whether the surface of a local river curved. He took his results as disproving the theory of a round earth, but later scientists have said that the results he obtained could be accounted for by the parallax effect.

10) Ship captains in navigating great distances at sea never need to factor the supposed curvature of the Earth into their calculations. Both Plane Sailing and Great Circle Sailing, the most popular navigation methods, use plane, not spherical trigonometry, making all mathematical calculations on the assumption that the Earth is perfectly flat. If the Earth were in fact a sphere, such an errant assumption would lead to constant glaring inaccuracies. Plane Sailing has worked perfectly fine in both theory and practice for thousands of years, however, and plane trigonometry has time and again proven more accurate than spherical trigonometry in determining distances across the oceans. If the Earth were truly a globe, then every line of latitude south of the equator would have to measure a gradually smaller and smaller circumference the farther South travelled. If, however, the Earth is an extended plane, then every line of latitude south of the equator should measure a gradually larger and larger circumference the farther South travelled. The fact that many captains navigating south of the equator assuming the globular theory have found themselves drastically out of reckoning, more so the farther South travelled, testifies to the fact that the Earth is not a ball.

Also, it kind of ties in with this dinosaur thing, with the flat earth and everything revolving around us. How did we get hit with some "meteor" to even wipe out the dinosaurs exactly? Please look into this, because I would love to see your perspective and how it ties into all these lies. Look into the false carbon dating and how completely inaccurate it is, and how the earth is nowhere near as old as they say it is. So many lies to be blown wide open. A fresh testing of any volcano will never test as 1 or even 10 years with carbon dating; it will always test as millions of years!

37.) If the Earth were a globe, there would, very likely, be (for nobody knows) six months day and six months night at the arctic and antarctic regions, as astronomers dare to assert there is: – for their theory demands it! But, as this fact – the six months day and six months night – is nowhere found but in the arctic regions, it agrees perfectly with everything else that we know about the Earth as a plane, and, whilst it overthrows the "accepted theory," it furnishes a striking proof that Earth is not a globe.

107) Ring magnets of the kind found in loudspeakers have a central North pole with the opposite "South" pole actually being all points along the outer circumference.
This perfectly demonstrates the magnetism of our flat Earth, whereas the alleged source of magnetism in the ball-Earth model is emitted from a hypothetical molten magnetic core in the center of the ball which they claim conveniently causes both poles to constantly move, thus evading independent verification at their two "ceremonial poles." In reality the deepest drilling operation in history, the Russian Kola Ultradeep, managed to get only 8 miles down, so the entire ball-Earth model taught in schools, showing crust, outer-mantle, inner-mantle, outer-core and inner-core layers, is purely speculation, as we have never penetrated beyond the crust.

In 1610, Galileo Galilei observed the moons of Jupiter rotating around it. He described them as small planets orbiting a larger planet—a description (and observation) that was very difficult for the church to accept, as it challenged a geocentric model where everything was supposed to revolve around the Earth. This observation also showed that the planets (Jupiter, Neptune, and later Venus) are all spherical, and all orbit the sun.

6.) If we stand on the sands of the sea-shore and watch a ship approach us, we shall find that she will apparently "rise" – to the extent of her own height, nothing more. If we stand upon an eminence, the same law operates still; and it is but the law of perspective, which causes objects, as they approach us, to appear to increase in size until we see them, close to us, the size they are in fact. That there is no other "rise" than the one spoken of is plain from the fact that, no matter how high we ascend above the level of the sea, the horizon rises on and still on as we rise, so that it is always on a level with the eye, though it be two-hundred miles away, as seen by Mr. J. Glaisher, of England, from Mr. Coxwell's balloon. So that a ship five miles away may be imagined to be "coming up" the imaginary downward curve of the Earth's surface, but if we merely ascend a hill such as Federal Hill, Baltimore, we may see twenty-five miles away, on a level with the eye – that is, twenty miles level distance beyond the ship that we vainly imagined to be "rounding the curve," and "coming up!" This is a plain proof that the Earth is not a globe.

97.) Mr. Hind, the English astronomer, says – "The simplicity, with which the seasons are explained by the revolution of the Earth in her orbit and the obliquity of the ecliptic, may certainly be adduced as a strong presumptive proof of the correctness" – of the Newtonian theory; "for on no other rational suppositions with respect to the relations of the Earth and Sun, can these and other as well-known phenomena, be accounted for." But, as true philosophy has no "suppositions" at all – and has nothing to do with "suppositions" – and the phenomena spoken of are thoroughly explained by facts, the "presumptive proof" falls to the ground, covered with the ridicule it deserves; and, in the dust of Mr. Hind's "rational suppositions," we see, standing before us, a proof that Earth is not a globe.

57.) The Newtonian hypothesis involves the necessity of the Sun, in the case of a lunar eclipse, being on the opposite side of a globular earth, to cast its shadow on the Moon: but, since eclipses of the Moon have taken place with both the Sun and the Moon above the horizon, it follows that it cannot be the shadow of the Earth that eclipses the Moon; that the theory is a blunder; and that it is nothing less than a proof that the Earth is not a globe. In Mr.
Proctor's "Lessons in Astronomy," page 15, a ship is represented as sailing away from the observer, and it is given in five positions or distances away on its journey. Now, in its first position, its mast appears above the horizon, and, consequently, higher than the observer's line of vision. But, in its second and third positions, representing the ship as further and further away, it is drawn higher and still higher up above the line of the horizon! Now, it is utterly impossible for a ship to sail away from an observer, under the conditions indicated, and to appear as given in the picture. Consequently, the picture is a misrepresentation, a fraud, and a disgrace. A ship starting to sail away from an observer with her masts above his line of sight would appear, indisputably, to go down and still lower down towards the horizon line, and could not possibly appear - to anyone with his vision undistorted - as going in any other direction, curved or straight. Since, then, the design of the astronomer-artist is to show the Earth to be a globe, and the points in the picture, which would only prove the Earth to be cylindrical if true, are NOT true, it follows that the astronomer-artist fails to prove, pictorially, either that the Earth is a globe or a cylinder, and that we have, therefore, a reasonable proof that the Earth is not a globe.

On January 25th, 2016, Atlanta rapper B.o.B., who has self-identified as a member of the Flat Earth Society, tweeted a photograph of himself against a skyline, then tweeted a screenshot from Flat Earth Movement literature that proclaimed that Polaris (the North Star) can be seen 20° south of the Equator. Neil DeGrasse Tyson answered the rapper's question, writing "Polaris is gone by 1.5 deg S. Latitude. You've never been south of Earth's Equator, or if so, you've never looked up."

177) In the documentary "A Funny Thing Happened on the Way to the Moon," you can watch official leaked NASA footage showing Apollo 11 astronauts Buzz Aldrin, Neil Armstrong and Michael Collins, for almost an hour, using transparencies and camera-tricks to fake shots of a round Earth! They communicate over audio with control in Houston about how to accurately stage the shot, and someone keeps prompting them on how to effectively manipulate the camera to achieve the desired effect. First, they blacked out all the windows except for a downward-facing circular one, which they aimed the camera towards from several feet away. This created the illusion of a ball-shaped Earth surrounded by the blackness of space, when in fact it was simply a round window in their dark cabin. Neil Armstrong claimed at this point to be 130,000 miles from Earth, half-way to the Moon, but when the camera-tricks were finished the viewer could see for themselves that the astro-nots were not more than a couple dozen miles above the Earth's surface, likely flying in a high-altitude plane!

50.) We read in the inspired book, or collection of books, called THE BIBLE, nothing at all about the Earth being a globe or a planet, from beginning to end, but hundreds of allusions there are in its pages which could not be made if the Earth were a globe, and which are, therefore, said by the astronomer to be absurd and contrary to what he knows to be true! This is the groundwork of modern infidelity.
But, since every one of many, many allusions to the Earth and the heavenly bodies in the Scriptures can be demonstrated to be absolutely true to nature, and we read of the Earth being "stretched out" "above the waters," as "standing in the water and out of the water," of its being "established that it cannot be moved," we have a store from which to take all the proofs we need, but we will just put down one proof – the Scriptural proof – that Earth is not a globe.

106) The so-called "South Pole" is simply an arbitrary point along the Antarctic ice marked with a red and white barbershop pole topped with a metal ball-Earth. This ceremonial South Pole is admittedly and provably NOT the actual South Pole, however, because the actual South Pole could be demonstrably confirmed with the aid of a compass showing North to be 360 degrees around the observer. Since this feat has never been achieved, the model remains pure theory, along with the establishment's excuse that the geomagnetic poles supposedly constantly move around, making verification of their claims impossible.

94.) In "Cornell's Geography" there is an "Illustrated proof of the Form of the Earth." A curved line, on which is represented a ship in four positions as she sails away from an observer, is an arc of 72 degrees, or one-fifth of the supposed circumference of the "globe" – about 5,000 miles. Ten such ships as those which are given in the picture would reach the full length of the "arc," making 500 miles as the length of the ship. The man in the picture, who is watching the ship as she sails away, is about 200 miles high; and the tower, from which he takes an elevated view, at least 600 miles high. These are the proportions, then, of men, towers, and ships which are necessary in order to see a ship, in her different positions, as she "rounds the curve" of the "great hill of water" over which she is supposed to be sailing: for, it must be remembered that this supposed "proof" depends upon lines and angles of vision which, if enlarged, would still retain their characteristics. Now, since ships are not built 500 miles long, with masts in proportion, and men are not quite 200 miles high, it is not what it is said to be – a proof of rotundity – but, either an ignorant farce or a cruel piece of deception. In short, it is a proof that the Earth is not a globe.

20.) The common sense of man tells him – if nothing else told him – that there is an "up" and a "down" in nature, even as regards the heavens and the earth; but the theory of modern astronomers necessitates the conclusion that there is not: therefore, the theory of the astronomers is opposed to common sense – yes, and to inspiration – and this is a common sense proof that the Earth is not a globe.

The only explanation which has been given of this phenomenon is the refraction caused by the earth's atmosphere. This, at first sight, is a plausible and fairly satisfactory solution; but on carefully examining the subject, it is found to be utterly inadequate; and those who have recourse to it cannot be aware that the refraction of an object and that of a shadow are in opposite directions. An object by refraction is bent upwards; but the shadow of any object is bent downwards, as will be seen by the following very simple experiment. Take a plain white shallow basin, and place it ten or twelve inches from a light in such a position that the shadow of the edge of the basin touches the centre of the bottom.
Hold a rod vertically over and on the edge of the shadow, to denote its true position. Now let water be gradually poured into the basin, and the shadow will be seen to recede or shorten inwards and downwards; but if a rod or a spoon is allowed to rest, with its upper end towards the light, and the lower end in the bottom of the vessel, it will be seen, as the water is poured in, to bend upwards – thus proving that if refraction operated at all, it would do so by elevating the moon above its true position, and throwing the earth's shadow downwards, or directly away from the moon's surface. Hence it is clear that a lunar eclipse by a shadow of the earth is an utter impossibility.

70.) Mr. Lockyer, in describing his picture of the supposed proof of the Earth's rotundity by means of ships rounding a "hill of water," uses these words: – "Diagram showing how, when we suppose the earth is round, we explain how it is that ships at sea appear as they do." This is utterly unworthy of the name of Science! A science that begins by supposing, and ends by explaining the supposition, is, from beginning to end, a mere farce. The men who can do nothing better than amuse themselves in this way must be denounced as dreamers only, and their leading dogma a delusion. This is a proof that the Earth is not a globe.

If the earth is flat, we are held hostage to one of the biggest hoaxes of the last 500 years of human existence. Every day I see evidence of the flat earth. Just look at how, at times, the clouds are not moving at all, completely still. If the earth were spinning at 1,600 km per hour, how could it be that at times the clouds are completely still, not moving in the sky at all? (How could the clouds be moving at the same speed as the rotating earth?) When you look at a long, distant horizon, from right to left, there is no curvature in the earth whatsoever. If you try to point this out to people, they will think you have lost your mind. We have been brainwashed to believe a lie, and it is very difficult to prove to the world populace that the earth is flat. The deception is wrapped in scientific mumbo jumbo which is very easy to accept, as it is backed up by a lot of scientific explanation; if you want to take the easy way out, you just accept that the earth is a globe spinning at 1,600 km per hour. But how does all the water of the seas stay stuck to the planet when the earth is spinning around at 1,600 km per hour? (Gravity cannot hold water upside down; water will flow to the lowest point. How can that happen on a spinning globe?)

64) Quoting "Earth Not a Globe!" by Samuel Rowbotham, "It is known that the horizon at sea, whatever distance it may extend to the right and left of the observer on land, always appears as a straight line. The following experiment has been tried in various parts of the country. At Brighton, on a rising ground near the race course, two poles were fixed in the earth six yards apart, and directly opposite the sea. Between these poles a line was tightly stretched parallel to the horizon. From the center of the line the view embraced not less than 20 miles on each side making a distance of 40 miles. A vessel was observed sailing directly westwards; the line cut the rigging a little above the bulwarks, which it did for several hours or until the vessel had sailed the whole distance of 40 miles.
The ship coming into view from the east would have to ascend an inclined plane for 20 miles until it arrived at the center of the arc, whence it would have to descend for the same distance. The square of 20 miles multiplied by 8 inches gives 266 feet as the amount the vessel would be below the line at the beginning and at the end of the 40 miles.”
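The drop figure quoted above follows the common "8 inches per mile squared" rule of thumb. A minimal sketch of that arithmetic is below; it is an approximation only (it ignores observer height and atmospheric refraction), and the function name is purely illustrative.

```python
# Minimal sketch of the "8 inches per mile squared" drop approximation used in the
# passage above: drop_inches = 8 * miles^2, a rough small-angle estimate of how far
# a sphere of Earth's radius falls away from a tangent line.

def drop_feet(distance_miles: float) -> float:
    """Approximate drop (in feet) below a tangent line after `distance_miles`."""
    drop_inches = 8 * distance_miles ** 2
    return drop_inches / 12  # convert inches to feet

if __name__ == "__main__":
    # The passage's example: 20 miles from the midpoint of the stretched line.
    print(round(drop_feet(20), 1))  # 266.7 feet; the passage rounds this to 266
```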
Whether you watch the news, read the paper or browse social media, the chances are you will have come across the PCR test, the method used to determine whether or not someone is infected with the COVID-19 virus. But whilst journalists are quick to explain how the test is carried out (nasal swabs, blood tests, etc.), the science behind it is rarely explained. PCR is currently the front-line laboratory assay for the screening and diagnosis of COVID-19, in conjunction with clinical symptoms and imaging (CT scan). Serological testing, or antibody testing, can be used to identify whether people have been exposed to the virus.

1. Molecular Testing: PCR

PCR stands for Polymerase Chain Reaction and, simply put, means "molecular photocopying". Basically, when a PCR test is performed, the patient's cells are screened and, if the viral RNA is present, it is massively photocopied (amplified) so that a detectable quantity of viral genetic material can be seen on the imaging apparatus. If no virus is present, the photocopying comes up blank and the test comes back negative. For this photocopying to be successful, however, the "photocopying machine" needs to be able to detect the virus's RNA present in the cells, and that is why the efficiency of PCR testing depends on the sampling method used. Data have shown that PCR tests based on cells present in human sputum (thick mucus) are 72% accurate, whereas those performed on cells derived from nasal swabs are only 63% accurate, meaning that in 37% of cases patients are coronavirus-positive and yet appear negative. Similarly, PCR tests performed on cells derived from bronchoalveolar lavage fluid are 93% reliable, whereas tests derived from a fibre-bronchoscope brush biopsy, pharyngeal swabs, feces, blood and urine are reliable only up to 46%, 32%, 29%, 1% and 0%, respectively. This can be explained by the virus's life cycle and therefore by the viral load present in the cells at the time patients get tested. Most frequently, patients are tested when the symptoms they exhibit could indicate a variety of different diagnoses, and therefore a test is performed to confirm or rule out that they are coronavirus-positive. At this stage in the life cycle of the virus, little of it is present in the pharynx, the feces, the blood or the urine. Similarly, whilst measuring the temperature of people upon entry into a country is an efficient method to screen for patients who are already sick, many patients will not exhibit fever at all and others will be asymptomatic carriers.

2. Serological Testing

The additional question for the medical community and the public is the level of immunization of an individual, meaning whether he or she has faced the disease, activated the immune system and produced antibodies. This could mean that the person has had the infection or, later on, once a vaccine becomes available, that he or she has been immunized through vaccination. These tests make it possible to measure who has been infected and who is potentially immune to the virus. The production of antibodies (IgM and IgG) starts within a week after the beginning of the disease, and IgG will remain positive for several months after the disease. The assays for measuring the presence of antibodies are called serology tests. Serology tests are performed on blood samples using different kinds of assays: lateral flow assays (rapid tests, but not quantitative), semi-automated immunoassays (ELISA) or fully automated quantitative immunoassays with chemiluminescence detection (CLIA).
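To make the "molecular photocopying" idea in section 1 concrete, here is a minimal sketch assuming an idealized reaction in which each PCR cycle roughly doubles the number of copies of the target sequence, plus the false-negative arithmetic implied by the quoted sensitivity figures. The function names and numbers are illustrative, not tied to any particular assay or laboratory protocol.

```python
# Illustrative sketch only: idealized PCR amplification and false-negative arithmetic.

def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Approximate copy number after `cycles` of PCR (idealized doubling model)."""
    return initial_copies * (1 + efficiency) ** cycles

def expected_false_negatives(n_infected: int, sensitivity: float) -> float:
    """Expected number of truly infected patients reported as negative."""
    return n_infected * (1 - sensitivity)

if __name__ == "__main__":
    # Even a handful of RNA copies becomes billions after ~30 cycles.
    print(f"{pcr_copies(10, 30):,.0f} copies after 30 cycles")
    # A 63%-sensitive nasal-swab test misses roughly 37 of every 100 infected patients.
    print(f"{expected_false_negatives(100, 0.63):.0f} of 100 infected patients missed")
```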
Testing will play a crucial role in stopping the spread of Covid-19 and needs to be deployed across the global healthcare ecosystem as soon as possible. Singapore, South Korea and Germany have demonstrated a higher testing capability and consequently better management of the pandemic. Only once we have a clear idea of who is vulnerable and who isn't can we effectively start stopping the spread of the virus. Now, no matter what, stay safe, be prepared, and stay connected for more articles from Absolutys Institute. About Absolutys: Absolutys is a healthcare platform that brings experts, technologies and solutions together in one place. Absolutys helps companies and institutions to build innovative products, hire healthcare experts or develop complex projects.
There are three structural classifications of joints: fibrous, cartilaginous, and synovial.
- Describe the three structural categories of joints
- The type and characteristics of a given joint determine the degree and type of movement.
- Structural classification categorizes joints based on the type of tissue involved in their formation.
- There are three structural classifications of joints: fibrous, cartilaginous, and synovial.
- Of the three types of fibrous joints, syndesmoses are the most movable.
- Cartilaginous joints allow more movement than fibrous joints but less than synovial joints.
- Synovial joints (diarthroses) are the most movable joints of the body and contain synovial fluid.
- periosteum: A membrane that covers the outer surface of all bones.
- manubrium: The broad upper part of the sternum.
- synovial fluid: A viscous fluid found in the cavities of synovial joints that reduces friction between the articular cartilage during movement.
A joint, also known as an articulation or articular surface, is a connection that occurs between bones in the skeletal system. Joints provide the means for movement. The type and characteristics of a given joint determine its degree and type of movement. Joints can be classified based on structure and function. Structural classification of joints categorizes them based on the type of tissue involved in their formation. There are three structural classifications of joints: fibrous, cartilaginous, and synovial.
Fibrous joints are connected by dense, tough connective tissue that is rich in collagen fibers. These fixed or immovable joints are typically interlocked with irregular edges. There are three types of fibrous joints. Sutures are the type of joint found in the cranium (skull). The bones are connected by Sharpey's fibres. The nature of cranial sutures allows for some movement in the fetus. However, they become mostly immovable as the individual ages, although very slight movement allows some necessary cranial elasticity. These rigid joints are referred to as synarthrodial. Syndesmoses are found between long bones of the body, such as the radio-ulnar and tibio-fibular joints. These movable fibrous joints are also termed amphiarthrodial. They have a lesser range of movement than synovial joints. Gomphosis is a type of joint found at the articulation between teeth and the sockets of the maxilla or mandible (dental-alveolar joint). The fibrous tissue that connects the tooth and socket is called the periodontal ligament.
Fibrous joints: Image demonstrating the three types of fibrous joints. (a) Sutures (b) Syndesmosis (c) Gomphosis.
Cartilaginous joints are connected by fibrocartilage or hyaline cartilage. They allow more movement than fibrous joints but less than that of synovial joints. These types of joints are further subdivided into primary (synchondroses) and secondary (symphyses) cartilaginous joints. The epiphyseal (growth) plates are examples of synchondroses. Symphyses are found between the manubrium and sternum (manubriosternal joint), intervertebral discs, and the pubic symphysis.
Cartilaginous joints: Image demonstrates a synchondrosis joint with epiphyseal plate (temporary hyaline cartilage joint) indicated (a) and a symphysis joint (b).
Synovial joint: This diagram of a synovial joint delineates the articular cartilage, articular capsule, bone, synovial membrane, and joint cavity containing synovial fluid.
This is the most common and most movable joint type in the body. These joints (also called diarthroses) have a synovial cavity.
Their bones are connected by dense irregular connective tissue that forms an articular capsule surrounding the bones’ articulating surfaces. A synovial joint connects bones with a fibrous joint capsule that is continuous with the bones’ periosteum. This joint capsule constitutes the outer boundary of a synovial cavity and surrounds the bones’ articulating surfaces. Synovial cavities are filled with synovial fluid. The knees and elbows are examples of synovial joints.
Tropical fruits and vegetables are widely available in many large cities. Tropical fruits are usually grown in warm climates around the equator, on plants of all habitats. They require tropical or subtropical weather to grow and cannot tolerate frost, so most well-known tropical fruits are exported from tropical countries. There are many different types of tropical fruits across the world. The famous ones are exported, while some fruits are still only grown and marketed locally. Bananas, mangoes, papayas, pineapples, coconut, guava, dragon fruit, and avocados are the best-known tropical fruits.
- Custard apple
- Dragon Fruit
- Jack Fruit
- Star apple
- Star Fruit
- Water apple
Bananas come in a wide variety of shapes, flavors, skin colors, and sizes, from as short as 3 inches to as long as 18 inches. Their flavor is sweet and mild to slightly acidic. Bananas are generally of two types: sweet or dessert bananas, which are usually eaten raw, and a starchier type called plantains, which are always cooked. Domesticated bananas have high food value, with vitamins (B and C) and minerals (potassium). Dessert bananas are usually eaten raw but can also be used in bread or cakes. The skin of an unripe banana is generally green; it becomes yellow as it ripens. As it becomes over-ripe, it begins to show brown spots.
The coconut palm is an amazingly versatile plant. People use it for food, oil, fiber or wood. The uses of the coconut fruit depend on its age. Immature coconuts contain slightly sweet, effervescent water, which is a satisfying drink on a hot day. Coconut water is safe to drink because a freshly opened coconut is sterile inside; even served at room temperature, it always seems cool. The flesh of mature coconuts is added to cooked dishes for its flavor and texture, and cooking oils and cosmetics are made from the mature flesh, which is called copra. Mature coconuts should be at least one-third full of juice. Coconut is quite tasty and can be eaten raw.
Custard apple is grown around the world, but it is native to tropical America and was introduced into tropical Asia centuries ago. Custard apple comes in many varieties, all of which share the same distinctive appearance. The exterior of the fruit is covered with overlapping fleshy green segments. Inside the fruit are white segments of flesh, which are sweet and slightly acidic, and each segment contains a shiny black seed. Custard apples are juicy and make an appealing fruit drink. The taste is a little like banana and pineapple combined.
Dragon fruits are a group of closely related cactus plants grown in tropical lowlands; the fruit is also called pitaya. Dragon fruit is available in dozens of commercial varieties. The plants will produce fruit year-round in the right climate; however, during the rainy season yields may be reduced (and prices higher). Dragon fruit is ready to pick two to five days after the skin begins to turn from green to red. The skin of dragon fruit can be yellow, pink or red. The thin rind encloses a large mass of sweetly flavored pulp and small black seeds. The fruit is juicy, with subtle fruity flavors, and is usually eaten chilled.
Duku and langsat are closely related members of the same botanical family (Meliaceae) and are very similar fruits. Both have a thin, leathery skin, grow in clusters, and are larger than a golf ball.
Duku fruit is of a golden brown color; the langsat is more of a cream color. The langsat's skin contains a sticky latex, which is annoying but not harmful. Duku does not have this latex sap, and for that reason duku is considered the superior fruit. The interior of each fruit contains five segments, similar to a small grapefruit; the flesh is sweet and juicy and tastes like a very mild, sweet grapefruit. Some segments contain small, bitter seeds.
A unique fruit, durian is unlike anything else you have ever eaten. Durian trees are very tall, and the fruits are quite heavy, with hard, sharp spikes all around their surface, and about the size of an unhusked coconut. Durian is a native of Southeast Asia and is highly appreciated for its distinctive flavor. It is the topic of a wonderful diversity of local legends and folklore: in some places it is thought to be an aphrodisiac, in others unsafe for your health. Each fruit is divided lengthwise into five segments, and each segment contains two or three pieces of soft, creamy flesh. Durians are usually eaten fresh, but in the Philippines and Viet Nam durian ice cream is very popular.
Jackfruit is huge; some specimens can weigh over 100 pounds, though most are smaller. The fruit consists of a white central core surrounded by yellow fruit sections, each containing a light brown seed. Jackfruits grow on very short stalks and appear to grow directly from the trunk of the tree. The fruit is usually sectioned in the market, with the individual pieces of fruit sold in plastic bags. The flesh of jackfruit has a crisp texture, with a strong, sweet flavor. It makes an excellent dessert, so people eat it after dinner. Its seeds can be eaten after cooking, either boiled in salted water or roasted like a chestnut.
The large lime, the kaffir lime, and the kalamansi lime are the three types of limes commonly found in Southeast Asia. The large lime is almost round, with a thin green skin that turns yellow when ripe. It is used for its abundant juice, or added to food to provide acidity and taste. The kaffir lime has a rough, dimpled green skin that makes it look a bit like a golf ball. Its flesh is too sour to eat and it has little or no juice. The grated skin and leaves can be used to flavor cooked dishes; in Bali and Thailand, the leaves are widely used for seasoning. The kalamansi lime is native to the Philippines; it is very small, round, thin-skinned, and green in color, turning yellow as it ripens. The flesh is very juicy and the juice is quite flavorful.
Mangoes are available in dozens of varieties throughout the tropics. They are eaten fresh and can be pickled. The mango has sap in the leaves, stem, and other parts, which can be allergenic, causing skin rashes. Mangoes come in many sizes, shapes, and colors. All mangoes have a large, flat seed inside, and mangoes can be eaten green, generally in salads or as pickles or chutney. Ripe mangoes are generally eaten fresh, either alone or mixed with other fruits, and can be used in pies, crepes, jams, ice cream, and so on. Fibers are attached to the seed, which show when the mango is sliced.
Mangosteen is very difficult to grow, and it bruises easily if handled roughly. Its color is dark purple and it has a thick, rigid skin. It contains six or seven segments of juicy white flesh, each containing a large dark seed. The flesh is sweet and slightly tart.
The skin of the mangosteen can be opened by slight pressure on each side, near the stem. The taste of this fruit is very fine, and it is served as fresh fruit.
Throughout Southeast Asia, a wide variety of melons is available, including honeydew and casaba melons, cantaloupe, and others. Melons are often simply sliced and eaten fresh as a dessert or snack. Melon can also be made into juices or mixed with other foods to make a number of desserts. Watermelon is a large, juicy, round fruit that grows on various types of plant that trail along the ground. Melons are a fruit that can easily be improved and crossed, and as a result there are many different tasty sorts. Keep them at room temperature or in the sun, depending on when you want to eat them. If they are ripe, it is best to store them in the refrigerator and serve the melon cold.
Papayas are large fruits, up to about 14-16 inches long when mature. The skin is green, turning yellow or orange as the fruit ripens. The ripe flesh is a dark pink color and is eaten raw, and the black seeds are edible. Unripe papaya is also widely eaten in salads or cooked in soups. Papayas contain papain, an enzyme that is used as a meat tenderizer. Papaya trees are either male or female, but only the females bear fruit; there must be a male tree nearby to provide pollen for fertilization. The female trees bear their short flowers directly on the trunk of the tree, while the males grow their flowers on long stems.
Fresh pineapple contains the enzyme bromelain, which breaks down protein, so like fresh papaya it can be used as a meat tenderizer. Fresh pineapple should not be added to cottage cheese or yogurt until just before serving, or it will digest the milk proteins. Cooking deactivates the bromelain enzyme. The top of the pineapple can be transplanted to produce a new plant. The very young fruit is bright red, as are the surrounding leaves. Pineapples are ready to eat when they have turned about two-thirds yellow. Fresh pineapple should be used immediately, either eaten raw, or baked, grilled, sauteed or stir-fried.
Rambutan is similar to the lychee fruit. The skin of the rambutan is fairly thick, coated with soft "hairs", and comes in red or golden colored varieties. There is no sap in its skin. The flesh is sweet and white, surrounds an oval seed, and may cling to the seed in some cases. It is a good source of vitamin C and is eaten fresh. To release the fruit, twist the skin with both hands. Rambutan is not so juicy as to drip when eaten, and it will not make a mess of your hands.
The salak is a native of Indonesia, named for its scaly brown skin; it is also grown in Thailand and Malaysia. Salak grows at the base of a short palm tree. It has a thin but strong skin that is easily peeled. Its flesh consists of three or four segments, which are quite dry, crunchy, and tangy due to the high tannin content. The flavor of salak is quite distinctive and depends on the cultivar: some are semi-sweet, dry and crunchy, while others are slightly juicy, soft and acidic. The taste of salak is different and unusual compared with other common fruits. If you put salak in an enclosed room, you can smell its sourish aroma.
Reduced heat exposure and mortality by limiting global warming to 1.5 °C
Photo: The 2003 heat event led to an excess of 15,000 deaths in France, of which 735 were in Paris.
The Paris Agreement makes a distinction between 1.5 °C and 2 °C global warming (above pre-industrial conditions). Warming should be limited to 2 °C, but a limit at 1.5 °C is strongly preferred to avoid severe impacts of climate change. A recent study shows that the proportion of the population exposed to hot summers above the current record increases dramatically from 1.5 °C to 2 °C global warming. This study looked at the population exposure to summer heat extremes in Europe in the period 1950-2017. With this reference period in mind, the authors quantified the exposure of the European population to historically unprecedented heat extremes in a 1.5 °C and a 2 °C warmer world. Clearly, unprecedented hot summers are more likely in a 2 °C world than in a 1.5 °C world. In parts of southern and eastern Europe the difference in likelihood is a factor of two.
Strong increase in exposure
In the current climate in Europe, on average more than 45 million people are exposed to summer temperatures above the observed record. This number would increase to 90 million people in a 1.5 °C warmer world, and to 163 million people in a 2 °C warmer world. These numbers refer to 11% (1.5 °C) and 20% (2 °C) of the continent's population, respectively. The chance of having a summer with such widespread heat that at least 400 million people (or almost 50% of the continental population) experience a summer temperature exceeding the historical record is negligible in the current climate. In contrast, in a 1.5 °C warmer world such an event would occur on average once in 18 years, and in a 2 °C world once every seven years.
Strong increase in mortality
In another study, the likelihood of a heat-mortality event similar to the one of 2003 was estimated in a 1.5 °C and a 2 °C warmer world for the cities of London and Paris. According to this study, stabilizing global warming at 1.5 °C rather than 2 °C would make a 2003 heat-mortality event 2.4 times less likely in London, and 1.6 times less likely in Paris.
- King et al., 2018. Nature Climate Change 8: 549-551.
- Mitchell et al., 2018. Nature Climate Change 8: 551-553.
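For readers unfamiliar with return periods, the "once in 18 years" and "once every seven years" figures quoted above can be converted into rough annual probabilities. The sketch below assumes, as a simplification, that the event has a fixed and independent probability each year; it is not part of either cited study.

```python
# Minimal sketch of the return-period arithmetic quoted above, assuming a fixed,
# independent annual probability of the widespread-heat event.

def annual_probability(return_period_years: float) -> float:
    """Approximate chance of the event occurring in any given year."""
    return 1.0 / return_period_years

p_15 = annual_probability(18)  # 1.5 degree C world: once in 18 years on average
p_20 = annual_probability(7)   # 2 degree C world: once every seven years on average

print(f"1.5 C world: {p_15:.1%} chance per year")
print(f"2.0 C world: {p_20:.1%} chance per year")
print(f"Roughly {p_20 / p_15:.1f}x more likely at 2 C than at 1.5 C")  # ~2.6x
```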
The term Renaissance may be described as "the effort by artists, beginning in the 1400s, to understand and then make use of the ideas and artistic works of the early Romans and Greeks." It represented a sudden shift in viewpoint and perspective. The term Art, on the other hand, refers to "the expression of the human mind." Perspective shaped Renaissance art in a number of ways. First, as far as architecture is concerned, when building places of worship for the Christians, the constructors did not use wooden or metallic bars with a cross appearance; these bars were replaced with circular ones, because they believed that the circular bars symbolized the perfection of the creator of the universe. Second, as far as painting is concerned, the painters were more creative in the sense that they painted men's and women's bodies in a manner they believed was more realistic than that of the ancient Greeks and Romans. The backgrounds of these paintings were typified by objects of the natural world. In sum, the painters of this era rendered their works with more realism, and crafted them with great care and precision. Third, as far as music is concerned, music experts of this period began studying proportions, a topic that had previously been written about by the Greek mathematician Pythagoras. From this topic, Renaissance music experts were able to describe the technique of producing diverse sounds on stringed musical instruments. These experts also studied and came to understand Greek drama. As a consequence, they were able to make their music reflect the lyrics of their songs. The drama composed by the Greek musicians made most viewers sad; for this reason, musicians of the Renaissance re-composed it to make it more engaging. These efforts gave rise to opera.
Farina Tritici.—Wheaten Flour. The sifted flour of the grain of Triticum sativum, Lamarck (Triticum vulgare, Villars). COMMON NAMES: Common flour, Wheat flour. ILLUSTRATION (of plant): Bentley and Trimen, Med. Plants, 294. Botanical Source.—This plant, the common winter wheat, described as Triticum sativum, var. hybernum, has a fibrous root, and a round, smooth, straight stem, 3 to 4 feet or more in height, the internodes being somewhat inflated. The leaves are lance-linear, veined, roughish above, with truncate and bristly stipules. The flowers are borne on a 4-cornered, imbricated, terminal spike, 2 or 3 inches in length, with a tough rachis. The spikelets are crowded, broad-ovate, about 4-flowered; the glumes ventricose, ovate, truncate, mucronate, compressed below the apex, round and convex at the back, with a prominent nervure. The paleae of the upper florets are somewhat bearded. The grains are loose (W.—L.—Wi.). History.—Several species of Triticum are cultivated in different countries, among which may be named the Triticum sativum (Triticum vulgare), the species most generally raised in this country and Europe. It has two varieties, Triticum aestivum, or spring wheat, and Triticum hybernum, or winter wheat. Linnaeus considered these as distinct species, but botanists of the present day generally refer them to one common stock. Barley and oats have the perianth attached to the grain, which is not the case with wheat. Wheat is supposed to be a native of Central Asia, in the country of the Baschkirs. The medicinal part is the seeds, deprived of their husk, and ground to a fine flour. Wheat is subject to ravages from several parasitical fungi, viz.: (1) Bunt, smut-balls, or pepper-brand, produced by Uredo Caries, and giving a disgusting odor to the flour. This fungous plant is also called Tilletia Caries, and infests corn grains and other grasses. (2) Smut, dust-brand, or burnt-ear, produced by Ustilago segetum (Uredo segetum, Ustilago carbo). (3) Rust, red-rag, red-robin, or red-gum, caused by the young state of Puccinia graminis. (4) Mildew, produced by the more advanced growth of P. graminis. (5) Ergot, caused by the Claviceps purpurea, and which is as powerful in its action on the uterus as ergot of rye. Two diseases of wheat are produced by parasitical animalcules, viz.: (1) Ear-cockle, purples, or peppercorn, caused by a microscopic, eel-shaped, animalcule, called Vibrio tritici, or Anguillula tritici. (2) Wheat midge, an abortion of the grains caused by a minute, 2-winged fly called Cecidomyia tritici (P.). Description and Chemical Composition.—Flour is the finely sifted, amylaceous constituent of the wheat; the broken, integumentary structures constitute bran. Good wheat flour is very white, has a faint, peculiar odor, and is nearly tasteless. One hundred parts of air-dry wheat contain, on an average, 13.37 per cent moisture, 12.04 per cent gluten and other nitrogenous matter, 1.91 per cent fatty matter, 69.07 per cent starch, gum, dextrin, and sugar, 1.9 per cent crude fiber, and 1.71 per cent ash. These figures, recorded by J. König (Nahrungs- und Genussmittel, 3d ed., 1893), represent the average of 1358 analyses of wheat from all parts of the globe. The ash consists of silica, and phosphates of potassium, calcium, magnesium, and sodium, these bases also occurring as sulphates and chlorides. 
According to the same authority, the composition of wheat-bran is subject to great variation, and the average of 166 analyses is as follows: 15.66 per cent moisture, 14.61 per cent nitrogenous matter, 3.9 per cent fat, 53.6 nitrogen-free extractive matter (starch), etc., 6.7 per cent crude fiber, and 4.94 per cent ash. Richardson and Crampton (1886) found allantoin ½ per cent, a quickly drying oil, wax, cane sugar, and a sugar possessing strongly dextrogyre properties. The proportion of these constituents in wheat grains varies according to climate, soil, mode of culture, quality of manure, time of cutting, etc. The starch, which constitutes at least one-half of wheat grains is of finer quality, and of greater density than that from most other sources (see Amylum). The nitrogenous or protein matter of wheat consists of small amounts of vegetable albumin (about 1.6 per cent), and predominant amounts of the proteids of gluten, viz.: Gluten casein (Liebig's vegetable fibrin), insoluble in alcohol, and gluten fibrin, gliadin (glutin or vegetable gelatin) and mucedin, the latter three being soluble in alcohol of about 80 per cent strength (Ritthausen). The gluten of wheat is usually assumed as the most perfect form of that principle, and is more abundant in wheat than any other grain, rendering wheat flour superior in the manufacture of bread. It is through the presence of gluten that flour can be made into bread. The added quantity of yeast causes vinous fermentation, with evolution of carbonic acid gas, which expands the gluten into vesicles, and gives to the baked bread its spongy character. If wheat flour be kneaded into a paste with a little water, it forms a tenacious, elastic, soft, ductile mass. This is to be washed cautiously, by kneading it under a small stream of water till the water no longer carries off any starch, and runs off colorless; gluten remains. It is of a gray color, exceedingly tenacious, ductile, and elastic, has a peculiar smell, and is nearly tasteless. On exposure to the air it slowly dries, forming a hard, brittle, slightly transparent, dark-brown substance, resembling glue, which breaks like glass, with a vitreous fracture, imbibes water, but loses its tenacity and elasticity by boiling. It decomposes rapidly in a moist atmosphere, emitting a very offensive odor. Gluten casein in its reactions exhibits some resemblance to the casein of milk. It is insoluble in pure water, soluble in alkaline water, and precipitated from this solution by acids. It is separated from the gluten of the wheat by treatment with successive portions of alcohol of definite strength, in which it is insoluble. Gluten fibrin, mucedin, and gliadin, the constituents of gluten proper, are soluble in alcohol of 60 to 80 per cent, their separation being effected by their difference in solubility in water. Gliadin or vegetable gelatin is the constituent that imparts to the gluten its cohesive qualities. As regards the wheat flour of the markets, we prefer flour ground in the stone mill, bolted in the old style, rather than the white starch flour known as patent process flour. A more recent view of the composition of gluten is that adopted by Osborne as recorded by Dr. H. W. Wiley (Principles and Practice of Agricultural Analysis, 1897, Vol. III, p. 436), from which the following is an abstract: "The gluten of wheat is composed of two proteid bodies, gliadin and glutenin. Gliadin contains 17.66 per cent, and glutenin 17.49 per cent of nitrogen. 
Gliadin forms a sticky mass when mixed with water, and is prevented from passing into solution by the small amount of mineral salts in the flour. It serves to bind together the other ingredients of the flour, thus rendering the dough tough and coherent. Glutenin serves to fix the gliadin, and thus to make it firm and solid. Glutenin alone can not yield gluten in the absence of gliadin, nor gliadin without the help of glutenin. Soluble metallic salts are also necessary to the formation of gluten, and act by preventing the solution of the gliadin in water, during the process of washing out the starch. No fermentation takes place in the formation of gluten from the ingredients named. The gluten which is obtained in an impure state by the process above described, is therefore not to be regarded as existing as such in the wheat kernel or flour made therefrom, but to arise by a union of its elements by the action of water." The milky liquid produced by washing wheat flour, as above named, contains in solution gum, sugar, and vegetable albumen. Vegetable albumen may be obtained by allowing this fluid to deposit its starch, pouring off the supernatant liquor, and heating it to 60° to 71.1° C. (140° to 160° F.); flakes of coagulated albumen are formed. Vegetable albumen is soluble in water, but when coagulated by heat it is insoluble; it is also insoluble in alcohol and ether. When dry it is opaque, white, gray, brown, or black, according to circumstances, and is not adhesive like gum. Solutions of alkalies readily dissolve it. Vegetable albumen possesses nearly all the characters of animal albumen, and is considered identical in composition with it. Wheat is now much subject to adulteration in this country by the wholesale admixture of white corn flour; the most we have to fear, however, is diseased wheat; but an examination under the microscope will at once detect parasitical growths, or their spores, etc. Action and Medical Uses.—Wheat is very nutritive when made into bread or cakes and baked. Toasted bread, infused in water, forms an agreeable and lightly nourishing drink for invalids, especially those suffering from febrile or inflammatory attacks. It may be sweetened with loaf sugar, or a little molasses, and flavored, if desired, with strawberry juice, raspberry juice, lemon juice, etc., or the syrups of these fruits may be added to flavor it. Wheat flour is occasionally used to lessen the itching and burning sensations produced by urticaria, scalds, burns, erysipelas, etc.; rye-flour, however, is considered to act more efficiently. It is to be dusted upon the affected parts. It cools the part, excludes the air, and absorbs any discharges present, forming with them a crust which effectually protects the part underneath. When bread is soaked in milk, boiling hot, it forms the emollient bread and milk poultice; a small quantity of sweet lard or olive oil added improves it; yeast, with or without charcoal, mixed with this, forms an excellent antiseptic poultice; or, if powdered mustard be added, a sinapism is formed. When a bread poultice is applied to inflamed parts, the addition of a solution of borax will frequently facilitate its action. When it is desired to administer very small doses of remedial agents, this may be accomplished by mixing them with the crumb of bread (mica panis) in pill form. But nitrate of silver, if used thus, will be converted into a chloride, by the reaction ensuing between it and the salt in the bread. 
Wheat flour lightly baked, so as to acquire a pale buff tint, forms an excellent food for infants, invalids, and convalescents. It may be boiled with milk or milk and water, and lightly salted or sweetened as desired. A very useful article of diet for patients, suitable in nearly all chronic affections has been recommended by Dr. T. J. Wright, of Cincinnati: The seeds of wheat are to be well cleansed by several washings in cold water, saving only those which sink to the bottom. Cover these with water, allow them to stand for 12 or 15 hours, then pour off the water, add some more, and boil for 2 or 4 hours, or until the spermoderm is cracked, then remove the wheat from the water. When cold it is ready for use. Small quantities only should be prepared at a time, especially in warm weather. This may be eaten with molasses, or sugar, the same as with boiled rice, or it may be boiled in milk or water, and be formed into a gruel, with the addition of a sufficient amount of Indian meal. It is nutrient and laxative. BRAN (Furfures Tritici), in decoction or infusion, is sometimes employed as an emollient foot-bath; it is also taken internally as a demulcent in catarrhal affections. Its continued use causes a relaxed condition of the bowels. Bran poultices are sometimes used, warm, in abdominal inflammations, spasms, etc. Bread made from unsifted flour has been found beneficial in indigestion and constipation. The following forms a good bread for patients laboring under diabetes. Wash coarse wheat bran thoroughly with water on a sieve until the water passes through clear; dry this in an oven, grind it to a fine powder, and to 7 eggs, 1 pint of milk, ¼ pound of butter, and a little ginger, add enough of the bran flour to make a paste; divide into 7 equal parts, and bake in a quick oven, say from 20 to 25 minutes (P.). Related Species.—Vicia Faba, Linné (Faba vulgaris, Moench), Horse bean, Windsor bean. The seeds of this plant furnish a flour known as FARINA FABAE. The seeds contain 2 parts of sugar, 2 parts of fat, 36 parts of starch, 9 parts of gummy matter, and 24 parts of legumin. The stalks and husks of this bean, when calcined and digested in white wine, are diuretic; the flowers, in aqueous infusion, are reputed efficient in gravel and gout; and the flour has long been a domestic remedy in Europe for diarrhoea. Phaseolus vulgaris, Linné; Kidney bean; Common bean.—This bean furnishes a flour, FARINA PHASEOLI, whose composition does not vary greatly in proportions from that of the foregoing. The legumes are likewise diuretic, and, according to Soltsien (Archiv. der Pharm., 1884, p. 29), yield an alkaloid, phaseoline. Lolium temulentum, Linné (Lolium arvense, Withering); Bearded darnel.—This plant is of interest chiefly from the fact that its fruit, or caryopsis, is frequently found with wheat or other grains, and is reputed to possess intoxicant and poisonous qualities. Though common in the grain fields of Europe and western Asia, it is not plentiful in this country, where it has been introduced by sowing grain containing its seed. The fruit is about ¼ inch long, oblong-ovoid, usually covered with the palae, smooth, with a convex outer and furrowed inner surface, and of a light-brown color. Internally, the seed is whitish, farinaceous, and has a starchy, bitter taste, but no odor. Several attempts have been made to isolate the toxic principle. 
The ordinary grain constituents are found in the fruit, a large portion of which (30 to 50 per cent) consists of circular, non-striated starch cells, about ⅓ the size of those of wheat. The toxic principles of the plant, according to Ludwig and Stahl (1864), are an amorphous, bitter, acrid, yellowish glucosid, dissolving in water, ether, and alcohol, and a fixed oil of an acrid character. Others believed the poisonous body to be an acid substance, while still others have ascribed its action to an oily, non-saponifiable body. Loliin, an acrid, dirty-white amorphous body, was isolated by Bley, in 1838. P. Antze (Amer. Jour. Pharm., 1891, p. 568) found what he supposed were two alkaloidal bodies, loliine (volatile) and temulentine, which were shown by Hoffmeister (1892) to be respectively impure ammonia and a mixture containing some of the narcotic principle discovered by him, to which he applied the name temuline. This is an amorphous alkaline body, probably a pyridine derivative, and soluble in water. An amorphous alkaloid and a nitrogenized acid were also detected by the same author. Temuline is poisonous (Amer. Jour. Pharm., 1892, p. 611). The symptoms produced by lolium are analogous to those of alcoholic intoxication. Horses, sheep, and dogs are poisoned by it, while cows and hogs remain unaffected, and ducks and quail fatten upon it. Headache, dizziness, disordered vision (sometimes yellow), tinnitus aurium, praecordial oppression and anxiety, lingual paresis, vomiting, diarrhoea, increased renal action, muscular tremors, cold perspiration, and deep narcosis, sometimes proving fatal, are the effects upon man. Lolium has been applied as a poultice to arrest pains of a neuralgic and rheumatic character, and in pleurisy. Liquors were once adulterated with darnel, and some even suspect its use at the present day, to add to the intoxicating qualities of some beverages. PHALARIS, Canary seed.—The fruit of Phalaris canariensis, Linné, of the Mediterranean basin. It is much used as a food for birds, and, mixed with wheat or rye, has been ground into flour for the use of man. Poultices are also made of it. The fruit is small (⅙ inch), flattened, elliptic or ovate, covered by shining yellow-gray paleae. The kernel is brownish externally, white internally, inodorous, and feebly bitter. The fruit (Fructus canariensis, or Semen canariensis) is composed mainly of starch.
Dental radiographs (x-rays) are essential, preventative, diagnostic tools that provide valuable information not visible during a regular dental exam. Dentists and dental hygienists use this information to safely and accurately detect hidden dental abnormalities and complete an accurate treatment plan. Without x-rays, problem areas may go undetected.
Dental x-rays may reveal:
- Abscesses or cysts.
- Bone loss.
- Cancerous and non-cancerous tumors.
- Decay between the teeth.
- Developmental abnormalities.
- Poor tooth and root positions.
- Problems inside a tooth or below the gum line.
Detecting and treating dental problems at an early stage can save you time, money, unnecessary discomfort, and your teeth!
Are dental x-rays safe?
We are all exposed to natural radiation in our environment. The amount of radiation exposure from a full mouth series of x-rays is equal to the amount a person receives in a single day from natural sources. Dental x-rays produce a low level of radiation and are considered safe. Dentists take necessary precautions to limit the patient's exposure to radiation when taking dental x-rays. These precautions include using lead apron shields to protect the body and using modern, fast film that cuts down the exposure time of each x-ray.
How often should dental x-rays be taken?
The need for dental x-rays depends on each patient's individual dental health needs. Your dentist and dental hygienist will recommend necessary x-rays based on the review of your medical and dental history, dental exam, signs and symptoms, age consideration, and risk for disease. A full mouth series of dental x-rays is recommended for new patients. A full series is usually good for three to five years. Bite-wing x-rays (x-rays of top and bottom teeth biting together) are taken at recall (check-up) visits and are recommended once or twice a year to detect new dental problems.
Perhaps one of the main cult objects associated with Hathor was the sistrum, a musical rattle. Its name is derived from the Greek, seiein, meaning "to shake". The sound of the sistrum is metallic, produced by a number of metal disks or squares, strung onto a set of transverse bars, set horizontally into a frame of varying design. Its sound was thought to echo that of a stem of papyrus being shaken. However, the acoustic effects were frequently extremely limited. The sistrum was suitable for beating a rhythmical accompaniment in open-air processions. Apuleius, the Roman philosopher, described a procession in honor of Isis, in The Golden Ass, where the rhythmic pattern was three beats followed by a pause on the fourth. The sound of the instrument seems to have been regarded as protective and also symbolic of divine blessing and the concept of rebirth. In addition to the symbolic significance of its sound, the shape and decoration of the sistrum relate it to the divine. Two forms of this ceremonial instrument may be distinguished, the oldest of which is probably the naos sistrum (ancient Egyptian ss, ssst). While Hathor's head was often depicted on the handles of sistrums, an early travertine sistrum inscribed with the name of the 6th Dynasty ruler, Teti, takes the form of a papyrus topped by a naos, which is itself surmounted by a falcon and cobra, thus forming a rebus of the name Hathor (i.e. hwt Hor). Thus, the sistrum known as the naos sistrum dates back to at least the Old Kingdom. It was usually surmounted by twin heads of Hathor upon which a small shrine or naos-shaped box was set. A vulture may crown the naos, and the handle may be covered with the incised plumage of the bird. Rods were passed through the sides of this naos to form the rattle. Carved or affixed spirals framing the sides of the naos represented the horns of the cow-eared goddess. Note that this earliest form of sistrum was often made of faience. Most surviving sistrums date to the Greco-Roman Period, when a second type of sistrum was common. It is referred to as a hooped (or arched) sistrum, known in ancient Egypt as shm or ib. It is known from the 18th Dynasty onward, though it seems to be based on earlier prototypes for which we have the hieroglyphic designation but no depictions. This instrument consists of a handle surmounted by a simple metal hoop. The handle could be either plain, in the shape of a papyrus stem, which was most common, or in the shape of a miniature column adorned with the head of the goddess Hathor. However, the god Bes might also be molded as part of the handle. Like the naos-style sistrum, metal rods set into this hoop supported small metal disks or squares which produced a characteristic tinkling sound when the instrument was shaken. Because of its basic form, this type of sistrum was often made in the shape of the ankh or "life" sign and carried that hieroglyph's significance. These types of sistrums were most frequently made of bronze. A fairly good example of a hoop sistrum, with Hathor's head at the top of the handle and Bes at the bottom, but missing its metal disks, dates to the Greco-Roman era and is made of bronze. In a funerary context, sistrums could sometimes be included in the tomb equipment, but they were frequently non-functional, and made of wood, stone or faience. The symbolic value of the sistrum far exceeded its musical potential.
It is thought that the instrument may have originated in the practice of shaking bundles of papyrus flowers (hence the onomatopoeic name sesheshet) with which Hathor was associated. In fact, the papyrus plant appears to be at the base of the mythology surrounding the sistrum. It is from a papyrus thicket that Hathor is seen to emerge, and it is also in a papyrus thicket where Isis raised her infant son, Horus. Hence, though originally mostly associated with Hathor, the sistrum eventually entered the cults of other deities and especially those of Amun and Isis. The decoration sometimes included the royal uraeus (cobra), referring to the myth of the solar Eye. In this myth, Hathor is in her role as the rebellious daughter of Re, to be appeased by music and dance. Based on this proven effect of the instrument, the sistrum was, from the New Kingdom on, the instrument that pacified and satisfied any deity, whether female such as Hathor, or male. In the Temple of Amun-Re at Karnak, a naos-shaped sistrum was a prime cult object, perhaps through its connections to Hathor, who sometimes represented the female procreative element needed to sustain Amun-Re's virility. In Late Period representations, the sistrum was held by priestesses adoring the deity face to face. This intimacy was a female prerogative. Other deities, too, benefited from the presence of the sistrum. As the sistrum reflected in such a visible manner the presence of the gods, it is no wonder that during the Amarna Period, it was virtually deprived of decoration, except for the papyrus handle. But it is significant that it was held by the queen or the princesses during the cult of Aten, the sun disk. The instrument belonged in the realm of cosmic deities. According to the ancient Greek historian Plutarch, the sistrum's arch was the lunar cycle, the bars were the elements, the twin Hathor heads rendered life and death and the cat, often included in the decoration, was the moon. Many of these instruments carry the names of royal persons. When the sistrum is depicted, it was often in the hands of royal family members. In the Story of Sinuhe, we learn that the princesses received him with music and song. The musical instruments were not refined wind or string instruments, but the sistrum. In the Westcar papyrus, when the goddesses dress up as itinerant musicians to gain access to the birth chamber of the mother of the children of Re, they too accompany themselves only with the sistrum. However, it is with Hathor, her son Ihy (sometimes represented by the king) and her attendants that the instrument is associated in most representational contexts. Apart from the exceptions mentioned, the sistrum appears to have been used only by the priestesses of the cults with which it was associated and its use, at least in certain circumstances, seems to have carried erotic or fertility connotations probably based on the mythological character of Hathor. The small gilt shrine of Tutankhamun has several scenes showing the use of the sistra in this context. On the inner side of the shrine's right-hand door, for example, Queen Ankhesenamun is depicted holding a hoop-type sistrum and wearing the cow horns and solar disk of the goddess. In another scene the queen holds a naos-type sistrum and proffers the menit necklace, a heavy necklace that when grasped by its inverted keyhole shaped counterpoise, would produce a variant rattling sound, frequently associated with the use of sistra.
In more remote times, such as the religious feasts celebrated in Thebes during the New Kingdom, we also find groups of women shaking sistra in honor of the divine procession. These celebrations were for Amun-Re, such as the Opet Festival depicted on the walls of the Luxor Temple or the Valley Festival (the Beautiful Feast of the Valley) rendered in countless Theban tombs. The Valley Festival belongs to the world of the funerary cult, for there the sistrum is shown being presented to the tomb owner and his wife by their daughters. In fact, "bringing" and "receiving" were the key words, rather than making music or maintaining a beat, for the blessings that Hathor bestowed, of well-being and eternal life, were the focus of the ceremony. The scenes often show the sistrum carried by its loop, looking similar to the ankh, the sign of life, of which it may be seen as an equivalent.

Closely connected with sistrum playing is Ihy, the infant born of the union between the sky goddess Hathor of Dendera and the god of light Horus of Edfu. Through his music he performed the part of intermediary between the adorer and the goddess.

The distinctive shape of the instrument is found in many contexts, ranging from minor objects of mortuary significance to the columns of temples such as the Temple of Hathor at Dendera. These columns are surmounted not only by images of the cow-eared goddess but also, above these Hathor heads, by the form of a shrine or naos. Thus, in their shafts and capitals, such columns mirror the shape of the naos sistrum. A similar application of the motif is found in the shape of many of the small shrines which were offered to the gods by the devout.

During the Greco-Roman Period, the use of the sistrum spread beyond the borders of Egypt with the cult of Isis, wherever the Romans went. The use of the sistrum has survived in the Coptic church, where it is directed at the four cardinal points to demonstrate the extent of God's creation.
Before beginning this lesson, students should be familiar with the layers of Earth's interior. Although we are unable to study the Earth's interior directly, scientists have studied it indirectly using seismic waves. The inner core is a solid sphere of dense metals, while the outer core is liquid metal; both cores are composed mostly of iron and nickel. The convection currents within the outer core create Earth's magnetic field. The layer surrounding the outer core is called the mantle, which consists of extremely hot rock. The upper part of the mantle is called the asthenosphere; this part of the mantle flows very slowly in convection currents. Above the asthenosphere is the lithosphere, the brittle, rigid outer shell made up of the crust and the uppermost mantle. The lithosphere is separated into tectonic plates, which float slowly on the asthenosphere.

Students should also be familiar with the concept of convection currents. Convection currents occur when a heat source is applied to a fluid. In Earth's interior, energy from the outer core heats the rock of the mantle, causing it to become less dense. This hot rock rises towards the lithosphere, where it cools, becomes denser, and sinks back towards the outer core. Although the convection currents in the mantle are very slow, they cause plate movements over millions of years.

This video clip describes how scientists used indirect methods to discover the interior composition of Earth and describes where Earth's internal energy originates. "100 Greatest Discoveries: The Core of Earth" from the Science Channel - 2:39 minutes

The assessment portion of this lesson will include questions relating to the control and experimental groups of the lab activity. Students will need to be familiar with the terms control and variable. Control: the trial the scientist performs before testing any variables. Variable: the factor the scientist changes between tests. For example, in a convection demonstration the control might be a container of water observed with no heat applied, while the variable might be the amount of heat applied beneath it.
Assistive Technology and Accessible Instructional Materials

Technology is an integral part of our lives in the 21st Century, and our children are growing up as "digital natives". For some students the computer is more than just a helpful tool: it is their access to learning to read, write, and do math. For students with physical disabilities, it becomes their paper and pencil and the method they use to communicate their mastery of concepts. For students with cognitive disabilities, it becomes their adapted learning environment, presenting information in a format that maximizes learning.

For students with disabilities, Assistive Technology (AT) can have far-reaching impacts and the potential to yield enormous benefits. Having access to AT allows students with disabilities to hear, see, read, access, and participate in the environments in which they learn and live. Timely access to textbooks in formats that students with disabilities can use, known as "Accessible Instructional Materials" (AIM), is essential to these students and is a right protected under the IDEIA law. Research in Universal Design for Learning (UDL) shows that ALL students benefit from multiple means of engagement, representation, and expression. Use of UDL, AIM, and AT can create levels of independence that allow students to expand their worlds and to unleash and enhance their abilities.

Both low-tech and high-tech AT applications have been successfully used in classrooms throughout Vermont to ensure students' success in the general education curriculum. Making informed AT decisions is one of the hardest things that IEP teams are charged with doing. To help you in this endeavor we have created this section of our web site. Be sure to visit our "AIM & AT Help line Forms", our quick AIM and AT "help line" where you can ask your questions so that we can help you with this process.
What is TBI? TBI is a sudden injury from an external force that affects the functioning of the brain. It can be caused by a bump or blow to the head (closed head injury) or by an object penetrating the skull (called a penetrating injury). Some TBIs result in mild, temporary problems, but a more severe TBI can lead to serious physical and psychological symptoms, coma, and even death.1 TBI includes (but is not limited to) several types of injury to the brain:
- Skull fracture occurs when the skull cracks. Pieces of broken skull may cut into the brain and injure it, or an object such as a bullet may pierce the skull and enter the brain.
- Contusion is a bruise of the brain, in which swollen brain tissue mixes with blood released from broken blood vessels. A contusion can occur from the brain shaking back and forth against the skull, such as from a car collision or sports accident or in shaken baby syndrome.
- Intracranial hematoma (pronounced in-truh-KREY-nee-uhl hee-ma-TOH-muh) occurs when damage to a major blood vessel in the brain or between the brain and the skull causes bleeding.1,2
- Anoxia (pronounced an-OK-see-uh), absence of oxygen to the brain, causes damage to the brain tissue.
The most common form of TBI is concussion.1 A concussion can happen when the head or body is moved back and forth quickly, such as during a motor vehicle accident or sports injury. Concussions are often called "mild TBI" because they are usually not life-threatening. However, they still can cause serious problems, and research suggests that repeated concussions can be particularly dangerous.3,4
A person who has a TBI may have some of the same symptoms as a person who has a non-traumatic brain injury. Unlike TBI, this type of injury is not caused by an external force, but is caused by an internal problem, such as a stroke or infection. Both types of injury can have serious, long-term effects on a person's cognition and functioning.5,6
TBI can happen to anyone, but certain groups face a greater risk for TBI than others. TBI among members of the military has become a particular concern in recent years because many military personnel in Iraq and Afghanistan have been exposed to such TBI hazards as improvised explosive devices. Head and neck injuries, including severe brain trauma, have been reported in 1 out of 4 military members who were evacuated from those conflicts.7 For more information about brain injury in the military, visit the Defense and Veterans Brain Injury Center website.
Sleep plays an important role in brain development in children and young adults and it is crucial that they get enough sleep for their brains and bodies to grow and develop. Research has shown that sleep is crucial for alertness and many other key functions in school-going children. Our children now have very hectic schedules. Not only do they have additional classes to help them cope with and supplement academic learning, there are sports lessons, music lessons and the homework they may have to catch up with in the evenings. This hectic schedule means that at the end of the day, their tired bodies and brains need to recharge, and the best way to do this is by getting sufficient sleep. The challenge here is to help your child to ‘switch off’ and be mentally ready for sleep at bedtime. Do read on for some tips on how you can achieve this. A study on children aged 7-11 years old showed that extending their sleep time by around half an hour led to marked advancement in handling their emotions the next day. On the other hand, by reducing their total sleep time, the opposite effect was observed. Did you know? Too much of a good thing can be bad – studies have shown that oversleeping regularly can increase the risk of diabetes, obesity, headaches, back pain, depression, and heart disease. Why does your child need his sleep? You may have noticed that whenever your child doesn’t get enough sleep, he not only feels tired but may also be unable to focus on the tasks he has to carry out. In addition, he will likely be very irritable, prone to emotional outbursts, have difficulty following directions, or he may argue with you over something inconsequential. Studies have also shown that sleep deprivation leads to deficits in cognitive performance, which includes poorer memory, reflexes, and attention. However, as these cognitive deficits tend to accumulate over time, kids who run short of sleep now may remain unaware of it. You should do your part to ensure that your child gets sufficient sleep as this will ensure that he performs better academically, cognitively, and emotionally. Bear in mind that taking naps or sleeping in during weekends are not viable solutions as they will not ensure that your child functions at his optimum level. Nothing can substitute regular and sufficient sleep. What happens during sleep? A typical sleep cycle consists of two alternating states of sleep, namely non-rapid eye-movement (NREM) sleep and rapid eye-movement (REM). Over the course of a night’s sleep, your child will go through several sleep cycles. As he progresses through these cycles, his body undergoes certain physiological changes. NREM sleep is characterised by four distinct stages. In Stage 1, he will fluctuate between wakefulness and being asleep, or he may be dozing lightly. Stage 2 begins when he becomes disengaged from his surroundings, his breathing and heart rates become more regulated and his body temperature drops. Stages 3 and 4 are the deepest sleep stages and they allow his body to recuperate from the day’s activities. During these stages, his blood pressure drops, his breathing slows, his muscles become relaxed, his body supplies more blood to his muscles (thus allowing tissue growth and repair), and his energy is replenished. REM sleep typically happens around 90 minutes after your child falls asleep. It then repeats every 90 minutes or so and occurs for longer stretches later in the night. During REM sleep, his brain is active and dreams may occur. 
It is often accompanied by the trademark rapid movement of his eyeballs darting back and forth. His body remains immobile and relaxed with his muscles lax. REM sleep helps his brain and body replenish their energy in preparation for the day ahead.

Sleep helps regulate the levels of certain hormones such as ghrelin and leptin. Both hormones regulate feelings of hunger and fullness, so sleep deprivation may lead to eating in excess, which in turn may lead to weight gain. The secretion of melatonin, growth hormone and thyroid hormones is also influenced by sleep. Melatonin helps induce sleepiness and is influenced by the light-dark cycle (i.e. light suppresses it). Growth hormone is typically secreted during the first few hours of sleep, while thyroid hormones are secreted later.

How much sleep is enough?

Children require differing amounts of sleep according to their age, and it is recommended that you follow the guide below to ensure that your child gets sufficient rest:

0-1 month old: total sleep time (daily) = 15-16 hours
1-4 months old: total sleep time (daily) = 14-15 hours
4-12 months old: total sleep time (daily) = 14-15 hours
1-3 years old: total sleep time (daily) = 12-14 hours
3-6 years old: total sleep time (daily) = 10-12 hours
7-12 years old: total sleep time (daily) = 10-11 hours
12-18 years old: total sleep time (daily) = 8-9 hours

Don't neglect his sleep

As parents you need to keep in mind that your child needs his sleep. If you have a young child, helping him to develop good sleep habits can be challenging at first, but you will be glad that you took the time to get it right, as the benefits will continue into his adulthood. Never underestimate the importance of getting enough sleep. Here are some useful tips for developing good bedtime routines:
- Keep bedtime consistent – if you decide on 9pm as your child's bedtime, do not deviate from it significantly. Maintain the same time every day and you will find it easier to put him to bed. Similarly, keep wakeup times consistent too.
- Wind things down before bedtime – even adults need a transition period to get ready for bed. An effective way to get your child ready for bed is to have a short period of between half an hour and an hour filled with relaxing activities just before bedtime. That means no vigorous play and no gadgets, TV or computer games.
- Establish a bedtime routine – a simple routine can be getting your child to brush his teeth, reading him a book, and then putting him to bed. Regardless of what routine you pick, stick with it. The important thing is to have a predictable routine that he will associate with sleep.
- Restrict after-dinner intake – keep your child away from too many sweet treats or foods/drinks that may contain caffeine. A sudden jolt of foods high in sugar or caffeine will make him more alert/active before his bedtime, so limit his intake of candies, sodas, ice cream, etc.
Lastly, don't forget to ensure that he gets enough physical activity throughout the day. An educational contribution by the Malaysian Paediatric Association.
August 12th is World Elephant Day, a day for focusing on elephant conservation and recognising the organisations fighting to save elephants from extinction around the world. But what can you do to help elephants? Unfortunately, there's really never been a worse time to be an elephant in Africa. The World Wildlife Fund (WWF) has estimated that without urgent action, wild elephants across Africa and Asia could be extinct by 2040 – for many of us that's still well within our lifetime, and definitely within the lifetime of our children and grandchildren. “I suppose the question is, are we happy to suppose that our grandchildren may never be able to see an elephant except in a picture book?” – Sir David Attenborough

In the 1970s and 80s, British zoologist Iain Douglas-Hamilton flew a light aircraft over Sub-Saharan Africa to count elephant numbers, and discovered what became known as ‘the elephant holocaust’. He estimated that Africa lost 600,000 elephants in a decade. Douglas-Hamilton's work led to the ban on the international ivory trade in 1989, which enabled a recovery in elephant numbers until 2008, when corruption, rising Asian demand and the escalation of the ivory trade into the hands of criminal gangs increased poaching levels with catastrophic results. Elephants are now in serious trouble. Numbers have dropped more than 60% in the last decade alone, and today it is estimated that there are around 415,000 elephants in Africa (WWF), mainly concentrated in the south of the continent. The plight of Asian elephants is even more dire, with only about 50,000 animals remaining. It is thought that in 1800 there may have been as many as 26 million elephants in Africa alone. As of 2011, we are losing more elephants than are being born. Bull elephants with big tusks are the main targets of poachers, and their numbers have fallen to less than half those of females. Female African elephants with tusks are also regularly killed, which has a terrible effect on the stability of elephant societies, leaving an increasing number of orphaned baby elephants. “I have spent hours and hours watching elephants, and come to understand what emotional creatures they are…it’s not just a species facing extinction, it’s massive individual suffering.” – Dr Jane Goodall

What are the threats to elephants in 2020? The four main threats to elephants are:
🐘 Increasingly sophisticated wildlife poaching for meat and ivory, controlled by well-financed, heavily armed crime syndicates
🐘 Conflict between communities and elephants over diminishing resources
🐘 Habitat destruction and deforestation for mining and large-scale agriculture
🐘 Poverty and rising human populations.

What can we do to help save elephants?
🐘 Donate to a reputable elephant-focused organisation
Choose one where you can be sure where your money is going and what projects and activities the donation will be used for. Worthwhile projects include sponsoring individual elephants, supporting anti-poaching rangers, supporting organisations that work on elephant education projects in communities, and those that help mitigate the human-elephant conflict – e.g. helping farmers and communities live side-by-side with their elephant neighbours.
🐘 Volunteer at an ethical elephant conservation project
Our recommendations, ones that we have visited ourselves and are 100% sure of, are:
** Family-friendly volunteering!
🐘 Obviously, don't buy ivory (and encourage your friends and family not to as well).
This includes antique ivory (the sale of new ivory is illegal), traditionally used to make jewellery, billiard balls, pool cues, dominoes, fans, piano keys and carved trinkets.
🐘 Buy elephant-friendly coffee and wood. Coffee and timber crops are often grown in plantations that destroy elephant habitats. Make sure to buy Forest Stewardship Council (FSC) certified timber and certified fair trade coffee.
🐘 Be aware of the plight of captive elephants (in zoos and circuses in particular, and when overseas) and don't take part in events that exploit them. The zoo industry is starting to wake up and is beginning to develop more elephant-friendly environments, yet it still has a long way to go. Circuses, even further. Make a difference by boycotting circuses that use animals, and by boycotting zoos that offer insufficient space to allow elephants to live in social groups, and where the management style doesn't allow them to be in control of their own lives. When on holiday, don't ride elephants or visit shows where elephants do tricks such as painting or dancing.
Only by working together will we be able to save one of the planet's most iconic species.
Radiology & Diagnostic Imaging Radiology is a branch of medicine that uses imaging technology to diagnose and treat disease. OMC uses all forms of radiant energy (sound, light, and particle) to diagnose medical conditions, including ultrasound, x-ray, magnetic resonance imaging (MRI), nuclear diagnostic imaging, and computed tomography (CT). All OMC outpatient clinics (except FastCare/Skyway locations) have basic radiology equipment. OMC's hospital and Rochester Northwest clinic have advanced radiology equipment. Click on any of the services below to read a brief description: An ultrasound machine creates images that allow various organs in the body to be examined. The machine sends out high-frequency sound waves, which reflect off body structures. A computer receives these reflected waves and uses them to create a picture. Unlike with an x-ray or CT scan, there is no ionizing radiation exposure with this test. Ultrasound tests may be done in OMC's radiology or Ob/Gyn departments. X-rays are a type of radiation called electromagnetic waves. X-ray imaging creates pictures of the inside of your body. The images show the parts of your body in different shades of black and white. This is because different tissues absorb different amounts of radiation. Calcium in bones absorbs x-rays the most, so bones look white. Fat and other soft tissues absorb less, and look gray. Air absorbs the least, so lungs look black. The most familiar use of x-rays is checking for broken bones, but x-rays are also used in other ways. For example, chest x-rays can spot pneumonia. Mammograms use x-rays to look for breast cancer. When you have an x-ray, you may wear a lead apron to protect certain parts of your body. The amount of radiation you get from an x-ray is small. For example, a chest x-ray gives out a radiation dose similar to the amount of radiation you're naturally exposed to from the environment over 10 days. Magnetic Resonance Imaging (MRI) Magnetic resonance imaging (MRI) uses a large magnet and radio waves to look at organs and structures inside your body. Healthcare providers use MRI scans to diagnose a variety of conditions, from torn ligaments to tumors. MRIs are very useful for examining the brain and spinal cord. Before you get a scan, tell your doctor if you are pregnant or have metal/electronic devices inside your body (e.g., a cardiac pacemaker, shrapnel, or a metal artificial joint). Computed Tomography (CT) Computerized tomography (CT) combines a series of X-ray views taken from many different angles, then processed by a computer, to create cross-sectional images of the bones and soft tissues inside your body. The resulting images can be compared to looking down at single slices of bread from a loaf. Your healthcare provider will be able to look at each of these "slices" individually or perform additional visualization to view your body from different angles. In some cases, CT images can be combined to create 3-D images. CT scan images can provide much more information than do plain X-rays. A CT scan has many uses, but is particularly well suited to quickly examine people who may have internal injuries from car accidents or other types of trauma. A CT scan can be used to visualize nearly all parts of the body. A mammogram is an x-ray picture of the breast. It can be used to check for breast cancer. A screening mammography is the type of mammogram that checks you when you have no symptoms. It can help reduce the number of deaths from breast cancer among women ages 40 to 70. 
The National Cancer Institute recommends that women age 40 or older have screening mammograms every 1 to 2 years. Nuclear medicine is imaging that uses small amounts of radioactive material to diagnose and determine the severity of disease. It also can be used to treat a variety of diseases, including many types of cancers, heart disease, gastrointestinal, endocrine, neurological disorders, and other abnormalities within the body. Because nuclear medicine procedures are able to pinpoint molecular activity within the body, they can identify disease in its earliest stages. Nuclear medicine imaging procedures are noninvasive and are usually painless.
Etymology: Latin stimulatus, past participle of stimulare, from stimulus ("goad"); perhaps akin to Latin stilus ("stem, stylus"). Date: 1566.

Stimulation is the action of various agents (stimuli) on muscles, nerves, or a sensory end organ, by which activity is evoked; especially, the nervous impulse produced by various agents on nerves or a sensory end organ, by which the part connected with the nerve is thrown into a state of activity.

Stimulation in general refers to how organisms perceive incoming stimuli, and as such it is part of the stimulus-response mechanism. Simple organisms broadly react to stimulation in three ways: too little stimulation causes them to stagnate, too much causes them to die from stress or an inability to adapt, and a medium amount causes them to adapt and grow as they overcome it. Similar categories or effects are noted with psychological stress in people. Thus, stimulation may be described as how external events provoke a response by an individual in the attempt to cope.
BASIC SHOULDER ANATOMY This section is a review of basic shoulder anatomy. It covers the bones, ligaments, muscles and other structures that make up the shoulder. For more information on how the shoulder works please read the section on basic basic shoulder biomechanics. The shoulder complex is made up of three bones, which are connected by muscles, ligaments, and tendons. The large bone in the upper arm is called the humerus. The shoulder blade is called the scapula and the collarbone is called the clavicle. The top of the humerus is shaped like a ball. This ball sits in a socket on the end of the scapula. The ball is called the head of the humerus and the socket is called the glenoid fossa, hence the term "glenohumeral" joint. The glenoid fossa has a rim of tissue around it called the glenoid labrum. The glenoid labrum makes the glenoid fossa deeper. The glenohumeral joint is the most mobile joint in the body. Articular cartilage is a smooth shiny material that covers the humeral head and the glenoid fossa of the glenohumeral joint. There is articular cartilage anywhere that the bony surfaces come into contact with each other. Articular cartilage allows these bones to slide easily over each other as the arm moves. The glenohumeral joint is just one of the joints in the shoulder complex. The other two joints are the sternoclavicular joint and the acromioclavicular joint. The sternoclavicular joint allows a small amount of movement to occur between the inner (medial) part of the clavicle and the breastbone (sternum). The acromioclavicular joint allows a small amount of movement to occur between the outer (lateral) part of clavicle and a projection on the top of the scapula called the acromion process. The scapula sits on the back of the ribs and moves as the arm moves. Ligaments are like strong ropes that help connect bones and provide stability to joints. In the shoulder complex, ligaments provide stability to the sternoclavicular joint, the acromioclavicular joint and the glenohumeral joint. The ligaments around the sternoclavicular joint and the acromioclavicular joint are strong and tight and do not allow for much movement in these joints. The glenohumeral joint is surrounded by a large, loose "bag" called a capsule. The capsule has to be large and loose to allow for the many movements of this joint. Ligaments reinforce the capsule and connect the humeral head to the glenoid fossa of the scapula. These ligaments work with muscles to provide stability to the glenohumeral joint. The glenoid labrum also helps provide stability to the joint. Tendons connect muscles to bone. There are four muscles (supraspinatus, infraspinatus, subscapularis and teres minor) that surround the glenohumeral joint. These four muscles are attached to the scapula. They turn into tendons, which in turn attach to the humerus. The tendons of these four muscles make up the "rotator cuff" that blends into and helps support the glenohumeral joint capsule. The muscles of the rotator cuff and their tendons provide stability to the glenohumeral joint when the arm is in motion. The biceps muscle is located in the front of the upper arm. It has two tendons, one of which attaches above the glenoid fossa. This tendon runs down the front of the glenohumeral joint and provides added stability to the glenohumeral joint. There are muscles that stabilize the scapula and others that help move the arm. The rhomboid muscles, trapezius muscle and serratus anterior muscle are a few of the scapular stabilizing muscles. 
The pectoralis major muscle, the deltoid muscle and the muscles of the rotator cuff are some of the muscles that move the arm at the glenohumeral joint. The upper part of the trapezius muscle also helps "shrug" the shoulder. All of the muscles that are part of the shoulder complex work together in order to move the arm through its many possible ranges of movement. Finally, a bursa (pl. bursae) is a fluid filled sac that decreases the friction between two tissues. Bursae also protect tissues from bony structures. In the shoulder, the subacromial bursa (also called the subdeltoid bursa) covers the rotator cuff tendons and protects them from the overlying acromion process. Normally, this bursa has very little fluid in it but if it becomes irritated it can fill with fluid, become painful and also irritate the surrounding rotator cuff tendons.
On learning of the birth of Christ, whom the Magi called "the King of the Jews," King Herod felt his throne was in jeopardy. Knowing only that the baby was somewhere in Bethlehem, the king ordered Jewish boys around Bethlehem under two years old to be murdered. Alerted by angels, Christ's parents fled to Egypt and saved him. For this drawing, Amico Aspertini borrowed from the ancient Roman sculpture that he had seen in Rome five or ten years before. The intertwining figures at right parallel those on ancient Roman sarcophagi. Aspertini's art also included unidealized shapes and awkward bodies. Original and unconventional for the date, his figures look like local peasants rather than ideal types. Aspertini's draftsmanship characteristically included encrusted white highlights, squat figures, and manic energy. Between about 1510 and 1520, he often used this colorful combination of red and black chalk with white bodycolor. His extreme white heightening lends the drawing a feeling of near three-dimensionality.
Using maths skills and knowledge: your child is now becoming increasingly fluent with mental calculations such as addition and subtraction facts and times tables. These ideas will help your child practise these skills in everyday life and learn to apply their maths skills and understanding. Visit our fun maths activities page for a selection of activities and resources designed to help you enjoy maths with your child.

Play games together
When you are out and about together, try playing simple verbal maths games. Spot a number plate and think of a calculation using those numbers; ask the other person to work it out, or tell them the answer and see if they can work out the calculation. Remind them of the names for 2D and 3D shapes: do they know that a football is a sphere, or that most tins of food are cylinders? There are also lots of games you can make or adapt at home, like bingo, snap and pairs with numbers.

Make and do together
Use maths together at home. Prepare a meal together and ask your child to multiply or divide the quantities of ingredients in a recipe so it makes the right amount for your family: "There are 4 of us. What fraction of the fish fingers can we each have? How many fish fingers would that be?" Can they measure all the ingredients and talk about grams, kilograms etc? Help your child to work out at what time things need to stop cooking. Try these jam tarts.

Things to try with your child
1. Cupcake Fractions – practise finding fractions of amounts with this fun game.
2. Fraction Match – have fun with this matching game and practise finding equivalent fractions.
3. My Weekly Training Plan – practise using tables and adding amounts of time with this weekly training plan.
4. My Three Day Training Challenge – get seriously good at your favourite sport and practise using graphs with different scales.
5. One Minute Brain Teasers and times table activity sheets.

Year 2 maths
Year 2 maths skills are organised into categories covering numbers and the number system, including place value. Examples include writing numbers in words; Roman numerals I, V, X; and understanding addition (adding with pictures – sums up to 10; addition sentences). Related resources include Place Value: Reading and Writing Numbers (Jennifer Gibbs), Tens and Units (Linda Taylor), Finding 10 More (Charlotte Harvey), the SATs Maths Revision Pack KS2 "One Page Wonder!", and KS1 Year 2 Maths SATs Revision – Time – Differentiated Levels.
Discovered by Christopher Columbus in 1494, Jamaica was a Spanish colony until 1655 when it was taken over by the British. The third largest island in the West Indies, it became the focus of British activity in the Caribbean. Its economy was dominated by sugar production based on slave labor imported from Africa. Details on this 1775 map by the British geographer and publisher Thomas Jeffrerys record the racial segregation that was institutionalized during the British colonial regime. Underlining distinguishes what are referred to on the map as "Negro-towns," most of which are situated in the less hospitable mountainous interior. In contrast, the flatter coastal areas, which were more suitable for agriculture, are dotted with small squares and circles, symbolizing the British-owned sugar plantations and sugar mills. In addition, the map's two insets depict the topography and hydrography of the island's busiest ports, Kingston and Bluefields, which had become centers of British colonial administrative and mercantile activities. Insets: The harbour of Bluefields [ca. 1:95,000]--The harbours of Kingston and Port Royal [ca. 1:95,000] Relief shown by hachures. Soundings shown in fathoms. Prime meridian: Ferro and London.
Discrete mathematics is the mathematics of distinct, separated values rather than continuous ones. Graphs, integers and similar objects are part of discrete mathematics, while topics such as calculus and analysis are excluded; in short, discrete mathematics deals with countable sets. There is no single, universally agreed definition of discrete mathematics.

Graphs in discrete mathematics are studied through graph theory, which is also considered part of combinatorics, but in the present era it is treated as a separate branch of mathematics, first investigated systematically by D. König in the 1930s. Graphs are ubiquitous models of both natural and man-made structures. They can model many types of structures and relations, such as process dynamics in physical, social and biological systems. They have several uses in many fields: in computer science they represent communication networks, the organisation of data and the flow of computation; in mathematics they are used in geometry and in many areas of topology. Group theory also has a close link with algebraic graph theory. Discrete mathematics is also known as finite mathematics or decision mathematics. As noted above, it studies countable sets, and the graphs of graph theory should not be confused with the continuous graphs of functions.

Graphs in discrete mathematics come in many types, a few of which are the simple graph, the multigraph, the directed graph and the pseudograph. Briefly:
1) Simple graphs are unweighted and undirected, and have no loops and no multiple edges between the same pair of vertices.
2) Multigraphs may have multiple edges between the same pair of vertices.
3) Directed graphs have edges with a direction, each edge going from one node to another.
4) Pseudographs allow both multiple edges and loops (edges that connect a vertex to itself).

Even and Odd Degrees

A graph consists of nodes and edges; edges are incident to nodes, and the nodes are also called the vertices of the graph. The degree of a vertex in graph theory is the number of edges incident to that vertex; in other words, the number of edges attached to a vertex is the degree of that vertex. Degrees can be classified into two categories, even and odd:

Even degree: the degree of a vertex is even if the number of edges incident on it is even. In the example graph referred to in the original figure, there are three vertices and each vertex has two edges incident on it, so the degree of each vertex equals $2$, which is even. Every vertex in that graph therefore has even degree.

Odd degree: the degree of a vertex is odd if the number of edges incident on it is odd. In the graph referred to above, the number of edges incident on vertices '$A$' and '$D$' equals $3$, which is odd. So the degrees of vertices $A$ and $D$ are both $3$, and both vertices have odd degree.

This is all about even and odd degrees in graph theory.

Graph Theory Shortest Path

Graph theory is one of the important concepts in science: it describes how a graph models different types of mathematical structures, and it can also be used for structures created by human beings.
• Graphs are also ubiquitous models for various kinds of relations.
• Graphs are widely used because of their practical importance and can be implemented directly in practice.
• One common use of graphs is finding the shortest path and the minimum cost in various transportation problems.
• A graph is a collection of vertices and edges. The vertices are also known as nodes, and edges are used to connect nodes.
• Nodes or vertices are drawn as circles or dots, and a line between two nodes represents an edge.
• Edges can be directed (also known as asymmetric) or undirected (also called symmetric). They may be weighted or unweighted.
• A weighted edge carries a numerical value, such as a distance or cost, between two nodes; in an unweighted graph every edge counts equally.
• An application of graph theory shortest paths is the shortest path problem, which aims to find a path connecting two vertices that minimises the sum of the weights of its edges.

Path in Graph Theory

A path can be thought of as the route travelled from one point to another; the points are called the vertices of the path. If there are two vertices $A$ and $B$, then a path between them is called an $A - B$ path, and the connection between two adjacent vertices is known as an edge. A path may therefore include several vertices and edges. Together, the set of vertices and edges forms a graph, and a graph consisting of a single path is called a path graph. A path graph can also be defined as a set of nodes (vertices) and a set of lines (edges) that connect these nodes in sequence. An open path is a path in which the first and last vertices are not the same; if the first and last vertices are the same, the path is called a cycle. When vertices are connected to each other by edges in a sequence, that sequence of vertices is what graph theory calls a path.

The figure referred to in the original has five vertices (nodes) and five edges (lines) connecting them. The first and last vertices, numbers $1$ and $5$, are distinct, so the path is open. In that figure, node $1$ is connected to nodes $2$ and $3$; each such connection is an edge, and the edge defines the relationship between the two nodes.

Cycle in Graph Theory

A cycle graph is a graph that consists of a single cycle. In other words, if the vertices of a graph are connected in a closed chain, the graph is called a cycle graph; it may also be referred to as a circular graph. If a cycle graph has $n$ vertices it is denoted $C_n$; the number of edges in $C_n$ equals the number of vertices, and the degree of each vertex is $2$, that is, each vertex has exactly two edges incident on it. A cycle graph has a common first and last vertex. In the figure referred to in the original, the first and last point is '$a$', which is why it is a cycle graph; a cycle is therefore a closed path. If there are four vertices in the cycle, it is called a $4$-cycle.

Cycles in graph theory can be odd or even. If a cycle graph has an odd number of edges it is an odd cycle, and if the number of edges (lines) is even it is an even cycle.

The degree of a vertex in any graph can be calculated by counting the number of edges meeting at that vertex. Consider the example in the original figure: the degree of node '$A$' in that graph is three, since there are three edges incident on it.
Similarly, the degrees of $B$, $C$ and $D$ are given as $\deg(B) = 2$, $\deg(C) = 4$ and $\deg(D) = 1$.

Connectivity in Graph Theory

Connectivity is related to network flow problems. It is the minimum number of vertices or edges that must be removed to disconnect the remaining nodes from each other, and it indicates the robustness of the graph. Two nodes or vertices $A$ and $B$ are said to be connected if there is a path from $A$ to $B$; otherwise, they are disconnected.
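To make these ideas concrete, here is a minimal Python sketch. It is illustrative only: the small example graph and the helper names (degree, shortest_path, is_connected, has_cycle) are hypothetical choices of ours, not taken from the figures discussed above. It stores an undirected, unweighted graph as an adjacency list and computes vertex degrees and their parity, a shortest path by breadth-first search, a connectivity check, and a simple cycle test.

```python
from collections import deque

# A small, hypothetical undirected graph stored as an adjacency list.
graph = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}

def degree(g, v):
    """Number of edges incident on vertex v."""
    return len(g[v])

def shortest_path(g, start, goal):
    """Breadth-first search: fewest-edge path in an unweighted graph."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in g[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no path: start and goal are disconnected

def is_connected(g):
    """True if every vertex can be reached from every other vertex."""
    start = next(iter(g))
    seen = {start}
    stack = [start]
    while stack:
        for neighbour in g[stack.pop()]:
            if neighbour not in seen:
                seen.add(neighbour)
                stack.append(neighbour)
    return len(seen) == len(g)

def has_cycle(g):
    """Depth-first search cycle test for an undirected graph."""
    seen = set()

    def dfs(node, parent):
        seen.add(node)
        for neighbour in g[node]:
            if neighbour not in seen:
                if dfs(neighbour, node):
                    return True
            elif neighbour != parent:
                # Reached an already-visited vertex that is not the one
                # we came from, so the graph contains a cycle.
                return True
        return False

    return any(dfs(v, None) for v in g if v not in seen)

if __name__ == "__main__":
    for v in graph:
        parity = "even" if degree(graph, v) % 2 == 0 else "odd"
        print(f"deg({v}) = {degree(graph, v)} ({parity})")
    print("Shortest path from A to D:", shortest_path(graph, "A", "D"))
    print("Graph is connected:", is_connected(graph))
    print("Graph contains a cycle:", has_cycle(graph))
```

Running the script lists each vertex's degree as even or odd, prints the shortest path from $A$ to $D$ found by breadth-first search, and reports whether the graph is connected and whether it contains a cycle. For weighted graphs, breadth-first search would normally be replaced by an algorithm such as Dijkstra's, since minimising the number of edges is no longer the same as minimising the total weight.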
Scientists from Lawrence Livermore National Laboratory have combined biology with 3D printing to build the very first reactor able to produce methanol from methane continuously at room temperature and pressure. The team removed enzymes from methanotrophs (a type of bacteria that eats methane) and mixed them with polymers that they had printed (or molded) into innovative reactors. Sarah Baker, lead of the project, says the remarkable thing is that the enzymes actually retain as much as 100 percent of their activity inside the polymer. The printed enzyme-embedded polymer is extremely flexible, giving it a lot of room for future development, and it will be useful in a large range of applications, especially those that involve gas and liquid reactions.

Thanks to advances in extraction techniques for both oil and gas, vast amounts of natural gas, composed primarily of methane, are now readily available. Unfortunately, large amounts of this methane are leaked during ventilation or flared during these operations, because the gas is difficult to store and transport in comparison to other, more valuable liquid fuels. Methane emissions currently contribute about one third of the net global warming problem, mainly from these as well as other distributed sources, including agriculture and even landfills.

Present industrial technologies that convert methane to far more valuable products, such as steam reformation, must operate at extremely high temperatures and pressures, require a large number of unit operations, and yield a range of products. Because of this, current industrial technologies have very low efficiency when it comes to converting methane to final products and are only able to operate economically on extremely large scales. A technology that can efficiently convert methane into other hydrocarbons is required in order to create a profitable way to convert sources of methane and natural gas to liquids for further processing.

Thus far, the only known catalyst, industrial or biological, that converts methane to methanol under ambient conditions with high efficiency is the enzyme methane monooxygenase (MMO). The reaction can be carried out by methanotrophs that contain the enzyme, but this approach requires a lot of energy to maintain the metabolism of the organisms. The team instead separated the enzymes from the organism and used the enzymes directly. They discovered that isolated enzymes offer the promise of highly controlled reactions at ambient conditions with higher conversion efficiency and far better flexibility.

Joshuah Stolaroff, an environmental scientist on the team, says that up until now most industrial bioreactors have been stirred tanks, which are inefficient for reactions between gases and liquids. He continues that the concept of printing enzymes into a robust polymer structure opens the door for new kinds of reactors with much larger throughput and lower energy consumption. The team quickly realized that the 3D printed polymer has the potential to be used and reused over a number of cycles, and to be used in higher concentrations than is possible with the conventional approach of the enzyme dispersed in solution.

The research was published in full in the journal Nature Communications and could lead to more efficient conversion of methane for energy production.
Romeo And Juliet, Children Of Two Rival Families, Who Take Their Own Lives.

Romeo and Juliet, A Play by Shakespeare

The play 'Romeo and Juliet', written by William Shakespeare between 1594 and 1596, is a very famous and popular play. The main story of the play is about two lovers, Romeo and Juliet, each from one of two rival families. They try to keep their love secret with many methods, but ultimately they both end up taking their own lives to overcome the obstacles which hinder their love. The couple's death eventually unites the two families. This introduces sacrifice, one of the many themes in the play. The play also contains the themes of love – Romeo and Juliet; hate and conflict – the two rival families; destiny – since the ending is told at the start of the play; and authority, which is shown through the Prince and the leaders of the families.

The play 'Romeo and Juliet' was very popular with its Elizabethan audience. This was due to the theatre being one of the only forms of mass entertainment at the time; it also gave the audience a break from their everyday, normal lives. There are many areas such as poetry, comedy, love and violence covered in 'Romeo and Juliet', which provided entertainment for all sorts of people across the social range. The play also dealt with many everyday themes such as love, violence, hate and conflict. This is why the play proves to be so popular at the present time. Within 'Romeo and Juliet' there are many universal themes.

The prologue is a device from Ancient Greek tragedy which gave a commentary on the play. The prologue provides a lot of information to the audience; for example, it establishes the setting by saying 'fair Verona', which tells us about the conflict. We also learn that the main two characters, Romeo and Juliet, both die at the end of the play. This introduces dramatic irony, which gives the audience a sense of knowledge, as they know what is going to happen whereas the characters don't. It also creates suspense. The prologue in 'Romeo and Juliet' also introduces many key themes such as love, hatred, conflict and destiny. These are expressed through words and phrases such as 'star-crossed lovers', 'death' and 'ancient grudge'.

The opening scene of 'Romeo and Juliet' helps to establish many themes and events which will happen during the play. The first two characters who appear are servants of the Capulet house; they are talking to each other, bragging and using humor. One of the servants, Sampson, says that he 'strikes quickly, being moved', which introduces the theme of macho pride and status. They also discuss the actions they plan to take against the opposing family. They say that 'the quarrel is between their masters and their men', which instantly tells the audience the situation in Verona: it tells how the families are in a civil war and hints towards violence. This shows one of the main themes, which is conflict. The two servants also introduce themes such as hate, sex and violence when one says that he will show himself a 'tyrant', and that when he has 'fought the men, I will be cruel with the maids'. He also says he will cut off their heads, or 'maidenheads', which means he will take their virginity. Overall, the first scene in 'Romeo and Juliet' contains a lot of foreshadowing, which means that many of the play's themes, events and contents are foretold.
Salt. Its appeal comes from the fact that it intensifies all flavors and enhances them more than any other ingredient. It also heightens our perception of sweetness and sourness while minimizing bitterness—a quality that balances the flavor of everything from grapefruit to coffee. Traditionally, native Hawaiians used salt or pa‘akai (to solidify the sea) to season and preserve food, for religious and ceremonial purposes and as medicine. Preserving fish and other seafood was also vital, especially to provide sustenance during long ocean voyages. Hawaiians were the only Polynesians who produced salt from seawater using properly constructed clay pans. Those early Hawaiians had just three condiments to season their food: seaweed, kukui nut, and salt. Not unlike grapes, the character of salt is built on terroir—the unique aspects of a region that influence and shape its creation. In this particular case, it includes the sea from which it originates, the soil that it’s dried upon, the climatic conditions of the locale, and the culture of the people who harvest it, in addition to meroir—how the specific water of a region affects the taste of a food. One of the first salt ponds, more than 1,000 years old, is on Kaua‘i, in the ahupua‘a (watershed) of Hanapepe, and is blessed with the pristine conditions of a site that encompasses both terroir and meroir. The families of Hanapepe have carried on the tradition of salt making following the same techniques as their kupuna (elders) for hundreds of years. Today, there are some 20-plus families who have been granted the rights to harvest salt in Hanapepe by the State of Hawai‘i. This traditional process is what makes Hanapepe salt unique. In Hanapepe, salt is created by accessing underground salt water from a source in the form of a hand-dug well, or punawai. These wells average eight feet deep and four feet wide. Salt water travels from the ocean through underground lava tubes and fills the wells. The salinity of the water in these wells is about two- and-a-half times that of the ocean. The water is then poured—via bucket—from the punawai into shallow holding clay beds or waiku, where it becomes concentrated and left for eight to 10 days, depending on the weather. This water is then transferred to clay drying pans, or punee. It is here that the salt begins to crystalize and forms layers of snow-white salt. This top layer of white salt is raked, drained, and dried. This is used as table salt/finishing salt. Each of these families has a place on Kaua‘i where they collect their own red (edible) clay called ‘alaea, which gets its hue from iron oxide. This red clay is then added to the white salt. It’s this added process that gives Hanapepe salt its distinctive appearance and flavor. Much as the Hanapepe pa‘akai is sought after by those in-the- know, it is not for sale anywhere in the world. Instead, the salt-harvesting families in the group Hui Hana Pa‘akai uphold the inherited concept of communal stewardship—the salt may only be given or traded, but not sold. If you want a taste of this delightful pa‘akai, ask around those with a Hawaiian connection. Chances are, they may be able to obtain these coveted crystals from friends and/or family. Once you sample it, there’s no going back.
It is now possible to peer inside batteries, in real time, to study how their power is degrading. A team of chemists from New York University has developed a way to yield highly detailed, three-dimensional images of the insides of batteries using a technique based on magnetic resonance imaging (MRI). As well as learning more about why batteries lose power and performance over time, the findings could help develop the next-generation technology desperately needed to power future phones and wearables. The work, described in the Proceedings of the National Academy of Sciences journal, focuses on rechargeable Lithium-ion (Li-ion) batteries. These batteries are used in phones, electric cars, laptops, and most other electronics. Many see lithium metal as a promising, highly efficient electrode material, which could boost performance and reduce battery weight. However, during recharging, it builds up deposits – or "dendrites" – that cause performance loss and safety concerns, including fires and explosions. As a result, monitoring the growth of dendrites is crucial to producing high-performance batteries with this material. "One particular challenge we wanted to solve was to make the measurements 3D and sufficiently fast, so that they could be done during the battery-charging cycle," explained NYU Chemistry Professor Alexej Jerschow. "This was made possible by using intrinsic amplification processes, which allow one to measure small features within the cell to diagnose common battery failure mechanisms. We believe these methods could become important techniques for the development of better batteries." Current methods for doing so, developed previously by the same team, used MRI technology to look at lithium dendrites directly. However, this technique resulted in lower sensitivity and limited resolution images, making it difficult to see dendrites in 3D and to understand the conditions under which they build up. With this in mind, the researchers concentrated on the lithium's surrounding electrolytes – substances used to move charges between the electrodes. Specifically, they found that MRI images of the electrolyte became strongly distorted near dendrites, offering a highly sensitive way to measure when and where they grow. What's more, by capturing these distortions, the scientists could construct a 3D image of the dendrites. Alternative methods require the batteries to be opened up which ultimately destroys the dendrite structure and changes the chemistry of the cell. "The method examines the space and materials around dendrites, rather than the dendrites themselves," added Andrew Ilott, an NYU postdoctoral fellow and the paper's lead author. "As a result, the method is more universal. Moreover, we can examine structures formed by other metals, such as, for example, sodium or magnesium–materials that are currently considered as alternatives to lithium. The 3D images give us particular insights into the morphology and extent of the dendrites that can grow under different battery operating conditions."
How did women contribute to victory in World War II? Younger readers can discover the little known history of female pilots known as WASPs in this volume through photos, infographics, timelines, charts and easy-to-read text. As they follow along, readers will learn about the lives of the many women who acted heroically for their country even as they faced hardship and discrimination. With primary source photos, infographics, timelines, and easy-to-read text, BOLT’s All-American Fighting Forces series reveals the little-known history of men and women who acted heroically for their country even as they faced hardship and discrimination. Each nonfiction book focuses on a particular fighting force that represents America’s diverse society and history. Helpful glossaries and indexes in each volume direct readers to the most important terms and topics. The books in the BOLT 1 series feature high-interest topics that students with a 3rd through 5th grade primary reading level will return to again and again. Easy-to-read text helps kids understand the material and enjoy the reading experience. Packed with dynamic photos, charts, diagrams, fun facts, and infographics, BOLT’s attention-grabbing titles are sure to have your readers coming back for more.
LEIPZIG, Germany — The infant’s eyes grow wide at the sight of the eight-legged creature. She’s never been exposed to spiders before, but something inside her signals to pay attention. Demonstrating this instinctual reaction, a new study out of the Max Planck Institute for Human Cognitive and Brain Sciences and Uppsala University shows that even six-month-old babies’ pupils dilate when seeing snakes or spiders. This response, researchers say, adds to the argument that fear of such creatures is facilitated by instinct, rather than just learned. “When we showed pictures of a snake or a spider to the babies instead of a flower or a fish of the same size and colour, they reacted with significantly bigger pupils,” says lead investigator and neuroscientist Stefanie Hoehl in a press release. “In constant light conditions this change in size of the pupils is an important signal for the activation of the noradrenergic system in the brain, which is responsible for stress reactions. Accordingly, even the youngest babies seem to be stressed by these groups of animals.” While this increased attention to the animals means that babies quickly learn to fear them, other studies suggest it is not the fear that is innate, rather it’s the increased arousal and attention to them that is instinctual. Indeed, some of the previous research the authors cite shows that younger babies’ pupils actually dilate more in response to happy faces than fearful ones. In such instances, the authors refer to such dilation as an “arousal” rather than a “stress” response. Other experiments have likewise found that infants are faster at detecting snakes, but not necessarily inherently afraid of them. In an earlier experiment, researcher Vanessa LoBue of Rutgers University helped show that while babies paid more attention to snakes, they weren’t startled more easily when looking at them. “While we find differential responses to snakes early on, meaning they are special, it doesn’t seem to be related to fear early in development,” says LoBue in a BBC article on that experiment. “It’s possible that paying more attention to something might make fear learning easier later on. It facilitates fear learning. As the babies in Hoehl ‘s more recent study were only six months old, and are from a part of the world where there are few poisonous snakes or spiders, the study authors say the reactions –whether they represent stress or just increased interest — must be an ancestral instinct. “We conclude that fear of snakes and spiders is of evolutionary origin. Similar to primates, mechanisms in our brains enable us to identify objects as ‘spider’ or ‘snake’ and to react to them very fast. This obviously inherited stress reaction in turn predisposes us to learn these animals as dangerous or disgusting. When this accompanies further factors it can develop into a real fear or even phobia,” Hoehl says. “A strong panicky aversion exhibited by the parents or a genetic predisposition for a hyperactive amygdala, which is important for estimating hazards, can mean that increased attention towards these creatures becomes an anxiety disorder.” The scientists say that it’s the length of time our ancestors spent around spiders and snakes that makes them scarier to us than other potentially dangerous animals. 
“We assume that the reason for this particular reaction upon seeing spiders and snakes is due to the coexistence of these potentially dangerous animals with humans and their ancestors for more than 40 to 60 million years—and therefore much longer than with today’s dangerous mammals,” says Hoehl. “The reaction which is induced by animal groups feared from birth could have been embedded in the brain for an evolutionarily long time.” The results of Hoehl and her colleagues’ study were published recently in a paper in the journal Frontiers in Psychology.
Reinforcing a behavior is something educators do to make the behavior more likely to occur in the future. The key element of reinforcement is that it is done at the time of the behavior. It’s much different from offering a reward. Rewards mean giving a student something only after they have accomplished something. Reinforcers can be social (i.e., praise) or tangible (a token, ticket, etc.). They should be given for specific positive behaviors. Don’t reinforce a student for not doing something negative (that’s bribery, and it’s a bad idea). Remember that reinforcers don’t work when (a) the student’s need is more powerful than the reinforcer, (b) the student doesn’t have the skill to perform the desired behavior, or (c) the environment is the problem. Educators can use consequences to deter behavior that a student is capable of stopping. Like reinforcers, they don’t help when the student has a stronger need, a lack of skill or is in a problematic environment. Consequences imposed by adults should be logical (they relate to the behavior that resulted in the consequence), reasonable, and short-term (long-term consequences lose their effectiveness), and given immediately in response to the behavior, once the student is calm.
One of the most important and common tasks in virtually every program is the printing of output. Programs use output to request input from a user, to display status messages, and to inform the user of the results of computations that the program performs. For obvious reasons, the manner in which the program displays its output can have a profound effect on the usefulness and usability of a program. When a program prints its output in a neatly formatted fashion, the output is much easier to read and understand. As a result, the ability to write programs that produce attractive output with little effort is an essential feature of most programming languages. C has a family of library functions that provide this capability. All of these functions reside in the stdio library. Although the library contains several functions for printing formatted output, it is likely that you will only use two of them with any frequency. One, the printf (short for "print formatted") function, writes output to the computer monitor. The other, fprintf, writes output to a computer file. They work in almost exactly the same way, so learning how printf works will give you (almost) all the information you need to use fprintf. To use the printf function, you must first ensure that you have included the stdio library in your program. You do this by placing the C preprocessor directive #include <stdio.h> at the beginning of your program. If you are also including other libraries, you will have other #include directives; the order of these does not matter. When you actually use the printf function, that is, when you want your program to print something on the screen, you will place a call to the function in your source code. When we want a function to perform its task we say we "call" it, and a function call is a type of program statement. All function calls have the same basic format. The first part of the call is always the name of the function (in this case, printf). Following the function name is an argument list or parameter list. The terms "argument" and "parameter" are used almost interchangeably in computer programming. An argument list provides information for the function. In the same way that a mathematical function takes some input value, performs a transformation, and produces a result (output), functions in C typically require some input as well. For the printf function, the argument list will provide the information that the function should print to the screen. We generally say that we pass arguments to a function. An argument list has a particular form as well. First, a pair of parentheses always encloses the list of arguments. Inside the parentheses, we separate multiple arguments from each other with commas. Since a function call is a kind of statement, you will also need to follow the call with a semicolon, just after the closing parenthesis of the argument list. We can formalize a function call then as follows: function_name (argument1, argument2, ...); The "..." signifies that there may be more arguments. In fact, there may also be fewer arguments. Different functions require different kinds of input. Some functions need only one parameter, others may need many, and some do not need any parameters at all. In this last case, the argument list would simply be a pair of parentheses with nothing inside. In most cases, any given function needs a particular number of parameters. That is, we might have a function that requires three pieces of information to perform its task.
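To make the call syntax concrete, here is a minimal complete program, offered only as an illustrative sketch (the message it prints is an arbitrary example, not taken from the text). It includes the stdio library and then calls printf with a single argument inside the parentheses, followed by the semicolon that ends the statement:

#include <stdio.h>

int main(void)
{
    /* one call to printf: the function name, then the argument list in parentheses */
    printf ("This program is running.");
    return 0;   /* report successful completion to the operating system */
}

A function that required three pieces of information would be called in exactly the same way, except that its argument list would contain exactly three arguments separated by commas.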
Every time we call that function, we will need to provide exactly three parameters in its argument list. The printf function is unusual, because the number of parameters it needs is variable. It always requires at least one argument, called the format string. Depending on what this argument contains, we may need to pass other parameters to printf as well. In C, a "string" is a sequence of one or more characters that the programmer intends to use as a unit (such as a word or sentence). The first program you saw printed a sentence, "Hello world," to the screen. The series of characters that forms this sentence is a string. To ensure that the compiler processes a string as a string, rather than as identifiers, we use a pair of double quotes around a string constant to show that it is a string. This is quite similar to using single quotes to indicate a character constant. It is important to remember that the single and double quotes are not interchangeable. Later, you will learn the difference between a one-character string, such as "a" and a single character, such as 'a'. For now, you must simply try to remember that they are different. The following mnemonic may help you remember when to use double quotes and when to use single quotes. A character is always just one character, while a string usually contains several characters. Thus, a character is usually "shorter" than a string. The single quote is also "shorter" than the double quote, so you use the "short" with the "short." In its simplest form, a format string is just a series of characters that you want to print to the screen. It will print exactly as it appears in the argument list, except that the double quotes will not appear. This is generally how you would use printf to print a prompt to request that the user of a program enter some data. For example, if you wanted to ask a user to type a number, you might call printf as printf ("Please type an integer then press Enter: "); When the computer executes this statement, the message will appear on the screen: |Please type an integer then press Enter:| You will often use this simplest kind of format string when you want to display some sort of status message or explanation on the screen. For instance, if you wrote a program that you knew would take some time to perform a task, you would probably want to let the user know that the program was working and had not crashed. You might use printf to tell the user what the program is doing: printf ("Searching, please wait..."); On the screen, you would see: |Searching, please wait...| In both of the examples above, once the message has appeared, the cursor will remain at the end of the printed output. If you called printf again, the next message would appear immediately after the first one. Usually, this will not be what you want. Instead, you will want to print the next message on the next line of the screen, but you will need to tell printf to do this; it will not happen automatically. You know that if you are typing that you press the Enter key to get from one line to the next, but, as a programmer, if you press the Enter key inside the double quotes of the format string, the cursor will go to the next line of your source code file. If you then type the double quote, closing parenthesis, and semicolon and then try to compile the program, the compiler will give you a syntax error. Usually, the error message will tell you that you have an "unterminated string constant." 
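As a small illustration of this behavior, consider two back-to-back calls, again only a sketch with made-up messages, assumed to appear inside a program like the one shown earlier:

printf ("Searching, please wait...");
printf ("Done.");

On the screen, the two messages run together on a single line:

|Searching, please wait...Done.|

The tempting fix of pressing Enter inside the format string leads to exactly the "unterminated string constant" error just described.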
This is because the compiler expects to find the closing double quote on the same line as the opening double quote. Obviously, then, we need another way to tell printf to send the cursor to the next line after printing the rest of the characters in the format string. C uses escape sequences within a format string to indicate when we want printf to print certain special characters, such as the character that the Enter key produces. The escape sequence for a newline (which sends the cursor to the beginning of the next line on the screen) is \n. The backslash is called the escape character in this context and it indicates that the programmer wants to insert a special character into the format string. Without the backslash, printf would simply print the 'n'. You might guess that the 'n' is an abbreviation for "newline." C provides several escape sequences, but only a few are common. Others that you might find useful appear in the following table:
|\n||prints a newline|
|\b||prints a backspace (backs up one character)|
|\t||prints a tab character|
|\\||prints a backslash|
|\"||prints a double quote|
If we alter the second example above as follows: printf ("Searching, please wait...\n"); the screen will appear as before, except that now the cursor will be on the next line. Furthermore, if the program contains another printf statement later on, the next output will begin on that new line. Here are a few more examples of printf statements that make use of escape sequences: printf ("Joe's Diner\b"); printf ("Please type \"Yes\" or \"No\":"); The second of these produces the output: |Please type "Yes" or "No":| As you can see, escape sequences give us some ability to format output even when we use the simplest form of the printf function, but printf is actually much more powerful than we have seen. First, assume for a moment that you have declared a variable in your program as follows: int number = 10; Now suppose that you want to prove to yourself that the variable number really does hold the value 10. You want to display the string "The value of number is 10" on the screen. You might try calling printf as follows: printf ("The value of number is number\n"); /* This won't work */ Of course, because printf prints the format string exactly as it appears (except for the escape sequences), what you will see on the screen is: |The value of number is number| The problem is that within the program's source code, we have only one way to refer to the variable number and that is by its name. The names of identifiers are made up of characters, so if we place a variable's name in a format string, printf will simply print that name. Just as C solves the problem of displaying special characters within a format string through the use of escape sequences, it solves this problem using another special notation within the format string. Besides escape sequences, the format string argument can also contain format specifiers. A format specifier is a placeholder that performs two functions. First, it shows where in the output to place an item not otherwise represented in the format string; second, it indicates how printf should represent that item. The "item" is most often a variable, but it can also be a constant. In particular, format specifiers allow us to print the values of variables as well as their names. All format specifiers begin with a percent sign, just as all escape sequences begin with a backslash. What follows the percent sign tells at least what data type printf should expect to print. It can also indicate exactly how the programmer wants printf to display it.
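The following short sketch (the file names and text are invented, shown only to illustrate the table above) combines several escape sequences in one format string; \t separates the columns, \" produces the quotation marks, and \\ produces the single backslash in the path:

printf ("File:\t\"notes.txt\"\n");
printf ("Path:\tC:\\temp\n");

On most screens (assuming the usual eight-column tab stops) this appears approximately as:

|File:   "notes.txt"|
|Path:   C:\temp|

Format specifiers, introduced next, have their own notation built around the percent sign.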
Schematically, we can represent the syntax for a format specifier as: %[flags][width][.precision]type_character The words flags, width, precision, and type_character are placeholders for which a programmer must substitute a specific value. The square brackets that appear in this syntax template mean that the placeholders within are optional. They can appear or not, as the needs of the programmer dictate. The brackets themselves do not appear in the source program. Most of the time, format specifiers will not include any of the optional items. The options give the programmer precise control over the spacing of the output and even over how much of the output printf will display. Thus, most often a format specifier will simply be a percent sign followed by a "type character." The type character is what tells printf what data type to print. The following table shows the most common type characters (several others exist):
|d||integer number printed in decimal (preceded by a minus sign if the number is negative)|
|f||floating point number (printed in the form dddd.dddddd)|
|E||floating point number (printed in scientific notation: d.dddEddd)|
|g||floating point number (printed either as f or E, depending on value and precision)|
|x||integer number printed in hexadecimal with lower case letters|
|X||integer number printed in hexadecimal with upper case letters|
For example, the format string "%d" indicates to printf that it should write an integer in base 10 format, whereas the format string "%s" tells printf to print a string. Notice that the format specifiers tell what kind of thing the programmer wants to display, but they do not tell what value to print. That means that printf will need some additional information in the form of an additional argument. A format string can contain more than one format specifier and the format specifier(s) can appear in conjunction with other text, including escape sequences. Each format specifier that appears in the format string requires an additional argument in the argument list. The additional argument specifies what value printf should substitute for the format specifier. For instance, the following call to printf will display the value of our variable number: printf ("The value of number is %d\n", number); |The value of number is 10| The two arguments to this call to printf combine to tell the function exactly what to write on the screen. What happens is that the printf function actually takes apart the format string and checks each character before displaying it. Whenever it encounters the percent sign, it checks the next character. If the next character is a type character, printf retrieves the next argument in the argument list and prints its value. If the next character is not a type character, printf typically displays the percent sign (strictly speaking, the C standard leaves the behavior for an unrecognized type character undefined, but this is what most implementations do). For example, printf ("The value of number is %q\n", number); |The value of number is %q| Although it is unlikely that you will ever really want to do so, you may be asking, "So what if I want to print something like '%d' on the screen?" If printf finds the sequence %d in a format string, it will substitute a value for it. To print one of the format specifiers to the screen, then, you have to "trick" printf. The following call: printf ("The % \bd format specifier prints a base 10 number.\n"); will print: |The %d format specifier prints a base 10 number.| In the format string, a blank space follows the percent sign, not a type character, so printf will simply display the percent sign.
It then displays the blank space (the next character in the format string), and then it displays the backspace character (specified with the escape sequence \b). This effectively erases the blank space. The next character in the format string is the letter 'd', which printf writes in the place where it originally wrote the blank. Incidentally, standard C also provides a more direct way to do this: the sequence %% in a format string tells printf to display a single percent sign, so printf ("%%d") would print "%d". The backspace trick simply illustrates how printf scans the format string character by character. We mentioned above that we can have more than one format specifier in a format string. If we do this, we will need one additional argument for each of the format specifiers. Furthermore, the arguments must appear in the same order as the format specifiers. Examine the following examples: printf ("The value of number, %d, multiplied by 2, is %d.\n", number, number*2); |The value of number, 10, multiplied by 2, is 20.| As you can see from the previous example, the argument can be an expression such as number * 2 as well as a variable. printf ("The value of number, %d, multiplied by 2, is %d.\n", number*2, number); |The value of number, 20, multiplied by 2, is 10.| In the preceding example, the order of the additional parameters is wrong, so the output is plainly nonsensical. You must remember that printf does not understand English, so it cannot determine which argument belongs with which format specifier. Only the order of the parameters matters. The printf function substitutes the first of the additional arguments for the first format specifier, the second of the additional arguments for the second format specifier, and so on. Thus, printf ("%d + %d = %d\n", number, number*2, number + number*2); |10 + 20 = 30| Because printf uses the additional parameters to give it the values to substitute for the format specifiers, it is essential that, as a programmer, you supply enough parameters. For instance, if we rewrite the preceding example as follows so that we have three format specifiers in the format string, but only two additional parameters: printf ("%d + %d = %d\n", number, number*2); printf will print something bizarre on the screen. Since it has no value for the third format specifier, it will print a garbage value. We have no way to predict exactly what value it will display (it may even coincidentally be the right value!), but it would not be surprising to see output such as: |10 + 20 = -4797| The moral of the story is that if you see strange output when you are using format specifiers, one of the first things you should check is the order and number of the additional arguments. You might also see unexpected output for one other reason. The type character in the format specifier determines completely how printf will display a value. If you include a format specifier with a particular type character in your format string and then give an argument of a different data type, printf will not convert the value for you; it simply interprets whatever it receives as if it had the type that the type character names. For example, printf ("%d", 'a'); will print: |97| because C automatically promotes a character argument to an int when it passes it to printf, and 97 is the ASCII code for 'a'; the promoted value really is an integer, so it agrees with the %d type character. On the other hand, printf ("%f", 5); will not print 5.000000. No conversion takes place here: printf receives the integer 5 but interprets the bits it finds as if they were a floating point number, so the output is unpredictable (the C standard calls this undefined behavior). If you want to print an integer value with %f, convert it yourself with a cast, as in printf ("%f", (double) 5); and, in general, ensure that the data type of each argument matches its type character. The syntax template for a format specifier given above provides for several options. These options allow the programmer to control precisely how output will appear on the screen.
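Before turning to those options, here is a complete program, offered as a sketch that reuses the variable number from the discussion above, pulling the basic rules together; every format specifier has a matching argument of the correct type, supplied in the correct order:

#include <stdio.h>

int main(void)
{
    int number = 10;

    /* one argument per format specifier, in the same order as the specifiers */
    printf ("decimal: %d, doubled: %d, hexadecimal: %x\n", number, number * 2, number);

    /* mixing types is fine as long as each argument matches its type character */
    printf ("as a floating point value: %f\n", (double) number);

    return 0;
}

On the screen, this prints "decimal: 10, doubled: 20, hexadecimal: a" on one line and "as a floating point value: 10.000000" on the next.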
The syntax template appears again here for easy reference: %[flags][width][.precision]type_character Again, the order of the options is just as important as the order of arguments to the printf function. In other words, if you want to use multiple options, they must appear in the same order as they do in the syntax template. Although the first option is flags, it is actually the least common, so we will leave its discussion for last. The width option is the option you will use most frequently. This option is often called a field width. We usually use field widths to line up columns of data to form tables on the screen. Printing things in tables makes output more readable. We can use field widths for other reasons as well. Since they allow us to control exactly where a value will appear on a line of the screen, we can also use field widths when we want to "draw" with characters. The value that the format specifier indicates is sometimes called a "field," so the field width controls how many columns the field will occupy on the screen. The value that we substitute for the width placeholder in an actual format specifier can be one of two things. Most often, it will be an integer. For example, assume that your program contains a declaration for a character variable as follows: char digit = '2'; If the body of the code contains the following statement: printf ("%3c\n", digit); what printf will display is: |  2| Notice that two blank spaces precede the digit '2'; this means the entire field (two blanks and one non-blank character) occupies three columns. In addition, notice that the blanks come before the digit. We call this right justification. Right justification implies that values printed in the same field on successive lines will line up along the right edge of the field. Thus, if we immediately make another call to printf as printf ("%3c\n", digit+1); the screen will look like this:
|  2|
|  3|
The field width you specify is the minimum number of columns that output will occupy. If a complete representation of a value requires more columns, printf will print the whole value, overriding the field width. For example, consider the following call to printf: printf ("%3f\n", 14.5); The field width specifier in this call is 3, but the value 14.5, printed with %f, appears as 14.500000 (by default, %f shows six digits after the decimal point), so it actually requires nine columns (the decimal point requires a column as well as the digits). In this case, the display will be: |14.500000| If we alter the call: printf ("%3f\n%3f", 14.5, 3.2); the screen will show:
|14.500000|
|3.200000|
Notice that because the field width was too small, the right justification fails; the two numbers do not line up along the right hand margin. A field width that allows fewer columns than the data value actually requires will always result in this failure. You may be wondering at this point how justification allows us to print tables. So far, we have only put one value on each line, but if we print several values before each newline, we can use field widths to force them to line up. Consider the following series of printf statements: printf ("%10s%10s\n", "month", "day"); printf ("%10d%10d\n", 1, 10); printf ("%10d%10d\n", 12, 2); What appears on the screen when these statements execute is a table:
|     month       day|
|         1        10|
|        12         2|
Notice in particular that the field widths always start from the last column printed. At the beginning, the cursor is in the extreme upper left corner of the screen. The format string in the first printf statement specifies that the function should print the string "month" using 10 columns. Since "month" only requires 5 columns, printf precedes the string with five blanks. At this point, the cursor will be in the eleventh column.
The next format specifier requires printf to write the string "day", also using 10 columns. The string "day" takes up three columns, so printf must precede it with seven blanks, starting from the current cursor position. In other words, the column count begins just after the last letter of the string "month". In a situation like this, where we are printing only constants, we could simply embed the blanks in the format string and not bother with format specifiers or additional arguments. The sequence of statements: printf (" month day\n"); printf (" 1 10\n"); printf (" 12 2\n"); would produce exactly the same output. In most situations, however, the additional arguments will be variables. As such, we cannot necessarily determine what values they will hold when the program executes. In particular, when we have numeric variables, we will not be able to predict whether the value of the variable will be a one-digit number or a 4-digit number. Only by using field widths can we guarantee that the columns will line up correctly. Occasionally, when we write a program, we cannot even predict how large a field width we will need when a program executes. This implies that the field width itself needs to be a variable, for which the program will compute a value. Although it is rare for this situation to arise, it is worth mentioning how you can accomplish this. Just as was the case when we wanted to print the value of a variable, if we try to use a variable's name as a field width specifier, printf will simply print the name to the screen. For example, assume that you have declared an integer variable named width and have somehow computed a value for it. The call: printf ("%widthd\n", 10); To solve this problem, C uses an asterisk in the position of the field width specifier to indicate to printf that it will find the variable that contains the value of the field width as an additional parameter. For instance, assume that the current value of width is 5. The statement: printf ("%*d%*d\n", width, 10, width, 12); Notice that the order of the additional parameters is exactly the same as the order of the specifiers in the format string, and that even though we use the same value (width) twice as a field width, it must appear twice in the parameter list. Precision specifiers are most common with floating point numbers. We use them, as you might expect from the name, to indicate how many digits of precision we want to print. Compilers have a default precision for floating point numbers. The help facility or user's manual for your compiler will tell you what this default is, although you can also determine its value by simply printing a floating point number with a %f format specifier and counting how many digits there are after the decimal point. Quite often, we want to control this precision. A common example would be printing floating point numbers that represent monetary amounts. In this case, we will typically want just two digits after the decimal point. You can see from the syntax template that a period must precede the precision specifier. The period really is a syntactic device to help the compiler recognize a precision specifier when no field width exists, but the choice of a period serves to help remind the programmer that it represents the number of places after a decimal point in a floating point number. Just as for the field width specifier, the programmer may use a number or an asterisk as a precision specifier. 
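As an illustration of the monetary case mentioned above, here is a short sketch (the prices are invented) that uses a field width together with a precision of 2 to line up amounts in a column; it is assumed to appear inside a program that includes stdio.h:

double price = 3.5;
double tax = 0.29;

printf ("Price: %8.2f\n", price);
printf ("Tax:   %8.2f\n", tax);
printf ("Total: %8.2f\n", price + tax);

The output right justifies each amount in eight columns and shows exactly two places after the decimal point, so the decimal points line up:

|Price:     3.50|
|Tax:       0.29|
|Total:     3.79|

An asterisk can stand in for the 2 here, just as it can for a field width, as described next.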
The asterisk again indicates that the actual value of the precision specifier will be one of the additional parameters to the printf call. For example, printf ("%.2f\n", 3.675); Notice that printf rounds the number when a precision specifier requires that some digits must not appear. printf ("%.*f\n", width, 10.4); assuming that the current value of width is 3, will print: Precision specifiers have no effect when used with the %c format specifier. They do have an effect when printing integers or strings, however. When you use a precision specifier with integer data, one of two things may happen. If the precision specifier is smaller than the number of digits in the value, printf ignores the precision specifier. Thus, printf ("%.1d\n", 20); will print the entire number, "20". Here, the precision specifier is less than the number of digits in the value. On the other hand, if the precision specifier is larger than the number of digits in the value, printf will "pad" the number with leading zeros: printf ("%.4d\n", 20); will print "0020". With string data, the precision specifier actually dictates the maximum field width. In other words, a programmer can use a precision specifier to force a string to occupy at most a given number of columns. For instance, printf ("%.5s\n", "hello world"); will print only "hello" (the first five characters). It is fairly rare to use precision specifiers in this fashion, but one situation in which it can be useful is when you need to print a table where one column is a string that may exceed the field width. In this case, you may wish to truncate the long string, rather than allow it to destroy the justification of the columns. Generally, when this happens, you will use both a field width specifier and a precision specifier, thus defining both the maximum and minimum number of columns that the string must occupy. Thus, if name is a variable that contains a string, printf ("%10.10s\n", name); will force printf to use exactly 10 columns to display the value of name. If the value is less than ten characters long, printf will pad with leading blanks; if the value has more than ten characters, printf will print only the first ten. Flags are fairly uncommon in format specifiers and although several flag options exist, you are most likely to use only two of them. The first is a minus sign, which you will use in conjunction with a field width specifier. By default, whenever printf must pad output with blanks to make up a field width, the blanks precede the representation of the data. This results in right justification, as previously mentioned. In a few situations, you may wish to left justify data. That is, you may want values to line up along the left side, rather than the right side. In most cases, it will be best to right justify numbers and left justify strings. For instance, if you wanted to print a table of student names followed by their test scores, you would probably want the table to appear as follows: Mary Jones 89 Sam Smith 100 John Cook 78 The names are lined up along the left, while the scores are lined up along the right. To print a single line of this table, your format string might look as follows: "%-15.15s%4d"The minus sign indicates left justification for the name, which must occupy exactly 15 columns. The %4d format specifier tells printf to right justify the score and to have it occupy four columns. 
Since we can assume that no score will actually be larger than a three digit number, specifying a field width of four ensures that we will have a blank space between the name and the score. The other flag that you may want to use is a plus sign. This flag is only meaningful with numeric data. By default, when printf displays a number, it prints a minus sign for negative numbers, but it does not print a plus sign for positive numbers. If you want the plus sign to appear, this flag will cause printf to display it. Thus, printf ("%+.3f", 2.5); will print "+2.500".
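To close this discussion of printf, here is a complete program, again only a sketch, that combines flags, field widths, and precision specifiers using the student table and the plus-sign example from above:

#include <stdio.h>

int main(void)
{
    /* left justify each name in exactly 15 columns, right justify each score in 4 columns */
    printf ("%-15.15s%4d\n", "Mary Jones", 89);
    printf ("%-15.15s%4d\n", "Sam Smith", 100);
    printf ("%-15.15s%4d\n", "John Cook", 78);

    /* the plus flag forces a sign to appear even for positive numbers */
    printf ("%+.3f\n", 2.5);

    return 0;
}

The names appear left justified in 15 columns with the scores right justified in 4 columns, exactly as in the table shown earlier, followed by the line +2.500.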
Interval arithmetic uses worst case estimates to get an inclusion for results. This can be essential, if the source data are already not precisely known. But even if they are, the computation may spoil the result. Interval arithmetic uses fixpoint theorems to guarantee that there is a solution in the computed interval. This notebook introduces the interval arithmetic of Euler. For a reference to intervals and algorithms using intervals have a look at the following pages. The closed interval [2,3] is entered with the following notation. Alternatively, you can use plus-minus. If you do not have this key, press F8. Plus-Minus is also available for output. To set this output format permanently, use ipmformat. To reset to the normal format, call ipmformat(false). >ipmformat(true); ~1,2~, ipmformat(false); Operators between intervals yield an interval result. The basic rule is: The result contains all possible results, when the operator is applied to all arguments in all intervals. >~2,3~ * ~4,7~ Note that the following two expressions are different. The first one contains all squares of elements in [-1,1], the second contains all s*t, where -1 <= s,t <=1. The interval ~s~ is a small interval around s. For a simple example, we compute the height of a mountain in 20.2km distance, which appears at an angle of 3.2° above the current level. We assume that all values are known to the significant digits. >d=20.2km±0.05km; a=3.2°±0.05°; d*sin(a) Indeed, if we take the extremal values of our errors, we get a large interval of possible results. Evaluating a longer expression in the interval arithmetic can lead to error propagation. Assume we have the following function. Let us evaluate the function in an interval of length 0.2. As we will see, this is not a good inclusion of the result. One of the main reasons is, that the interval evaluation takes a different x from the interval, whenever the variable x occurs, and includes all these results. The function "mximieval" uses Maxima to compute the derivative of the expression, and with the derivative, we get a much closer inclusion. Sub-dividing the interval yields an even better inclusion. A simple subdivision may lead better results too, but with more effort. However, we do not have to compute the derivative. So this method works for functions too, not only for expressions. If we plot the function, we see, that the maximal value on the interval is at 1.1, and the minimal value is somewhere inside the interval. We can find the place of the minimum with fmin, and then evaluate the expression there to get the minimal value. Thus we get the true limits of the image of [1.1,1.3] under this mapping. We can make a plot of an interval function. To demonstrate this, we disturb the x values -1 to 2 by 0.01, and evaluate our expression. We print the first three values only. >x:=-1:0.01:2; y:=expr(x±0.01); y[1:3] [ ~-0.07,0.07~, ~-0.04,0.098~, ~-0.01,0.13~ ] The plot2d function will plot the y-ranges as a filled curve. The 10-th partial sum of the Taylor series of exp is the following polynomial. 1 1 1/2 1/6 1/24 1/120 1/720 1/5040 1/40320 1/362880 1/3628800 Check with Maxima. 10 9 8 7 6 5 4 3 2 x x x x x x x x x ------- + ------ + ----- + ---- + --- + --- + -- + -- + -- + x 3628800 362880 40320 5040 720 120 24 6 2 + 1 We evaluate this polynomial in -2. Compare to the correct result for the infinite series. We can get an inclusion of the true value from the Taylor expansion with the remainder term in interval notation. 
The error of the series expansion is less or equal the next term in the series, since the series is alternating. Note that we have to use ~p~ because p is not exactly represented in the computer. But we can use the one point interval [-2,-2]. Some more terms. Note that 21! is exactly representable in the computer. The interval arithmetic provides guaranteed inclusions of zeros of functions using the interval Newton method. For this, we need a function f and its derivative f1. Maxima can compute the derivative for us. Compare with the real Newton method. There is a very simple function, which uses a bisection method controlled by interval computation. One famous example occurs, if we imagine a chord around the earth, which is 1m longer than the perimeter of the earth. We lift that chord at one point. How high can we lift it? If we denote the angle of lifting by 2*alpha, the problem is essentially equivalent to the solution of Let us try this. From this, we get the height with elementary trigonometric computations. We have no idea, how good this solution is. So we check with an interval solver, and find that the answer is quite reliable. For larger values of r, we get less reliable results. >r=1.2e15; ibisect("tan(x)-x",0,0.1,y=1/(2*r)); r*(sec(%)-1) Let us try a multidimensional example. We solve First, we need a function. >function f([x,y]) &= [x^2+y^2-1,x^2/2+2*y^2-1] 2 2 2 2 x [y + x - 1, 2 y + -- - 1] 2 First we plot the contour lines of the first and the second component of f, i.e., the solutions of f_1(v)=0 and f_2(v)=0. The intersections of these contour lines are the solutions of f(v)=0. >plot2d("x^2+y^2",r=2,level=1); ... plot2d("x^2/2+2*y^2",level=1,add=1): For the Newton method, we need a derivative matrix. >function df([x,y]) &= jacobian(f(x,y),[x,y]) [ 2 x 2 y ] [ ] [ x 4 y ] Now we are ready to get an inclusion of the intersection points (the zeros of f) using the two dimensional Newton method. The interval Newton method yields a guaranteed inclusion of one of the solutions. [ ~0.8164965809277252,0.816496580927727~, ~0.5773502691896251,0.5773502691896264~ ] The usual Newton method yields the solution as a limit point of the Newton iteration. The Broyden method does the same, but does not deliver a guaranteed inclusion. It does not need the derivative. There are also interval solvers for differential equations. We try The first, very simple interval solver does not use derivatives and is very coarse. We get a very rough inclusion with the step size 0.1. >x=1:0.1:5; y=idgl("y/x^2",x,1); plot2d(x,y,style="|"): With a finer step size of 0.01, the inclusion is better. >x=1:0.01:5; y=idgl("y/x^2",x,1); plot2d(x,y,style="|"): The following solver uses a high order method and Maxima. We get a very good inclusion. >y=mxmidgl("y/x^2",x,1); y[-1], plot2d(x,left(y)_right(y)); Of course, the default ode solver gets a good result too, and very quickly. Let us demonstrate the use of interval methods for the inclusion of a Eigenvalue. We first generate a random postive definite matrix. Then we compute its Eigenvalues in the usual way. We take the first Eigenvalue. Compute an eigenvector. We normalize it in a special way. We now set up the function >function f(x,A) ... n=cols(x); return ((A-x*id(n)).(1|x[2:n])')'; endfunction The following is the starting point of our algorithm. We could use the Broyden method to determine the zero of this function. For the interval Newton method, we define the Jacobian of f. >function f1(x,A) ... 
n=cols(x); B=A-x*id(n); B[:,1]=-(1|x[2:n])'; return B; endfunction Let us expand lx a bit, so that it probably contains a solution of f(x)=0. The interval Newton method now proves a guaranteed inclusion of the Eigenvalue. v=1 means, that the inclusion is verified. We try to compute 1/n is much smaller than 1, and while 1+1/n is correct up to 16 digits, we loose many digits from 1/n. >longestformat; n=12346789123; 1+1/n And thus the following result is inaccurate, if we compare to the correct result. This is called cancellation. Using interval arithmetic, we can see that our result is not good. As a consequence, the following approximation of E is much worse than it should be. The correct error is of the order E/n~=1e-10. But we have only 1e-6. We can compare with a slow long precision computation in Maxima. >&fpprec:30; &:float((1+1b0/@n)^@n) // (1+1/n)^n with 30 digits However, with a binomial expansion and an estimate we get a much more precise evaluation of (1+1/n)^n. >kn:=15; k=~1:kn~; 2+sum(cumprod((1-k/n)/k)/(k+1))+~0,1/kn!~ We can design a function for log(1+x), which works even for very small x. Depending on the size of x, we use a Taylor expansion for log(1+x), or the usual evaluation. The Taylor expansion can be computed by Maxima at compile time. >function map logh (x:scalar) ... ## Compute log(1+x) exactly if abs(x)<0.001 then res=&:taylor(log(1+x),x,0,7); if isinterval(res) then res=res+~-0.001^8/(8*0.999^8),0~; endif; return res; else return log(1+x); endif; endfunction Now we get a good evaluation of (1+1/n)^n. And for intervals we get an exact inclusion of the value.
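For readers who want to see the basic worst-case rule from the beginning of this notebook spelled out outside of Euler, the following C sketch (not part of Euler or its interval library) computes an interval product by taking the minimum and maximum of the four endpoint products. A real interval library would also round the endpoints outward to preserve the guarantee under floating point arithmetic, which this sketch omits:

#include <stdio.h>

typedef struct { double lo, hi; } interval;

/* worst-case product: the result contains s*t for every s in a and t in b */
static interval imul(interval a, interval b)
{
    double p[4] = { a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi };
    interval r = { p[0], p[0] };
    for (int i = 1; i < 4; i++) {
        if (p[i] < r.lo) r.lo = p[i];
        if (p[i] > r.hi) r.hi = p[i];
    }
    return r;
}

int main(void)
{
    interval a = { 2, 3 }, b = { 4, 7 };
    interval c = imul(a, b);
    printf("[%g,%g]\n", c.lo, c.hi);   /* prints [8,21], the product of [2,3] and [4,7] */
    return 0;
}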
Are There Ways to Reduce the Risk of SIDS? Currently, there is no way to prevent SIDS, but there are things that parents and caregivers can do to reduce the risk of a SIDS death. For example, researchers now know that the mother's health and behavior during her pregnancy and the baby's health before birth seem to influence the occurrence of SIDS. Scientists also know that certain environmental and behavioral influences (called risk factors) can make an individual more susceptible to disease or ill health. Although risk factors are not necessarily the cause of a condition, by studying risk factors, scientists are able to better understand a disease or condition, which often leads to detecting a cause. SIDS researchers and clinicians continue to try to identify risk factors that can be modified or controlled to reduce an infant's risk for SIDS. For example, SIDS experts now know that the baby's sleep position, exposure to smoke, and becoming overheated while asleep can increase the infant's risk for SIDS. Infant Sleep Position In April 1992, the American Academy of Pediatrics (AAP) Task Force on Infant Sleep Position issued a statement recommending that infants be placed on their backs to sleep to reduce the risk of SIDS. Then, in 1994, the U.S. Public Health Service, AAP, the SIDS Alliance, and the Association of SIDS and Infant Mortality Programs cosponsored the Back to Sleep campaign, a national public service initiative to disseminate AAP's recommendation that infants be placed on their back to sleep. Between 1992 and 1998, among U.S. infants, stomach (prone) sleeping decreased from more than 70 percent to approximately 20 percent. During that same time frame, the number of SIDS deaths declined by more than 40 percent (Willinger et al., 1998; AAP, 2000; NICHD, 2001). Not surprisingly, most researchers, policymakers, and SIDS professionals agree that this significant decline occurred largely as a result of changing sleep position (AAP, 2000). Rates of SIDS are over twice as high among American Indians and African Americans compared with Whites. Prone sleeping was found to be a significant risk factor for SIDS in an African- American urban sample (Hauck et al., 2002). These authors recommend educational outreach to the African-American community. Another recent study of the relationship between infant sleep position and SIDS concluded that infants placed in an unaccustomed prone or side sleeping position are at a higher risk of SIDS (Li et al., 2003). This ethnically diverse, populationbased, case-controlled study was conducted in 11 counties in California. The health message from this research is that babies should be on their backs for all sleep, including naps. Exposure to Smoke Researchers have concluded that if a mother smokes during or after pregnancy, she is placing her infant at a greater risk for SIDS (AAP, 2000). Some studies suggest that exposure of the newborn to tobacco smoke (whether or not the mother smokes) may be associated with increased risk for SIDS. In a 1997 policy statement, AAP cautioned, "Exposure of children to environmental tobacco smoke is associated with increased rates of lower respiratory illness and increased rates of middle ear effusion, asthma, and SIDS" (AAP 1997). According to AAP (2000), some evidence points to an association of the amount of clothing or blankets on an infant, room temperature, and the time of the year with an increased risk for SIDS. 
The increased risk associated with overheating is particularly clear when infants are placed on their stomachs (prone). AAP cautions that the possible relationship between clothing and climate as stand-alone factors (or as a cluster of environmental risk factors) is less clear. Moreover, although the number of recorded SIDS deaths has been higher in the winter months, that increase may be due to the greater frequency of colds, flu, and other infections during the winter. Researchers and consumer safety advocates continue to look for a possible link between SIDS and soft bedding (Scheers, Dayton, and Kemp, 1998). During 2000, seven major retailers joined with the U.S. Consumer Product Safety Commission (CPSC) to kick off a nationwide campaign promoting safe bedding practices for infants. Many retailers are developing public service campaigns to spread this message to parents and other infant caregivers. The hope is that by circulating this information, infant deaths will be reduced and that those responsible for infant care will receive one consistent message about ensuring a safe sleeping environment for babies. In recent safety alerts, CPSC has warned parents to guard against unfounded claims from manufacturers of some infant bedding materials that the use of certain products can reduce SIDS. Parents and other caregivers need to be aware that there is no product currently available that can guarantee prevention of a SIDS death.
*Body control and movement *Stanislavski-based acting technique *Expression of emotion *Development of imagination *Pantomime applications for film acting* Students will learn how storyboards are used to visualize the action of a film before it is shot. They will also learn how to use illusion techniques to do Green/Blue Screen work in front of a camera. Green/Blue Screen is a film/video technique used to film actors in a studio in front of an empty backdrop and then combine their image with a separately filmed background image to make it appear as if the actors were actually filmed at the location. This technique is also used to combine computer-generated backgrounds, characters, and effects with live action. Students will learn how to use illusion techniques from pantomime in a more realistic acting style to create the illusion that they are interacting with environments, characters, or objects that are not there with them but which will be added later. *Artistic discipline - Skills for the “professional” actor* Students will learn important aspects of being a professional performer, including how actors are expected to conduct themselves in professional situations, and how to deal with other people in the business.
It’s becoming increasingly well-established that microbes behave differently in microgravity than on Earth… that’s one of the justifications for our own Project MERCCURI. Some previous work has focused on the ability of microbes to survive higher-than-normal levels of antibiotics when grown in space, though the mechanism for this is not at all understood. This article describes the “Antibiotic Effectiveness in Space” project which will look at this question in more detail. From the article: The Antibiotic Effectiveness in Space (AES-1) investigation, scheduled to launch in January aboard the first contracted Orbital resupply flight to the space station, is a systematic attempt to probe the reasons for antibiotic resistance in space. “Is the mechanism that’s allowing this to occur some form of adaptation or drug resistance acquisition within the cell, or is it more of an indirect function of the biophysical environment, the changes due to microgravity and mass transport?” asked AES-1 principal investigator David Klaus, Ph.D., of BioServe Space Technologies at the University of Colorado in Boulder.
Environmental Science: Fire Management Environmental science informs decisions about fire management in some ecosystems. In both forested ecosystems and grasslands, land managers face difficult decisions about fire management. Wildfires are difficult to control and potentially destructive to human developments. With this potential destruction in mind, for most of the 20th century, land managers practiced methods of fire suppression, focused on keeping fires from starting and putting them out quickly when they did. In recent years, however, ecosystem scientists called fire ecologists have recognized that fire is a welcome periodic disturbance in some ecosystems. For example, fires that sweep through some ecosystems help the processes of nutrient and matter cycling, and in some cases, species have evolved to depend on occasional fires. When land managers completely eliminate fire, those ecosystems and species suffer. Both grasslands and forests may benefit from an approach to fire management that includes prescribed burns. Prescribed burns are fires that land managers set deliberately and control as they burn. The goal is to provide the ecosystem with the services of a fire without allowing the situation to grow out of control and threaten human lives or urban areas. Because many of the largest, most difficult-to-control fires are a result of the buildup of plant matter in the absence of fire, allowing occasional, controlled burns keeps the fires smaller and less intense and keeps the biomass, or burnable material, to a manageable level.
Acid reflux occurs when the contents of your stomach come up into your esophagus. This is because your lower esophageal sphincter (LES) relaxes and allows stomach acid to enter your esophagus. Your doctor may diagnose you with gastroesophageal reflux disease (GERD) if this happens more than twice a week. Several food-related factors might explain why acid reflux occurs, such as: - the position of your body after eating - the amount of food you eat during a single meal - the type of foods you eat You can control each of these factors by making smart decisions about how and what you eat. Modifying your body position to an upright posture after a meal and eating smaller portions can help prevent reflux. However, knowing which foods to avoid can be a bit more confusing. Part of the issue is that there is still some controversy in the medical community over which foods actually cause reflux symptoms. Despite this lack of consensus, many researchers agree that certain types of foods and beverages are best avoided to prevent indigestion, heartburn, and other symptoms of acid reflux. The following foods, drinks, and ingredients may relax the LES and encourage additional esophagus-irritating acid. The way that you prepare your food can increase the likelihood of developing reflux. Here are some preparations to avoid. High-fat Meals and Fried Foods Fatty foods generally decrease pressure on the LES and delay stomach emptying, boosting your risk for reflux symptoms. Try to decrease your total fat intake by avoiding these high-fat foods: - fried onion rings - French fries - whole milk - high-fat cuts of red meat (such as marbled sirloin or prime rib) - ice cream - sour cream - potato chips - creamy salad dressings The National Institutes of Health (NIH) includes spicy foods on its list of foods that can worsen reflux symptoms. Some studies have suggested that spicy foods, such as those containing chili spice, can cause abdominal pain and burning symptoms in functional gastrointestinal disorders. Yet, a 2010 review in the Journal of Neurogastroenterology and Motility showed that repeated exposure to capsaicin, the active ingredient in chili, doesn’t produce the same discomfort as it does when eaten only occasionally. In fact, researchers noted that eating spicy foods like chili may actually improve GERD symptoms in those who eat them on a regular basis. Consider your individual tolerance level for spice when planning meals. Fruits and Vegetables While fruits and vegetables are generally an excellent and necessary part of your diet, certain types have been shown to exacerbate GERD symptoms. The following fruits and veggies are common offenders: - citrus fruits, such as oranges, grapefruit, lemons, limes, and pineapple - tomatoes and tomato-based foods (such as tomato sauce, salsa, chili, and pizza sauce) - garlic and onions A study published in the American Journal of Gastroenterology found that raw onions significantly increased the number of reflux and heartburn episodes in people who suffer regularly from heartburn. However, raw onions didn’t increase these measures in non-heartburn-sufferers. Some doctors suggest that cooked onions may be easier for GERD patients to tolerate. If in doubt, discuss your tolerance level with your doctor.
Several common drinks also can be problematic for GERD sufferers, including: - coffee and tea - carbonated beverages - citrus and tomato juices Coffee, with or without caffeine, may promote reflux symptoms, though research on both coffee and tea has been contradictory in this area. You should only consume the beverages on this list if you tolerate them well. Other Foods, Medicines, and Supplements A number of other foods and medicines can cause poor function of the LES, leading to GERD symptoms, for example: - mint (peppermint or spearmint) - iron and potassium supplements - aspirin and other pain relievers - alpha blockers - calcium channel blockers You may be tempted to stop taking a medication or supplement if you suspect that it’s increasing your acid reflux or heartburn. Always talk to your doctor before stopping your current medications. Your doctor can help you develop smart eating habits to avoid your GERD triggers.
The word fiction comes from the Latin word fictum, which means "created". This is a good way to remember what fiction is: if it has been created or made up by somebody, it is fiction. Fiction can be written or told, or acted on stage, in a movie, on television or radio. Usually the purpose of fiction is to entertain. However, the dividing line is not always so clear. Fiction with real people or events in it is called historical fiction, because it is based on things that happened in history. This type of fiction is written so that we can imagine and understand what it was like when those people were alive. Reality can be presented through creative writing, and imagination can open the reader's mind to significant thoughts about the real world. Parts of fiction In fiction, there are always characters. There is usually a protagonist, or hero. Sometimes this is a group of people, not one person. You usually support the hero (or heroes). The protagonist has to face some kind of enemy, usually another character called the antagonist. The fight between the protagonist and their enemy is called the conflict. Plot is a literary term. It is the events that make up a story, particularly as they relate to one another. The events may form a pattern. That pattern may be a sequence, through cause and effect, or how the reader views the story, or simply by coincidence. Aristotle on plot In his Poetics, Aristotle considered plot (mythos) the most important element of drama—more important than character, for example. A plot must have, Aristotle says, a beginning, a middle, and an end, and the events of the plot must causally relate to one another as being either necessary or probable. Freytag on plot Gustav Freytag considered plot a narrative structure that divided a story into five parts, like the five acts of a play. These parts are: exposition (of the situation); rising action (through conflict); climax (or turning point); falling action; and resolution. The climax is the most dangerous and exciting part of the plot. For example, if you were on a rollercoaster, the highest part would be the climax. The climax is usually near the end of the story, because the whole story has been building up to it (rising action). In an action drama it is the point when the hero or heroine looks like s/he is about to lose, and is in the greatest danger. Conflict is very important in fiction. Every work of fiction needs a conflict, or problem. There are five basic types of conflict. In modern times, a new one, "Person vs. Technology", has been used. Person vs. Self Person vs. Self is when a character is facing his own fears, confusion or philosophy. Sometimes the character tries to find out who he or she is, and comes to realize it or change it. Sometimes the character struggles to find out what is right or wrong. Although the enemy is inside the character, they can be influenced by outside forces. The struggle of the human being to come to a decision is the base of this type of conflict. Person vs. Person Person vs. Person is when the hero is fighting another person. There is usually more than one time that the hero meets the enemy. For example, if a child is being bullied, that is person vs. person conflict. An example is the conflict between Judah and Messala in Ben-Hur. Person vs. Society Person vs. Society is when the hero's main source of conflict is traditions or ideas. The protagonist is basically fighting what is wrong with the world he lives in.
Society itself is often treated as a single character, just as another person is in person vs. person conflict. An example in literature would be Wuthering Heights by Emily Brontë.

Person vs. Nature

Person vs. Nature is when a character is fighting against forces of nature. Many films focus on this theme. It is also found in stories about trying to survive in places far away from humans, like Jack London's short story To Build a Fire.

Person vs. Supernatural

Person vs. Supernatural is when a character is battling supernatural forces. Sometimes this force is internal, coming from inside the character. Such stories are sometimes used to represent or criticize Freud's theory of id vs. superego. Bram Stoker's Dracula is a good example of this, as well as Frankenstein by Mary Shelley and Christabel by Samuel Coleridge. It is also very common in comic books.

Person vs. Machine/Technology

Person vs. Machine/Technology is when a character struggles against machines, computers or other technology, a type of conflict that has become common in modern fiction.
Food Related Books and Activities
(Originally produced in cooperation with the Summer Library Reading Program)
2014 Fizz Boom Legumes
Book: Mouse Went Out to Get a Snack
Author: Lyn Rossiter McFarland
Publisher: Farrar, Straus and Giroux
Age Level: 3 to 6 years
Mouse is on a hunt for a snack.
- 2 copies of Plates handout
- 20 very firm small paper plates
- cut and glue pictures from handouts onto paper plates to make plates for relay game (make 2 sets of relay plates)
- 10 large paper plates (2 sets with 5 in each set) colored to match the MyPlate colors: blue, orange, red, green, purple
- MyPlate poster (available from the USDA)
Doing the Activity:
- Read the story Mouse Went Out to Get a Snack.
- Show children the MyPlate poster and explain its significance for healthy eating.
- Show children how you have colored a plate to match the colors on MyPlate as you place them in a circle.
- Hold up 1 plate at a time from the set of 10 (cheese, plump plums, baby carrots, fried chicken legs, ears of corn, tasty tacos, assorted jelly beans, cupcakes, jolly gingerbread men, slices of chocolate cake) and ask children which food group the food on the plate belongs to. Example: Holding up the cheese plate, ask, "Do you know which food group the cheese belongs to?" As children answer, "Dairy", put the cheese on the large blue colored plate. Repeat for each food.
- Put the sometimes foods (jelly beans, cupcakes, gingerbread men and chocolate cake) in the center. Tell children these foods do not have a place/plate on the MyPlate poster or in the MyPlate circle because they are sometimes foods. Put these foods in the center of the circle, not on a plate.
- Children are going to be participating in a relay race.
- Divide children into 2 teams and line them up (one child in front of the other) in front of the colored MyPlates (1 set of plates per team).
- Place 10 food plates (1 set) in front of each team. The first person will take one plate and pass the plate over their head; the person behind them will take the plate and pass it through their legs (over and under from child to child).
- The last child in line puts the food on the correct colored plate. Sometimes foods should be placed in the middle of the circle.
- The first team with all 10 food plates on the correct colored MyPlate wins the game.
In Language class, we have been reading the book "Ficciones" by Jorge Luis Borges. So far we have only read the short story "El Sur" ("The South"), so Carol asked us to make a prezi/glogster/document or whatever we wanted about the story. I worked with Jose, Lucía and Rochi, and this is the prezi we made:

We have started working on our new topic for this year in History, The League of Nations in the 1930s. Our teacher, Lenny, asked us to watch a video and answer the following questions.

1. How does the video open? What might the connection between the League and the opening scenes in Poland be?
The video starts by saying how the League had failed at maintaining peace, enforcing disarmament and establishing stability. The opening scenes show Hitler's SS taking over Poland, which was the start of the Second World War, as Germany and the USSR had agreed in a secret deal, the Nazi-Soviet Pact, that they would attack and divide Poland.

2. What problems did Japan face? (Mention ALL of them)
The problems were that its population was booming, which meant more than a million mouths to feed; it was an isolated country that needed to open up trade to get materials; it had no natural resources (agricultural failure) to exploit on its own; there was very high unemployment; it was badly affected by the Depression; and it relied on international networks to import its goods.

3. What was the role of the army in Japan?
Step by step, Japan came under the control of the military. The army came first, and the politicians second. The army controlled education and made learning martial arts compulsory.

4. What did army leaders believe Japan needed?
They thought that expanding towards eastern Asia would benefit the empire, as they would get natural resources and more land for its very large population.

5. What was the value of Manchuria?
It was rich in resources that the Japanese desperately needed for their economy, it meant more land for their population, and they thought it was perfect for the "elimination of the Chinese subhuman race".

6. What happened at Mukden?
There was an explosion on a Japanese railway in Mukden, China, which was planned by the Japanese themselves but blamed on the Chinese, since they wanted everyone to believe that China was out of control and that they needed to intervene; they wanted an excuse to attack Manchuria.

7. What did the League do about it?
The League consulted the Japanese ambassador in Geneva.

8. What was Japan's reaction to the decision of the League?
They thought it was impossible to accept what the report stated and decided to leave the League.

In our Literature class, we started analysing the first poem of this year, "A Birthday" by Christina Rossetti. Our teacher, Pato, asked us to watch a video of an actress reciting the poem and answer the following questions. I worked with Jose, Lucía and Rochi.

a. What is the theme?
The themes are love, relationships, celebration and religion.

b. What is the tone?
The tone is peaceful, calm and joyful.

c. What is the main difference between the two stanzas?
The first stanza revolves more around nature, which can be seen in the imagery the author uses, like "singing bird", "nest in a watered shoot", "apple-tree", "boughs bent with…fruit" and "halcyon sea", while the second talks more about material, luxurious, man-made objects, for example "dais of silk" and "vair".

d. How are the similes in the poem appropriate for the romantic longings the speaker feels?
In the first stanza the author states how happy she feels because "[her] love has come to [her]", using natural images such as "my heart is like a rainbow shell that paddles in a halcyon sea"; the rainbow and the halcyon sea make reference to how peaceful, calm and glad she feels.

e. How is the metaphor of the birthday appropriate?
Because she feels as if she were reborn, as if there were a rebirth in herself and she were starting from scratch, probably because she has found love again.

f. Make a list of religious symbols. What do they mean in the poem?
"And peacocks with a hundred eyes" is a metaphor that means that God sees through the eyes of nature, of the animals. "Doves" are white birds, and white birds represent peace.
A retina that could respond to light was created in a dish using stem cells. This is the first report of a successful attempt to create a functional, light-sensitive retina initiated from stem cells in the laboratory. The potential benefit from this research is to create retinas in the lab that can then be transplanted into the eyes of people who are blind. Many types of blindness are due to the loss of retinal cells, such as the photoreceptors (rods and cones), and these types of cells never grow back after being damaged. Millions of people could have vision restored with the availability of retinal transplants.

Age-related macular degeneration, or AMD, is one of the eye diseases that causes loss of retinal cells and results in blindness. This disease affects a large percentage of the elderly population. A retinal transplant derived from stem cells would be a way to cure AMD and return sight to the affected individual. Another benefit of having retinas grown in dishes in the lab is they can be used as models for the study of various retinal diseases. The retina which was developed in the dish in this study also recapitulated the natural process of development. This means scientists can study the process of retinal development with the stem-cell-produced model.

A team of scientists from Johns Hopkins University carried out the research to create a three-dimensional retina with light-sensitive photoreceptors using stem cells. Dr. M. Valeria Canto-Soler, an assistant professor of ophthalmology, was the lead scientist in this effort, and a report of this research was recently published in Nature Communications. The stem cells which were used in this experiment were induced pluripotent stem cells, or iPSCs, which had the same ability to differentiate into different types of cells as embryonic cells. The stem cells used in the study originated from skin cells which were "tricked" into reverting to behave like embryonic cells.

One of the remaining problems which will need to be solved is how to get the retina to hook up to the brain in such a way as to create proper visual perceptions. If retinal transplants are to be successful in producing sight, the retinal cells, called ganglion cells, must have axons that form an optic nerve to carry the information to the brain. The brain, in turn, must "read" the information from the retina correctly.

The successful creation of an organ of the body, and especially a part of the nervous system, can provide lessons for the creation of other parts of the body. A liver, a kidney, and possibly even a heart could be the next organs which can be produced in the laboratory. A benefit from using stem cells to make organs for transplantation would be that the stem cells could come from the recipient's own body, making immune response rejection less likely. Creating a retina with stem cells in a dish is something which was only a scenario in science fiction in the recent past. The report of successfully inducing a dish of stem cells to create a functioning retina is a definite signpost of the value of stem cell research.

By Margaret Lutze
Hand, foot and mouth disease can be a mild or a very serious illness. It is caused by a virus. Hand, foot and mouth disease Anyone can get hand, foot and mouth disease, but it is most common in children under 10. Preschool children tend to get sicker. If your child has hand, foot and mouth disease, they’ll have painful sores in their mouth and a rash with blisters on their hands and feet. Human hand, foot and mouth disease is not related to foot and mouth disease in animals. Hand, foot and mouth disease appears most often in warm weather – usually in the summer or early autumn. What to do if you’re pregnant Hand, foot and mouth disease is rare in healthy adults, so the risk of infection during pregnancy is very low. And if a pregnant woman gets the disease, the risk of complications is also very low. However, if you catch the virus shortly before you give birth, the infection can be passed on to your baby. Most babies born with hand, foot and mouth disease have only mild symptoms. In very rare cases it is possible that catching hand, foot and mouth disease during pregnancy may result in miscarriage or could affect your baby’s development. For this reason, if you have contact with hand, foot and mouth disease while you’re pregnant, or if you develop any kind of rash, see your doctor or lead maternity carer – just to be safe. Mild fever is usually the first sign of hand, foot and mouth disease. This starts 3–5 days after your child has been exposed to the disease. After the fever starts, your child may develop other symptoms, including: - painful red blisters on their tongue, mouth, palms of their hands, or soles of their feet - loss of appetite - a sore throat and mouth - a general feeling of weakness or tiredness. The disease is usually mild and lasts 3–7 days. It can be confused with: - chickenpox (but the chickenpox rash is all over the body) - cold sores in a child’s mouth. The only medicine recommended for hand, foot and mouth disease is paracetamol. Most blisters disappear without causing problems. In the mouth, however, some may form shallow, painful sores that look similar to cold sores. If your child’s mouth is sore, don’t give them sour, salty or spicy foods. Make sure they drink plenty of liquids to avoid getting dehydrated. Call Healthline 0800 611 116 if you are unsure what you should do. How hand, foot and mouth disease is spread Hand, foot and mouth disease is spread by coughing or sneezing, or by contact with mucus, saliva, blisters or the bowel movements of an infected person. Children are contagious (‘catching’) for around 7–10 days. Keep your child home from childcare or school until blisters have dried. If blisters are able to be covered and the child is feeling well, they will not need to be excluded. - Frequent hand washing helps decrease the chance of becoming infected. - Staying away from others who have the disease and not sharing toys during the infection also helps prevent the disease.
A large spider monkey with long, thin limbs and tail, dense woolly fur, and a large protruding belly, native to the rainforests of SE Brazil.
- ‘There are believed to be only 500 woolly spider monkeys alive today, making them in greater danger that the mountain gorilla.’
- ‘Her research focuses on the dietary ecology and digestive physiology of Primates, both humans and non-human, and has involved her in fieldwork with howler monkeys, woolly spider monkeys and chimpanzees as well as forest-based human societies in both the Brazilian Amazon and Papua New Guinea.’
- ‘Other endangered primates include the golden lion tamarins of Brazil, the woolly spider monkeys of Peru and the lemurs of Madagascar.’
- ‘Strier suggests that folivory is a result of seasonal fruit shortages and the woolly spider monkey needs to eat leaves to survive.’
- ‘The pelage of the woolly spider monkey is short and dense, but their limbs are long and slender.’
- ‘Following a troop of female muriquis, or woolly spider monkeys, through the forest canopy was especially exhilarating.’
- ‘Also known as muriqui, the woolly spider monkey is the largest primate in the Americas as well as one of the most endangered in the world.’
- ‘‘I am working with local landowners to try and restore the natural habitat of the woolly spider monkey by planting more trees,’ says Dr Boubli.’
- ‘Prospects for the survival of the woolly spider monkey improved with the birth in November of the first baby bred in captivity.’
- ‘The muriquis used to be called woolly spider monkeys, but that name is out of favor now, and muriqui, the monkey's Indian name, is in.’
- ‘The monkeys on the 957-hectare Caratinga Biological Station, where Veado worked, represent about a quarter of the woolly spider monkeys living in the wild.’
- ‘The woolly spider monkey, or muriqui, is the largest nonhuman primate living in the New World.’
- ‘The flagship species of this ecosystem is also the largest Neotropical primate species, the woolly spider monkey or muriqui, endemic to this ecosystem, and on the verge of extinction with critically endangered status.’
- ‘My field work has involved observations of the dietary behavior of various species of howler monkeys, spider monkeys, capuchins and tamarins as well as woolly spider monkeys.’
- ‘The Muriqui, or woolly spider monkey, is among the worlds rarest mammals, classified by IUCN as Endangered due to loss of their natural Atlantic Rainforest home.’
Modern Medicine and Physiology
Dr. C. George Boeree

Technology and the brain

In the 1800's, anatomy had reached a point of sophistication that allowed medical artists to make such intricate drawings that modern surgeons could still benefit from them. But there was always a limitation involved: It was one thing to carve up a dead brain -- quite another to actually see a living brain at work. In the late 1800's and throughout the 1900's, we see some remarkable efforts at exploring the brain without removing it from its owner:

First, Wilhelm Konrad Roentgen invents the x-ray in 1895. A remarkable tool for physicians and researchers, it proves less useful when it comes to the soft tissues of the brain. In 1972, Godfrey Hounsfield added the computer to the x-ray and developed computerized (axial) tomography -- the CT (or CAT) scan -- which combines multiple x-ray images into a far more detailed three-dimensional image.

In a very different approach, Hans Berger developed the first electroencephalogram (EEG) in 1929. In 1932, Jan Friedrich Tonnies created the first modern version, with its moving paper and vibrating pens. The EEG records the minute, coordinated electrical pulses of large numbers of neurons on the surface of the cortex.

It was only a matter of time before researchers added the computer to the equation. In 1981, the team of Phelps, Hoffman, and TerPogossian developed the first PET scan. The PET scan (positron emission tomography) works like this: The doctor injects radioactive glucose (that's sugar water) into the patient's bloodstream. The device then detects the relative activity level -- that is, the use of glucose -- of different areas of the brain. The computer generates an image that allows the researcher to tell which parts of the brain are most active when we perform various mental operations, whether it's looking at something, counting in our heads, imagining something, or listening to music!

In 1937, Isidor I. Rabi, a professor at Columbia University, noticed that atoms reveal themselves by emitting radio waves after first having been subjected to a powerful magnetic field. He called this nuclear magnetic resonance or NMR. This was soon used by scientists to identify chemical substances in the lab. It would be many years later that a Dr. Raymond Damadian would recognize the potential of NMR for medicine.

Damadian is an interesting and controversial person. He was born in New York City in 1936. When he was only eight years old, he was accepted by the Juilliard School of Music. He was awarded a scholarship to the University of Wisconsin at Madison, and then went on to medical school at the Albert Einstein College of Medicine of Yeshiva University in the Bronx. He received his MD in 1960 at the tender age of 24.

From there, he began medical research at Brooklyn's Downstate Medical Center. Investigating tumors in rats, he noted that the NMR signals from cancerous tumors were significantly different from the signals from healthy tissue. He hypothesized that the reason was the larger number of water molecules (and therefore hydrogen atoms) in these tumors. His findings were published in Science in 1971.

Realizing that this was the basis for a non-surgical way to detect cancer, he got the idea for a large-scale NMR device that could record the radio waves coming from all the atoms in a human being. You only had to create a magnetic field big enough!
In 1977, he and his students built a temperamental prototype of the modern MRI -- magnetic resonance imaging -- which they called the Indomitable. He tried it, unsuccessfully, on himself first, then on a graduate student named Larry Minkoff. The result was a mere 106 data points (recorded first in colored pencils!) describing the tissues of Minkoff's chest. The Indomitable is now in the Smithsonian.

Damadian's story continues with the filing of a patent and years of litigation trying to fight off companies like Hitachi and General Electric who disputed his patent. He has also stirred up controversy by supporting the work of so-called "creation scientists."

There have been a number of other scientists studying NMR who were in fact heading in the same direction as Damadian. One person in particular with a legitimate claim to co-discovery is Paul Lauterbur. He developed the idea of using small NMR gradients to map the body while at SUNY Stony Brook. In 1973, he used his technique on a test tube of water, and then used it on a clam. His work was published in Nature, and it is his technique that is favored today. Lauterbur and British MRI researcher Peter Mansfield were awarded the Nobel Prize in 2003.

The MRI works like this: You create a strong magnetic field which runs through the person from head to toe. This causes the spinning hydrogen atoms in the person's body to line up with the magnetic field. Then you send a radio pulse at a special frequency that causes the hydrogen protons to spin in a different direction. When you turn off the radio pulse, the protons will return to their alignment with the magnetic field, and release the extra energy they took in from the radio pulse. That energy is picked up by the same coil that produced the energy, now acting like a three-dimensional antenna. Since different tissues have different relative amounts of hydrogen in them, they give a different density of energy signals, which the computer organizes into a detailed three-dimensional image. This image is nearly as detailed as an anatomical photograph!

On the more active side, direct electrical stimulation of the brain of a living person became a fine art in the 1900's. In 1909, Harvey Cushing mapped the somatosensory cortex. In 1954, James Olds produced a media sensation by discovering the so-called "pleasure center" of the hypothalamus. By the end of the century, the specialized areas of the brain were pretty well mapped.

Brain surgery also became more effective. In the process of looking for surgical relief for extreme epilepsy, it was discovered that cutting the corpus callosum, which joins the two hemispheres of the cerebral cortex, greatly improved the patients' condition. Roger Sperry was then able to discover the various differences between the left and right hemisphere in some of the most interesting studies in history. He was awarded the Nobel Prize for his work in 1981.

The other aspect of technology is its use in attempting to heal people with mental illness. Although extremely controversial to this day, the evidence strongly suggests that electroshock therapy, first used by Ugo Cerletti and Lucio Bini in 1938, can be effective in the care of very depressed patients. Electroshock (also known as electro-convulsive therapy or ECT) involves sending a strong electrical current through an anesthetized patient's brain. When they awake, they cannot seem to recall several hours of time before the procedure, but also feel much less depressed. We aren't sure why it works.
Less effective and much more radical is the lobotomy, first used on human beings by Antonio Egas Moniz of the University of Lisbon Medical School, who won the Nobel Prize for his work in 1949. The lobotomy was turned into a mass-production technique by Walter Freeman, who performed the first lobotomy in the U.S. in 1936.

The psychopharmacological explosion

In the 1800's, the basic principles of the nervous system were slowly being unraveled by people such as Galvani in Italy and Helmholtz in Germany. Toward the end of the 1800s, biologists were approaching an understanding of the details. In particular, Camillo Golgi (who believed that the nervous system was a single entity) invented a staining technique that allowed Santiago Ramon y Cajal to prove that the nervous system was actually composed of individual neurons. Together, they won the Nobel Prize in 1906.

The British biologist Sir Charles Sherrington had already named what Ramon y Cajal saw: the synapse. He, too, would win a Nobel Prize for his work on neurons with Edgar Douglas Adrian. In 1921, the German biologist Otto Loewi completed the picture by discovering acetylcholine and the idea of the neurotransmitter. For this work, he received the Nobel Prize, shared with Henry Hallett Dale. Interestingly, acetylcholine is a relative of muscarine -- the active ingredient of some of those mushrooms that some of our ancient ancestors liked so much.

In 1946, another biologist, von Euler, discovered norepinephrine. And, in 1950, Eugene Roberts and J. Awapara discovered GABA.

In the early part of the 1900's, we see the beginnings of psychopharmacology as a medical science, with the use of bromide and chloral hydrate as sedatives. Phenobarbital enters the picture in 1912 as the first barbiturate. In the second half of the 1900s, with the basic mechanisms of the synapse understood, progress in the development of psychoactive drugs truly got underway. For example...

In 1949, John Cade, an Australian psychiatrist, found that lithium, a light metal, could lessen the manic aspect of manic-depression.

In 1952, a French Navy doctor, Henri Laborit, came up with a calming medication which included chlorpromazine, which was promoted as the antipsychotic Thorazine a few years later.

Imipramine, the first tricyclic antidepressant, was developed at Geigy Labs by R. Kuhn in the early 1950's, while he was trying to find a better antihistamine!

In the late 1950's, Nathan Kline studied the use of reserpine in 1700s India, and found it reduced the symptoms of many of his psychiatric patients. Unfortunately, the side effects were debilitating.

In 1954, the drug meprobamate, better known as Miltown, became available on the market. Its chemical foundation was discovered a decade earlier by Frank Berger, while he was trying to discover a new antibiotic. He found a tranquilizer instead!

Iproniazid (an MAOI antidepressant) was developed in 1956 by the Hoffman-LaRoche pharmaceutical company for tuberculosis patients. It appeared to cheer them up a bit! Although it was banned because of side effects, it was the first in a long series of antidepressants. Leo Sternbach also worked for Hoffman-LaRoche, where he discovered Librium (chlordiazepoxide) in the late 1950's and Valium (diazepam) in 1959 -- two of the most useful and used psychoactive drugs ever.

In 1974, D. T. Wong at Eli Lilly labs discovered fluoxetine -- Prozac -- and its antidepressant effects. It was approved by the FDA in 1987.
This substance and others like it -- known as the selective serotonin re-uptake inhibitors or SSRIs -- would dramatically change the care of people with depression, obsessive-compulsive disorder, social anxiety, and other problems. In the 1990's, new neuroleptics (antipsychotic drugs) such as clozapine were developed which addressed the problems of schizophrenia more completely than the older drugs such as chlorpromazine, and with fewer side effects.

What is the future going to be like, in regards to psychopharmacology? Some say the major breakthroughs are over, and it is just a matter of producing better variations. But that has been said many times before. Biochemistry is still progressing, and every year brings something new. The rest of us can only hope that many more and better medications with psychiatric applications will be found.

Genetics and the human genome

The science of genetics begins in the garden of an Austrian monk named Gregor Mendel. In 1866, he published the results of his work suggesting the existence of "factors" -- which would later be called genes -- that are responsible for the physical characteristics of organisms. A Columbia University professor, Dr. Thomas Morgan, provided the next step in 1910 by discovering that these genes are in fact carried within the structures called chromosomes. And in 1926, Hermann J. Muller discovered that he could create mutations in fruit flies by irradiating them with X-rays.

Finally, in 1953, Dr. Rosalind Franklin and, independently, Dr. James D. Watson and Dr. Francis Crick outlined the structure of the DNA molecule. And Dr. Sydney Brenner completed the picture by helping to discover messenger RNA and the basic processes of protein construction.

The next phase of genetics involves the mapping of the DNA: What is the sequence of bases (A, T, G, and C) that make up DNA, and how do those sequences relate to proteins and ultimately to the traits of living organisms? Two researchers, Frederick Sanger and Walter Gilbert, independently discovered a technique to efficiently "read" the bases, and in 1977, a bacteriophage virus became the first creature to have its genome revealed.

In the 1980's, the Department of Energy revealed a plan to bring together researchers world-wide to learn the entire genome -- of human beings! The NIH (National Institutes of Health) joined in, and made Dr. James Watson the director of the Office of Human Genome Research.

In 1995, Dr. Hamilton Smith and Dr. J. Craig Venter read the genome of a bacterium. In 1998, researchers published the genome of the first animal, a roundworm. In 2000, they had the genome of the fruit fly. And in the same year, researchers had the genome sequence of the first plant.

In June of 2000, at a White House ceremony hosted by President Clinton, two research groups -- the Human Genome Project consortium and the private company Celera Genomics -- announced that they had nearly completed working drafts of the human genome. In February of 2001, the HGP consortium published its draft in Nature and Celera published its draft in Science. The drafts described some 90% of the human genome, although scientists knew the function of less than 50% of the genes discovered.

There were a few surprises: Although the human genome is made up of more than three billion bases, the number of genes it contains turned out to be only about a third of what scientists had predicted -- and only about twice the number found in the roundworm. It was also discovered that 99.9% of the sequences are exactly the same for all human beings. We are not as special as we like to think!
The human genome project is not just an intellectual exercise: Knowing our genetic makeup will allow us to treat genetic illnesses, custom design medicines, correct mutations, more effectively treat and even cure cancer, and more. It is an accomplishment that surpasses even the landing on the moon. Copyright 2002, C. George Boeree
In 1990, the NASA/ESA Hubble Space Telescope was deployed into Low Earth Orbit (LEO). As one of NASA's Great Observatories – along with the Compton Gamma Ray Observatory, the Chandra X-ray Observatory, and the Spitzer Space Telescope – this instrument remains one of NASA's larger and more versatile missions. Even after twenty-seven years of service, Hubble continues to make intriguing discoveries, both within our Solar System and beyond.

The latest discovery was made by a team of international astronomers led by the Max Planck Institute for Solar System Research. Using Hubble, they spotted a unique object in the Main Asteroid Belt – a binary asteroid known as 288P – which also behaves like a comet. According to the team's study, this binary asteroid experiences sublimation as it nears the Sun, which causes comet-like tails to form.

The study, titled "A Binary Main-Belt Comet", recently appeared in the scientific journal Nature. The team was led by Jessica Agarwal of the Max Planck Institute for Solar System Research, and included members from the Space Telescope Science Institute, the Lunar and Planetary Laboratory at the University of Arizona, the Johns Hopkins University Applied Physics Laboratory (JHUAPL), and the University of California at Los Angeles.

Using the Hubble telescope, the team first observed 288P in September 2016, when it was making its closest approach to Earth. The images they took revealed that this object was not a single asteroid, but two asteroids of similar size and mass that orbit each other at a distance of about 100 km. Beyond that, the team also noted some ongoing activity in the binary system that was unexpected. As Jessica Agarwal explained in a Hubble press statement, this makes 288P the first known binary asteroid that is also classified as a main-belt comet. "We detected strong indications of the sublimation of water ice due to the increased solar heating – similar to how the tail of a comet is created," she said.

In addition to being a pleasant surprise, these findings are also highly significant when it comes to the study of the Solar System. Since only a few objects of this type are known, 288P is an extremely important target for future asteroid studies. The various features of 288P also make it unique among the few known wide asteroid binaries in the Solar System. Basically, other binary asteroids that have been observed orbited closer together, were different in size and mass, had less eccentric orbits, and did not form comet-like tails.

The observed activity of 288P also revealed a great deal about the binary asteroid's past. From their observations, the team concluded that 288P has existed as a binary system for the past 5000 years and must have accumulated ice since the earliest periods of the Solar System. As Agarwal explained: "Surface ice cannot survive in the asteroid belt for the age of the Solar System but can be protected for billions of years by a refractory dust mantle, only a few meters thick… The most probable formation scenario of 288P is a breakup due to fast rotation. After that, the two fragments may have been moved further apart by sublimation torques."

Naturally, there are many unresolved questions about 288P, most of which stem from its unique behavior. Given that it is so different from other binary asteroids, scientists are forced to wonder whether it is merely coincidental that it presents such unique properties.
And given that it was found largely by chance, it is unlikely that any other binaries that have similar properties will be found anytime soon. “We need more theoretical and observational work, as well as more objects similar to 288P, to find an answer to this question,” said Agarwal. In the meantime, this unique binary asteroid is sure to provide astronomers with many interesting opportunities to study the origin and evolution of asteroids orbiting between Mars and Jupiter. In particular, the study of those asteroids that show comet-like activity (aka. main-belt comets) is crucial to our understanding of how the Solar System formed and evolved. According to contrasting theories of its formation, the Asteroid Belt is either populated by planetesimals that failed to become a planet, or began empty and gradually filled with planetesimals over time. In either case, studying its current population can tell us much about how the planets formed billions of years ago, and how water was distributed throughout the Solar System afterwards. This, in turn, is crucial to determining how and where life began to emerge on Earth, and perhaps elsewhere! Be sure to check out this animation of the 288P binary asteroid too, courtesy of the ESA and Hubble:
Pomegranate is a mild-temperate to sub-tropical fruit bush or tree you can grow from seeds, straight from the fruit you purchase in the grocery store. Established pomegranate plants are drought tolerant and can withstand temperatures down to 12 degrees Fahrenheit, but the plant may not produce fruit in areas with extended cold periods. Pomegranates will grow in most types of soil, as long as the soil offers good drainage. Remove the seeds from your pomegranate, clean them with water and place them on a paper towel to dry. Pomegranate bushes or trees that are grown from grocery fruit seeds may or may not produce fruit. They will produce an ornamental plant that grows well indoors in containers, or outside in the garden. Scrape the sides of each seed lightly with a plant file. Scarification helps the pomegranate seeds germinate. Fill a small container 1 inch from the top with a quality potting soil. Look for mixtures that contain peat, vermiculite and perlite. Moisten the soil before you plant the seeds. Plant the dry pomegranate seeds 1/4 inch deep into the center of the container. Water the seeds well and keep them moist until germination. Make sure that the seed container receives at least six hours of full sun daily. Seedlings can begin to sprout within six to eight weeks. Transplant the seedlings to a sunny location in your garden during early spring or early fall. You can also keep them indoors in containers year-round. Pomegranate plants have thorns, so be careful as you repot them into larger containers or in the garden.
"What’s Eating You, Lazybones?" by Mandy Foss Reprinted with permission from the Tar Heel Junior Historian. Fall 2008. Tar Heel Junior Historian Association, NC Museum of History In the beginning of the twentieth century, a battle was fought in North Carolina and ten other southern states. From 1909 to 1914, doctors, public health officials, and northern businessmen worked to destroy what they called the "germ of laziness." They believed such a germ caused many of the South's problems—poverty, a sickly population, and economic underdevelopment. But the germ these people were attacking wasn't a germ at all. It was a worm—the hookworm. Hookworm disease was one of three major diseases that had plagued the South since the early 1800s. Along with malaria and yellow fever, hookworm disease came from Africa during the slave trade. Hookworms are parasites. Parasites survive by feeding off a living host. Less than half an inch long, white adult hookworms attach themselves to a person's small intestine and feed on blood. A female hookworm can lay thousands of eggs every day. The eggs exit the host's body through feces. Once the eggs hatch, hookworm larvae look for a new host. In the early 1900s, many North Carolinians did not have bathrooms inside or outside their homes. People used the great outdoors or chamber pots—containers kept under the bed mainly for nighttime bathroom breaks and dumped outside the next morning. They often did not wear shoes because of the warm climate and a lack of money. When southerners walked barefooted through feces-contaminated soil, they might touch hookworm larvae. The threadlike larvae could enter a person's body through the skin—usually the foot. This caused a rash called ground itch. The larvae traveled through the body to the small intestine. One historian estimated that between 1865 and 1910, 40 percent of the South's population suffered from hookworms. Thousands of worms could live in a person at one time. All these parasites and their constant bloodsucking made an infected person pale and tired. To northerners, southerners with hookworm disease appeared lazy and stupid. But hookworms caused serious health problems. They stunted growth in children and weakened resistance to germs. Weakened bodies could not fight off other—often fatal—diseases. Physician Charles Stiles believed that ridding the South of hookworm disease would solve many problems. He thought southerners, once cured, would become more productive workers and citizens. In 1902, after conducting infection surveys, Stiles published research showing widespread hookworm infections. Based on this research, in 1909 northern philanthropist John D. Rockefeller donated $1 million to form the Rockefeller Sanitary Commission to Eradicate Hookworm Disease, or RSC. RSC's huge health campaign had three goals: to survey eleven southern states to determine how many people had hookworm disease; to cure the infected; and to remove the disease source by ending soil pollution. Doctors, educators, and public health inspectors went county-by-county to test for and treat hookworms. Those infected were dosed with thymol pills to kill the worms and Epsom salts to flush them out of the body. To prevent reinfection, people were encouraged to wear shoes and to build and use sanitary outhouses (or outdoor bathrooms similar to today's portable toilets). Sanitary outhouses kept waste matter away from people, animals, insects, and nearby water supplies. By 1915, RSC ended its southern hookworm campaign. 
Though reformers had some success battling hookworms, the South's poverty and underdeveloped economy remained. The campaign did, however, make people more aware of the link between cleanliness and the spread of disease. More southerners began wearing shoes, building sanitary outhouses, and taking regular "worming" treatments. And while the germ of laziness was never defeated outright, it has been on the run ever since. "Hookworm" in the North Carolina Digital Collections. "Hookworm in the United States" in NC LIVE. "Hookworm" in UNC-Chapel Hill's "Documenting the American South." "Hookworm in the Southern States" in WorldCat. 1 January 2008 | Foss, Mandy
Here are simple addition & subtraction write the room activities. There are 14 addition and 14 subtraction sentences. Place the gingerbread cookies around the room and get your students up and moving while reinforcing math skills. Both activities have a recording sheet and can be used in math centers or math interventions. Enjoy!
How to Write a Story Ladder Layout

While elementary school teachers often use story ladders to illustrate plot in books, they're also a useful tool for fiction writers seeking to organize their projects. A story ladder organizes the events of a plot through an illustration of a ladder, with each rung representing a significant development. Whether you're preparing to write a short story or the next best-seller, designing a story ladder can help you create a gripping sequence of events for readers.

Create the beginning of your ladder by drawing two vertical, parallel lines on a piece of paper. These will be the rails of your ladder. Then, beneath the rails, write a brief sentence describing what your story is about. Think about what details from your story idea would best catch readers' attention and encourage them to move up the ladder of the plot by reading on.

Draw the first rung on your ladder. Inside it, describe the main character's condition during the exposition stage of the plot, including their current circumstances and areas of discontent. If you were making a story ladder for "Cinderella," for example, this rung might say that Cinderella is unhappy living with and serving her wicked stepmother and wants something more from life.

Leave a space on your ladder and draw the next rung. This rung is for the inciting incident, the event that pushes the character out of her present state and into the main action of the story. Write a sentence inside the rung describing what happens at this point in the plot. In a Cinderella story ladder, the next rung might describe the arrival of Cinderella's fairy godmother, who grants her wish to go to the ball.

Add the next rung to your ladder. This one will represent the complication in the plot that makes life more difficult for your main character as the action rises. Write a sentence describing what happens that gets in the way of your character's primary goal. In "Cinderella," it might be that she must return to her normal life at midnight after falling in love with the prince, and loses her glass slipper as she flees the ball.

Draw the next rung on the ladder. This one is the climax, the highest point the action reaches in the story. Write down a sentence that describes what the climax in your story will be and what is at stake for your main character. On a "Cinderella" story ladder, the climax would be the arrival of the prince at Cinderella's house and the discovery that the glass slipper fits.

Draw the last rung, which will represent the plot's resolution. This is the aftermath of the climax, in which the intensity of the story's action begins to fall. In your sentence, describe where readers find your main character at the end of the story and how she has changed as a result of the events. In "Cinderella," the resolution is Cinderella's departure from her stepmother's house to live happily ever after with the prince.

Above your ladder, write a sentence that describes the main theme of your story, the central idea or lesson that it teaches. The theme should be a concept that ties all the rungs of your story ladder together. For "Cinderella," a sample theme statement might be that even if you face difficult circumstances, it's possible to make your life into something better.

- Read over your story ladder once you've finished to see if you like the arrangement of events. If not, create an alternative ladder with a different plot and outcome.
Teachers and professors assign reflection papers to their students to gauge what the students know and what observations they have made through completing class assignments. While each instructor has his own criteria and specifications, the majority of reflection papers are no more than one to two pages in length. To write an effective and successful reflection, a student must start his paper with an introduction that eases the reader into the topic and briefly states what will be discussed via a thesis statement. Make an outline of your reflection paper. Decide what you want to write about and how many paragraphs the entire paper will be. Number each planned paragraph and write a one-sentence description of what the paragraph will talk about. For instance, Paragraph 3 – The role of suicide in “The Catcher in the Rye”. Compile a short list of any assigned reading, textbooks or online resources you want to use to back up the claims and opinions you write about in your reflection paper. Start your introduction with an informative statement about the topic to get the reader interested in your paper. Make the statement specific to what you will be talking about in the rest of your paper and avoid making general or vague statements. For example, instead of writing “‘The Catcher in the Rye’ is one of the most controversial books written in the 20th century,” write something along the lines of “Since J.D. Salinger’s novel ‘The Catcher in the Rye’ was first published in 1951, it has been surrounded by controversy due to the so-called offensive material presented in the book, including alcohol abuse, premarital sex and adult language.” Such an introduction lets your reader know that your overall paper is about “The Catcher in the Rye” but also that you will be writing specifically about the controversies and debates connected to the book. Write another sentence or two continuing the thoughts you presented in the opening statement. You could present important facts that you picked up from the assignment you completed or talk about overarching themes. Continuing with the example of “The Catcher in the Rye,” you could now write a sentence or two containing statistics of how many libraries have banned the book over the years or name the groups and organizations that condemn the novel. End your introduction with a one-sentence thesis statement. In any document, including a reflection paper, a thesis statement is used by the writer to state one striking observation or conclusion that he has come to and how he plans to defend that position throughout the rest of the paper. It is important to make your position clear in the thesis statement and to be unwavering in that position throughout the remainder of the paper. For example, a thesis statement for an introductory paragraph on the “offensive material” in “The Catcher in the Rye” could read something like: “It is my belief that without these supposed controversies ‘The Catcher in the Rye’ would not be the literary classic that it is considered to be today.” Revise your entire reflection paper, including your introduction paragraph, once you have completed writing the paper. Analyze what you have written and determine if the body and the conclusion of the paper match your thesis statement and follow logically from the information you presented in the introduction. If it doesn't, either re-tool the body of the paper or edit your introduction to match the rest of the paper. Re-read through the entire paper carefully to catch any spelling or grammar errors. 
If you're using a word processing software on a computer, use the spell-check function to help you catch any misspellings.
By Carolyn C. Diaz on July 11 2018 02:34:08

Sing songs with hand movements. There are multiplication albums that sing the times tables. You can listen along and learn the times tables through music instead of rote memorization. Listen to a few different versions and find one that works best for you. Add in hand motions or dance moves that illustrate the different number pairs to make the process more interactive.

Math worksheets are particularly useful when adding larger numbers or decimals. Practice the problems from the workbooks and solve them; these methods are ideal if you find it difficult to add large numbers, and students studying at an advanced level may also find them helpful for improving their addition skills.

Students grasp lessons more easily when their studies are combined with games. Games can be adapted to most math skills; they are fun, and students forget they are learning. That still leaves the problem of memorizing a string of numbers, and here's where the magic is: teach children to skip count to the tune of a simple song. Have you ever wondered why children can learn the lyrics to songs so easily? It is because music can be used as a mnemonic device, that is, a strategy to assist with memorization. If you can associate the numbers with the sounds in a song, children will not only learn them faster but retain them much longer than they would by memorizing them exclusively through repetition.
WALK A MILE IN SOMEONE ELSE'S SHOES – "Walk a mile in someone else's shoes" by watching and listening to four slideshows. Each slideshow is about two minutes long. Take the examples found in these short videos and apply Jones's (2002) three-level framework to explain racism on three levels: institutional racism (access to power: housing, employment, medical facilities, clean environment), personally mediated racism (prejudice and discrimination: assumptions about the person's ability, intentions, motives), and internalized racism (negative messages about the stigmatized person's worth). http://www.pbs.org/race/005_MeMyRaceAndI/005_00-home.htm
A) Apply the gardener's tale and give examples of each level of racism found in the video. You may use your own examples without viewing the video.
B) Explain how racism affects health outcomes negatively using this framework.
C) Explain the role of health educators in the gardener's tale. In other words, how does the gardener's tale guide our thinking about how to intervene on the three levels?

Walk in someone else's shoes
The level of discrimination is determined by the ability of a person to choose what they like to do in particular situations when they find an injustice being meted out to other people because of their skin color. When children are born, the level of care that is given to them determines how they will turn out in the future. They may have equal potential, but once they are denied equal opportunities, they lose the ability to compete equally in the world. Therefore, to determine whether discrimination and racism affect the outcomes of children from the time they are born to when they are grown, the only reference a person needs is their skin color and the opportunities that are presented to them (Ma'at, et al, 2001).
One in five plants are threatened with extinction, according to a new study. And we're to blame. Scientists from the International Union for the Conservation of Nature (IUCN), London's Natural History Museum and the Royal Botanic Gardens, Kew evaluated 7,000 plant species (out of a known 380,000 species) and assessed their conservation status and the reasons why threatened species are in danger. Twenty-two percent of the species for which they could carry out an assessment were classified as threatened with extinction, and habitat loss was the main reason for species' declines, most often from conversion into farmland. "This study confirms what we already suspected," says Stephen Hopper, Kew's director, "that plants are under threat and the main cause is human-induced habitat loss." Gymnosperms, non-flowering plants that include conifers and ginkgo trees, were the most threatened group in the study. And tropical rain forests were the most threatened habitat; most threatened plant species grow in the tropics. Reading evaluations of threatened species sometimes feels like deja vu. So many species are threatened (plants aren't quite the worst off—greater percentages of amphibians and corals are in danger), especially in the tropics, and habitat loss is often a major factor. But the decline of plants should be a wake-up call. Humans cannot survive if the plant species that feed, clothe and fuel us disappear. "We cannot sit back and watch plant species disappear—plants are the basis of all life on earth, providing clean air, water, food and fuel," Hopper says. "All animal and bird life depends on them and so do we."
In 1959, physicist and future Nobel prize winner Richard Feynman gave a lecture to the American Physical Society called "There's Plenty of Room at the Bottom." Here is what he meant by it.

Nanotechnology: Nanotechnology, or nanotech, is the study and design of machines on the molecular and atomic level. To be considered nanotechnology, these structures must be anywhere from 1 to 100 nanometers in size. A nanometer is equivalent to one-billionth of a meter, which means that these structures are extremely small. Innovations in nanotechnology will span a wide range of applications that could change the shape of the world.

- As we have seen in movies like Star Trek, with its transporters, the concept of molecular manufacturing is gaining momentum, and if our understanding of molecular structures deepens, there could be discoveries the world has never seen before.
- In the medical industry, research into drugs at the molecular level could lead to nanorobots, which a patient could drink with a fluid and which could detect and destroy cancerous cells in his or her body. This could revolutionize cancer research and other medicinal drugs, helping to prevent prevalent diseases like typhoid and malaria.
- After the isolation of graphene (a new carbon allotrope) in 2004, the electronic properties of the material are set to change the shape of semiconductor devices, integrated circuits and their applications. IBM and Intel are doing intensive research on this new allotrope of carbon, and millions of dollars are being spent on its experimentation.
- Computers and robots could be reduced to extraordinarily small sizes. Medical applications might be as ambitious as new types of genetic therapies and anti-aging treatments. New interfaces linking people directly to electronics could change telecommunications, and with them the way the world lives today.
Factorials and the Gamma function
By Murray Bourne, 14 Apr 2010

In math, we often come across the following expression:

n!

This is "n factorial", or the product n(n − 1)(n − 2)(n − 3) ... (3)(2)(1). Factorials are used in the study of counting and probability. For example, permutations (which involve counting the arrangements of objects where the order is important) and combinations (where the order is not important) both require factorials when the number of objects is large.

Examples of factorials:
2! = 2 × 1 = 2
3! = 3 × 2 × 1 = 6
4! = 4 × 3 × 2 × 1 = 24
5! = 5 × 4 × 3 × 2 × 1 = 120

We can write factorials using product notation (upper case "pi") as follows:

n! = ∏_{k=1}^{n} k

This notation works in a similar way to summation notation (Σ), but in this case we multiply rather than add terms. For example, if n = 4, we would substitute k = 1, then k = 2, then k = 3 and finally k = 4 and write:

4! = ∏_{k=1}^{4} k = 1 × 2 × 3 × 4 = 24

What about 0! ?

0! is a special case. There are many situations where the value of 0! has a meaning, and it has value 1 in those cases, not 0. So by convention, we define:

0! = 1

Also, 1! has value 1:

1! = 1

The graph of f(n) = n! for integer values of n is as follows:

Note that points A(0,1), B(1,1), C(2,2), D(3,6), E(4,24), and F(5, 120) are discrete points in space - we have not connected them with a curve. There is no meaning for non-integer factorials like the expression 3.5! (3.5 factorial). However, our graph does suggest a curve - one that is (approximately) exponentially increasing. Is there a function we can use that fits this curve and so gives us meaningful values for factorials of numbers which are not whole numbers? It turns out there is.

The Gamma Function

The Gamma Function is an extension of the concept of factorial numbers. We can input (almost) any real or complex number into the Gamma function and find its value. Such values will be related to factorial values. There is a special case where we can see the connection to factorial numbers. If n is a positive integer, then the function Gamma (named after the Greek letter "Γ" by the mathematician Legendre) of n is:

Γ(n) = (n − 1)!

We can easily "shift" this by 1 and obtain an expression for n! as follows:

Γ(n + 1) = n!

But the Gamma function is not restricted to the whole numbers (that's the point). A formula that allows us to find the value of the Gamma function for any real value of n is as follows:

Γ(n) = ∫₀^∞ x^(n−1) e^(−x) dx

For example, let n = 3.5. We want to find the value of 3.5!, assuming it exists. The value of Γ(3.5 + 1) = Γ(4.5) is given by the infinite integral:

Γ(4.5) = ∫₀^∞ x^3.5 e^(−x) dx

Examining the graph above, we expect this value to be somewhere in the range 10 to 15. (Recall that 3! = 6 and 4! = 24. Our answer for Γ(4.5) = 3.5! has to be between these values.)

What does this integral mean? The function under the integral sign is very interesting. It is the product of an ever-decreasing function with an ever-increasing one:

f(x) = e^(−x) x^3.5

Let's look at the graphs involved in this expression. Firstly, f(x) = e^(−x). Note that the value of the function (its height) decreases as x increases. Secondly, f(x) = x^3.5 increases as x increases. This function is not defined (over the reals) for negative x. Finally, we look at the product of the 2 functions. This is the graph of f(x) = e^(−x) x^3.5. The area under this graph (the shaded portion) from 0 to ∞ (infinity) gives us the value of Γ(4.5) = 3.5!. We use computer mathematics software (Scientific Notebook) to find the value of the integral.
We only need to choose some "large number" (I chose 10000) for the upper bound of the integral since, as you can see in the graph, the height of the curve is very small as x becomes very large. Here's what we get:
Γ(4.5) ≈ 11.6317
This means the shaded area above is 11.6317 square units. This is in the range we estimated earlier.
Back to the Graph
Let's return to our graph of the values of factorial numbers. We can use the above integral to calculate values of the Gamma function for any real value of n. This time, I have included a smooth curve passing through our factorial values. This curve is f(n) = Γ(n + 1). I've also added the new point (3.5, 11.6317), which is the ordered pair representing Γ(4.5) = 11.6317 that we just found. You can see this new point lies on the smooth curve joining the other factorial values. You can see more information on Γ(4.5) using Wolfram|Alpha. The Gamma function gives us values that are analogous to factorials of non-integer numbers. It was one of the many brilliant contributions to the world of math by the Swiss mathematician Leonhard Euler. To finish, let's look at the graph of f(n) = Γ(n + 1) for a greater range of real values of n. We observe there are some "holes" (discontinuities) in the graph at the negative integers.
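If you want to reproduce the 11.6317 figure without Scientific Notebook, here is a minimal sketch in Python. It assumes only the standard library plus SciPy; the choice of 10000 as the upper bound simply mirrors the truncation used above, and the built-in math.gamma is shown for comparison.

```python
import math
from scipy.integrate import quad

# Integrand for Gamma(4.5): x^3.5 * e^(-x)
def integrand(x):
    return x ** 3.5 * math.exp(-x)

# Truncate the infinite integral at a "large number", as in the article.
truncated, _ = quad(integrand, 0, 10000)

# quad can also handle the infinite upper limit directly.
to_infinity, _ = quad(integrand, 0, math.inf)

print(f"Truncated integral (0 to 10000): {truncated:.4f}")    # ~11.6317
print(f"Integral to infinity:            {to_infinity:.4f}")
print(f"math.gamma(4.5):                 {math.gamma(4.5):.4f}")  # same value, built in
```

All three printed values agree to four decimal places, which is a nice check that the truncation at 10000 loses nothing visible.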
- A small candle, 2.5 cm in size, is placed 27 cm in front of a concave mirror of radius of curvature 36 cm. At what distance from the mirror should a screen be placed in order to obtain a sharp image? Describe the nature and size of the image. If the candle is moved closer to the mirror, how would the screen have to be moved?
Answer:
Size of the candle, h = 2.5 cm
Image size = h'
Object distance, u = −27 cm
Radius of curvature of the concave mirror, R = −36 cm
Focal length of the concave mirror, f = R/2 = −18 cm
Image distance = v
The image distance can be obtained using the mirror formula:
1/v + 1/u = 1/f
1/v = 1/f − 1/u = 1/(−18) − 1/(−27) = −1/54, so v = −54 cm
Therefore, the screen should be placed 54 cm away from the mirror, on the same side as the object, to obtain a sharp image.
The magnification of the image is given as:
m = h'/h = −v/u = −(−54)/(−27) = −2, so h' = −2 × 2.5 cm = −5 cm
The height of the candle's image is 5 cm. The negative sign indicates that the image is inverted and real. If the candle is moved closer to the mirror, then the screen will have to be moved away from the mirror in order to obtain the image.
- A 4.5 cm needle is placed 12 cm away from a convex mirror of focal length 15 cm. Give the location of the image and the magnification. Describe what happens as the needle is moved farther from the mirror.
- A tank is filled with water to a height of 12.5 cm. The apparent depth of a needle lying at the bottom of the tank is measured by a microscope to be 9.4 cm. What is the refractive index of water? If water is replaced by a liquid of refractive index 1.63 up to the same height, by what distance would the microscope have to be moved to focus on the needle again?
- Figures 9.34(a) and (b) show refraction of a ray in air incident at 60° with the normal to a glass-air and water-air interface, respectively. Predict the angle of refraction in glass when the angle of incidence in water is 45° with the normal to a water-glass interface [Fig. 9.34(c)].
- A small bulb is placed at the bottom of a tank containing water to a depth of 80 cm. What is the area of the surface of water through which light from the bulb can emerge out? Refractive index of water is 1.33. (Consider the bulb to be a point source.)
- A prism is made of glass of unknown refractive index. A parallel beam of light is incident on a face of the prism. The angle of minimum deviation is measured to be 40°. What is the refractive index of the material of the prism? The refracting angle of the prism is 60°. If the prism is placed in water (refractive index 1.33), predict the new angle of minimum deviation of a parallel beam of light.
- Double-convex lenses are to be manufactured from a glass of refractive index 1.55, with both faces of the same radius of curvature. What is the radius of curvature required if the focal length is to be 20 cm?
- A beam of light converges at a point P. Now a lens is placed in the path of the convergent beam 12 cm from P. At what point does the beam converge if the lens is (a) a convex lens of focal length 20 cm, and (b) a concave lens of focal length 16 cm?
- An object of size 3.0 cm is placed 14 cm in front of a concave lens of focal length 21 cm. Describe the image produced by the lens. What happens if the object is moved further away from the lens?
- What is the focal length of a convex lens of focal length 30 cm in contact with a concave lens of focal length 20 cm? Is the system a converging or a diverging lens? Ignore the thickness of the lenses.
- A compound microscope consists of an objective lens of focal length 2.0 cm and an eyepiece of focal length 6.25 cm separated by a distance of 15 cm.
How far from the objective should an object be placed in order to obtain the final image at (a) the least distance of distinct vision (25 cm), and (b) at infinity? What is the magnifying power of the microscope in each case?
- A person with a normal near point (25 cm) using a compound microscope with an objective of focal length 8.0 mm and an eyepiece of focal length 2.5 cm can bring an object placed at 9.0 mm from the objective into sharp focus. What is the separation between the two lenses? Calculate the magnifying power of the microscope.
- A small telescope has an objective lens of focal length 144 cm and an eyepiece of focal length 6.0 cm. What is the magnifying power of the telescope? What is the separation between the objective and the eyepiece?
- (a) A giant refracting telescope at an observatory has an objective lens of focal length 15 m. If an eyepiece of focal length 1.0 cm is used, what is the angular magnification of the telescope? (b) If this telescope is used to view the moon, what is the diameter of the image of the moon formed by the objective lens? The diameter of the moon is 3.48 × 10^6 m, and the radius of the lunar orbit is 3.8 × 10^8 m.
- Explain the formation of energy bands in solids. Distinguish between metals, insulators and semiconductors on the basis of band theory.
- Distinguish between intrinsic and extrinsic semiconductors, and describe conduction in p-type and n-type semiconductors.
- Explain the formation of the depletion region and barrier potential in a p-n junction.
- Draw the circuit diagrams used to study the forward and reverse bias characteristics of a p-n junction, and draw the graphs for forward bias and reverse bias.
- Describe the working of a half-wave rectifier with the help of a neat labelled diagram and draw the input and output waveforms.
- Describe the working of a full-wave rectifier with the help of a neat labelled diagram and draw the input and output waveforms.
- Draw the symbols of npn and pnp transistors. Show the biasing of a transistor and explain transistor action.
- Describe the working of an npn transistor in CE configuration as an amplifier.
- Explain the working of a transistor in CE configuration as an oscillator.
- Explain the action of a transistor as a switch.
(Have some more ideas? Post them as comments)
The CBSE Science exam was today (20 March 2012). How was the exam? Many students reported that the exam was quite easy and that they could easily score 90% and above. How was the Science exam for you? Were there any errors in the question paper? Were there any questions out of syllabus? Were the diagrams and other questions the expected ones? Were there any questions beyond comprehension? You can post the difficult and confusing questions as comments to this post and we'll discuss the solutions.
- Ask The CBSE – March 5, 2012 (thehindu.com)
- Combined exam of AIEEE and AIPMT – Top questions to be answered… (edubrite.com)
CBSE (Central Board of Secondary Education) has started online counselling for students and parents for the Board Exams. The first phase of this pre-exam counselling will start on Wednesday, Feb 1, 2012 and will continue until April 16, 2012, for those appearing for board examinations this year. The toll-free number 1800-180-3456 has been allotted for talking with experts. Anyone can call between 8 a.m. and midnight from Feb 1 to April 16, 2012. This year, approximately 67 experts, including principals, trained counsellors from CBSE-affiliated government and private schools and a few psychologists, are participating in tele-counselling.
Recall sources and describe functions of carbohydrate, protein, lipid (fats and oils), vitamins A, C and D, and the mineral ions calcium and iron, water and dietary fibre as components of the diet.
A - The function of Vitamin A is to produce the light-sensitive pigments in our eyes that allow us to see in poor light conditions. Sources of Vitamin A include fish liver oils and liver. Deficiency of Vitamin A can lead to poor sight in poor light conditions, a condition known as night blindness.
C - The function of Vitamin C is to allow the body to form connective tissue, which helps cells stick to each other so that they stay together as a tissue. Vitamin C comes from citrus fruits such as oranges and limes. The deficiency disease is called scurvy, and a symptom of scurvy is bleeding gums.
D - Vitamin D allows us to absorb calcium from our diet. Vitamin D can be produced by the body itself in sunshine, or it can be obtained from fats, eggs or fish. The disease that occurs if you don't obtain enough Vitamin D is called rickets, which is recognised by the bending of the bones.
Calcium - You can obtain calcium from dairy products such as milk and cheese. The function of calcium is to increase bone strength, and a lack of calcium can also cause rickets.
Iron - We can find iron in foods such as liver and the famous vegetable spinach. The function of iron is in the synthesis of haemoglobin, which we find in red blood cells and which carries oxygen. A lack of iron in the diet results in a condition called anaemia. People with anaemia look pale and get tired quickly.
Fibre is the plant cell wall, which means it is cellulose. We get it from the plants in our diet. A lack of fibre can cause what we know as constipation. The main role of fibre is to aid peristalsis in the gut.
The function of water is to provide the solutions in which the chemistry of life takes place. Substances dissolve and react in these solutions. We can obtain water by drinking it directly, or from the water contained in our food. If we lack water we become dehydrated.
Christening a particle is not easy. Do you name it after the person who proposed its existence, or the person who discovered it? Or do you give it a label that is abstract, poetic, whimsical, onomatopoeic, or just plain descriptive?
Democritus proposed the existence of a particle, so he could have named it the democriton, but instead this modest Greek philosopher decided to coin the word a-tomos, meaning 'not cuttable', which explains the origin of the word atom. Perversely, today we use the word atom to describe something that is 'cuttable', because we know that even the smallest atom, hydrogen, has components that can be pulled apart. So we could rename atoms 'aatoms', which is to say 'not not cuttable'.
Inside the atom we find the electron, which also traces its name back to Ancient Greece. Elektron is Greek for amber, and the ancients knew that rubbing amber with a dry cloth would enable it to attract very light objects. We now know that this is because rubbing amber can generate a charge, otherwise known as static electricity, so 19th century scientists used the term electron to describe the first particle that was proven to carry a charge.
The rest of the atom is made of neutrons and protons, and in turn these are made of quarks. The story of quarks dates back to the 1960s when physicists discovered a menagerie of new subatomic particles. It was Murray Gell-Mann who proposed that all these particles (and protons and neutrons) were made of just three types of quark. The name was based on a line from James Joyce's Finnegans Wake: "Three quarks for Muster Mark!". In this context, quark is probably a corruption of quart (as in quarts of beer), which means it should not be pronounced to rhyme with Mark.
Gell-Mann had quite a flair for naming concepts in physics. The existence of three quarks led to composites of quarks being classified into groups of eight, which Gell-Mann dubbed the Eightfold Way. This was a reference to a Buddhist proverb about the path to nirvana: "Now this, O monks, is noble truth that leads to the cessation of pain; this is the noble Eightfold Way."
Gell-Mann's three quarks were named up, down and strange. The up and down quarks formed a natural pair, but the strange quark was the odd one out, hence the name. In 1974 its partner was discovered, and to celebrate its welcome arrival it was dubbed the charm quark.
Two more quarks were discovered, and were initially called truth and beauty. They were the focus of my thesis when I worked at Cern in the late 1980s, but sadly I could not boast that I was researching the physics of truth and beauty, because by this time they had been renamed more prosaically as top and bottom quarks.
It is unlikely any more quark types will be discovered at Cern when the LHC fires up this summer, but they will be studied in closer detail than ever before. In particular, physicists will scrutinise the particles that bind quarks together, predictably known as gluons, because they act like a glue.
Sometimes the order of discovery is a factor in the naming of particles. In the 1960s and 70s, many physicists were trying to predict the particles that might carry the weak nuclear force, which is responsible for radioactivity. When they formulated a theory, they sensibly named one type of weak-force carrier the W particle. The other type was given the name Z, partly because physicists believed there wouldn't be any more particles left to discover.
Of course, the LHC will also be hunting for new particles.
One of the theories being tested is supersymmetry, the idea that every known particle has a partner awaiting discovery in a high-energy collision. When the idea was proposed, the sudden doubling of the number of fundamental particles could have been a headache for the physicists who named things. Their solution was to add an s onto particle names to get the supersymmetric "sparticles". So the partners of the quark and electron became squarks and selectrons. The convention has some unfortunate consequences: the family of particles known as leptons have supersymmetric partners called, well, sleptons.
Supersymmetric particles could be discovered at Cern in the coming years, but other hypothetical particles are much less likely, such as the axion, which was posited in 1977 to solve problems in the way that quarks and gluons interact. The theorists who came up with it named their proposed particle after an American brand of laundry detergent, because it was supposed to clean up a rather messy problem in fundamental physics. There is no sign of axions yet, but if they exist they could explain the vast quantity of missing matter in the universe.
There are so many candidates for this so-called dark matter that scientists have coined catch-all acronyms. One umbrella term suggests the missing matter is made of Weakly Interacting Massive Particles (WIMPs). Alternatively, the mysterious dark particles may have aggregated into large collections known as MAssive Compact Halo Objects (MACHOs).
If all this makes it sound as if physicists make things up as they go along, wait until you hear my favourite particle moniker. This acronym encompasses all the dark matter candidates and truly reflects our level of understanding of this particular subject - Dark Unknown Nonreflective Nondetectable Objects, or DUNNOs, a term which should only be spoken by physicists while shrugging their shoulders. Well, at least they're honest.
· Simon Singh is the author of Big Bang and will be presenting "5 Particles", part of BBC Radio 4's special coverage of the LHC switch-on later this summer.
delegation
- 1 [treated as singular or plural] A body of delegates or representatives; a deputation: a delegation of teachers
Example sentences:
- Other workers should organise delegations to council picket lines in every area.
- But there is no automatic right of Scottish representation in UK delegations.
- At the same time, each of the four delegations present includes Pashtun representation.
- 2 [mass noun] The action or process of delegating or being delegated: the delegation of power to the district councils
Example sentences:
- It appears that the trustees' power of delegation cannot be excluded by the settlor.
- There's no vote and no delegation of power to experts or a committee by the group.
- The Government was right to realise the need for more delegation of powers from Whitehall.
Origin: early 17th century (denoting the action or process of delegating; also in the sense 'delegated power'): from Latin delegatio(n-), from delegare 'send on a commission' (see delegate).
Proteins are polymers, molecules synthesized by the chemical bonding together or polymerization of many other molecules called their monomers, and furthermore are copolymers, polymers composed of monomers of more than one kind, the monomers of the proteins being called amino acids, of which there are twenty different kinds. And protein amino acid order determines protein function.
Living organisms are composed of cells, and most cell structures are formed or synthesized and most other cell functions performed by molecules called proteins. Each such function is performed by its own particular and specific protein, or set thereof, and usually many molecules of each. And a body cell typically synthesizes and uses tens of thousands of different proteins.
Proteins are polymers, molecules synthesized by the chemical bonding together or polymerization of many other molecules called their monomers. Furthermore, proteins are copolymers, polymers composed of monomers of more than one kind. Proteins are synthesized from monomers called amino acids, of which there are twenty different kinds.
The amino acids are small molecules, composed of only ten to twenty-seven atoms each, depending on kind, averaging about twenty, and averaging in mass about 2.35 * 10^-22 gram, about one-quarter of one zeptogram (sextillionth of a gram), or 235 yoctograms (septillionths of a gram), about eight times as much as the three-atom water molecule, an oxygen atom single-bonded to each of two hydrogen atoms (an oxygen atom forms two single bonds or one double bond in a molecule, while a hydrogen atom forms one single), the total mass of which is about 2.99 * 10^-23 gram, or thirty yoctograms.
Every amino acid molecule consists of a central carbon atom single-bonded to a hydrogen atom, an amino group, a carboxylic acid group and a prosthetic group (a carbon atom forms four single bonds in a molecule, or one double bond and two singles, or two double bonds, or one triple bond and one single). An amino group is composed of a nitrogen atom (which forms three single bonds, or one double and one single, or one triple) single-bonded to each of two hydrogen atoms. And a carboxylic acid group is composed of a carbonyl group or moiety, a carbon double-bonded to an oxygen atom, further single-bonded to a hydroxy or alcohol group, an oxygen atom single-bonded to a hydrogen atom. Such bonds and groups are of course the ordinary molecular bonds and functional groups of carbon chemistry, called "organic chemistry" due to life's taking such advantage of the ability of carbon to form large and stable molecules that all carbon on the face of the earth is or has been part of a living organism. And the amino acid amino and carboxylic acid groups are of course what give the amino acid its name.
Every amino acid is identical to every other in the above, regardless of kind, in a nine-atom moiety called here its invariant moiety, composed of its central carbon atom and the hydrogen atom and amino and carboxylic acid groups bonded to it. And an amino acid of one kind differs from one of another solely in the elemental composition (the kinds and number of atoms involved), structure (how those atoms are bonded together) and consequent properties of its prosthetic group.
A carbon atom bonded to four different atoms, groups or moieties is asymmetric, and forms an asymmetric center, and a molecule with a single such center can exist in one of two mirror-image forms or stereoisomers.
The stereoisomers in such case are distinguished by a convention based on chirality or handedness, the lowest-weight substituent being taken as the "thumb", the asymmetric carbon atom as the "palm" and the other substituents viewed along the "thumb-to-palm" axis and direction, and the "fingers" taken to curl in the direction of decreasing weight of substituent, with the "fingers" of the left-handed or levo (Latin for "left") stereoisomer curling in a clockwise fashion, and those of the right-handed or dextro stereoisomer counter-clockwise. A molecule with N asymmetric centers can exist in any one of 2^N stereoisomeric forms. And except in the simplest amino acid glycine, the prosthetic group of which is a single hydrogen atom, the amino acid central carbons each form an asymmetric center, and the amino acids can exist as either of the two possible stereoisomers, but are always found to be the levo stereoisomer in nature.
The amino acid amino and carboxylic acid groups are its bonding groups, which take part in the amino acid polymerization needed for protein synthesis, forming the necessary bonds between that amino acid and its neighbors in the protein. Each such bond involves one of each kind of bonding group, with a loss of two small moieties amounting to the loss of a water molecule for every amino acid added to the protein, an amino group hydrogen and the carboxylic acid group's hydroxy group, amino acid polymerization being therefore what is called a dehydration reaction. And the remaining amino and carbonyl moieties are single-bonded to one another, forming a connecting amide moiety (a carbonyl single-bonded to an amino moiety), the carbon-nitrogen bond of that moiety being the actual protein bond. Proteins are therefore polyamide polymers or polyamides, as are the common artificial copolymers called nylon, although the resemblance should not be stretched too far.
The invariant moieties of the amino acids incorporated into a protein, minus water of polymerization, comprise what is called its backbone, an elongated and repetitive structure in which each invariant moiety remnant incorporated forms a unit identical to every other (although of course the end-units each bear a free bonding group unused in polymerization). And the prosthetic groups of those amino acids become protein side-groups projecting from those protein backbone units and that backbone.
Each protein backbone unit has one bonding group moiety on one side of its central carbon and the other on the other, in the same order for all backbone units in and along the protein backbone. And each protein bond likewise has one bonding group moiety on one side of the protein carbon-nitrogen bond and the other on the other, in reverse order to that of the backbone units. Such protein order or orientation or direction is referred to in terms of the free bonding groups at either end of the protein, with the direction from the free amino to the free carboxylic acid group being referred to as the "N-to-C" direction, and the opposite as "C-to-N".
Length, Size, Mass And Gram Number
Proteins of different kinds and functions vary widely in the number of amino acids of which they are composed. But three hundred amino acids is a typical natural protein length and size.
And a typical average natural protein mass can be calculated based on that length and size by multiplying the average mass of an amino acid (see section d above) times three hundred, and then subtracting the combined masses of the two hundred and ninety-nine water molecule equivalents lost (see section f), amounting to about 6.16 * 10^-20 gram, about sixty zeptograms, a little over two thousand times the mass of a water molecule. And the corresponding typical average natural protein gram number, the number of such proteins contained in one gram thereof, can be calculated by dividing that mass into one gram, giving about 1.62 * 10^19, or about sixteen quintillion such proteins per gram.
Proteins can rotate around their backbone single bonds, and therefore all along their backbones, and consequently twist and coil, but the proteins which perform the functions of the cell generally assume three-dimensional coiled structures or conformations specific to their kinds, stabilized in various ways. For example, attractions between positively- and negatively-charged moieties of molecules form what are called hydrogen bonds, which separately are weaker but collectively can be much stronger than any one molecular bond. Such bonds between nearby backbone units along the protein backbone cause the assumption of winding or helical conformations along the backbone, as well as sheet conformations involving multiple turns and side-by-side runs thereof. Such interactions and resulting conformations are not specific to any particular protein or moiety thereof, since every backbone unit is identical to and can form the same such bonds as any other, and any stretch of backbone could theoretically engage in any such conformation. Slightly more specifically, the backbone units and some side-groups hydrogen-bond water molecules, and proteins tend to coil in such a way as to present such water-soluble or hydrophilic moieties or groups on their surfaces, and hold those side-groups which cannot form such bonds inside, in water-insoluble or hydrophobic cores. But in most of the proteins which perform the functions of the cell, conformation is specified by side-group interactions, which depend on the kinds of side-groups available and the order in which they occur, which depend in turn on which amino acids are incorporated into the protein and the order in which they are incorporated. That is, protein amino acid order determines protein conformation.
Mechanical Properties And Surface Structure
Protein conformation obviously determines protein shape, whether globular, elongated or flattened, and whether solid, indented or hollow; protein mechanical properties, such as whether and how one part of a protein can bend or rotate with respect to the rest; and protein surface structure, the protein's surface shape and pattern of exposed side-groups and backbone units.
Determine Protein Complexing And Function
Two proteins or other large molecules of complementary shape and surface charge-patterns upon being brought together will develop multiple attractions including hydrogen-bonds to one another, such fit and collective attraction being called an affinity and such collective bond a complex. Protein complexing is so generally specific in its requirements of complementary shape and charge-pattern as to be described as "lock-and-key".
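As a quick sanity check on the mass and gram-number arithmetic above, here is a minimal Python sketch using the figures quoted in the text (an average of 2.35 * 10^-22 g per amino acid, 2.99 * 10^-23 g per water molecule, 300 residues); the variable names are mine, not the monograph's.

```python
# Rough check of the "typical protein" mass and gram-number figures quoted above.
AVG_AMINO_ACID_MASS_G = 2.35e-22   # average amino acid mass (grams), from the text
WATER_MASS_G = 2.99e-23            # water molecule mass (grams), from the text
RESIDUES = 300                     # typical protein length, from the text

# Polymerizing 300 amino acids releases 299 water molecules (one per bond formed).
protein_mass_g = RESIDUES * AVG_AMINO_ACID_MASS_G - (RESIDUES - 1) * WATER_MASS_G
gram_number = 1.0 / protein_mass_g  # proteins per gram

print(f"Typical protein mass: {protein_mass_g:.3g} g")   # ~6.16e-20 g
print(f"Proteins per gram:    {gram_number:.3g}")        # ~1.62e+19
```

Both printed values match the figures given in the text, to three significant figures.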
And protein complexing is a fundamental mechanism of protein function: The conformation-determining attractions and bonds of and within the protein itself can be considered intramolecular or internal complexing. Cytostructural proteins complex with one another to form the internal structural framework of the cell called the cytoskeleton. And every cell contains protein enzymes catalyzing—accelerating—the chemical reactions used by that cell, which reactions would otherwise run too slowly to be of use: Body cells typically each synthesize thousands of different enzymes, and many molecules of each, each more or less specifically catalyzing its specific reaction operating upon its specific substrate(s) or reactant(s) (the phrase "lock-and-key" was first applied to enzyme specificity). And each enzyme catalyzes its reaction largely through, and its specificity is that of, not so much its complexing with its substrate(s) as with its reaction's rate-determining transition state, the highest-energy state through which that reaction must proceed, stabilizing and therefore lowering the energy of that state, allowing lower-energy passage through that state, increasing the probability that a given enzyme-substrate complex will have the energy needed to pass through that state, and therefore, in the cell or other reaction mixture where many such complexes are forming and dissociating, increasing the number of such able to pass through that state and their reactions proceed to completion at any given time, and therefore the overall rate of reaction. In addition, many enzymes catalyze water-sensitive reactions in their hydrophobic cores. More complicatedly, many if not most proteins function by virtue of conformation changes, changing back and forth between two or more conformations in the course and by way of function, a phenomenon called allostery, and complexing is frequently combined in protein function with allosteric conformation changes. Protein complexing of one molecule causing an allosteric conformation change in that protein enabling or preventing subsequent complexing of another molecule is a central mechanism of protein function and control in the cell; for example, some enzymes, including some acting as cell switches, sensors or governors, are activated or deactivated—turned on or off—by conformation changes caused by complexing with or dissociating from the appropriate molecules, some used specifically as signals. And other protein enzymes catalyze the degradation of fuel and use the energy yielded to repetitively alter their conformations and shapes, acting as motors and machines. Protein amino acid order determines protein conformation; protein conformation determines protein shape, mechanical properties and surface structure; and protein shape, mechanical properties and surface structure determine protein function. Therefore, protein amino acid order determines protein function. The above is an adaptation from the first chapter of a work in progress, an informal monograph on the empirical or mass trial-and-error development of what the monograph calls "mechanomers", functional copolymers such as the biopolymers, the natural functional copolymers called proteins and nucleic acids; see my condensation of the monograph "Empirical Mechanomeric Development" and two notes on applications from the monograph "Mechanomeric Oncotherapy" and "Empirical Mechanomeric Development and Artificial Photosynthesis of Fuels" . [See also my new blog EMDblog.] 
I am seeking a grant, or possibly an advance, to finish the monograph.
Keywords: empirical mechanomeric development, enzymes, macromolecular nanotechnology, mechanomers, nanobiotechnology, proteins
To say that hearing loss is prevalent is a bit of an understatement. In the United States, 48 million individuals report some extent of hearing loss. This means, on average, for every five people you meet, one will have hearing loss. And at the age of 65, it's one out of three. With odds like this, how do you avoid being that one? To help you understand how to preserve healthy hearing all through your life, we'll take a closer look at the causes and types of hearing loss in this week's posting.
How Normal Hearing Works
Hearing loss is the disturbance of normal hearing, so a good place to start is with an understanding of how normal hearing is supposed to work. You can picture normal hearing as comprising three main processes:
- The physical and mechanical conduction of sound waves. Sound waves are created in the environment and move through the air, like ripples in a lake, eventually making their way to the external ear, through the ear canal, and ultimately striking the eardrum. The vibrations from the eardrum are then transmitted to the middle ear bones, which in turn stimulate the tiny nerve cells of the cochlea, the snail-shaped organ of the inner ear.
- The electrical conduction from the inner ear to the brain. The cochlea, once stimulated, translates the vibrations into electrical signals that are transmitted to the brain via the auditory nerve.
- The perception of sound in the brain. The brain perceives the electrochemical signal as sound.
What's fascinating is that what we perceive as sound is nothing more than sound waves, vibrations, electric current, and chemical reactions. It's an entirely physical process that leads to the emergence of perception.
The Three Ways Normal Hearing Can Go Wrong
There are three primary types of hearing loss, each interfering with some element of the normal hearing process:
- Conductive hearing loss
- Sensorineural hearing loss
- Mixed hearing loss (a mix of conductive and sensorineural)
Let's take a closer look at the first two, including the causes and treatment of each.
Conductive Hearing Loss
Conductive hearing loss interferes with the physical and mechanical conduction of sound waves to the inner ear and cochlea. It is caused by anything that blocks that conduction. Examples include malformations of the outer ear, foreign objects within the ear canal, fluid from ear infections, perforated eardrums, impacted earwax, and benign tumors, among other causes.
Treatment of conductive hearing loss consists of removing the obstruction, treating the infection, or surgically correcting the malformation of the outer ear, the eardrum, or the middle ear bones. If you suffer from conductive hearing loss, for instance from impacted earwax, you could possibly begin hearing better instantly following a professional cleaning. With the exception of the more serious kinds of conductive hearing loss, this form can be the simplest to treat, and treatment can restore normal hearing completely.
Sensorineural Hearing Loss
Sensorineural hearing loss disrupts the electrical conduction of sound from the cochlea to the brain. It is due to damage to either the nerve cells within the cochlea or the auditory nerve itself. With sensorineural hearing loss, the brain is provided with weakened electrical signals, reducing the volume and clarity of sound.
The principal causes of sensorineural hearing loss are:
- Genetic syndromes or fetal infections
- Normal aging (presbycusis)
- Infections and traumatic accidents
- Meniere's disease
- Cancerous growths of the inner ear
- Side effects of medication
- Abrupt exposure to exceedingly loud sounds
- Long-term exposure to loud sounds
Sensorineural hearing loss is most commonly connected with exposure to loud sounds, and so can be prevented by avoiding those sounds or by safeguarding your hearing with earplugs.
This form of hearing loss is a bit more complicated to treat. At present there are no surgical or medical procedures to repair the nerve cells of the inner ear. However, hearing aids and cochlear implants are very effective at taking over the amplification work of the damaged nerve cells, creating the perception of louder, more detailed sound.
The third type of hearing loss, mixed hearing loss, is essentially some combination of conductive and sensorineural hearing loss, and is treated accordingly.
If you have any trouble hearing, or if you have any ear pain or dizziness, it's best to pay a visit to your physician or hearing professional as soon as possible. In virtually every case of hearing loss, you'll attain the best results the earlier you address the underlying issue.
Natural frequency: This value is typically given in hertz (Hz).
Damping ratio: This value is typically represented as a percentage of the critical damping and is therefore given in percent (%).
Mode shape: The geometrical way the structure moves in a particular mode of vibration.
Everything vibrates! This is a fact of life, and modal analysis is the technique used to characterize how structures behave dynamically. Modal analysis provides an overview of the limits of a system's responses. For example, it gives a general answer to the question of what the limits of the system response are (such as when the maximum displacement occurs and how large it is) for a given input (such as a load applied at a given amplitude and frequency).
Every object has natural (or resonant) frequencies at which it tends to vibrate. These are also the frequencies at which the object transfers energy from one form to another (here from vibration to kinetic energy) with minimal loss. As the excitation frequency approaches a resonant frequency, the amplitude of an undamped system's response grows toward infinity. In other words, the frequencies at which the amplitude blows up are what modal analysis calculates.
Modal analysis is an analysis performed to determine the characteristics of the system, without any external loading. In this case, if we consider the generalized equation of motion,
M ẍ + C ẋ + K x = F(t),
then for structures the damping is generally neglected, as it is below 10%, and if we assume there is no external loading this reduces to
M ẍ + K x = 0,
which leads to the eigenvalue problem (K − ω² M) φ = 0. It is observed that the natural frequencies of the system depend on the mass and stiffness matrices.
The purposes of modal analysis in general are:
- Defining the natural frequencies and mode shapes of the system
- Checking the connections between components if there are rigid-body modes in the system
- Determining whether the constraints in the system are correct
- Determining the behaviour of the system under dynamic loads
Knowing the natural frequencies of a structure helps to relate them to the operating frequencies of the system under dynamic loading conditions and to predict its responses (such as resonance) at these frequencies. When performing modal analysis with the finite element method, it is important to build a model whose size and mesh best represent the mode shapes of the system. In particular, the elements must represent the geometry well enough to detect the natural frequencies correctly and to determine the mode shapes at these frequencies (such as bending and torsion modes).
So what should be done to improve the working conditions of the system as a result of modal analysis? Two important results we should pay attention to are the eigenvalues and the strain energy density values. Optimisation is required at the points where these values are maximum. At the points where the eigenvalue is maximum, we need to decrease the mass and increase the stiffness, so that the natural frequency value is increased and the system avoids resonance until higher frequencies. The other result is strain energy density. At the points where the strain energy density is maximum, it is necessary to increase the stiffness and strengthen the region. In this way, we can improve the working conditions by increasing the natural frequency values, which are characteristic of the system, before dynamic analysis. Modal analysis also plays an important role when dynamic analysis results need to be compared with physical tests.
It allows identifying the correct equipment and the correct location to be used for accelerometers and strain gauges. It helps to understand the test results during testing and to associate the virtual model with the prototype.
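To make the eigenvalue step above concrete, here is a minimal sketch of an undamped modal analysis for a hypothetical two-degree-of-freedom system in Python with NumPy/SciPy. The mass and stiffness values are made up for illustration; they are not taken from the article, and a real finite element model would supply much larger M and K matrices.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2-DOF system: two masses (kg) connected by two springs (N/m).
M = np.diag([2.0, 1.0])                      # mass matrix
K = np.array([[3000.0, -1000.0],
              [-1000.0,  1000.0]])           # stiffness matrix

# The undamped, unforced equation of motion M x'' + K x = 0 leads to the
# generalized eigenvalue problem K phi = omega^2 M phi.
eigvals, eigvecs = eigh(K, M)                # eigvals = omega^2 (rad^2/s^2)

omegas = np.sqrt(eigvals)                    # natural frequencies in rad/s
freqs_hz = omegas / (2.0 * np.pi)            # converted to Hz

for i, (f, shape) in enumerate(zip(freqs_hz, eigvecs.T), start=1):
    # Each column of eigvecs is a mode shape (mass-normalized by eigh).
    print(f"Mode {i}: {f:.2f} Hz, shape = {shape}")
```

The same pattern (assemble M and K, solve the generalized eigenproblem, inspect frequencies and mode shapes) is what a finite element solver performs internally during a modal analysis.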
A team at Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory has identified exactly which pairs of atoms in a nanoparticle of palladium and platinum – a combination commonly used in converters – are the most active in breaking those gases down. They also answered a question that has puzzled catalyst researchers: Why do larger catalyst particles sometimes work better than smaller ones, when you’d expect the opposite? The answer has to do with the way the particles change shape during the course of reactions, creating more of those highly active sites. The results are an important step toward engineering catalysts for better performance in both industrial processes and emissions controls, said Matteo Cargnello, an assistant professor of chemical engineering at Stanford who led the research team. Their report was published June 17 in Proceedings of the National Academy of Sciences. Significance (from PNAS Publication) Catalysts are essential for a sustainable future because they reduce the energy required in chemical processes and the emission of harmful and polluting compounds. Revealing the function and structure of a working catalyst is a challenging task but critical in order to prepare more efficient and higher-performing materials. Very often, active sites are formed by many atoms that cooperate to perform the catalytic function. It is therefore challenging to identify the structure of the active site. Here, we combine uniform nanocrystal catalysts and theory insights to reveal the active-site ensemble necessary for alkene combustion. This approach can be extended to reveal atomistic details of working catalysts for a variety of applications.
The Great Depression was a long-lasting economic downturn in the modern history of the western industrialized world. The Great Depression had both long-term and short-term causes and effects. It is commonly said to have started as soon as the stock market crashed in October 1929 and to have been solved by President Roosevelt's New Deal in the 1930s, but it really began after World War I in 1919 and did not end until after World War II in 1945. In a way the painting below is symbolic because it shows how Americans always had "apples to sell" even through the Great Depression; most Americans never lost hope, and President F.D. Roosevelt helped to keep hope alive with his "New Deal" for US citizens.
The following is a list of causes and effects of the time period in US History known as the Great Depression. It is not the only list, but it gives an accurate idea of how things happened in cities across America over time.
At first everyone was doing well and was happy to be out of World War I. Farmers were producing more and more due to new technology and wartime demand for food, and farm products were easier to make. But the more product they made, the more prices decreased. One early cause of the Great Depression in the 1920s was the farm crisis: farmers could not pay back their bank loans because they were not making a profit. Credit was easy to get at first, but then the farmers could not pay it back. Because farmers could not repay the money they had borrowed for their farms, the banks could not guarantee the cash deposited by others, which eventually led many of them to close. After that, the central part of the nation suffered a terrible drought. People lost everything in the drought and the dust storms, a period known as the Dust Bowl. People lost their homes, so many people in the central part of the nation were homeless and many farms were destroyed.
In the 1920s, American economic and political policy was called isolationism. Americans did not want to be part of anything outside America. They were fearful of immigrants and were prejudiced against them. This was called nativism, and it led to a shortage of cheap labor.
The effects of the Great Depression lasted through the rest of the 20th century, and we still see some of them today. They include FDIC insurance for bank accounts, the Social Security program, a larger government, and many public works projects such as the Hoover Dam, the Tennessee Valley Authority, and the Appalachian Trail.
Classical Music
Higher Music
Click here
Listen to this piece of music from the Classical period and answer the following questions:
• How would you describe the tonality?
• What is the time signature?
• What technique are the strings using at the beginning?
Classical Music - Overview
• 1750-1810
• Three main composers: Wolfgang Amadeus Mozart (1756-1791), Ludwig van Beethoven (1770-1827), Joseph Haydn (1732-1809)
Investigation Task
You will split into groups of 3. Each member of the group will research a different composer from the Classical Period: Mozart, Beethoven and Haydn. You will have ten minutes to find out the following information about each composer:
• Where each composer was born and died
• What genres/styles of music they composed
• Any famous pieces they composed
• What instruments were available during the Classical Period?
Once you have collated this information, you will be asked to share the information you have gathered with other groups. (see attached instruction sheets)
Eine Kleine Nachtmusik – Serenade for String Quartet
• Serenades were often meant for performances outdoors.
• Scored mainly for wind instruments
• Mozart writes his for string quartet – what is a string quartet? Violin 1, Violin 2, Viola, Cello
Eine Kleine Nachtmusik – 1st Movement
• This is in Sonata Form
• Sonata form is divided into three sections
• Exposition – 2 themes
• Development – 'develops' these themes
• Recapitulation – plays the two themes again
• (Usually ends with a coda)
Follow the music and answer these questions in your jotter:
• What key is this in?
• Insert in bars 6 and 8 where there is a trill
• Complete the notes and rhythm in bars 16-17
Click here
Classical Orchestra
• The orchestra was finally established in the Classical period
• 4 sections: Strings, Woodwind, Brass, Percussion
• By the end of the century, the woodwind became a self-contained section, like the strings
• Composers started to use wind instruments for simple changes of tone colour
• The horns helped to fill in the harmony once supplied by the harpsichord
• The trumpets and kettledrums were added to noisier passages
Sonata
• A sonata is a piece of music written for solo instrument, or solo instrument with piano accompaniment
Listen to the following sonata by Beethoven. Do you know what it is called?
• Now listen again and answer the following questions while following the score provided:
Beethoven – Moonlight Sonata (1st Movement)
• What notes are to be played sharp in this extract?
• How would you describe the dynamics at the beginning?
• What word would describe the rhythm at the end of bar 5?
• What chord do the notes in bar 1 outline?
• Explain the term "sempre pp e senza sordino" in bar 1.
• What form is a typical first movement of a sonata in?
Alberti Bass
• What is an Alberti bass?
• Your teacher will demonstrate an Alberti bass on the piano.
• As a class, you are now going to learn to play the Sonata in C by Mozart. What concepts can you find?
Mozart Piano Sonata in C Major, K. 545
Choose three features which are present in the music.
Symphony
• This is played by a full symphony orchestra.
• 1st movement – sonata form
• 2nd movement – usually slow, often in ternary form
• 3rd movement – (added at the end of the Classical period) usually in the form of a minuet and trio
• 4th movement – fast and can be in various forms, but often in rondo form
Mozart Symphony No. 5
In this question you will hear instrumental music. A guide to the music is shown below. You are required to complete this guide by inserting music concepts. There will now be a pause of 30 seconds to allow you to read through the question. The music will be played three times, with a pause of 20 seconds between playings. You will then have a further 30 seconds to complete your answer. In the first two playings, a voice will help guide you through the music. There is no voice in the third playing.
Here is the music for the first time.
Here is the music for the second time.
Here is the music for the third time.
Concerto
• What is a concerto?
• The concerto is a piece for solo instrument with orchestral accompaniment.
• It is usually in 3 movements, although in the Romantic period an extra movement was sometimes added.
• There is a constant dialogue between soloist and orchestra.
Listen to the following concerto from the Classical Period and follow the score before answering the questions:
• What is the key of the piece?
• Write the time signature in the correct place.
• Your teacher will now highlight concepts using your score.
• Would you like to hear something a bit different?
Vocal Music
Click here
• Opera was still a popular vocal form of music in the Classical period.
• Listen to this extract from Mozart's opera The Magic Flute, tick one box to describe the form, and state the type of voice.
Recitative / Aria / Chorus
The voice is a/an ________________
Listen to this version of the same aria. How would you compare this to the previous version?
• Florence Foster Jenkins
Mass
• The Mass was discussed earlier in the Renaissance period.
• The Mass continued to be popular in the Classical period.
• What is a Mass? It was a sacred choral work with text taken from the Roman Catholic liturgy.
• It had 5 sections: Kyrie, Gloria, Credo, Sanctus and Agnus Dei.
• It was originally written for church worship but in later years became a large-scale work for voices and orchestra.
Listen to Lacrimosa from Mozart's Requiem.
• What is the time signature?
• How would you describe the tonality?
• What cadence is used at the very end?
What is the most humane way to kill amphibians and small reptiles that are used in research? Historically, such animals were often killed by cooling followed by freezing, but this method was outlawed by ethics committees because of concerns that ice-crystals may form in peripheral tissues while the animal is still conscious, putatively causing intense pain. This argument relies on assumptions about the capacity of such animals to feel pain, the thermal thresholds for tissue freezing, the temperature-dependence of nerve-impulse transmission and brain activity, and the magnitude of thermal differentials within the bodies of rapidly-cooling animals. A review of published studies casts doubt on those assumptions, and our laboratory experiments on cane toads (Rhinella marina) show that brain activity declines smoothly during freezing, with no indication of pain perception. Thus, cooling followed by freezing can offer a humane method of killing cane toads, and may be widely applicable to other ectotherms (especially, small species that are rarely active at low body temperatures). More generally, many animal-ethics regulations have little empirical basis, and research on this topic is urgently required in order to reduce animal suffering. Concern about the ethical treatment of animals has prompted extensive discussion of how to minimise suffering. Unfortunately, human intuition may fail to predict the stress and suffering of species that are only distantly related to us (Rose, 2002,, 2007; Stevens, 2004; Langkilde and Shine, 2006). For example, mammals and birds typically react to falling ambient temperatures by attempting to maintain body temperature (by increasing metabolic heat production). Thus, exposure to low temperatures may cause intense discomfort. In contrast, many amphibians and reptiles exhibit highly variable body temperatures in the course of their day-to-day lives, and react to falling temperatures by becoming torpid (Pough, 1980; Frankenhaeuser and Moore, 1963; Rosenberg, 1978; Roberts and Blackburn, 1975; LaManna et al., 1980; Hunsaker and Lansing, 1962). Periods of low body temperature associated with inactivity are common on a seasonal or even daily basis for many ectotherms, even for species (such as lowland tropical taxa, and diurnal heliotherms) that spend most of their activity time at relatively high body temperatures (Pough, 1980). The immobility and unresponsiveness of such “high-temperature” ectotherms at low ambient temperatures suggest that their brain activity is reduced when they are cold. If that is true, then such animals could be humanely killed by cooling them to induce torpor (to reduce brain activity and thus, pain perception); and then reducing their temperature even further, to lethal levels. This was a popular method for humane killing of experimental animals for many years (McDonald, 1976), widely endorsed by animal welfare organisations (NSW Animal Welfare Advisory Council, 2004). However, opinion has shifted. Globally, modern veterinary guidelines now rule that “cooling then freezing” is ethically unacceptable (e.g. https://www.avma.org/kb/policies/documents/euthanasia.pdf). Ethics guidelines rarely cite primary research literature (Martin, 1995), and the argument against “cooling then freezing” rests upon a hypothesis rather than specific empirical data. 
The hypothesis is that temperatures low enough to induce ice-crystal formation in peripheral tissues are nonetheless high enough to allow painful sensations to travel through peripheral nerves and reach the brain, which in turn is warm enough to register those sensations as painful (Sharp et al., 2011). The plausibility of this scenario depends upon the temperatures at which ice crystals form relative to the temperatures at which nerves and brains cease to function; and on the magnitude of thermal differentials within an animal's body during rapid cooling. To evaluate this scenario, we (a) reviewed published literature on thermal dependency of nervous-system function to examine the assumptions that have outlawed “cooling then freezing”, and (b) measured brain activity and limb-core thermal differentials directly in ectotherms while they were being frozen. As a case study, we used the cane toad (Rhinella marina). These invasive anurans are spreading across Australia (Urban et al., 2007), fatally poisoning native predators (Shine, 2010). Community “toad-busting” groups kill many thousands of toads annually, by a variety of methods – some of which appear to be cruel (Clarke et al., 2009) or unreliable (Sharp et al., 2011); toads are also killed for university teaching and research. The prohibition on “cooling then freezing” has outlawed the most readily accessible method for killing ectotherms: for example, current guidelines for euthanasia of cane toads recommend blunt trauma or decapitation (Sharp et al., 2011), methods poorly suited to use by untrained people. Thus, the cane toad offers an excellent example of why it is important to know whether or not the prohibition of euthanasia via hypothermia is based on rational grounds. Review of published literature There is significant scientific debate about whether or not ectothermic vertebrates experience “pain” in the way that humans understand the term; several authorities suggest that we should talk of “nociception” instead (Rose, 2002,, 2007; Stevens, 2004). Even if a noxious stimulus induces activity in the brain, the process may not involve anything comparable to “pain” as perceived by humans (Key, 2015). Nonetheless, although the subjective experience of a cane toad may be very different to that of a human, most animal ethics committees (and the wider community) continue to believe that amphibians can feel pain. Even if we accept the contested point that an amphibian is capable of feeling pain, our review of published literature does not support the idea that placing a pre-cooled ectotherm into a freezer is inhumane, for at least four reasons: When a small ectotherm is pre-cooled and then placed into a freezer, thermal differentials within its body are minor (Hillman et al., 2009; Wilson et al., 2009; Bicego et al., 2007) and thus, the animal's brain cools almost as rapidly as do its limbs. Its neural network is likely to be close to freezing by the time that ice crystals form in peripheral tissues. Small animals cool to well below 0°C before freezing begins, allowing time for deep-body temperatures to fall to low levels. In anurans, ice crystals form at −1 to −4.3°C (Hillman et al., 2009). Thus, the critical issue is whether peripheral nerves can transmit nociceptive signals when the superficial tissue of the animal falls below −1°C. 
In ectotherms, transmission velocities for nerve impulses fall rapidly at low temperatures (Frankenhaeuser and Moore, 1963), and cease at temperatures close to 0°C: for example, 1.3–4°C in tortoises (Rosenberg, 1978), 0–2°C in frogs (Roberts and Blackburn, 1975). In frogs, the nociceptive peripheral neurons cease functioning at higher temperatures than do those transmitting signals such as touch or proprioception (Roberts and Blackburn, 1975). Cold is an anaesthetic (Wilson et al., 2009). Even a modest reduction in skin temperature reduces painful sensations in mammals (Bicego et al., 2007). In amphibians, minor cooling of one limb (to 6°C) reduces the animal's reaction to noxious stimuli (Suckow et al., 1999). In cane toads, low temperatures have the same structural and electrophysiological effects on myelinated nerves as do local anaesthetics such as lidocaine (Luzzati et al., 1999). Thus, nerve endings close to the peripheral tissues being frozen are unlikely to transmit nociceptive (“pain”) signals. Normal brain functioning is dependent upon temperature in ectotherms. The brains of cane toads fail to respond to electrical stimulation below 3.2°C (LaManna et al., 1980), broadly similar to many other ectotherms (Hunsaker and Lansing, 1962). A toad's limb and deep body temperature closely followed ambient temperature (Fig. 1A,B,C); differentials between the animal's skin and its core averaged <1°C (Fig. 1D). Accordingly, we recorded a continuous smooth decline in brain activity from fridge to freezer (Fig. 2). We saw no evidence of increased EEG activity, across any frequency bandwidths, as has been reported in animals exposed to painful stimuli (Lambooij et al., 2002; Zulkifli et al., 2014). Instead, the brain became electrically ‘quiet’ with decreasing temperature. Collectively, general features of the thermal dependency of nerve and brain function in ectotherms suggest that for cane toads (and potentially, for many species of amphibians and reptiles), cooling-then-freezing can offer a humane death. By the time that ice crystals form in peripheral tissues, the brain is almost as cold as those tissues; and hence, is unable to perceive or respond to nociceptive signals. Our experimental study on cane toads supported this more general result. Clearly, there are caveats to this conclusion. First, it is important to pre-cool the animal before exposing it to freezing temperatures. Second, the potential for freezing to induce nociceptive signals will be greater for larger animals, which can maintain greater thermal differentials between the brain and peripheral tissues (Key, 2015), and have higher crystallisation temperatures (Lee and Costanzo, 1998). However, the toads that we used (>200 g) were far larger than adults of most amphibians and reptiles. Mean adult mass >100 g occurs in <5% of amphibian species and <10% of lizard species (Pough, 1980), so our results have broad applicability. Third, low temperatures may not suppress nerve impulses as effectively in cold-adapted species, or cold-acclimated individuals, as they do in tropical taxa (Key, 2015). Species-specific studies are essential before applying hypothermia to kill individuals of any amphibian or reptile taxa that are routinely active at low body temperatures. Despite these caveats, our review of published literature and experiments on cane toads suggest that for many ectotherms (especially small-bodied, warm-climate taxa), cooling then freezing offers a humane form of euthanasia. 
Unfortunately, this simple, readily accessible method is currently outlawed internationally by ethics committees: not because of contrary evidence, but because of speculation combined with a lack of critical analysis of the available literature, and a dearth of empirical research. Similar problems extend to many other ethics issues (Langkilde and Shine, 2006). There is no point blaming the members of ethics committees; they are doing the best job they can, but cannot be expected to evaluate primary research literature or conduct their own experiments. We urgently need researchers to take up the challenge of clarifying which methods are humane, and which are not. Until scientists provide that evidence, animals will continue to suffer unnecessarily.

MATERIALS AND METHODS

We implanted electroencephalogram (EEG) electrodes for recording brain activity in four wild-caught adult female cane toads (115–133 mm snout–urostyle length, 201–235 g). After the animals were anaesthetised with MS 222 (tricaine methanesulfonate; 3 g/l), we drilled four holes (0.5 mm diameter, two per cerebral hemisphere) through the exposed cranium to the level of the dura overlying the dorsal cortex. A fifth hole was drilled over the olfactory bulbs for the ground electrode. All electrodes were gold-plated, round-tipped pins (0.5 mm diameter) glued in place using cyanoacrylate adhesive. Electrode wires (AS633, Cooner Wire, Chatsworth, California, USA) terminated at a connector fixed on the head with two stainless steel screws (25/095/0000, Hilco, London) and light-curing dental acrylic (Dentsply, Mt Waverley, Victoria). Electrode position was verified by dissection at the end of the study. Toads were then transferred to a damp cage, monitored until they regained normal motor function, and allowed a 10-day period of post-operative recovery at 30°C with food and water available ad libitum.

EEG activity was recorded at 100 Hz using a head-mounted, miniature (25×25×9 mm) and lightweight (8 g, including battery) Neurologger 2A datalogger (Vyssotski et al., 2009; Lesku et al., 2012). To record body temperatures during cooling and freezing, we inserted a thermocouple wire subdermally into each toad's left hind limb (to measure temperature immediately below the skin) and into the cloaca (to measure deep body temperature), respectively. These thermocouple leads, as well as one measuring ambient temperature, were connected to a TC-2000 thermocouple meter (Sable Systems, Las Vegas, NV USA) and logged each minute using ExpeData software via a UI2 analogue/digital converter temperature logger (Sable Systems, Las Vegas, NV USA).

Prior to experiments, toads were transferred into a Faraday cage (22×17×12 cm) and then placed into a standard household refrigerator (Kelvinator, Charlotte, NC USA). Once the toad's core reached fridge temperature (∼5°C), it was transferred to a household freezer (Fisher and Paykel, Auckland, New Zealand) for 30 min. Toads removed from the freezer after this time were dead (did not regain consciousness).

Fast Fourier Transforms were performed on 4-s, artefact-free epochs to calculate power in 0.39 Hz bins between 1.17 and 49.61 Hz using RemLogic 3.2 software (Embla Systems, Broomfield, CO USA). Cumulative EEG power was calculated as a measure of brain activity in 10-min bins, starting at the time the toad was placed in the new thermal regime. We also quantified power in the bandwidths typically used in analysis of the mammalian EEG (delta, theta, alpha, beta and gamma) and recently applied to amphibians (Fang et al., 2012).
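For readers who want to reproduce the spectral step outside RemLogic, the sketch below shows one way to compute binned power for a single 4-s epoch sampled at 100 Hz. It is an illustrative approximation only, not the authors' code: the bin width and frequency limits come from the description above, but the windowing choice, the exact bin-edge placement and the artefact handling are assumptions.

```python
import numpy as np

def binned_eeg_power(epoch, fs=100.0, bin_hz=0.39, f_lo=1.17, f_hi=49.61):
    """Approximate power binning for one artefact-free EEG epoch.

    epoch : 1-D array of EEG samples (e.g. 400 samples for a 4-s epoch at 100 Hz).
    Returns (bin_centres_hz, power_per_bin). The Hann window and the bin-edge
    placement are assumptions, not the published RemLogic settings.
    """
    epoch = np.asarray(epoch, dtype=float)
    window = np.hanning(len(epoch))                 # reduce spectral leakage
    spectrum = np.fft.rfft(epoch * window)
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    power = np.abs(spectrum) ** 2                   # power spectrum of the epoch

    edges = np.arange(f_lo, f_hi + bin_hz, bin_hz)  # ~0.39 Hz-wide bins
    idx = np.digitize(freqs, edges)                 # assign each frequency to a bin
    power_per_bin = np.array([power[idx == k].sum()
                              for k in range(1, len(edges))])
    centres = edges[:-1] + bin_hz / 2.0
    return centres, power_per_bin

# Example: one simulated 4-s epoch (400 samples at 100 Hz)
rng = np.random.default_rng(0)
centres, p = binned_eeg_power(rng.standard_normal(400))
print(len(p), p.sum())   # number of bins and total power for this epoch
```

Cumulative power in a 10-min bin would then, presumably, be the sum of these per-epoch totals over all artefact-free epochs recorded in that interval.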
All procedures were approved by the University of Wollongong Animal Ethics Committee (protocol no. AE10/05). We thank J. Thomas, C. Shilton, J. and T. Shine, M. Thompson, and M. Elphick for their assistance, and P. Hawkins for suggestions. R.S. conceived the study; J.A., A.J.M., J.A.L. and M.S. gathered the data; A.L.V. designed and provided neurologgers; J.A.L. analysed the EEG data. All authors contributed to manuscript preparation and gave final approval for publication. The work was funded by the Australian Research Council (Grant no. FL120100074). The authors declare no competing or financial interests.
Eagle-eyed ceremony planners no doubt noticed that a traditional symbol was missing from the platform at President Biden's inauguration. The Mace of the Republic, which symbolizes the authority of the House of Representatives, was not present.

On a typical inauguration day, the House of Representatives goes into session, then recesses to walk as a group to witness the ceremony. The sergeant at arms, carrying the mace, leads a procession of the members of the House and stands behind them holding it throughout the inauguration. This year, due to Covid-19 restrictions, the House did not go into session on January 20, so the mace was not used in the ceremony.

Crafted in 1841 after the British destroyed our country's original mace when they burned the Capitol during the War of 1812, the mace has 13 ebony rods representing the 13 original colonies. It is topped by a silver eagle perched on a silver globe. It symbolizes the authority of the House and is always present when the House is in session. It is carried into the chamber each legislative day and posted on a green marble pedestal on the rostrum to the right of the Speaker. It is occasionally presented in front of an unruly member to restore order.

The tradition of maces as symbols of authority dates to the Middle Ages, when maces were used as war clubs. The roots of the practice can be traced as far back as ancient Rome.

An academic mace symbolizes the authority invested in the president by a school's governing body. Much like the Mace of the Republic and the House of Representatives, when the authority is present, the mace is present. This is why the mace is an integral part of commencement exercises, when students are invested with degrees by the lawful authority of the university, and why the mace plays an important ceremonial role in academic presidential inaugurations.

While some schools possess an ancient mace, one can be created at any time in a school's history. Maces are often commissioned to commemorate a milestone anniversary or presidential inauguration, frequently incorporating artifacts, precious stones, and rare wood.

When to Use the Mace

The mace is used only on formal academic occasions, such as commencement, convocations, and presidential inaugurations, when participants are in full regalia and the president is involved. Because the mace is a symbol of presidential authority as the university's legal representative with the right to govern, it is carried in procession immediately before the president. When the mace is present, the authority of the university is present.

More information about how to use a mace on campus is available in my book, Academic Ceremonies: A Handbook of Traditions and Protocol, available from CASE at case.org.
face to face & e-learning

Teaching Methodology and Good Practices in the Primary School

The learning process can be effective as long as it is systematically organized on the basis of findings from psychology, the special requirements of each cognitive field, and children's needs. A teaching methodology follows a set of principles consistent with a particular philosophy or theory of learning. Given that there is no generally accepted theory of learning, teaching methodologies show some variation.

Learning cannot be based on curricula organized exclusively around the various cognitive fields. Children must develop key competences that enable them to build a positive attitude toward learning, so that they acquire a culture of learning and development throughout their entire lives.

Effective teachers can choose one teaching method, or combine several, even within the same subject, in order to achieve the Success Indicators and Proficiency Indicators set before designing and planning each subject to be taught. Activating all students is achievable when the teacher allows for varied activities and differentiated tasks, so that the work is feasible and interesting to perform even when students have different abilities.

This course offers a wide range of activities which teachers can implement directly to support children's development of identity, autonomy and social skills.
4 December 2012

Large asteroids typically hit Earth with enough force to vaporize the entire rock. One asteroid the size of San Francisco formed the 40-mile-wide Morokweng crater in South Africa – but puzzlingly, in 2006, scientists discovered a solid piece of the space rock the size of a bowling ball. The discovery spurred a scientific mystery: How could any of it have survived?

Planetary scientist Ross Potter of the Lunar and Planetary Institute in Houston, Texas, may have an answer. In his poster Monday at the American Geophysical Union's Fall Meeting, Potter detailed the conditions necessary for asteroid "survival," using a model simulating the physics of shock waves rippling through an asteroid on impact. An asteroid's shape and internal structure, he found, affect its survival rate.

An egg-shaped asteroid, Potter said, leaves more solid material behind than a round one. Shock waves bounce around within the oval shape, lessening the violent pressure that would otherwise melt or vaporize the rock. Yes, the end of the asteroid that hits the ground is still obliterated. But the spaceward end still has a chance. Potter also showed that solid asteroids survive better than those pockmarked with Swiss-cheese-like holes. Solid rock, Potter found, holds up to stronger impacts without melting.

These results, Potter said, led him to an equation for estimating the amount of an asteroid that can survive impact, figuring in the asteroid's speed, impact angle, shape, porosity, and composition. But the equation doesn't fill in all the gaps. It's still challenging to draw conclusions about the original asteroid from the fragments remaining. Even under the best conditions, there's not much material left to study. "For the Morokweng impact, it's difficult to constrain from the pieces you find how porous that particular impactor was," Potter said.

Studies of our solar system's asteroid belt show a wide range of possible porosities, he said. The dinosaur-era Morokweng asteroid could have been up to 10 percent empty space – within the ideal range for asteroid survival. A high impact angle and low speed also probably helped the asteroid survive, although the same cannot be said for the Jurassic-era dinosaurs in its path.

-Paul Gabrielsen is a science communication graduate student at UC Santa Cruz
The way in which a society, on both a large and a small scale, oppresses and discriminates against people who have, or are perceived to have, physical, cognitive, or other disabilities. Disabled people coined this word to mirror the form and meaning of existing words like racism and sexism. Ableism can take the form of anything from the prejudice that casts individual disabled people as tragic and childlike objects of pity, to the systems in society which exclude large numbers of disabled people from gainful employment or a decent standard of living. The word can be used to describe both negative and seemingly positive stereotypes. It can include both abstract ideas and concrete architectural barriers. Ableist is the adjective form. Other words for the same concept include disableism and disabilism.
Australia's largest dinosaur drowned and buried in Noah's Flood
The evidence is overwhelming
Published: 17 June 2021 (GMT+10)

Paleontologists recently announced the largest dinosaur ever discovered in Australia (figure 1), which they called Australotitan cooperensis (the genus name means 'the southern titan'; cooperensis because it was found near Cooper Creek). The first of its fossilized bones was recovered in 2006 and 2007. Robyn and Stuart Mackenzie found the bones while riding motorbikes on their property 90 km west of Eromanga, 1,000 km west of Brisbane, and not far from the South Australian border. Now, after years of study, researchers have announced that the dinosaur's size is on a par with the largest titanosaur dinosaurs from other parts of the world. All these dinosaurs were drowned and buried in Noah's Flood. We will see evidence for this as we proceed.

Cooper, as it is nicknamed, was said to be as long as a basketball court and as high as a two-story building. That is, 25 to 30 metres (80 to 100 feet) long and 5 to 6 metres (16 to 21 feet) tall at its hip. They said it weighed about 70 tonnes. Its size was estimated from an arm bone, its humerus, which is the bone that reaches from the shoulder to the elbow. Other bones the researchers found include shoulder blades and pelvic bones. Cooper is similar in appearance to other long-necked sauropods such as Brachiosaurus and Apatosaurus.

Robyn Mackenzie is now a field paleontologist. In 2016, together with her husband Stuart, their family, and the small 60-strong town of Eromanga, they established the Eromanga Natural History Museum.

Didn't see the evidence for Noah's Flood

Scott Hocknull from the Queensland Museum headed the research and field investigations. He and five others, including Robyn and Stuart Mackenzie, published their findings in the journal PeerJ on 7 June 2021.1 At 130 pages, it is a thorough paper in which they meticulously describe much relevant detail for A. cooperensis, such as the geology and paleontology of the discoveries. They also summarise other dinosaur finds in Queensland over the last decade or so.

These scientists have done careful research, faithfully excavating, measuring, and recording the find. However, it's fascinating that they have not recognized the dramatic evidence that this dinosaur perished in Noah's Flood and was buried during that event, which destroyed the earth some 4,500 years ago. I suppose it's understandable they did not consider this, because such an idea would not have been given serious mention in their geological training. Mainstream geologists do not even think about the possibility that Noah's Flood happened. In fact, to mention such an idea among their colleagues would risk losing their job. Nevertheless, the evidence for their Flood demise is overwhelming (see box).

The dinosaur fossils were found in the Winton Formation, which is the topmost formation of the Great Artesian Basin (figure 2). This extensive sedimentary basin covers much of eastern Australia. It's generally around 2–3 km thick (figure 3) but more in some places, like the Surat Basin. The careful study linked in the box shows that this basin was deposited about halfway through Noah's Flood as the waters were peaking on Earth. Sediments would have been deposited above the Winton Formation as the waters of the Flood continued to rise for a while, but these were eroded away as the waters were receding.
This left the Winton Formation as the topmost deposit remaining on the surface, but erosion was uneven, meaning the formation is not continuously present. This makes the geological relationships difficult to work out. Nevertheless, Hocknull et al. estimate the dinosaur bones were some 270–300 m from the base of the Winton Formation, which is close to the peak of the Flood.

Evidence dinosaurs were overwhelmed and buried by Noah's Flood

Here are links to a few articles that describe the evidence for the dinosaurs being buried during Noah's Flood.

Evidence that the sediments of the Great Artesian Basin were being deposited as the waters of Noah's Flood were rising and not long before they reached their peak. Figure 3 in this article shows the geological formations that comprise the Great Artesian Basin, and the location of the Winton Formation in the Eromanga Basin.
The Great Artesian Basin, Australia

A simple guide to help you see where the dinosaur rocks (Triassic to Cretaceous) fit in Noah's Flood.
The geology transformation tool: A new way of looking at your world

Evidence for water covering Queensland quite deeply in the past and for marine animals being buried rapidly such that they are well preserved.
Deluge disaster: Amazing Australian plesiosaur preservation

Evidence that the burial of the dinosaur involved lots of water and sediment. Fossilization of large animals after death requires rapid burial by a significant depth of sediment.
Dead crocodiles down under: How croc decomposition helps confirm a crucial element of Bible history

More evidence for lots of water. Dinosaurs near Winton making tracks while swimming in shoulder-deep water.
A stampede of swimming dinosaurs

Another example of a dinosaur caught up in deep water, this time a dinosaur in Spain swimming through deep water.
Terrible lizards trapped by terrible Flood

More water with large dinosaurs in Texas, USA, making footprints while in deep water.
Thunder lizard handstands

Enormous dinosaurs near Winton, possibly the size of A. cooperensis, leaving footprints in soft mud. Notice the flatness of the landscape in figure 6 of this article. This flatness was caused by the waters of Noah's Flood when they covered the whole area. The same flat landscape is evident around Eromanga.
Dramatic dinosaur footprints at Karoola station, Australia: Fleeing the rising waters of Noah's Flood

Dinosaur graveyards are evidence of large-scale catastrophe. Fossil preservation points to rapid burial.
Dinosaur herd buried in Noah's Flood in Inner Mongolia, China

Another graveyard consistent with the biblical Flood impacting the whole world.
Massive graveyard of parrot-beaked dinosaurs in Mongolia: Paleontologists puzzle about the cause of death but miss the obvious clue

This article explains why we can discount the quoted, subjective dates of millions of years and interpret the dinosaur as being buried in Noah's Flood.
How dating methods work

Another explanation of how radioactive dates are accepted or rejected depending on the geological field relationships.
The way it really is: little-known facts about radiometric dating

Insights from the artist's impression

Vlad Konstantinov and Scott Hocknull, two of the authors of the PeerJ paper, made available an artist's impression of the dinosaur (figure 1). It's a good representation, especially in the way the sauropod holds its head and tail. Interestingly, it shows the dinosaur wandering through an idyllic country setting, a desirable place to take one's family camping today.
However, it was anything but idyllic at the time it was buried. Cooper and his friends were trying to escape from the turbulent, sediment-laden waters of the global Flood as they were devastating the area. We see evidence for this at dinosaur sites all over the world.

In the figure, a forest is shown growing in the distance, but there would not have been any forests growing at that time. All the forests had been destroyed. Yes, there was much vegetation around, but this had been ripped up by the Flood from the pre-Flood continents. The vegetation was being washed around and buried. Some of the broken logs are found petrified in the region. It is worth noting that vegetation buried just a little earlier in the Flood across this vast area is now more than a kilometre underground. It is the source of the oil and gas from the Eromanga and Cooper basins, which are the premier onshore oil and gas producing basins in Australia.

Again, on the artist's impression, consider that the water would not have been flowing in a gentle stream as the picture shows. Rather, it would have been surging periodically across the area in a torrent. It is not uncommon to find evidence of dinosaurs in deep water at the time—several examples are given in the box above. They were trying to escape. At this site near Eromanga, we see evidence of how the dinosaurs were trampling through the mud and sediment, leaving a footprint mishmash that has been preserved by further sediment dumped on top by the floodwaters. The paleontologists describe this trampled sediment as dinosaur bioturbation.

Next time you see a picture of a dinosaur like this or read a news report of a new dinosaur find, remember the global Flood that the Creator brought upon our world. Dig out your Bible and read what happened in Genesis chapters 6–9. Consider that these dinosaurs that are being dug up in Queensland are described in that account, in Genesis 7:21–22. This must be one of the saddest statements in the Bible: Every living thing that moved on land perished—birds, livestock, wild animals, all the creatures that swarm over the earth, and all mankind. Everything on dry land that had the breath of life in its nostrils died. And Hebrews 11:7 must be one of the most encouraging: By faith Noah, when warned about things not yet seen, in holy fear built an ark to save his family. By his faith he condemned the world and became heir of the righteousness that is in keeping with faith.

References and notes
1. Hocknull SA, Wilkinson M, Lawrence RA, Konstantinov V, Mackenzie S, Mackenzie R. A new giant sauropod, Australotitan cooperensis gen. et sp. nov., from the mid-Cretaceous of Australia, PeerJ 9:e11317, 2021 | DOI 10.7717/peerj.11317.
Converting cubic feet into pounds is not a direct calculation because cubic feet is a measure of volume and the pound is a measure of mass. A cubic foot of lead, for example, will weigh a lot more than a cubic foot of feathers. The key to converting volume into mass is to use the density of the object in the equation. If you know the object's density, you can convert its cubic feet into pounds with a simple calculation.

Write down the density of the material you are converting. It should be expressed as either pounds per cubic foot or kilograms per cubic meter. To convert kg/m3 to lb./cubic feet, multiply by 0.0624. If you don't know the density of the material, try checking the list at Gerry Kuhn's website (see Resources). For example, gold has a density of 19,302.2 kg/m3, which is 1,204.46 lb./cubic feet.

Write down the number of cubic feet you are converting. For the gold example, use 20 cubic feet.

Multiply this number by the density figure to arrive at your answer for how many pounds it would weigh. For the gold example, it would be 20 multiplied by 1,204.46 lb. per cubic foot for a result of 24,089.20 lb. of gold in 20 cubic feet.

About the Author

Based in the Washington, D.C., area, Dan Taylor has been a professional journalist since 2004. He has been published in the "Baltimore Sun" and "The Washington Times." He started as a reporter for a newspaper in southwest Virginia and now writes for "Inside the Navy." He holds a Bachelor of Arts in government with a journalism track from Patrick Henry College.
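Returning to the conversion steps described above, here is a short Python sketch of the same arithmetic. It is purely illustrative and not part of the original article; the function name and the small density table are invented, and the density values are approximate.

```python
# Convert a volume in cubic feet to a weight in pounds using density.
# Density values are illustrative approximations; look up the material you need.
DENSITY_LB_PER_FT3 = {
    "gold": 1204.46,   # ~19,302 kg/m^3 multiplied by 0.0624
    "water": 62.4,
}

def cubic_feet_to_pounds(volume_ft3, density_lb_per_ft3):
    """Mass (lb) = volume (ft^3) * density (lb/ft^3)."""
    return volume_ft3 * density_lb_per_ft3

# The gold example from the article: 20 cubic feet of gold
print(cubic_feet_to_pounds(20, DENSITY_LB_PER_FT3["gold"]))  # about 24,089 lb
```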
Vernalization, the artificial exposure of plants (or seeds) to low temperatures in order to stimulate flowering or to enhance seed production. By satisfying the cold requirement of many temperate-zone plants, flowering can be induced to occur earlier than normal or in warm climates lacking the requisite seasonal chilling. Knowledge of this process has been used to eliminate the normal two-year growth cycle required of winter wheat. By partially germinating the seed and then chilling it to 0° C (32° F) until spring, it is possible to cause winter wheat to produce a crop in the same year. Devernalization can be brought about by exposing previously vernalized plants or seeds to high temperatures, causing a reversion to the original nonflowering condition. Onion sets that are commercially stored at near freezing temperatures to retard spoilage are thereby automatically vernalized and ready to flower as soon as they are planted. Exposure to temperatures above 26.7° C (80° F) for two to three weeks before planting, however, shifts the sets to the desired bulb-forming phase.
Possessive Pronouns Worksheets Keep children in grade 1 and grade 2 grammatically refreshed with our printable possessive pronouns worksheets with answers! As you can tell from the name, possessive pronouns are pronouns that show possession. Corralled here are exercises like identifying possessive pronouns, completing sentences using correct possessive pronouns, sorting possessives, and more. Bring the young ones some much-needed pronoun upliftment with our free possessive pronouns worksheet pdf! Children in 2nd grade walk through this part of our printable possessive pronouns worksheets, where the task is to color the suitable possessive pronoun to complete a bunch of sentences. An ode to telling possessive pronouns from adjectives, this possessive pronouns exercise works great for grade 2. Sort the possessives in the word box as possessive pronouns and adjectives, and write them in the correct columns. While a possessive adjective is followed by a noun as in "This is his shirt", a possessive pronoun often ends the sentence like in "It is his". Fill in the blanks with apt possessive pronouns and adjectives. If 1st grade kids are feeling down about their possessive pronouns, this practice worksheet provides great companionship! Cut each possessive pronoun and glue it correctly to complete each sentence. Check the correct possessive pronouns/possessive adjectives to complete sentences in this pdf. Open up to possessive pronouns, such as "mine" and "yours" and possessive adjectives, like "my" and "your". When Dad buys you a brand-new sports car, take good care of it for it's all "yours"! Pick appropriate possessive pronouns from the box and complete these sentences. Possessive pronouns are exciting! This possessive pronouns worksheet pdf delivers enough winning charm to carry the young scholars in grade 1 away. Watch their spirits fly as they complete a bunch of sentences using possessive pronouns.
|Time: 20 min|

This activity practices changing the point of view of a sentence. For example, "You like me." is from my point of view speaking to you. To change it around, you would say "I like you". With New Horizon, students learn all of this in Unit 7 of book one. Although the material is covered in the first year, it can be reviewed by the second year too.

- Enough unique cards for at least one card per student, plus some copies of the cards so that students can come back for more when they lose their card. Each card should have a sentence from the two points of view: "You like me." and "I like you".
- (optional) A poster to show how the words change ("my" becomes "your"). See page 70 of New Horizon One for such a table.

- Demonstrate with the JTE how to play.
- Practice with a few sentences, testing some of the students.
- Each student gets one card.
- Check that they can read all the words; perhaps get them to practice their sentence.
- Students stand up and walk around the room, looking for people to talk with.
- When they meet someone, they say "Hello" and Janken.
- The loser must read their sentence and the winner gets the chance to reply.
- If the reply is correct (according to the card), the winner gets to keep the loser's card.
- When a student has more than one card, they can choose which card to read (they want to choose the harder one so that it is harder to answer).
- When a student has no cards, they go to the ALT or JTE and say, "Help me! Please give me a new card" and they receive a new card.
- If the reply is incorrect, the loser reads the correct sentence and they both move on.
- The goal is to get the most cards.

Two students meet; "Hello" they both say to each other. They Janken, with Ken losing.
- Ken reads the top sentence of his card: "You like me".
- Mary thinks and then replies with the correct answer: "I like you".
- Ken looks at his card and realises (sadly) that the answer is correct, handing over the card.
- They both say "goodbye"; Mary looks for a new partner and Ken finds a teacher to ask for a new card.

This can be a difficult task for many of the students. It is therefore essential that the students understand the concept. Ensure that the class gets enough practice before letting them try it themselves.
Linda Crampton is a writer and former teacher with a first-class honors degree in biology. She writes about the scientific basis of disease.

An Essential Chemical

Cholesterol is an essential chemical in the human body and has many vital functions. Our bodies make all of the cholesterol that we need. If we eat certain foods or follow certain lifestyles, the cholesterol level in our body may increase and cause health problems, such as heart disease and strokes. Taking steps to keep the substance at a healthy level is therefore very important.

Cholesterol exists in several forms. Excess LDL cholesterol stimulates the buildup of fatty deposits in the linings of our arteries. HDL cholesterol helps to remove these deposits. While our bodies need both of these substances, it's important to keep the amount of the LDL form under control.

LDL cholesterol and HDL cholesterol are often referred to as different chemicals. The only difference is the specific type of molecule attached to the cholesterol, however. The attached substance acts as a carrier.

Functions of Cholesterol

Essential Functions in the Cell Membrane

Cells are surrounded by a membrane that determines which substances get in and out of the cell. Cell membranes require cholesterol in order to function properly. The chemical maintains the correct fluidity of the membrane at different temperatures. It increases the fluidity at low temperatures and decreases it at high temperatures. It also reduces the permeability of the membrane to certain substances.

Improving Nervous System Function

A neuron (nerve cell) has an extension called an axon. The axon transmits nerve impulses to the next neuron. The myelin sheath is a covering that surrounds and electrically insulates axons. This insulation speeds up the transmission of nerve impulses. Myelin contains a large concentration of lipids, including cholesterol.

Cholesterol is a steroid molecule and is converted to steroid hormones in the body. These hormones include the reproductive hormones estrogen, progesterone, and testosterone. Other steroid hormones made from cholesterol are cortisol and aldosterone. Cortisol has many functions in the body, including helping to regulate the blood sugar level. Aldosterone affects the amount of sodium ions and water in the body.

Vitamin D Production

Our skin makes a vitamin D precursor from 7-dehydrocholesterol when it absorbs ultraviolet light. This precursor is then converted to active vitamin D inside the body. Researchers are discovering that vitamin D has many very important functions in the body. The vitamin is needed for the absorption of calcium in the small intestine and also plays a role in immunity and cancer prevention.

Bile Acids and Salts

Bile is a yellow-green liquid made by the liver. Bile emulsifies fats in the small intestine, which prepares them for digestion by enzymes. The emulsification is performed by bile acids, which may exist in the form of bile salts. Bile acids are made from cholesterol.

Cholesterol is a waxy substance. It's not inherently "bad." In fact, your body needs it to build cells. But too much cholesterol can pose a problem. — American Heart Association

LDL Cholesterol Facts

Cholesterol is a type of lipid. Since lipids can't dissolve in the watery blood plasma, cholesterol molecules are attached to lipoprotein molecules. These transport the chemical around the body in the blood. A lipoprotein contains both lipid and protein.
LDL cholesterol contains lipoproteins that have low density and is often called the "bad" cholesterol. The lipoproteins transport their cargo from the liver to the rest of the cells in the body, which is a necessary function. When there is too much LDL cholesterol in the blood, however, cholesterol molecules are deposited in the lining of arteries. Here they combine with other substances such as fat and calcium to form a material called plaque. The plaque may protrude into the channel of the artery, impeding blood flow.

Atherosclerosis and Arteriosclerosis

The buildup of plaque in arteries is called atherosclerosis. Plaque can decrease the available space for blood flow. It also increases the probability of blood clots. Bits of plaque may break off, leaving a rough surface which can cause a blood clot to develop. The clot and broken bits of plaque may move to other areas, blocking the flow of blood. These processes may cause a heart attack if they happen in a coronary artery, since the coronary arteries supply oxygen and nutrients to the heart muscle. Blockage of the carotid arteries going to the brain can cause a stroke. Blocked arteries in the arms and legs may result in peripheral artery disease (PAD), also called peripheral vascular disease. Peripheral artery disease affects the legs more commonly than the arms and can cause leg numbness and weakness.

Plaque can also cause artery walls to become less flexible. When the artery walls become stiff and inflexible, a person is said to have arteriosclerosis.

HDL cholesterol consists of cholesterol attached to high-density lipoproteins. It's known as the "good" cholesterol because it reduces the risk of heart disease in most people. High-density lipoproteins transport their cargo from the arteries to the liver, which processes the chemical or eliminates it from the body.

For some time, the standard medical advice has been "LDL cholesterol bad, HDL cholesterol good". The recommendation is still considered valid by many nutritionists. There is plenty of evidence that an excessive level of LDL cholesterol increases the risk of cardiovascular problems. In addition, multiple research projects have shown that drugs that decrease the level of the substance reduce the risk of health problems such as heart disease.

New research suggests that we don't completely understand the role of HDL cholesterol, however. It's probable that there is more to learn about the other versions of the substance as well. Researchers have found that people with a rare gene mutation have a very high level of HDL cholesterol in their blood. They also have an increased risk of heart disease. This observation doesn't necessarily mean that the increased risk is due to the extra HDL cholesterol. "Correlation doesn't imply causation" is a common statement in scientific investigation. The observation does indicate that we need to do more research, however.

Blood Test Results

Several types of blood tests can be used to measure the cholesterol level. One test determines the total level of the chemical in the blood. If this test shows that the level is higher than it should be, more specific tests can be performed to discover the levels of the different forms of the chemical. A lipid profile is a blood test that measures the level of total cholesterol, LDL cholesterol, HDL cholesterol, triglycerides (neutral fats), and sometimes VLDL cholesterol (very low density lipoprotein cholesterol) as well.
Like the LDL form of the substance, the VLDL type stimulates cholesterol to build up in the arteries. The normal blood level of the chemical is often said to be between 5 and 40 mg/dL.

Sometimes the ratio of total cholesterol to HDL cholesterol is reported. The lower this ratio, the better (within certain limits). A desirable ratio is said to be 4.0. A ratio of 5.0 is considered to be a borderline value and 6.0 a high one.

The tables below show the generally accepted meanings of the blood test numbers. Women usually have a higher HDL cholesterol level than men. While men are said to have a low HDL cholesterol level at less than 40 mg/dL (1 mmol/L), women have a low level at less than 50 mg/dL (1.3 mmol/L).

| Total Cholesterol Level (mg/dL) | Total Cholesterol Level (mmol/L) | Possible Significance |
| --- | --- | --- |
| Less than 200 mg/dL | Less than 5.2 mmol/L | Desirable |
| 200 to 239 mg/dL | 5.2 to 6.2 mmol/L | Borderline high |
| 240 mg/dL and above | More than 6.2 mmol/L | High |

| LDL Cholesterol Level (mg/dL) | LDL Cholesterol Level (mmol/L) | Possible Significance |
| --- | --- | --- |
| Less than 100 mg/dL | Below 2.6 mmol/L | Optimal |
| 100 to 129 mg/dL | 2.6 to 3.3 mmol/L | Near optimal |
| 130 to 159 mg/dL | 3.4 to 4.1 mmol/L | Borderline high |
| 160 to 189 mg/dL | 4.1 to 4.9 mmol/L | High |
| 190 mg/dL and above | Above 4.9 mmol/L | Very high |

| HDL Cholesterol Level (mg/dL) | HDL Cholesterol Level (mmol/L) | Possible Significance |
| --- | --- | --- |
| Less than 40 mg/dL | Below 1 mmol/L | High risk of heart disease |
| 40 to 59 mg/dL | 1 to 1.5 mmol/L | Less risk of heart disease |
| 60 mg/dL and above | Above 1.5 mmol/L | Protective against heart disease (but see note below) |

Healthy Fats in the Diet

According to the majority of health experts, fats in the diet should be mainly monounsaturated. Monounsaturated fats have been found to have a neutral or even beneficial effect on blood cholesterol. Olive oil is rich in monounsaturated fatty acids and lowers LDL cholesterol. The same is true for almonds and avocados. Walnuts, which contain mainly polyunsaturated fats, also lower LDL cholesterol.

Nutritionists generally recommend that saturated fats be restricted in the diet (but not eliminated), since they have been found to increase the cholesterol level in the body. Saturated fats are usually found in foods that come from animals, such as fatty meats and full-fat dairy foods. Some plant foods contain saturated fats too, such as coconut oil.

Eggs are low in saturated fat but high in cholesterol. For most people, eating foods containing cholesterol doesn't increase the blood cholesterol level significantly. Eggs are packed full of nutrients and are a great addition to the diet, except for people who have been diagnosed with an inherited form of hypercholesterolemia. In this disorder, the blood cholesterol is abnormally high. Doctors generally advise people with hypercholesterolemia to avoid or limit foods containing cholesterol.

Artificial trans fats have been partially hydrogenated to change their properties. Nutritionists say that they should be completely removed from the diet, since they increase LDL cholesterol and decrease the HDL type.

Our liver helps control the amount of cholesterol in our body. For example, if we obtain the substance from food, the liver reduces the amount that it makes. However, there is a limit to the liver's ability to help us.

Healthy fats are an essential component of the diet. They are high in calories, however, and should be eaten in moderation. It's a good idea to avoid unhealthy additions to good fats, such as the salt and roasting oil added to some nuts.

Soluble Fiber Facts and Effects

Soluble fiber has been found to lower LDL cholesterol.
This type of fiber forms a gel when it mixes with water in the small intestine. There are several theories concerning how this gel lowers cholesterol. One theory is that the gel prevents the reabsorption of bile acids from the small intestine. Bile acids are usually reabsorbed once they have done their job. If they aren't reabsorbed, they pass out of the body in the feces. The liver then has to convert more cholesterol into bile acids, thereby reducing the blood cholesterol level.

Foods containing significant amounts of soluble fiber include the following:
- grains such as oatmeal and barley
- vegetables such as peas, beans, beets, parsnips, carrots, potatoes, and sweet potatoes
- fruits such as bananas, apples, pears, strawberries, plums, prunes, and citrus fruits

Other Foods That Affect Cholesterol Level

While there is strong evidence that certain foods lower LDL cholesterol, the evidence that specific foods raise the HDL type is less strong. Alcohol appears to increase the substance, but excess alcohol consumption can cause other health problems. Some evidence suggests that cranberry juice, raw onions, and omega-3 fats found in fish such as salmon and sardines can increase HDL cholesterol. Salmon is a heart-healthy food even without affecting HDL cholesterol, since it lowers the level of triglycerides in the blood and is low in saturated fat.

Niacin, or nicotinic acid, is a type of B vitamin. Niacin has been found to lower LDL cholesterol and significantly raise HDL cholesterol when taken at high doses. These doses can cause unpleasant and possibly dangerous side effects, so high-dose niacin should never be taken without a doctor's supervision.

Some types of processed foods contain cholesterol-lowering additives. For example, plant sterols and stanols have been shown to decrease LDL cholesterol level and are added to certain types of orange juice and margarine.

Smoking should be avoided, since it decreases the level of HDL cholesterol. On the other hand, regular exercise increases the level of HDL cholesterol, and so does weight loss (if this is necessary).

A Diagnosis of High Cholesterol

If you are diagnosed with high cholesterol, there is a lot that you can do to improve the situation. You may not even need to take cholesterol-lowering medications, although of course a doctor's recommendations should be sought and followed. You can reduce or remove harmful foods from your diet, add foods known to lower LDL cholesterol, add healthy foods that might increase the HDL type, stop smoking, and get regular exercise. If you are very out of shape, it's important to begin exercising slowly. It's also important to get a health check-up and advice from your doctor before you start an exercise program.

These steps are worth doing. Suitable exercise and diet will help to ensure that cholesterol stays a friend instead of becoming a foe.
Erin Siracusa set out in 2016 to prove what would seem oddly fitting for a year like 2020: Even solitary creatures need friends, especially as they age. But she didn't focus on grizzly bears or mountain lions, animals often known for their elusiveness. She turned her sights on red squirrels.

Just as you likely notice squirrels lecturing and shouting at one another in your backyard, red squirrels in the wild also dislike any other red squirrels. They defend a territory of about 0.3 hectares – slightly more than half a football field – and, aside from mothers and young, only interact physically on the one day a year females go through estrus. All other days, squirrels spend most of their time collecting food in a large midden on the forest floor, defending that midden from marauding squirrels and other potential thieves, and recovering from all that effort in grass nests in trees.

Siracusa, then a graduate student at the University of Guelph in southern Ontario, already knew from her previous work that red squirrels with familiar neighbors spent less time angrily defending their middens. But did that time saved in not fending off attacks result in any measurable benefit? She thought the answer may well be yes. And after three years tracking red squirrels in the southwestern corner of Canada's Yukon territory while also analyzing another 19 years of rigorous data from the same squirrel population, her hypothesis held up.

Red squirrels – despite their apparent disdain for all other red squirrels – actually increased their chances of survival in their final years if they knew their neighbors, she recently published in the journal Current Biology.

"When we think about 'stable social relationships', we often think about close social bonds like 'friendships' but there are also many stable relationships that are more socially distant and competitive, like rival songbirds or squirrels on neighboring territories," explains Gerald Carter, an assistant professor at Ohio State University's Department of Evolution, Ecology and Organismal Biology (who was not involved in this research).

"Similarly, when we think about cooperation, we often think about cooperation with 'friends' but there's also many subtle forms of cooperation that exist between familiar rivals," Carter continues. "For example, you are better off having a familiar rival than an unfamiliar one, because you both can work out ways to minimize harming each other, which is known as the 'dear enemy effect.'"

Basically, when you're a red squirrel, better the devil you know than the devil you don't.

A Trove of Squirrel Data

Siracusa's study uses 22 years' worth of data on thousands of squirrels captured on Champagne and Aishihik First Nations land in the Yukon. The Kluane Red Squirrel Project started in 1987 and continues today with Canadian and U.S. researchers. While more than three decades of research on a population of squirrels in a hard-to-reach spot might seem odd, the project makes quite a bit of sense, Siracusa says.

"Red squirrels provide an ideal model system to ask all kinds of questions about ecology and the questions I'm interested in for social relationships and fitness," she says.

Practically, trees in the Yukon are shorter than in locations farther south, making it easier for researchers to capture, tag and follow the squirrels. Red squirrel middens are easy to locate because they're so large. The population is also about 30 miles from the nearest city or town, giving the least disturbed look at behavior.
Each spring, researchers trap squirrels to verify certain ones still own their territory and note reproductive status. If a female is close to birth, researchers place a radio collar on her neck to track her to her nest. The scientists then crawl up to the creatures' nests, count the number of babies, affix each with numeric and colored ear tags and take DNA samples to determine the father. The process continues each year between May and August.

"Because we do this, we can create a special map of who lives where. We know Bob lives here and Sally, Joe and Jane are Bob's neighbors," she says.

Researchers also recorded information about squirrel vocalizations – often referred to as rattles – to determine how squirrels changed the way they defended their territory depending on their neighbors. Each squirrel neighborhood extends about 130 meters – roughly the furthest distance that rattles can carry.

Less Time Yelling

Armed with information from more than 1,000 individual squirrels, Siracusa and the paper's co-authors determined that squirrels with the same neighbors rattled less. But the effects on longevity were even more interesting.

The chance of survival for any creature tends to diminish as it ages. For squirrels, the effects of aging drop annual survival for a 4-year-old from 68 to 59 percent. If that squirrel knows all its neighbors, its chances of survival actually increase to 74 percent from age 4 to age 5. In effect, while the average lifespan of a red squirrel is 4 years, maintaining neighbors may well help them live longer, Siracusa found. They also can sire almost twice as many pups.

But the effects, surprisingly, weren't tied to squirrels related to one another. Kinship made little difference in the longevity of a squirrel's life, researchers found. The benefits were all based on consistency and familiarity.

Researchers can't say for sure why having the same neighbors year after year provides these benefits, especially when neighbors aren't necessarily helping accomplish anything. One theory is that the health benefits are likely a product of less time spent yelling at intruders and more time spent collecting food, resting, and mating. More food matters everywhere, but especially in a place known for harsh winters.

The information isn't critical for red squirrel conservation – North American red squirrels are considered a species of least concern – but it could have implications in the conservation of other species that humans generally consider "anti-social."

"Some scientists might be skeptical of squirrels having social relationships," says Siracusa, who is now working on her postdoctoral research on macaques at the University of Exeter in the U.K. "They reserve the term social relationship for friendships."

Siracusa argues, however, that "relationships can form between any individual with an interaction with each other, even though squirrels don't physically interact."

It's a point, she says, that seems particularly fitting for humans right now.
When the sample size is relatively small, the normal distribution of variability of a sample mean is actually not a very good approximation to its true distribution. This means that Z-tests are unreliable for small sample sizes. An improvement is Student's t-test. This was designed not by a student but by a brewer going under the pseudonym of Student. The t-test recognises the inaccuracies of smaller samples by having a distribution that becomes wider the lower the sample size. For very large sample sizes, the t-distribution is the same as the normal distribution. A wider test means that differences will have to be greater to have the same probability of error; in other words, false positive errors will be less likely. This is also called being more conservative.

The formulae for determining the t-score take a similar form to those for the Z-scores. In the case of a standard error of the mean plot, the Z-score comparing the means of two samples a and b relates to the difference between the means divided by a standard error term. As we saw earlier, the denominator, the standard error term, when expressed in terms of SD, was:

SE = √(SD_a²/n_a + SD_b²/n_b)

The t-test uses the identical formula. The difference lies in the table that matches the t-score to a probability value. In fact for the t-test, as for the binomial distribution, there is a different table for every value of n. The value used in the table is not quite the number of subjects, but instead what is known as the number of degrees of freedom (df). In this situation, the df is the sum of n for both groups minus 2. Obviously, with all the tables for all the degrees of freedom, computers make a particularly attractive alternative for t-tests.

Pooled Standard Deviations

We mentioned earlier in the proportion comparisons section that sometimes an intermediate calculation is performed to derive a "pooled" proportion. A similar process is used in a t-test to derive a pooled SD; whenever we assume that the population variances, and hence SDs, are equal, we derive this "pooled" SD as the best estimate of the true population SD. We first calculate the two actual SDs for the two samples. A simple way to estimate the population value would be to take the average of the two. A better way would be to average the two corresponding variances, since these are the more fundamental quantities. If the sample sizes are different, it would be better still to take a weighted average, giving more weight to the SD derived from the sample with the greater number of subjects.

So the general form for assuming equal variances, and therefore taking a single SD (denoted SD′) or variance value, is:

t = (μ_a − μ_b)√n / (SD′√2)

(Exactly the same can apply to a Z-score calculation.) If the sample sizes are the same, we just average the variances, so the SD term is:

SD′ = √((SD_a² + SD_b²)/2)

If the sample sizes are different, the weighted average SD makes the whole equation long-winded, so it is expressed in two parts. The overall t-score equation becomes:

t = (μ_a − μ_b) / (SD′√(1/n_a + 1/n_b))

And the SD′ term is:

SD′ = √(((n_a − 1)SD_a² + (n_b − 1)SD_b²) / (n_a + n_b − 2))

The "n − 1" terms are, I think, why the degrees of freedom is used for t-test calculations. If there is a variable measured in two subjects, it can vary only in one dimension, getting further apart or closer together between the two subjects; hence n − 1 = 1. If there are three subjects, there is an extra degree of freedom of variation, like the three points of a triangle being pulled in 2D.
Four points is like a tetrahedron in 3D, and so on. The degrees of freedom is the number of subjects − 1, hence n_a − 1 and n_b − 1. When combining two samples, the total degrees of freedom are added, so it is the number in both groups − 2, hence n_a + n_b − 2.

Finally, if we do not assume equal variances, we have a t-score again in the form of the equivalent Z-score:

t = (μ_a − μ_b − Δ) / √(SD_a²/n_a + SD_b²/n_b)

But unlike for the Z-score, we also need a degrees of freedom term for the t-table, some kind of amalgamated n value. So we use the above formula as is, and calculate the degrees of freedom (df) as below:

df = (SD_a²/n_a + SD_b²/n_b)² / ((SD_a²/n_a)²/(n_a − 1) + (SD_b²/n_b)²/(n_b − 1))

Considering unequal variances was actually beyond Student's remit, and so this version is known as Welch's t-test. The df term above gives a good approximation so that ordinary t-tables can still be used.

Independent vs dependent samples

The type of t-test above, and the Z-tests described earlier, are for two independent samples. In broad terms, this means that the samples are of different subjects. No one subject is in both samples. When measuring bp, one subject cannot be in company Nice and company Nasty! The next section describes what to do with dependent samples.
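To make these formulas concrete, here is a short Python sketch of both versions: the pooled-SD (Student's) t-score and Welch's unequal-variance t-score with its degrees-of-freedom approximation. It is an illustrative implementation rather than anything from the original text; the function names and sample data are invented, and scipy's ttest_ind is used only as a cross-check.

```python
import numpy as np
from scipy import stats

def pooled_t(a, b):
    """Student's two-sample t assuming equal population variances (pooled SD')."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    sa2, sb2 = a.var(ddof=1), b.var(ddof=1)                    # sample variances
    sd_pooled = np.sqrt(((na - 1) * sa2 + (nb - 1) * sb2) / (na + nb - 2))
    t = (a.mean() - b.mean()) / (sd_pooled * np.sqrt(1 / na + 1 / nb))
    return t, na + nb - 2                                      # t-score and df

def welch_t(a, b, delta=0.0):
    """Welch's t with the Welch-Satterthwaite degrees-of-freedom approximation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    va, vb = a.var(ddof=1) / na, b.var(ddof=1) / nb            # SD_a^2/n_a and SD_b^2/n_b
    t = (a.mean() - b.mean() - delta) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (na - 1) + vb ** 2 / (nb - 1))
    return t, df

a = [5.1, 4.9, 6.2, 5.8, 5.5]                                  # made-up sample data
b = [4.2, 4.8, 5.0, 4.4, 4.6, 4.1]

t_eq, df_eq = pooled_t(a, b)
t_w, df_w = welch_t(a, b)
print(t_eq, df_eq, 2 * stats.t.sf(abs(t_eq), df_eq))           # t, df, two-sided p
print(t_w, df_w, 2 * stats.t.sf(abs(t_w), df_w))

# Cross-checks against scipy (equal_var toggles pooled vs Welch):
print(stats.ttest_ind(a, b))
print(stats.ttest_ind(a, b, equal_var=False))
```

Note that the Welch degrees of freedom typically comes out as a non-integer; that is expected, and a standard t-table can still be used with the nearest tabulated value.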
Imagine a cube on which light is projected by a flashlight. The cube reflects the light in a particular way, so simply spinning the cube or moving the flashlight makes it possible to examine each aspect and deduce information regarding its structure. Now, imagine that this cube is just a few atoms high, that the light is detectable only in infrared, and that the flashlight is a beam from a microscope. How to go about examining each of the cube's sides? That is the question recently answered by scientists from the CNRS, l'Université Paris-Saclay, the University of Graz and Graz University of Technology (Austria) by generating the first 3D image of the structure of the infrared light near the nanocube. Their results will be published on 26 March 2021 in Science. Electron microscopy uses an electron beam to illuminate a sample and create an enlarged image. It also provides more complete measurements of physical properties, with unrivaled spatial resolution that can even visualize individual atoms. Chromatem, the Equipex Tempos team's dedicated instrument for spectroscopy, is one of these new generation microscopes. It probes the optical, mechanical, and magnetic properties of matter with very high resolution, one that is matched by only three other microscopes in the world. Scientists from the CNRS and l'Université Paris-Saclay working at the Solid States Physics Laboratory (CNRS/Université Paris-Saclay), along with their colleagues at the University of Graz and Graz University of Technology (Austria), used Chromatem to study a magnesium oxide nanocrystal. The vibration of its atoms creates an electromagnetic field that can only be detected in the mid-infrared range. When the electrons emitted by the microscope indirectly encounter this electromagnetic field, they lose energy. By measuring this energy loss, it becomes possible to deduce the outlines of the electromagnetic field surrounding the crystal. The problem is that this type of microscopy can only provide images in 2D, raising the question of how to visualize all of the cube's corners, edges, and sides. In order to do so, the scientists developed image reconstruction techniques that have, for the first time, generated 3D images of the field surrounding the crystal. This will eventually enable targeting a specific point on the crystal, and conducting localized heat transfers, for instance. Many other nano-objects absorb infrared light, such as during heat transfers, and it will now be possible to provide 3D images of these transfers. This is one avenue of exploration for optimizing heat dissipation in the increasingly small components used in nanoelectronics. More information: Three-dimensional vectorial imaging of surface phonon polaritons. Science (2021). science.sciencemag.org/cgi/doi … 1126/science.abg0330 Journal information: Science Provided by CNRS
This Sociology Factsheet will look at how ethnicity shapes identity and will explore the extent to which it is still an important factor of identity formation. The Factsheet includes Exam Hints to help you to use your knowledge to gain maximum marks, while the activities give you the opportunity to apply what you have learned and will help you develop your skills in this area. Words in bold are explained in the glossary and a reference list is included at the end of the Factsheet.

The examiner will expect you to be able to:
• Demonstrate your understanding of both identity and ethnic identity.
• Demonstrate your understanding of how both primary and secondary socialisation facilitates ethnic identity.
• Demonstrate an understanding of how different minority ethnic groups form an identity.
• Critically evaluate differing theories and research on how ethnicity shapes identity.
• Be confident in incorporating ethnicity in an answer on culture & identity.
STUDY GUIDE IDEAS & PROJECTS FOR RIVETS & THE STUDY OF THE HOMEFRONT EFFORT OF WW2

1. What were the traditional roles of women prior to WW2? How did these roles change once America entered the war?
2. Who was Rosie the Riveter?
3. What is propaganda? How was propaganda used to encourage women to enter the workforce during WW2? When the war was over, how was propaganda used to urge women to return to their homes? What types of propaganda do we see in today's society?
4. Using the web, find stories and pictures of real Rosie the Riveters. What did these women contribute to our society?
5. How did President Roosevelt impact the war effort by signing Executive Order 8802? How did this Order affect African Americans?
6. What was the role of the Kaiser Richmond Shipyards in WW2?
7. How did the war affect the population of the San Francisco Bay Area during WW2?
8. What is rationing? What did Americans give up so that our war effort would be successful?
9. What does having a Blue or Gold Star in the window of the family home mean?
10. Why did Americans have blackouts in WW2? How close did our country's enemies get to America during the war?
11. What discrimination did Japanese Americans experience in WW2?
12. How did the role of women in the military change because of WW2?

Have students interview family members for firsthand recollections of home-front life during WW2. Many will find that their grandparents and great-grandparents have vivid memories of the war.
5 ways chemicals can save the world from climate change

The chemical industry has a vital role to play in developing technological solutions to help save us from climate catastrophe, and could create significant opportunities for global economic development at the same time. Chemical engineers have been working for some time to find and implement ways to combat climate change. Here are 5 ways chemicals can save the world from climate change:

Using unconventional gas (for example shale gas or coal seam gas) is a more environmentally friendly option than existing fossil fuels. Switching from coal to gas, for example, can result in around 50 per cent less carbon dioxide being emitted in power generation. Chemical engineers work to ensure that extraction of unconventional gases is performed to the highest environmental standards.

Ammonia is used to make fertiliser, and the chemical's large-scale production was a major breakthrough in efforts to feed a growing global population. The fertiliser industry is still a big energy consumer, and producing ammonia close to renewable energy sources and agricultural production sites rather than in centralised facilities will be an important way of reducing its carbon footprint. Any sustainable fuel or fertiliser cycle will also have to account for the water supply. Making ammonia (NH3) uses hydrogen, which is present in all high-energy chemicals (fuels) and ultimately requires water for production. The fact that the places where the most solar power can be generated are often those where water is scarce is one of the biggest obstacles to a large-scale roll-out of renewables-based fuel, and needs to be addressed.

Electric vehicles (EVs) burn no gasoline and have no tailpipe emissions, but producing the electricity used to charge them does generate global warming emissions. The amount of these emissions, however, varies significantly based on the mix of energy sources used to power a region's electricity grid. For example, coal-fired power plants produce nearly twice the global warming emissions of natural gas-fired power plants, while renewable sources like wind and solar power produce virtually no emissions at all.

Being more flexible in the way we generate and consume energy will require new forms of energy storage. When we think of storage, batteries are what commonly spring to mind, but other ideas include using the embodied energy in chemicals as stored energy, to be released on demand via chemical conversions. Effective energy storage is a major part of the climate change solution, and chemical engineers can help.

More than half of the world's annual carbon emissions could be prevented over the next 50 years by using sustainable bioenergy. However, the raw materials used in bioenergy production – food crops like maize and sugarcane – come with a lot of associated challenges. Chemical engineers have the technology to use these materials efficiently, and bioenergy production has the potential to be cost effective.

Looking for a chemical engineering course? Why not visit us today to find out more about our engineering courses and chemical courses.
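As a rough illustration of the coal-to-gas comparison above, the hypothetical Python sketch below estimates annual CO2 emissions for a single plant under typical, ballpark emission factors. The factors and the generation figure are assumptions chosen for illustration, not data from the article.

```python
# Rough, illustrative comparison of CO2 emissions per unit of electricity for
# coal- versus gas-fired generation. The emission factors are ballpark values
# commonly quoted for typical plants (kg CO2 per kWh), not measured data.

EMISSION_FACTORS_KG_PER_KWH = {
    "coal": 0.95,         # typical coal plant, approximate
    "natural_gas": 0.45,  # typical combined-cycle gas plant, approximate
}

def annual_emissions_tonnes(fuel: str, annual_generation_mwh: float) -> float:
    """Estimated annual CO2 emissions (tonnes) for a given fuel and output."""
    kg = EMISSION_FACTORS_KG_PER_KWH[fuel] * annual_generation_mwh * 1000.0
    return kg / 1000.0

generation_mwh = 4_000_000  # hypothetical mid-size plant output per year
coal = annual_emissions_tonnes("coal", generation_mwh)
gas = annual_emissions_tonnes("natural_gas", generation_mwh)
print(f"coal: {coal:,.0f} t CO2/yr, gas: {gas:,.0f} t CO2/yr")
print(f"switching saves roughly {100 * (coal - gas) / coal:.0f}% of emissions")
```

With these assumed factors the saving comes out at roughly half, consistent with the "around 50 per cent" figure quoted in the text.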
In this activity, students will be introduced to (or review) the term foreshadowing. Using the set of criteria provided in this activity, the teacher will lead the class in analyzing foreshadowing in a well-known work.

Before the Activity
See the attached Activity PDF file for detailed instructions for this activity. Print the appropriate pages from the Activity for your class. Install the NoteFolio(tm) App on the students' graphing calculators following the attached instructions. The teacher should decide ahead of time whether to have the students take notes on the video while watching it, or whether the class should use the file to record the class analysis of foreshadowing after the analysis is completed. Note that this initial group exercise prepares the students for the next activity, where they will use the same process collaboratively.

During the Activity
Distribute the appropriate pages from the Activity to your class. Distribute the NoteFolio(tm) file(s) to your class using TI Connect(tm) and the appropriate TI Connectivity cable. Follow the procedures outlined in the Activity. Students will: comprehend foreshadowing in literary works and works of popular culture; analyze a short story to identify foreshadowing; develop their own ideas of foreshadowing for a story; and identify and evaluate the literary element of foreshadowing in professional models and peer writing.

After the Activity
Near the end of the class period, the teacher should summarize the process used to analyze foreshadowing in a work. Ask the class if they have any questions, then advise the students that they will be applying the skills practiced in this activity in the next activity.
Ultraviolet (UV) radiation is thought to be the major risk factor for most skin cancers. Sunlight is the main source of UV rays, which can damage the genes in your skin cells. Tanning lamps and beds are also sources of UV radiation. People with high levels of exposure to light from these sources are at greater risk for skin cancer.

Ultraviolet radiation has 3 wavelength ranges:
- UVA rays cause cells to age and can cause some damage to cells’ DNA. They are linked to long-term skin damage such as wrinkles, but are also thought to play a role in some skin cancers.
- UVB rays can cause direct damage to the DNA, and are the main rays that cause sunburns. They are also thought to cause most skin cancers.
- UVC rays don’t get through our atmosphere and therefore are not present in sunlight. They are not normally a cause of skin cancer.

UVA and UVB rays make up only a very small portion of the sun’s wavelengths, but they are the main cause of the damaging effects of the sun on the skin. UV radiation damages the DNA of skin cells. Skin cancers begin when this damage affects the DNA of genes that control skin cell growth. Both UVA and UVB rays damage skin and cause skin cancer. UVB rays are a more potent cause of at least some skin cancers, but based on current knowledge, there are no safe UV rays. The amount of UV exposure depends on the strength of the rays, the length of time the skin is exposed, and whether the skin is protected with clothing or sunscreen.

Skin cancers are one result of getting too much sun, but there are other effects as well. The short-term results of unprotected exposure to UV rays are sunburn and tanning, which are signs of skin damage. Long-term exposure can cause prematurely aged skin, wrinkles, loss of skin elasticity, dark patches (lentigos, sometimes called age spots or liver spots), and pre-cancerous skin changes (such as dry, scaly, rough patches called actinic keratoses). The sun’s UV rays also increase a person’s risk of cataracts and certain other eye problems and can suppress the skin’s immune system. Dark-skinned people are generally less likely to get skin cancer than light-skinned people, but they can still get cataracts and suppression of the skin’s immune system.

The UV Index
The amount of UV light reaching the ground in any given place depends on a number of factors, including the time of day, time of year, elevation, and cloud cover. To help people better understand the intensity of UV light in their area on a given day, the Environmental Protection Agency (EPA) and the National Weather Service have developed the UV Index. The UV Index number, on a scale from 1 to 11+, is a measure of the amount of UV radiation reaching the earth’s surface during an hour around noon. The higher the number, the greater the exposure to UV rays. The UV Index is given daily for regions throughout the country. Many newspaper, television, and online weather forecasts include the projected UV Index for the following day. Further information about the UV Index, as well as your local UV Index forecast, is available on the EPA’s web site at www.epa.gov/sunwise/uvindex.html. As with any forecast, local changes in cloud cover and other factors may change the actual UV levels experienced.
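The 1 to 11+ scale described above maps onto standard exposure categories. The small helper below is only an illustration of that mapping, using the category bands commonly published by the EPA; treat the exact boundaries as assumptions to verify against the EPA site linked above.

```python
# Illustrative helper only: maps a UV Index value to the exposure category
# bands commonly published by the EPA (verify the boundaries against epa.gov).
def uv_exposure_category(uv_index: float) -> str:
    if uv_index < 0:
        raise ValueError("UV Index cannot be negative")
    if uv_index <= 2:
        return "low"
    if uv_index <= 5:
        return "moderate"
    if uv_index <= 7:
        return "high"
    if uv_index <= 10:
        return "very high"
    return "extreme"  # 11+

for value in (1, 4, 7, 9, 11):
    print(value, uv_exposure_category(value))
```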
How Does A Millimeter-Sized Cell Find Its Center?

When mitosis occurs, the nucleus is at the center of the cell, and the two daughter cells split symmetrically. But how does the nucleus find the center of the cell? While there seem to be reasonable explanations for small and intermediate-sized cells, the mechanism for large (millimeter-sized) cells is unclear. This paper proposes a few possibilities.

The paper includes immunostaining images of the egg of the clawed frog Xenopus laevis during its first cell cycle after fertilization. At time t=0 the sperm enters the egg, and at time t=1 the first cleavage occurs. The sperm, upon entering the cell, carries with it a centrosome from which a radial network of microtubules grows. This network, called the sperm aster, somehow moves the centrosome towards the center of the cell. When a microtubule touches the female nucleus, the nucleus is somehow pulled to the center of the aster. Thus, by the end of the sperm aster's growth, the genetic material of both the sperm and the egg is at the center of the cell. The sperm aster then breaks down and the mitotic spindle is formed. Two new asters of microtubules then grow and move towards the centers of the future daughter cells, bringing the genetic material of the daughter cells with them. This paper studies how the asters are able to locate the center of the cell.

Finding the Center: Proposed Models
The paper discusses four possible methods by which the asters can find the center of the cell.
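The four models themselves are not described in this excerpt, but one class of mechanisms often invoked for aster centering is length-dependent pulling: if each microtubule is pulled with a force proportional to its length, the net force vanishes only at the cell center. The toy simulation below is a hypothetical 1D sketch of that idea, with made-up parameters, and is not taken from the paper.

```python
# Hedged sketch of one commonly discussed centering mechanism (not necessarily
# one of the four models in the paper): if each astral microtubule is pulled
# with a force proportional to its length, the net force on the aster points
# toward the cell center, so the aster drifts there. 1D toy model, arbitrary
# units, overdamped dynamics.

cell_length = 1.0      # cell spans [0, 1]
position = 0.15        # aster starts near one end (e.g., the sperm entry point)
mobility = 1.0         # drift velocity per unit net force
pull_per_length = 1.0  # pulling force per unit microtubule length
dt = 0.01

trajectory = [position]
for _ in range(500):
    left_length = position                 # microtubule reaching the left cortex
    right_length = cell_length - position  # microtubule reaching the right cortex
    # Length-dependent pulling: the longer side pulls harder.
    net_force = pull_per_length * (right_length - left_length)
    position += mobility * net_force * dt
    trajectory.append(position)

print(f"final aster position: {trajectory[-1]:.3f} (cell center is at 0.500)")
```

In this overdamped toy model the aster's position relaxes exponentially toward the midpoint regardless of where the sperm enters, which is the qualitative behavior a centering mechanism must produce.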
By Roberta Attanasio, IEAM Blog Editor

The influence of climate change on the spread of infectious diseases is a topic that generates intense debate, mostly because these effects depend on a variety of intertwined, variable factors – wealth of nations, healthcare infrastructure, availability of vaccines and drugs as well as ability to control vectors such as mosquitoes, ticks, snails, and others. Vector control is, indeed, one of the most important measures included in the existing global strategy to fight infectious diseases.

Mosquitoes transmit – among other infectious pathogens – the West Nile virus, which was first detected in New York City in 1999. It spread rapidly throughout the continental United States. Outbreaks of West Nile virus fever and encephalitis now occur across the nation each summer, when mosquitoes are most numerous and active. The virus moves from birds to mosquitoes and from mosquitoes to birds. Infected female mosquitoes spread the virus to a variety of hosts – including humans – when taking a blood meal. Thus, the best way to prevent West Nile virus infection is to control mosquito populations through integrated pest management programs that incorporate vigilant surveillance, habitat reduction, pesticides, and public education.

The use of pesticides to prevent disease outbreaks in humans is often necessary to reduce the number of biting female mosquitoes, especially during the summer months. These pesticides are sprayed from hand-held application devices, trucks, or aircraft to surface water where mosquitoes breed, which raises concerns about their potential toxicity to aquatic organisms immediately after spraying events, and brings up the need to balance risks to environmental and human health. How can we determine whether or not these concerns are justified? Results from a new study published in Integrated Environmental Assessment and Management (March 21, 2014) help to answer this question.

During the summers of 2011 and 2012, Bryn Phillips, at the University of California, Davis, and his collaborators monitored water column and sediment samples in California to determine the environmental effects of pesticide applications. The goal of the study, “Monitoring the aquatic toxicity of mosquito vector control spray pesticides to freshwater receiving waters,” was to determine whether or not toxicity testing could contribute important information for evaluating the impact of pesticides on aquatic systems, as compared to the analysis of the active ingredient alone.

Phillips and collaborators performed a combination of aquatic toxicity tests and chemical analyses on pre- and post-application samples from agricultural, urban, and wetland habitats. They monitored a variety of pesticides used for mosquito control: the organophosphate pesticides malathion and naled; the pyrethroid pesticides etofenprox, permethrin and sumithrin; pyrethrins, and piperonyl butoxide. The study results show that about 15% of the water samples collected following application of pesticides were significantly toxic, in many cases because of dichlorvos, a breakdown product of naled. In other cases, toxicity was likely caused by synergism between piperonyl butoxide and pyrethroid pesticides. These results emphasize the importance of toxicity testing for the detection of effects due to breakdown products and synergism.
In their article, the investigators conclude “Toxicity testing can provide useful risk information about unidentified, unmeasured toxicants, or mixtures of toxicants.” Overall, however, the results indicate that many of the spray pesticides examined do not pose a significant toxicity risk to invertebrate animals in aquatic habitats.

Interestingly, the study was prompted by the California State Water Resources Control Board in cooperation with the Mosquito Vector Control Association of California and in the context of the Clean Water Act, following the adoption of a National Pollutant Discharge Elimination System General Permit. However, the final permit decision (summarized on page 2, number 8, of the State Water Resources Control Board Amended Monitoring and Reporting Program) states that: “In lieu of water quality monitoring for these active ingredients, the amended MRP requires reporting of corresponding application rates and incidents of noncompliance.” In a telephone interview, Phillips said, “While the study was designed to determine the potential environmental effects of spray pesticide applications, it was clear, based on the final permit decision, that the policy makers were trying to strike a balance between human and environmental health.”

We need pesticides to protect human populations from West Nile virus. At the same time, we want to protect human populations and aquatic systems from the damaging effects of pesticides. The balance between these factors is likely to be influenced by climate change, where temperature shifts and altered patterns of precipitation directly affect mosquito populations, even in drought conditions.

Warmer temperatures and prolonged periods of drought can actually boost mosquito populations. Indeed, warmer temperatures increase the rate of mosquito development from egg to larva to adult, as well as the rate of viral replication in infected insects. In addition, mosquitoes feed more often in warmer temperatures, thus increasing the spread of the virus. Pools of standing water, so often found in periods of drought, provide breeding grounds, and the lack of rainfall ensures that developing mosquitoes are not washed away. It has been said that the chance of a West Nile virus outbreak hinges on a fine line between drought and rainfall.

We can expect an increased use of pesticides when the right conditions for mosquito spread occur. Hopefully, additional toxicological studies will be carried out to identify the still undetected, environmentally harmful breakdown products and synergistic activities of pesticides – prompting a reassessment of water quality monitoring. This information is necessary to properly guide policy in a constantly changing environmental landscape subjected to temperature increases and varying precipitation patterns.
On this day in 1940, Franklin Delano Roosevelt is re-elected for an unprecedented third term as president of the United States. Roosevelt was elected to a third term with the promise of maintaining American neutrality as far as foreign wars were concerned: “Let no man or woman thoughtlessly or falsely talk of American people sending its armies to European fields.”

But as Hitler’s war spread, and the desperation of Britain grew, the president fought for passage of the Lend-Lease Act in Congress in March 1941, which committed financial aid to Great Britain and other allies. In August, Roosevelt met with British Prime Minister Winston Churchill to proclaim the Atlantic Charter, which would become the basis of the United Nations; they also drafted a statement to the effect that the United States “would be compelled to take countermeasures” should Japan further encroach in the southwest Pacific.

Despite ongoing negotiations with Japan, that “further encroachment” took the form of the Japanese bombing of Pearl Harbor, which Roosevelt called “a date which will live in infamy.” The next day, Roosevelt requested, and received, a declaration of war against Japan. On December 11, Germany and Italy declared war on the United States.

Certain wartime decisions by Roosevelt proved controversial, such as the demand for unconditional surrender of the Axis powers, which some claim prolonged the war. Another was the concession of certain territories in the Far East to Joseph Stalin in exchange for his support in the war against Japan. Roosevelt is often accused of being too naive where Stalin was concerned, especially in regard to “Uncle Joe’s” own imperial desires.