Causes and Risk Factors

Gender and Age

Leukemia does not affect all populations equally. For instance, men are more likely to develop leukemia than women, and older people are typically at higher risk than younger people. ALL is an exception to this trend (ie, ALL is more common in children) (Siegel 2011; Siegel 2012; Siegel 2013). Leukemia accounts for almost one-third of all cancers diagnosed in children from birth to 14 years, and over 75% of these childhood leukemias are ALL. Leukemia is the leading cause of cancer death among men under age 40 (Siegel 2013).

Genetics and Family History

There is strong evidence for a genetic component to some types of leukemia. People with at least one sibling affected by a hematological (blood-related) cancer have a 2.3-fold increased risk of developing leukemia, and individuals reporting at least one sibling with leukemia showed three times the risk of developing CLL (Pottern 1991). In a study of twins, those with an identical twin affected by leukemia had a greatly increased chance of developing leukemia themselves (Kadan-Lottick 2006). A family history of other types of cancer may be a risk factor for adult leukemia as well (Poole 1999; Wang, Lin 2012).

Certain genetic abnormalities, such as Down syndrome, are associated with leukemia. Studies suggest that children with Down syndrome have an almost 20-fold greater risk of developing leukemia than the general population; in this group, the highest incidence is observed in children less than 5 years of age (Ross 2005). Among certain types of leukemia, patients with specific genetic abnormalities have an increased risk of developing resistance to therapy, and possibly a greater chance of relapsing after remission (Meijerink 2009; Estey 2010; Medeiros 2010). One example is a monosomal karyotype, a chromosomal abnormality that is a strong predictor of drug resistance and poor prognosis in patients with AML (Estey 2010; Medeiros 2010).

Alterations in specific genes play a role in some types of leukemia as well. For example, the FLT3 gene is important for normal growth and development of blood cell precursors, and the protein it encodes is not normally present in large amounts in mature blood cells (Karsunky 2003). However, mutations in the FLT3 gene are common in AML and are a strong predictor of poor prognosis (Kottaridis 2001; Gilliland 2002).

Patients with certain pre-existing blood disorders may be at increased risk of developing some forms of leukemia. For instance, there is a higher risk of developing AML in people with chronic myeloproliferative disorders, including polycythemia vera, essential thrombocythemia, and idiopathic myelofibrosis. There is additional risk when treatment for these conditions includes some types of chemotherapy or radiation (ACS 2013a).

Smoking cigarettes increases the risk of developing leukemia, particularly adult AML (Brownson 1993; Thomas 2004; Musselman 2013). In one study, the risk of leukemia increased with the number of cigarettes smoked per day (Brownson 1993). Even children exposed to parental cigarette smoke prenatally or after birth appear to have an increased risk of childhood ALL (John 1991; Chang 2006). However, a 2013 study revealed encouraging evidence that leukemia risk decreases with increased time after quitting; in long-term quitters (30 years or more), the risk was comparable to that of non-smokers (Musselman 2013).
Exposure to certain chemicals has been found to elevate the risk of developing leukemia. For example, long-term exposure to benzene, a constituent of crude oil and a known cancer-causing agent, increases the risk of leukemia (Snyder 2012; Li 2014; Yin 1996; Savitz 1997; Hayes 2001; ACS 2013b). Benzene is used as a starting material to make a wide variety of substances, including plastics and pesticides. It is a common chemical in the environment throughout the United States, with high levels in the vicinity of gasoline stations and some industrial facilities, in vehicle exhaust, and in secondhand tobacco smoke (Brugnone 1997). People who work in chemical plants, oil refineries, and gasoline-related industries may therefore be exposed to high benzene concentrations (ACS 2013b). A major source of benzene exposure is tobacco smoke. Estimates suggest that benzene in cigarettes is responsible for about one-third of smoking-induced AML (Korte 2000; Vineis 2004).

Formaldehyde is another environmental chemical potentially linked to leukemia. Formaldehyde is generated by automobile engines, is a component of tobacco smoke, and is released from household products, including furniture, particleboard, plywood, and carpeting (Zhang 2009). The association between formaldehyde exposure and leukemia risk is controversial, however, as some studies support the link (Schwilk 2010) while others do not (Checkoway 2012; Gentry 2013).

Agent Orange, a chemical defoliant to which many soldiers were exposed during the Vietnam War, has also been associated with increased leukemia risk (Yi 2013; Baumann Kreuziger 2014). Parental pesticide exposure near the time of conception or during pregnancy has been associated with childhood leukemia, as has childhood exposure (Turner 2010; Turner 2011; Ferreira 2013). Among adults, living on or near a farm has been linked with a greater risk of developing or dying from leukemia, possibly as a consequence of increased agricultural pesticide exposure (Viel 1991; Jones 2014). Exposure to multiple pesticides may exacerbate the risk of malignancy, as experimental evidence shows that mixtures of pesticides at low concentrations can damage DNA to a similar extent as higher concentrations of single pesticides alone (Das 2007).

Exposure to high-energy (ionizing) radiation (eg, atomic bomb explosions) is linked to leukemia. For example, there was a dramatic increase in leukemia risk among Hiroshima and Nagasaki atomic bomb survivors soon after the bombings (Hsu 2013; Preston 1994), and in one study the excess risk persisted, especially for AML, even 55 years after the bombings (Hsu 2013). Exposure to lower doses of radiation from post-Chernobyl cleanup work has also been associated with a significant increase in the risk of leukemia (Zablotska 2013). Some evidence suggests that low-dose radiation, such as from diagnostic X-rays, may be associated with increased leukemia risk as well. In one study, children who had undergone X-ray examinations were at increased risk of childhood leukemia (Shih 2014). Another study found similar results: children who had been exposed to post-natal X-ray examinations for diagnostic purposes had an increased risk of childhood ALL (Bartley 2010). Computed tomography (CT) scans also appear to increase cancer and leukemia risk in children, though it is thought that the diagnostic benefits generally outweigh the risks (Pearce 2012).
Older CT scanning technology resulted in higher radiation exposure, though current CT scans are believed to still carry some degree of risk (Mathews 2013). Thus, non-radiative diagnostic measures in children, when appropriate, are considered preferable (Miglioretti 2013; Knusli 2013; ARSPI 2014). The possibility of a relationship between living in close proximity to high-voltage electricity lines and the risk of childhood leukemia has been studied for decades (Washburn 1994). This remains a contentious issue (Magana Torres 2013; Clavel 2013); some studies failed to find a relationship (Pedersen 2014), while other studies found an association ranging from significant (Washburn 1994; Sohrabi 2010) to not achieving statistical significance (Sermage-Faure 2013). Those that found a relationship suggested risk increased markedly with closer proximity to such power lines (Sidaway 2013; Roosli 2013).

Previous Cancer Treatment

Aggressive chemotherapy and radiation therapy can improve outcomes for many cancer patients. Unfortunately, high-dosage treatment regimens also significantly increase the risk of developing a subsequent leukemia (Huh 2013; Pedersen-Bjergaard 2000; Morton 2013; Kaplan 2011). Therapy-related myelodysplastic syndromes are a major complication among patients treated for previous blood-related malignancies or solid tumors (Leone 2007; Leone 2010; Zompi 2002). A 2013 study examined the records of over 426,000 US cancer patients who were treated with chemotherapy for a primary tumor between 1975 and 2008. Among this group, a subsequent diagnosis of AML was 4.7 times more common than expected in the general population (Morton 2013). A similar study examined the records of over 5700 breast cancer patients diagnosed between 1990 and 2005. A 10.9-fold increased risk of MDS was found in breast cancer patients under age 65, and a 5.3-fold increased risk of AML was noted in the same population. The risk was higher for those who received a combination of chemotherapy and radiation compared to those who received chemotherapy or radiation alone (Kaplan 2011).

Studies indicate that childhood cancer survivors are at an increased risk of developing malignancies in later years compared to peers who did not have childhood cancer. In one study, which followed more than 4800 childhood cancer survivors for 14.5 years, children who survived more than five years after having AML had a 3.9-times greater risk of a secondary cancer, and those who survived more than five years after having ALL had a 4.3-times greater risk, compared with the general population (Perkins 2013; Joh 2011). Overall, the risk of developing leukemia is increased as a result of cancer therapy. Thus, lifelong surveillance is recommended for cancer survivors (Vega-Stromberg 2003).

Chemotherapeutic drugs such as alkylating agents, nitrosoureas, procarbazine (Matulane), and topoisomerase II inhibitors have considerable potential to increase the risk of leukemia. Exposure to large, cumulative doses of alkylating agents is a prominent risk factor for leukemia secondary to chemotherapy (Leone 2010; Pedersen-Bjergaard 2000). In addition, some supportive agents administered along with rigorous chemotherapy have also been found to increase the risk of leukemia. For example, colony stimulating factor (CSF) use among elderly patients with non-Hodgkin’s lymphoma undergoing chemotherapy has been associated with an increased risk of developing leukemia (Gruschkus 2010).
Human T-cell lymphotropic virus type 1 (HTLV-1), the first retrovirus shown to cause human malignancy, was identified as the causative agent of adult T-cell leukemia-lymphoma (ATLL) (Yoshida 1982; Beltran 2009; Zane 2014). HTLV-1 mainly affects specialized immune T cells (CD4+ T cells), causing infected T cells to transform into cancerous cells (Satou 2013). In addition, multiple studies have shown a correlation between maternal and childhood infections and subsequent risk of developing childhood leukemia. Some studies have also found a significantly increased risk of childhood leukemia in children whose mothers experienced infections during pregnancy (Sadrzadeh 2012). For example, children whose mothers experienced a reactivation of the Epstein-Barr virus during pregnancy were found to have an almost three-fold greater risk of developing ALL in one study (Lehtinen 2003).
From Seed to Pumpkin is a great story that will answer kids' questions about where pumpkins come from, and it integrates science with a seasonal theme. The story begins with the farmer planting seeds in the spring and takes you through an entire year in the life of a pumpkin. Throughout the story there are explanations describing how the seeds are growing and what the needs of the plant are. It explains how flowers bloom on the vines and, after they wither away, turn into tiny fruits that begin to grow. The book also explains some uses of pumpkins, as jack-o’-lanterns and for pumpkin pie for fall holidays. At the end of the story it is spring again and the farmer is out planting more pumpkin seeds. This book will give children a great understanding of the needs of plants and how they grow throughout the seasons of the year. This book is a great tool to use for a unit on plants and living things, pumpkins, or Halloween. It discusses the needs of plants (air, water, light, a place to grow) and the parts of the plant (seed, stem, roots, leaves, flower buds). For Virginia, this covers Life Processes SOLs K.6 and 1.4.
- First School has a good worksheet where students can practice the letter P and sequencing with a pumpkin unit.
- Busy Teacher Cafe has a great pumpkin unit and different ways you can use pumpkins in math, science, and writing. It has ideas for bulletin boards, crafts, and even some pumpkin poems.
- The Pumpkin Circle has a great informational page that answers lots of questions about pumpkins, from how to grow them to when they should be picked.
- Education World has a good lesson on making predictions with pumpkins, where kids get to count the seeds of different-size pumpkins and then graph the results.
DNA (deoxyribonucleic acid) is the nucleic acid located in the nucleus of a cell. It contains the genetic information that controls all cell activities and has the unique ability to replicate itself. Because it contains the directions for assembling the components of the cell, DNA is often thought of as the "instruction book" for assembling life. DNA is organized into structures called chromosomes. At various points along the molecule, stretches of DNA are grouped into functional regions called genes. Genes can encode a protein or regulate the expression of other genes. Every cell contains the entire complement of DNA, which, in humans, consists of more than 6 billion base pairs. The chemical structure of DNA was discovered in 1953 by James Watson and Francis Crick, who concluded that DNA is a double helix: a twisted ladder with two phosphate-based backbones whose rungs are formed by paired molecules called nucleotides. Nucleotides attach in pairs (base pairs), with adenine (A) pairing with thymine (T) and guanine (G) with cytosine (C). The long sequence formed by these four bases is, essentially, the genetic code. Shorter sequences of the code are transcribed into RNA and translated into sequences of amino acids, the building blocks of proteins.
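Because A always pairs with T and G with C, either strand of the helix determines the other. A minimal sketch in Python of that pairing rule, ignoring strand directionality (real complementary strands run antiparallel):

```python
# Base-pairing sketch: A<->T and G<->C, so one strand determines its partner.
# Directionality (5'->3' vs 3'->5') is ignored here for simplicity.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the base-paired partner of a DNA strand."""
    return "".join(PAIR[base] for base in strand)

print(complement("ATGGCATTC"))  # -> TACCGTAAG
```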
New Evidence of Ice Age Comet Found in Ice Cores A new study cites spikes of ammonium in Greenland ice cores as evidence for a giant comet impact at the end of the last ice age, and suggests that the collision may have caused a brief, final cold snap before the climate warmed up for good. In the April Geology, researchers describe finding chemical similarities in the cores between a layer corresponding to 1908, when a 50,000-metric-ton extraterrestrial object exploded over Tunguska, Siberia, and a deeper stratum dating to 12,900 years ago. They argue that the similarity is evidence that an object weighing as much as 50 billion metric tons triggered the Younger Dryas, a millennium-long cold spell that began just as the ice age was losing its grip (SN: 6/2/07, p. 339). Precipitation that fell on Greenland during the winter after Tunguska contains a strong, sharp spike in ammonium ions that can’t be explained by other sources such as wildfires sparked by the fiery explosion, says study coauthor Adrian Melott, a physicist at the University of Kansas in Lawrence. The presence of ammonium suggests that the Tunguska object was most likely a comet, rather than an asteroid or meteoroid, Melott says. Any object slung into the Earth’s atmosphere from space typically moves fast enough to heat the surrounding air to about 100,000° Celsius, says Melott, so hot that the nitrogen in the air splits and links up with oxygen to form nitrates. And indeed, nitrates are found in snow around the Tunguska blast. But ammonium, found along with the nitrates, contains hydrogen that most likely came from an incoming object rich in water — like an icy comet. More than a century after the impact, scientists are still debating what kind of object blew up over Tunguska in 1908. They also disagree about whether an impact or some other climate event caused the Younger Dryas at the end of the ice age. But the presence of ammonium in Greenland ice cores at both times is accepted. “There’s a remarkable peak of ammonium ions in ice cores from Greenland at the beginning of the Younger Dryas,” comments Paul Mayewski, a glaciologist at the University of Maine in Orono who was not involved in the new study. The new findings are “a compelling argument that a major extraterrestrial impact occurred then,” he notes. Whenever a comet strikes Earth’s atmosphere, it leaves behind a fingerprint of ammonium, the researchers propose. Immense heat and pressure in the shock wave spark the creation of ammonia, or NH3, from nitrogen in the air and hydrogen in the comet. Some of the ammonium, or NH4+, ions generated during subsequent reactions fall back to Earth in snow and are preserved in ice cores, where they linger as signs of the cataclysmic event. Although an impact big enough to trigger the Younger Dryas would have generated around a million times more atmospheric ammonia than the Tunguska blast did, the concentrations of ammonium ions in the Greenland ice of that age aren’t high enough. But the relative dearth of ammonium in the ice might simply be a result of how the ice cores were sampled, Melott and his colleagues contend. Samples taken from those ice cores are spaced, on average, about 3.5 years apart, and ammonia could have been cleansed from the atmosphere so quickly that most of the sharp spike might fall between samples.
Decision-based learning (DBL) is a teaching method that organizes instruction around the conditional knowledge that guides experts’ decision-making processes. Briefly, conditional knowledge is knowing “when or under what conditions” to apply procedures and concepts (Bransford, Brown, Cocking, & Center, 2000). In short, DBL is organized around a functional sequence rather than a logical sequence. For experts (which includes most instructors), their recognition of conditions has become so automatic as to seem intuitive. This phenomenon has become known as the "expert blind spot" (Cardenas, West, Swan, & Plummer, 2020). Consequently, this essential knowledge remains invisible to students in most forms of instruction. However, conditional knowledge is essential for successfully analyzing situations and selecting an appropriate course of action. Conditional knowledge is also a necessary foundation for well-developed conceptual understanding (Swan, Plummer, & West, 2020). DBL seeks to reveal this conditional knowledge. Using a form of cognitive task analysis, an expert breaks down decisions they make based on the conditions in a real-world problem/artifact/scenario. This process serves to classify the problem and, therefore, signal a correct/appropriate/optimal action for the given situation. The decision-making process can be structured as a series of questions (decisions) with possible responses. Decisions lead to a culminating action or resolution. The result is an expert decision model (EDM), which can be represented visually (Plummer, Swan, & Lush, 2017). An EDM may be linear, branching, or looping, or may exhibit a combination of these patterns (for example, see Figure 1).

Figure 1. Portion of an Expert Decision Model (EDM) Used in a Basic Statistics Course

An EDM should focus on a single learning outcome (i.e., culminating action) and the object of analysis for that learning outcome (e.g., problem). For example, Plummer, Kebritchi, Leary, and Halverson (2022) describe several culminating actions as follows: At the end of each decision path is a culminating action or decision. For example, in a chemistry course, the culminating action at the end of their decision model was to determine if the correct technique had been located to solve a heat and enthalpy problem (Sansom, Suh, & Plummer, 2019). In a qualitative inquiry course, the culminating action was to determine the credibility of a published qualitative study (Owens & Mills, 2021). Finally, in a mechanical engineering course, the culminating action was to determine the design and performance of a machine element (Nelson, 2021). (p. 5) It should be noted that a given learning outcome, and therefore an EDM, includes a range of problem types. These problem types share many characteristics but also have defining characteristics that make them distinct. For example, heat and enthalpy are two high-level problem types which themselves contain further problem types. The more closely related two problem types are, the more characteristics they share, until there may be only one distinguishing characteristic between a problem type and its nearest sibling(s). Given a real-world problem or scenario, students navigate a series of stepwise decision points, learning how to reason through a scenario leading to an appropriate culminating action. Instruction occurs at each decision point, focusing on how to identify the defining conditions in the given problem for the current decision. Instruction should be limited to what is essential to make that specific decision.
We refer to this as just-enough, just-in-time instruction. The concise nature of this instruction helps students focus on and separate the defining condition for that decision from other sibling or cosmetic conditions in the scenario. Initially, learners may have difficulty distinguishing cosmetic conditions from defining conditions. With sufficient repetition, learners develop the ability to distinguish defining conditions that lead to resolution of the problem. To provide sufficient repetition, a robust bank of multiple problems for each problem type is ideal. One way to quickly create problems is to keep the same cosmetic conditions and alter the defining conditions to account for each problem type. Finally, DBL includes frequent, interleaved assessment without the aid of the EDM. Initially, instruction is highly scaffolded by the EDM and associated instruction. However, students tend to over-rely on the model unless they are required to perform without scaffolding. Frequent, low-stakes assessments that require performance without the model are essential to prompt students to internalize their learning. In this way, students begin to develop a functional schema of the domain. With practice, DBL helps students begin to conceptualize individual real-world situations as instances of a problem type. In other words, they begin to generate a functional schema allowing them to independently apply their learning in real-world situations. Further, with conditional knowledge as the organizing principle, students have an opportunity to see how conditions have patterns that invoke relevant concepts and procedures. As they delve deeper, this framework also helps students understand the boundaries and application of underlying theories, principles, and concepts of the domain.

Bransford, J., Brown, A., Cocking, R., & Center, E. R. I. (2000). How People Learn: Brain, Mind, Experience, and School (2nd ed.). Washington, D.C.: National Academy Press.
Cardenas, C., West, R. E., Swan, R. H., & Plummer, K. J. (2020). Modeling Expertise through Decision-based Learning: Theory, Practice, and Technology Applications. Revista de Educación a Distancia (RED), 20(64). doi:10.6018/red.408651
Nelson, T. G. (2021). Exploring Decision-Based Learning in an Engineering Context. In N. Wentworth, K. J. Plummer, & R. H. Swan (Eds.), Decision-Based Learning: An Innovative Pedagogy that Unpacks Expert Knowledge for the Novice Learner (pp. 55-65). Emerald Publishing Limited.
Owens, M. A., & Mills, E. R. (2021). Using Decision-Based Learning to Teach Qualitative Research Evaluation. In N. Wentworth, K. J. Plummer, & R. H. Swan (Eds.), Decision-Based Learning: An Innovative Pedagogy that Unpacks Expert Knowledge for the Novice Learner (pp. 93-102). Emerald Publishing Limited.
Plummer, K. J., Kebritchi, M., Leary, H. M., & Halverson, D. M. (2022). Enhancing Critical Thinking Skills through Decision-Based Learning. Innovative Higher Education, 47(4), 711-734. doi:10.1007/s10755-022-09595-9
Plummer, K. J., Swan, R. H., & Lush, N. (2017). Introduction to Decision-Based Learning. Paper presented at the 11th International Technology, Education and Development Conference, Valencia, Spain.
Sansom, R. L., Suh, E., & Plummer, K. J. (2019). Decision-Based Learning: "If I Just Knew Which Equation To Use, I Know I Could Solve This Problem!". Journal of Chemical Education, 96(3), 445-454. doi:10.1021/acs.jchemed.8b00754
Swan, R. H., Plummer, K. J., & West, R. E. (2020).
Toward functional expertise through formal education: Identifying an opportunity for higher education. Educational Technology Research & Development, 68(5), 2551-2568. doi:10.1007/s11423-020-09778-1
Gyasi Ross has an excellent article up about Crispus Attucks and the shared narrative between Black people and Indigenous people. Since white colonization of this continent, Black and Native lives have always been valued less than other people's. The story of Crispus Attucks was an early illustration of how there always seems to be a reason why black and Native people get killed that somehow exonerates the authorities of guilt when they harm us. “Self-defense.” But, perhaps most importantly, the story of Crispus Attucks is about combined Native and black lineages that resisted, suffered, but through that resistance caused a revolution. The year was 1770 and the scene was the Massachusetts Colony. Boston was hot with anger and resentment toward England. 150 years after Pilgrims originally occupied the homelands of the Wampanoag people, the descendants of those Pilgrims felt like they were losing control of the land they called “home.” At that time slavery was legal in the Massachusetts Colony—white colonists enslaved Natives and blacks alike in Massachusetts. For example, in 1638 during the so-called “Pequot Wars,” white colonists enslaved a group of Pequot women and children. However, most of the men and boys were deemed too dangerous to keep in the colony, so white colonists transported them to the West Indies on the ship Desire and exchanged them for African slaves. Crispus Attucks was both Indigenous and black and a product of the slave trade. He was brilliant in the survival skills that are common and necessary amongst both Indigenous people and black people since the brutal regime of white supremacy came to power on Turtle Island. His mother’s name was Nancy Attucks, a Wampanoag Native who came from the island of Nantucket. The word “attuck” in the Natick language means deer. His father, Prince Yonger, was born in Africa and brought to America as a slave. Attucks was himself born a slave. But he was not afraid to actively seek his own (or others’) liberation. For example, he escaped from his slave master and was the focus of an advertisement in a 1750 edition of the Boston Gazette in which a white landowner offered to pay 10 pounds for the return of a young runaway slave. “Ran away from his Master, William Brown of Framingham, on the 30th of Sept. last, a Molatto Fellow, about 27 Year of age, named Crispas, 6 Feet two Inches high, short curl’d Hair…” Attucks was not going back though—he never did. He spent the next two decades on trading ships and whaling vessels. The story of Crispus Attucks is powerful. Native and black people have been facing the same tribulations and common enemies for a very long time. For most of the time since white people have been on this continent, black folk and Native folk have had no choice but to work together, and they have. If we look at statistics today—from expulsion and suspension from schools, to blacks and Natives going to prison, to getting killed by law enforcement—not a lot has changed. We still share very common narratives and need each other. We still need to work together.
This week we read The Three Little Pigs, and lots of our learning for the week, as well as Drawing Club, was based around this story. The children built houses using different building materials, and we then used these to create our own map showing the pigs' homes and the surrounding area. The children then explored how to programme our Bee-Bots to move around the map. On Welly Wednesday we continued to look at the signs of winter outside, and the children collected fallen twigs to make their own 'house of sticks'. In maths we continued using part-part-whole models to add, and started to talk about the size of objects using the language bigger, smaller, taller, shorter. For homework, please use language relating to size to compare things at home, such as toys and people's heights. We would also like your child to design their own house for the pigs using any material they would like, or a combination of materials. This could be made practically, or the design could be drawn; encourage your child to write labels for the different parts of the house. Thank you for your continued support.
Dendritic cells (DCs) are immune cells and form part of the mammalian immune system. Their main function is to process antigen material and present it on the surface to other cells of the immune system, thus functioning as antigen-presenting cells. Dendritic cells are present in small quantities in tissues that are in contact with the external environment, mainly the skin (where they are often called Langerhans cells) and the inner lining of the nose, lungs, stomach and intestines. They can also be found in an immature state in the blood. Once activated, they migrate to the lymphoid tissues where they interact with T cells and B cells to initiate and shape the adaptive immune response. At certain development stages they grow branched projections, the dendrites, that give the cell its name. However, these do not have any special relation with neurons, which also possess similar appendages. Immature dendritic cells are also called veiled cells, in which case they possess large cytoplasmic 'veils' rather than dendrites.

Dendritic cells were first described by Paul Langerhans (Langerhans cells) in the late nineteenth century. It wasn't until 1973, however, that the term "dendritic cells" was coined by Ralph M. Steinman and Zanvil A. Cohn. In 2007 Steinman was awarded the Albert Lasker Medical Research Award for his discovery.

Types of dendritic cells

In all dendritic cells, the similar morphology results in a very large contact surface to their surroundings compared to overall cell volume.

In vivo - primate

The most common division of dendritic cells is "myeloid" vs. "plasmacytoid" (or "lymphoid"):

| Type | Characteristics | Secretion | Toll-like receptors |
| --- | --- | --- | --- |
| Myeloid dendritic cells (mDC) | Most similar to monocytes; made up of at least two subsets: (1) the more common mDC-1, a major stimulator of T cells, and (2) the extremely rare mDC-2, which may have a function in fighting wound infection | IL-12 | TLR 2, TLR 4 |
| Plasmacytoid dendritic cells (pDC) | Look like plasma cells, but have certain characteristics similar to myeloid dendritic cells; can produce high amounts of interferon-alpha and thus became known as IPC (interferon-producing cells) before their dendritic cell nature was revealed | Interferon-alpha | TLR 7, TLR 9 |

The markers BDCA-2, BDCA-3, and BDCA-4 can be used to discriminate among the types. Lymphoid and myeloid DCs evolve from lymphoid or myeloid precursors, respectively, and thus are of haematopoietic origin. By contrast, follicular dendritic cells (FDC) are probably not of hematopoietic origin, but simply look similar to true dendritic cells.

In some respects, dendritic cells cultured in vitro do not show the same behaviour or capability as dendritic cells isolated ex vivo. Nonetheless, they are often used for research as they are still much more readily available than genuine DCs.
- Mo-DC or MDDC refers to cells matured from monocytes
- HP-DC refers to cells derived from hematopoietic progenitor cells

While humans and non-human primates such as Rhesus macaques appear to have DCs divided into these groups, other species (such as the mouse) have different subdivisions of DCs.

Formation of immature cells

Dendritic cells are derived from hemopoietic bone marrow progenitor cells. These progenitor cells initially transform into immature dendritic cells. These cells are characterized by high endocytic activity and low T-cell activation potential. Immature dendritic cells constantly sample the surrounding environment for pathogens such as viruses and bacteria.
This is done through pattern recognition receptors (PRRs) such as the toll-like receptors (TLRs). TLRs recognize specific chemical signatures found on subsets of pathogens. Once they have come into contact with such a pathogen, they become activated into mature dendritic cells. Immature dendritic cells phagocytose pathogens and degrade their proteins into small pieces; upon maturation, they present those fragments at their cell surface using MHC molecules. Simultaneously, they upregulate cell-surface receptors that act as co-receptors in T-cell activation, such as CD80, CD86, and CD40, greatly enhancing their ability to activate T-cells. They also upregulate CCR7, a chemotactic receptor that induces the dendritic cell to travel through the blood stream to the spleen or through the lymphatic system to a lymph node. Here they act as antigen-presenting cells: they activate helper T-cells and killer T-cells as well as B-cells by presenting them with antigens derived from the pathogen, alongside non-antigen-specific costimulatory signals. Every helper T-cell is specific to one particular antigen. Only professional antigen-presenting cells (macrophages, B lymphocytes, and dendritic cells) are able to activate a helper T-cell which has never encountered its antigen before. Dendritic cells are the most potent of all the antigen-presenting cells.

As mentioned above, mDC probably arise from monocytes, white blood cells which circulate in the body and, depending on the right signal, can turn into either dendritic cells or macrophages. The monocytes in turn are formed from stem cells in the bone marrow. Monocyte-derived dendritic cells can be generated in vitro from peripheral blood mononuclear cells (PBMCs). Plating of PBMCs in a tissue culture flask permits adherence of monocytes. Treatment of these monocytes with interleukin 4 (IL-4) and granulocyte-macrophage colony stimulating factor (GM-CSF) leads to differentiation to immature dendritic cells (iDCs) in about a week. Subsequent treatment with tumor necrosis factor alpha (TNF-alpha) further differentiates the iDCs into mature dendritic cells.

Life span of dendritic cells

Activated macrophages have a lifespan of only a few days. The lifespan of activated dendritic cells, while varying somewhat according to type and origin, is of a similar order of magnitude, but immature dendritic cells seem to be able to exist in an inactivated state for much longer. The exact genesis and development of the different types and subsets of dendritic cells and their interrelationships are only marginally understood at the moment, as dendritic cells are so rare and difficult to isolate that only in recent years have they become the subject of focused research. Distinct surface antigens that characterize dendritic cells have only become known from 2000 on; before that, researchers had to work with a 'cocktail' of several antigens which, used in combination, result in isolation of cells with characteristics unique to DCs.

Dendritic cells and cytokines

The dendritic cells are constantly in communication with other cells in the body. This communication can take the form of direct cell-to-cell contact based on the interaction of cell-surface proteins. An example of this includes the interaction of the receptor CD40 on the dendritic cell with CD40L present on the lymphocyte. However, the cell-cell interaction can also take place at a distance via cytokines. For example, stimulating dendritic cells in vivo with microbial extracts causes them to rapidly begin producing IL-12.
IL-12 is a signal that helps send naive CD4 T cells towards a Th1 phenotype. The ultimate consequence is priming and activation of the immune system for attack against the antigens which the dendritic cell presents on its surface. However, there are differences in the cytokines produced depending on the type of dendritic cell. The lymphoid DC has the ability to produce huge amounts of IFN-alpha, more than any other blood cell.

Relationship to HIV, allergy, and autoimmune diseases

HIV, which causes AIDS, can bind to dendritic cells via various receptors expressed on the cell. The best studied example is DC-SIGN (usually on mDC subset 1, but also on other subsets under certain conditions; since not all dendritic cell subsets express DC-SIGN, its exact role in sexual HIV-1 transmission is not clear). When the dendritic cell takes up HIV and then travels to the lymph node, the virus is able to move to helper T-cells, and this infection of helper T-cells is the major cause of disease. This knowledge has vastly altered our understanding of the infectious cycle of HIV since the mid-1990s, since infected dendritic cells give the virus a reservoir which would also have to be targeted by a therapy. This infection of dendritic cells by HIV explains one mechanism by which the virus could persist after prolonged HAART. Many other viruses, such as the SARS virus, seem to use DC-SIGN to 'hitchhike' to their target cells. However, most work with virus binding to DC-SIGN expressing cells has been conducted using in vitro derived cells such as moDCs. The physiological role of DC-SIGN in vivo is more difficult to ascertain. Altered function of dendritic cells is also known to play a major or even key role in allergy and autoimmune diseases like lupus erythematosus.

Dendritic cells in animals other than humans

The above applies to humans. In other organisms, the function of dendritic cells can differ slightly. For example, in brown rats (but not mice), a subset of dendritic cells exists that displays pronounced killer cell-like activity, apparently through its entire lifespan. However, the principal function of dendritic cells as known to date is always to act as the central command and central encyclopedia of the immune response, similar to servers in a computer network. They collect and store the immune system's "knowledge", enabling them to instruct and direct the adaptive arms in response to challenges. Novel subpopulations of dendritic cells have recently been identified in the mouse.
- Interferon-producing killer dendritic cells (IKDC) have been shown to display a role in tumor protection. Although they produce interferon-alpha, as plasmacytoid dendritic cells do, they can be distinguished from the latter by their cytotoxic potential and the expression of markers usually found on NK cells.
- In addition, an immediate precursor to myeloid and lymphoid dendritic cells of the spleen has been identified. This precursor, termed pre-DC, lacks MHC surface expression.

- List of human clusters of differentiation for a list of CD molecules (as CD80 and CD86)
- Dendritic+Cells at the US National Library of Medicine Medical Subject Headings (MeSH)
- Dendritic cells Presented by the University of Virginia
- www.dc2007.eu : 5th International Meeting on Dendritic Cell Vaccination and other Strategies to tip the Balance of the Immune System
- Website of Dr. Ralph M.
Steinman at The Rockefeller University contains information on DCs, links to articles, pictures and videos
- ↑ Steinman RM, Cohn ZA (1973). "Identification of a novel cell type in peripheral lymphoid organs of mice. I. Morphology, quantitation, tissue distribution". J. Exp. Med. 137 (5): 1142–62. PMID 4573839.
- ↑ Sallusto F, Lanzavecchia A (2002). "The instructive role of dendritic cells on T-cell responses". Arthritis Res. 4 Suppl 3: S127–32. PMID 12110131.
- ↑ McKenna K, Beignon A, Bhardwaj N (2005). "Plasmacytoid dendritic cells: linking innate and adaptive immunity". J. Virol. 79 (1): 17–27. PMID 15596797.
- ↑ Liu YJ (2005). "IPC: professional type 1 interferon-producing cells and plasmacytoid dendritic cell precursors". Annu. Rev. Immunol. 23: 275–306. doi:10.1146/annurev.immunol.23.021704.115633. PMID 15771572.
- ↑ Dzionek A, Fuchs A, Schmidt P, Cremer S, Zysk M, Miltenyi S, Buck D, Schmitz J (2000). "BDCA-2, BDCA-3, and BDCA-4: three markers for distinct subsets of dendritic cells in human peripheral blood" (PDF). J Immunol. 165 (11): 6037–46. PMID 11086035.
- ↑ Ohgimoto K, Ohgimoto S, Ihara T, Mizuta H, Ishido S, Ayata M, Ogura H, Hotta H (2007). "Difference in production of infectious wild-type measles and vaccine viruses in monocyte-derived dendritic cells". Virus Res. 123 (1): 1–8. PMID 16959355.
- ↑ Reis e Sousa C, Hieny S, Scharton-Kersten T, Jankovic D; et al. (1997). "In vivo microbial stimulation induces rapid CD40 ligand-independent production of interleukin 12 by dendritic cells and their redistribution to T cell areas". J. Exp. Med. 186 (11): 1819–29. PMID 9382881.
- ↑ Siegal FP, Kadowaki N, Shodell M, Fitzgerald-Bocarsly PA; et al. (1999). "The nature of the principal type 1 interferon-producing cells in human blood". Science. 284 (5421): 1835–7. doi:10.1126/science.284.5421.1835.
- ↑ Yang, Zhi-Yong; et al. (2004). "pH-dependent entry of severe acute respiratory syndrome coronavirus is mediated by the spike glycoprotein and enhanced by dendritic cell transfer through DC-SIGN". J. Virol. 78 (11): 5642–50. PMID 15140961.
- ↑ Welner R, Pelayo R, Garrett K, Chen X, Perry S, Sun X, Kee B, Kincade P. "Interferon-producing killer dendritic cells (IKDC) arise via a unique differentiation pathway from primitive c-kitHiCD62L+ lymphoid progenitors". Blood. PMID 17317852.
- ↑ Naik SH, Metcalf D, van Nieuwenhuijze A; et al. (2006). "Intrasplenic steady-state dendritic cell precursors that are distinct from monocytes". Nature Immunology. 7 (6): 663–71. doi:10.1038/ni1340.
What is an API?

API stands for Application Programming Interface, which allows one computer or application to talk or communicate with another. In other words, it allows data to be transmitted between two separate software products.

What is API Testing?

API testing is a type of software testing which tests APIs directly. The motive of API testing is to test the performance, functionality, reliability, and security of an application. In API testing, users use software to send calls to the API, get output, and note down the system's response. GUI tests are very different from API tests: API tests focus primarily on the business logic layer of the software architecture.

Why API Testing?
- API testing helps to find small issues early.
- API testing is fast, so users can test applications in less time.
- API testing helps to find issues like missing or duplicate functionality.
- API testing can verify the security of an application.
- API tests execute quickly, with high performance.

Types of API Testing:
1. Validation Testing: Validation testing comes in the final stages and plays an important role in the development process. It verifies aspects of behaviour, the product, and efficiency.
2. Functional Testing: This includes testing of specific functions in the codebase. These tests represent specific situations to ensure that the API works within the expected parameters and handles errors.
3. UI Testing: The UI test is defined as a test of the user interface for your API and its integral parts. It focuses more on the interface that connects to the API than on the API itself.
4. Load Testing: Load testing usually occurs after the completion of a specific unit or the entire codebase. It checks whether the theoretical solution works as planned under load.
5. Runtime or Error Detection: These tests relate to the actual running of the API and the outcome of executing it in a given environment or scenario. They focus mainly on monitoring, execution errors, and error detection.
6. Security Testing: This ensures that the API implementation is protected from external threats. It includes additional steps such as validating the encryption, its functionality, and the design of the API's access control.

Challenges of API Testing:
- The main challenges in web API testing are parameter combination, parameter selection, and call sequencing.
- There is no GUI available to test the application, which makes it difficult to provide input values.
- The tester must know how parameters are selected and categorized.
- Exception handling functions need to be tested.
- Coding knowledge is necessary for testers.

Advantages of API Testing:
- Users can test applications without a UI.
- Users can test the core functionality of the application.
- It is time effective.
- It is language independent.
- It integrates easily with GUI testing.

An API consists of a set of classes/functions/procedures which represent the business logic layer. If an API is not tested properly, it may cause problems not only in the API application but also in the calling application. A simple automated check of the kind described above is sketched below.
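As an illustration of functional and error-detection API testing, here is a minimal sketch in Python using the widely available requests library. The endpoint URL, resource shape, and expected fields are hypothetical placeholders, not taken from the article.

```python
# Minimal API test sketch. The base URL and the /users resource are
# assumptions for illustration; substitute your own API under test.
import requests

BASE_URL = "https://api.example.com"  # hypothetical endpoint

def test_get_user():
    resp = requests.get(f"{BASE_URL}/users/1", timeout=5)
    # Functional check: the call succeeds and returns parseable JSON
    assert resp.status_code == 200
    data = resp.json()
    # Business-logic check: the response carries the expected field
    assert data.get("id") == 1

def test_missing_user_returns_404():
    # Error-detection check: a nonexistent resource should not return 200
    resp = requests.get(f"{BASE_URL}/users/999999", timeout=5)
    assert resp.status_code == 404

if __name__ == "__main__":
    test_get_user()
    test_missing_user_returns_404()
    print("All API checks passed")
```

In practice such checks would run under a test runner like pytest, so each assertion failure is reported individually rather than stopping the script.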
Sugar is good, and sugar granules are small. Can we cut small sugar granules? Yes, we can, easily. Imagine the size of the smallest sugar particle that could retain its sweetness after you cut it. If you keep cutting, you are going to reach a particle that can't be divided again without losing the sweetness. That smallest particle in sugar that retains all of the qualities of sugar is the molecule of sugar. The sugar molecule may be divided again, but it will not be sugar anymore. Dividing molecules into smaller particles can be done: the smallest particle of matter we get when we divide a molecule is referred to as an atom. That means molecules are formed by the bonding of two or more atoms; atoms are thus the building blocks of molecules. A sugar (sucrose) molecule contains 12 carbon atoms, 22 hydrogen atoms and 11 oxygen atoms, and many such sugar molecules join together to make a sugar granule (a quick worked example of these atom counts appears after the next section). Before moving on to learn how these atoms are joined, we are going to understand what elements like oxygen, hydrogen and carbon are. In order to learn this, we must realize one more thing: an atom consists of subatomic particles, namely protons, neutrons and electrons. Each atom features a positively charged nucleus that keeps attracting negatively charged electrons. The nucleus is the permanent seat of protons and neutrons, whereas electrons keep revolving around the nucleus. An atom with only 1 electron and 1 proton is hydrogen (H), an atom with 2 electrons and 2 protons is helium (He), one with 3 each is lithium (Li), one with 4 each is beryllium (Be), and so on. Thus, atoms are differentiated on the basis of the number of subatomic particles. In that way carbon has 6 electrons and oxygen has 8.

Formation of molecules

Electrons resemble small children. They revolve around the nucleus in several shells and energy levels, and these levels are divided into subshells denoted by the letters s, p, d and f. An s subshell can hold 2 electrons. If we think about the formation of a hydrogen molecule from hydrogen atoms, the single electron of a hydrogen atom sits in the first s subshell. Having only one electron in a subshell that can hold two creates an imbalance, so the atom looks to share its electron with another atom. So it undergoes mutual sharing of electrons with another hydrogen atom to make a hydrogen molecule. Electrons thus really are like youngsters: they feel the stress of being alone at home and find friendship with neighboring electrons to be happy and stable! Isn't it fun? This behavior of electrons is the reason for all of the development we see around us. Learning about it is really fun, too!
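To make the atom counts concrete, here is a small sketch that tallies the atoms in one sucrose molecule (C12H22O11, ordinary table sugar) and estimates its molar mass from standard atomic weights:

```python
# Worked example: atoms in one sucrose molecule and its molar mass.
atoms = {"C": 12, "H": 22, "O": 11}                      # counts from the text
atomic_weight = {"C": 12.011, "H": 1.008, "O": 15.999}   # g/mol, standard values

molar_mass = sum(n * atomic_weight[el] for el, n in atoms.items())
print(f"Atoms per molecule: {sum(atoms.values())}")       # 45 atoms in total
print(f"Sucrose molar mass ≈ {molar_mass:.2f} g/mol")     # ≈ 342.30 g/mol
```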
In general, trauma can be defined as a psychological, emotional response to an event or an experience that is deeply distressing or disturbing. When loosely applied, this trauma definition can refer to something upsetting, such as being involved in an accident, having an illness or injury, losing a loved one, or going through a divorce. However, it can also encompass the far extreme and include experiences that are severely damaging, such as rape or torture. Because events are viewed subjectively, this broad trauma definition is more of a guideline. Everyone processes a traumatic event differently because we all face them through the lens of prior experiences in our lives. For example, one person might be upset and fearful after going through a hurricane, but someone else might have lost family and barely escaped from a flooded home during Hurricane Katrina. In that case, a minor Category One hurricane may bring up traumatic flashbacks of their terrifying experience.

Because trauma reactions fall across a wide spectrum, psychologists have developed categories as a way to differentiate between types of trauma. Among them are complex trauma, post-traumatic stress disorder (PTSD), and developmental trauma disorder.

Complex trauma happens repetitively. It often results in direct harm to the individual, and its effects are cumulative. The traumatic experience frequently transpires within a particular time frame or within a specific relationship, and often in a specific setting.

Post-Traumatic Stress Disorder (PTSD)

Post-traumatic stress disorder (PTSD) can develop after a person has been exposed to a terrifying event or has been through an ordeal in which intense physical harm occurred or was threatened. Sufferers of PTSD have persistent and frightening thoughts and memories of their ordeal.

Developmental Trauma Disorder

Developmental trauma disorder is a recent term in the study of psychology. This disorder forms during a child's first three years of life. The result of abuse, neglect, and/or abandonment, developmental trauma interferes with the infant or child's neurological, cognitive, and psychological development. It disrupts the victim's ability to attach to an adult caregiver. An adult who inflicts developmental trauma usually doesn't do it intentionally; rather, it happens because they are not aware of the social and emotional needs of children.

Often, shock and denial are typical reactions to a traumatic event. Over time, these emotional responses may fade, but a survivor may also experience reactions long-term. These can include:
- Persistent feelings of sadness and despair
- Unpredictable emotions
- Physical symptoms, such as nausea and headaches
- Intense feelings of guilt, as if they are somehow responsible for the event
- An altered sense of shame
- Feelings of isolation and hopelessness

Trauma therapy is not one-size-fits-all. It must be adapted to address different symptoms. Mental health professionals who are specially trained in treating trauma can assess the survivor's unique needs and plan treatment specifically for them. Currently, there are several trauma therapy modalities in place:
- Cognitive Behavioral Therapy (CBT) teaches the person to become more aware of their thoughts and beliefs about their trauma and gives them skills to help them react to emotional triggers in a healthier way.
- Exposure therapy (also called in vivo exposure therapy) is a form of cognitive behavior therapy that is used to reduce the fear associated with the emotional triggers caused by the trauma.
- Talk therapy (psychodynamic psychotherapy) is a method of verbal communication that is used to help a person find relief from emotional pain and strengthen the adaptive ways of problem management that the individual already possesses.

These modalities treat the memory portion (the unconscious) of the trauma; however, we now know that a survivor's conscious brain must be treated as well. Recent studies have found that body-oriented approaches such as mindfulness, yoga, and EMDR are powerful tools for helping the mind and body reconnect. Additionally, neurofeedback (a type of biofeedback that focuses on brain waves) shows promise in helping patients with trauma symptoms learn to change their brain wave activity to help them become calmer and better able to engage with others.

Healing from Trauma

It is possible to heal from emotional and psychological trauma. We know that the brain changes in response to a traumatic experience; however, by working with a mental health professional who specializes in trauma, you can leave your trauma behind and learn to feel safe again.

Compassionate Trauma Therapy

The clinicians at The Center for Anxiety and Mood Disorder's Trauma Institute provide compassionate care through specialized training in trauma therapy. For more information, contact us or call us today at 561-496-1094.
shunted surface gap spark igniter. The system is powered by the aircraft 28-volt dc electrical system. This ignition system is only required during starting, because continuous combustion takes place after the engine is started. Components of the ignition and turbine outlet temperature (TOT) systems are illustrated in figure 7.16. The TOT thermocouple harness contains four probes used to sense the temperature of the gases on the outlet side of the gas-producer turbine rotor. Each thermocouple probe generates a dc millivoltage which is directly proportional to the gas temperature it senses. The thermocouple harness averages the four voltages produced and indicates TOT on a gage in the cockpit. The Allison T63 is a free-power gas turbine engine which has four major sections: the compressor assembly, power and accessory gearbox, turbine assembly, and combustion assembly. The power-turbine governor senses power-turbine speed and relays this to the fuel control, which controls the compressor speed. The fuel control sends fuel to the nozzle located in the combustion section, and the nozzle sprays fuel into the combustion liner. The engine is lubricated by a dry-sump pressure system. Ignition for engine starting comes from an ignition exciter and spark igniter located next to the fuel nozzle in the engine combustion section.
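The TOT indication described above is simply the average of four probe voltages scaled to a temperature. A rough sketch of that arithmetic follows; the probe millivoltages are hypothetical, and the sensitivity figure is an assumed, roughly Type-K value, since the text gives neither the probe type nor the calibration (the real harness performs this averaging electrically).

```python
# Sketch of the TOT averaging described above. Values are illustrative only.
probe_mv = [31.2, 30.8, 31.5, 31.0]      # hypothetical probe outputs, millivolts
avg_mv = sum(probe_mv) / len(probe_mv)   # harness averages the four voltages
SENSITIVITY_MV_PER_C = 0.041             # assumed mV per degree Celsius (Type-K-ish)
tot_c = avg_mv / SENSITIVITY_MV_PER_C
print(f"Indicated TOT ≈ {tot_c:.0f} °C") # ≈ 759 °C for these sample values
```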
In Spring 2011, thousands of people in Germany were hospitalized with a deadly disease that started as food poisoning with bloody diarrhea and often led to kidney failure. It was the beginning of the deadliest outbreak in recent history, caused by a mysterious bacterial strain that we will refer to as E. coli X. Soon, German officials linked the outbreak to a restaurant in Lübeck, where nearly 20% of the patrons had developed bloody diarrhea in a single week. At this point, biologists knew that they were facing a previously unknown pathogen and that traditional methods would not suffice; computational biologists would be needed to assemble and analyze the genome of the newly emerged pathogen. To investigate the evolutionary origin and pathogenic potential of the outbreak strain, researchers started a crowdsourced research program. They released bacterial DNA sequencing data from one of the patients, which elicited a burst of analyses carried out by computational biologists on four continents. They even used GitHub for the project: https://github.com/ehec-outbreak-crowdsourced/BGI-data-analysis/wiki The 2011 German outbreak represented an early example of epidemiologists collaborating with computational biologists to stop an outbreak. In this online course you will follow in the footsteps of the bioinformaticians investigating the outbreak by developing a program to assemble the genome of E. coli X from millions of overlapping substrings of the E. coli X genome.
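To give a feel for the assembly task, here is a toy sketch in Python that greedily merges reads by their longest suffix-prefix overlap. Real assemblers, including those used in the outbreak analysis, typically use de Bruijn graphs and must cope with sequencing errors and repeats; the three "reads" below are made up for illustration.

```python
# Toy genome assembler: greedily merge the pair of reads with the largest
# suffix-prefix overlap until no overlaps remain.

def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of a that matches a prefix of b (>= min_len)."""
    start = 0
    while True:
        start = a.find(b[:min_len], start)  # candidate match position
        if start == -1:
            return 0
        if b.startswith(a[start:]):         # full suffix/prefix match
            return len(a) - start
        start += 1

def greedy_assemble(reads: list[str]) -> str:
    reads = reads[:]
    while len(reads) > 1:
        best = (0, 0, 1)                    # (overlap length, index i, index j)
        for i in range(len(reads)):
            for j in range(len(reads)):
                if i != j:
                    olen = overlap(reads[i], reads[j])
                    if olen > best[0]:
                        best = (olen, i, j)
        olen, i, j = best
        if olen == 0:
            break                            # no overlaps left
        merged = reads[i] + reads[j][olen:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return "".join(reads)

# Overlapping substrings of the toy "genome" ATGGCGTGCA
print(greedy_assemble(["ATGGCG", "GCGTGC", "TGCA"]))  # -> ATGGCGTGCA
```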
NASA researchers have found that the intensity of the “heat island” created by a city depends on the ecosystem it replaced and on the regional climate. I have measured the heat island effect in the Greater Vancouver area, specifically Metrotown, Burnaby, to be on the order of 6 deg C during a late summer evening. “The placement and structure of cities — and what was there before — really does matter,” said Marc Imhoff, biologist and remote sensing specialist at NASA’s Goddard Space Flight Center in Greenbelt, Md. “The amount of the heat differential between the city and the surrounding environment depends on how much of the ground is covered by trees and vegetation. Understanding urban heating will be important for building new cities and retrofitting existing ones.” Goddard researchers including Imhoff, Lahouari Bounoua, Ping Zhang, and Robert Wolfe presented their findings on Dec. 16 in San Francisco at the Fall Meeting of the American Geophysical Union. Scientists first discovered the heat island effect in the 1800s when they observed cities growing warmer than surrounding rural areas, particularly in summer. Urban surfaces of asphalt, concrete, and other materials — also referred to as “impervious surfaces” — absorb more solar radiation by day. At night, much of that heat is given up to the urban air, creating a warm bubble over a city that can be as much as 1 to 3°C (2 to 5°F) higher than temperatures in surrounding rural areas. The impervious surfaces of cities also lead to faster runoff from land, reducing the natural cooling effects of water on the landscape. More importantly, the lack of trees and other vegetation means less evapotranspiration — the process by which trees “exhale” water. Trees also provide shade, a secondary cooling effect in urban landscapes. Using instruments from NASA’s Terra and Aqua satellites, as well as the joint U.S. Geological Survey-NASA satellite Landsat, researchers created land-use maps distinguishing urban surfaces from vegetation. The team then used computer models to assess the impact of urbanized land on energy, water, and carbon balances at Earth’s surface.
United States Declaration of Independence The Declaration of Independence was adopted by the Second Continental Congress on July 4, 1776. It was a formal explanation of why Congress had voted to declare independence from Great Britain. The best-known version of the Declaration is a signed copy that is displayed at the National Archives in Washington, D.C. It has become a well-known statement on human rights, particularly its second sentence: We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness. The Declaration inspired many similar documents in other countries, the first being the 1789 Declaration of the United Belgian States issued during the Brabant Revolution in the Austrian Netherlands. It also served as the primary model for numerous declarations of independence in Europe and Latin America, as well as Africa and Oceania, during the first half of the 19th century. The source copy used for the first printing has been lost and may have been a copy in Thomas Jefferson’s hand. The original purpose of the document was to announce independence, and references to its text were few in the following years. Abraham Lincoln later made it the centerpiece of his policies and his rhetoric, as in the Gettysburg Address of 1863. It is now considered one of the most important documents in American history, along with documents it helped inspire such as the French Declaration of the Rights of Man and of the Citizen.
Working principle of the steam jet refrigeration system, with reference to the system diagram and its P–h and T–s diagrams. The steam jet refrigeration system (also known as the ejector refrigeration system) is one of the oldest methods of producing a refrigeration effect. The basic components of this system are an evaporator, a compression device, a condenser and a refrigerant control device. This system employs a steam injector or booster (instead of a mechanical compressor) to compress the refrigerant to the required condenser pressure level. In this system, water is used as the refrigerant. Since the freezing point of water is 0°C, it cannot be used for applications below 0°C. The steam jet refrigeration system is widely used in food processing plants for pre-cooling of vegetables and concentrating fruit juices, and in gas plants, paper mills, breweries, etc. Principle of the steam jet refrigeration system: The boiling point of a liquid changes with a change in external pressure. In normal conditions, the pressure exerted on the surface of a liquid is the atmospheric pressure. If this pressure on the surface of the liquid is reduced by some means, the liquid starts boiling at a lower temperature because of the reduced pressure. This basic principle of boiling a liquid at a lower temperature by reducing the pressure on its surface is used in the steam jet refrigeration system. The boiling point of pure water at the standard atmospheric pressure of 760 mm of Hg is 100°C. It may be noted that water boils at 12°C if the pressure on its surface is kept at 0.014 bar, and at 7°C if the pressure is 0.01 bar. The reduced pressure on the surface of the water is maintained by throttling the steam through the jets or nozzles. Working of the steam jet refrigeration system: The flash chamber or evaporator is a large vessel and is heavily insulated to avoid a rise in the temperature of the water due to high ambient temperature. It is fitted with perforated pipes for spraying water. The warm water coming out of the refrigerated space is sprayed into the flash chamber, where some of it is converted into vapour after absorbing latent heat, thereby cooling the rest of the water. The high-pressure steam from the boiler is passed through the steam nozzle, thereby increasing its velocity. The high-velocity steam in the ejector entrains the water vapour from the flash chamber, which results in further formation of vapour. The mixture of steam and water vapour passes through the venturi tube of the ejector and gets compressed. Its temperature and pressure rise considerably, and it is fed to the water-cooled condenser where it condenses. The condensate is again fed to the boiler as feed water. A constant water level is maintained in the flash chamber, and any loss of water due to evaporation is made up from the make-up water line. Steam ejector: The steam ejector is one of the most important components of a steam jet refrigeration system. It is used to compress the water vapour coming out of the flash chamber. It uses the energy of a fast-moving jet of steam to entrain the vapour from the flash chamber and then compress it. The high-pressure steam from the boiler expands while flowing through the convergent-divergent nozzle. The expansion creates a very low pressure and increases the steam velocity. The steam attains very high velocities, in the range of 1000 m/s to 1350 m/s. The nozzles are designed for the lowest operating pressure ratio between the nozzle throat and exit.
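The two reduced-pressure boiling points quoted above can be checked with the Antoine correlation for water. This is a sketch for illustration only; the constants are the commonly tabulated Antoine set for water between roughly 1°C and 100°C, and the helper name is my own:

```python
import math

# Antoine constants for water (P in mmHg, T in deg C), commonly tabulated
# for roughly the 1-100 deg C range; treat them as an assumption here.
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_c(p_bar):
    """Invert the Antoine equation log10(P) = A - B/(C + T) for T."""
    p_mmhg = p_bar * 750.06  # bar -> mmHg
    return B / (A - math.log10(p_mmhg)) - C

for p in (1.013, 0.014, 0.01):
    print(f"{p:5.3f} bar -> water boils at about {boiling_point_c(p):5.1f} deg C")
# 1.013 bar -> ~100.0 deg C, 0.014 bar -> ~12.1 deg C, 0.01 bar -> ~7.1 deg C,
# matching the figures quoted in the text.
```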
A nozzle pressure ratio of less than 200 is undesirable because of poor ejector efficiency when operating at low steam pressure. The water vapour from the flash chamber is entrained by the high-velocity steam, and the two are mixed in the mixing section at constant pressure. After mixing is complete, the mean velocity of the mixture is supersonic. This supersonic stream undergoes a normal shock in the constant-area throat of the diffuser, which results in a rise in pressure and subsonic flow. The function of the diverging portion of the diffuser is to recover the velocity head as pressure head by gradually reducing the velocity. Analysis of the steam jet refrigeration system: The temperature–entropy (T–s) and enthalpy–entropy (h–s) diagrams for a steam jet refrigeration system are shown in fig. (a) and (b) respectively. Point A represents the initial condition of the motive steam before passing through the nozzle, and point B is the final condition of the steam, assuming isentropic expansion. Point C represents the initial condition of the water vapour in the flash chamber or evaporator, and point E is the condition of the mixture of high-velocity steam from the nozzle and the entrained water vapour before compression. Assuming isentropic compression, the final condition of the mixture discharged to the condenser is represented by point F. The final condition of the motive steam before mixing with the water vapour is shown at point D. The make-up water is supplied at point G, whose temperature is slightly lower than the condenser temperature, and is throttled to point H in the flash chamber.
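The nozzle exit velocities quoted earlier follow from the steady-flow energy equation applied between points A and B: neglecting the inlet velocity, V = sqrt(2 (h_A - h_B)). A minimal sketch, with purely illustrative enthalpy drops rather than values from any specific cycle:

```python
import math

def nozzle_exit_velocity(dh_kj_per_kg):
    """Steady-flow energy equation V = sqrt(2 * dh), inlet velocity neglected.

    dh is the isentropic enthalpy drop h_A - h_B across the nozzle.
    """
    return math.sqrt(2 * dh_kj_per_kg * 1000.0)  # convert kJ/kg to J/kg

for dh in (500, 900):  # illustrative enthalpy drops in kJ/kg
    print(f"dh = {dh} kJ/kg -> V = {nozzle_exit_velocity(dh):.0f} m/s")
# ~1000 m/s and ~1342 m/s, consistent with the 1000-1350 m/s range quoted above
```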
Today, more than at any other time in history, there is growing support to move away from nonrenewable resources and towards developing renewable resources to meet current and future energy needs. Fossil fuels are nonrenewable resources that continue to negatively impact the environment. It is important to learn how these resources are formed, since this process of formation is at the very heart of why fossil fuels are considered nonrenewable. Using the readings for this module, the Argosy University online library resources, and the Internet, research the nonrenewable resource assigned to you. Note: You are assigned a resource based on the first initial of your last name. Last names beginning with K–R research natural gas. Respond to the following: Describe how this nonrenewable resource was initially formed. Briefly explain where the major reserves of this nonrenewable resource are located and how it is extracted. Examine the environmental impact caused by the extraction process. Explain how this nonrenewable resource is used to produce energy, and identify the pollution problems that are caused by this energy source.
High school special education teachers play a critical role in the learning and development of their special-needs students, continuing work that begins in preschool and extends through high school. Many prominent people past and present, including athlete Magic Johnson and the composer Ludwig van Beethoven, have had special needs, physical disabilities or learning difficulties, reports The College Board. In spite of their challenges, these and many others went on to enjoy successful careers, making their marks in society and history. If you enjoy helping others, especially those with mental or physical disabilities, and are patient, creative, and supportive of those whose learning styles differ from most, then you may wish to learn more about becoming a special education teacher. What are the duties of high school special education teachers? - evaluating a student’s unique strengths as well as available skills - developing Individualized Education Programs (IEPs) to outline the services and other attention to be given to students - working with a student’s other teachers to adapt lessons to fit the student’s academic and other needs - updating IEPs throughout the school year as needed to reflect changes and improvements in academic progress and changing goals - meeting regularly with parents, administrators and others to discuss and update student progress - overseeing special education teacher assistants to ensure that they possess the skills and training needed to successfully work with special-needs students - monitoring and ensuring that the school complies with regulations of the Individuals with Disabilities Education Act (IDEA) Special education teachers may work in public or private schools, as well as charter or magnet schools, adds the BLS. Work hours are usually during normal daytime school hours, within a traditional 10-month cycle and a summer break. Special education teachers may also be employed by residential schools or tutor homebound or hospitalized children. How to become a special education teacher All special education teachers, whether elementary, middle or high school, need to hold a bachelor’s degree. College courses usually cover teaching methods designed for learners with special needs, the kinds of disabilities that may be encountered and how to successfully work with them, as well as creating teaching and IEP plans. Some states require special education teachers to hold a master’s degree, particularly at the high school level. Fieldwork (student teaching) is also required for graduation. Public school special education teachers must be certified or licensed; many private schools do not require licensing beyond a degree. States requiring licensure also require continuing education courses to maintain the license or certification. These can often be obtained through online education. What kind of personal qualities are needed? - You should be patient and supportive of people who learn differently from most students - Good communication skills are necessary for communicating with parents and others - Critical thinking skills are needed to develop teaching plans based on analysis of data concerning progress and other variables - Good instructional skills help students become engaged in the learning process and make difficult ideas understandable. What is the job outlook and pay?
Due to early-intervention programs, the BLS states that the biggest hiring increase for special education teachers will be at the elementary and middle school levels. However, there will continue to be a need for extending special attention through high school, especially in terms of learning life skills, such as using a checkbook or time management to help with future employment. As of 2010, BLS data shows that the average annual salary was $54,810 for high school special education teachers.
Immune System: The complement system is a part of the immune system that helps eliminate pathogens, but if it is activated in excess, it can be harmful. Understanding how our immune system responds to infection with the Covid-19 virus is one of the keys to treating the disease. 1. Complement System Our immune system is very complex. One of its parts is the so-called complement system, which is part of innate immunity and is one of our oldest defense systems. The complement system is a defense mechanism whose mission is to eliminate pathogens from the circulation. However, when activated in excess, it can be harmful. 2. Complement System and Covid-19 A recent study by researchers from the Columbia University Irving Medical Center (United States), published in Nature Medicine, indicates that the complement system could influence the severity of Covid-19. As previous studies indicate, coronaviruses can mimic proteins involved in coagulation and proteins that make up the complement system. Complement proteins work a bit like antibodies and help kill pathogens by attaching themselves to viruses and bacteria. The complement system can also increase clotting and inflammation in the body. “The new coronavirus, by mimicking the complement or coagulation proteins, could drive both systems into a hyperactive state.” 3. Macular Degeneration and Increased Severity of Covid-19 The researchers wanted to see whether people with pre-existing clotting or complement-system disorders were more susceptible to the Covid-19 virus. To do this, they looked at people with Covid-19 who had age-related macular degeneration (an eye disease caused by an excessive response of the complement system), as well as common coagulation disorders such as thrombosis and bleeding. They found that people with age-related macular degeneration are at increased risk of complications or death from Covid-19. 4. Coagulation in People With Covid-19 The researchers found that people with a history of bleeding disorders also had a higher risk of dying from Covid-19 virus infection. “The complement system is also more active in obesity and diabetes and may help explain, at least in part, why people with these conditions are also at increased risk of mortality from Covid-19.” 5. Activation of the Complement System The study also revealed that in people with Covid-19, the virus induces a strong activation of the body’s complement and clotting systems. “We found that the complement system is one of the most differentially expressed pathways in patients infected with SARS-CoV-2,” the authors indicate. “As part of the immune system, you would expect to see the complement system activated, but it seems to go beyond what you would see in other infections such as the flu.” “These results provide important information on the pathophysiology of Covid-19 and show the role of complement and clotting pathways in determining the clinical outcomes of patients infected with SARS-CoV-2.” The study’s authors suggest that drugs that inhibit the complement system could help treat severe Covid-19 patients.
(HealthDay News) -- If an infant has hearing loss, it can affect the child's ability to develop speech, language and social skills, the U.S. Centers for Disease Control and Prevention says. An infant's first hearing screening is typically recommended within the first month of life. Even if the child passes the initial screening, the CDC recommends watching for signs of hearing loss. These signs may include: - The child does not startle at a loud noise. - The child does not turn to the source of a sound at 6 months of age or later. - The child does not say single words, such as "dada" or "mama," by age 1 year. - The child turns his or her head upon seeing you, but doesn't if you only call his or her name. - The child seems to hear some sounds, but not others.
Gigantic waves that can level coastal communities demonstrate nature’s enormous power. Tsunamis spring from disturbances underneath the ocean, such as volcanic eruptions and earthquakes. They may also occur after a huge meteorite impact or a nuclear test, where large amounts of water are displaced, creating a ripple effect that reaches the shore. A man-made tsunami is possible, although it is not as strong and massive as one caused by nature. Typically, a tsunami can look like a raging tide, hence the name “tidal wave.” But this term is somewhat misleading, since tsunamis are not caused by tides or the interaction between the earth’s and moon’s gravitational forces. Rather, a tsunami is a wave formed by the displacement of water, moving from the epicenter towards the shore and back. Causes of Tsunami One of the most common causes of tsunamis is a subduction fault between tectonic plates of the earth’s crust. As the plates move towards each other, one of them eventually slips underneath the other. Sometimes it can get stuck, creating a huge amount of tension that builds up over time. And then, snap! The plate breaks loose, releasing all the tension and creating a massive earthquake and, subsequently, a tsunami. Underwater disturbances such as volcanic eruptions are also among the leading causes of tsunamis. The upward thrust of the explosion causes the water to rise momentarily, high enough to generate a devastating tsunami. In deep water this may not be observable, but the wave comes thundering in as it reaches shallow water. It moves at an incredible speed and eventually slows down. Depending on the distance the wave travels and the intensity of the earthquake, a tsunami can be either weak or strong. The further it has to travel, the weaker it gets. Minor tremors underneath the ocean may not cause a tsunami at all. Signs of Tsunami The first indication that a tsunami is about to happen is the occurrence of an earthquake. People living in coastal areas need to be aware of the danger of a tsunami whenever tremors are felt beneath the earth. Sometimes the epicenter of an earthquake is so distant that only slight tremors are felt. However, this may mean a huge tsunami is on its way, so it is good advice to evacuate as soon as possible to a higher elevation far from the shore.
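The speed behavior described above has a simple quantitative backing: in the shallow-water approximation, a tsunami travels at c = sqrt(g * d), where d is the water depth, which is why it races across the deep ocean and slows dramatically near shore. A quick illustration (the depths are arbitrary examples):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed(depth_m):
    """Shallow-water wave speed c = sqrt(g * d)."""
    return math.sqrt(g * depth_m)

for depth in (4000, 100, 10):  # example depths in metres
    c = tsunami_speed(depth)
    print(f"depth {depth:>4} m -> {c:5.1f} m/s ({c * 3.6:5.0f} km/h)")
# ~713 km/h over the deep ocean, slowing to ~36 km/h near the shore
```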
This 30 Day Science Activity Planner is an excellent resource for fun and easy science experiments for kids to do at home. Make sure you grab the printable science activity plan at the bottom of this post. Kids are naturally curious, and it's incredibly important for them to participate in science activities. So I put together this list that you can use just for fun or as part of your science curriculum. These work well for homeschool, school classrooms, virtual classrooms, and just-for-fun learning at home. You may also like these STEM gift ideas for kids too! There are a variety of hands-on science experiments and activities, as well as demonstrations for observation. Most are simple to set up and use items that you already have around the house or can easily obtain. These science activities are great for kids of all ages, but are ideal for preschool and kindergarten. You may need to adjust some of them depending on your child's age. For example, some of these are great to introduce to toddlers, but they may need to only observe, whereas older children may be able to assist with setup. 30 Kids' Science Activities - Melting Rainbows – This baking soda and vinegar experiment is super fun and the perfect science experiment for preschoolers to do. Older kids may prefer this fizzy bath bomb science project. - Candy Rainbow – Use M&Ms for this pretty science experiment. Try making a rainbow and other color patterns. - Oil & Water Color Changing Lab – This activity is easy to set up and lets kids explore color mixing with oil and water. - Ocean Zone Density Jar – Learn about the different zones of the ocean while also learning about liquids having different densities. - Viscosity Art Project – Kids of all ages will be fascinated by the patterns they can make with this STEAM activity. - Galaxy Oobleck – Oobleck is easy to make and fantastic for sensory play. - Liquid or Solid? Experiment – Yes, this is a basic homemade slime recipe, but it's an excellent way to learn about non-Newtonian fluids and compare them to other liquids and solids. - Butterfly Life Cycle Worksheet – Learn about the different stages of a butterfly's life and use our printable worksheet to practice putting them in the correct order. - Human Body Bingo Game – Our printable game is a fun way to learn about organs. - Unicorn Tower Density Jar – Try this hands-on activity to test the density of different objects. My daughter did this for her second grade science fair project. - Balloon Air Pressure Experiment – Use a balloon and a mason jar to demonstrate air pressure to kids. - Bouncing Playdough – Make this playdough recipe and let the kids have a blast testing how well it bounces. - Sidewalk Paint Rockets from the Gluesticks blog – Delight the kids with some messy fun as they launch these chalk paint rockets. - Leak Proof Bag Experiment from Fun Learning for Kids – Grab a Ziploc bag and some sharp pencils for this one. The kids will be amazed when the water doesn’t leak out! - Simple Light Refraction Experiment by Look We’re Learning – This activity is a great way to learn about how light bends when passing through objects. All you need is some water and a Post-It Note for this demonstration. - Heat Conduction Experiment from Look We’re Learning – This simple demonstration uses a few items from your kitchen to show how different materials conduct heat. - Lava Lamp Bottles from Natural Beach Living – This is such a cool chemistry experiment for little scientists!
- Rain Cloud in a Jar by Natural Beach Living – This is a great weather science activity for kids. They will have fun making it rain and learning how clouds work. - Salt Water Density Experiment by Little Bins for Little Hands – Take the simple sink-or-float experiment to the next level by testing how salt affects water density. - Easy Flower Science Experiment by Kindergarten Worksheets and Games – There are so many things kids will learn with this classic flower-dyeing science activity. Plus, the results are pretty! - Exploding Watermelon Experiment from 123 Homeschool 4 Me – Learn about potential and kinetic energy with this cool activity. It’s messy and fun, so you’ll want to set it up outside. - Dissolving, Expanding, and Bouncing Eggs from Blue Bear Wood – Kids will put their hypotheses to the test when attempting to dissolve eggshells in different liquids. - Color Mixing Sensory Bottle by The Chaos and The Clutter – This color mixing activity is both a science experiment and sensory play. - Make Borax Crystal Ornaments by The Craft Train – There are so many incredible uses for borax. Kids will have fun seeing these crystals grow! - Easy Coffee Filter Science Experiment by Sixth Bloom – A fun and simple activity to teach chromatography to kids. Perfect for toddlers and up. - Simple Pulley Machine Game from JDaniel4’s Mom – Make a simple pulley system and play a counting-sheep game with it. - Outdoor Bug Hunt Activity from KC Edventures – Explore the backyard and keep notes of your bug observations with this free printable. - Walking Water Science Activity by A Dab of Glue Will Do – Teach the kids about capillary action with this fun and easy science demo. - Coffee Ground Fossils from Crafts by Amanda – Whether your kids are obsessed with dinosaurs or not, they’ll enjoy this science-themed craft project. - Easy Weights & Measures Experiments by Mommy Evolution – Test the strength of coffee filters with a series of experiments. A great way for kids to work on learning weights, measurements, and counting. 30 Day Science Activity Printable Calendar Our printable science activity planner is free for your personal use at home or in your classroom. Please share this post with other parents and teachers so they can get their own copy. *Special Note for Teachers: You may use this planner with your virtual classes too. How to Use the 30 Day Science Activity Plan Printable - Download and save the printable kids' science activity calendar. - Each activity name in the PDF is clickable and will take you to the instructions for that science experiment. - Use it as a handy reference for all of the activities. - Use it to create your own weekly or daily plan of science activities. You can do them in any order you choose. - Print it out and cross off each science activity after completing it with your kids. - Circle or star your child's favorite science experiments so you can do them again!
Read/review the following resources for this activity: Textbook: Chapters 11, 12, and 13; Lesson; Additional scholarly sources you identify through your own research. Initial Post Instructions: Analyze why older, white adults vote in elections more than other groups, while describing how each political party cultivates voters and the role an interest group plays in turning people out to the polls. You can explore political voting data at the United States Census Bureau's website here: https://www.census.gov/topics/public-sector/voting.html Use evidence (cite sources) to support your response from assigned readings or online lessons, and at least […] Write an essay. 1. For questions 1 and 2, there are many websites which contain lots of information about Middle Eastern society, Arab culture and Islam. 2. Your answers must be in essay format and in your own language, not the language of the articles you get your information from. 3. Question #3 is open-ended. You can answer it in your own way, but answer each part of the question. "The political parties and/or interest groups can impact this difference in voter turnout by helping voters make an informed choice instead of trying to manipulate voters." Do you think that political parties or interest groups could also take part in manipulating or misleading voters in order to achieve favorable election results? Requirement: 1 scholarly source or textbook; one page. 1) Find at least one item in the news media (text or video) since January 1, 2019 that discusses Islam. 2) Cite the news item (preferably a hyperlink so that your colleagues and I could easily find it to read or view as well). 4) In two paragraphs, summarize the implicit or explicit claims of the news item regarding Islam. 5) In two or three paragraphs, write your thoughts about those claims, in light of what you have already read about religion in general, and Islam in particular. Question Description: In this milestone, you will submit Mapping the Issue (Section II of your final project). This milestone is a concept map that will help you visualize the social issue and how it relates to the following sociological concepts: cultural beliefs and biases, social roles, social inequalities, and existing social conditions. A concept map is a visual diagram that helps you make mental connections between concepts and show the nature of those relationships graphically. In a concept map, lines represent a relationship between different categories. This concept map is a critical piece of your final project […] What is the impact of the Yemen war on the US-Saudi relationship? You must explain it well and discuss the Democrats' and the Republicans' points of view on that topic. You shouldn't argue for or against Saudi Arabia or the US; just explain the impact of the Yemen war on the relationship between these countries. The answers should be in essay format. Answer and identify each part of the question. Attachment preview. Instructions: Your answers should be in essay format. Answer and identify each part of the question. For example, if you are answering Q2A, mention it clearly in your answer so that I know which question and what part of it you are answering. Why is Aristotle considered the father of political science, and why did he call politics the master science? Prompt: In this unit we learn about the history of our democracy and the rights and responsibilities of citizens.
On November 6th, 2018 we had a midterm election that chose U.S. senators and representatives as well as a number of state and local officials, from governors on down to local justices of the peace. Midterm elections have historically low voter turnout (only about 40% of those eligible actually vote – and even less in Texas!). Consider what you’ve read in the chapters and comment on the following: 1. What is necessary for a democracy to thrive? Can it still exist […] Please see the teacher's notes and fix them, and then add more paragraphs (in red font). Please also add to the paper a section on bad forms of leadership instead of the ordinary leader. I added a paragraph ("In my opinion"), which you will find in green font. You just need to add a few paragraphs to "How to Make a Good Leader?" and also add two long paragraphs about bad leaders (international leaders, not only American), and then compare them.
Obesity in children has reached epidemic proportions, and it significantly impacts our nation’s psychological and physical health. According to research, children who are obese or overweight are more prone to become obese adults. Moreover, their risk of getting diseases such as heart disease, hypertension and diabetes while they are young is much higher. One out of ten children suffers from obesity when entering primary school. What, then, can schools do to prevent it? 1. Promote Healthy Eating Habits The first thing that schools can do is to educate children about healthy eating habits so that they are able to recognize the kinds of food that are healthy for their bodies. One way this can be promoted is to cultivate a garden at school where vegetables can be grown. Teachers may then highlight to the children the link that exists between foods grown at home and good health, and also how different kinds of meals can be prepared from these foods. 2. Focus on Lunch Menus Healthy meals should be provided to children at school. The new government scheme, under which a free school lunch is offered to all children at primary school up to the age of seven, is a good initiative. It offers some control over the kind of food children are eating at school. Ensuring that meals provided at school contain healthy foods is an excellent way to make sure that children receive the nutrients they require to thrive. 3. Enhance High-Quality Physical Education For childhood obesity prevention in schools, strict guidelines have been set by the government regarding the amount of physical exercise that every primary school child should be getting; however, the onus lies on individual schools to make sure that the program runs successfully. Schools should not only provide regular physical education lessons but also encourage extra-curricular sports and active playtimes whenever possible. 4. Maintain Regular Contact with Parents Parents have the major influence on a child’s obesity; hence, schools should maintain regular contact with parents so that any issues a child has can be worked on closely by teachers and parents together. Moreover, constant contact between parents and the school is also essential to promote healthy eating habits: they cannot simply be learnt; they require role-modeling and constant reinforcement. 5. Promote the Right Messages Schools should send children the right messages about healthy lifestyles. For instance, a school that otherwise promotes healthy eating habits but gives sweets as a reward to its children sends the wrong message. Hence, robust school policies on physical activity and food are essential for maintaining consistency and promoting the right messages. 6. Change Government Policy Currently there are limits to what schools can do for childhood obesity prevention on their own. It cannot be managed by schools alone: it is essential that parents work closely with schools. Moreover, guidance, resources, expert support and a flexible curriculum are important so that schools can freely dedicate much more time to physical activity and healthy eating. Parents play a big role in preventing childhood obesity. Try the following tips. 1. Encourage healthy eating habits. Small changes may result in success. 2. Make their favorite recipes healthier. Some of your favorite dishes can be made healthier with a few changes.
You may also try certain new heart-healthy dishes that may become favorites. 3. Remove temptations that are rich in calories. Treats should be given in moderate amounts. Limiting high-sugar, salty and high-fat snacks can help your children develop healthy eating habits; aim for treats of about 100 calories or less. 4. Help your children understand the health benefits of physical activity. 5. Help your children stay active. Children and teens should get at least one hour of moderate-intensity physical activity on most days of the week, if possible. Be a role model for your kids: add physical activity or exercise to your own routine and ask your kids to accompany you. 6. Reduce your kids’ sedentary time. Though quiet or sedentary time for homework and reading is fine, limit your kids’ screen time for video games, TV and the internet to no more than 2 hours a day. According to the American Academy of Pediatrics, TV is not recommended for children aged two years or younger. Encourage kids to do fun activities, on their own or with family members, that involve more physical activity.
Canada Lesson Plans and Teaching Resources A Flag of Canada's Own – Students examine the representative symbols of flags and make their own national flags for Canada. A Land Rich in Beauty and Culture – This PDF document focuses on the geography, provinces, and cultural history of Canada. Four Trading Cards – This project is a culmination of learning from lessons and readings in Grade Five Social Studies, as well as a fun way to present information about Canada to classmates and share it in print as a trading card. Canadian Citizenship and World Languages – We look at Canada's Rights and Freedoms, as well as the various languages spoken in the Maple Leaf country. Cultural Connections – Students will examine the different ways that they see culture in Canada. Province Puzzle – Students will examine the shapes of each province as well as where it belongs in Canada. Come to Canada! Poster – Students will examine the positive aspects of Canada (e.g., diversity, natural resources, scenery) while they design a poster attracting people to come to Canada (to travel or immigrate). Neighbor! A Journey through Canada – Through active participation, learners will develop a global awareness of our neighbor to the north, Canada. Learners will explore the complex nature of Canadian culture by examining significant historical Canadian events. Spatial-sense skills are exposed and extended through various hands-on, multi-sensory lessons. Planning a Canadian Expedition – Students will examine the physical and climatic characteristics of a physiographic region in Canada by planning an expedition of the area as though they were early explorers (with present-day knowledge of the geography, though without settlements!). Planning a Trip in Canada Using Mileage – Students will use the mileage charts on a map of Canada to plan a trip that is possible within a pre-set mileage. To Canada – Students explore Canadian culture.
Artificial intelligence (AI) refers to the recognition or creation of patterns that simulate human actions or thought. Since the late 1970s, when people began regularly interacting with computers, AI has become increasingly prevalent, and uses of AI technology continue to create greater opportunities for interaction with human norms — those rules that define acceptable behavior. The intersection of those norms and AI processes that seek to replace human actions where efficiency calls for it is also an intersection of expectations and the law — one that is changing and adapting quickly. The recent article "AI–human interaction: Soft law considerations and application," published in the Journal of AI, Robotics & Workplace Automation, discusses these issues. In particular, the article considers several primary concepts: - The history of AI and how its purpose, usability and interaction with humans have evolved in the past 50 years. - One potential challenge to AI, the Uncanny Valley: those instances where a robot's almost-but-not-quite-human appearance induces mental uneasiness, because humanlike appearances create expectations that robots cannot meet. - The Turing test, a concept anticipated by AI pioneers, as a method of inquiry for determining whether a computer is actually capable of thinking like a human being. - How chatbots launched by technology corporations have recently demonstrated the risks and ethical challenges that advances in AI present. - The necessity of soft law to address the risks presented by increased use of AI as the industry progresses, and how legislative bodies have already begun addressing those risks. Examining these issues, the article begins by tracing the history of AI from personal computing in the 1970s, when software and computer platforms were being developed with the goal of making everyone a computer user. As computers became a part of daily life, the field of cognitive engineering, a scientific field merging how people think with the engineering of products to address human needs, developed with the goal of increased efficiency worldwide. Human interaction with computers then progressed actively for decades, conforming to usability needs and reflecting changes in society. Today, everyone is "plugged in" in some way in nearly every part of their existence, especially given the virtual and remote world designed in response to the COVID-19 pandemic. AI has not only adapted to these changes but continues to evolve, and the use of AI is shaping up to create a new normal. The rapid maturation of the industry has set off related calls for action in the legal and regulatory communities. The article considers these movements and posits that soft law is the ideal step to address AI innovations, especially when considering how certain legislative bodies (including the California Legislature) already have frameworks addressing how AI communicates with the public. As noted in the article, because "this is unlikely to be a situation where AI developers police themselves without any outside demands or influence," there is a need to continue expanding efforts like this into a soft law approach that works. The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
Originally found on https://medicalxpress.com/news/2017-07-autism-severity-brain.html UCLA researchers have discovered that children with autism show a tell-tale difference on brain tests compared with other children. Specifically, the researchers found that the lower a child’s peak alpha frequency—a number reflecting the frequency of certain brain waves—the lower their nonverbal IQ was. This is the first study to highlight peak alpha frequency as a promising biomarker not only to differentiate children with autism from typically developing children, but also to detect the variability in cognitive function among children with autism. Autism spectrum disorder affects an estimated one in 68 children in the United States, causing a wide range of symptoms. While some individuals with the disorder have average or above-average reasoning, memory, attention and language skills, others have intellectual disabilities. Researchers have worked to understand the root of these cognitive differences in the brain and why autism spectrum disorder symptoms are so diverse. An electroencephalogram, or EEG, is a test that detects electrical activity in a person’s brain using small electrodes that are placed on the scalp. It measures different aspects of brain activity, including peak alpha frequency, which can be detected using a single electrode in as little as 40 seconds and has previously been linked to cognition in healthy individuals. The researchers performed EEGs on 97 children ages 2 to 11; 59 had diagnoses of autism spectrum disorder and 38 did not have the disorder. The EEGs were taken while the children were awake and relaxed in dark, quiet rooms. Correlations among age, verbal IQ, nonverbal IQ and peak alpha frequency were then studied. The discovery that peak alpha frequency relates directly to nonverbal IQ in children with the disorder suggests a link between the brain’s functioning and the severity of the condition. Moreover, it means that researchers may be able to use the test as a biomarker in the future, to help study whether an autism treatment is effective in restoring peak alpha frequency to normal levels, for instance. Written by Sarah C.P. Williams For more information on autism treatments, contact us.
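For readers curious what "peak alpha frequency" means computationally: it is simply the frequency at which the EEG power spectrum peaks within the alpha band. Below is a minimal, hypothetical sketch using a synthetic signal; the sampling rate, band limits, and window length are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np
from scipy.signal import welch

fs = 256  # sampling rate in Hz (assumed)
t = np.arange(0, 40, 1 / fs)  # ~40 s of data, echoing the duration above
# Synthetic "EEG": a 9 Hz alpha rhythm buried in noise
x = np.sin(2 * np.pi * 9 * t) + np.random.randn(t.size)

# Welch power spectral density, 4 s windows -> 0.25 Hz frequency resolution
f, pxx = welch(x, fs=fs, nperseg=4 * fs)
band = (f >= 6) & (f <= 13)  # alpha band; these bounds are assumptions
paf = f[band][np.argmax(pxx[band])]
print(f"Peak alpha frequency: {paf:.2f} Hz")  # ~9 Hz for this synthetic signal
```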
In this quick tutorial you'll learn how to draw a Cattle Egret in 6 easy steps - great for kids and novice artists. The images above represent how your finished drawing is going to look and the steps involved. Below are the individual steps - you can click on each one for a High Resolution printable PDF version. At the bottom you can read some interesting facts about the Cattle Egret. Make sure you also check out any of the hundreds of drawing tutorials grouped by category. How to Draw a Cattle Egret - Step-by-Step Tutorial Step 1: To make the egret's head, draw a shape that looks like a 7, with a sharp beak. Draw your line down for a neck. Add a line to divide the beak in half. Add an eye. Step 2: Add a small crescent shape on top of the bird's head and neck. Then add a gently curving line with a W at the bottom for the body. Step 3: Next, make a hook shape for the wing. Step 4: Then we can add some wavy lines and straight lines to make the feather details. Step 5: Draw long legs and big feet, with toes both in front and in back of the foot. That's it, your cattle egret is done. They are usually white in color. Interesting Facts about the CATTLE EGRET The Cattle Egret is a member of the bird family and the scientific term for it is Bubulcus ibis. It is the only member of the genus Bubulcus. Cattle egrets get their name from their association with cattle: they capture small creatures disturbed by the large mammals and perch on top of them to feed on ticks and flies. Other common names for this animal are Western Cattle Egret and Eastern Cattle Egret. Did you know? - The species was first documented in 1758. - Their wingspan can be over 3 feet. - Their wings are about one-third longer than their body. - These birds weigh more than 1 pound. - Their population covers almost 4 million square miles. - The species flew across the Atlantic Ocean in 1877. The white and mostly silent bird lives in warm climates near water in most parts of the world. It develops orange plumes on its upper body, and red on non-feathered areas, during mating season. It is a relative of the heron, but spends more time on dry land. Some populations migrate and others do not. The position of its eyes gives it binocular vision. Some are found in Arctic regions.
It’s back to school time again, and most educators are acutely aware of the potential social emotional needs of students. Last school year was a challenging year for many teachers. Anxiety, social insecurities, inability to focus, and distractions coming from many angles were worse than in pre-pandemic times. How can teachers give students the opportunity to stay present and grounded, feel accepted, and focus on learning? One simple and free way is by using The Imagine Project. Emotional support through writing The Imagine Project is a writing tool that gives kids an opportunity to talk about issues that are bothering them; a difficult life event or a stressful situation they’ve experienced recently or in the past. This is done by having students K-12 write their story using Imagine to begin every sentence. They follow a simple 7-step writing process in a journal format. The journals can be downloaded (for free) at www.theimagineproject.org. The beautiful part of this writing process is in Step 4, where the writer is asked to Imagine a new, more positive version of their story—helping them shift to a positive mindset and giving them the social emotional support to move forward and learn. How to begin Students can begin the first week of school by writing a story about coming back to school—their worries, hopes, and dreams. They can keep an Imagine journal and write in it often, on their own or together in the classroom, particularly when there is an emotional event in their lives, classroom, school, or in the world. Using this process often teaches students a tool they can use whenever difficult life circumstances occur. Social emotional support in the classroom When classrooms do The Imagine Project together and read their stories out loud to each other, empathy and camaraderie are created. Kids hear that they aren’t alone in their experiences, and they feel a sense of relief in telling their story and a sense that they’ve been heard. It’s a remarkable and beautiful process to watch students in a classroom come together and support one another. Relationships are critical for our social emotional health, as is self-expression. The Imagine Project helps promote both of these. Watch teachers and students talk about using The Imagine Project in their classrooms here. When a student is experiencing stress (past or present), it’s difficult for them to make friends, focus, and learn in school. Giving them a simple process (that meets many core standards and can be incorporated into many lesson plans) will support their social emotional needs and growth—something students need now more than ever. To learn more and get started, go to The Imagine Project Getting Started page. If you recognize the value of social emotional support for students as they go back to school and throughout the school year, you will love The Imagine Project! Dianne is the founder and CEO of The Imagine Project, Inc., a nonprofit organization that helps children K-12 (and adults) process and heal from difficult life circumstances through expressive writing. Dianne has her Masters in Psychiatric/Mental Health Nursing, is a thought leader in stress and trauma in children, and has written multiple award-winning books, including The Imagine Project: Empowering Kids to Rise Above Drama, Trauma, and Stress. She is an international speaker, lives in Colorado, and has 3 grown children. Learn more about The Imagine Project at www.theimagineproject.org.
- Let your child see you reading. Children learn from what they observe. If your child sees that you love reading, he or she is likely to follow suit. - Create a reading space. Your reading space can be as simple as a corner of the couch or a chair in your child's room. Picking out a comfortable spot that has good lighting and room to keep some children's books can help your child learn to connect coziness and comfort with reading. - Allow your child to pick his or her own book. Your child will be more interested in listening to the story if it is something he or she has chosen. If the chosen story is too wordy and you do not have the time to read and summarize it, just explain that the book is too difficult for his or her level. If the child insists on having the wordy book, you can instead tell the story from the illustrations. - Get books on topics your child is familiar with. Does your child show a lack of interest in books, yet know "Sofia the First" and "Jake and the Never Land Pirates"? What about Disney Junior? If your kids are familiar with these characters, chances are good that they will read these books or ask you to read with them. It is also good to introduce reader series to pre-schoolers. - Encourage reading alone. As a rule of thumb, choose books that contain no more than 5 new words per page. This ensures that the child does not feel overwhelmed by numerous new words that he or she does not understand. Too many new words make reading less pleasant. - Read wordless books occasionally. I used to think wordless books were not good for children to read, simply because they are wordless. However, a librarian told me that wordless books are good for children because they encourage children to look through the pictures and narrate based on their understanding. Reminds you of an Oral Test or Picture Composition? Yeah... that is how wordless books are supposed to work. I would listen to Little One's narration and correct her when she phrases her sentences wrongly.
Class theme/topics discussed: 1. Warm-up Activity: Number in Your Life - A student says a number which he/she likes, and the other students guess the reason why he/she likes it. 2. Discussion Topic: Thinking about gender roles - filling in the blanks of an article in groups (Which occupations are mostly for women or for men in general?) How did you pick this theme or topic? In the previous lesson, my students were wondering about gender roles in the Anpanman video, so I wanted students to think about this issue more deeply. How did you present the material? (handouts, group work, general discussion, student presentations, etc.) How did students react? It was a tough topic, but they tried to express what they thought in Japanese as much as possible. It was amazing!! Did they engage with each other and you? Yes, they did. What materials or media did you use? (articles, satellite tv, digital projector, etc.) I used an article from a Japanese textbook. Please attach a copy. Would you recommend this activity for a future class? Why or why not? Connecting to the previous class was very useful in helping students remember words for the topic. As for the warm-up activity, they loved guessing the reasons, which drove students to think about lots of questions in Japanese. It was successful!!
The metacharacter \b is an anchor like the caret and the dollar sign. It matches at a position that is called a “word boundary”. This match is zero-length. There are three different positions that qualify as word boundaries: before the first character in the string, if the first character is a word character; after the last character in the string, if the last character is a word character; and between two characters in the string, where one is a word character and the other is not. Simply put: \b allows you to perform a “whole words only” search using a regular expression in the form of \bword\b. A “word character” is a character that can be used to form words. All characters that are not “word characters” are “non-word characters”. Exactly which characters are word characters depends on the regex flavor you’re working with. In most flavors, characters that are matched by the short-hand character class \w are the characters that are treated as word characters by word boundaries. Java is an exception. Java supports Unicode for \b but not for \w. Most flavors, except the ones discussed below, have only one metacharacter that matches both before a word and after a word. This is because any position between characters can never be both at the start and at the end of a word. Using only one operator makes things easier for you. Since digits are considered to be word characters, \b4\b can be used to match a 4 that is not part of a larger number. This regex does not match 44 sheets of a4. So saying “\b matches before and after an alphanumeric sequence” is more exact than saying “before and after a word”. \B is the negated version of \b. \B matches at every position where \b does not. Effectively, \B matches at any position between two word characters as well as at any position between two non-word characters. Let’s see what happens when we apply the regex \bis\b to the string This island is beautiful. The engine starts with the first token \b at the first character T. Since this token is zero-length, the position before the character is inspected. \b matches here, because the T is a word character and the character before it is the void before the start of the string. The engine continues with the next token: the literal i. The engine does not advance to the next character in the string, because the previous regex token was zero-length. i does not match T, so the engine retries the first token at the next character position. \b cannot match at the position between the T and the h. It cannot match between the h and the i either, and neither between the i and the s. The next character in the string is a space. \b matches here because the space is not a word character, and the preceding character is. Again, the engine continues with the i which does not match with the space. Advancing a character and restarting with the first regex token, \b matches between the space and the second i in the string. Continuing, the regex engine finds that i matches i and s matches s. Now, the engine tries to match the second \b at the position before the l. This fails because this position is between two word characters. The engine reverts to the start of the regex and advances one character to the s in island. Again, the \b fails to match and continues to do so until the second space is reached. It matches there, but matching the i fails. But \b matches at the position before the third i in the string. The engine continues, and finds that i matches i and s matches s. The last token in the regex, \b, also matches at the position before the third space in the string because the space is not a word character, and the character before it is. The engine has successfully matched the word is in our string, skipping the two earlier occurrences of the characters i and s.
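Python's re module implements Perl-style word boundaries, so the walkthrough above is easy to verify; a small sketch:

```python
import re

text = "This island is beautiful."
# Without boundaries, "is" also matches inside "This" and "island":
print([m.start() for m in re.finditer(r"is", text)])      # [2, 5, 12]
# With \b...\b, only the standalone word matches:
print([m.start() for m in re.finditer(r"\bis\b", text)])  # [12]
```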
If we had used the regular expression is, it would have matched the is in This. Word boundaries, as described above, are supported by most regular expression flavors. Notable exceptions are the POSIX and XML Schema flavors, which don’t support word boundaries at all. Tcl uses a different syntax. In Tcl, \b matches a backspace character, just like \x08 in most regex flavors (including Tcl’s). \B matches a single backslash character in Tcl, just like \\ in all other regex flavors (and Tcl too). Tcl uses the letter “y” instead of the letter “b” to match word boundaries. \y matches at any word boundary position, while \Y matches at any position that is not a word boundary. These Tcl regex tokens match exactly the same as \b and \B in Perl-style regex flavors. They don’t discriminate between the start and the end of a word. Tcl has two more word boundary tokens that do discriminate between the start and end of a word. \m matches only at the start of a word. That is, it matches at any position that has a non-word character to the left of it, and a word character to the right of it. It also matches at the start of the string if the first character in the string is a word character. \M matches only at the end of a word. It matches at any position that has a word character to the left of it, and a non-word character to the right of it. It also matches at the end of the string if the last character in the string is a word character. The only regex engine that supports Tcl-style word boundaries (besides Tcl itself) is the JGsoft engine. In PowerGREP and EditPad Pro, \b and \B are Perl-style word boundaries, while \y, \Y, \m and \M are Tcl-style word boundaries. In most situations, the lack of \m and \M tokens is not a problem. \yword\y finds “whole words only” occurrences of “word” just like \mword\M would. \Mword\m could never match anywhere, since \M never matches at a position followed by a word character, and \m never at a position preceded by one. If your regular expression needs to match characters before or after \y, you can easily specify in the regex whether these characters should be word characters or non-word characters. If you want to match any word, \y\w+\y gives the same result as \m.+\M. Using \w instead of the dot automatically restricts the first \y to the start of a word, and the second \y to the end of a word. Note that \y.+\y would not work. This regex matches each word, and also each sequence of non-word characters between the words in your subject string. That said, if your flavor supports \m and \M, the regex engine could apply \m\w+\M slightly faster than \y\w+\y, depending on its internal optimizations. If your regex flavor supports lookahead and lookbehind, you can use (?<!\w)(?=\w) to emulate Tcl’s \m and (?<=\w)(?!\w) to emulate \M. Though quite a bit more verbose, these lookaround constructs match exactly the same as Tcl’s word boundaries. If your flavor has lookahead but not lookbehind, and also has Perl-style word boundaries, you can use \b(?=\w) to emulate Tcl’s \m and \b(?!\w) to emulate \M. \b matches at the start or end of a word, and the lookahead checks if the next character is part of a word or not. If it is we’re at the start of a word. Otherwise, we’re at the end of a word. The GNU extensions to POSIX regular expressions add support for the \b and \B word boundaries, as described above. GNU also uses its own syntax for start-of-word and end-of-word boundaries. \< matches at the start of a word, like Tcl’s \m. \> matches at the end of a word, like Tcl’s \M. 
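The lookaround emulation described above can be tried in any flavor with lookbehind; here is a quick check in Python's re (which has \b natively, so the emulated tokens are shown purely for illustration):

```python
import re

text = "word, words; sword"
start_of_word = r"(?<!\w)(?=\w)"  # emulates Tcl's \m
end_of_word = r"(?<=\w)(?!\w)"    # emulates Tcl's \M

print([m.start() for m in re.finditer(start_of_word, text)])  # [0, 6, 13]
print([m.start() for m in re.finditer(end_of_word, text)])    # [4, 11, 18]
# "Whole words only", built from the emulated anchors:
print(re.findall(r"(?<!\w)word(?!\w)", text))                 # ['word']
```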
Boost also treats \< and \> as word boundaries when using the ECMAScript, extended, egrep, or awk grammar. The POSIX standard defines [[:<:]] as a start-of-word boundary, and [[:>:]] as an end-of-word boundary. Though the syntax is borrowed from POSIX bracket expressions, these tokens are word boundaries that have nothing to do with and cannot be used inside character classes. Tcl and GNU also support POSIX word boundaries. PCRE supports POSIX word boundaries starting with version 8.34. Boost supports them in all its grammars.
A reactor is essentially an inductor that limits the rate of change of current in a circuit. Reactor manufacturers offer several types, such as shunt reactors and smoothing reactors, but the line reactor is one of the simplest: it is connected in series to damp current spikes and limit peak currents. Here are some of the top reasons for using a line reactor in a circuit.

Used for buffering

In a circuit, elements like switchgear, contactors, and disconnectors cause line transients when inductive loads, such as motors, are switched off. This can produce a voltage spike at the input to the drive, which in turn produces a surge of current at the input. If the voltage spike is high enough, semiconductor devices may fail or be damaged, as they are very sensitive to current surges. Therefore, a line reactor is sometimes used at the input to buffer the drive from the line. It does not fix grounding issues, nor does it provide isolation, but it provides enough buffering that protection devices have time to react safely, reducing the chance of damage.

Used to reduce harmonics

Most six-pulse drives are nonlinear loads: they tend to draw current only at the positive and negative peaks of the line. The resulting current waveform is not sinusoidal, which means it contains harmonics. In such situations, a line reactor can be installed so that the current peaks are reduced and somewhat broadened. This makes the current more sinusoidal because the harmonic content is reduced, an effect that also benefits the DC filter capacitors.

Used to increase load inductance

Installing a reactor at the output of the drive is sometimes essential. If the motor has low leakage inductance, a line reactor helps bring the total load inductance back up to a level the drive can handle without issue. In some cases, where an unusual motor configuration is used or a motor with six or more poles is installed, the motor inductance may be very low, and a line reactor becomes necessary. If more than one motor is driven, you will also need a line reactor at the output.

Used to reduce the effect of reflected waves

Sometimes a reactor is installed at the drive's output to tame the reflected voltage spike that occurs when long motor leads are needed. In these conditions, a reactor slows the rate of rise of the voltage, which is beneficial, but it does not limit the peak voltage at the motor. If a reactor is installed at the output of a drive, it is most probably part of a specially designed reflected-wave reduction device that also has damping resistors in parallel. When installed at the output, a reactor must be positioned as close to the drive as possible.

If you need a line reactor, or want a reactor designed specifically for your application and to your customized specifications, contact a reputed and experienced reactor manufacturer to save money and time and obtain a high-quality reactor.
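As a rough illustration of the "rate of change" idea above (a sketch added for clarity, not taken from the original article): for an ideal inductor of inductance L, the voltage-current relationship is

$$ v(t) = L\,\frac{di}{dt} \quad\Longrightarrow\quad \frac{di}{dt} = \frac{v(t)}{L} $$

so for a given transient voltage v(t), a larger series inductance L forces a slower rise in current, which is exactly what gives the protection devices time to react.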
In 2001, the U.N. Intergovernmental Panel on Climate Change (IPCC) featured a graph of Northern Hemisphere temperature history from a 1999 study by Profs. Michael Mann, Raymond Bradley, and Malcolm Hughes. Because of its shape, the graph became known as the "hockey stick." From A.D. 1000 to about 1915, the graph depicts a gradual decline in Northern Hemisphere temperatures (the hockey stick handle), followed by an abrupt upturn in hemispheric temperatures during the remainder of the 20th century (the blade). The graph appears in the IPCC 2001 report's Summary for Policymakers, Technical Summary, and chapter 2 on Observed Climate Variability and Change.

Based on the Mann-Bradley-Hughes (MBH) study, the IPCC famously concluded that "The 1990s are likely to have been the warmest decade of the millennium in the Northern Hemisphere and 1998 is likely to have been the warmest year" (chapter 2, p. 102). The IPCC also asserted that "Evidence does not support the existence of globally synchronous periods of cooling or warming associated with the 'Little Ice Age' and 'Medieval Warm Period'."

The hockey stick instantly became the poster child for pro-Kyoto advocacy, touted as seeing-is-believing evidence that late 20th century warmth was unprecedented during the past 1,000 years, and that mankind's fuelish ways must be to blame. Soon after its PR boost from the IPCC, the hockey stick became embroiled in a controversy that persists to this day. Books both pro and con have been written on the subject. Two leading critics, mining consultant Steve McIntyre and economist Ross McKitrick, argued that MBH's computer program generates hockey stick-shaped graphs from random data. As for the IPCC's dismissal of the Medieval Warm Period as a European phenomenon, the Center for the Study of Carbon Dioxide and Global Change maintains a large and growing archive of studies indicating that the Medieval Warm Period was global and/or warmer than recent decades.

A recent study published in Nature Climate Change further undermines the credibility of the hockey stick. The study, "Orbital forcing of tree-ring data," by Jan Esper of Johannes Gutenberg University in Germany, and colleagues from Germany, Switzerland, Finland, and Scotland, used X-rays to measure changes in the cell-wall density of trees in Northern Finland over the past 2,000 years. The analysis examined both "living and subfossil pine (Pinus sylvestris) trees from 14 lakes and 3 lakeshore sites." The researchers argue that "X-ray densitometry" enables a more accurate reconstruction of climate history than does analyzing the width of tree rings, the principal data used by MBH.

For example, MBH found a "divergence," starting in 1960, between a decline in Northern Hemisphere temperatures, as reconstructed from tree ring data, and the increase in Northern Hemisphere temperatures, as measured by thermometers and other heat-sensing instruments. The divergence raises the question of how MBH can be so sure the Medieval Warm Period was tiny or non-existent when their proxy data fail to reflect the instrument-measured warmth of recent decades. To give the hockey stick its dangerous-looking blade, MBH had to "hide the decline." In contrast, the Esper team found no divergence between instrumental data and temperatures inferred from density analysis of living trees in the study area. So what's the upshot?
Their reconstruction "shows a succession of warm and cold episodes including peak warmth during Roman and Medieval times alternating with severe cool conditions centred in the fourth and fourteenth centuries." The warmest 30-year period was A.D. 21-50, which was 1.05°C warmer than the mean temperature for 1951-1980 and ~0.5°C warmer than the region's maximum 20th century warmth, which occurred during 1921-1950.

[Chart: Source: Esper et al. 2012 (extracted by CO2.Science.Org)]

The reconstruction also "reveals a long-term cooling trend of -0.31°C per 1,000 years (±0.03°C) over the 138 B.C.-A.D. 1900 period . . ." This trend is not reflected in tree ring width data from "the same temperature-sensitive trees." Thus, reliance on such data (as in the hockey stick reconstruction) "probably causes an underestimation of historic temperatures."

The authors write in a politic manner. Although they reference the MBH study, they do not directly criticize it or mention the hockey stick by name. They do not claim their reconstruction is definitive. However, they do argue that the reconstruction reflects long-term changes in "orbital configurations" that have continually reduced Northern Hemisphere summer "insolation" (solar irradiance) over the past two millennia. If so, then we should expect densitometry analysis of trees in other parts of the Northern Hemisphere to produce similar results.

Climate alarm skeptics will be pleased to see in the chart above evidence that the Roman Warm Period and Medieval Warm Period were warmer than the late 20th century. On the other hand, they may not be pleased by an apparent implication of the study. If Northern Hemisphere temperatures have been in an overall cooling trend for two millennia due to "orbital forcing" (i.e., reduced solar irradiance), then the burden of proof becomes greater on those who attribute the warmth of recent decades to solar variability rather than rising greenhouse gas concentrations.
A xerophyte is a species of plant that has adapted to survive in an environment with little available water, such as a desert or a snow-covered region. Cylindropuntia imbricata, more commonly known as the Cane Cholla cactus, is an example. The plant is often found in the Southwestern United States (in states such as Texas, Oklahoma, and Arizona) and northern Mexico, where the climate is often very warm and dry. Because of this heat and the lack of rainfall the Cane Cholla receives, it has had to adapt to survive.

These adaptations include having roots close to the soil surface, so water is collected quickly and easily by the roots and stored in thick, expandable stems for the long summer drought. When water is no longer available in the summer, the Cane Cholla drops its leaves and becomes dormant (asleep). However, the cactus continues to photosynthesize through its green stems; it has fixed spines instead of leaves, which minimizes surface area and therefore reduces water loss by transpiration. The spines also protect the cactus from animals that might eat it.

These green stems produce the plant's food but lose less water than leaves would, because of their sunken pores and a waxy coating on the surface of the stem. The pores close during the heat of the day and open at night to release a small amount of moisture, functioning like stomata. The dense network of spines also shades the stems, keeping them cooler than the surrounding air, which helps reduce the amount of water the cactus needs. In addition, the Cane Cholla leans to the south, which lets it avoid the drying sun as much as possible.

However, the Cane Cholla pays a price for these water-saving adaptations: slow growth. Growth may be as little as 1/4 inch per year, and most young sprouts may never reach maturity.
The American Psychological Association (APA) and the Modern Language Association (MLA) have each set forth style guides, which represent the two leading formatting styles used for formal citation. APA formatting is used to guide writing in fields such as business, nursing, social work, psychology, criminology, and sociology, while MLA formatting is generally used for academic writing in settings such as high schools, universities, and graduate programs. There are many differences in format requirements between the two styles.

Parenthetical citation is the citation that appears within a work to give credit to the original source. In MLA style, the format requires the author's name and the page number where the cited piece can be found. In addition to these two pieces of information, APA style also requires the date of publication.

Authors and Editors

MLA style requires that the full names of authors and editors be written on a Works Cited or Bibliography page. However, if there are more than three authors or editors, MLA requires only the first three to be included and the others to be referred to as "et al." APA style requires the full last name and only the first initial, but it also requires that all authors and editors be listed in this format no matter how many there are.

MLA style requires that the first letter of every major word in the title of a cited piece be capitalized. In contrast, APA style only requires that the first letter of the first word of the title be capitalized.

Publisher and Publication Location

When referring to the publisher of a cited work, MLA style allows for an abbreviated version of the publisher's name. APA, on the other hand, requires that the full name of the publisher be spelled out in the citation. When citing the location of publication, MLA requires the name of the city, while APA asks for an abbreviated version of the state name when the city of publication is an obscure one.

A Works Cited or Bibliography list in MLA style must be formatted such that the first line of each entry is flush with the left margin of the page and all subsequent lines in the same entry are indented. APA style is the opposite: it requires the first line of each entry to be indented and subsequent lines of the same entry to be flush with the left margin of the page.

MLA format specifies that the first page number of a citation should be listed and subsequent pages referred to with a + sign. APA format requires that the first and each additional page of the cited resource be listed.

The date of publication is cited at the end of the reference in MLA format. However, in APA style, the date is listed immediately following the name of the author.
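To illustrate the parenthetical-citation difference described above (a hypothetical author and page, not an official style-guide example):

```
MLA: Writing has always been a social act (Garcia 42).
APA: Writing has always been a social act (Garcia, 2019, p. 42).
```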
From the time of the Reformation, scholars and philosophers in Europe debated important issues of theology and faith. The Reformation made a big impact on society because it explored views beyond those of the Catholic Church. One of the core beliefs of the Reformation was that Scripture should be the guide for one's life. Central to that belief is that the final interpretation of Scripture is one's own, not a church father's or anyone else's. Still another core belief of Protestantism is that God's covenant (that is, the mutual promises between God and one who believes in him) is based on the consent of that individual.

Diverse Protestant groups were gaining membership as various groups tried to define what they felt was the truest form of Christianity. One of these groups was the Puritans. A group of radical Puritans who called themselves "Pilgrims" came to the New World to begin a radical new society that revolved around their beliefs. In fact, groups seeking to promote their own interpretation of Christian morality started all of the first settlements in New England. There were some differences, but basic Protestant convictions guided behavior in most colonies and formed the glue for early colonial society.

By the beginning of the 18th century, however, this fiery religious spirit died down. The descendants of the initial settlers began to generate wealth. Communities flourished, and the sense of distinct religious identity diminished. Christian belief was under attack as more rational thinking became popular. In Europe, the Enlightenment was flourishing, and a belief called Deism was becoming popular. Deism is a religious philosophy holding that a Supreme Being created the universe, but that any religious truth can be proven through reason. It focuses on the observation of the natural world, without the need for faith or organized religion. Beliefs about religion were starting to change again.

Then came the "Great Awakening." The First Great Awakening was a period when spirituality and religious devotion were revived. This feeling swept through the American colonies between the 1730s and 1770s. The revival of Protestant beliefs was part of a much broader movement taking place in England, Scotland, and Germany at that time. Many different preachers spread the message that being truly religious meant repenting (confessing sins) and devoting oneself to God. The movement was popular in Europe, but even more popular in the American colonies. Tens of thousands of non-religious colonists were converted to Protestant beliefs. This had a huge impact on church attendance, homes, workplaces, entertainment, and colleges.

In New England, Reverend Jonathan Edwards preached about the need to repent and be converted. People flocked to listen to him, and many consider Edwards to be America's most important and original theologian. He was also a major leader in colonial life. Edwards went on to become the third president of Princeton University. Early graduates of Princeton were important leaders, including James Madison and Aaron Burr, who famously dueled Alexander Hamilton and was Thomas Jefferson's vice president.

In the Middle Colonies, Gilbert Tennent was an early leader in the Awakening. His father William was a Presbyterian minister who started the famous religious school known as the Log College. Their graduates would help to develop Princeton. Interestingly, the Awakening was a reaction against rationalism, but it also led to the founding of a number of colleges.
Many universities other than Princeton were founded then, including Brown, Dartmouth, and Rutgers.

George Whitefield, however, was the most electrifying figure of the era. Whitefield was an Anglican priest who lost favor in England. After he was expelled from preaching there, he began to preach in farmers' fields to crowds of thousands. When he came to the American colonies, he brought the same energy with him. Whitefield was an actor by training, and he delivered emotional sermons in which he would shout, weep, and tremble as he spoke of God. Colonists gathered by the thousands to hear him speak. He travelled from New England to Georgia and is considered by many to be the founder of the evangelical movement in America. He was so loved by his followers that during the Revolutionary War, his body was dug up so that soldiers could take scraps of his clothing. They believed that if they did this, God would watch over them. Whitefield converted slaves and some Native Americans to Christianity. Even Benjamin Franklin, who was a religious skeptic, became Whitefield's good friend and printed many of his sermons. Franklin once emptied his coin purse after hearing him speak in Philadelphia.

The First Great Awakening divided many American colonists. On the one hand, it was an experience that created unity between the colonies. It led to a shared awareness of being American because it was the first major, "national" event that all the colonies experienced. On the other hand, it also caused division between New Lights, who embraced it, and Old Lights, who preferred old-fashioned ways. It also split the Presbyterian denomination in half. Because there were conflicts and divisions, the movement was in decline by the mid-1740s. Fortunately, the more unifying effects remained for decades. And despite the conflict, one surprising result was greater religious tolerance. With so many new denominations, it was clear that no one religion would dominate any region.

The spirit of the First Great Awakening helped to encourage the Revolutionary spirit. Many things had changed, and many powers shifted. Before, ministers were almost treated like aristocrats. Most new ministers connected with common people. They were not always ordained, and sometimes showed less (not more) respect for those above their social class. Most of all, the new denominations of Christianity were much more democratic. The overall message was one of greater equality.

So the First Great Awakening paved the way for independence and the Constitution. Speaking about spiritual equality encouraged colonists to think more about the need for democracy in both church and state. The Reformation principle that God's covenant with his church was based on voluntary consent, and that his covenant is a participatory relationship, would be expanded into political philosophy and general feelings about authority. As Locke and other great thinkers of this era had suggested, the people came to be seen as the leaders, not the monarchs and aristocrats. These ideas would take time to take hold, and tolerance would continue to be a challenge. But soon enough, the Founding Fathers would be able to put this all in writing, and the war would rage.

"The Great Awakening" went viral... People like Whitefield and Edwards were like rock stars. Why do you think the Great Awakening went viral? Studying the Great Awakening has put many students to sleep. Give two good reasons why they might want to stay awake!
Three weeks after general alarm over an unruly mob that threw stones, England officially suspended habeas corpus, the shorthand term for a person's right to appeal imprisonment. The term came from the phrase "habeas corpus ad subjiciendum" (produce or have the person to be subjected to [examination]), used in 14th century documents that ordered state-run prisons to answer to the court system. Habeas corpus describes the principle that even state powers have to justify their actions; you can't just lock a person up and throw away the key. In late February 1817, the government of England said it would do exactly that for as long as it wanted to.

By May of 1817, William Hone was in prison, following the publication of his parodies ridiculing the Prince Regent (later to be George IV). His parodies, based on well-known Church of England texts, used the format and key words of everyday religious ritual to point out just how irreligious and immoral the Prince Regent was: his lavish lifestyle and indulgences, for instance, set against the general population's struggles with high taxes and poverty.

[Image: The Prince Regent, as illustrated by James Gillray (1792)]

Hone was finally given an opportunity to defend himself in court in December, although in this case the court hardly acted like a separate branch of government. Hone, representing himself, battled a biased judge and old customs that had effectively made the jury a group of official yes-men. These challenges had to be met well before he could answer the actual charges against him; he couldn't hope to win his trials with a jury stacked against him. Because there were three pamphlets the government objected to, Hone had to defend himself at three separate trials.

Hone's presentations, in large part, consisted of a history of parody, including specific examples and their strategies. One major point was that the original work itself is often not the target of the parody; sometimes the target is a political figure or others who have come to be fair game. Since the line between religion and government was still rather blurred in 19th century England, this was a major distinction. Hone wasn't making fun of the religious texts. He was criticizing the excesses of the Prince who sat upon the throne. Hone was acquitted by the jury in all three trials.
Help kids learn about the moon with this free printable moon phases mini book. It is a great pocket guide to take along while observing the solar system with a telescope or camping.

More Science for Kids

Get kids excited about science with these fun, creative, unique, and AMAZING science activities for kids!

- How Diapers Work Science Experiment
- Amazing Air Pressure Science Experiment
- Simple Charged Atoms Science Experiment for Kids
- Colorful Capillary Action Science Project
- Beautiful Spring Chromatography Science Craft
- Learn about density with this clever Water Balloon Science Experiment
- Amaze your kids with this simple-to-make DIY Lava Lamp

FREE Moon Phases Mini Book

Kids will have fun learning about the different phases of the moon with this free printable book. This Moon Phases for Kids activity includes several free printables and hands-on activities to make learning about our solar system fun!

Moon Phases for Kids

We like to use this moon phases printable with our Oreo Moon Phases activity! Cookies make everything more fun! Make sure to grab this free printable worksheet to go along with a hands-on Oreo Moon Phases Activity that kids of all ages will LOVE! If you have younger kids, you may also enjoy this Solar System Pack with fun space-themed learning activities for toddler, preschool, kindergarten, and 1st & 2nd grade.

Download Moon Phases Mini Book

Before you download your free pack, you agree to the following:

- This set is for personal and classroom use only.
- This printable set may not be sold, hosted, reproduced, or stored on any other website or electronic retrieval system.
- Graphics purchased and used with permission from ScrappinDoodles (License #94836) and Dancing Crayon Designs.
- All downloadable material provided on this blog is copyright protected.
Hubble's new view of the Carina Nebula shows the process of star birth at a new level of detail. The bizarre landscape of the nebula is sculpted by the action of outflowing winds and scorching ultraviolet radiation from the monster stars that inhabit this inferno. These stars are shredding the surrounding material, the last vestige of the giant cloud from which they were born.

This immense nebula contains a dozen or more brilliant stars that are estimated to be at least 50 to 100 times the mass of our Sun. The most opulent is the star Eta Carinae, seen at far left. Eta Carinae is in the final stages of its brief eruptive lifespan, as shown by two billowing lobes of gas and dust that presage its upcoming explosion as a titanic supernova.

The fireworks in the Carina region started three million years ago when the nebula's first generation of newborn stars condensed and ignited in the middle of a huge cloud of cold molecular hydrogen. Radiation from these stars carved out an expanding bubble of hot gas. The island-like clumps of dark clouds scattered across the nebula are nodules of dust and gas that have so far resisted being eaten away by photoionisation. The hurricane-strength blast of stellar winds and blistering ultraviolet radiation within the cavity is now compressing the surrounding walls of cold hydrogen. This is triggering a second stage of new star formation. Our Sun and Solar System may have been born inside such a cosmic crucible 4.6 billion years ago.

In looking at the Carina Nebula, we are seeing star formation as it commonly occurs along the dense spiral arms of a galaxy. The nebula is an estimated 7,500 light-years away in the southern constellation Carina, the Keel of the old southern constellation Argo Navis, the ship of Jason and the Argonauts from Greek mythology.

This image is an immense (29,566 x 14,321 pixels) mosaic of the Carina Nebula assembled from 48 frames taken with Hubble's Advanced Camera for Surveys. The Hubble images were taken in the light of neutral hydrogen. Colour information was added with data taken at the Cerro Tololo Inter-American Observatory in Chile. Red corresponds to sulphur, green to hydrogen, and blue to oxygen emission.

Source: Hubble Information Centre
Where is the parathyroid, and what does it do?

The parathyroid glands are located in the neck, on the thyroid gland. Most people have four pea-sized, oval-shaped parathyroid glands. Endocrine glands, such as the thyroid and parathyroid, secrete hormones, which are natural chemicals that regulate body functions. The job of the parathyroid is to secrete parathyroid hormone, which helps regulate how the body uses calcium. Calcium is needed by cells in many parts of the body: the brain, heart, nerves, bones, and digestive system. Parathyroid hormone satisfies these needs by taking calcium from bone, where it is stored, and releasing it into the bloodstream. "Communication" between the parathyroid and the bloodstream helps keep calcium at its normal level.

What is a parathyroid adenoma?

Sometimes, benign (noncancerous) growths called adenomas appear on one or more of a person's parathyroid glands. The cause of most parathyroid adenomas is unknown. However, about 10 percent are thought to be hereditary. Radiation exposure of the head and neck also may increase the risk of adenomas (as in the people who were exposed to the atomic bomb in Hiroshima). Adenomas cause the parathyroid gland to make more parathyroid hormone than the body needs, a condition called primary hyperparathyroidism. Too much parathyroid hormone upsets the body's normal calcium balance, which increases the amount of calcium in the bloodstream. A similar but less common condition, called secondary hyperparathyroidism, can occur in people with chronic kidney failure. Women are twice as likely as men to develop parathyroid adenomas, most often after menopause. Primary hyperparathyroidism may be caused by one adenoma, more than one adenoma (hyperplasia), or cancer (which is very rare).

How are parathyroid adenomas diagnosed?

Too much calcium in the blood (hypercalcemia) may not cause any symptoms at all, or it can cause a number of symptoms and medical conditions. These include:

- Depression or mental confusion
- Kidney stones
- Bone and joint pain
- Abdominal pain
- General aches and pains with no obvious cause

Parathyroid adenomas are usually discovered when a higher-than-normal calcium level shows up in a routine blood test, particularly in people without symptoms. Doctors then confirm the diagnosis of primary hyperparathyroidism with a test that shows parathyroid hormone levels in the blood are higher than normal. Patients with parathyroid cancer have symptoms including:

- Bone pain
- Kidney disease
- Extremely high levels of parathyroid hormone in the blood
- Neck masses that can be felt with the hand

Sometimes the diagnosis of cancer is difficult to make, even after surgery. This is because parathyroid cancer cells look very similar to noncancerous adenoma cells. However, parathyroid cancer is so rare (less than 1 percent of all cases) that many head and neck surgeons (otolaryngologists) never see a patient with it.

How are parathyroid adenomas treated?

The most common treatment is to remove the enlarged gland (or glands). This surgery cures the problem 95 percent of the time. Instead of surgery, some people with mild or no symptoms of primary hyperparathyroidism may decide to try hormone replacement therapy or medication options. These and other treatments do not reduce the extra amount of parathyroid hormone in the blood. Instead, they fight back by preventing the loss of calcium from bone. Hormone replacement therapy or other treatments for this condition must be taken for the rest of your life.
A prescription medication called cinacalcet (Sensipar®) reduces both calcium and parathyroid hormone levels in people with chronic kidney failure (secondary hyperparathyroidism). However, its use in people with primary hyperparathyroidism is still being studied.

If I do not have symptoms, do I need surgery?

Surgery is the usual treatment for parathyroid adenoma, even for people who do not have obvious symptoms. Some people with only mild symptoms, such as feeling tired, forgetful, or depressed, may actually have another medical condition. When primary hyperparathyroidism is the cause, surgery can improve these symptoms and the person's quality of life.

If you have hyperparathyroidism, a bone density test can show whether the raised level of parathyroid hormone in your blood is causing serious calcium loss. Calcium loss can cause osteoporosis (thin bones) and increase the risk for fractures. X-rays of the kidney can show if you have kidney stones. The most common cause of kidney stones is too much calcium in your blood. These and other tests will help you and your doctor decide if surgery is appropriate for you.

If I decide to have surgery, what should I expect?

A procedure called minimally invasive parathyroidectomy has become widely accepted for removing enlarged parathyroid glands on one side of the neck. The benefits of this surgery include smaller incisions, shorter operations, and fewer complications compared with traditional two-sided (bilateral) open-neck surgeries.

Several weeks before surgery, the surgeon will order tests to locate your one or more overactive parathyroid glands. These tests may include:

- An ultrasound of the neck
- A scan that uses a drug called 99mTc-sestamibi

If the results of these tests do not locate the adenoma, other scans may be ordered, including:

- Computed tomography (CT)
- Magnetic resonance imaging (MRI)

These presurgical imaging tests are quite accurate but not foolproof, and interpreting them requires skill and experience. If the results are still unclear, the surgeon might order more imaging tests to help pinpoint the location of a parathyroid adenoma. The surgeon plans the surgical approach based on how many adenomas are found and where they are located.

What happens during surgery to remove parathyroid adenomas?

Minimally invasive surgery can be done if the patient has only one adenoma, or two adenomas on the same side of the neck. Anesthesia may be local or general, depending on the surgeon's judgment and the patient's preference. On the day of surgery, patients sometimes receive another low dose of 99mTc-sestamibi to guide the surgeon's incisions. An IV line is placed to measure the amount of parathyroid hormone in the patient's blood before and after the surgeon removes the affected glands. When the enlarged glands are highlighted by 99mTc-sestamibi, the surgeon removes them through an incision of approximately 2 cm (less than 1 inch).

Parathyroid hormone levels drop dramatically within 10 to 20 minutes after the surgeon successfully removes the glands with adenomas. If parathyroid hormone levels do not drop after the targeted glands are removed, the surgeon may switch to open-neck surgery to look for other adenomas. The open-neck method is used instead of minimally invasive surgery for patients with adenomas on both sides of the neck, or when preoperative imaging fails to locate one or more adenomas. The cure rates for open-neck and minimally invasive surgeries are similar.

What are the risks of having surgery?
All surgeries have risks. With parathyroid surgery, some patients experience:

- Hoarseness from paralysis of the voice box, caused by damage to the nerve of the voice box; hoarseness is permanent in about 3.5 percent of patients.
- Short-term or permanent low calcium levels in the blood (hypocalcemia).

To reduce these risks:

- A device can be used to monitor the nerve's location during surgery.
- Hypocalcemia can be treated with calcium and vitamin D supplements, or by leaving at least part of one parathyroid gland in the neck.
- Careful control of bleeding during surgery can reduce the risk of blood clots developing in the neck.
Did You Know? Coyotes are such excellent swimmers that they have populated the Elizabeth Islands of Massachusetts by swimming from the mainland.

The coyote is a medium-sized animal that was once restricted to the plains of the American Northwest. However, rising human interference in nature, especially after the end of the nineteenth century, ensured both the decline of gray wolves (the coyote's chief rival) and the easy availability of food scraps in trash bins. Versatile animals that they are, coyotes soon took advantage of the situation and spread out to cover the entire United States, with the exception of Hawaii. While coyotes may appear heavy due to their thick fur, most individuals weigh only between 30 and 40 pounds. They are known for their resemblance to dogs, wolves, and foxes, which is justified, considering that they all share the same family. However, the coyote's rising population in the country has not gone down well with many people, especially farmers, who blame it for killing their livestock. Besides sightings, the presence of a coyote in an area can be confirmed by identifying other signs it leaves behind, such as its scat. Let us see how to identify coyote droppings.

► Coyote droppings are large, tubular, and resemble a twisted rope with several segments.

► The droppings are between ¾ and 1½ inches in diameter, and between 3 and 5 inches in length. The droppings of males are larger than those of females, and those of big males may even reach lengths between 6 and 12 inches.

► An important characteristic of coyote droppings is that they have long, curly, tapering ends.

► The droppings contain a mixture of animal and plant matter. They may contain insect casings, feathers, fur, and bones from small mammals like rats, mice, shrews, and other rodents, apart from rabbits and carrion. The scat will also contain plant matter like fruits, berries, seeds, nuts, and grass, which is consumed to remove fur and intestinal worms lodged in the digestive system.

► The color of coyote scat varies with diet and with the time elapsed since defecation. A meat-rich diet results in dark-red to black scat, thanks to its blood content. Ordinarily, droppings range from dark gray to brown in color. With time, they tend to get bleached and become lighter in color due to exposure to the elements.

► Coyote feces lack odor in most cases, though sometimes they have a musty smell.

► The droppings mostly consist of two or three pieces, though more are possible, as coyotes are known to use communal latrines.

► In the summer, the scat contains more insect and berry remnants, making it small, breakable, and bright-colored.

► It may be formless or semi-liquid if the animal's diet consists solely of meat.

► During the winter, the scat contains a higher proportion of fur and bone bits, because this is the period when coyotes consume more meat. Droppings are also larger in the winter.

► Coyote feces often get confused with those of dogs and wolves, an understandable issue between animals belonging to the same family. The points of difference between them are:

- Coyote droppings show a variety of constituents, have tapering ends, fewer segments, and no odor; dog droppings are more homogeneous (with cereal content), lack tapering ends, are more segmented, and have a repulsive odor.
- Coyote droppings are smoother, have diameters smaller than 1 inch, and show remnants of smaller mammals; wolf droppings are larger and show remnants of bigger mammals like deer and beavers.
► Coyote droppings are mostly found on trails, ridges, crossroads, and on clumps of vegetation or rocks. If the trail is located near a slope, droppings will usually be deposited at the bottom of the slope or at its topmost point.

► It is believed that coyotes use their droppings and urine to mark their territory and warn rivals against intruding. This is why their scat is usually found on trails where the most animals are likely to pass by. Another telltale sign is scratch marks on the ground near their droppings.

Approaching coyote droppings is quite safe, and wildlife experts even routinely handle them for closer inspection. However, adequate care must be taken when observing or handling the droppings of any animal, as they may contain infectious microbes, some of which are even released as airborne particles that can be inhaled. So, the next time you see coyote poop in your yard, make sure to put on a pair of protective gloves and a respirator before handling it.
Grammar is way more than simple definitions or rules. It's important that you equip your students to correctly and effectively use grammar in their written and spoken English, and today we're going to tackle pronouns. Pronouns can be found in all sorts of writing. They're used in fiction, non-fiction, personal letters, conversation, and so much more. Since they're so widely used, it's important for both you and your students to be familiar with how, when, and why to use pronouns in the English language.

What is a Pronoun?

Defined: A pronoun is simply a word that replaces or is used as a substitute for a noun. The most used pronouns are I, You, He, She, We, and They. However, there are many different types of pronouns, which we'll discuss below.

Q & A: Each of the 8 parts of speech is used to answer a question, and pronouns are no different. Because pronouns simply replace nouns, they answer the same questions: Who or What. When trying to identify a pronoun in a sentence, simply ask 'who' or 'what' performed the action, and you'll be sure to find either a noun or a pronoun. This same technique can be used within a larger section of text in order to identify 'who' or 'what' the pronoun is replacing and pointing back to. For example, take a look at the sentences below:

Sarah is going to adopt a dog. She wants a dog that is good with kids.

In the second sentence we can ask who wants a dog that is good with kids, and know that 'she' is the one performing the action. However, we can also point back to the previous sentence to know that 'Sarah' is the noun that 'she' refers to.

(I, You, He, She, We, They) am/are going to the park.
Did Lin see (Me, You, Him, Her, Us, Them)?
Kristy is the girl (Who, That) Hank is bringing to prom.
I spilled my coffee, which is why I'm late.

Types of Pronouns

Just as there are many types of nouns, verbs, adverbs, and adjectives, there are also many types of pronouns. If you're teaching a beginner-level pronoun lesson, this information may be too much for your beginner-level learners. However, it's good information for you, as the teacher, to be familiar with, if nothing else because it shows us how, when, and why to use certain pronouns in certain situations.

A subject pronoun is simply a pronoun that can be the subject of a sentence, such as I, You, He, She, It, We, and They. It can be easy for your students to confuse 'he' and 'she' in a sentence, so be sure you go over the definitions of each of these pronouns and when to use them. The meaning of 'they' should also be addressed, as it can be used to refer to a group of people or an individual. Another common error made with subject pronouns is to forget the subject pronoun altogether, for example, 'I love Brazil. Is great country.' Make sure you stress the importance of a subject pronoun to refer back to the previous sentence or statement.

She is my sister.
They are going to Oregon tomorrow.
When will we leave for the party?

An object pronoun can be the direct object, indirect object, or object of a preposition, such as Me, You, Him, Her, It, Us, and Them. Students can very easily use a subject pronoun in an object slot, such as in 'These shirts are for they.' It can be helpful to teach subject and object pronouns together, or at least one after the other, in order to address this potential error.

These shirts are for her.
How many times did I tell her?
The gift is for us.
A relative pronoun connects a clause or phrase to the rest of the sentence, such as Who, That, Which, and Whom. Be careful that your students don't use 'what' instead of 'that' in an adjective clause (I want to borrow the book what you bought last week). Point out that three of the four relative pronouns are wh- words, while the last begins with th-.

I really wanted a cookie, which is why I went to the store.
Chris, who is buying his first house, is looking for new furniture.
Which dog did you want?

An indefinite pronoun doesn't refer to a specific person or thing; it refers to a general group, such as Anyone/thing/body, Everyone/thing/body, Someone/thing/body, and No one/thing/body. Students often confuse these pronouns and/or assume that they are plural. However, just as with the word 'family,' these nouns are grammatically singular. If you're addressing indefinite pronouns, which I don't recommend for beginner- or even lower-intermediate-level learners, be sure to carefully define each of them. Start by defining the prefix before moving on to the suffix.

Everyone is invited tonight!
Someone stole my keys.
I don't know anyone who enjoys collard greens.

A reflexive pronoun is used when a word refers back to the same subject, such as Myself, Yourself, Himself, Herself, Itself, Ourselves, Yourselves, and Themselves. Confusing reflexive pronouns and reciprocal pronouns (below) is an easy mistake. Be sure to stress that reflexive pronouns always point back to the subject, while reciprocal pronouns point to a different subject.

I had to remind myself to plant those flowers.
We went to the park by ourselves.

A demonstrative pronoun stands in place of a specific thing or person, such as This, That, These, and Those. These pronouns can be tricky because, technically speaking, all pronouns stand in place of a noun. However, a demonstrative pronoun points to a specific object. It's typically used when speaking about an object that is physically there. A common error many students make is to confuse the plurals. However, 'this' and 'that' are the singular demonstrative pronouns, while 'these' and 'those' refer to plurals. There's no need to add an 's'!

This is your gift.
I don't know what that is.
These shoes are for Bruce.

Possessive pronouns are so common and so important for your students to be familiar with. They refer to a thing or person and its owner; they show who or what belongs to someone else, for instance, Mine, Yours, His, Hers, Ours, and Theirs. Students often use definite articles with possessive pronouns, but it's not necessary (Your shirt is new, but the mine is really old.) Take note that all possessive pronouns except for 'mine' end in an -s, but the same form is used for both singular and plural; you don't need to add an -s.

Those shirts are mine.
Is this yours?
His backpack is in my car.

A reciprocal pronoun is a little tricky to explain. I think the easiest way is to demonstrate swapping something with another person. For instance, if two people each have a gift and they swap gifts, that is similar to how a reciprocal pronoun is used. Another great way to explain reciprocal pronouns is to just give examples. You can find examples for Each Other and One Another below. Take note that the most common error is to confuse these pronouns with reflexive pronouns, which is addressed above.

Ben and Alexis love each other.
Let's get along with one another.
We gave gifts to each other for the holidays.

As I wrote this article, I couldn't help but notice every pronoun I used without even thinking about it. I would bet that there isn't a single conversation, email, letter, or article that goes by without pronouns. They're such an unnoticed, but very important, part of English grammar. While they're seemingly pretty simple, there are many different types of pronouns and uses for pronouns, which is where it can get tricky. If you're teaching beginners, be sure to start with just the basics. However, as your students get more and more familiar with pronouns, grammar, and the English language, you can begin to introduce more complex ideas and terms to help them better understand and use grammar in their own conversations.

I Want to Hear From You!

How do you like to teach grammar? Do you find it more challenging to teach grammar to beginners or to more advanced learners?
The Brown-headed Cowbird (Molothrus ater) is a brood parasite, meaning that it lays its eggs in the nests of other bird species. Cowbird eggs require a shorter incubation period than those of most other songbirds. However, unpermitted control of cowbirds is occasionally permissible under special circumstances outlined in the Migratory Bird Treaty Act.

Some species, such as the Yellow Warbler, can recognize cowbird eggs and will reject them or build a new nest on top of them. Those species which accept cowbird eggs either do not notice the new eggs or, as new evidence suggests, accept them as a defense against total nest destruction.

To discourage cowbirds at feeders, use feeders that are made for smaller birds, such as tube feeders that have short perches, smaller ports, and no catch basin on the bottom. Avoid platform trays, and do not spread food on the ground. Cowbirds prefer sunflower seeds, cracked corn, and millet; offer nyjer seeds, suet, nectar, whole peanuts, or safflower seeds instead. Clean up seed spills on the ground below feeders.

To spot parasitism, first look for any eggs that appear different or out of place. Cowbird eggs are sometimes, but not always, larger than those of the host bird.

Consequently, predation of nests has shaped the evolution of avian behaviors such as nest-site selection and parental attendance (Ghalambor and Martin; Peluc et al.). Nest predation also shapes population growth (Saether and Bakke) and community structure by favoring nest-site diversification to reduce competition for predator-free space (Lima and Valone). Therefore, ornithologists study nest predation to better understand the evolution and ecology of birds. An understanding of how and why nest predation occurs requires examination of the predation process (Lahti). Nest predation involves interaction between predator and prey, so ecological traits of predators, namely their abundance and behavior, determine predation risk (Thompson). Accordingly, several studies link predator ecology with predation rates and patterns (Schmidt and Ostfeld; Sperry et al.). Nesting parent birds also influence predation risk by deciding where to nest (Martin; Davis; Peluc et al.). For small songbirds, the importance of nest-site selection is well recognized (reviewed by Lima), and it can influence predation patterns observed at natural nests (Schmidt and Whelan; Latif et al.). The extent to which small songbirds can influence predation risk following nest initiation is less certain; parental and nestling activity, for example, may also influence that risk.

The female cowbird may lay during nest building, egg laying, or incubation. She generally, but not always, removes one or two host eggs the day she lays her egg in the nest, or sometimes before. The eggs may be eaten or dropped away from the nest. Keith Kridler observed Cowbirds dropping purloined eggs 15 feet and 75 feet from a nest. A Cowbird was caught on videotape destroying an entire clutch of 5 eggs from an unattended Western Meadowlark nest. Occasionally they remove eggs without replacing them with one of their own. If there are already Cowbird eggs in a nest, the hosts' eggs still appear to be the target: in Kridler's observations, several nests already had a Cowbird egg in them, but he never saw a Cowbird remove a Cowbird egg; they only took the hosts' eggs.
Cowbirds usually lay about six eggs (one each day) in different nests, wait a few days, and then start again. They may lay more than 40 eggs per season. A captive two-year-old female was recorded laying 77 eggs, 67 of those in a continuous sequence. They may pause for two days between eggs. The female usually sneaks into the nest minutes before sunrise to quickly deposit an egg. Egg laying usually takes only 20 to 40 seconds; one Cowbird managed to lay her egg during a four-second visit. A Cowbird was even videotaped laying an egg while being attacked by both Wood Thrush parents.

About two-thirds of the time, only one Cowbird egg is placed in the host's nest. Sometimes two or more appear, but they may be from different females whose territories overlap. Nine Cowbird eggs were found in one Wood Thrush nest.

Brown-headed Cowbird eggs are usually oval, but the shape can vary from short and rounded to elongate oval. The shell is granulated and moderately glossy. The markings are spread all over the egg, rarely concentrated into a wreath on the larger end. The eggs of the Bronzed Cowbird, by contrast, are pale bluish-green and have no markings.

Host's reaction to egg: Successful parasitism of Cowbird hosts has been recorded for many species (BNA). Female Bluebirds may rebuild the nest cup and lay a new batch of eggs. Ed Mashburn of PA reported bluebirds abandoning a nest when a Cowbird egg appeared (apparently replacing the third bluebird egg laid), then rebuilding in another box nearby and successfully raising a brood. I had a Black-capped Chickadee desert when the sixth egg was replaced. Others will incubate the egg and rear the nestling as one of their own.

Species vary in their reaction to Cowbird egg deposition. Phoebes tend to accept the eggs. It seems possible that cavity-nesters would be less likely to recognize and reject Cowbird eggs because they see them less often, they nest in dark locations, and some, like Tree Swallows and bluebirds, do not have large bills that would make egg removal easier. Cowbird eggs laid in House Finch nests often "disappear," or the chicks die due to the diet (all vegetable matter) fed by the foster parents. There are no reports of Mountain Bluebirds raising Cowbirds. Researchers also found that Cowbirds "farmed" a non-parasitized nest by destroying existing eggs so the host would build another that they could then parasitize, getting their eggs in 'synch' with the hosts' eggs.

Even though some hosts (Mockingbirds, Wood Thrushes) viciously attack the cowbird female as she sits on their nest, she is typically undeterred, laying an egg in seconds before fleeing the scene. A Cowbird typically hatches at least one day ahead of the young of its adopted siblings, consistent with the shorter incubation period noted earlier (usually within 14 days). When Cowbird eggs are larger than the hosts' eggs, they may affect the hatching of the host eggs. Cowbird nestlings are significantly larger than their nest mates; some bluebird monitors equate them to Baby Huey or a Frankenbaby. At hatching they are altricial (naked) and blind, with buff-colored skin (the newborn I saw had pink skin) and usually a yellow rictal flange. They have whitish down on their heads, whereas normal bluebirds have black or dark fuzz when they first hatch.
Binary search is among the most efficient searching algorithms, with a run-time complexity of O(log2 N). The algorithm works only on a sorted list of elements. Binary search begins by comparing the middle element of the list with the target element. If the target value matches the middle element, its position in the list is returned. If it does not match, the list is divided into two halves: the first half runs from the first element to the middle element, and the second half runs from the element after the middle element to the last element. The search then repeats on whichever half could still contain the target, that is, the lower half if the target is smaller than the middle element, or the upper half if it is larger, until the target is found or no elements remain.
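A minimal iterative sketch of the algorithm described above (Python; the function and variable names are illustrative):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2          # middle element of the current range
        if items[mid] == target:
            return mid                   # found: return its position
        elif items[mid] < target:
            low = mid + 1                # target can only be in the upper half
        else:
            high = mid - 1               # target can only be in the lower half
    return -1                            # range exhausted without a match

# Example usage: the list must already be sorted.
print(binary_search([2, 5, 8, 12, 16, 23, 38], 16))  # -> 4
print(binary_search([2, 5, 8, 12, 16, 23, 38], 7))   # -> -1
```

Each iteration halves the remaining search range, which is where the O(log2 N) bound comes from.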
An electroencephalogram (EEG) is a test that detects electrical activity in your brain using small, flat metal discs (electrodes) attached to your scalp. Your brain cells communicate via electrical impulses and are active all the time, even when you're asleep. This activity shows up as wavy lines on an EEG recording. An EEG tests for abnormal brain activity and is one of the main diagnostic tests for epilepsy. It is a safe and painless test that has no associated risks. After the test, a neurologist will interpret the recordings taken from the EEG and will send the results to your doctor. Your doctor will then have you schedule a follow-up appointment to discuss the results.

Why is an EEG performed?

EEGs are performed to confirm or rule out various conditions, most notably epilepsy.

How to Prepare

Prior to the day of the EEG, your doctor or healthcare provider will ask what medications you are currently taking. He or she may ask you to stop taking certain medications, such as sedatives and tranquilizers, muscle relaxants, sleeping aids, and seizure medications, because they can affect your brain's normal electrical activity. Remaining on these medications at the time of the EEG greatly increases the chance of inaccurate test results.

Do not eat or drink caffeinated products for 12 hours before the test. This includes coffee, tea, soda, and chocolate.

The electrodes will be placed on your scalp, so make sure your hair is clean. Shampoo and rinse your hair with water the night before or the morning of the test. Do not put in any conditioner or other hair products after washing.
Claude Monet (November 14, 1840-December 5, 1926) is arguably the most important figure in the foundation of the French Impressionist school of painting. Its most consistent and prolific practitioner, Monet applied the movement's philosophy of exploring and expressing one's perceptions before nature, particularly in his well-known landscape paintings. In fact, the term Impressionism is derived from the title of his 1872 painting Impression, Sunrise (Impression, soleil levant). Inspired by the Barbizon painters of the early nineteenth century, Monet's dedication to painting en plein air led him to question the formalized European traditions of color, composition, and representation. Monet's studies of French landscapes, leisurely activities of the upper-middle class, portraits, architecture, and garden scenes are recognized as seminal influences not only on painters of the late 1800s but also on those of the early twentieth century. Monet died in Giverny in 1926; his home and prolific garden were bequeathed to the French Academy of Fine Arts and are currently open to the public.
Is humanity nothing more than a cancer on the planet, consuming its host until it is gone, guaranteeing its own destruction in the process? A quick glance at the effects of our behavior might lead us to say yes. But looks can be deceiving. Nature shows us that what is destructive on one level can also be part of a larger process of change that creates new forms of value at another level. Consider the Indian elephant. At first glance, the elephant appears to be destructive as she tramples through the forest, breaking limbs from trees, eating all the jackfruit, and littering the landscape with huge piles of poop. But when we look closer, we see that the trampling provides pathways for the other animals in the forest. The breaking of tree branches allows the sun's rays to reach the forest floor, enabling plants to grow in the understory. And elephant poop contains jackfruit seeds buried in fertile manure, which propagates the jackfruit tree. Like the elephant, could it be that some of humanity's destructive behavior might actually have some positive unintended consequences? As we look back at Earth's story, we learn that its climate has been vastly different throughout its history. When dinosaurs roamed the land, Earth was devoid of ice. The sea level was hundreds of meters higher and the temperature 10 degrees warmer than it is today. Modern science has learned that there is a direct relationship between the concentration of greenhouse gases in the atmosphere and the Earth's temperature. Greenhouse gases are like a blanket around the Earth, absorbing just the right amount of heat from the sun to make life possible. The higher the concentration, the warmer the temperature. In Earth's more recent history, it has endured a series of long ice ages. During each ice age, life struggles to survive. Imagine a sheet of ice a mile thick on top of Chicago. All the forests die, along with most of the life in them, as very little can survive in these extreme conditions. Every hundred millennia, a slight change in the Earth's tilt allows the planet to absorb just enough additional heat to thaw some of the ice. Life expands for a few thousand years during this interglacial period before retreating again as the Earth tilts back and falls into another ice age. Enter human beings. From the Earth's perspective, humans are new to the scene; we've been here for a blink of an eye. But we've been busy. As the Earth entered its most recent interglacial period, humans began deforesting the Earth to fuel their nascent civilizations. By some calculations, more greenhouse gases were released into the atmosphere through this deforestation than all the gases emitted from the burning of fossil fuels. Humans unknowingly warmed the Earth, delaying the onset of the next ice age. It turns out there is a "safe zone" of greenhouse gas concentrations that keeps Earth at just the right temperature for life as we know it to thrive. Any less and we freeze; any more and we fry. In the last few decades, we have blown past this safe zone way too fast for life today to adjust, as we continue to destroy the forests for animal agriculture and find destructive new ways of extracting fossil fuels, adding ever more greenhouse gases to the atmosphere. While our actions have had devastating impacts on many species on the planet, they have also enabled humans to develop the technology necessary to monitor Earth's composition.
Like the part of the brain that helps stabilize and regulate our body's temperature, humans now have the ability to help Earth regulate its temperature. We have already been doing it unconsciously for thousands of years. Now it is time we become conscious of our role on the planet as the thermostat species. When we choose to stop eating animals, the land used for raising those animals can be restored to forests, and those forests can sequester the greenhouse gases we've emitted during the fossil fuel age, bringing us back within the safe zone where life as we know it can thrive. Unlike the elephant's, this change requires that humanity undergo a metamorphosis from an ego-centric, consumer culture to an eco-centric, life-enriching culture. Again, Nature shows us how this is done. From the moment he is born, a caterpillar spends his entire life eating. He eats the nutritious shell he was born out of. Then he eats the leaf the egg was clinging to. The caterpillar continues eating all the leaves he encounters. Once fully satiated, the caterpillar attaches to the underside of a twig and begins growing a cocoon. New imaginal cells are born. At first, the imaginal cells are attacked by the caterpillar's immune system. But soon they multiply, and the immune system gives up. What was once a caterpillar is now a messy glob of imaginal goo. Soon, out of the cocoon emerges a beautiful butterfly. The butterfly is a very light consumer. As she sips the nectar, she pollinates the flowers, helping to regenerate life, nurturing life instead of destroying it. The time has come for humanity to undergo its metamorphosis. Our imaginal cells are awakening. We have a unique opportunity in this pivotal time in history to realize our full potential by considering ourselves stewards of life, applying the lessons we've learned to nurture the conditions for life to thrive for all of Earth's remaining years.
(Phys.org) Inspired by spiders' abilities to produce draglines and use them to move across open space, researchers have designed and built a robot that can do the same. Similar to Spiderman shooting a dragline from his wrist, the robot produces a sticky plastic thread that it attaches to a surface, such as a wall or tree branch. The robot then descends the dragline while simultaneously continuing to produce as much line as needed. The mechanism could enable robots to move from any solid surface into open space without the need for flying. The researchers, Liyu Wang, Utku Culha, and Fumiya Iida, at the Bio-Inspired Robotics Lab at ETH Zurich in Switzerland, have published a paper on the spider-inspired robot in a recent issue of Bioinspiration & Biomimetics. "The dragline-forming robot is interesting because it implements a new concept: that a robot may accomplish a task by building structures to assist it," Wang told Phys.org. "It is advantageous because the robot can flexibly vary the structure (in this case, the thickness of the dragline) according to environments or tasks that cannot be anticipated." At first glance, the robot doesn't look much like a spider, since it is about three times larger than a real spider and made of an assortment of metal, wires, and onboard batteries. The source of its dragline material is a stick of thermoplastic adhesive (TPA), which functions similarly to a glue stick in a hot glue gun. When the robot is ready to produce a dragline, the solid TPA stick is pushed through a heating cavity and out of a nozzle. Two wheels located just beyond the nozzle help elongate and guide the dragline in the desired direction. The robot can form draglines with a thickness varying from 1 to 5 mm. Since the hot TPA dragline is sticky, it can adhere to the solid surface from which the robot starts its journey into open space. Once the dragline is stuck to the surface, the robot can begin descending the dragline while producing more of it, mimicking the way spiders fall down their draglines in a controlled manner. While spiders use a fourth pair of legs to move down their draglines, the robot relies on its two wheels for locomotion down the dragline. In tests, the robot could form and move along its dragline at an average descending speed of 5 cm/min. The robot demonstrated dragline-assisted locomotion over distances of up to 82 cm, although there is no limit to the traveling distance until the dragline material is used up. The researchers note that the TPA dragline material is potentially reusable, although this ability would require additional onboard mechanisms to retrieve and reuse the material. In the future, the researchers plan to extend the robot's abilities to enable it to form multiple draglines in both vertical and horizontal directions, eventually forming grids that partially mimic a real spider web. In order to form dragline grids, the robot would need gecko-inspired adhesive legs instead of wheels so that it could easily move between draglines and solid surfaces. Robots that form their own draglines for locomotion could have a wide variety of applications, particularly in unanticipated environments, for tasks such as hazard removal and extraterrestrial exploration. Although there are other mechanisms that allow robots to cross open space, such as flying or using existing cables, these options come with their own challenges, such as payload limits. In some situations, a spider-inspired robot may offer a less complex and more robust alternative.
Anatomy of the Brain

The following questions and answers will help you better understand the anatomy of the brain, which is part of the central nervous system.

What is the Central Nervous System (CNS)?

The CNS consists of the brain and spinal cord. The brain is the organ that controls thought, memory, emotion, touch, motor skills, vision, respiration, temperature, hunger, and every process that regulates our body.

What are the various parts of the brain?

The brain is divided into several areas:

* Cortical areas
  o frontal lobes
  o temporal lobes
  o parietal lobes
  o occipital lobes
* Subcortical limbic structures
  o basal ganglia
* The brainstem (midline or middle of brain) includes the midbrain, the pons, and the medulla. Functions of this area include movement of the eyes and mouth, relaying of sensory messages (hot, pain, or loud), hunger, respiration, consciousness, cardiac function, body temperature, involuntary muscle movements, sneezing, coughing, vomiting, and swallowing.
* The cerebellum (infratentorial or back of brain) is located at the back of the head. Its function is to coordinate voluntary muscle movements and to maintain posture, balance, and equilibrium. The cerebellum also affects emotions and higher-level cognitive functions.

What are some other critical parts of the brain, and their functions?

More specifically, other parts of the brain include:

* Pons: Located in (and part of) the brainstem, the pons contains many of the control areas for eye and facial movements.
* Medulla: The lowest part of the brainstem, the medulla is the most vital part of the entire brain. It contains important control centers for the heart and lungs.
* Spinal Cord: A large bundle of nerve fibers located in the back, extending from the base of the brain to the lower back, the spinal cord carries messages to and from the brain and the rest of the body.
* Frontal Lobe: The largest section of the brain, located in the front of the head, the frontal lobe is involved in personality characteristics, higher-order cognitive abilities such as goal-oriented behavior, planning, and mental flexibility, as well as movement.
* Temporal Lobe: Located at the sides of the brain, these lobes are involved in auditory processing, understanding of verbal information, memory, and sense of smell.
* Parietal Lobe: This term refers to the middle part of the surface of the brain. The parietal lobe helps one understand spatial relationships (i.e., where your body is compared to other objects around you). It is also involved in sensory processing and in interpreting pain and touch in the body.
* Occipital Lobe: This is the back part of the brain, which is involved with vision.
By Louis & Jennifer

This web page will help you find out information about Egypt and Mesopotamia. You will also find out about farming and agriculture in Egypt and in Mesopotamia. We will discuss how they are the same and how they are different. The Nile River helped farming and agriculture in Egypt. It helped by providing silt whenever there was a flood. The Nile River floods between June and October. Crops are usually harvested during the spring. The depth of the flood was 45 feet. After floods, there would be a fertile strip along the Nile River that was 12 miles wide. There, the Egyptians would plant and grow things such as vegetables and fruits. The Nile River is the longest river in the world. Farmers built sophisticated irrigation systems and used dikes to maximize the use of the Nile River. The Nile helped the Egyptians by supplying water for the farmers. Farming brought people together. During harvest season, everyone gathered the crops together. The economy was based on wheat and grains. The economy grew stronger because of irrigation. Irrigation led to an increased food supply. Irrigation helped water dry lands with streams, canals, or pipes. Farmers planned for the seasonal flooding. They also used wooden plows pulled by a pair of oxen, but by 2800 BC, they had learned how to make bronze tools. They used tools made of flint to cut wheat. They threw seed onto the ground to grow fruit and vegetables. Farmers let farm animals loose to trample seeds into the soil. In Mesopotamia, there were a lot of crops to grow. Farmers raised grain, fruit, vegetables, and barnyard animals. Farmers changed their houses from reed houses to brick houses. They plowed the ground with stone hoes. The metal plows had a funnel shape. They filled containers with seeds. Cows would pull the plow and the seeds would go into the ground. This method was quick and easy. Sumerians had handbooks that told them how to plant crops, which helped tremendously. Like the Egyptians depended on the Nile River, Mesopotamians depended on the Tigris and Euphrates Rivers. The silt left over from the flooding of these rivers made the soil fertile. Irrigation produced an extra supply of food. Farmers would trade grain for lumber and stone. Farmers didn't have money, so they used their crops. The climate of Mesopotamia was dry. There was very little rainfall. Farmers had to find ways to get water for their crops. In the spring and early summer, melting snow from the northern mountains caused the rivers to overflow onto the crops. The floods were violent and unpredictable. They destroyed villages and took many lives. Floods sometimes caused rivers to change courses. A lot of trouble is caused to the farmers' crops when rivers change course. In Mesopotamia, wheat and barley were the most important crops grown by the Sumerians. Shade trees protected crops from harsh winds and from the sun. Some of the fruits they planted were dates, grapes, figs, melons, and apples. Their favorite vegetable that they grew was the eggplant. They also planted vegetables such as onions, radishes, beans, and lettuce. Farmers irrigated land and started planting wheat, barley, millet, beans, and sesame seeds. They used spears to hunt, caught fish in nets, and killed birds with slingshots and arrows. Sumerians got their food from nearby marshes and rivers. Though the climate in Mesopotamia was very hot, they still received enough rainfall for crops. Soon, Mesopotamia became a very rich farming ground.
Though Mesopotamia and Egypt are on different continents, they still have similarities. Some of the similarities of Egypt and Mesopotamia are that the rivers provided silt to help their crops grow. Irrigation helped both Egypt and Mesopotamia. Irrigation helped them get a surplus, or extra, supply of food. The first great civilizations arose in these two regions. Egypt and Mesopotamia both have fertile land, but neither received enough rain to grow crops. By 3000 BC, farmers had invented a plow that oxen could pull. The extra supply of food allowed some people to give up farming and live in the city. Also by 3000 BC, Egypt and Mesopotamia had developed the world's first large-scale irrigation systems. Although there are many similarities, there still are differences. First of all, Egyptians used sophisticated irrigation and dikes to maximize the use of the Nile, while Mesopotamians just used irrigation. The people of Egypt knew when the Nile River would flood (predictable), but the people of Mesopotamia didn't know when the Tigris and Euphrates Rivers would flood (unpredictable). Floods of the Nile River came between the months of June and October. The depth of the flood would be 45 feet. The depth of the Tigris and Euphrates floods, and the months in which they came, differed every time. On either side of the Nile River is a fertile strip 12 miles wide, but there weren't any fertile strips around the Tigris and Euphrates Rivers. Farmers in Mesopotamia used the rivers to trade with merchants. In this web page we described things about Mesopotamia and Egypt. Also included is a comparison of farming and agriculture in Egypt and in Mesopotamia. We believe that the Egyptians and Mesopotamians were very advanced in their farming and agriculture. Now we hope that you will cherish this information about the differences and similarities of farming and agriculture in Egypt and Mesopotamia.
When Charles Darwin embarked on the Beagle, he took with him a book written by Charles Lyell: Principles of Geology. In the book, Lyell made the argument for gradualism (or uniformitarianism), the idea that present-day geological processes can explain the history of the Earth. When Lyell introduced this concept in 1830, it was a controversial idea; many people relied on the story of the biblical flood to explain the Earth's features, though most of Lyell's gentlemen-geologist colleagues did not. Many of them believed that the ancient planet had been much hotter and wetter than the present, with more dramatic geologic processes. The frontispiece to Lyell's Principles of Geology showed the Temple of Serapis in Italy. At the tops of the stone pillars were dark bands, made of holes drilled by mollusks. Lyell showed the picture to make a point: the pillars had been constructed above ground, later been submerged under water, and finally lifted above sea level. Considering that these changes had happened during recorded history, the same geological processes could, during prehistoric times, build mountains, valleys, canyons, and all the other features we see today. In fact, Lyell devoted the first two volumes of his three-volume Principles of Geology to the effects of natural processes occurring during recorded human history. Mount Etna provided Lyell with more evidence of the slow pace of change on our planet. In between lava layers, he found thick layers of oysters, meaning that the time span between lava flows was significant. And lava flows had built Etna to a height of more than 10,000 feet. In 1831, King's College, London, awarded Lyell the geology chair. He lasted just three years there. The Church of England dominated the institution's thinking, and Lyell wanted to "free the science from Moses." His situation soon improved, and by the late 1830s he was president of the Geological Society. Some modern historians have argued that the way Lyell portrayed his intellectual rivals, the catastrophists, was unfair. He accused them of looking to supernatural causes of landscape features, but many of them did no such thing; they simply suspected that events like earthquakes had happened on a larger scale in the geologic past than at present. In fact, using the present to explain the past, as Lyell recommended, is an approach that must be used with caution. Extrapolating from a single year or even a century may not work; such a short time span isn't likely to reproduce all the events that have shaped the landscape. Extrapolating from longer time spans is more effective. Early on, Lyell had a strong influence on Darwin, and Lyell in turn was impressed with Darwin's work in the field of geology. Upon hearing of Darwin's theory of coral atoll formation, Lyell reportedly broke into a joyful dance. But Darwin's theory of evolution was a different story. Lyell only reluctantly accepted the theory of evolution; for much of his life, he maintained a steady-state view of the Earth and its inhabitants, arguing that as one species went extinct, another appeared. This was partly because of his belief in a long-standing, deep division between humans and animals, in which mankind's superiority to animals was moral, not physical. Between the time he published Principles of Geology in 1830-1833 and Antiquity of Man in 1863, Lyell changed his views. In the first book, he agreed with Cuvier that no humans predated the current epoch. In the second, he extended humanity's existence back in time.
His acceptance of Darwinian evolution and human prehistory weren't the only times Lyell had to surrender his beliefs; he also came to support Louis Agassiz's theory of the Ice Age, in which gigantic ice sheets covered much of the northern hemisphere during the Pleistocene epoch. Lyell withheld his support of Agassiz's theory for decades, because it stood in direct opposition to his own hypothesis of a steady-state Earth. In addition to uniformitarianism, Lyell's Principles of Geology contained some ideas that seem absurd today, though they struck him as quite reasonable at the time. Lyell concluded that, because the Earth undergoes periodic changes in climate and because animals are adapted to certain climates, "huge Iguanodon might reappear in the woods, and the ichthyosaurs in the sea, while pterodactyle might flit again through umbrageous groves of tree ferns." This suggestion invited derision from the academic community, and Henry De la Beche caricatured Lyell in his cartoon Awful Changes (sometimes assumed to be directed at William Buckland), in which a Professor Ichthyosaurus lectures on a human skull: "You will at once perceive," continued Professor Ichthyosaurus, "that the skull before us belonged to some of the lower order of animals; the teeth are very insignificant, the power of the jaws trifling, and altogether it seems wonderful how the creature could have procured food."
What is an Image? An image is a matrix of pixels: a 2-dimensional (rectangular) array of pixels. The dimensions (width and height) of the rectangle are called the resolution of the image. An image contains m x n pixels, where m and n are the width and height of the image respectively. For example, an image with a resolution of 320 x 240 contains 76,800 pixels. You can download the full source code below for reading, creating and displaying PGM images. The source code is available in Java, VB.NET and C#, along with sample PGM images. Types of Images: Each pixel is represented by one or more integers. These integers are called intensity values and represent the brightness of that pixel. Based on color, images can be classified into three types, namely binary (black & white), grayscale and color. For binary images, intensity values can be either 0 or 1. For grayscale and color images, intensity values range from 0 to 255. In grayscale images, there is only one intensity value per pixel. In color images, there are three intensity values per pixel (for red, green and blue). Some advanced color images that support transparency contain four intensity values per pixel. Based on compression, images can be classified into compressed and un-compressed images. In un-compressed images, the intensity values are stored as integers (or bytes), and hence reading and writing such images is easy compared to compressed images. In compressed images, the intensity values are transformed into some other form using special algorithms. Un-compressed and lossless-compressed images retain the original visual quality, whereas the quality of lossy-compressed images depends on the compression ratio used during compression. Examples of compressed image formats are JPEG, GIF, PNG, TIFF, etc. Examples of un-compressed image formats are PNM, BMP, etc. PNM is the acronym for Portable Any Map. PNM images are the simplest image file formats to process. These files contain the actual pixel values without any compression or decomposition of the data into file sections, and hence these images are well suited to any image processing application. PNM images are of three types, namely portable bit-maps (PBM), portable gray-maps (PGM), and portable pix-maps (PPM). These formats are a convenient (simple) way of saving image data, as they are equally easy to read in one's own applications. There is no explicit file format associated with PNM itself; the name PNM is just an abstraction that refers collectively to the PBM, PGM, and PPM image file formats. These image files are distinguished by a magic identifier that appears on the first line of the file. The magic identifiers are P1 for PBM images; P2 and P5 for PGM images; and P3 and P6 for PPM images. Each PGM file has two segments internally: the image header and the image data. The image header contains three lines. The first line contains the magic identifier, which uniquely identifies the type of image and the type of data contained in the image data segment. The second line starts with the character '#' and is a comment that provides some information about the image. Optionally, there can be any number of comment lines, each beginning with '#'. In addition, a comment can be placed anywhere with a '#' character; the comment extends to the end of the line. The third line contains three integers that are, respectively, the number of columns, the number of rows and the maximum pixel value (usually 255).
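A minimal Python sketch of this pixel-matrix view (the variable names are our own; any language with 2-D arrays works the same way):

```python
# A grayscale image as a 2-D array of intensity values (0-255).
# The 320 x 240 resolution from the text gives 76,800 pixels.
width, height = 320, 240

# Build an image filled with mid-gray (intensity 128); rows form the
# outer list, so image[y][x] addresses the pixel at column x, row y.
image = [[128 for _ in range(width)] for _ in range(height)]

pixel_count = sum(len(row) for row in image)
print(pixel_count)  # -> 76800, i.e. m x n = 320 x 240
```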
The last value of the third header line gives the maximum value of the color components for the pixels; this allows the format to describe more than single-byte (0..255) color values. The header lines are normally delineated by carriage returns and/or line feeds. The image data contains pixel values in either binary or ASCII format, as indicated by the magic identifier. Each image data line should not be longer than 70 characters. Each pixel value, whether binary or ASCII, ranges from 0 to the maximum pixel value specified in the third header line. While not required by the format specification, it is a standard convention to store the image in top-to-bottom, left-to-right order. The following are both valid PNM headers:

P5
1024 788
255

P5
1024 # the image width
788 # the image height
255

To access the data in PNM files, we read the header lines, set the number of columns and rows, and then read the rows into a data structure. Then we can process this data and write the new pixel values out to a new PNM file.

PBM File Format

PBM images are for storing black and white images. PBM stores a single-bit-per-pixel image as a series of ASCII 0s or 1s. Traditionally, 0 refers to white while 1 refers to black. The header is identical to the PPM and PGM formats except that there is no maximum pixel value in the third header line, as it has no meaning here. The magic identifier for PBM is P1. The following is an example of a small (12 x 8) bitmap in this format:

P1
# PBM example
12 8
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 1 1 1 1 0 0 1 1 1 1 0
0 1 1 1 1 0 0 1 1 1 1 0
0 1 0 0 0 0 0 1 0 0 0 0
0 1 0 0 0 0 0 1 0 0 1 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0

PGM File Format

PGM images are grayscale image file formats. The format has two variants, with magic identifiers P2 and P5. P2 stores pixel values as the ASCII characters for the digits, delimited by spaces between consecutive pixel values. P5 writes the bytes without delimiters, as binary numbers from 0 to 255. The following image (lena.pgm) is a classical example of a PGM image. Note: the image Lena has been in the public domain for almost half a century (it supposedly appeared in Playboy magazine originally). It is often used by image processing researchers around the world for comparing their methods and algorithms with others, and it appears in most journals on image processing. The following data shows a P2 PGM file for lena256.pgm. The datum 158, for example, is stored as three ASCII characters for the respective digits 1, 5 and 8, followed by a space:

P2
# PGM example
256 256
255
158 165 158 158 158 158 155 158 155 161 155 150 155 155 ...

The header of a P5 PGM image file is shown below, where each pixel is stored as a byte, i.e. a binary number from 0 to 255 (text editors show 0 through 127 as ASCII characters and 128 through 255 as whatever code the editor uses for those values); the packed binary pixel data follows the header and is not shown here:

P5
# PGM example
256 256
255

We see that the P5 type of PGM is packed (without delimiters), so there is a single byte for each pixel. This is also called the raw data format. The size of this file is 65,576 bytes. The P2 type of PGM uses a byte for each numerical symbol (digit) and therefore requires three bytes for each number greater than 99, and it also uses a space character after each pixel value. Thus the file is nearly four times as large: this P2 file is 245,724 bytes. However, humans can read the P2 type of PGM file, whereas they cannot read the ASCII characters of the packed bytes, which can appear different in different editors (characters 128 to 255).
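To illustrate the header-then-data layout described above, here is a minimal Python reader for the ASCII (P2) variant. It is a sketch under simplifying assumptions: the file is well formed, and '#' comments are stripped wherever they appear, as the format allows; a production reader would also handle the binary P5 variant:

```python
# Minimal reader for ASCII (P2) PGM files.
def read_pgm_p2(path):
    """Return (rows, maxval) where rows is a list of lists of pixel values."""
    with open(path) as f:
        # Collect all whitespace-separated tokens, skipping '#' comments,
        # which run from the '#' character to the end of the line.
        tokens = []
        for line in f:
            line = line.split('#', 1)[0]
            tokens.extend(line.split())
    assert tokens[0] == 'P2', 'not an ASCII PGM file'
    width, height, maxval = int(tokens[1]), int(tokens[2]), int(tokens[3])
    values = [int(t) for t in tokens[4:4 + width * height]]
    # Reshape the flat pixel list into rows (top to bottom, left to right).
    rows = [values[r * width:(r + 1) * width] for r in range(height)]
    return rows, maxval
```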
PPM File Format

PPM images are RGB color images, with magic identifiers P3 for ASCII data and P6 for binary data. P6 image files are obviously smaller than P3 and much faster to read. Note that P6 PPM files can only be used for single-byte colors. The components are stored in the usual order: red, green, blue (a minimal writer for the ASCII variant is sketched after the sample list below). The following is a small (4 x 4) example:

P3
# PPM example
4 4
15
0 0 0    0 0 0    0 0 0    15 0 15
0 0 0    0 15 7   0 0 0    0 0 0
0 0 0    0 0 0    0 15 7   0 0 0
15 0 15  0 0 0    0 0 0    0 0 0

PGM Utility in VB.NET

The following image is part of a screenshot of the Visual Studio Object Browser. It shows the properties, methods and functions of the PGM class written in VB.NET. This class can be used to read an existing PGM image, create a new PGM image, save an existing PGM image to another path, display a PGM image in a Windows form, convert any bitmap image into a PGM image (ReadFromBitmap() method), and convert any PGM image into bitmap (BMP) file format (CreateBitmap() method). The source code can be downloaded below.

PGM Utility in C#

The following image shows the Object Browser screenshot of the PGM class written in C#. This class is similar to the one written in VB.NET. It can be used to read an existing PGM image, create a new PGM image, save an existing PGM image to another path, display a PGM image in a Windows form, convert any bitmap image into a PGM image (ReadFromBitmap() method), and convert any PGM image into bitmap (BMP) file format (CreateBitmap() method). The source code can be downloaded below. A Windows Forms application demonstrates the PGM class written in C#. The executable and source code of this demo application can be downloaded below, and the following image shows a screenshot of the demo application. A set of sample PGM images is available for download below. The samples included are:
- baboon.pgm (512x512)
- barbara.pgm (512x512)
- cameraman.pgm (256x256)
- f16.pgm (384x384, 512x512)
- f18.pgm (320x240)
- fishingboat.pgm (512x512)
- goldhill.pgm (512x512)
- houses.pgm (512x512)
- lena.pgm (384x384, 512x512)
- lighthouse.pgm (384x384, 512x512)
- man.pgm (512x512)
- peppers.pgm (384x384, 512x512)
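As a sketch of how simple the ASCII variant is to emit, the following Python function (a hypothetical helper of our own, not part of the downloadable classes above) writes a P3 PPM file from rows of (R, G, B) tuples:

```python
# Write an ASCII (P3) PPM file from rows of (R, G, B) tuples.
# Assumptions: all rows have equal length and every component is in
# 0..maxval. The spec also suggests keeping data lines under 70
# characters, which this sketch does not enforce.
def write_ppm_p3(path, rows, maxval=255):
    height, width = len(rows), len(rows[0])
    with open(path, 'w') as f:
        f.write('P3\n# written by write_ppm_p3\n')
        f.write(f'{width} {height}\n{maxval}\n')
        for row in rows:
            # One red-green-blue triplet per pixel, space separated.
            f.write(' '.join(f'{r} {g} {b}' for (r, g, b) in row) + '\n')

# Example: a 2 x 2 image with one magenta and one white pixel.
write_ppm_p3('tiny.ppm', [[(255, 0, 255), (0, 0, 0)],
                          [(0, 0, 0), (255, 255, 255)]])
```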
Climate researcher Klaus Hasselmann, Director of the Max-Planck-Institut (MPI) for Meteorology in Hamburg and a project co-ordinator of the EC's Environment and Climate Programme, was one of the first scientists to warn that recently observed global warming trends have a discernible human-related forcing component. Climate model calculations show that global warming is closely related to rising atmospheric concentrations of greenhouse gases (GHGs) as a consequence of human activity. Since pre-industrial times the atmospheric concentration of CO2, the most important GHG, has increased from 280 to 360 ppm and will rise further. According to the Intergovernmental Panel on Climate Change (IPCC), total anthropogenic emissions add up to 7-8 GtCy-1 (1 GtCy-1 = one billion tons of carbon per year). Burning fossil fuels and deforestation are two of the largest contributors to these emissions. If recent global warming trends continue, the impact on natural and agricultural ecosystems in many regions of the world can be expected to be severe and to affect almost all sectors of human life, from tourism to water supply. To avoid these potentially devastating consequences, both the climate research community and the public are calling for urgent political action to cut GHG emissions. The balance of evidence of human interference with climate, supported by climate research, has forced policy makers to take the threat of climate change seriously. Finally, after lengthy negotiations, the efforts culminated in the third session of the Conference of the Parties to the United Nations Framework Convention on Climate Change (UNFCCC) in Kyoto. Here, for the first time, the parties agreed on legally binding commitments (Annex I countries only) to reduce GHG emissions by 5% on average (the EU has unilaterally committed itself to a voluntary 8% reduction), compared to the 1990 level, in the commitment period 2008-2012.

The scientific uncertainties

In order to meet the GHG reduction targets there are basically two options: either to cut atmospheric emissions or to enhance GHG sinks in the terrestrial biosphere. The sink option is based on the assumption that the terrestrial biosphere is able both to take up and to store significant portions of CO2 from the atmosphere. Estimates by the IPCC suggest that the terrestrial biosphere currently takes up about 25% (1.8 GtCy-1) of the global annual emissions of CO2. The margin of error associated with this estimate, however, is of the same order, which to some extent undermines the credibility of the approach. Even less is known about the storage capacity and possible saturation levels of the terrestrial biosphere. A third alternative is to make use of the GHG storage potential of the oceans. There are a number of direct and indirect options available. Deep-sea disposal of CO2 is one possibility, but production of liquid CO2 or dry ice is expensive and transport costs are high. Moreover, deep-sea injection of CO2 would acidify the water into which it was injected and create CO2 lakes at the bottom of the sea. The environmental implications could be severe. A second (indirect) method often discussed is the enhancement of biological activity in the upper ocean layer by fertilization. By this method the concentration of organic particles transported into the deeper sea (the 'biological pump') would be increased, thus enhancing the carbon flux in parallel. Although the carbon in the deep sea is 'safely buried', decomposition processes will reduce the oxygen content of deeper ocean layers.
In summary, these options are costly and the environmental impacts and risks associated with them are unpredictable. The precautionary principle makes using the terrestrial biosphere the most pragmatic way of mitigating the greenhouse gas problem for the time being. Although enhancing the carbon sequestration potential of the terrestrial biosphere poses less of a risk to the environment, it is nevertheless difficult to quantify the sources and sinks. There are three reasons for this uncertainty:

- The amount of carbon accumulated by the terrestrial biosphere is small compared with the overall carbon turnover (the exchange between the terrestrial biosphere and the atmosphere is about 60 GtCy-1 in both directions).
- The processes in the soil, plants and atmosphere controlling the gas exchange between the reservoirs are complex, not very well understood, and therefore difficult to model.
- There is a mismatch between the size of the problem and the scales involved. The problem is global, but the measurements addressing it are mainly local. Local measurements have to be extrapolated (called 'up-scaling'), taking into account the large geographical and temporal variations (dimensions have to be up-scaled from metres to the continental scale, times from hours to years).

Unfortunately, records of consistent observations of carbon fluxes with sufficient temporal, horizontal and vertical resolution (also required to calibrate the models) do not yet exist. On the other hand, the implementation of new measurement technologies and methodologies has now made it possible to separate ocean and land uptake. The initial results indicate that the forests of the Northern Hemisphere represented a strong sink in the early 1990s. The magnitude is of the order of 0.8 GtCy-1 but varies from year to year. The origin of the sink in the Northern Hemisphere is not fully understood. It could be related to increased nitrogen deposition associated with industrial and agricultural activities. Nitrogen plays an important role in the nutrient balance of ecosystems: it acts as a fertilizer and enhances productivity. The fertilization effect of increasing atmospheric CO2 concentrations could also contribute, but how the biosphere responds to this fertilization, from the species level all the way up to the ecosystem level, is not known. Re-growth of forests and the lengthening of the growing season (observed by satellites) provide another possibility. Most of these effects have occurred simultaneously during recent years, and it remains difficult to identify and quantify the contribution of each process to the regional or global carbon budget. The largest source of uncertainty, however, is the response of the carbon pools of the terrestrial biosphere to climate change. Past records show that the annual atmospheric growth rate of CO2 is not steady over time. Climate fluctuations following El Niño events, changes in ocean circulation and volcanic eruptions have modulated the CO2 growth rate in the past (equivalent to an annual uptake/release variation of 2-3 GtCy-1). Although forests are generally believed to be carbon sinks, this may not be true in all cases. There is recent evidence that boreal forests are highly vulnerable to climate change and can switch from being a carbon sink to a carbon source depending on climatic conditions. Recent results also indicate that tropical forests accumulate larger amounts of carbon than previously thought, but again, estimates show a wide spread depending on forest type and climatic conditions.
The underlying processes, in particular those affecting the soil, need to be better understood.

The sink approach

Prior to the Kyoto conference there was a broad consensus within the European climate research community that the problem of global warming should be tackled at its roots by cutting emissions, rather than by pursuing the sink enhancement strategy. The main concern is that the carbon sequestration potential of the terrestrial biosphere is limited and that the carbon sequestered is not 'buried safely' over the long term. It will, sooner or later, reach a saturation level, and re-emission to the atmosphere within a few decades becomes likely. The sink enhancement strategy therefore provides only a temporary 'political' solution, and could in fact simply shift the problem to later generations. Furthermore, as discussed above, the carbon exchange between the atmosphere and the terrestrial biosphere and the bio-geochemical processes involved are complex, highly variable in space and time, and still not very well understood. This is why both measurements and model calculations of the carbon sequestration of the terrestrial biosphere show a wide spread. At present, detailed estimates of changes in the terrestrial carbon stocks, as requested by the Kyoto Protocol, are available for some local areas but not at a global scale. Science knows even less about the long-term consequences, feedback mechanisms and possible 'surprises' related to distortions of the global carbon cycle and its impacts on marine and terrestrial ecosystems. A large source of carbon dioxide emissions directly related to human interference with the terrestrial biosphere is often forgotten in discussions. Land-use change and deforestation, especially the conversion of natural forest into farmland, contribute significantly to the overall rise of atmospheric CO2. On average, carbon equivalent to about 1.6 GtCy-1 is released to the atmosphere this way, accounting for more than 20% of global anthropogenic carbon emissions. These facts led most scientists to recommend GHG emission cuts without the sink option. Alternatively, the conservation of natural forest should have the highest priority, as summarized in the session statement of the Greenhouse Gas Workshop in Orvieto, Italy, 10-13 November 1997, organized by the European Commission: 'Although probably accumulating carbon at a lower rate, the large carbon stocks of pristine forests represent carbon accumulated over many centuries. This carbon can only be replaced over a similarly long time-scale. Therefore, preservation of pristine forests should take priority over afforestation programmes where possible.'

Scientific agenda after Kyoto

Although the parties agreed in Kyoto to include GHG source and sink options in the Protocol, this was strictly limited to the forest-related categories of afforestation, reforestation and deforestation. The consequences of the Protocol have been analysed, e.g. during a workshop organized by the European Commission and the IGBP Terrestrial Carbon Working Group. The scientific community came to the conclusion that the partial sink categories agreed upon are insufficient. Instead they recommended using the full carbon budget of the terrestrial biosphere, monitored over sufficiently long time scales, as the appropriate basis for a carbon accounting system. Furthermore, the sink approach of the Kyoto Protocol has a number of loopholes and opens the way for a 'creative' accounting system.
(Refer to the articles listed in the references for a more detailed analysis of the Kyoto Protocol.) In summary, the contribution of the carbon sequestration potential of the terrestrial biosphere, even towards a temporary solution of the global warming problem, will remain small due to the limited carbon sink categories agreed upon. Although climate scientists still have reservations, and still give the preservation of natural forests highest priority, the Kyoto agreement on terrestrial sinks is also a big challenge for climate research to fill the gaps in our understanding of their characteristics. The EC is supporting a number of GHG research projects, such as EUROFLUX, ESCOBA and Eurosiberian Carbonflux, in the framework of the Environment and Climate Programme. The aim of these projects is to develop tools and methodologies that make it possible to understand the processes and to quantify the sources and sinks better. Within EUROFLUX a network of carbon monitoring stations has been established along a European axis at a number of representative forest sites. The long-term measurements of the carbon exchange between forests and the atmosphere, together with the application of new model-based methodologies, will allow better estimates of the European carbon balance in the future. The EUROFLUX methodology provides the basis for a global carbon monitoring network to be established. The integration of carbon flux data between the terrestrial biosphere and the atmosphere at a continental scale is the aim of the Eurosiberian Carbonflux project. This will be achieved by joint field experiments carried out by Russian and EU research teams over Europe and Siberia. Aircraft measurements at different heights in the atmospheric boundary layer, complemented by ground-based observations, provide the basis for data integration. The objective of the ESCOBA project is the investigation of the global carbon budget using sophisticated measurement techniques and inverse modelling methods. These techniques will help to better quantify and distinguish carbon uptake between the terrestrial biosphere and the ocean. Since more data of better quality on a global scale will soon become available, inverse modelling techniques will help to identify the carbon sources and sinks more precisely. That the European Commission (EC) has taken the challenge of climate change research on board is shown once more in the 5th Framework Programme, which includes the key action 'Global change, climate and biodiversity' as part of the programme 'Preserving the ecosystem'. This focuses on climate-related environmental problems and gives GHG research a high priority. This key action also supports new elements, such as infrastructure and long-term monitoring programmes of environmental parameters, to meet the demands of researchers in this area more closely. This new approach will put European research at the forefront of international efforts on this crucial issue.
In statistics, the multiple comparisons problem occurs when one subjects a number of independent observations to the same acceptance criterion that would be used when considering a single event. Typically, an acceptance criterion for a single event takes the form of a requirement that the observed data be highly unlikely under a default assumption (null hypothesis). As the number of independent applications of the acceptance criterion grows, it becomes increasingly likely that one will observe data that satisfies the acceptance criterion by chance alone (even if the default assumption is true in all cases). These errors are considered false positives because they positively identify a set of observations as satisfying the acceptance criterion while that data in fact represents the null hypothesis. Many mathematical techniques have been developed to counter the false positive error rate associated with making multiple statistical comparisons. For example, one might declare that a coin was biased if in 10 flips it landed heads at least 9 times. Indeed, if one assumes as a null hypothesis that the coin is fair, then the likelihood that a fair coin would come up heads at least 9 out of 10 times is 11/2^10 = 0.0107. This is relatively unlikely, and under most statistical criteria (such as p-value < 0.05) one would declare that the null hypothesis should be rejected, i.e. the coin is unfair. A multiple comparisons problem arises if one wanted to use this test (which is appropriate for testing the fairness of a single coin) to test the fairness of many coins. Imagine testing 100 fair coins by this method. Given that the probability of a fair coin coming up 9 or 10 heads in 10 flips is 0.0107, seeing a particular (i.e. pre-selected) coin come up heads 9 or 10 times would still be very unlikely, but seeing any one of the coins behave that way would be more likely than not. Precisely, the likelihood that all 100 fair coins are identified as fair by this criterion is (1 − 0.0107)^100 ≈ 0.34. Therefore the application of our single-test coin-fairness criterion to multiple comparisons would, more likely than not, falsely identify at least one fair coin as unfair. Technically, the problem of multiple comparisons (also known as the multiple testing problem) can be described as the potential increase in Type I error that occurs when statistical tests are used repeatedly: if n independent comparisons are each performed at significance level α, the experiment-wide significance level is given by 1 − (1 − α)^n, and it increases as the number of comparisons increases. In order to retain the same overall rate of false positives (rather than a higher rate) in a test involving more than one comparison, the standards for each comparison must be more stringent. Intuitively, reducing the size of the allowable error (alpha) for each comparison by the number of comparisons will result in an overall alpha which does not exceed the desired limit, and this can be mathematically proved to be true. For instance, to obtain the usual alpha of 0.05 with ten comparisons requires an alpha of 0.005 for each comparison, which results in an overall alpha that does not exceed 0.05. However, it can be demonstrated that this technique is overly conservative, i.e.
it will actually result in a true alpha significantly less than 0.05, thereby raising the proportion of false negatives and failing to identify an unnecessarily high percentage of actual significant differences in the data. This can have important real-world consequences; for instance, it may result in failure to approve a drug which is in fact superior to existing drugs, thereby both depriving the world of an improved therapy and causing the drug company to lose its substantial investment in research and development up to that point. Similarly, in fMRI the test is extremely conservative, since tests are done over 100,000 voxels in the brain; this demands significance values that are unrealistically low. For this reason, a great deal of attention has been paid to developing better techniques for multiple comparisons, such that the overall rate of false positives can be maintained without inflating the rate of false negatives unnecessarily. (A numerical sketch of the basic alpha arithmetic appears after the lists below.) Such methods can be divided into general categories:

- Methods where total alpha can be proved never to exceed 0.05 (or another chosen value) under any conditions. These methods provide "strong" control against Type I error, in all conditions including a partially correct null hypothesis.
- Methods where total alpha can be proved not to exceed 0.05 except under certain defined conditions.
- Methods which rely on an omnibus test before proceeding to multiple comparisons. Typically these methods require a significant ANOVA or Tukey range test before proceeding to multiple comparisons. These methods have "weak" control of Type I error.
- Empirical methods, which control the proportion of Type I errors adaptively, utilizing correlation and distribution characteristics of the observed data.

The advent of computerized resampling methods, such as bootstrapping and Monte Carlo simulations, has given rise to many techniques in the latter category. In some cases where exhaustive permutation resampling is performed, these tests provide exact, strong control of Type I error rates; in other cases, such as bootstrap sampling, they provide only approximate control.

Post hoc testing of ANOVAs

Multiple comparison procedures are commonly used after obtaining a significant omnibus test, like the ANOVA F-test. A significant ANOVA result suggests rejecting the global null hypothesis H0 = "the means are the same". Multiple comparison procedures are then used to determine which means differ from each other. Comparing K means involves K(K − 1)/2 pairwise comparisons.

- The non-parametric Friedman test is useful when performing multiple tests on a hypothesis.
- The Bonferroni-Dunn test allows comparisons with a control.

Key concepts
- Comparisonwise error rate
- Experimentwise error rate
- Familywise error rate
- False discovery rate (FDR)

General methods of alpha adjustment for multiple comparisons
- Bonferroni bound
- Dunn-Sidak bound
- Holm-Bonferroni method
- Testing hypotheses suggested by the data
- Westfall-Young step-down approach
- Single-step procedures
- Two-step procedures
- Fisher's protected LSD (1935)
- Multi-step procedures based on the Studentized range statistic
- Student-Newman-Keuls method (1939)
- Tukey B method (mid-1950s, probably 1953-54)
- Duncan's new multiple range test (1955)
- Ryan-Einot-Gabriel-Welsch method (1960s to mid-1970s)
- Bayesian methods
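The alpha arithmetic above can be checked numerically; the following Python sketch uses only plain arithmetic (no statistics package is assumed):

```python
# Sketch of the multiple-comparisons arithmetic discussed above.
alpha = 0.05    # per-comparison significance level
n = 10          # number of independent comparisons

# Experiment-wide Type I error rate without any correction:
familywise = 1 - (1 - alpha) ** n
print(f'uncorrected familywise alpha: {familywise:.3f}')   # ~0.401

# Bonferroni correction: divide alpha by the number of comparisons.
bonferroni = alpha / n
print(f'per-test alpha (Bonferroni): {bonferroni}')        # 0.005

# The corrected familywise rate stays at or below the 0.05 target,
# and slightly under it, which is the conservatism noted above.
print(f'corrected familywise alpha: {1 - (1 - bonferroni) ** n:.4f}')  # ~0.0489
```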
References
- Abdi, H. (2007). Bonferroni and Sidak corrections for multiple comparisons. In N.J. Salkind (Ed.), Encyclopedia of Measurement and Statistics. Thousand Oaks, CA: Sage.
- Miller, R. G. (1966). Simultaneous Statistical Inference. New York: McGraw-Hill. ISBN 0-387-90548-0
- Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B (Methodological), 57, 289-300.
Protecting traditional knowledge from the grassroots up
Briefing, 4 pages

For indigenous peoples around the world, traditional knowledge based on natural resources such as medicinal herbs forms the core of culture and identity. But this wealth of knowledge is under pressure. Indigenous communities are increasingly vulnerable to eviction, environmental degradation and outside interests eager to monopolise control over their traditional resources. Intellectual property rights such as patents, however, sit uneasily with traditional knowledge. Their commercial focus conflicts with fundamental indigenous principles such as resource access and sharing. Local customary law offers a better fit, and findings from China, India, Kenya, Panama and Peru show how this pairing can work in practice. The research has identified common elements, and key differences, in customary law that should inform policy on traditional knowledge and genetic resources.
Energy from Water: Hydroelectric, Tidal, and Wave Power / by Nancy Dickmann. (Next Generation Energy) Crabtree Publishing ISBN 9780778723806 MS Grades 5-8 Rating: 5
Energy from Wind: Wind Farming / by Megan Kopp. (Next Generation Energy) Crabtree Publishing ISBN 9780778719830

Both of these interesting and well-presented volumes from the juvenile environmental education series Next Generation Energy are written at guided reading level S. Both titles in the series present action choices and preferred green options for middle grade students age 10 and up. For centuries, falling water has been used in parts of the world to create energy to run grinding stones at mills and irrigation systems for crops. Nancy Dickmann's Energy from Water shows how the use of this clean form of energy, called hydroelectricity, is being expanded to help us build a more sustainable future. Readers learn how other forms of water-based energy, such as energy from ocean waves and tides, are being harnessed and used to help create electricity to power our homes, offices and factories. Megan Kopp's Energy from Wind discusses wind power as a clean, sustainable, and renewable form of energy. The chapter "Power Up" invites readers to think it through, ask and answer questions, and design wind turbines, testing location distances from the wind source and using different blade sizes to see which one works best. A handy Glossary and a Learning More section list books and website resources for inquiring readers. Both titles are filled with informational graphs, maps, and charts, as well as color-shaded sidebar features such as Fast Forward (where Energy from Water notes that hydroelectric power depends on a consistent supply of running water) and Rewind (which promotes comparative, critical thinking about early hydroelectric plants compared with the most recent). Students are asked to make convincing arguments for their answers to questions about difficult choices regarding environmental impact. There are positive messages about what students and young people can do to promote protection of the Earth, such as living and eating green and redirecting choices toward sustainable changes. Leroy Hommerding, CLJ
Nuclear Magnetic Resonance (NMR) spectroscopy is an analytical chemistry technique used in quality control and research for determining the content and purity of a sample as well as its molecular structure. For example, NMR can quantitatively analyze mixtures containing known compounds. For unknown compounds, NMR can either be used to match against spectral libraries or to infer the basic structure directly. Once the basic structure is known, NMR can be used to determine molecular conformation in solution as well as to study physical properties at the molecular level such as conformational exchange, phase changes, solubility, and diffusion. A variety of NMR techniques are available to achieve the desired results.

The basis of NMR
The principle behind NMR is that many nuclei have spin and all nuclei are electrically charged. If an external magnetic field is applied, an energy transfer is possible from the base energy level to a higher energy level (generally a single energy gap). The energy transfer takes place at a wavelength that corresponds to radio frequencies, and when the spin returns to its base level, energy is emitted at the same frequency. The signal that matches this transfer is measured in many ways and processed to yield an NMR spectrum for the nucleus concerned.

Fig. 1. The basis of NMR

The precise resonant frequency of the energy transition depends on the effective magnetic field at the nucleus. This field is affected by electron shielding, which is in turn dependent on the chemical environment. As a result, information about a nucleus' chemical environment can be derived from its resonant frequency. In general, the more electronegative the environment around the nucleus, the higher the resonant frequency. Other factors such as ring currents (anisotropy) and bond strain also affect the frequency shift.

Because the precise resonant frequency of each nucleus depends on the magnetic field used, absolute frequencies are inconvenient to work with (for example, the frequency of benzene might be 400.132869 MHz). It is therefore customary to adopt tetramethylsilane (TMS) as the proton reference frequency and to define the chemical shift as follows, yielding a more convenient number such as 7.17 ppm:

δ = (ν − ν₀)/ν₀ × 10⁶

The chemical shift defined by this equation does not depend on the magnetic field, and it is convenient to express it in ppm, where (for protons) ν₀ is the TMS frequency, giving TMS a chemical shift of zero. For other nuclei, ν₀ is defined as Ξ·νTMS, where Ξ (the Greek letter xi) is the frequency ratio of the nucleus (e.g., 25.145020% for 13C). In the ¹H NMR spectrum of ethylbenzene (fig. 2), the methyl (CH3) group sits in the most electron-rich (shielded) environment and therefore resonates at the lowest chemical shift. The aromatic phenyl protons are the most deshielded and so have the highest chemical shift. The methylene (CH2) falls somewhere in the middle. However, if the chemical shift of the aromatic protons were due to electron distribution alone, they would resonate between four and five ppm; the increased chemical shift is due to the delocalized ring current of the phenyl group.
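As a quick illustration of the chemical shift arithmetic above, here is a minimal sketch in Python; the TMS reference frequency is an assumed value for a nominal 400 MHz spectrometer, not a figure from the text.

```python
# Chemical shift from absolute frequencies: delta = (nu - nu0) / nu0 * 1e6 (ppm).
def chemical_shift_ppm(nu_hz: float, nu_ref_hz: float) -> float:
    return (nu_hz - nu_ref_hz) / nu_ref_hz * 1e6

nu_tms = 400.130000e6      # assumed TMS reference frequency in Hz (hypothetical)
nu_benzene = 400.132869e6  # benzene resonance quoted in the text, in Hz
print(round(chemical_shift_ppm(nu_benzene, nu_tms), 2))  # ~7.17 ppm
```

The same shift in ppm is obtained at any field strength, which is exactly why the δ scale is used instead of absolute frequencies.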
Bacteria Acidify Milk
Acidifying (souring) milk helps to separate the curds and whey and control the growth of undesirable bacteria in cheese. Usually special ‘starter’ bacteria are added to milk to start the cheesemaking process. These bacteria convert the lactose (milk sugar) to lactic acid and lower the milk’s pH. Two types of bacteria are used for this process: mesophilic bacteria, which work at moderate temperatures, and thermophilic bacteria, which tolerate the higher temperatures used for some cheeses.

Enzymes Speed Up Coagulation
Some cheeses are curdled only by acidity. For example, paneer cheese is made using lemon juice to curdle the milk, and cottage cheese is made using mesophilic bacteria. For most cheeses, however, rennet is also added to the milk after the starter bacteria. Rennet is a mixture containing the active enzyme chymosin. Rennet speeds up the coagulation of casein and produces a stronger curd. It also allows curdling at a lower acidity, which is important for some types of cheese.

Casein Proteins Coagulate
Milk is about 86% water but also contains fat, carbohydrate (mainly lactose), proteins (casein and whey), minerals and vitamins. Milk is an emulsion of fat globules and a suspension of casein micelles, both suspended in the liquid phase of milk, which contains dissolved lactose, whey proteins and some minerals. The chymosin in rennet breaks down the kappa casein on the surface of the micelles, changing them from hydrophilic to hydrophobic. This causes them to aggregate together, trapping fat and water molecules in the developing curd. Further processing of the curd helps remove more water and compress the curd to form a solid cheese.

Releasing the Whey
After separating curds and whey, further processing of the curds helps release more of the whey trapped in the network of micelles before it is drained away. The exact processing steps vary depending on the type of cheese. Generally, however, the curds are captured, pressed and moulded to form blocks of cheese. Historically, whey was considered a waste product of cheesemaking. However, growing concern over the environmental impact of its disposal encouraged research to better understand the properties and potential uses of whey. Increasing scientific understanding and technological advances have led to a wide range of uses for whey and established it as a valuable coproduct of the cheese industry.

Ripening the Cheese
Cheese is left to ripen, or age, in a temperature- and humidity-controlled environment for varying lengths of time depending on the cheese type. As cheese ripens, bacteria break down the proteins, altering the flavour and texture of the final cheese. The proteins first break into medium-sized pieces (peptides) and then into smaller pieces (amino acids). In turn, these can be broken down into various highly flavoured molecules called amines. At each stage, more complex flavours are produced. During ripening, some cheeses are inoculated with a fungus such as Penicillium. Inoculation can be either on the surface (for example, with Camembert and Brie) or internal (for example, with blue vein cheeses). During ripening, the fungi produce digestive enzymes that break down large protein molecules in the cheese. This makes the cheese softer, runny and even blue. And there you have it, the complicated science behind cheesemaking!
Gritti, Andrea (1455–1538), Doge of Venice, the last military figure to be elected as doge. As a young man he travelled overseas as a merchant, and on returning to Venice was appointed as a military provveditore. He fought in the campaign against the League of Cambrai, and in 1512 he was captured and taken as a prisoner to France. He returned to Venice the following year and was again employed as provveditore. He prepared a report on the defence of Venetian possessions on the terraferma; the report formed the basis of the programme to enhance fortifications for the rest of the century. Gritti was elected doge in 1523 after a career that had culminated in a series of army and naval commands. His tenure of office was marked by the relentless approach of the Ottomans. He died during the first Turkish War (1537–40). From The Oxford Dictionary of the Renaissance in Oxford Reference. Subjects: Early Modern History (1500 to 1700).
This 22-page unit will take your students back in time in a quest to explore the economic forces that led hunter-gatherers to settle and form early farming communities that later evolved into civilizations. The unit has everything you need to take your students on this adventure: answer keys, handouts, and instructions for each lesson and activity are all included. All the materials are ready for you to use in your class with no extra work required. It all begins with the reading activity “Putting Down Roots”, a fictional account of what life was like in early farming communities and the role that scarcity, innovation, and technology played in the agricultural revolution. Students will make predictions about the effects of the agricultural revolution on society, and then read a factual piece to compare against their predictions. In lesson two, a different spark will be added to the class as students play the “Wheat Game”, a practical simulation that takes them back to the Neolithic period and early farm life, where they effortlessly learn the role that surplus and technology played in the development of society while fighting for their own survival. This lesson also gives you excellent resources for setting up alternative learning centers in lieu of the simulation. Finally, students will learn about the characteristics of civilizations and then use numerous resources to determine whether or not the four societies in Mesopotamia, India, China, and Egypt meet the criteria to classify them as civilizations. Twenty-two pages of time travel, fun, exploration, and boundless learning. Are you up for it?
Our language is like a pearl inside a shell. The shell is like the people that carry the language. If our language is taken away, then that would be like a pearl that is gone. We would be like an empty oyster shell. Yurranydjil Dhurrkay, Galiwin’ku, North East Arnhem Land
Indigenous knowledge about the living power and presence of Indigenous languages creates visible boundaries that give us insight into what we have lost as individuals from a people, from a place – and how modern equivocating of spiritual practices or cultural re-imagining based in english is missing an integral component of their wholeness, connection, and honesty. This is a provocative understanding to take in. If deeply considered, it will raise unsettling questions about what we think and say we may be doing within a cultural or spiritual pathway like neopaganism or ancestral recovery if recovering Indigenous language and its ways of thinking and being is not a part of it. To prevent likely evasions of this understanding, a whole series of related quotes are shared below from Indigenous elders, activists, and teachers from different nations around the world. Please read and consider all of them.
- Indigenous language is a living part of the land itself. They cannot be separated.
- Indigenous language is the heart of an individual and the soul of a people.
- Indigenous language is what connects a people to their place – spiritually, culturally, and across time.
- Indigenous language is how Indigenous people communicate with life in their place.
- The language of spirit and ceremony in a place is its Indigenous language. It is not english.
- Without our Indigenous language, we are like empty shells that lack the means to access real cultural or spiritual connection in our current or ancestral homes.
- No matter how hard we may resist, we (including our children) will be indoctrinated into the cultural identities and value systems of eurocentric, colonial culture that live in the english language.
- To recover our lost Indigenous ways of thinking and being, we must relearn or recover our Indigenous languages.
The ideas that white people are somehow exceptional, or able to disregard the same guiding knowledge of life that has served Indigenous peoples of this world for so long, are attitudes grown from white supremacy and colonialism. The same rules DO apply to us, and the dire consequences of losing these understandings are evident all around us. English is NOT an Indigenous language. It is the language of eurocentric white supremacy, colonial occupation, and the melting pot of cultural erasure. When we ascribe equivalence between english and Indigenous language, we are lying to ourselves and inflicting trauma on others. We are committing an act of colonialism. We must rid ourselves of these false and genocidal attitudes and instead undertake deep reflection on what may be required for our lost people of european heritage to recover language as the means to re-connection, healing, and acting with authentic love and respect. We must find a way to be honest with ourselves about what we are actually creating, and the kinds of cultural depth, spiritual abilities, or ceremonial presence we may claim to possess. As an example, do we really understand, within the deeply rooted knowledge and philosophy of our Indigenous language and culture, what a word like “sacred” even means or how to respect it? Our ignorance and disregard has consequences.
We can find the integrity and determination to relearn and recover our Indigenous languages for the benefit of ourselves, our children, our land, and all life. While it won’t be done overnight, within language we can rediscover our connections and identities as people of a place, whose unique and beautiful ways of communicating with life are critical to the health and balance of the whole world. Hau gure bidea da – this is our path.
The Vikings from Iceland reached Labrador and the island of Newfoundland a thousand years ago. European exploration began in 1497 with the arrival of the Italian John Cabot, who first drew up a map of Canada's East Coast. In the mid-16th century, French exploration began with Jacques Cartier. Cartier heard two captured guides speak the Iroquoian word kanata, meaning "village." By 1550 the name "Canada" had started appearing on maps. Samuel de Champlain established settlements on the east coast and then built a fortress at what is now Quebec City. At the same time, English adventurers such as Henry Hudson were exploring the land, and Britain was establishing colonies in North America. By the 1700s, England and France were fighting over "Canada". Eventually Great Britain won, a victory marked in 1759 by the Battle of the Plains of Abraham at Quebec City.
Online Dental Education Library
Our team of dental specialists and staff strive to improve the overall health of our patients by focusing on preventing, diagnosing and treating conditions associated with your teeth and gums. Please use our dental library to learn more about dental problems and the treatments available. If you have questions or need to schedule an appointment, contact us.

What Is Tooth Decay?
Tooth decay has a variety of causes. In medical terms, cavities are called caries; they result from long-term destructive forces acting on tooth structures such as enamel and the tooth's inner dentin material. These destructive forces include frequent exposure to foods rich in sugar and carbohydrates. Soda, candy, ice cream—even milk—are common culprits. Left inside your mouth when you don't brush and floss, these materials break down quickly, allowing bacteria to do their dirty work in the form of a harmful, colorless, sticky substance called plaque. The plaque works in concert with leftover food particles in your mouth to form harmful acids that destroy enamel and other tooth structures. If cavities aren't treated early enough, they can lead to more serious problems requiring treatments such as root canal therapy. The best defense against cavities is good oral hygiene, including brushing with a fluoride toothpaste, flossing and rinsing. Your body's own saliva is also an excellent cavity fighter, because it contains special chemicals that rinse away many harmful materials. Chewing a good sugarless gum will stimulate saliva production between brushings. Special sealants and varnishes can also be applied to keep cavities from forming. If you have any of the following symptoms, you may have a cavity:
- Unusual sensitivity to hot and cold water or foods.
- A localized pain in your tooth or near the gum line.
- Teeth that change color.

Baby Bottle Tooth Decay
Baby bottle tooth decay is caused by sugary substances in breast milk and some juices, which combine with saliva to form pools inside the baby's mouth. If left untreated, this can lead to premature decay of your baby's future primary teeth, which can later hamper the proper formation of permanent teeth. One of the best ways to avoid baby bottle tooth decay is to not allow your baby to nurse on a bottle while going to sleep. Encouraging your toddler to drink from a cup as early as possible will also help stave off the problems associated with baby bottle tooth decay.
2022 Term 2 Maths Add/Sub: Stage 4 AC - EA Jess
In small groups students will work with their LA, using materials and moving to imaging, to build the knowledge areas described below. This will happen three times a week, with support from complementary lessons in mathsbuddy during SDL time.
- Use simple additive strategies with whole numbers.
- Know forward and backward counting sequences with whole numbers to at least 1000.
- Know the basic addition and subtraction facts.
- Know how many ones, tens, and hundreds are in whole numbers to at least 1000.
Equations and Expressions:
- Communicate and interpret simple additive strategies, using words, diagrams [pictures], and symbols.
Dairy calves fed milk from cows treated with antimicrobials have a higher probability of excreting resistant bacteria in their faeces than those that aren’t. This is one of the conclusions of an EFSA scientific opinion on the risk of antimicrobial resistance associated with feeding milk containing antimicrobial residues to calves. Feeding calves with milk containing residues of antimicrobials on the farm of origin is not generally prohibited in the EU, and national regulations on this practice are not harmonised. To get a grip on the situation, EFSA sent a questionnaire to all member states (MS) on the use of antibiotics in dairy in their particular country and the extent to which this milk is fed to dairy calves. Of the 24 MS responding to the questionnaire, 16 provided estimates of the proportion of farms that used milk from cows treated with antimicrobials as feed for calves, whereas eight MS provided no estimate. EFSA also conducted an extensive literature review on this topic. The results have recently been published in the EFSA Journal. Results from the questionnaire showed that France and Slovenia stated that using milk from treated cows for calves was ‘common practice’ for both male and female calves. Bulgaria stated that no farms use waste milk as feed for calves. Cyprus, Spain and Hungary estimated that milk from treated cows was used for male calves on all farms. In Denmark, Croatia, Italy, Luxembourg, Malta, the Netherlands, Slovakia, and the UK, it was estimated that milk from treated cows was used for both male and female calves on 4–100% of farms, depending on the country. Two MS provided more detailed information. In Finland, milk from treated cows is given to both male and female calves on all farms but only after treatment has been completed, i.e. during the statutory withdrawal period for human consumption. Milk from cows treated with benzylpenicillin can also be used during treatment if it is treated with β-lactamase to destroy potential residues. In Sweden, colostrum is given to both female and male calves on almost 90% of farms, and milk from cows treated during lactation is used during treatment and during the withdrawal period on 56% and 79% of farms, respectively. Feeding milk from treated cows can result in more resistant bacteria in the calves’ gut. In a recent national project from the Netherlands on antimicrobial resistance development in young calves (2013), Gonggrijp et al. (2015) investigated the compounds and concentrations found in 118 colostrum samples: 67% of the samples did not exceed the MRL concentration of any antimicrobial applied; 29% contained cloxacillin at a concentration above the MRL, with a median concentration of 86.5 μg/kg and a mean of 229.8 μg/kg; 3% exceeded the MRL concentration of ampicillin; and 1% exceeded that of penicillin. Other studies have found that an increased proportion of antimicrobial-resistant faecal bacteria are shed when calves are fed milk containing antimicrobial residues at subtherapeutic doses. In the case of colostrum, earlier studies found no effect of feeding calves colostrum from cows treated with penicillins and aminoglycosides at drying-off, or waste transition milk. This observation is limited to E. coli and to treatment with penicillins and aminoglycosides, and is not confirmed by other studies with other antimicrobials.
Based on the data gathered from the EU member states, combined with the existing knowledge and literature on the formation and shedding of antimicrobial-resistant bacteria, EFSA states that feeding milk and colostrum from cows treated with antibiotics can increase the probability of calves excreting resistant bacteria in their faeces. Combatting antimicrobial resistance is a priority for the EC, which launched a 5-year Action Plan in 2011 against the rising threats from AMR, based on a holistic approach in line with the ‘One Health’ initiative.
|Crop Knowledge Master|
Aspidiotus destructor (Signoret)
Jayma L. Martin Kessing, Educational Specialist
Ronald F.L. Mau, Extension Entomologist
Department of Entomology
Updated by: J.M. Diez April 2007
The coconut scale is a common pest of coconut and banana. It also infests many other trees and ornamental plants; some of its hosts include avocado, bird of paradise, breadfruit, ginger, guava, mango, mock orange, mountain apple, palm, papaya, pandanus, plumeria and sugarcane. See Williams and Watson (1988) for an extensive listing of hosts in the South Pacific area. The coconut scale is common to tropical and subtropical regions worldwide, especially on islands. It is found in American Samoa, Fiji, French Polynesia, Hawaii, Irian Jaya, New Caledonia, Papua New Guinea, Solomon Is., Sri Lanka, Vanuatu and Western Samoa. It was first found in the State on Oahu in 1968 and has since spread to Kauai, Hawaii and Maui. According to Taylor (1935), this scale disperses primarily with the aid of other creatures such as birds, insects and, as is the case in Fiji, bats. Accidental dispersal by human activities may occur through the transport of tropical nursery plants and goods made from plant material such as coconut leaf baskets (Taylor, 1935). Although little evidence exists, it is believed that another important mode of spread is by wind-blown crawlers. The coconut scale is among the most damaging of all armored scale insects (Beardsley, 1970). This pest is usually found in densely massed colonies on the lower surfaces of leaves, except in extremely heavy infestations where it may be present on both sides. It may also be found on petioles, peduncles and fruits. Mature scales are found on the older leaves. Infestations are typically associated with yellowing of the leaves in areas where the scales are present. The yellowing is caused by the removal of sap by the sucking mouthparts and the toxic effects of the saliva, which kills the surrounding tissues at the feeding site (Waterhouse & Norris, 1987). This yellowing is distinct from, and less mottled than, that caused by another armored scale, Chrysomphalus ficus (Beardsley, 1970). Scale insects belong to one of two types, the armored scales or the soft scales; the coconut scale is classified as an armored scale. Unlike other scales, armored scales do not produce honeydew (Beardsley and Gonzalez, 1975). Armored scales feed on plant juices, and feeding sites are usually associated with discolorations, depressions and other host tissue distortions (Beardsley and Gonzalez, 1975). These scales are protected by a distinct, hard, separable shell or scale over their delicate bodies (Metcalf, 1962). The shell is made of entangled threads of wax exuded from the body wall of the scale and discarded cast skins (the old skin shed during molts). Armored scales lose their legs and antennae after the first molt. Females are always wingless and remain under their scale their entire life. Males have one pair of membranous wings, move about actively in search of females and do not feed during the adult stage. Reproduction is by eggs in most cases, but a few species bear live young. Eggs are protected underneath the scale or shell of the mother insect until they hatch. All armored scales have essentially the same life history (Metcalf, 1962). The duration of the developmental stages varies with temperature. Life history studies were conducted in Fiji by Taylor (1935) at a mean temperature of 79°F on seedling coconuts.
Taylor found that the total life cycle of females, from egg to the beginning of oviposition, required 34-35 days. Complete development of males required 30-35 days (Taylor, 1935). There are 8-10 generations per year in tropical regions. Eggs are laid beneath the scale. Adult females shrink in size as they lay their eggs (Taylor, 1935). The female rotates as she deposits eggs, so that the eggs are arranged in concentric circles around her. Eggs are laid in batches of 3 or 4 at a time, in moderately quick succession, with an interval of several hours between batches (Taylor, 1935). In life history studies conducted in Java on coconut, 65 to 110 eggs were laid by each female, with an average of 90 (Taylor, 1935). The eggs are white when first laid and turn yellow after a few days. Because of this color change with age, the eggs in the outer concentric rings are yellow and those in the inner, younger rings are white (Taylor, 1935). Eggs hatch in the order in which they are laid (Taylor, 1935). Egg incubation lasts about as long as the egg-laying period, such that the outer eggs are hatching as the last eggs are laid. This allows the larvae from the inner rings to crawl freely past the remains of the older eggs (Taylor, 1935). After hatching, the young larvae push their way out from beneath the adult scale. The larvae, called crawlers, have well-developed legs and antennae and a pair of bristles at the tip of the abdomen (Waterhouse & Norris, 1987). They crawl over the leaf surface until they find a suitable feeding site where they attach themselves to the leaf. Once a feeding site has been selected, the scale will not move. The free-living stage lasts from 2 to 48 hours, but usually does not exceed 12 hours (Taylor, 1935). After the larvae have attached themselves to the leaf, they go through a period of rapid growth for 7-11 days before the first molt. All appendages are lost after the first molt. Up to the first molt there is no physical difference between the sexes; both are pale yellow. In the middle of the second larval stage, males become reddish brown and elliptical in shape, while females remain pale yellow and circular. From this point, the development of the sexes differs. The second larval stage lasts for 5-8 days in males and 8-10 days in females (Taylor, 1935). Males are full grown at the end of the second larval stage and become pupae after their second molt. The male pupal period lasts for 4-6 days. Females continue to grow for 8-9 days after their second molt and do not change in body shape. When the female stops growing, she begins to lay eggs and is then considered an adult (Taylor, 1935). The adult female is circular in shape and approximately 1/12 inch in diameter. Her orange-yellow body is visible beneath the milky-white, semitransparent, thin scale that covers her body. These scales are easily recognized because of their closely packed colonies and their resemblance to a fried egg, "sunny-side up" (Dekle, 1965). The female body color may also be greenish yellow depending on the food plant (Taylor, 1935). A female produces an average of 90 eggs over her lifetime on coconut (Taylor, 1935). The oviposition, or egg-laying, period lasts for 9 days (Taylor, 1935). Adult, winged males emerge from the pupae by pushing out from beneath the larval skins. They do not emerge from the protective scale for 2 days.
They are very minute, two-winged, yellowish insects, with antennae, eyes, three pairs of legs and a prominent long appendage projecting from the tip of the abdomen (Metcalf, 1962). Once out from beneath the scale, the males crawl actively about and occasionally fly in search of a female to mate with. The adult male does not feed and is short-lived. Reproduction occurs primarily through parthenogenesis, or reproduction without fertilization, in which females are able to produce both male and female progeny. Occasionally a female may lay eggs that produce only male scales. It is therefore assumed that fertilized reproduction may occur in some instances, most likely influenced by food availability (Taylor, 1935). Once the larvae have attached themselves to the leaf, scale formation occurs. Initially the scale appears as fine threads of silk. The larvae rotate as they produce silk until the matting of threads forms a thin continuous scale around their periphery. The scale covers the entire larva; this process takes about 12 hours (Taylor, 1935). After accidental introduction to various Pacific islands, this scale became a serious pest of coconuts. With the introduction of parasitoids and predators, its pest status has been greatly reduced (Waterhouse & Norris, 1987). Largely because of efficient parasitoids and predators, this scale is generally not a problem in Hawaii. Around 40 species of predators and parasites have been reported to attack the coconut scale in areas outside of Hawaii (Beardsley, 1970). Several of these natural enemies have been used to successfully control outbreaks of this scale (Sweetman, 1958). The coccinellid beetle Cryptognatha nodiceps (Marshall) was introduced to Fiji and effectively controlled this scale (Taylor, 1935). Another coccinellid beetle, Chilocorus politus (Mulsant), has also been effective in Mauritius and Indonesia (Beardsley, 1970). Ladybird beetles have been very effective throughout the Tropics, especially Pseudoscymnus anomalus, Cryptognatha nodiceps, C. gemellata, Rhyzobius satelles, Chilocorus nigritus and C. malasiae (the last four are non-specific predators) (Waterhouse & Norris, 1987). Cryptognatha nodiceps has a long adult life, high reproductive capacity, good dispersal abilities, and is a voracious predator with a preference for the coconut scale (Waterhouse & Norris, 1987). Pseudoscymnus anomalus is a specific predator and requires only 14 days for development to adulthood, much quicker than the development of other predators (Waterhouse & Norris, 1987). Refer to Waterhouse and Norris (1987) for an extensive list of natural enemies of the coconut scale. In Hawaii, two coccinellid beetles, Telsimia nitida (Chapin) and Lindorus lophanthae (Blaisdell), were introduced to control other scale pests and are the principal predators of the coconut scale (Beardsley, 1970). Other successful introductions include Chilocorus nigritus and Pseudoscymnus anomalus from Guam in 1970 (Waterhouse & Norris, 1987). Other natural enemies in Hawaii are the predaceous thrips Aleurodothrips fasciapennis (Franklin) and two minute aphelinid wasps that are internal parasites of the coconut scale (Beardsley, 1970). Chemicals used on scales are usually the same as those used on mealybugs and may include diazinon, dimethoate, formothion, malathion and nicotine (Copland and Ibrahim, 1985). As with all chemicals, consult the label or a pesticide database to determine which chemicals may be used on specific crops.
Special care should be taken with chemically sensitive plants (Copland and Ibrahim, 1985). There is no listing for malathion; diazinon and dimethoate are not labelled as of April 2007. Sprays are only effective on the crawler stage of scales, and control is difficult at other life stages. Adults are firmly attached to the plant and remain so after their death, which may give a false impression of the pest status (Copland and Ibrahim, 1985). Chemical applications should be used only when parasites are not economically effective; the application of pesticides may kill natural enemies of the scale and result in a resurgence of the pest.
Beardsley, J. W. 1970. Aspidiotus destructor Signoret, an Armored Scale Pest New to the Hawaiian Islands. Proc. Hawaii. Entomol. Soc. 20: 505-508.
Copland, M. J. W. and A. G. Ibrahim. 1985. Chapter 2.10: Biology of Glasshouse Scale Insects and Their Parasitoids. pp. 87-90. In: Biological Pest Control: The Glasshouse Experience. Eds. Hussey, N. W. and N. Scopes. Cornell University Press, Ithaca, New York.
Dekle, G. W. 1965. Florida Armored Scale Insects. Florida Department of Agriculture, Gainesville. 265 pp.
Elmer, H. S. and O. L. Brawner. 1975. Control of Brown Soft Scale in Central Valley. Citrograph 60(11): 402-403.
Metcalf, C. L. 1962. Scale Insects. pp. 866-869. In: Destructive and Useful Insects: Their Habits and Control. McGraw-Hill Book Company, New York. 1087 pp.
Swain, G. 1969. The Coconut Stick Insect Graeffea crouani Le Guillou. Oleagineux 24: 75-77.
Sweetman, H. L. 1958. The Principles of Biological Control. Wm. C. Brown Co., Dubuque, Iowa. 560 pp.
Taylor, T. H. C. 1935. The Campaign Against Aspidiotus destructor, Sign., in Fiji. Bull. Entomol. Res. 26: 1-102.
Waterhouse, D. F. and K. R. Norris. 1987. Chapter 8: Aspidiotus destructor Signoret. pp. 62-71. In: Biological Control: Pacific Prospects. Inkata Press, Melbourne. 454 pp.
Williams, D. J. and G. W. Watson. 1988. Aspidiotus destructor Signoret. pp. 53-56. In: The Scale Insects of the Tropical South Pacific Region, Part 1: The Armored Scales (Diaspidae). The Cambrian News Ltd. 290 pp.
During COVID-19, youth sports activities have started up again in many parts of the United States. Coaches, teams and parents can take precautions to help decrease the risk of young athletes getting COVID-19, including wearing masks, cleaning and disinfecting, and even using new technology.

How can we keep children playing sports safe? One place to look is the article “Youth Sports and COVID-19: Understanding the Risks” from HealthyChildren.org, run by the American Academy of Pediatrics. The article says that safety precautions need to be in place for practices and games, including face coverings, the use of hand sanitizers and more. The article also notes that different sporting events carry different risks. Individual sports that allow for six to eight feet between competitors carry a lower risk than team sports with frequent contact. Sports that enable physical distancing are the equivalent of the social distancing generally recommended by immunology and medical experts to keep people free from COVID-19. The article explains that small teams carry less risk than large teams. Traveling also increases the risk of becoming infected, as children become enclosed in small vehicles or buses, flow through common areas, or visit other towns or schools. When players share equipment, they increase their chances of catching the virus, the article warns. Children should try to bring and use their own equipment, and if that's not possible, equipment should be thoroughly cleaned and disinfected or wiped down with sanitizing wipes between uses. The article also reminds parents that indoor sports carry a higher risk than outdoor sports. The better the area is ventilated, the safer it is for the children, with large fields and outdoor areas offering the best possible playing area.

CDC Guidelines for Youth Sports During COVID-19
The CDC has excellent advice as well. It recommends that you “make a game plan to reduce risk" during the COVID-19 pandemic. Here’s one recommendation: “If organizations are not able to keep safety measures in place during competition (for example, keeping participants six feet apart at all times), they may consider limiting participation to within-team competition only (for example, scrimmages between members of the same team) or team-based practices only.” It also has these recommendations for young athletes:
- Bring supplies to help you and others stay healthy—for example, masks (bring extra), hand sanitizer with at least 60% alcohol, broad spectrum sunscreen with SPF 15 or higher, and drinking water.
- Prioritize participating in outdoor activities over indoor activities and stay within your local area as much as possible.
- If using an indoor facility, allow previous groups to leave the facility before entering with your team. If possible, allow time for cleaning and/or disinfecting.
- Check the league’s COVID-19 prevention practices before you go to make sure they have steps in place to prevent the spread of the virus.
- If you are at an increased risk for severe illness or have existing health conditions, take extra precautions and preventive actions during the activity or choose individual or at-home activities.

COVID-19 Youth Sports Safety from HealthyChildren.org
Also worth checking out is “Youth Sports Participation During COVID-19: A Safety Checklist,” published by HealthyChildren.org.
It’s a great checklist that offers advice for before the season begins, prior to practices and games, during games and practices, and after games and practices. It also has advice on what to do if an athlete has contracted COVID-19.

Screening is important as well
In addition to all that, screening youth athletes for COVID-19 is critical. Many schools and athletic associations require temperature checks or proof of a negative COVID-19 test before a child participates in sports activities. Another effective way to screen children for sports is by using a COVID-19 screening app. Coronavirus screening apps require users to answer questions about their current health condition, any symptoms they may have, where they're located or whether they've traveled recently, and whether they've been exposed to any known COVID-19 patients or at-risk areas. Based on the answers supplied by the user, the app recommends or certifies whether the child is safe to attend or participate in a youth sports activity.

COVID Screening App for Youth Sports
A great COVID screening app is AlphaMED COVIDCARE, a wellness mobile app built to help school districts reopen schools safely during the coronavirus pandemic. The public health app is the first written by a physician on the front line of COVID-19 diagnosis and care in New York City. The app adapts to changing Centers for Disease Control (CDC) and state guidelines. It also takes into account up-to-date global guidelines on the changing nature of the virus, so it can assess newly identified symptoms or risk factors. Teachers and faculty, or students and their parents, log into the app each morning and answer questions about their COVID risk factors and any current symptoms; the app then certifies whether they can attend school that day. The app is currently in use by school districts to help schools reopen safely. The AlphaMED COVIDCARE screening app can easily be adopted by coaches, teams and youth athletes to assess their risk of having COVID-19 and certify them to participate in sports each day. See an example of another COVID screening app currently in use by professional rugby leagues, or assess your risk of having or developing COVID-19 with Alpha Software's free COVID Risk Assessment app.
Red blood cells must maintain a distinct dimpled shape as they travel through the body, returning to form even after pushing through narrow capillaries. Misshapen red blood cells are associated with disorders such as sickle cell anemia, and though red blood cells are well studied, there are still unanswered questions about how healthy ones properly maintain their shape. A new study from The Scripps Research Institute, published in the Proceedings of the National Academy of Sciences, demonstrated that a protein called myosin IIA plays a key role in the process.

Sickle cell anemia
Globally, approximately 300,000 children are born with sickle cell anemia every year. In the United States, the disease disproportionately affects African-Americans; according to the Centers for Disease Control and Prevention, sickle cell disease occurs in about 1 out of every 365 African-American births. There is currently no cure for the disorder. In sickle cell anemia, red blood cells are deformed from their typical rounded shape into crescent moons or sickles. These misshapen cells are rigid and sticky, causing them to get stuck within blood vessels and impeding the flow of oxygen throughout the body. As a result, patients don’t get enough oxygen to feel energized. Symptoms include fatigue, pain and frequent infections.

Myosin is key
The question is: how do healthy red blood cells maintain their distinct dimpled shapes? Do they passively bounce back into shape – or is it a mechanical process in which the cell membrane actively contracts and relaxes to sustain its shape? The Scripps research team found that red blood cells use a protein called myosin IIA to actively regulate their shape. Red blood cell myosin IIA – which is related to the protein that drives muscle contraction in other parts of the body – assembles into barbell-shaped structures called filaments. Both ends of the filaments then pull on red blood cell actin to regulate the stiffness of the cell membrane. “You need active contraction on the cell membrane, similar to how muscles contract,” senior author Dr. Velia Fowler said in a press release. “The myosin pulls on the actin to provide tension in the membrane, and then that tension maintains the biconcave shape.” The team also treated the cells with a compound called blebbistatin, which stops myosin from working properly. The treated cells lost their shape, confirming that myosin IIA is key to maintaining red blood cell shape. This new insight is an important step toward better understanding diseases in which red blood cells are deformed, including sickle cell anemia. The researchers hope their findings could eventually lead to new treatments; for example, by inhibiting myosin IIA, it might be possible to restore some of the elasticity red blood cells lose in sickle cell anemia. Up next, the team plans to continue studying the regulation of myosin IIA’s activity in red blood cells, in particular how myosin IIA filaments are phosphorylated to become more stable.
The focal length is a measure of how a lens converges light. It can be used to determine the magnification factor of the lens and, given the size of the sensor, to calculate the angle of view. A standard reference used for comparisons is the 35 mm format, which is a sensor of size 36×24 mm. A standard wide angle lens has a focal length of around 28 to 35 millimeters in 35 mm format terms. The smaller the number, the wider the lens.

The native focal length of the sensor cannot be used for comparisons between different cameras unless they have the same sensor size. Therefore, the focal length in 35 mm terms is a better reference. For the same sensor, the smaller the number, the wider the lens.

Indicates the type of image stabilization this lens has:

The horizontal field of view in degrees this lens is able to capture, when using the maximum resolution of the sensor (that is, matching the sensor aspect ratio, and not using sensor cropping).

The vertical field of view in degrees this lens is able to capture, when using the maximum resolution of the sensor (that is, matching the sensor aspect ratio, and not using sensor cropping).

Shows the magnification factor of this lens compared to the primary lens of the device (calculated by dividing the focal length of the current lens by the focal length of the primary lens). A magnification factor of 1 is shown for the primary camera; ultra-wide cameras have magnification factors less than 1, and telephoto cameras have magnification factors greater than 1.

Physical size of the sensor behind the lens in millimeters. All other factors being equal (especially resolution), the larger the sensor, the more light it can capture, as each physical pixel is bigger.

The size (side) of an individual physical pixel of the sensor in micrometers. All other factors being equal, the larger the pixel size, the better the image quality: each photoreceptor can capture more light and can potentially better differentiate the signal from the noise, yielding better image quality, especially in low light.

The maximum picture resolution at which this sensor outputs images in JPEG format. Sometimes, if the sensor can also provide images in RAW (DNG) format, those can be slightly larger because of an additional area used for calibration purposes (among others). Unfortunately, firmware restrictions for third-party apps also mean that the maximum picture resolution exposed to third-party apps might be considerably lower than the actual resolution of the sensor; the resolution shown here is therefore the maximum resolution third-party apps can access from this sensor.

The available output picture formats this camera is able to deliver:

The focusing capabilities of this camera:

It displays whether this lens can be set to focus at infinity or not. Even if the camera supports autofocus and manual focus, it may happen that the focus range the lens can adjust to does not include the infinity position. This property is important for astrophotography, as in such low-light scenarios the automatic focus does not work reliably.

The distance from which objects that are further away from the camera always appear in focus.
Therefore, if the camera is set to focus at infinity, any object further away than this distance will appear in focus.

The range of supported manual exposure times in seconds (minimum or shortest to maximum or longest). This camera might support exposures outside this range, but only in automatic mode and not in manual exposure mode. Also, note that this range is the one third-party apps have access to; the first-party app preinstalled on the phone by the manufacturer might have privileged access to the hardware and offer longer or shorter exposure times.

The range of supported manual sensitivity (ISO). This camera might support ISO sensitivities outside this range in automatic mode. Also, note that this range is the one third-party apps have access to; the first-party app preinstalled on the phone by the manufacturer might have privileged access to the hardware and offer an extended manual sensitivity range.

The maximum ISO sensitivity possible in manual mode is usually reached by digitally amplifying the signal from the maximum supported analog sensitivity. This information, if available, lets you know the maximum analog sensitivity of the sensor.

The data in this database is provided "as is", and FGAE assumes no responsibility for errors or omissions. The User assumes the entire risk associated with its use of these data. FGAE shall not be held liable for any use or misuse of the data described and/or contained herein. The User bears all responsibility for determining whether these data are fit for the User's intended use.
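Both conversions described in these notes — the 35 mm equivalent focal length (via the ratio of sensor diagonals) and the magnification factor relative to the primary lens — are simple ratios. A minimal sketch, with all sensor and focal-length values chosen purely as illustrative assumptions:

```python
import math

def equivalent_focal_35mm(focal_mm: float, sensor_w_mm: float, sensor_h_mm: float) -> float:
    # Crop factor = diagonal of the 35 mm format (36x24 mm) over the sensor diagonal.
    crop_factor = math.hypot(36.0, 24.0) / math.hypot(sensor_w_mm, sensor_h_mm)
    return focal_mm * crop_factor

def magnification_factor(focal_35mm: float, primary_focal_35mm: float) -> float:
    # 1 for the primary lens, <1 for ultra-wide cameras, >1 for telephotos.
    return focal_35mm / primary_focal_35mm

# Hypothetical values: a 4.2 mm lens on a ~6.4 x 4.8 mm sensor.
eq = equivalent_focal_35mm(4.2, 6.4, 4.8)
print(round(eq))                                  # ~23 mm: wide angle in 35 mm terms
print(round(magnification_factor(13.0, eq), 2))   # hypothetical 13 mm ultra-wide: ~0.57
```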
Students enter words in the gaps, based on the context within a given article, individually or collaboratively. This activity helps improve your vocabulary, sentence structure and communication skills.
Type: Individual or Group collaboration
Instructions: Click on a gap and type in a word. Click on the light bulb icon (if any) for help.
The words of sentences are scrambled and students must sort them into their original order. This activity helps you study sentence structure by providing genuine text and allowing you to select suitable materials to practice on.
Instructions: Put the bold words in the correct order by dragging and dropping them into the correct position.
This activity is for image collections only. A randomly chosen image is shown to one player (called the "describer"), while the other player (the "guesser") must identify it by asking questions. This activity helps improve your communication skills and vocabulary.
Type: Collaboration in pairs
Instructions: The "describer" sees a single image and describes it to their partner through the chat box. Based on what their partner says, the "guesser" selects one of the images by double-clicking. Both score a point if it is the correct image. If a timer is shown, the "guesser" must make their choice before time runs out.
Students collaborate to predict words they think will occur in a given text. This activity provides a learning environment in which you help each other by sharing information and exchanging ideas.
Type: Group collaboration
Instructions: In the text box, type your guesses of what words you think might be in the article. Use the title and/or image to help you think of words.
The two most common types of encryption algorithm used in modern cryptography are the block and stream ciphers. The block cipher uses a deterministic algorithm that conducts operations on fixed-length groupings of bits, or blocks. By using a transformation specified by a symmetric key, a block cipher is able to encrypt bulk data, and is one of the basic components of many cryptographic protocols in use today. A stream cipher, on the other hand, takes plaintext characters or digits and combines them with a pseudo-random cipher digit stream, or keystream.

Block Cipher Background
The block ciphers found in use today are based on the iterated product cipher concept. Product ciphers were first discussed, and later analyzed, in 1949 by Claude Shannon. The iterated product cipher concept entails conducting encryption operations over multiple rounds, each of which uses a different subkey derived from the primary or original key of the cipher. One of the best-known implementations of this concept is the Feistel network, named for Horst Feistel and used in the widely employed DES cipher. The United States National Bureau of Standards (since rebranded as the National Institute of Standards and Technology, or NIST) published the DES cipher in 1977. This publication was instrumental in helping the public understand how modern ciphers worked, and it helped spur the growth of cryptanalysis in the public domain and academia. That work produced the various attack methods that new block ciphers have to be tested against today. Secure block ciphers remain suitable for encrypting one block of information using a fixed key. Numerous modes of operation have been developed to allow their repeated use in secure channels in order to achieve authenticity and confidentiality. Block ciphers have also been used as the foundation of more complex cryptographic protocols, including pseudo-random number generators and universal hash functions.

What is a Block Cipher?
Block ciphers comprise two paired algorithms: one for encryption (E) and one for decryption (D). Each algorithm accepts two inputs: 1 – a key of K bits, and 2 – an input block of N bits; from these it produces an output block of N bits. The decryption algorithm is defined as the inverse of the encryption function; formally, D = E⁻¹.

Block Cipher Modes of Operation
When employing a block cipher in a stand-alone fashion, there is a limitation of only being able to encrypt a single block of data of the cipher's block length. For variable-length messages, information has to be split into separate blocks of data appropriate for the block cipher.

Electronic Codebook Mode
The simplest method of running a block cipher is the electronic codebook (ECB) mode. In this scheme, the message to be encrypted is first broken up into blocks of data equivalent to the cipher block size. If the final fragment is shorter than this length, padding can be used to fill the entire block. This method is insecure against modern cryptanalysis, since equal plaintext blocks always create equal ciphertext blocks under the same key. As a result, patterns from the plaintext message can be detected in the ciphertext output and ultimately exploited.
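To make the ECB weakness concrete, here is a toy sketch in Python. The keyed block transformation is a stand-in (a truncated hash, not a real or invertible cipher); it exists only to show that any deterministic block transformation used in ECB mode maps identical plaintext blocks to identical ciphertext blocks:

```python
# Toy demonstration of ECB pattern leakage -- not real cryptography.
import hashlib

BLOCK = 8  # assumed block size in bytes

def toy_encrypt_block(key: bytes, block: bytes) -> bytes:
    # Deterministic keyed transformation standing in for E(K, block).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    assert len(plaintext) % BLOCK == 0, "ECB needs whole blocks (pad first)"
    return b"".join(toy_encrypt_block(key, plaintext[i:i + BLOCK])
                    for i in range(0, len(plaintext), BLOCK))

key = b"secret!!"
msg = b"ATTACK!!ATTACK!!RETREAT!"  # blocks 1 and 2 are identical
ct = ecb_encrypt(key, msg)
blocks = [ct[i:i + BLOCK] for i in range(0, len(ct), BLOCK)]
print(blocks[0] == blocks[1])  # True: equal plaintext blocks -> equal ciphertext
print(blocks[0] == blocks[2])  # False
```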
Overcoming Electronic Codebook Mode Limitations
In order to overcome the limitations associated with ECB, several other block cipher modes of operation have been developed. The overarching concept of these modes is to randomize the plaintext information based on an additional input value, commonly referred to as an initialization vector, in order to create probabilistic encryption. In the cipher block chaining (CBC) mode, an initialization vector is transmitted along with the message. Its value must be pseudo-random or random, and it is combined with the first plaintext block using an XOR operation prior to the initial encryption operation. The ciphertext output from the first encryption block is subsequently used as the initialization vector for the next plaintext block to be encrypted. The OFB (output feedback) mode repeatedly encrypts the initialization vector to create a keystream, emulating a synchronous stream cipher. The CTR (counter) mode also makes use of a keystream, but the required variability is created by using the initialization vector as a block counter; this counter is encrypted for each block of plaintext that requires encryption.

How Does Block Cipher Padding Work?
Some block cipher modes, such as CBC, only work when provided with complete plaintext blocks of data. Simply extending the message with zero bits is insufficient, since a receiver cannot differentiate between messages that differ only in their number of trailing zero bits. The use of zero bits also gives an attacker an opening to use the efficient padding oracle attack. As a result, a padding scheme is required that unambiguously extends the plaintext to the cipher's block length. Although many schemes have proven susceptible to padding oracle attacks, padding method 2 defined by ISO/IEC 9797-1 has been shown to be secure against them. This method appends a single one bit and then extends the final block with zero bits.
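A minimal sketch of the ISO/IEC 9797-1 padding method 2 just described: append a single one bit (0x80 at the byte level) and fill the final block with zero bits. The 8-byte block size is an arbitrary assumption for the example:

```python
BLOCK = 8  # assumed block size in bytes

def pad_method2(data: bytes, block: int = BLOCK) -> bytes:
    # Append the 0x80 delimiter (a one bit), then zero bytes to the boundary.
    padded = data + b"\x80"
    return padded + b"\x00" * (-len(padded) % block)

def unpad_method2(padded: bytes) -> bytes:
    # Strip the trailing zero bytes, then the 0x80 delimiter itself.
    return padded.rstrip(b"\x00")[:-1]

assert unpad_method2(pad_method2(b"YELLOW SUBMARINE!")) == b"YELLOW SUBMARINE!"
print(pad_method2(b"ABC"))  # b'ABC\x80\x00\x00\x00\x00'
```

Because the padding always ends with the 0x80 delimiter followed only by zeros, the receiver can remove it unambiguously, which is exactly what zero-only padding fails to provide.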
Famous Block Ciphers

DES and Lucifer
The first civilian block cipher is generally recognized to be the Lucifer cipher, created at IBM in the 1970s and based on Horst Feistel's work. This algorithm was subsequently revised and adopted as the U.S. Federal Information Processing Standard known as DES (Data Encryption Standard). The United States National Bureau of Standards (NBS) selected the algorithm after making a very public invitation for submissions from industry and the public. After the NBS (and allegedly the National Security Agency) made internal changes to the algorithm, DES was released to the public in 1976. The DES algorithm was designed to resist attacks that, at the time of publication, were known only to the NSA and IBM. These attacks would be rediscovered and published by Eli Biham and Adi Shamir in the late 1980s under the name differential cryptanalysis, which remains one of the most effective attacks against block ciphers today. Another method used to attack block ciphers is linear cryptanalysis, but it is not known whether this method was known to the NSA prior to its publication by Mitsuru Matsui.

The publication of DES resulted in a significant number of publications in the cryptography field and helped inspire new cipher designs in both industry and government circles. The DES cipher has a block size of 64 bits and a key size of only 56 bits. The 64 bit block size became the de facto standard in block ciphers subsequently created and modeled on the DES algorithm. The 56 bit key size was mandated by the government and would ultimately prove crackable: the Electronic Frontier Foundation demonstrated a successful brute-force attack in 1998. As a result, DES was extended through the release of Triple DES, in which each block is encrypted three times, either with three independent keys (a 168 bit key, with an effective security of about 112 bits) or with two keys (a 112 bit key, with an effective security of about 80 bits). Industry widely adopted Triple DES as the replacement for single DES. At the time of this writing, Triple DES is considered secure; however, NIST does not recommend using the two-key version of the algorithm in the wild due to its reduced (roughly 80 bit) security level.

IDEA (International Data Encryption Algorithm) is a block cipher that was first described in 1991 by James Massey and Xuejia Lai as a potential replacement for DES. The IDEA algorithm uses a 128 bit key and works on 64 bit blocks of information. It consists of eight identical rounds of transformation plus an output transformation referred to as a half-round. The encryption and decryption processes are similar, and the security of the cipher is aided by interleaving operations from different algebraic groups: modular multiplication, modular addition, and XOR.

Ronald Rivest designed the RC5 block cipher in 1994. A unique difference in RC5 compared to other block ciphers is that it uses a variable key size (0 to 2040 bits) as well as a variable block size (32, 64, or 128 bits). The cipher is also designed to have a variable number of rounds, ranging from zero to 255. The originally published parameters were a 128 bit key, a 64 bit data block, and 12 rounds of encryption. Today, 18-20 rounds are considered necessary to avoid being susceptible to a differential attack using chosen plaintexts. The overall structure of the RC5 algorithm resembles a Feistel network, and the encryption and decryption routines can be specified in a few lines of code. The key schedule expands the primary key through the use of a one-way function based on the binary expansions of both e and the golden ratio.

DES was ultimately succeeded by NIST in 2001 with the Advanced Encryption Standard (AES). The AES algorithm was created by Vincent Rijmen and Joan Daemen under the original submission name Rijndael. The published cipher supports key sizes of 128, 192, or 256 bits and a fixed block size of 128 bits. The original Rijndael algorithm could use any block and key size that was a multiple of 32 bits, with a minimum size of 128 bits. AES conducts operations on a 4×4 column-major order matrix of bytes called the state. The cipher uses the key size to determine the total number of transformation rounds used to convert plaintext to ciphertext:
10 cycles of repetition for 128-bit keys.
12 cycles of repetition for 192-bit keys.
14 cycles of repetition for 256-bit keys.
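As a usage-level illustration of AES (here in CBC mode), the sketch below uses the third-party Python cryptography package; the package choice and the freshly generated key and IV are assumptions for the example, not something prescribed by the text:

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)  # 256-bit key: AES performs 14 rounds internally
iv = os.urandom(16)   # AES block size is fixed at 128 bits; CBC needs a fresh IV
plaintext = b"sixteen byte msg"  # exactly one block, so no padding is needed here

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == plaintext
```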
Each of these rounds consists of several processing steps, including one that depends on the encryption key itself. When decrypting ciphertext, reverse rounds are applied to transform the ciphertext back to the original plaintext, using the same key for both operations. Stream Cipher Background Stream ciphers are symmetric-key ciphers in which plaintext is combined with a pseudorandom cipher digit stream, also known as a keystream. Stream ciphers encrypt plaintext digits "one at a time," combining each with the corresponding digit of the keystream. The result is the corresponding digit of the ciphertext stream. Another name for the stream cipher is the state cipher, since the encryption of each digit depends on the current state of the cipher. Typically a digit is a bit and the combining operation is XOR. Pseudorandom keystreams are normally created from a random seed value using digital shift registers. The seed value also functions as the key for decrypting the cipher stream. Stream ciphers thus represent a different approach to encrypting and decrypting information than block ciphers. To avoid being cracked, a stream cipher should never use the same seed twice, or an adversary may be able to break the encryption. What are the Types of Stream Ciphers? Stream ciphers create successive elements of the keystream based on their internal state. If the state is updated independently of the plaintext and ciphertext messages, the cipher is a synchronous stream cipher. Self-synchronizing stream ciphers, on the other hand, update their state based on previous ciphertext digits. Synchronous Stream Ciphers Synchronous stream ciphers use a stream of pseudo-random digits created independently of the ciphertext and plaintext messages. These digits are subsequently combined with the plaintext for encryption or with the ciphertext for decryption. In the most common implementation of the synchronous stream cipher, binary digits are used and the keystream is combined with the plaintext using the XOR operation; this construction is termed a binary additive stream cipher. For synchronous stream ciphers, the sender and receiver must remain exactly in step for decryption of the ciphertext to be successful. If synchronization between the sender and receiver is lost, there are a few approaches to resynchronize the two stations. One is to systematically try various offsets until synchronization is achieved. Another approach is to tag the ciphertext with markers at set points in the cipher output. If a digit is corrupted in transmission, only the corresponding digit of the plaintext is corrupted, and the error does not affect the remainder of the message; this property is useful when the transmission error rate is high. However, as a result of this same property, synchronous stream ciphers are very susceptible to active attacks by adversaries with access to the stream. Self-Synchronizing Stream Ciphers The self-synchronizing stream cipher uses a number N of the previous ciphertext digits to compute the keystream. This scheme is known as the self-synchronizing stream cipher, or ciphertext autokey (CTAK). The concept was originally patented in 1946 and allows the receiver to automatically synchronize with the keystream generator after receiving N ciphertext digits.
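A binary additive stream cipher of the synchronous kind can be sketched in a few lines. The keystream generator below simply hashes a seed plus a counter; that construction is an assumption chosen for readability, not a vetted cryptographic keystream generator.

```python
import hashlib

def keystream(seed: bytes, length: int) -> bytes:
    """Generate a deterministic pseudorandom keystream from a seed by
    hashing a counter. Illustrative only -- not a secure construction."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_stream(message: bytes, seed: bytes) -> bytes:
    """Binary additive stream cipher: XOR each message byte with the
    corresponding keystream byte. Encryption and decryption are identical."""
    ks = keystream(seed, len(message))
    return bytes(m ^ k for m, k in zip(message, ks))

ct = xor_stream(b"hello stream", b"seed-1")   # encrypt
pt = xor_stream(ct, b"seed-1")                # decrypt: same operation
assert pt == b"hello stream"
```

Because the keystream depends only on the seed and a counter, the generator runs independently of the ciphertext, which is exactly what makes the cipher synchronous; it also makes seed reuse fatal, as noted above.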
This makes it easier for the sender or receiver to recover if digits are dropped from or added to the message stream, and a single-digit error has only a limited overall effect. A block cipher operating in CFB (cipher feedback) mode is an example of a self-synchronizing stream cipher. RC4 is the most widely used stream cipher in software throughout the world. Other stream ciphers include: A5/1, A5/2, Chameleon, FISH, Helix, ISAAC, MUGI, Panama, Phelix, Pike, SEAL, SOBER, SOBER-128 and the WAKE cipher.
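Full-block CFB, the self-synchronizing example mentioned above, derives each keystream block from the previous ciphertext block. A minimal sketch, reusing the toy block cipher idea from earlier (the 8-byte block size and `toy_encrypt` stand-in are assumptions, not a real cipher):

```python
def cfb_encrypt(plaintext: bytes, key: bytes, iv: bytes, block_size: int = 8) -> bytes:
    """Full-block CFB: keystream_i = E(key, ciphertext_{i-1})."""
    def toy_encrypt(block, key):
        return bytes(b ^ k for b, k in zip(block, key))  # stand-in, NOT secure
    previous = iv
    out = bytearray()
    for i in range(0, len(plaintext), block_size):
        chunk = plaintext[i:i + block_size]
        ks = toy_encrypt(previous, key)
        ct = bytes(c ^ k for c, k in zip(chunk, ks))
        out += ct
        if len(ct) == block_size:
            previous = ct            # the ciphertext itself carries the state
    return bytes(out)

def cfb_decrypt(ciphertext: bytes, key: bytes, iv: bytes, block_size: int = 8) -> bytes:
    """Decryption mirrors encryption: the keystream comes from prior
    ciphertext, so a receiver resynchronizes after one full correct block."""
    def toy_encrypt(block, key):
        return bytes(b ^ k for b, k in zip(block, key))
    previous = iv
    out = bytearray()
    for i in range(0, len(ciphertext), block_size):
        chunk = ciphertext[i:i + block_size]
        ks = toy_encrypt(previous, key)
        out += bytes(c ^ k for c, k in zip(chunk, ks))
        if len(chunk) == block_size:
            previous = chunk
    return bytes(out)

iv = b"\x00" * 8
assert cfb_decrypt(cfb_encrypt(b"resync after N digits", b"8bytekey", iv),
                   b"8bytekey", iv) == b"resync after N digits"
```

The design point is visible in the state update: because `previous` is always the last ciphertext block, a receiver that misses or corrupts digits falls back into step once it has seen one full block of correct ciphertext.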
1 Introduction to Fluidised Bed Reactor Design for Pyrolysis Pyrolysis is the thermal conversion of organic matter in the absence of oxygen (in this design, with the aid of a catalyst), typically to produce liquid fuel. The products of biomass pyrolysis include bio-oil, biochar and Non-Condensable Gases (NCG) comprising various proportions of hydrogen, carbon monoxide and carbon dioxide, depending on the biomass selected (1). The condensed gases collected during pyrolysis yield the final desired product, bio-oil. Pyrolysis has been drawing increasing attention due to its high efficiency and reduced environmental impact compared with the crude oil sector, as it creates an opportunity to recover energy from waste materials. Various raw materials may be pyrolysed; for example, used tyres are pyrolysed in some regions of the world. In this article, the pyrolysis of agricultural residues will be analysed. The bio-oil collected during pyrolysis contains approximately 40 MJ kg-1, similar to the energy content of other commercially produced fuels such as crude oil, diesel and petrol, which contain 45.5 MJ kg-1, 45.8 MJ kg-1 and 46.6 MJ kg-1 respectively (2). During pyrolysis, woody biomass is converted according to the reactions illustrated below: Biochar, produced during pyrolysis as a by-product, acts as a soil enhancer, providing numerous nutrients to the soil and thus improving crop yield. Excess biochar can be sold to the agricultural sector to recover costs. As a result, it is recommended that biochar generated be sold as a soil amendment (3). To minimise the production of char and gases, the thermal degradation process and conditions should be optimised for maximum bio-oil production in fast pyrolysis (4). For optimum reaction conditions, the following is recommended (5):
- moderate temperatures (optimal temperature taken as 500 °C)
- rapid heating of biomass particles
- short residence time of the pyrolysis vapours
- fast quenching of pyrolysis vapours to condense the bio-oil
Fast pyrolysis typically results in a product distribution of 75 wt.% bio-oil, 12 wt.% char and 13 wt.% gases (5). The bio-oil generated has the following properties: low pH, low heating value, poor volatility, high viscosity and high oxygen content. The quality of the bio-oil can be enhanced by using a catalyst during pyrolysis (6). Char is a solid by-product consisting of carbon, oxygen, hydrogen and nitrogen. The char yield depends on the pyrolysis temperature, and yields can vary from 10 – 20 wt.%. Coke is formed on the surface of the catalyst during fast pyrolysis. This catalytic coke can deactivate the catalyst and, as a result, should be removed; this is done by burning it away (1). 2.1 Catalyst selection Numerous investigations have concluded that catalytic fast pyrolysis optimises bio-oil yield and quality and enhances the NCG emitted. The amount of char produced is also decreased, which minimises the instability or ageing of the bio-oil (7). Because the pyrolysis reaction is endothermic, adding a catalyst decreases the overall process costs and energy consumption, as it lowers the required reaction temperature. A variety of catalyst choices exist, and the selection depends on the process feedstock and the pyrolysis process system selected. An LDH catalyst is recommended: it eliminates the need for downstream bio-oil upgrading and simplifies the production procedure (8).
2.2 Biomass Choice Eucalyptus was selected as the biomass modelled due to its rapid growth rate and abundant supply in South Africa. It also contains a smaller percentage of ash and nitrogen than other types of biomass (9). 2.3 Heat transfer The rate of heating directly influences the reaction pathway and the substances produced. Rapid heating results in smaller amounts of char. Moreover, the oil yield is affected by the heating rate, decreasing at lower heating rates (10). The main heat transfer mechanisms during flash pyrolysis are gas-solid heat transfer by convection and solid-solid heat transfer by conduction. A fluidised bed is beneficial in that 90% of the heat transfer is from conduction and the remainder from convection. Also, due to fluidisation, attrition occurs, whereby there is friction between the biomass and the hot catalyst. This erodes the surface of the biomass, exposing fresh biomass for reaction, as well as eroding the carbon layer around the catalyst which can hinder its activity. This can slightly reduce the particle size of the biomass. The drawback is that micro-carbon is formed, which can prove difficult to remove from the vapour phase and may end up as a component of the bio-oil. However, the amount of micro-carbon formed is minimal in fluidised beds compared to other types of pyrolysis reactors (4). The hot sand flows from the combustor to the pyrolyser as illustrated below: 2.4 Residence Times Vapour residence times of less than 2 s are recommended, as longer residence times result in secondary cracking of the primary products, reducing the yield and negatively influencing the quality of the bio-oil (4). 2.5 Particle Size A wood particle size of 4 mm was used, as recommended by Liao & Thomas (11). 3 Mass and Energy Balance A mass and energy balance is vital for the sizing of the reactors and needs to be conducted over the entire pyrolysis process as a whole. The moisture content of the biomass should also be known for this mass and energy balance. The chemical formula for the biomass can be obtained from a study conducted by Adebayo et al. (3). For a full layout of the entire pyrolysis production process, refer to the diagram below. This article will illustrate the calculations necessary to size the combustor and pyrolyser, outlined in red. The main goal of the energy balance over the pyrolyser is to achieve a temperature of 500 °C to enable fast pyrolysis and produce a high-quality bio-oil. The energy required by the pyrolyser is supplied by heating the catalyst (modelled as sand in the Aspen Plus simulation) in the combustor, which should operate at 900 °C. The catalyst is then fluidised and overflows via the overflow pipe into the pyrolyser for heat exchange with the woody biomass. An energy balance should first be conducted over the pyrolyser to determine the catalyst flow rate required to provide sufficient energy. The diagram below illustrates the main streams for this energy balance. The catalyst (Qcatalyst) needs to provide enough energy to heat the biomass (Qbiomass) and the water contained in the wood (Qwater,1), both exiting the dryer at a specified temperature, up to 500 °C; evaporate the water at 100 °C (Qevap); heat the resulting steam from boiling point to 500 °C (Qwater,2); and supply the energy for pyrolysis (Qpyrolysis), which is an endothermic reaction. Lastly, adequate energy is needed to compensate for energy losses to the environment (QLoss,pyr).
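As a rough illustration of the pyrolyser-side balance described above, here is a Python sketch assuming each sensible-heat term follows the generic form Q = m·cp·ΔT and the evaporation term follows Q = m·ΔHvap (the forms behind equations 1 to 3 discussed next). All flow rates, heat capacities and duties below are placeholder assumptions, not design data.

```python
def sensible_heat(m_kg_s: float, cp_kj_kg_k: float, t_in_c: float, t_out_c: float) -> float:
    """Q = m * cp * dT, returned in kW (the generic form of equation 2)."""
    return m_kg_s * cp_kj_kg_k * (t_out_c - t_in_c)

def evaporation_heat(m_kg_s: float, latent_kj_kg: float = 2257.0) -> float:
    """Q = m * dH_vap (the generic form of equation 3); latent heat at 100 C."""
    return m_kg_s * latent_kj_kg

# Placeholder stream data -- illustrative assumptions only
Q_biomass = sensible_heat(m_kg_s=0.50, cp_kj_kg_k=1.5, t_in_c=60, t_out_c=500)
Q_water1  = sensible_heat(m_kg_s=0.05, cp_kj_kg_k=4.18, t_in_c=60, t_out_c=100)
Q_evap    = evaporation_heat(m_kg_s=0.05)
Q_water2  = sensible_heat(m_kg_s=0.05, cp_kj_kg_k=2.0, t_in_c=100, t_out_c=500)
Q_pyrolysis = 250.0   # kW, endothermic reaction duty (assumed)
Q_loss      = 25.0    # kW, losses to the environment (assumed)

# The catalyst must supply the sum of all pyrolyser-side demands
Q_catalyst = Q_biomass + Q_water1 + Q_evap + Q_water2 + Q_pyrolysis + Q_loss
print(f"Catalyst must supply about {Q_catalyst:.0f} kW")
```

With the total duty known, the required catalyst circulation rate follows from dividing by the sensible heat the sand gives up as it cools from 900 °C to 500 °C.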
Equation 1 illustrates how the above energy requirements are combined. All sensible-heat Q values listed in equation 1 are found using equation 2, while Qevap is found using equation 3 from the latent heat of vaporisation and the mass of water vaporised, m. The energy needed to heat the catalyst (Qcatalyst) is supplied by the combustor, which burns both NCG and biochar to provide this energy. In the NCG, the hydrogen (QH2 in NCG) and carbon monoxide (QCO in NCG) combust to provide energy and, in the biochar, the carbon (QC in char) and hydrogen (QH in char) also combust to provide energy. The amount of biochar burned needs to: provide sufficient energy to heat the sand from 500 °C to 900 °C; heat the NCG exiting the heat exchanger from 500 °C to 900 °C (QNCG); heat the air from 25 °C to 900 °C (Qair); heat the carbon dioxide (Qco2) and water vapour (QH2O) formed from the combustion of biochar from their specified inlet temperature to 900 °C; and compensate for the heat loss to the environment, QLoss,comb. The biochar flow rate is varied until the energy balance is satisfied. The amount of NCG is a fixed variable, based on the volumetric flowrate of gas entering the bed needed to provide a suitable operating velocity for the sized bed diameter, as explained in the Fluidised Beds section of this article. Qcatalyst, QNCG, Qair, Qco2 and QH2O are determined according to equation 2. QLoss,comb is found as shown in the Insulation section. QC in char, QH in char, QH2 in NCG and QCO in NCG are found by multiplying the molar flowrates by the heats of reaction. Equation 4 shows the resulting energy balance. 4 Fluidised Bed Reactor Design for Pyrolysis: Fluidised Bed Design The process consists of two fluidised beds, namely the Combustor and the Pyrolyser, as depicted in the figure below: Figure 1: Schematic of fluidised beds adapted from Swart (12). The operating velocity, height, diameter, wall thickness as well as the distributor plate are designed according to the procedure below (13), using values of density and viscosity calculated for the relevant components at the specified temperature and a pressure of 101.325 kPa. It is assumed that the char particles react immediately to form flue gases in the combustor and that the wood chips pyrolyse instantly (fast pyrolysis) to form NCG. Therefore, the following calculations consider only the solid catalyst particles, which are present throughout fluidisation in the circulating fluidised beds. According to the Geldart chart, the sand is classified as a group B powder, so the relevant equations for group B powders are used below. Figure 2: Geldart powder categorisation (14) The Archimedes number, the ratio of gravitational (buoyancy) forces to viscous forces, is calculated according to equation 5; this is used in equation 6 (Wen and Yu (15)) to solve for the Reynolds number at minimum fluidisation (Remf). Using Remf it is possible to solve for the velocity at minimum fluidisation (umf) in equation 7, which marks the onset of fluidisation. Increasing the velocity beyond this results in the creation of bubbles. If the velocity is increased significantly further, it will surpass the terminal velocity (ut), calculated using equation 8, which is the velocity at which particles are carried out of the bed. Further increasing the velocity results in entrainment. The operating velocity, uop, should be greater than umf but smaller than ut.
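The minimum-fluidisation calculation just described can be sketched as follows, assuming the commonly used forms Ar = ρg(ρp − ρg)g·dp³/μ² for equation 5 and the Wen and Yu correlation Remf = √(33.7² + 0.0408·Ar) − 33.7 for equation 6. The particle and gas property values are placeholders, not design data.

```python
import math

def archimedes(d_p: float, rho_p: float, rho_g: float, mu: float, g: float = 9.81) -> float:
    """Archimedes number: gravitational (buoyancy) vs. viscous forces."""
    return rho_g * (rho_p - rho_g) * g * d_p**3 / mu**2

def re_mf_wen_yu(ar: float) -> float:
    """Wen & Yu correlation for the Reynolds number at minimum fluidisation."""
    return math.sqrt(33.7**2 + 0.0408 * ar) - 33.7

def u_mf(re_mf: float, mu: float, rho_g: float, d_p: float) -> float:
    """Minimum fluidisation velocity from Re_mf = rho_g * u_mf * d_p / mu."""
    return re_mf * mu / (rho_g * d_p)

# Placeholder values for sand fluidised by hot gas -- assumptions only
d_p   = 500e-6     # particle diameter, m
rho_p = 2650.0     # sand density, kg/m^3
rho_g = 0.46       # gas density at ~500 C, kg/m^3
mu    = 3.6e-5     # gas viscosity, Pa.s

ar = archimedes(d_p, rho_p, rho_g, mu)
re = re_mf_wen_yu(ar)
print(f"Ar = {ar:.0f}, Re_mf = {re:.3f}, u_mf = {u_mf(re, mu, rho_g, d_p):.3f} m/s")
```

For these placeholder values, umf comes out on the order of 0.1 m/s, which is why operating velocities of 1 m/s and above (see below) comfortably exceed minimum fluidisation while staying below the terminal velocity of the sand.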
The porosity at minimum fluidisation is the bed voidage at the instant fluidisation occurs. It can be solved according to equation 9. The height at minimum fluidisation, Hmf, is found from equation 10, which relates the extra height of the particles at fluidisation to the height of the unfluidised particles. The slugging velocity, ums in equation 11, is the velocity at which the bubbles created have the same diameter as the bed itself; the operating velocity should be below the slugging velocity. The operating Reynolds number is determined according to equation 12, where the operating velocity is selected to meet the criteria outlined in equation 13. Furthermore, an appropriate range of operating velocities for fluidised beds is 1 m s-1 to 3 m s-1 (16). The pressure drop across the bed is found by equation 14, where g is 9.81 m s-2 and hp is the height of the particles in the bed. The pressure drop across the distributor is taken as the larger of 10 % of the bed pressure drop or 3500 kPa. The gas velocity through an orifice in the distributor is found by equation 15, with the orifice discharge coefficient Corr taken as 0.6. It should be ensured that the orifice velocity is greater than 50 m s-1 to prevent particle backflow through the orifice (12). The number of orifices per unit bed area, Norr, is found by equation 16. Multiplying the bed area by Norr gives the total number of orifices. The height of the beds is calculated using equations 17 and 18 to determine the total disengagement height, which fixes the height at which to place the overflow pipe. The figure below illustrates the zones calculated when determining the bed height. The region labelled "Bed" is the packed bed height. Figure 3: Height regions in a fluidised bed (14) In equation 17, dbv is the diameter of a bubble at the surface. The total heights should be multiplied by a safety factor of 1.2, and the overall height for both beds is selected as the height of the taller bed. The thickness of the walls is important to ensure that the walls can withstand the pressure created. Equations 19 and 20 allow the calculation of the appropriate wall thickness due to hoop stress and longitudinal stress, respectively. P is the design pressure of 101.325 kPa for both beds, S is the maximum allowable yield stress of stainless steel and E is the joint efficiency, approximated as 0.6. The final minimum wall thickness is the sum of these two values. 5 Fluidised Bed Reactor Design for Pyrolysis: Insulation Insulation has been demonstrated to be advantageous in the following circumstances:
- Decreasing energy costs
- Improving the safety of employees working in hot environments
- Temperature control of equipment
- Decreasing the utilisation of natural resources
- Decreasing noise pollution
Insulation materials should be selected based on the service temperature range of the materials, whether they would react with the raw materials, and their combustibility. Furthermore, the thickness choice should be based on the thicknesses typically available from suppliers (17,18). Protection of insulation is crucial for the longevity of the materials. As a result, firebrick clay is recommended as the outermost layer of insulation, both to protect the inner insulation and to provide an additional layer of insulation (18). The QLoss used in the energy balance is determined according to equation 25 outlined below. In equation 21, RinsulatorX refers to the outermost layer of insulation.
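As a rough illustration of how the insulation equations combine, here is a Python sketch of a series-resistance heat-loss calculation, assuming each cylindrical layer contributes a conduction resistance ln(r2/r1)/(2πkL) and the outer surface a convective film resistance 1/(hA), the standard forms behind equations 21 to 24. All geometry and material values below are placeholder assumptions, not design data.

```python
import math

def conduction_resistance(r_inner: float, r_outer: float, k: float, L: float) -> float:
    """Thermal resistance of a cylindrical layer, K/W."""
    return math.log(r_outer / r_inner) / (2 * math.pi * k * L)

def convection_resistance(h: float, r_outer: float, L: float) -> float:
    """Outside film resistance 1/(h*A), with A = 2*pi*r*L."""
    return 1.0 / (h * 2 * math.pi * r_outer * L)

# Placeholder combustor geometry and materials -- assumptions only
L = 4.0                            # reactor height, m
layers = [                         # (r_in, r_out, k in W/m.K)
    (0.50, 0.51, 16.0),            # stainless steel wall
    (0.51, 0.61, 0.07),            # mineral wool insulation
    (0.61, 0.66, 1.0),             # firebrick clay outer layer
]
h_out = 10.0                       # outside convective coefficient, W/m^2.K

# Resistances in series: wall, insulation layers, then the outside film
R_total = sum(conduction_resistance(ri, ro, k, L) for ri, ro, k in layers)
R_total += convection_resistance(h_out, layers[-1][1], L)

T_bed, T_ambient = 900.0, 25.0     # degrees C
Q_loss = (T_bed - T_ambient) / R_total
print(f"Q_loss ~ {Q_loss / 1000:.1f} kW")
```

The same resistance chain also yields the outer surface temperature (work inward from ambient across the film resistance), which is how the 55 °C wall-temperature limit discussed below can be checked against a candidate insulation thickness.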
The figure below depicts an example of materials used in combination with equation 21 (21). Heat transfer by convection and conduction is modelled according to equations 22 to 24. In equation 23, r1 and r2 represent the radii of the combustor before and after the layer of insulation, where L is the height of the reactor. A represents the area across which heat transfer occurs. Equations 26 and 27 illustrate how the area is calculated, where L is the height of the reactor, already determined. Temperatures above 60 °C cause discomfort for plant workers. For this reason, the surface wall temperature of the fire clay brick on the sides should be restricted to a maximum of 55 °C. 6 Fluidised Bed Reactor Design for Pyrolysis: Safety Some safety elements have already been considered in the previous sections of this article, for example, a maximum allowable wall temperature of 55 °C. A key safety consideration is ensuring the absence of oxygen in the pyrolyser, as its presence could lead to an explosion. To combat this issue, a paramagnetic sensor to detect the presence of oxygen is placed in the pyrolyser to issue a warning (3). Additionally, the entire system should be purged with an inert gas, such as nitrogen, to eliminate oxygen both at the commencement of the process and whenever the oxygen sensor issues a warning. A pressure sensor should be placed in both the combustor and the pyrolyser to detect significant pressure changes, as the normal operating pressure of both beds is atmospheric. An anomalous pressure reading could indicate a blockage in either the screw conveyors or the distributor and would require further investigation. A level sensor should be placed above the total disengagement height to detect the presence of large amounts of particles and to issue a warning that the fluidisation velocity is too high and that reactor inspection is necessary. Before any inspection, the reactors should be allowed to cool to room temperature. Employee education is vital to ensure a safe process. Personnel working around the fluidised beds should be informed of safety strategies, be made aware of all hazards, and safeguard themselves by wearing PPE. Aho, A, Salmi, T and Murzin, DY (2013), Role of Catalysis for the Sustainable Production of Bio-Fuels and Bio-Chemicals, Elsevier. Merckel, R (2014), "Fast and microwave-induced pyrolysis bio-oil from eucalyptus grandis: Possibilities for upgrading," Department of Chemical Engineering, University of Pretoria. Adebayo, K, Coetzee, D, Leher, S and Viljoen, S (2019), "Catalytic Pyrolysis of Biomass," Department of Chemical Engineering, University of Pretoria. Bridgwater, T, Meier, D and Radlein, D (1999), "An overview of fast pyrolysis of biomass," Organic Geochemistry, 30, 1479 – 1493. Bridgwater, A (2002), Fast Pyrolysis of Biomass: A Handbook Volume 2, CPL Press, Newbury, UK. Huber, G, Iborra, S and Corma, A (2006), "Synthesis of transportation fuels from biomass: Chemistry, catalysts, and engineering," Chemical Reviews, 106 (9), 4044 – 4098. Zhang, L, Bao, Z, Xia, S, Lu, Q and Walters, K (2018), "Catalytic pyrolysis of biomass and polymer wastes," Catalysts, 8 (659), 2–45. Merckel, R (2019), "The impact of oxygen exothermicity on energy quality of biofuels, and catalytic upgradation," Department of Chemical Engineering, University of Pretoria. Guerrero, M
and Millera, A (2005), "Pyrolysis of eucalyptus at different heating rates: studies of char characterisation and oxidative reactivity," Journal of Analytical and Applied Pyrolysis, 74, 307 – 314. Sinha, S, Jhalani, A, Ravi, M and Ray, A (sa), Modelling of Pyrolysis in Wood: A Review, Department of Mechanical Engineering, Indian Institute of Technology, New Delhi. Liao, W and Thomas, SC (2019), "Biochar particle size and post-pyrolysis mechanical processing affect soil pH, water retention capacity, and plant performance," MDPI Soil Systems, 3 (14). Swart, SD (2012), "Design, Modelling and Construction of a Scalable Dual Fluidised Bed Reactor for the Pyrolysis of Biomass," Department of Chemical Engineering, University of Pretoria. Bamido, A (2018), "Design of a Fluidised Bed Reactor for Biomass Pyrolysis," Master's Thesis, University of Cincinnati. Du Plessis, B (2019), "Fluidisation," Department of Chemical Engineering, University of Pretoria. Wen, CY and Yu, YH (1966), "A Generalized Method for Predicting the Minimum Fluidization Velocity," AIChE Journal, 12 (3), 610 – 612. Vakkilainen, EK (2017), Steam Generation from Biomass, Butterworth-Heinemann. The Thermal Insulation Association of Southern Africa (2001), "Thermal Insulation Handbook," Association of Architectural Aluminium Manufacturers of South Africa, Lyttelton.
What are tropical grasslands? Explain. Tropical grasslands, or savannas, are also the homes of primates in Africa and Asia; no savanna-living primates exist in South America. Tropical grasslands comprise a mixture of trees and grasses, the proportion of trees to grass varying directly with the rainfall. What are tropical grasslands Class 7? They grow in regions of moderate rainfall. Grass in this region grows up to 3-4 metres high, and the Savannah grasslands of Africa are of this type. Elephants, zebras, giraffes, deer, leopards, etc. are animals found in this region. Where are tropical grasslands? The savannas of Africa are probably the best known, but tropical grasslands are also located in South America, India and Australia. There are llanos in Colombia and Venezuela, campos of the Brazilian highlands, pantanals of Upper Paraguay, plains in Australia and the Deccan Plateau of India. What are tropical grasslands Class 8? The tropical grasslands are found between the equatorial forests and the tropical deserts. These areas receive moderate rainfall during the summer season. These areas also experience a distinct dry season. Thus, tall grasses grow in such areas. What are tropical grasslands Class 9? Natural Vegetation in Tropical Grasslands Tropical grasslands, also known as Savannas, have tall grasses and short trees. The grass is coarse and grows up to 12 feet. Grasses have long roots which go deep down into the soil in search of water. Trees are short and scattered because of lack of rainfall. What are the main features of tropical grasslands? Answer: The main features of tropical grasslands are rainfall and moisture in the soil, whereas in temperate grasslands trees and shrubs are very rare. Where are tropical grasslands found Class 7? Answer: Tropical grasslands occur on either side of the equator and extend till the tropics. This vegetation grows in areas of moderate to low rainfall. The grass can grow very tall, about 3 to 4 metres in height. Savannah grasslands of Africa are of this type. What are grasslands Class 5? Grasslands cover more than one-fifth of the land surface on the earth. These are enormous and flat plains of grass with very few trees and bushes. The summers are hot and the winters are cold. The rains are less, so forests cannot grow. What is grassland short answer? A grassland is an area in which the vegetation is dominated by a nearly continuous cover of grasses. Grasslands occur in environments conducive to the growth of this plant cover but not to that of taller plants, particularly trees and shrubs. What is the tropical grassland of Brazil? The Tropical Grasslands of Brazil are known as Campos. The Campos, grassland with few trees or shrubs except near streams, lies between 24°S and 35°S; it includes parts of Brazil, Paraguay and Argentina, and all of Uruguay. How many tropical grasslands are there? There are five main types of biomes: aquatic, forest, desert, grassland, and tundra. Tropical Grassland Vegetation:

| Tropical Grassland | Temperate Grassland |
| --- | --- |
| Grass in this region can grow up to 3-4 metres tall; the Savannah grasslands of Africa are an example. | The grass is short and nutritious. |

What are the 3 types of grasslands? The grassland biome includes terrestrial habitats that are dominated by grasses and have relatively few large trees or shrubs. There are three main types of grasslands: temperate grasslands, tropical grasslands (also known as savannas), and steppe grasslands. What is tropical grassland savanna?
Savannas, also known as tropical grasslands, are found to the north and south of tropical rainforest biomes. … Savanna vegetation includes scrub, grasses and occasional trees, which grow near water holes, seasonal rivers or aquifers. Plants and animals have to adapt to the long dry periods. Where are savannah grasslands found Class 7? In Africa, tropical grasslands are known as the Savanna; in South America they are known as the Llanos, and in Brazil as the Campos. These grasslands are known as Prairies in North America, Pampas in South America, Veld in South Africa, Steppes in Europe and Downs in Australia. What is the tropical grasslands climate? Tropical grasslands have dry and wet seasons and remain warm all the time. Temperate grasslands have cold winters and warm summers with some rain. … A few trees may be found in this biome along the streams, but not many due to the lack of rainfall. What are tropical grasslands known as in Africa? The tropical grasslands of Africa are known as Savannas. A savanna is a rolling grassland scattered with shrubs and isolated trees, which can be found between a tropical rainforest and a desert biome. What is the name of the tropical grassland of Venezuela? Llanos (Spanish: "Plains"): wide grasslands stretching across northern South America and occupying western Venezuela and northeastern Colombia. How many grasslands are there in India? Now imagine billions of them smiling from seven different unique grasslands that dot the country: coastal grasslands, riverine alluvial grasslands, montane grasslands, sub-Himalayan grasslands, tropical savannas and wet grasslands. And how unknowingly we never noticed! Why are tropical grasslands important? But tropical grasslands and savannas, including Africa's Serengeti and Brazil's Cerrado, are also important tropical ecosystems. They are home to many of the world's large mammals and they provide important livestock grazing lands and sources of food for vast numbers of people. What are the various uses of tropical grassland? Grasslands clearly provide the feed base for grazing livestock and thus numerous high-quality foods, but such livestock also provide products such as fertilizer, transport, traction, fibre and leather. What are the types of grasslands? There are two main kinds of grasslands: tropical and temperate. Examples of temperate grasslands include Eurasian steppes, North American prairies, and Argentine pampas. Tropical grasslands include the hot savannas of sub-Saharan Africa and northern Australia. What are temperate grasslands? Definition of temperate grassland: Temperate grasslands are characterized by their predominant vegetation, i.e. grasses. Temperate grasslands generally have no trees. Temperatures can vary greatly in this biome. … Prairies have long grasses whereas steppes have short grasses, but both are temperate grasslands. What is the name of the grassland of South Africa? The grassland of South Africa is known as the Veld. What is in a grassland? The grassland biome is made up of large open areas of grasses. They are maintained by grazing animals and frequent fires. Types of grasslands include savannas and temperate grasslands. Do we have grasslands in the Philippines? Of the forage resources, it is estimated that the Philippines has 3.5 million hectares of open grasslands and about 400,000 hectares out of the 2.5 million hectares of land under coconuts which are currently utilized for grazing. What is a grassland habitat?
Grassland habitats are places that receive more rain than deserts but less precipitation than forests. Most of the plants here are grasses, which don't need as much water as forest vegetation. Are there grasslands in Antarctica? Grasslands cover one fourth of the Earth's land and are found on every continent except Antarctica. Grasslands occur where it is too wet for deserts but too dry for forests. What are grasslands in Argentina called? The Pampas, also called the Pampa (Spanish: La Pampa), are vast plains extending westward across central Argentina from the Atlantic coast to the Andean foothills, bounded by the Gran Chaco (north) and Patagonia (south). Is Australia a temperate grassland? Positioned between mesic forests and the arid interior of Australia, the Southeast Australian Temperate Savannas span a broad north-south swathe across New South Wales. What are the other names for tropical grasslands? Tropical grasslands can also be called tropical savannas. A savanna is another word for "plain." What is the difference between tropical and temperate grasslands? Tropical grasslands have dry and wet seasons and remain warm all the time. Temperate grasslands have cold winters and warm summers with some rain. The grasses die back to their roots annually, and the soil and the sod protect the roots and the new buds from the cold of winter or dry conditions. Which of the following is a type of tropical grassland ecosystem? Moreover, tropical grasslands are also called Savanna. These grasslands contain shrubs and small trees that are dry in nature. Also, the tropical grasslands contain quite short plants, which makes them excellent hunting grounds. For instance, the African savanna is one of the tropical grasslands. What countries have tropical grasslands? Tropical grasslands can be found in Australia, India, Africa, and South America. They surround tropical forests. Tropical grasslands can also be found in North America. What latitude are tropical grasslands? The tropical savanna is found on various continents in the tropical region of our planet, alongside the equator at around 10°–20° latitude both North and South. What is the difference between tropical rainforest and tropical savanna? Rainforests are characterized by lots of rain and a dense canopy, usually with very large trees and an incredibly varied ecosystem. Savannas, or savannahs, are usually grasslands, drier, and their trees are shorter and more sparse. Which part of Brazil is grasslands? Pampas and Campos are the names of the tropical grasslands in Brazil. What are the grasslands of Australia called? The Southeast Australia temperate savanna ecoregion is a large area of grassland dotted with eucalyptus trees running north-south across central New South Wales, Australia.

| Southeast Australia temperate savanna | |
| --- | --- |
| Biome | temperate grasslands, savannas, and shrublands |

Where are Savannah grasslands located? Where are Savannah grasslands found? They are found in East Africa.
Twilight is the interval before sunrise or after sunset during which the sky is still somewhat illuminated. Twilight occurs because sunlight illuminates the upper layers of the atmosphere. The light is diffused in all directions by the molecules of the air, reaches the observer and still illuminates the environment. The map shows which parts of the world are in daylight and which are in night. If you want to know the exact time of dawn or dusk at a specific place, that information is available in the meteorological data. Why do we use UTC? Coordinated Universal Time, or UTC, is the main time standard by which the world regulates clocks and time. It is one of several closely related successors to Greenwich Mean Time (GMT). For most common purposes, UTC is synonymous with GMT, but GMT is no longer precisely defined by the scientific community.
Authors: Dr. Francis Collins While primarily a respiratory disease, COVID-19 can also lead to neurological problems. The first of these symptoms might be the loss of smell and taste, while some people also may later battle headaches, debilitating fatigue, and trouble thinking clearly, sometimes referred to as "brain fog." All of these symptoms have researchers wondering how exactly the coronavirus that causes COVID-19, SARS-CoV-2, affects the human brain. In search of clues, researchers at NIH's National Institute of Neurological Disorders and Stroke (NINDS) have now conducted the first in-depth examinations of human brain tissue samples from people who died after contracting COVID-19. Their findings, published in the New England Journal of Medicine, suggest that COVID-19's many neurological symptoms are likely explained by the body's widespread inflammatory response to infection and associated blood vessel injury, not by infection of the brain tissue itself. The NIH team, led by Avindra Nath, used a high-powered magnetic resonance imaging (MRI) scanner (up to 10 times as sensitive as a typical MRI) to examine postmortem brain tissue from 19 patients. They ranged in age from 5 to 73, and some had preexisting conditions, such as diabetes, obesity, and cardiovascular disease. The team focused on the brain's olfactory bulb, which controls our ability to smell, and the brainstem, which regulates breathing and heart rate. Based on earlier evidence, both areas are thought to be highly susceptible to COVID-19.
1. The principle of magnetic levitation: when the rotor is subjected to a downward disturbance, it deviates from its reference position. The sensor detects the displacement of the rotor from the reference point, and the microprocessor acting as the controller converts the detected displacement into a control signal. The power amplifier then converts this control signal into a control current, which generates a magnetic force in the actuator magnet that drives the rotor back to its original equilibrium position. 2. Magnetic levitation technology (English: electromagnetic levitation or electromagnetic suspension, referred to as EML or EMS technology) refers to a technology that uses magnetic force to overcome gravity and suspend objects. 3. Current levitation technologies mainly include magnetic levitation, optical levitation, acoustic levitation, airflow levitation, electric levitation, and particle beam levitation. Among these, magnetic levitation technology is relatively mature. 4. There are many forms of magnetic suspension technology, which can be divided mainly into passive suspension, in which the system is self-stabilizing, and active suspension, in which the system cannot self-stabilize. 5. The maglev train is a new type of vehicle composed of non-contact magnetic support, magnetic guidance and linear drive systems, mainly including the superconducting electrodynamic maglev train, the normally conducting electromagnetic-attraction high-speed maglev train, and the normally conducting electromagnetic-attraction medium- and low-speed maglev train.
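The feedback loop in point 1 (sensor, controller, power amplifier, actuator magnet) can be sketched as a discrete control simulation. The sketch below uses a PID control law on a one-dimensional rotor; the gains, rotor mass, time step, and magnet force constant are all made-up assumptions for illustration, not parameters of any real maglev system.

```python
def pid_step(error, state, kp=800.0, ki=50.0, kd=120.0, dt=0.001):
    """One PID update: returns control current and updated (integral, prev_error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    current = kp * error + ki * integral + kd * derivative
    return current, (integral, error)

# Toy 1-D rotor: displacement x (m) from the reference position
x, v = 0.002, 0.0             # initial downward disturbance of 2 mm
state = (0.0, 0.0)
m, dt = 1.0, 0.001            # rotor mass (kg), time step (s) -- assumptions
force_per_amp = 10.0          # magnet force constant (N/A) -- assumption

for _ in range(2000):         # simulate 2 seconds
    current, state = pid_step(-x, state, dt=dt)   # drive x back toward 0
    force = force_per_amp * current - m * 9.81    # magnet force vs. gravity
    v += (force / m) * dt                         # explicit Euler integration
    x += v * dt

print(f"Residual displacement after 2 s: {x * 1e3:.3f} mm")
```

The integral term is what lets the controller hold the rotor at the reference position despite the constant pull of gravity: it accumulates until the steady-state magnet force exactly cancels the rotor's weight.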
mRNA has a linear structure with the base uracil in place of thymine, and its secondary structure can form hairpins, stem-loops, etc.; tRNA has a cloverleaf structure that carries three specific stem-loops; and rRNA has a much more complex structure with numerous folds and loops. mRNA acts as the messenger of DNA; tRNA carries amino acids during protein synthesis; rRNA is the protein producer of the cell. These three RNAs play a vital role in the process of transcription and, further, in protein synthesis. They are essential factors for every cell, as life would not be possible in their absence. Ribonucleic acid is abbreviated as RNA, which is the compound active in cellular protein synthesis. It has a high molecular weight and acts as the genetic material in some viruses. RNAs have the nitrogenous bases adenine, guanine, cytosine and uracil (replacing the thymine of DNA). They are single-stranded biopolymers. RNA is built from ribose nucleotides, in which the nitrogenous bases are attached to the ribose sugar and joined by phosphodiester bonds, forming chains or strands of different lengths. In 1965, R.W. Holley described the structure of an RNA. The essential and significant process of molecular biology is the flow of genetic information in a cell, which occurs in three steps: DNA makes RNA, which leads to proteins. Proteins are therefore regarded as the workhorses of the cell, playing essential roles. Whenever the cell needs a protein, it sends signals by activating that particular protein's gene, and the DNA coding for that protein is used to produce multiple RNA copies, which are further processed and translated. The process of RNA transcription is mediated by RNA polymerase (an enzyme) that constructs an RNA complement to the template DNA. Transcription is controlled by three chief factors: promoter, regulator and inhibitor. In this context, we will discuss the structural as well as the functional differences between the three types of RNA in eukaryotic cells. Content: mRNA Vs tRNA Vs rRNA

| BASIS FOR COMPARISON | mRNA | tRNA | rRNA |
| --- | --- | --- | --- |
| Meaning | mRNA or messenger RNA is the connection between gene and protein, and it is the result of the transcribed gene by RNA polymerase. | tRNA or transfer RNA is a cloverleaf-shaped RNA molecule that provides specific amino acids to the ribosomes. | rRNA or ribosomal RNA is used for the formation of the ribosomes. |
| Role | mRNA carries genetic information from the nucleus to ribosomes for the synthesis of proteins. | tRNA carries specific amino acids to the ribosomes to assist protein biosynthesis. | rRNA provides the structural framework for the formation of ribosomes. |
| Size | In mammals, the size of the molecule is around 400 to 12,000 nucleotides (nt). | The size of the tRNA molecule is 76 to 90 nucleotides (nt). | The size of the rRNA molecule may vary from 30S to 40S, 50S and 60S. |
| Shape | mRNA is linear in shape. | tRNA is cloverleaf-shaped. | rRNA is spherical (a complex structure). |
| Comprises | mRNA is comprised of codons. | tRNA is comprised of anticodons. | rRNA does not have anticodon or codon sequences. |

Definition of mRNA The synthesis of messenger RNA or mRNA takes place in the nucleus (in eukaryotes) as heterogeneous nuclear RNA (hnRNA). Further processing of hnRNA releases mRNA, which then enters the cytoplasm to take part in protein synthesis. mRNA has a short half-life and a high molecular weight. It is said to be the link between gene and protein.
Eukaryotic mRNA is extensively modified after transcription, in part to prevent hydrolysis by 5′-exonucleases (enzymes). The molecules are capped at their 5′-terminal ends by 7-methylguanosine triphosphate. This capping also helps in recognition of the mRNA for protein synthesis. At the 3′-terminal end of mRNA there is a polymer of adenylate residues (roughly 20 – 250 nucleotides) known as the poly(A) tail. This tail provides stability to the mRNA and also prevents attack by 3′-exonucleases. mRNA molecules also have certain modified bases, such as 6-methyladenylates, in their internal structure; pre-mRNAs also contain introns, which are spliced out before the formation of the mature mRNA molecule. Definition of tRNA Transfer RNA or tRNA is the soluble RNA; the molecules contain approximately 75 nucleotides and have a molecular weight of about 25,000. There are 20 species of tRNAs corresponding to the 20 amino acids present in protein structures. The structure of tRNA was first described by Holley. During protein translation, tRNA is the decoder of the message carried by mRNA. The tRNA structure resembles the cloverleaf model. The structure has four major arms, the acceptor arm, the anticodon arm, the D arm and the TψC arm, plus a variable arm. The acceptor arm is capped with the CCA sequence (5′ to 3′), and the amino acids are attached to this arm. The anticodon arm carries three specific nucleotide bases (the anticodon), which recognise the triplet codon of mRNA. The D arm is named after the presence of dihydrouridine. The TψC arm has the sequence of T, pseudouridine (ψ) and C. The variable arm is the most variable and defines two categories, Class I and Class II tRNAs. tRNA is also modified after transcription to include nonstandard bases such as inosine, methylguanosine and pseudouridine. The ribosome cannot form protein from mRNA alone; the anticodon, a sequence of three key bases of tRNA, is complementary to the three-base codon of mRNA. This is the first chief role of tRNA, and the process then continues as each molecule carries an amino acid that matches the mRNA codon. Definition of rRNA Ribosomal RNA or rRNA is the primary component of ribosomes, which are the factories for protein synthesis. The eukaryotic ribosome is made up of two nucleoprotein complexes, the 60S and 40S subunits. The 60S subunit is further divided into 28S RNA, 5S RNA and 5.8S RNA, whereas the 40S subunit contains the 18S RNA. Key Differences Between mRNA, tRNA and rRNA Given below are the critical points to understand the variations among mRNA, tRNA and rRNA: - mRNA or messenger RNA is the connection between gene and protein, formed from the transcribed gene by RNA polymerase; tRNA or transfer RNA is a cloverleaf-shaped RNA molecule that delivers specific amino acids to the ribosomes; rRNA or ribosomal RNA is used for the formation of the ribosomes. - mRNA carries genetic information from the nucleus to ribosomes for the synthesis of proteins, while tRNA carries specific amino acids to the ribosomes to assist protein biosynthesis, and rRNA provides the structural framework for the formation of ribosomes. - mRNA and tRNA are synthesised in the nucleus, whereas rRNA is synthesised in the nucleolus.
- In mammals, the size of an mRNA molecule is around 400 to 12,000 nucleotides (nt), while a tRNA molecule is 76 to 90 nucleotides (nt), and rRNA molecules vary from 30S to 40S, 50S and 60S. - mRNA is linear in shape; tRNA has a cloverleaf shape; and rRNA is spherical (a complex structure). - mRNA is comprised of codons, whereas tRNA is comprised of anticodons, and rRNA does not have anticodon or codon sequences. There are three major types of RNA in a cell: mRNA, tRNA and rRNA. These play a significant role in protein synthesis. The mRNAs are the carriers of the message and thus initiate protein formation. This process also involves tRNA and rRNA, where tRNA brings the specific amino acids and rRNA plays a role in the formation of ribosomes. The whole process takes place from the nucleus to the ribosome.
Terns are in the same family as gulls and skimmers, and 40 tern species are found worldwide. The Arctic tern migrates to Antarctica and back again over a six-month period. Their journey is not necessarily in a straight line, and on average the round-trip migration distance covers an unbelievable 44,000 miles. All four tern species regularly found in Wisconsin (Forster's tern, black tern, common tern, and Caspian tern) are listed as state endangered. Wisconsin DNR and other groups have developed innovative ways to assist these species, and particular progress has been made with Forster's terns. The Forster's tern is a colony nester. Bird colonies are usually found on sites that are well protected from the usual terrestrial predators. Because colony-nesting birds only congregate for a small portion of the season, local predator populations do not boom as they might with a permanent high-energy food source. Forster's terns breed in large marshes with abundant emergent vegetation, and most of the breeding population in Wisconsin inhabits the east-central portion of the state or lower Green Bay. In pre-settlement days, an estimated 10 million acres of wetlands existed in Wisconsin, according to Wisconsin DNR. Around 50 percent of those acres were destroyed by conversion to agriculture, urban area, or other uses. Some colony nesters have remarkably high site fidelity, meaning that they return to the same breeding site year after year. Forster's terns are more nomadic than others in this sense. They require specific water levels for optimal nest success: plenty of water to maintain prey populations and deter predators, but not so much that their source of cover is below the water line. Historically, Forster's terns had many more options when choosing a colony site. Greater habitat choice led to more successful nests and higher populations. Because of low nest success and declining populations, the Wisconsin DNR began deploying nesting platforms for Forster's terns in 1983. Nest platforms provide high-quality nesting sites when natural substrates are of low quality or in short supply. After eggs are laid, rising water can easily submerge nests. Fluctuation in marsh water levels is likely exacerbated by hard surfaces and row-crop cultivation due to their low water infiltration potential. Nesting platforms float on the water surface. They are attached loosely to a pole, which allows them to rise and fall in response to changing water levels. Nesting platforms significantly increase nest success, and 100 percent of deployed platforms were utilized by terns in 2018. This year at Lake Puckaway, 200 breeding pairs produced 259 eggs without the assistance of platforms. At Grand River Marsh, 19 breeding pairs occupying tern platforms produced 58 eggs. This means that breeding pairs of Forster's terns using platforms were able to produce over two times the eggs per pair laid on natural substrate. Lake Puckaway flooded after this nest survey was completed, and all eggs were lost. Forster's terns do renest, but their egg production decreases after each nesting attempt. The Wisconsin DNR plans to increase the number of tern platforms to 150. Ryan Zabs, a Sun Prairie Eagle Scout, reached out to Goose Pond Sanctuary in search of his final Eagle Scout project. He helped us take a huge step towards those 150 by building 50 platforms in a single weekend. Sumner Matteson with the Natural Heritage Conservation Bureau at WIDNR generously provided funding for the platform materials.
While the success of tern platforms certainly is exciting, it must be noted that this is not a permanent solution. Continued restoration of large wetlands and further protection of coastal wintering grounds are the best hope for keeping Forster's terns on the Wisconsin landscape now and far into the future. Written by Graham Steinhauer, Goose Pond Sanctuary land steward
Identifying A Problem Interpreting the universe isn't something that everyone can do. There's a reason the people who operate the Hubble telescope and other instruments like it are well-educated and fully trained. However, it appears that even these astronomers can make mistakes at the best of times. It turns out that for the last few years there's been a discrepancy in measurements of how fast the universe expands, but no one had been able to prove there was a problem with the measurements. Now, it seems, they can. Two Different Results To identify how fast the universe is expanding, astronomers use something referred to as a cosmic distance ladder. Essentially, this involves taking measurements of the distance to known galaxies, then using this data to make predictions about galaxies further away. These measurements have grown more accurate over the years as technology has improved, but the findings have contradicted what astronomers expected. The contradiction between the two results has led scientists to question whether what they're doing is correct. Faster Than Anticipated Initially, it was believed that the universe is expanding at a rate of 41.6 miles per second per megaparsec. This is a finding that was observed by the European Space Agency's Planck satellite some years ago. However, new estimates made with the Hubble telescope suggest that it's actually growing at a rate of 46 miles per second per megaparsec. That's an increase of roughly 9% over what astronomers originally thought. What's The Issue? There have been various suggestions for what has caused the discrepancy in results. One is the influence of dark energy, which apparently makes up about 70% of the content of the universe. Given that astronomers rely on light to work out their measurements, this could explain the discrepancy. Other scientists have suggested dark matter may be to blame, or maybe even an as-yet-undiscovered subatomic particle. No one can be sure quite yet, and it might be a while until they get a definite answer. The universe is so big that we'll probably never understand all its secrets. However, that doesn't mean we can't celebrate each new discovery when it comes along.
Vitamin D acts as an essential enhancer of phosphate and calcium homeostasis. Scientists are familiar with the role the hormone plays in the formation and maintenance of strong bones and teeth in vertebrates. As a hormone, it can be synthesized by the skin after prolonged exposure to sunlight. It is also obtained from a wide variety of animal products such as eggs, cheese, and milk. Lack of the hormone causes deficiency disorders such as rickets, and its lack cannot easily be diagnosed without testing for its biomarkers. There are different biomarkers you can test for, each reflecting distinct actions in the body. Of the more than 50 known Vitamin D metabolites, only a few have been quantified scientifically. As scientists continue to widen the scope of study in this subject, surprising findings have been documented. In the general population, these studies are essential, and Vitamin D sampling in individual and population settings allows scientists to estimate its total supply. Since Vitamin D deficiency is a widespread public health concern, sampling allows scientists to recommend the action needed for various population groups. For example, infants, children, specific ethnicities, and women of reproductive age require different active Vitamin D biomarkers. Consistent patient monitoring is required to fully document the presence of various biomarkers in different individuals. Previously, this required either home doctor visits or having patients visit a clinic. Remote microsampling is a novel alternative that opens new pathways of care. It allows patients to take samples at home so scientists, doctors, researchers, and other healthcare professionals can easily monitor their health and wellness. The application of this smarter healthcare technology allows for unprecedented flexibility in monitoring. It's a huge improvement over traditional methods, making way for new innovations, cost savings, and an improved patient and practitioner experience.
by S. Marvin Friedman About 70% of Earth's surface is ocean. In all the oceans, the temperature at depths of 1000 meters or more is a constant 4 °C, constituting a vast environment populated by a diverse group of psychrophilic ("cold-loving") microorganisms. Much of terra firma also lies in the realm of the psychrophiles: more than 20% of all soils are permafrost. Scattered about are a variety of other specialized psychrophilic environments, including cryopegs (saltwater pockets within permafrost at –10 °C that have remained liquid for 10,000 years), Antarctic dry valleys, liquid brine veins among sea-ice crystals, and cryoconite holes on the surface of glaciers. Thus psychrophiles may be the most abundant extremophiles on the planet. Yet research on these fascinating microbes has lagged behind studies on thermophiles ("heat-loving") and halophiles ("salt-loving"). Much of what is known about psychrophiles concerns their enzymes and membranes. Their cold-adapted enzymes lack non-covalent stabilizing interactions, such as hydrogen bonds, making them especially flexible and vigorous. They have high specific activities, which compensates somewhat for reaction rates being slower at low temperatures. The membranes of psychrophiles remain fluid and thus active at low temperature because the fatty acids in their phospholipids are cis-unsaturated or branch-chained, both of which sterically hinder crystallization of lipids. The genomes of psychrophiles reveal other adaptation mechanisms for coping with cold environments. Because the solubility of oxygen and the stability of free radicals are greater at low temperatures, psychrophiles are exposed to higher concentrations of reactive oxygen species (ROS). They cope by producing reductases that repair oxidized molecules, using fewer oxidizable amino acids in their proteins, and employing dioxygenases to introduce dioxygen into oxidized macromolecules. Some avoid the issue by eliminating ROS-synthesizing pathways altogether. To survive freezing, psychrophiles synthesize cryoprotectants such as glycine betaine and polymers. Cleverly, these are recycled as carbon and nitrogen reserves after prolonged periods of starvation. At low temperatures, nucleic acids become stiffer because the bonds stabilizing their secondary structure are strengthened. Consequently, the efficiency of transcription and translation is reduced. Psychrophiles, accordingly, produce a large number of RNA helicases involved in facilitating both RNA folding and degradation. During growth at low temperature, psychrophiles accumulate Cold-Acclimated Proteins (CAPs), a characteristic that distinguishes them from mesophiles. Identifying CAPs has been the subject of a recent study on Pseudoalteromonas haloplanktis, a psychrophilic γ-proteobacterium from the Antarctic. Using two-dimensional gel electrophoresis, these researchers compared the proteomes of this organism when grown at 4 °C and 18 °C. The major protein that is upregulated at 4 °C is the trigger factor (TF), a chaperone that interacts with the elongating peptides on the ribosome to maintain them in an extended configuration until they're long enough to initiate correct folding. TF also aids the cis-trans isomerization of proline peptide bonds, which is a rate-limiting step in protein folding. Accompanying the upregulation of TF at 4 °C is the downregulation of DnaK and GroEL, the two major heat-shock chaperones.
The authors propose that protein folding is growth-rate limiting at low temperature and that under these conditions TF is the functional chaperone. Although not shown definitively, they have indications that, unlike the case in mesophiles, TF is essential in P. haloplanktis. In P. haloplanktis, TF is a monomeric protein with unusually low conformational stability (its melting point Tm is 33 °C) indicating that its function depends on increased flexibility to compensate for reduced molecular motion at low temperature. Its chaperone activity is temperature dependent—it only binds stably to an unfolded peptide at a near-zero temperature. Two oxidative stress proteins, glutathione synthetase and superoxide dismutase, are also upregulated at 4 °C. This is a clear response to the oxidative stress brought about by high oxygen solubility and the increased stability of reactive oxygen species. It is a major adaptive strategy of P. haloplanktis to enhance its redox buffering capacity at low temperature. The authors speculate that during cold acclimation on an evolutionary scale the expression of CAPs has shifted from the transient expression of cold shock proteins to their sustained synthesis. It should be noted that a comparison of the cold adapted proteins produced by P. haloplanktis, three permafrost bacteria, and an Antarctic archaeon reveals both qualitative and quantitative differences. Thus, it appears that cold adaptation mechanisms are species-specific and that no general scheme has evolved. A similar heterogeneity is also found among bacteria acclimated to grow at elevated temperatures. This study provided valuable new insights into the cellular mechanisms by which psychrophiles survive and prosper at low temperature. It’s another example of the astounding versatility that bacteria display in acclimating to extreme environments. Marvin is Professor Emeritus in the Department of Biological Sciences at Hunter College of CUNY in New York City, and an Associate Blogger for Small Things Considered. Piette F, D'Amico S, Struvay C, Mazzucchelli G, Renaut J, Tutino ML, Danchin A, Leprince P, & Feller G (2010). Proteomics of life at low temperatures: Trigger factor is the primary chaperone in the Antarctic bacterium Pseudoalteromonas haloplanktis TAC125. Molecular Microbiology, 76 (1), 120-32 PMID: 20199592
Humankind has gulped down mouthfuls of milk and other dairy products from animals, such as sheep, goats and cows, for at least 9,000 years, a new study suggests. Researchers made the discovery after analyzing and dating more than 500 prehistoric pottery vessels discovered in the northern Mediterranean region, which includes the modern-day countries of Spain, France, Italy, Greece and Turkey. During each examination, they looked for remnants of milk, which indicated that people had used animal dairy products. The scientists also examined the ceramic pots for residue from animal fat and other evidence, such as skeletal remains, that would suggest Neolithic people slaughtered domesticated animals for meat; they examined these bony remains from 82 sites around the Mediterranean dating from the seventh to fifth millennia B.C. Information about ancient dairy use and meat production can help scientists understand what factors drove the domestication of cud-chewing animals, the researchers said. Dairying was popular in some, but not all, northern Mediterranean areas, the researchers found. The eastern and western parts of the northern Mediterranean, including parts of modern-day Spain, France and Turkey, commonly practiced dairying, but northern Greece did not, they said. Rather, "lipids from pots and the animal bones tell the same story: Meat production [in northern Greece] was the main activity, not dairying," they said. The new analysis supports the team's earlier work showing "that milk use was highly regionalized in the Near East in the seventh millennium B.C.," study researchers Mélanie Roffet-Salque and Richard Evershed, chemists at the University of Bristol in the United Kingdom, said in a statement. "This new multidisciplinary study further emphasizes the existence of diverse use of animal products in the northern Mediterranean Neolithic." The varying landscape in the northern Mediterranean likely influenced what sort of animals the Neolithic people domesticated, the researchers added. "For example, rugged terrains are more suitable for sheep and goats, and open well-watered landscapes are better suited for cattle," said study researchers Rosalind Gillis and Jean-Denis Vigne, archaeozoologists at the Centre National de la Recherche Scientifique in the National Museum of Natural History in Paris. Dairying began with the onset of agriculture, and likely helped early farmers, said the study's lead researcher, Cynthianne Spiteri, a junior professor of archaeometry at the University of Tübingen in Germany, who conducted the residue analysis as part of her doctorate in archaeology at the University of York in the United Kingdom. "[Milk] is likely to have played an important role in providing a nourishing and storable food product, which was able to sustain early farmers, and consequently, the spread of farming in the western Mediterranean," Spiteri said. However, more research is needed to verify that Neolithic people consumed milk products. This could be accomplished by analyzing ancient human skeletons, said study researcher Oliver Craig, a professor of archaeology at the University of York. "Despite this deficiency, our research shows that they certainly exploited milk because we have found organic remnants in the pots they were using," Craig said. "This implies they were transforming milk into dairy products, such as yogurt and cheese, to remove the lactose," which some people are unable to digest, he said.
"We know that much of the world's population today are still intolerant to lactose, so it is very important to know at what point people in the past were exposed to it and how long they have had to adapt to it," Craig said. The study was published online Nov. 14 in the journal Proceedings of the National Academy of Sciences.
- Scientific Name: Platanista gangetica.
- Common Name: South Asian River Dolphin, Blind River Dolphin, Ganges Susu (Ganges River Dolphin subspecies), Bhulan (Indus River Dolphin subspecies).
- The South Asian River Dolphin is listed as Endangered by the International Union for the Conservation of Nature.
- It is listed under Appendix I of the Convention on International Trade in Endangered Species of Wild Fauna and Flora.
- Water development projects such as dams, canals, barrages and water diversions have degraded and diminished the quality and quantity of its habitat, fragmenting its already vulnerable population.
- Pollution in South Asian rivers has increased with industrialization. This dolphin lives in one of the most populated areas in the world. Pollutants such as mercury, salt, arsenic, fertilizers and pesticides are released into the rivers, affecting the ecosystem of South Asian river dolphins and other living creatures that depend on freshwater.
- Killing river dolphins for their meat has declined in the past decades, but they are still hunted by local people.
- Incidental catch and killing by gillnets as they get tangled in fishing gear. River dolphins share their habitat with fish targeted by these fisheries.
Distribution and Population
Ganges River Dolphin
- The Ganges River Dolphin (Platanista gangetica gangetica) subspecies is found in Eastern India, Nepal and Bangladesh in the Ganges-Brahmaputra-Meghna and Karnaphuli-Sangu River systems, tributaries and lakes. Its range has progressively declined since the 19th century due to pollution, industrialization and construction activities.
- The Ganges River Dolphin population is extremely fragmented and is estimated at 1,200 to 1,800 according to the IUCN.
Indus River Dolphin
- The Indus River Dolphin (Platanista gangetica minor) subspecies is found in Pakistan in the lower Indus River system. Historically its distribution reached from the Indus delta to the Himalayan foothills, covering about 3,400 km of the Indus River. Today its home range is 20% of its range in 1870 according to Biological Conservation; this is due to the construction of irrigation systems that have fragmented its population, which no longer occurs in the Indus River tributaries.
- The Indus River Dolphin population is extremely fragmented and is estimated at about 965 individuals according to the IUCN.
Ganges River Dolphin
- The Ganges River Dolphin is a freshwater dolphin species that inhabits the muddy river waters of the Ganges, Brahmaputra, Meghna, Karnaphuli and Sangu River systems, their tributaries, lakes, ponds and streams.
- They concentrate in countercurrent pools at channel islands, river bends and convergent tributaries.
- During monsoon floods their range expands and they migrate to other tributaries; during the dry winter season they return to the larger river channels.
- Because their population is spread over a wide range, they can tolerate a wide variety of temperatures, from 46.4ºF to 91.4ºF (8ºC to 33ºC).
Indus River Dolphin
- They usually occur in the deepest river channel of the Indus River at depths greater than 3.3 feet (1 meter). Their preferred habitats include deep low-velocity water, channel constrictions and confluences.
- It no longer occurs in the Indus River tributaries.
- Both subspecies of the South Asian River Dolphin are physically identical.
- They can be easily identified by their long snout, a particular characteristic of all river dolphins. The snout can reach 20% of the length of the body, averaging 8.3 inches (21 cm) long.
Mature females have slightly longer snouts than males. The snout becomes wider towards the tip.
- They have long sharp teeth that are visible even when their mouths are closed. As they age, the teeth wear down and become flat.
- Their eyes are extremely small and lack a lens, making them effectively blind. They use echolocation to navigate and hunt. Their eyes function as light detectors.
- Their dorsal skin color is grey-brown while their ventral skin is lighter.
- They do not have a dorsal fin; instead they have a small triangular lump.
- They have long, thin flippers and a long tail in relation to their body size. The flippers can be up to 18% and the tail 25% of their total body length.
- Females are larger than males.
- This species' length ranges from 78.7 to 157.5 in (2 to 4 meters) and its weight from 112 to 196 lb (51 to 89 kg).
- Sexual maturity in males and females is reached at around 10 years old.
- Breeding occurs year round but peaks from October to March; gestation lasts from 8 to 10 months.
- Females give birth to a single calf, which depends on its mother for up to 12 months. After weaning the calf becomes independent.
- This species has a unique feature among cetaceans: they can swim on their sides.
- The South Asian River Dolphin is a solitary species, with the exception of mother and calf. Occasionally they have been seen in congregations of 3 to 10 individuals.
- They are blind but they can detect light. They have a highly developed sonar system, also known as echolocation. Dolphins send pulsed sounds or clicks which bounce back from objects in the form of echoes, giving them information about distance, shape, speed and material.
- These dolphins are top predators in their river ecosystems.
- They get most of their food from the bottom of the rivers; their diet includes crustaceans, fish, mollusks and aquatic plants.
- The oldest male on record lived to 28 years and the oldest female to 17.5.
- Kingdom: Animalia
- Phylum: Chordata
- Class: Mammalia
- Order: Artiodactyla
- Infraorder: Cetacea
- Family: Platanistidae
- Genus: Platanista
- Species: Platanista gangetica
References and further research
- Status Assessment of the Indus River Dolphin, Platanista gangetica minor, March-April 2001. Biological Conservation.
- The Animal Ageing and Longevity Database – Platanista gangetica
- International Union for the Conservation of Nature – IUCN, Platanista gangetica
- International Union for the Conservation of Nature – IUCN, Platanista gangetica gangetica
- International Union for the Conservation of Nature – IUCN, Platanista gangetica minor
- University of Michigan Museum of Zoology – Platanista gangetica
- Convention on International Trade in Endangered Species of Wild Fauna and Flora – CITES
- Convention on the Conservation of Migratory Species of Wild Animals – CMS, Platanista gangetica
- Food and Agriculture Organization of the United Nations – FAO, Platanista gangetica
- National Oceanic and Atmospheric Administration – NOAA, Platanista gangetica minor
Representation of some features and relations in some territory; a function with specified domain and range. A.k.a. mapping. The ideas of a map and the closely related mapping are very fundamental, and are somehow involved in much or all of human cognition and understanding - which after all is based on the making of mental maps or models of things. The first definition that is given is from the use of "map" in cartography and the second from mathematics, but both are related, and mappings can be seen as mathematical abstractions from maps.
1. maps: It is important to understand that one of the important points of maps (that also applies to mappings) is that they leave out - abstract from, do not depict - many things that are in the territory (or set) they represent. More generally, the following points about maps are important:
· the map is usually not the territory (even if it is part of it)
· the map does usually not represent all of the territory, but only certain kinds of things occurring in the territory, in certain kinds of ways
· the map usually contains legenda and other instructions for relating it to the territory
· maps are on carriers (paper, screen, rock, sand)
· the map embodies one of several different possible ways of representing the things it does represent
· the map usually is partial, incomplete and dated
· a map is usually not by itself enough to understand the territory the map is about (supposing the map represents some truth)
· maps may represent non-existing territories and include guesses and declarations to the effect "this is uncharted territory"
It may be well to add some brief comments and explanations to these points.
Maps and territories: In the case of paper maps, the general point of having a map is that it charts aspects of some territory (which can be seen as a set of things with properties in relations, but that is not relevant in the present context). Thus, generally a map only represents certain aspects of the territory it charts, and usually contains helpful material on the map to assist a user to relate it properly to what it charts. And maps may be partially mistaken or may be outdated and still be helpful to find one's way around the territory it charts, while it also is often helpful if the map explicitly shows what is guessed or unknown in it.
2. mappings: In mathematics, the usage of the terms "map" and "function" is not precisely regulated, but one useful way to relate them and keep them apart is to stipulate that a function is a set of pairs of which each first member is paired to just one second member, and a map is a function of which also the sets from which the first and second members are selected are specified. (These sets are known respectively as domain and range, or source and target. See: Function.) Note that for both functions and maps the rule or rules by which the first members are paired to the second members need not be known or, if known, need not be explicitly given. Of course, if such a rule is known it may be very useful, and may be all that needs to be listed to describe the function or map. Here are some useful notations and definitions, that presume to some extent standard set theory. It is assumed that the relations, functions and maps spoken of are binary or two-termed (which is no principal restriction, since a relation involving n terms can be seen as a pair of its first n-1 terms and its n-th term). In what follows "e" = "is a member of": A relation R is a set of pairs.
A function f is a relation such that (x)(y)(z)((x,y) e f & (x,z) e f --> y=z). A map m is a function f such that (EA)(EB)(x)(y)((x,y) e f --> xeA & yeB). That m is a map from A to B is also written as "m : A |-> B", which is in words: "m maps A to B". There are several ways in which such mappings can hold, and I state some with the usual wordings:
m is a partial map of A to B: m : A |-> B and not all xeA are mapped to some yeB.
m is a full map of A to B: m is a map of A to B and not partial.
m is a map of A into B: m : A |-> B and not every yeB is the image of some xeA.
m is a map of A onto B: m : A |-> B and not into.
One reason to have partial maps (and functions: the same terminology given for maps holds for functions) is that there may well be exceptional cases for some items in A. Thus, if m maps numbers to numbers using 1/n, the case n=0 must be excluded.
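To make the entry's definitions concrete, here is a minimal sketch in Python; the encoding of functions as sets of pairs follows the entry, while the helper names are mine. It ends with the entry's 1/n example:

```python
from fractions import Fraction

# A relation is a set of pairs; a function additionally pairs each first
# member with exactly one second member (the entry's condition on f).
def is_function(pairs):
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False  # some x is paired with two different second members
        seen[x] = y
    return True

# A map is a function together with specified sets A (domain) and B (range).
def is_map(pairs, domain, range_set):
    return is_function(pairs) and all(
        x in domain and y in range_set for x, y in pairs)

def is_partial(pairs, domain):
    # Partial: not every x in A is mapped to some y.
    return {x for x, _ in pairs} != set(domain)

# The entry's example: mapping numbers to numbers using 1/n must exclude
# n = 0, so on a domain that contains 0 the map is partial.
A = {0, 1, 2, 3, 4}
m = {(n, Fraction(1, n)) for n in A if n != 0}
B = {y for _, y in m}

print(is_map(m, A, B))   # True
print(is_partial(m, A))  # True: 0 has no image
```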
For many students, reading a Shakespearean play or poem is like piecing together a hopeless puzzle of archaic words. They wonder why students today are forced to study the writings of an Englishman who lived more than 400 years ago, and how they could possibly relate to stories about kings, fairies, and donkeys. Shakespeare's stories transcend time, and with enough background knowledge, we can learn much about ourselves and human nature through his writings. You can feel confident that our tutors have ample knowledge of the historical and personal events that influenced Shakespeare to write his literary masterpieces. Before you know it, your child will have Hamlet's famous "To be, or not to be" soliloquy memorized! Your child will learn how to:
- examine Shakespeare's plays, including (but not limited to) Romeo and Juliet, Macbeth, Hamlet, King Lear, A Midsummer Night's Dream, Othello, The Tempest, and Richard III
- read and interpret Early Modern English
- distinguish between the three types of Shakespearean plays: Tragedies, Comedies, and Histories
- critically analyze the major characters, themes, symbols, and motifs in Shakespeare's plays
- compose a Shakespearean sonnet
- discuss the historical, social, political, and personal impact that Shakespeare has made on us today
The main aim of Religious Education is to encourage respect for all beliefs and cultures as well as helping to promote the pupils' spiritual, moral, physical and cultural development. The teaching of RE is a legal requirement under the Education Act (1996), which requires that RE should be taught to all pupils in full-time education in school except for those pupils withdrawn at the request of their parents (details to be found in the DCSF publication). The Importance of RE:
- RE should develop the pupils' knowledge, understanding and awareness of Christianity, the other principal religions and their traditions.
- RE should enhance pupils' awareness and understanding of religious beliefs, teachings and practices, forms of expression, family life, communities and cultures.
- RE should offer opportunities for personal reflection and spiritual development.
- RE should encourage pupils to learn from different religious beliefs, values and traditions.
- RE encourages pupils to develop a sense of belonging and identity.
- RE enables pupils to flourish within their communities as well as individually as citizens in a multicultural, pluralistic and global society.
- RE is important in preparing pupils for adult life, employment and lifelong learning.
- RE helps students to show respect, kindness and tolerance towards others, including animals and wildlife.
- RE enables pupils to become more aware of ethical and moral issues within the community and society as a whole.
- RE is also important in helping our students to express their feelings and thoughts, and in helping them to make choices and decisions.
For our pupils on Early Years, Nurture & Engagement and Life Skills & Practical Skills Pathways, the curriculum is presented throughout the day, through play and daily routines, with opportunities for 1:1 and group work which takes full account of personal, social and emotional development. Pupils within the Key Stage 2 / 3 Academic Pathway have a timetabled RE lesson once a week. The syllabus is based on the EQUALS Curriculum. For all pupils there are RE learning opportunities through themed days, events and celebrations, including our annual Harvest Festival and a Multi-Faith Day.
Obesity has become a disease of epidemic proportions, with profound negative health, psychological, and social consequences for both children and adults in the United States. Obesity is a major risk factor for four of the six leading causes of death in the country, including coronary heart disease, certain types of cancer, stroke, and Type II diabetes. Psychological, social, emotional, and health problems resulting from obesity in children can continue into adulthood. Unfortunately, few studies have been undertaken on childhood obesity, especially for children under age 6. The purpose of this study was to assess the prevalence of overweight and obesity among Head Start children ages 3-4 in North Carolina and to identify factors contributing to obesity among this group. The specific objectives were to (1) assess the dietary habits and intake of the children, (2) assess their exercise and lifestyle habits, (3) determine their parents' perceptions and attitudes regarding obesity, (4) assess parental knowledge of nutrition, and (5) determine predictors of child obesity, such as dietary intake, exercise habits, and parents' nutrition knowledge and attitude toward nutrition and obesity. The setting of the study in North Carolina is particularly important given that children in this State have been found to be less flexible, have greater body fat, and have poorer fitness than youth nationwide. In fact, youth in North Carolina are more likely to be obese than other children in the Nation as a whole. One Head Start center in North Carolina, with 4 satellite locations, was selected for this study and provided height and weight data for 244 children ages 3-4. A survey questionnaire was administered to the parents who agreed to participate in the study. The survey instrument contained questions on the demographic profile of their children, their children's dietary habits, lifestyle/exercise habits, and food intake. In addition, parents were asked a series of questions designed to capture their views on food intake issues, their attitudes/perceptions/knowledge of nutrition, and their demographic characteristics. Finally, parents were asked to maintain a 5-day log of their children's dietary intake, TV watching, and exercise regimen. Pre- and posttest instruments were used to assess parents' nutrition knowledge and attitudes before and after completion of a nutrition education program. Of the 244 children whose weight and height were obtained and body mass index (BMI) calculated, about 25 percent were overweight (at or above the 95th percentile), 19 percent were at risk for being overweight (85th-94th percentile), 48 percent were in the healthy range, and 8 percent were underweight. These figures tend to be higher than those from a 2003 national study that involved children ages 2-4. Some 147 parents of the 244 children whose weights and heights were obtained returned their questionnaires. However, only 109 of these surveys were sufficiently complete for use in this study. Over one-half of the children involved in the Head Start program were 4 years old, while the others were 3 years old. Nearly 75 percent of the children were African-American, while some 26 percent were of Hispanic background. Over 28 percent of the 109 children whose parents completed the surveys were classified as overweight. Two-thirds of the overweight children were African-American. Results of the survey showed that a majority of the children were afraid to try new foods, regularly ate breakfast, and had good appetites.
Nearly 60 percent of the parents stated that their children often ate fruits and vegetables (perhaps as a result of foods eaten while attending the Head Start program). About 48 percent of the parents allowed their children to choose their snacks when shopping for food, an item that had a strong correlation with the BMI of these children. As for the frequency of food intake, three of every five parents indicated that their children often or always consumed whole milk, regular cheese, and processed meats. Nearly one-half noted that their children always or sometimes ate deep-fried and breaded foods. Statistical analyses of the dietary intake of children revealed that the type of food and the frequency with which it was consumed were significantly correlated with children's BMI, especially consumption of desserts, foods containing rich sauces and gravies, salted nuts, chips, and doughnuts. When the focus of the study shifted to parents' attitudes toward nutrition, about one-half of the parents indicated that they often made children finish the food on their plates, offered them dessert as a way to make them finish the food on their plates, or removed privileges from their children if they felt they did not eat enough at mealtimes. These attitudes had a positive correlation with the BMI of their children. Parents were then asked to respond to 11 nutritional-knowledge multiple-choice questions. The percentage of correct responses ranged from a low of 25 percent to a high of 71 percent. In order to assess the diversity of factors that might influence the BMI of Head Start children in the study, a multiple regression model was developed that contained eight key independent variables (children's dietary habits, food intake, exercise habits, and family weight status, and parents' BMI, exercise habits, attitudes toward nutrition, and nutritional knowledge). The results suggested that few of the variables proved significant and that the explained variance was very low. The results of the study offer some inkling of the factors that place young children at risk with regard to their weight. The small sample of Head Start children ages 3-4 revealed that many are already showing symptoms of being overweight. Parents play a critical role in determining the type of food their children eat and the frequency with which they eat it. But the study shows that parents had poor nutritional knowledge and contributed to their children's weight problems by allowing them to choose foods when shopping (many of which have limited nutritional value) and feeding them foods that were high in fats and calories.
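A small illustration of the classification used in the study: weight status is a cutoff on the CDC BMI-for-age percentile. The 95th and 85th percentile cutoffs are named in the text; the 5th-percentile cutoff for underweight is the standard CDC convention, assumed here. A minimal sketch in Python:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight divided by height squared."""
    return weight_kg / height_m ** 2

def weight_category(bmi_for_age_percentile):
    """Weight status from a CDC BMI-for-age percentile, per the study's cutoffs."""
    if bmi_for_age_percentile >= 95:
        return "overweight"
    if bmi_for_age_percentile >= 85:
        return "at risk for overweight"
    if bmi_for_age_percentile >= 5:  # standard CDC cutoff, assumed here
        return "healthy range"
    return "underweight"

# The percentile for a given BMI, age and sex comes from CDC growth-chart
# tables (not reproduced here); the value below is purely illustrative.
print(bmi(18.0, 1.02))        # ~17.3 for a hypothetical 4-year-old
print(weight_category(96.0))  # overweight
```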
Routers are even smarter than bridges and switches. But routers operate on the Network layer, which is a higher level in the OSI conceptual model, while bridges and switches operate on the Data Link layer. Like switches, routers use a combination of software and hardware, but here it is used to route data from its source to its destination. Routers actually have a sophisticated OS that allows them to configure various connection ports. You can set up a router to route data packets from different network protocol stacks, including TCP/IP, IPX/SPX and AppleTalk. Routers are used to segment LANs that have become so large that data traffic has become congested. Routers are also used to connect remote LANs together using different WAN technologies. When a network has become large, it is divided into logical segments called subnets. This division of the network is based on the addressing scheme, so that traffic related to a particular subnet is kept local. The router only forwards data that is meant for other subnets on the extended network. This routing of network data helps conserve network bandwidth. Routers decide how to forward data packets toward their destination based on a routing table. Protocols built into the router's operating system are used to identify neighboring routers and their network addresses; this is how a router builds its routing table.
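A sketch of the forwarding decision described above, using Python's standard ipaddress module. Real routers implement this in specialized hardware; the table entries and next-hop names here are invented for illustration:

```python
import ipaddress

# A toy routing table: (destination subnet, next hop). Entries are invented.
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"),     "router-A"),
    (ipaddress.ip_network("10.1.0.0/16"),    "router-B"),
    (ipaddress.ip_network("192.168.1.0/24"), "local"),
    (ipaddress.ip_network("0.0.0.0/0"),      "default-gateway"),
]

def forward(dest_ip):
    """Pick the most specific (longest-prefix) matching route."""
    dest = ipaddress.ip_address(dest_ip)
    candidates = [(net, hop) for net, hop in routing_table if dest in net]
    net, hop = max(candidates, key=lambda item: item[0].prefixlen)
    return hop

print(forward("10.1.2.3"))      # router-B (the /16 beats the /8)
print(forward("192.168.1.77"))  # local: traffic for the subnet stays local
print(forward("8.8.8.8"))       # default-gateway
```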
We spend our lives on a spinning globe -- it takes only 24 hours to notice that, as night follows day and the cycle repeats. But what causes Earth to rotate on its axis? The answer starts with the forces that formed our solar system. A fledgling star gathers a disk of dust and gas around itself, said Kevin Luhman, an assistant professor of astronomy at Penn State. As things coalesce, the star's gravity sets that dust and gas to spinning. "Any clump that forms within that disk is going to naturally have some sort of rotation," Luhman said. As the clump collapses on itself it starts spinning faster and faster because of something called conservation of angular momentum. Figure skaters exploit this law when they bring their arms closer to their bodies to speed up their rate of spin, Luhman explained. Since gravity pulls inward from all directions equally, the amorphous clump, if massive enough, will eventually become a round planet. Inertia then keeps that planet spinning on its axis unless something occurs to disturb it. "The Earth keeps spinning because it was born spinning," Luhman said. Different planets have different rates of rotation. Mercury, closest to the sun, is slowed by the sun's gravity, Luhman noted, making but a single rotation in the time it takes the Earth to rotate 58 times. Other factors affecting rotational speed include the rapidity of a planet's initial formation (faster collapse means more angular momentum conserved) and impacts from meteorites, which can slow down a planet or knock it off stride. Earth's rotation, he added, is also affected by the tidal pull of the moon. Because of the moon, the spin of the Earth is slowing down, the day lengthening at a rate of about 2 milliseconds per century. The Earth spun around at a faster clip in the past, enough so that during the time of the dinosaurs a day was about 22 hours long. In addition to slowing the Earth's rotation, the moon's tidal pull is causing the moon to slowly recede from the Earth, at a rate of about 4 centimeters per year. In the distant past, the moon was closer. "It would have appeared much larger in our sky than it does now," Luhman said. Hundreds of millions of years from now, he added, the cycle of a day on Earth will likely stretch to 25 or 26 hours. People will have to wait a little longer for the rising of the sun. Source: Mike Shelton, Research Penn State
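An aside, not part of the original article: the skater analogy can be made quantitative. Angular momentum L = I * omega is conserved, and a uniform sphere's moment of inertia is (2/5) m r^2, so halving the radius quadruples the spin rate. A sketch with made-up numbers:

```python
# Conservation of angular momentum: L = I * omega stays constant, so as a
# collapsing clump shrinks (I drops), its spin rate omega must rise.
# The numbers below are illustrative, not measurements of any real cloud.

def sphere_moment_of_inertia(mass, radius):
    """I = (2/5) * m * r^2 for a uniform solid sphere."""
    return 0.4 * mass * radius ** 2

mass = 6.0e24            # kg, roughly an Earth mass
r_initial = 1.0e9        # m, a loose clump (invented value)
r_final = 6.4e6          # m, roughly an Earth radius
omega_initial = 1.0e-9   # rad/s, a barely perceptible initial spin (invented)

L = sphere_moment_of_inertia(mass, r_initial) * omega_initial
omega_final = L / sphere_moment_of_inertia(mass, r_final)

print(f"spin-up factor: {omega_final / omega_initial:.0f}x")
# (r_initial / r_final)^2 ~ 24,000x: the same physics as the skater's arms.
```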
The Whooping Crane, a symbol of national and international efforts to recover endangered species, has returned from the brink of extinction but remains at risk. In 1941, the species reached a low of 15 or 16 migratory individuals wintering in Texas (Boyce 1987) and 6 non-migratory birds in Louisiana. The Louisiana population did not survive. All whooping cranes alive today (437 in the wild + 162 in captivity = 599 as of August 2011 [Stehn 2011]) are descendants of the small remnant flock in Texas in winter 1941-42. Although that population increased to 283 by winter 2011-12 (Stehn and Haralson-Strobel 2014), several factors, especially human development and long-term water issues on the wintering grounds, continue to place it in jeopardy. Despite intense management efforts, the whooping crane remains one of the rarest birds in North America. Establishment of additional populations by reintroduction has so far been unsuccessful, although progress has been made in reintroduction methods. Because of the concern this species has generated, it is arguably one of the best-studied birds in North America. Within the United States, the Whooping Crane is listed as Endangered; recovery actions have been accomplished cooperatively by Canada and the United States, assisted by provincial and state agencies, nongovernment groups, and the private sector. The common name of the Whooping Crane is probably derived from its Guard Call or Unison Call vocalizations. In the 1800s, this species was widespread but apparently never common in the tall- and mixed-grass prairie marshes of the north-central United States and southern Canada. It remains ecologically dependent on such inland freshwater wetlands and, in winter, on coastal brackish wetlands. The only remaining self-sustaining wild population nests in or near Wood Buffalo National Park in the Northwest Territories and adjacent areas of northeastern Alberta, Canada, and winters on the Texas coast of the Gulf of Mexico. Attempted reintroductions in the Rocky Mountains (migratory) and in Florida (non-migratory) were unable to produce self-sustaining populations and have been discontinued. Reintroduction of a population migrating between Wisconsin and Florida began in 2001 and met with initial success, but its future will depend on solution of persistent nest failure. In 2010 a fourth reintroduction, to establish a non-migratory population, began in Louisiana. As of June 2014, 164 birds are maintained in captivity: 152 at five captive propagation facilities (Patuxent Wildlife Research Center, Maryland; International Crane Foundation, Wisconsin; Calgary Zoo, Alberta; Audubon Species Survival Center, Louisiana; and San Antonio Zoo, Texas), and an additional 12 birds at seven display facilities (S. Zimorski pers. comm.). This species is perennially monogamous and typically begins egg production at ages 3 or 4 years in the wild, but often not until ages 5 to 11 in captivity. Females usually lay a 2-egg clutch annually but seldom fledge more than 1 young. Both parents care for the young for 10 to 11 months, and young learn migration routes by following their parents. Wild birds may survive an estimated 25 years, captive birds 40 or more years. The definitive historical reference on Whooping Cranes is Allen (1952). When that work was published, the species was nearly extinct and the nesting area of what was soon to be the only surviving natural population was unknown. 
Allen (1956) completed this foundation reference with a supplement after the nesting area was discovered in 1954. The joint Canadian-U.S. Whooping Crane Recovery Plan (CWS and FWS 2007) serves as an excellent reference on the species and updates recovery actions through 2005. Beginning in 1975, the crane conservation community has held regular conferences at intervals of approximately 3 years. The North American Crane Working Group (NACWG) was formally established in 1988 to organize these events and publish the resulting Proceedings of the North American Crane Workshop. The papers therein are peer-reviewed and cover all aspects of Sandhill and Whooping Crane conservation and biology. Additional information, especially updates on populations and research projects, appear in two newsletters, NACWG's Unison Call and the Whooping Crane Conservation Association's Grus Americana.
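A back-of-envelope aside (the arithmetic is mine, not from the account): the counts given above, 15 or 16 birds in winter 1941-42 rising to 283 by winter 2011-12, imply a long-run average growth rate for the remnant population:

```python
# Implied average annual growth of the Aransas-Wood Buffalo population,
# using only the counts given in the text (16 birds in 1941-42, 283 in 2011-12).
n0, n1 = 16, 283
years = 2011 - 1941

annual_rate = (n1 / n0) ** (1 / years) - 1
print(f"~{annual_rate * 100:.1f}% per year over {years} years")
# ~4.2%: steady but slow, one reason the species remains at risk.
```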
At Hueco Tanks State Park and Historic Site, unique geology and rainwater create a sanctuary for living things. Hueco Tanks lies in the southeast part of the Basin and Range physiographic province. In this province, broad flat basins separate isolated and nearly parallel mountain ranges. The Spanish called these basins bolsons, meaning large purse. The park is in the north end of the Hueco Bolson, which extends southeast along the Rio Grande. The Hueco Mountains (to the east) and the Franklin Mountains (to the west) flank this part of the basin. When you approach the park, you will notice three low mountains rising from the Chihuahuan Desert. Around 35 million years ago, an underground mass of hot magma (or molten rock) pushed upward here, and then cooled under a layer of limestone. Over millions of years, wind and water wore away the limestone covering. These same elements then sculpted the mountains’ igneous rock (or cooled magma) into its current form. The igneous formations in the park capture precious rainwater to create a rich oasis in the arid desert. Water comes to Hueco Tanks in a variety of ways. Runoff from the Hueco Mountains flows through arroyos on either side of the park. Within the park, cracks in the igneous rock capture rainwater and channel it downward. Cracks and hollows (huecos in Spanish) in the rocks also hold rain, hence the park’s name. Most of these huecos occur naturally, but early residents built dams and tanks, too. The huecos hold water for several days to several months. How long a hueco holds water depends on the size of its pool, and whether it is in open air or protected from evaporation. Not only do the mountains conserve rainwater, they also provide shelter, shade and pockets of fertile soil that create “microhabitats.” These microhabitats allow species not normally found in the desert to survive here. Hueco Tanks State Park is home to a much wider variety of animal species than the surrounding desert. We see carnivores such as bobcat, gray fox, coyote, javelina, badger, ringtail, skunk, raccoon and mountain lion (or their tracks) regularly. Three types of rabbits forage in the park: black-tailed jackrabbit, desert cottontail and eastern cottontail. A number of rodents live here, and six species of bats roost in these rock hills. Not surprisingly, many reptiles live here, too. Five rattlesnakes native to the Trans-Pecos region have been spotted: blacktail, Mojave, mottled rock, western diamondback and prairie. They join 25 other species of snakes in the park. Other reptiles include 17 species of lizards, one of which is the Texas horned lizard. Seven amphibian species usually found in wetland areas have also been seen here. These include six species of toads and the barred tiger salamander. More than 200 species of birds have been recorded at Hueco Tanks. Around 44 species may breed here, including the prairie falcon, burrowing owl, white-throated swift, ash-throated flycatcher, blue grosbeak and Scott’s oriole. Many wading birds, waterfowl and shorebirds stop at the park during migration periods. Migratory songbirds spend time here in the spring and fall. More than 20 sparrow species overwinter at Hueco Tanks. “Fairy” Shrimp: Tiny, translucent freshwater shrimp live in the huecos. These little fellows lie dormant until it rains. Then they spring to life, and become food for predators like lizards. 
Find more information on the animals of Hueco Tanks State Park and Historic Site: - Texas Wildlife Fact Sheets - Butterflies and Moths of El Paso County - Texas Beyond History: Hueco Tanks Animals - Just for Kids: Honor Roll - Just for Kids: Desert Dwellers. The park hosts an interesting mix of plant species from desert, mountain, aquatic and grassland habitats. Desert scrub and grasslands of the Chihuahuan Desert surround the mountains. Creosotebush, honey mesquite, ocotillo, lechuguilla, sotol, prickly pear and other cacti grow here. Grasses include gramas, goosefoot and amaranth near the rocks and fourwing saltbush and other grasses on the desert flats. In narrow canyons and at the base of hills, moist habitats and ponds support mature trees and other plants. Tree species include netleaf hackberry, Texas mulberry, Mexican buckeye, Arizona white oak and rose-fruited juniper. Water-loving plants like leafy pondweed, hairy pepperwort and Rio Grande cottonwood thrive in moist soils near dams and seeps. Look for a few rare-to-Texas plants. The only known population of erect colubrine (Colubrina stricta) in the United States grows here. Abutilon mollicomum, a tall spindly mallow, grows in two locations in the park. Mosquito plant (Agastache cana) grows on rocky slopes, crevices and ledges in the western Trans-Pecos and New Mexico. This perennial grows only in a small area, but thrives in the park.
Hemothorax is a collection of blood in the space between the chest wall and the lung (the pleural cavity). The most common cause of hemothorax is chest trauma, though it can also occur in patients with certain underlying conditions. Your doctor may note decreased or absent breath sounds on the affected side. Signs of hemothorax may be seen on imaging tests such as a chest x-ray. The goal of treatment is to stabilize the patient, stop the bleeding, and remove the blood and air in the pleural space. A chest tube is inserted through the chest wall to drain the blood and air. It is left in place for several days to re-expand the lung. When a hemothorax is severe and a chest tube alone does not control the bleeding, surgery (thoracotomy) may be needed to stop the bleeding. The cause of the hemothorax should also be treated. In people who have had an injury, chest tube drainage is often all that is needed, and surgery is often not necessary. The outcome depends on the cause of the hemothorax and how quickly treatment is given. Call 911 or go to the emergency room if you have symptoms of a hemothorax, such as chest pain or difficulty breathing. Use safety measures (such as seat belts) to avoid injury. Depending on the cause, a hemothorax may not be preventable.
Originally, ADHD was known as hyperkinetic impulse disorder. It wasn't until the late 1960s that the American Psychiatric Association (APA) formally recognized ADHD as a mental disorder. What Is ADHD? Attention Deficit Hyperactivity Disorder (ADHD) is a condition that affects children and young adults and can continue into adulthood. Symptoms include difficulty remaining still for long periods of time, limited attention span, and high activity levels. You may notice that these are generally common behaviors in young children; however, the difference with children who have ADHD is that their hyperactivity and inattention are noticeably greater than those of their peers. This can lead to distress and/or problems functioning at home, school, or with friends and family. ADHD is diagnosed as one of three types:
- Inattentive type
- Hyperactive/impulsive type
- Combined type
Although some research indicates that genetics may play a factor in ADHD, scientists have yet to discover the specific cause of this mental disorder. What Are the Effects of ADHD? Many adults and young adults with ADHD do not realize they have the disorder, which can put them at a higher risk for developing other issues such as depression or anxiety. Often someone with undiagnosed ADHD will turn to substance use to self-medicate in an attempt to calm themselves or control feelings of anxiety or depression. Addiction and other compulsive habits are more likely in adults with undiagnosed ADHD than in the general population. Potential symptoms of ADHD include:
- Chronic lateness and forgetfulness
- Low self-esteem
- Employment problems
- Difficulty controlling anger
- Substance abuse or addiction
- Poor organization skills
- Low frustration tolerance
- Chronic boredom
- Difficulty concentrating when reading
- Mood swings
- Relationship problems
Attaining the right diagnosis and the proper treatment can transform your life. ADHD Help at The Meadows The Claudia Black Young Adult Center, a specialized treatment program of The Meadows, utilizes the Test of Variables of Attention (T.O.V.A.) in the assessment protocol of its young adult patients. Essentially, T.O.V.A. is a computerized test of attention that assists in screening, diagnosis, and treatment monitoring of attention disorders, such as ADHD. T.O.V.A. complements the work of the multidisciplinary treatment team at the Claudia Black Young Adult Center. The T.O.V.A. report often accompanies a history of substance use disorders, relational trauma, anxiety disorders, and mood dysregulation. The symptoms of ADHD may at times be directly due to a substance withdrawal syndrome, the consequences of trauma, or a mood disorder itself. The presence of such comorbidity complicates the diagnostic process and necessitates careful consideration of the specifics unique to each individual's clinical presentation. Using the T.O.V.A. assessment raises the high standard of service and outcomes at the Claudia Black Young Adult Center. At The Meadows family of treatment programs, we work closely with patients to tailor treatment to best fit their unique needs. This highly specialized focus is one of the many reasons why we have successfully treated thousands of patients for over 40 years. To learn more about The Meadows or the Claudia Black Young Adult Center, please call 800-244-4949.
What is Mathematical Thinking? Mathematical Thinking does not mean making top grades in math classes. It isn't always found in those who have taken many years of math in college. Many of your math teachers do not understand mathematical thinking. Mathematical Thinking isn't about what you KNOW but how you THINK about math. It includes four characteristics: 1. A habit of questioning A child might ask: "What is the biggest number? Why isn't there a biggest number? Why do we count big things and little things the same way? This glass has water that is higher than the others. Why doesn't that mean it has the most water?" Mathematicians through the ages have explored methods of measurement. How can we measure the distance around the earth, the distance to the moon, sun, or stars, or the size of atoms? Great mathematical questions are questions with no known answer. Just as some people enjoy doing crossword puzzles or jigsaw puzzles, the mathematical thinker finds great joy in a mathematical question that others have not been able to solve. 2. Creative Problem Finding The mathematical thinker doesn't wait for someone else to ask the question. They look for opportunities to apply mathematics to all areas of life. It could be in economics, tracking economic trends, in geology, studying the rates of change in the earth's crust, or perhaps in the study of weather and global warming. Everywhere they look, they search for patterns and ways to use math to understand the world in new ways. 3. Inventive Problem Solving They don't just use basic problem solving methods. They look for new and creative strategies for solving problems. Sometimes, to solve an important problem, they invent an entirely new branch of mathematics. 4. Use mathematical methods of proof We have seen movies or pictures where a mathematician has worked on a problem, covering an entire blackboard with calculations. It might take years to discover how to begin from what we know and mathematically prove a new idea. To prove a science theory, we use experiments, but even the most careful experiments cannot prove an idea beyond all doubt. New evidence might be found and scientists must re-think their ideas. In Math, the proof is absolute. How does this apply to Students in High School or College? 1. Ask questions about mathematics. Ask why we use a certain method to solve a problem. Ask how mathematical procedures were discovered. Ask how you might apply the procedures you are learning in solving practical problems. 2. Be a problem finder. When you play a game, watch the team play football or other sports, use a recipe to make a new dessert, visit the doctor's office, or watch the evening news, think about ways math is being used in these areas, and new ways it might be applied in the future. 3. Use different strategies for problem solving. After you understand how your teacher expects you to solve a problem, look for other methods that might work. You aren't likely to discover new and better methods, but when you can do a problem in several ways, you can begin with one method and check your work with another. If you can't remember how to solve a problem using algebra, you might use the "guess and check" method (see the short sketch at the end of this piece). 4. Practice Mathematical Reasoning. When you learn something new, try to understand the problems, the methods used, and why they work. As you look at the problems on the next page and my explanations on the page after that, you will see how I often explain things that might not have made sense to you before.
If you do understand all the problems, you are already using mathematical reasoning. You can use your math notebook to write your own explanations for new ideas you learn in math. Understanding math is much more important than making straight A's in math. Making straight A's in high school, especially if you forget much of what you studied, won't be much help in college. Understanding math, learning mathematical reasoning, will help you in college and for the rest of your life. 5. Use Mathematical Thinking in other subjects. You might evaluate statistics used as evidence, or spot probability improperly used. Too many people are mathematically illiterate. Test Yourself: A Test of Mathematical Reasoning The next page has 15 questions about math. You don't need high school math to answer these questions. Middle school students know everything they need to know. This is a test of how well you understand and can reason about problems based on elementary school math. You might also want to read: Study Math and Improve Math Skills: 12 Tips for Learning Math
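As a concrete illustration of point 3, here is the same problem solved two ways, so each method checks the other. The example equation is invented here, not taken from the article's test:

```python
# Example problem (invented for illustration): find a whole number x
# with 3x + 7 = 52.

# Method 1: algebra -- isolate x.
x_algebra = (52 - 7) / 3
print(x_algebra)  # 15.0

# Method 2: "guess and check" -- try candidates until one works.
x_guess = next(x for x in range(100) if 3 * x + 7 == 52)
print(x_guess)    # 15

# Doing it both ways lets one method verify the other, as the article suggests.
```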
Artist's rendering of Eris, announced in July 2005 by Mike Brown of Caltech. It is more massive than Pluto. The sun is in the background. The dwarf planet that effectively forced astronomers to strip Pluto of its planethood is not only bigger than the former ninth planet, but also much more massive, a new study finds. Michael Brown, a planetary scientist at Caltech, and his graduate student Emily Schaller have determined that Eris, discovered in 2005 by Brown and his team, is about 27 percent more massive than Pluto. The finding, detailed in the June 15 issue of the journal Science, also confirms Eris and Pluto have similar compositions. The moon holds the key Eris circles the sun from about 9 billion miles away—about twice the distance of Pluto at the farthest point in its orbit. Its discovery was one of several factors that led some astronomers to create a new definition for planethood at the 2006 meeting of the International Astronomical Union (IAU) in Prague. The ruling reduced the planet count in our solar system to eight and left Pluto renamed as a "dwarf planet." To determine Eris' mass, the researchers used the Hubble Space Telescope and the Keck Observatory to calculate the orbital speed of its moon, Dysnomia. According to Newtonian physics, the more massive a celestial object is, the faster its satellite will zip around it. "So by looking at the time it takes the moon to go around Eris, we're able to calculate the mass," Schaller said. Because Eris and Dysnomia are located more than 90 times farther from the sun than Earth, out in the Kuiper Belt region of the solar system, they appear as little more than pricks of light in telescope observations. "Eris is slightly larger than a point source, but just barely," Schaller said. Dysnomia is thought to be less than 100 miles (150 km) across and to take about 16 Earth-days to make one trip around Eris. Eris itself is believed to have a diameter of 1,490 to 1,860 miles (2,400 to 3,000 km). "To put that into perspective, if you took all the asteroids in the asteroid belt [between Mars and Jupiter] and multiplied by four, they would easily all fit into Eris," Schaller told SPACE.com. Pluto has a diameter of about 1,430 miles (2,300 km). Knowing Eris' mass and size, the researchers were also able to confirm that Eris' density is similar to that of Pluto, and that it is therefore likely made up mainly of rock and water ice. Born into controversy Formerly known as 2003 UB313, the dwarf planet was rechristened Eris (pronounced ee'-ris) by astronomers last year. The name is fitting: Eris is the Greek goddess of discord and strife, who stirred up jealousy and envy among the goddesses that led to the Trojan War. When Eris the dwarf planet was discovered, it created a furor among astronomers that led to the controversial decision last year to demote Pluto. While some planetary scientists still oppose the decision on grounds that the new definition of planet is not specific enough, Schaller thinks the IAU made the right choice. "I think that really only the big eight planets distinguish themselves as clearly different from all the other objects," Schaller said. Schaller points to the example of Ceres, a former asteroid whose naming history resembles that of Pluto. When Ceres was first discovered in 1801, it was classified as a planet on account of its large size (it is 530 miles across). But "once they started discovering more and more asteroids, it got a bit ridiculous," Schaller said.
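An aside on the measurement Schaller describes: Newton's form of Kepler's third law, M = 4 pi^2 a^3 / (G T^2), recovers the primary's mass from its satellite's orbit. The period below is the article's "about 16 Earth-days"; the orbital radius is a published follow-up value, assumed here purely for illustration:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Newton's form of Kepler's third law, valid when the moon's mass is
# negligible compared with the primary's.
def primary_mass(a_meters, period_seconds):
    return 4 * math.pi ** 2 * a_meters ** 3 / (G * period_seconds ** 2)

a = 3.74e7            # m (~37,400 km; a published value, assumed here)
T = 15.8 * 24 * 3600  # s (the article's "about 16 Earth-days")

print(f"Eris mass ~ {primary_mass(a, T):.2e} kg")
# ~1.7e22 kg, about 27% more than Pluto's ~1.3e22 kg, matching the article.
```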
As with Pluto, Ceres was downgraded to the status of asteroid once scientists realized it was just the first-known of a class of rocky bodies residing mainly between the orbits of Mars and Jupiter. Last year, the IAU voted to reclassify Ceres as a dwarf planet, elevating it to the same ranks as Pluto and Eris. In the current debate surrounding Pluto, Schaller sees history repeating itself. "Pluto was discovered and for a really long time there wasn't anything else discovered," she said. "But if it had transpired in the same way to the asteroid belt where the following year many more objects were discovered, then I think we wouldn't be having this discussion right now."
Are you smarter than a Neanderthal toolmaker? Archaeologists look into whether Neanderthals copied human tools or innovated their own designs Mary Beth Griggs Neanderthals may have been pretty good at innovating tools. [Image Credit: Lígia Santos Rodrigues, flickr] Could a Neanderthal use a hammer? Maybe. But could he build one himself without imitating humans? The question of whether our close hominid cousins had the ability to innovate new kinds of tools, and not just imitate, is coming up in scientific circles as archaeologists re-evaluate old archaeological sites. Two recent studies examined the possibility that Neanderthals created a new toolkit in Europe about 30,000 to 40,000 years ago. For the past few decades, most archeologists assumed that Neanderthal stone tools were simple and roughly shaped. But that assumption may be undermined by the discovery at some Neanderthal sites of thinner, more blade-like stones, some with jagged toothed edges, and others that had one sharp edge and a dull, curved back. They were similar to tools favored by humans during the same time period, leading some experts to assume that Neanderthals were heavily influenced by human culture. Now, some archaeologists are viewing Neanderthals in a more favorable light, casting them as an intellectual match for humans and calling into question the widely-held idea that changes in Neanderthal culture were introduced by Homo sapiens. The first of the recent studies was set in southern Italy, where researchers examined a group of artifacts known as the Uluzzian culture. For archaeologists, often all that remains of a group of people are the things they leave behind (their material culture), so the ‘Uluzzian culture’ is the collective material remains of a group that lived about 30,000 years ago, in the Stone Age of southern Italy. Because the sites are so ancient, the archaeologists have little to work with in the present, leaving much about whoever left the Uluzzian remains unknown. At the time, Neanderthals were making their last stand in Europe, and the climate was seesawing between cold snaps and warmer periods. In such harsh and varying climates, the tools that Neanderthals traditionally used may not have been as useful, forcing them to improvise. “There would have been an advantage to pause and develop new strategies,” said Julien Riel-Salvatore, lead author of the study, which was published last August in the Journal of Archaeological Method and Theory. A central question for Riel-Salvatore was whether or not the Uluzzian style could have developed independently of modern humans, who were creating similar technologies to the north, in the heart of Europe. The Uluzzian area, located at the bottom of Italy’s boot, was isolated by water on three sides, and bordered to the north by what Riel-Salvatore described as a Neanderthal “population buffer” in central Italy. The population buffer was a large population of Neanderthals that lived to the north of the Uluzzian area, and showed no signs of interacting with humans or changing their tool-making methods. Because the Neanderthals were cut off from the rest of world, the Uluzzian toolkit could have developed independently of human influence. The findings indicate that even though Neanderthals eventually died off, it’s possible that they attempted to adapt to their changing ecosystem. 
The other new Neanderthal study, published two months later in the Proceedings of the National Academy of Sciences, was conducted by the archaeologist Thomas Higham of Oxford University. His research was focused at the Grotte du Renne, a site in France that archeologists have excavated since the 1930s. Researchers there have been exploring a similar tool grouping, which is referred to as a ‘material culture’ by archaeologists, from around the same time period as the Uluzzian, known as the Chatelperronian culture. The set of tools was ascribed to Neanderthals, because Neanderthal remains (some teeth and a part of a skull) were found there. “[Historically,] this is the first site in the world that the Chatelperronian was associated with Neanderthals,” said Higham. The combination of bones and tools proved to be a convincing argument, until Higham’s paper showed definitively that the site at Grotte du Renne was disturbed long after its initial use. Because of this disturbance, it calls into question whether Neanderthals were even around when the inhabitants at the Grotte du Renne were making Chatelperronian tools. Higham’s paper threw Riel-Salvatore’s initial findings into a different light, casting doubt on the idea that Neanderthals created the Uluzzian culture. Riel-Salvatore wasn’t perturbed, though. He agreed with Higham that it was still too early to rule out Neanderthal toolmakers at either Chatelperronian or Uluzzian sites, and that more research into the subject was needed. Further muddying the issue is the fact that no one is certain whether the new, sharper tools were really more effective in coping with the cooling climate than Neanderthal tools. The blunt tools favored by Neanderthals were more clumsy-looking than the bladed stone tools their human contemporaries used, but were produced more efficiently and lasted longer. If Neanderthals did not develop new tools, it may not have been because they were insufficiently intelligent, but because they were already smart enough to know they didn’t need the cool new tools that the humans used in the cave next door. “Unchanging technology used to equal inability to innovate, but it could have just been them reaching the peak [of efficiency],” said archaeologist Metin Eren of Southern Methodist University in Texas. “There’s no question that they were a different population,” said Riel-Salvatore, but it may be time to give Neanderthals a bit more credit. Whether they were able to innovate on their own, or just adopted human tools when they were advantageous, many experts now agree that they weren’t just brutish cavemen.
One of the high points of Jewish existence in Egypt came early in its history, in the centuries following the invasion of Alexander the Great in the fourth century BCE. The blending of Jewish and Greek cultural influences led to the development of a Hellenistic Judaism, much as the Jews would later become integrated into Egyptian society and create a distinctive Arabic-Jewish culture. The Egyptian Jews pursued and excelled in the fine arts, philosophy, and literature, absorbing Hellenistic culture while maintaining their religious traditions, and during this period they prospered, building many synagogues and temples. Unfortunately, this period did not last long; the onset of Roman and later Christian influence in Egypt brought a rising anti-Semitic sentiment throughout the second and third centuries CE. The Jews tried to resist but were overwhelmed; at the same time, the Jewish community itself began to atrophy through emigration and intermarriage. It was not until the Arab conquest (640 CE) that the Jews began to regain their social and cultural strength. From 640 to the late 900s, Jews owned and ran their own universities, served in the courts, and enjoyed a period of relative economic prosperity. From 969, the Fatimid caliphs ruled Egypt, and under them and the Ayyubid dynasty that followed (1171-1250), the Jews continued to flourish in cultural and political spheres, gaining recognition at court and the right to self-rule. In 1301, however, the Mameluke rulers, who had formerly been slave-soldiers, began a campaign to identify and exterminate non-Muslims. The Jews, along with others including the Christians and Samaritans, began to flee or were executed until their numbers had diminished to fewer than 900, a far cry from the estimated 12,000-20,000 who had flourished in the mid-twelfth century. After 1492, as a result of their forced expulsion from Spain and Portugal, the Sephardim of the Iberian Peninsula began a mass emigration to Egypt. In the ensuing years, many Jews gained high posts in the Ottoman (Turkish) courts that ruled at the time, and the Jewish finance minister was officially regarded as the political leader of the Jews. At the same time, the Jews of North-West Africa began to move into Egypt, and the Jewish community gradually became more complex. The Turks, meanwhile, grew less tolerant of the Jews, and when Egypt tried to break free of Turkish rule, the Jews suffered. Nevertheless, the Jews continued to endure pogroms, persecution, and economic constraints, including the heavy taxation imposed by the governor Ali Bey in 1768 during his attempt to re-establish the old Mameluke empire. Napoleon's influence in Egypt, between 1798 and 1801, brought yet another difficult time for the Jews. While he appeared to support the Jews, much of his activity was in fact deleterious to the Jewish community. Once again heavy taxes and violence emerged; in particular, Napoleon was responsible for destroying an Alexandrian synagogue. But the retreat of the French brought a sudden surge in the overall European population of Egypt, and Jewish numbers began to rise once more. New legislation protected the Jews and gave them privileged status, tax exemptions, and legal protection as foreign nationals. With these reforms came new growth in the economic and cultural role of the Egyptian Jew. Among the most noted Jews of this period was Ya'qub Sanu' (Sanua), a satirical playwright who achieved prominence until his expulsion in 1878.
The year 1881 brought the British to Egypt and, with them, an increased tolerance that helped raise the Jews to a new level of prosperity. A form of economic and cultural renaissance followed, during which many elegant homes and temples were built and schools were established; ultimately, the Jews in Egypt began to surpass native Egyptians in education and cultural attainment. By 1917, the number of Jews in Egypt had risen to 60,000, most of whom had been deeply affected by European influences. Most had been educated in foreign schools and spoke Arabic only as a second language, and the Jewish community was understood to be entirely distinct from Egyptian or Arabic cultures. Individual Jews played an important role in Egyptian nationalism. The Jewish scholar Murad Beh Farag (1866-1956) was an Egyptian nationalist. His poem 'My Homeland Egypt, Place of my Birth' expresses loyalty to Egypt, while his book al-Qudsiyyat (Jerusalemica, 1923) defends the right of the Jews to a state. Farag was also one of the co-authors of Egypt's first constitution in 1923. Another was the satirist Ya'qub Sanu', mentioned above, who became a patriotic Egyptian nationalist advocating the removal of the British. From exile he edited the nationalist publication Abu Naddara 'Azra, one of the first magazines written in Egyptian Arabic; it consisted mostly of satire, poking fun at the British as well as the monarchy, which was a puppet of the British. Another was Henri Curiel, who in 1943 founded 'The Egyptian Movement for National Liberation', an organization that was to form the core of the Egyptian Communist Party. After 1937, anti-Semitic activity in Egypt increased. Anti-Semitic violence was no longer considered simply a political manoeuvre for the personal gain of a rising political power, but instead was regarded as a symbolic act of retribution. An increase in legislated forms of oppression made it illegal for non-nationals to hold high political, economic or educational posts (measures aimed at the largely foreign Jewish population), and contributions were "solicited" for the Egyptian army. In 1947, there were 65,639 Jews in Egypt, who by 1951 could be divided into four distinct groups: Arabic-speaking Jews of old Egyptian ancestry; Berber Jews; the Sephardim of Spanish-Portuguese stock; and Ashkenazim, or central and eastern European Jews. At the same time, Egypt was home to the largest body of Karaites, descendants of eighth-century Jews who had split from the main body of Judaism. These groups differed from one another in their cultural and historical pasts, and yet the Jews of Egypt as a whole held together as a distinct people. The many foreign influences, including Jewish immigrants who had come from abroad, naturally produced some internal conflicts based on cultural differences and a wide range of religious convictions. Furthermore, the integration of the Jewish people into the commercial and cultural fabric of Egypt took its toll, resulting in a decrease in the intensity of religious belief among the later generations. From the early twentieth century until the expulsion of the Jews in 1956, thousands of Jews had their possessions confiscated and thousands more were arrested. Between November 1956 and September 1957, 21,000 Jews were expelled from Egypt, and by 1960 only 8,500 remained.
By the end of the Six-Day War in 1967, only 800 Jews were left in Egypt, and by 1980 fewer than 300 were known to remain in the country that had been home to generations of Jews for over thirty-two centuries.
How does the length of a wire affect its resistance? In an electrical circuit, the current (the flow rate of charge) depends on the battery voltage that drives the charge through the circuit and on the components in the circuit. Resistance is the measure of how hard it is for current to flow through a component. Three factors determine the resistance of a wire: its length, its cross-sectional area (thickness), and the material it is made of. Resistance is directly proportional to the length of the wire and inversely proportional to its cross-sectional area: R = ρL/A, where R is the resistance in ohms, ρ (rho, to rhyme with snow) is the resistivity of the material in ohm-metres, L is the length in metres, and A is the cross-sectional area in square metres. Resistivity tells how resistive a material is; in effect, it represents the resistance across two opposite faces of a cubic metre of the material, in the same way that density is the mass of a cubic metre. Note that only these factors matter: coiling the wire, or wrapping it into a pretzel shape for that matter, does not affect its resistance, whereas changing its length does. As a wire gets longer its resistance increases in direct proportion: doubling a 10 cm wire to 20 cm doubles its resistance. The reason is that electrons drifting along a longer wire undergo more collisions with the ions of the metal lattice, so a longer path offers more opposition to the current. Likewise, a thicker wire has a lower resistance than a thinner one, because the larger cross-section lets more charge flow side by side. Temperature also affects resistance: the resistance of a metal wire increases as it warms up. Current flowing through a resistance dissipates energy as heat, and if the current is large the wire may even glow red hot; components manufactured to provide a fixed resistance are called resistors. For these reasons, an investigation of length and resistance must be run as a fair test: keep the material and width of the wire the same, test at room temperature, and use a low voltage (a maximum of 2 volts, say) so the wire does not heat up. If the wire melts or burns, the whole experiment has to be started again. A typical procedure uses constantan wire, starting with a 1 m length and decreasing it in 0.10 m intervals down to 0.20 m, measuring the current at each length to work out the resistance.
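The proportionality is easy to check numerically. Below is a minimal Python sketch of R = ρL/A. The resistivity used for constantan (about 4.9 × 10⁻⁷ ohm-metres) is a typical textbook value, and the 0.5 mm wire diameter is an assumed figure for illustration, not a measurement from the procedure described above.

    import math

    def wire_resistance(resistivity, length, diameter):
        """Resistance in ohms of a uniform wire: R = rho * L / A."""
        area = math.pi * (diameter / 2) ** 2  # cross-sectional area A in m^2
        return resistivity * length / area

    RHO_CONSTANTAN = 4.9e-7  # ohm-metres -- typical textbook value (assumption)
    DIAMETER = 0.5e-3        # 0.5 mm diameter -- assumed for illustration

    # The lengths from the procedure above: 1 m down to 0.20 m.
    for length_m in (1.0, 0.8, 0.6, 0.4, 0.2):
        r = wire_resistance(RHO_CONSTANTAN, length_m, DIAMETER)
        print(f"{length_m:.2f} m -> {r:.2f} ohm")

Running this prints about 2.50 ohms for the full metre, falling in step with the length to 0.50 ohms at 0.20 m, which is exactly the straight-line relationship the investigation sets out to confirm.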
Listed below is a short glossary of terms, should you need it:

Code: A method of concealment in which entire words or phrases are substituted for other words or phrases. Example: dog = hideout. This means that most of what people call "codes" are not really codes, but ciphers (see below). Everything in this instructable is a cipher, except for the book code as noted in step 4. However, since everyone is used to hearing the word "code," I will use it interchangeably with "cipher" throughout this instructable.

Cipher: A method of concealment in which individual letters are substituted or transposed (switched around). Example: Agent = tnega (Agent backwards). Pig Latin is also a cipher.

Mono-alphabetic cipher: A method of encryption in which the letters of the alphabet are replaced directly, so that everywhere an S appears in your message, it is substituted with, for instance, M. This is generally a weak type of encryption.

Poly-alphabetic cipher: A method of encryption in which a letter can have more than one meaning, so that the first time an S appears in your message it might be replaced with an E, the next time with a W, then a D, and so on. Every cipher in this instructable is poly-alphabetic.

Key: What is needed to decode a message. It may be a word known only to you and your partner (such as for the Playfair cipher in step 2), the settings of the rotors for the Enigma machine in step 3, or the title of a book for the book code in step 5.

Brute force attack: When someone tries to break a code by trying every single possible combination, one at a time, until they get something that makes sense. The average person can break a mono-alphabetic cipher this way, but anything more complicated will likely need a computer.

Plaintext: Your message before it is encrypted (readable).

Ciphertext: The message after it is encrypted (unreadable).
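To make the mono/poly distinction concrete, here is a minimal Python sketch of a poly-alphabetic cipher. It is a generic Vigenère-style cipher, offered only as an illustration; it is not one of the ciphers covered in the steps of this instructable, and the key "LEMON" and the message are made-up examples. Each letter is shifted by an amount taken from the next letter of the key, so repeated letters in the message encrypt differently.

    def vigenere(text, key, decrypt=False):
        """Shift each letter of `text` by the next letter of `key` (A=0 ... Z=25)."""
        result = []
        key = key.upper()
        ki = 0  # position in the key; only advances on letters
        for ch in text.upper():
            if ch.isalpha():
                shift = ord(key[ki % len(key)]) - ord("A")
                if decrypt:
                    shift = -shift  # undo the shift when decoding
                result.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
                ki += 1
            else:
                result.append(ch)  # leave spaces and punctuation alone
        return "".join(result)

    secret = vigenere("MEET AT THE HIDEOUT", "LEMON")
    print(secret)                                   # XIQH NE XTS UTHQCHE
    print(vigenere(secret, "LEMON", decrypt=True))  # MEET AT THE HIDEOUT

Note how the two E's in "MEET" come out as different ciphertext letters (I and Q): that is what makes the cipher poly-alphabetic, and it is why the letter-frequency counting that breaks a mono-alphabetic cipher gets no easy foothold here.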