This list is not meant to be all-inclusive. Rather, it should be used to stimulate and encourage other ideas and possibilities on the part of the students. There is certainly lots to explore, lots to discover, and lots to investigate in a science fair.
- How animals grow
- Animals and their young
- Animals in our lives
- Endangered animals
- What do roots, stems, and leaves do?
- Growing and caring for flowers
- The food we get from plants
- How are some plants dangerous?
- Properties of matter
- How does matter change?
- Work, force, and energy
- Pushing and pulling
- How is energy used?
- Simple machines and their uses
- Compound machines and their uses
- How does sound move through matter?
- The earth's crust and how it changes
- Rocks and minerals
- Important natural resources
- Water and the water cycle
- Weather and weather predicting
- Clouds and storms
- The moon and sun
- Planets in the solar system
- Taking care of your body
- Healthy habits
- What are the proper foods to eat?
John Frederick William Herschel

John Herschel was born in Slough, England on March 7, 1792, the only child of renowned scientist and astronomer William Herschel. He received an excellent education, and while studying for his undergraduate degree in mathematics, instituted the Analytical Society of Cambridge in conjunction with George Peacock and Charles Babbage. The group introduced Leibniz notation into English mathematics to supplant the unwieldy Newtonian symbols. When Herschel graduated from Cambridge in 1813, he was at the top of his class.

Herschel began to study in London for a career in law in 1814, but changed his mind within a year. He then returned to Cambridge for a brief stint as a teacher of mathematics, but left in 1816 to assist in the astronomical research of his aging father. The years they spent working together served as the groundwork from which the younger Herschel would build the rest of his career. In 1820, Herschel was one of the founding members of the Royal Astronomical Society, and when his father died in 1822 he carried on his work, making a detailed study of double stars. In collaboration with James South he compiled a catalog of observations that was published in 1824. The work garnered the pair the Gold Medal from the Royal Astronomical Society and the Lalande Prize from the Paris Academy of Sciences. Herschel was also knighted in 1831.

In 1833, Herschel decided to temporarily relocate with his family to Cape Colony, South Africa in order to observe the skies not visible in England. Herschel's research was carried out at a brisk rate, and by the time the family ventured home four years later, he had amassed an amazing amount of data. He had made catalogs of nebulae and double stars, had described the Magellanic clouds, which are only visible in the southern hemisphere, and had made a study of the intensity of solar radiation using an actinometer, a device he invented in 1825.

Upon his return to England, Herschel began analyzing the data he had compiled in Africa, but also experimented in photography, a field in which he had made advances previously. A skilled chemist, Herschel had discovered in 1819 the solvent power of sodium hyposulfite on otherwise insoluble silver salts, which was the prelude to its use as a fixing agent in photography. In 1839, he developed a technique for creating photographs on sensitized paper, independently of William Fox Talbot, but did not attempt to commercialize the process. However, Herschel published several papers on photographic processes and was the first to utilize the terms positive and negative in reference to photography.

During the following decade, Herschel carried on a number of different projects. Expanding upon the work of his father, he researched infrared light, discovering in 1840 the existence of Fraunhofer lines within that spectral region. He also wrote a popular laymen's guide to astronomy, which was published in 1849, and released a volume entitled Results of Astronomical Observations, Made During the Years 1834-38 at the Cape of Good Hope in 1847. Particularly important to the future of science, in 1845 Herschel reported the first observation of the fluorescence of a quinine solution in sunlight. The following is an excerpt of his findings as presented to the Royal Society of London:

"The sulphate of quinine is well known to be of extremely sparing solubility in water. It is however easily and copiously soluble in tartaric acid.
Equal weights of the sulphate and of crystallized tartaric acid, rubbed together with addition of a very little water, dissolve entirely and immediately. It is this solution, largely diluted, which exhibits the optical phenomenon in question. Though perfectly transparent and colorless when held between the eye and the light, or a white object, it yet exhibits in certain aspects, and under certain incidences of the light, an extremely vivid and beautiful celestial blue colour, which, from the circumstances of its occurrence, would seem to originate in those strata which the light first penetrates in entering the liquid, and which, if not strictly superficial, at least exert their peculiar power of analyzing the incident rays and dispersing those which compose the tint in question, only through a very small depth within the medium. To see the colour in question to advantage, all that is requisite is to dissolve the two ingredients above mentioned in equal proportions, in about a hundred times their joint weight of water, and having filtered the solution, pour it into a tall narrow cylindrical glass vessel or test tube, which is to be set upright on a dark colored substance before an open window exposed to strong daylight or sunshine, but with no cross lights, or any strong reflected light from behind. If we look down perpendicularly into the vessel so that the visual ray shall graze the internal surface of the glass through a great part of its depth, the whole of that surface of the liquid on which the first light strikes will appear of a lively blue, ... If the liquid be poured into another vessel, the descending stream gleams internally from all its undulating inequalities, with the same lively yet delicate blue colour, ... thus clearly demonstrating that contact with a denser medium has no share in producing this singular phenomenon. The thinnest film of the liquid seems quite as effective in producing this superficial colour as a considerable thickness. For instance, if in pouring it from one glass into another, ... the end of the funnel be made to touch the internal surface of the vessel well moistened, so as to spread the descending stream over an extensive surface, the intensity of the colour is such that it is almost impossible to avoid supposing that we have a highly colored liquid under our view."

In a footnote to the report, Herschel points out that he was writing from memory, having carried out the experiment more than twenty years before. Nevertheless, his reminiscence was enough to spark further exploration, eventually resulting in the modern understanding of fluorescence. In fact, even today, quinine is one of the most commonly utilized fluorophores for spectroscopy, enjoyed by many for the strange but beautiful fluorescence that was first observed, but left unexplained, by Herschel.

In 1850, Herschel's scientific work was put on hold when he was appointed Master of the Mint. Apparently unhappy in his new line of work, he suffered a nervous breakdown in 1854 and resigned from the position two years later. Herschel returned to his love of astronomy during his remaining years and continued to add to his catalogs of stars. When he died on May 11, 1871, he was appropriately buried in Westminster Abbey amid other distinguished scientists.
Psychosocial nursing diagnosis is the best-known gateway for treating psychological disorders. Psychosocial is the combination of two words, psycho (meaning mental or psychological) and social, which together refer to mental disorders affected by social factors. Several diseases are classified as psychosocial disorders; the most common are eating, developmental, dissociative, cognitive, factitious, and mood disorders, among others. Adjustment and anxiety disorders are also classified as psychosocial disorders. All of these disorders are influenced by social factors as well as mental conditions and can be treated through psychosocial nursing diagnosis.

Social factors responsible for psychosocial disorders: Common social factors that induce or aggravate such illnesses are peer pressure, fear of becoming unfit for society, social and economic conditions, maternal support, relationships, and religious matters. These factors are likely to affect an individual's mental health and can lead to serious conditions such as schizophrenia and drug abuse. The effects of these factors can be addressed by psychosocial nursing diagnosis, and this method is commonly practiced nowadays.

Types of psychosocial disorders: There are many types of psychosocial disorders.
• Eating disorders are becoming common in Western countries. These illnesses include serious diseases like bulimia nervosa and anorexia nervosa. Both disorders are more common in females, particularly teenage girls. Girls suffering from anorexia nervosa fear weight gain and restrict their diet, which causes excessive weight loss and weakness. The disease emerges in early adulthood and, if left untreated for a long time, can cause many nutritional problems. The psychological factor involved in this disease is the course of physical changes a girl goes through at the beginning of puberty; social factors include fear of being unfit for society. Bulimia nervosa is also an eating illness and is characterized by self-induced vomiting after eating. Both diseases can be treated by psychosocial nursing diagnosis.
• Some adjustment disorders are also faced by individuals. Fear of entering new places can be treated by psychosocial therapy.
• Cognitive disorders include illnesses like Alzheimer's disease, Parkinson's disease, and many others that affect reasoning and memory. Dementia is a common problem that occurs due to such disorders, and social factors, such as lack of mental exercise, play an important role in aggravating them. Amnesia is a term used for memory loss. Short-term memory loss usually occurs due to anxiety and lack of concentration: when an individual is preoccupied with fear of imminent events, the brain's memory cells do not function properly and memory is affected.
• Dissociative disorders are those in which memory loss leads to complete unawareness of identity. They are probably the most serious and can be addressed only by psychosocial nursing diagnosis and other therapies.
• Some psychological diseases arise from other medical causes as well, for instance psychosis due to AIDS, disorders arising from epilepsy, and depression due to diabetes. To deal with such illnesses, common practice is to address the root cause first: if depression is caused by diabetes, the diabetes is treated first, and the psychological problem is then dealt with through psychosocial nursing diagnosis.
• Various mood disorders are also identified. These include bipolar disorder and major depressive disorder, which are forms of depression. Depression is caused by stress and anxiety. Anxiety disorders have become a major problem in Western countries; according to recent research, about 18% of adults and teenagers in the United States have at least one anxiety disorder. The causes of such issues are commonly rooted in social lifestyle.

PSYCHOSOCIAL INTERVENTION TECHNIQUES
1. Nursing care evaluation
2. Treatment of psychological aspects
3. Managing mental health questionnaires
4. How to handle the emotional aspects of an interview
5. Knowing the therapies (individual, family …)

Characteristics of the interview:
- It is a direct relationship between two people that directly transmits the feelings of both.
- It uses a symbolic communication channel, preferably but not exclusively oral. We must learn to manage eye contact.
- There is an assignment of roles; to conduct an interview you have to ask permission.

Interviews can be classified according to their purpose and according to their degree of structure: structuring of the questions and answers, structuring of the conduct of the interview, structuring of the recording and processing of information, and structuring of the interpretation of information.

Basic conditions for a successful interview:
- The data we ask for must be accessible to the respondent.
- Knowledge and understanding by the interviewee of their role and of how they are to pass on the information we request.
- Enough motivation for the respondent to assume their role and meet its requirements.

Phases of an interview:
- Clearly define what you want to evaluate.
- Set selection criteria (whom to interview).
- Set guidelines for conducting the interview (presentation, questions, whether to offer help …).
- Decide what type of record to keep and how the information will be developed.
- Gain in-depth knowledge of the subject to be treated; before interviewing anyone, we must know the respondent's details on the issue.
- Make a diagram of action.

Brief introduction: covers the interviewer, the interviewee, the situation, the process that will unfold, and the goals to be achieved. The information we have to give the patient should not be delivered all at once but through dialogue. The respondents must tell us why they attend the center and what they expect from the interview. Once both are known, we level expectations (reducing the interviewee's uncertainty through the structured efforts of the interviewer). Develop an agreement on how the interview will be conducted (keep reserves in case one part fails); the contract increases motivation (ensure confidentiality), and if the patient is named, have them sign a consent form.

Body of the interview:
- Initial phase: open and facilitating.
- Middle phase: specification and clarification.
- Final phase: confrontation and synthesis.

Things with a positive effect on the interview: demonstrating competence and experience; having an open, spontaneous, and expressive style; demonstrating warmth, empathy, and genuineness.
Things with a negative effect on the interview: distant, domineering, controlling, and hostile attitudes.

Make a brief summary of the information obtained; this provides a basis for dialogue and discussion (we use it to shed light on areas that are unclear, incomplete, or unconscious). Guide the conversation toward the future at two levels: asking patients how they see their future, and giving the respondent tasks to perform in the immediate future. We must end the interview at a positive, well-rounded moment (without leaving things outstanding).
Do not cut patients off when they are showing a negative mood, whether through their expression or their utterances.

Self-reports / questionnaires: a set of questions that the patient answers, e.g. the Langner and Zung scales. They are economical in time and personnel, they report the subjective experience of the patient, and they allow us to explore motor, physiological, and cognitive behavior. They are not always objective, since the patient can lie to us. We use them to detect problem behaviors, to evaluate the results of therapy, and to investigate.

Types of self-reports in mental health:
- General: the GHQ, Goldberg's general health questionnaire (28 items).
- Specific: scales that measure a particular problem, e.g. depression with the Zung and Langner scales.

Self-observation and self-recording: paying attention to one's own behavior, linked to self-recording, that is, writing down what we observe in our own behavior. We can use it as an evaluation method; if we want this method to succeed, we need to ensure that the patient identifies the behavior we want to observe, understands the recording method, and recognizes the importance of obtaining reliable data. When we use self-recording as an evaluation method we need to record the frequency, duration, and intensity of the behavior; the records collected cover the behavior itself and its consequences.

Self-observation is indicated to promote self-control. It is important for people who have a personal history of dependence. The characteristics of a dependent person are difficulty making decisions and passivity. It is also indicated for people with rigid behaviors and feelings of helplessness, for people who feel that their actions are merely effects of their environment, and for people with no real skills to change their behavior. Self-observation is useful when we want to change nearly automatic behaviors. We have to be especially careful, and never prescribe self-observation for suicidal people or people with recurrent obsessive thoughts.

1. Behavioral therapy: Basic principles on which behavioral therapy rests: both abnormal and normal behavior are learned and maintained in the same way; the social environment is largely responsible for the learning and maintenance of both normal and abnormal behavior; the primary target of treatment is the problem behavior itself; to address any behavior it must be broken down into very small components; and the approach is scientific, so its results can be replicated.

1. Systematic desensitization (the most common treatment for phobic disorders): develop a hierarchy of feared situations; train the patient in muscle relaxation techniques; then subject the patient to a treatment that consists of associating the item that produces the least anxiety with muscle relaxation until it no longer produces anxiety.

2. Exposure therapy: we subject the patient to the feared, anxiety-provoking situation. In the exposure instructions, the patient needs to know that anxiety follows a curve: it increases, reaches a maximum, and then decreases, and over time the curve becomes flatter. Variants include exposure therapy assisted by the therapist, group exposure therapy, flooding (subjecting the patient to the highest level of phobic anxiety), and exposure therapy performed by the patient at the individual level.

3. Reinforcement techniques: they consist of associating an event with the execution of a behavior we want to change. It is called positive reinforcement when the application of that event increases the behavior. Positive reinforcement is usually considered pleasant but need not be. Reinforcements are used in all therapies; the most widely used positive reinforcement is verbal attention.
In behavioral therapy, positive reinforcement is used deliberately (it is used a great deal to encourage prosocial behavior in schizophrenic patients …).

Negative reinforcement refers to a process by which a behavior increases because an unpleasant event is avoided or removed; the reinforcer is generally unpleasant, but not necessarily so. Example: patients with anorexia nervosa who do not want to eat are warned that a nasogastric tube will be placed if they do not gain weight, so eating increases to avoid it.

A related operant procedure is EXTINCTION: a procedure that involves withdrawing the positive reinforcement of a behavior so that the behavior decreases until it disappears. Example: in an admitted patient, attention and positive reinforcement are controlled so that the unwanted behavior stops appearing. It falls to nurses to teach the family this technique once the patient is discharged.

PUNISHMENT: an aversive stimulus is applied to an undesirable behavior. It is used very little, usually when life is at risk and management of the patient with positive reinforcement has failed. A variety of punishment is aversion therapy, which consists of associating an unpleasant stimulus with the undesirable behavior (alcohol + a nausea-inducing medication). It is based mainly on classical conditioning and is most effective when applied while the unwanted behavior occurs. Typical example: alcoholism.

4. Modeling (or role-playing): in this procedure, the therapist performs a desired behavior so that the patient can imitate it. To apply a modeling technique we have to analyze the behavior we want to modify and break it down into simple elements that the therapist can demonstrate and that the patient will be able to copy.

5. Social skills training: it is used with people who have problems interacting with others, at both the individual and the group level. First, the social skills deficits are analyzed in concrete behavioral terms, and then positive reinforcement or modeling is used.

Behavioral therapy is indicated for:
- Alcoholism (addiction)
- Depressive disorders
- Somatic diseases: risk factors for cardiovascular disease (hypertension, hypercholesterolemia, overweight), prevention or improvement of headaches, sleep disorders, gastrointestinal problems, and bronchial asthma.
You can also use behavioral therapy for medication non-compliance.

2. Cognitive therapy: any "event" we experience is accompanied by a "cognitive appraisal", and based on that appraisal we experience a certain "emotional state". This emotional state leads to a "conceptual bias", which in turn results in a "behavior" in relation to the event (event → appraisal → emotional state → behavior). If the cognitive appraisal is erroneous, the emotional state becomes depressed or anxious, which leads to disoriented behavior.

The most common incorrect cognitive appraisals:
- Selective abstraction: reaching a conclusion by looking at only part of the information.
- Arbitrary inference: reaching a conclusion without sufficient evidence, or despite evidence to the contrary.
- Absolutist thinking: qualifying myself or my experiences as all or nothing.
- Magnification or minimization: overestimating or underestimating the importance of a personal attribute, a life event, or a future possibility.
- Personalization: relating external events to oneself without any real basis for doing so. It is important to differentiate this from paranoia and delusions (the difference is the degree of conviction).
- Catastrophic thinking: always imagining the worst that can happen.

Adaptive and maladaptive schemas: thoughts that help or hinder us in our lives (see handouts).

3. GROUP THERAPY: it is based on relationships, which are crucial to psychological development.
Moreover, these relationships are the foundations of personality and behavior patterns, which are laid down by about age 5 but are modified throughout life. The number of people we can serve is greater than at the individual level, and it is also more effective. It is used for both psychiatric and non-psychiatric patients.

Main therapeutic factors of group therapy:
- Instilling hope: giving people hope that being in the group will help improve the situation they are living through.
- Universality: overcoming the feeling that what is happening happens only to you.
- Disseminating information: an activity that can be used at the group level mainly in two ways: health-education groups, and information provided by the therapist and the other members.
- Altruism: the experience of being useful to other group members.
- Development of socializing techniques (e.g., diabetes groups).
- Catharsis: the release of emotions that occurs when you share your concerns with the group.
- Corrective recapitulation of the primary family group: a change in how you see your family.
- Existential factors: the group is a place to share feelings about the existential concerns of every human being (death, loneliness, freedom …).
- Group cohesion: the attraction of the members to the group. The members of a cohesive group feel accepted and are inclined to support one another and establish meaningful relationships. Group cohesion is the precondition for any change in the group.
- Interpersonal learning: the change that results when a person modifies their behavior after observing the behavior of other members of the group.

Steps of interpersonal learning: when you arrive in a group, manifestations of interpersonal pathology appear. Feedback is given, which enables self-observation, and members begin to share reactions. I examine how I feel when sharing those reactions (catharsis). From this follows an understanding of the opinion I have of myself, and from that a sense of responsibility develops. Then comes an awareness of my power to make the change, and finally the change is made, sustained by strong affection.

Deciding to establish a group: determine the setting (the context in which the group therapy will be developed and the physical space where the meetings will take place) and the group size (6 to 8 people). The space must have certain characteristics: privacy, comfort, a pleasant environment … Decide on the frequency and duration of the group sessions (about 90 minutes minimum, 120 minutes maximum), whether the group will be open or closed, and whether or not to have a co-therapist.

Forming the group: the therapist must decide the objectives (better to have them written and measurable) and select patients who can meet the objectives sought by the group. Patients must be prepared for group therapy: give a rational explanation of the group-therapy process, describe the kind of participation expected of them in the group, establish a contract on attendance at meetings, create expectations about the effects of group therapy, and anticipate some of the problems that will appear during the group's life, such as conflicts with peers, discouragement, frustration …

Construction and maintenance of the group: create the culture of the group (what is going to happen in the group?), and identify and resolve common problems; subgroups and conflicts frequently arise. Use help appropriately.

Types of groups:

Interaction-oriented groups:
- The life of the group is indefinite (until the problem is resolved).
- Attendance is voluntary.
- Patients usually stay between 1 and 2 years.
- Objective: to change the character of the participants.
- Main therapeutic factor: corrective recapitulation of the primary family group.
- Inclusion criteria: high-functioning patients. They must want to change.
- They must be able to tolerate an interpersonal approach.

Acute patient groups:
- The life of the group is indefinite.
- Attendance is generally mandatory.
- Each patient usually stays between 1 or 2 days and a few weeks (depending on the length of their admission).
- Objective: to restore the functions that have been altered.
- Main therapeutic factors:
- Inclusion criteria: the patient must be able to tolerate the group.

Post-hospitalization follow-up groups:
- The life of the group is indefinite and is part of a comprehensive follow-up program for hospitalized patients.
- Attendance is often mandatory.
- There is usually a fixed number of sessions (5-6).
- Main therapeutic factors: guidance of behavior (advice).
- Inclusion criteria: patients requiring follow-up or post-hospital care who are able to tolerate the group setting and attend the required meetings.

Medication-monitoring clinic groups:
- The life of the group is indefinite.
- Attendance is voluntary.
- The patient's stay in the group is indefinite.
- Objectives: support, education, and maintenance of functioning.
- Main therapeutic factors:
- Inclusion criteria: people who are able to tolerate the group setting.

Behavior-therapy-oriented groups:
- The life of the group is limited, usually between 6 and 12 sessions.
- Attendance is voluntary.
- The average patient stay is the duration of the group.
- Objective: behavioral change.
- Main therapeutic factors: behavior modification techniques, universality, and guidance of behavior.
- Inclusion criteria: patients with specific problems amenable to behavioral modification (e.g. eating disorders).

Groups specializing in medical conditions:
- The life of the group is limited (6-12 sessions).
- Attendance is voluntary.
- The average patient stay is the duration of the group.
- Objectives: education, support, and expertise.
- Main therapeutic factors:
- Inclusion criteria: patients with specific medical problems who want further education about their condition (e.g. diabetics).

Groups specializing in life events:
- The life of the group is limited (8-10 sessions).
- Attendance is voluntary and often flexible.
- The average patient stay is the duration of the group.
- Objectives: support, catharsis, and socialization.
- Main therapeutic factors:
- Inclusion criteria: patients who have experienced a significant life event (e.g. grief, rape, divorce …).

4. Family therapy: it is not only a therapy but also a conceptual orientation toward problems. As a consequence, evaluation and treatment of individual problems must be performed in the context of the family unit. This approach, besides giving importance to the family, gives importance to cultural and socio-economic factors. The purpose of this therapy is to modify dysfunctional family patterns. In the theories behind family therapy, the family is considered an open system that works in relation to a socio-cultural context and evolves over the life cycle.

Principles governing family functioning:
- Context: the individual's problems must be understood in a social context.
- Interaction: the connections between biopsychosocial factors and transactional patterns are essential to understanding mental health problems.
- Fit: dichotomies such as function/dysfunction and normality/pathology should be considered in relation to the fit between the individual, their family, and the context.
- Causality: the causality of problems, both physical and mental, should be understood as recurring, circular, mutually reinforcing processes.
- Non-additivity: the family is more than the sum of its parts.

Structural family therapy involves three distinct processes: joining, assessment, and restructuring.
This is done in three distinct phases. In the first phase, the therapist aims to interact and harmonize with the family in order to gain access to the system and thus enough influence to create change. In the second phase, the therapist evaluates the family experientially, having its members enact a current problem facing them and putting them under pressure to test their limits and emotional flexibility; at this point, the therapist makes the diagnosis and a structural map of the immediate family field. In the third phase, the therapist gives guidelines and tasks designed to restructure the family: first, dysfunctional coalitions are blocked; then functional partnerships are promoted and strengthened, and parental subsystems and generational boundaries are reinforced.

Strategic systemic approach: families act in a certain way for two reasons: they think it is the best way to solve the problem, or they are unaware of other ways to solve it. The therapist's function is to stop ineffective approaches. Each family must define what is normal or healthy for it; the therapist's responsibility is limited to initiating the change that will lift the family out of the inefficient pattern it is using. The goal is to solve the problem for which the family sought help. Relabeling and reframing: the formal redefinition of a problem or situation so that it becomes visible from a new perspective. This is useful for changing rigid views, stereotyped responses, and unproductive reproaches. Behavioral tasks are also assigned.

Behavioral therapy approach: when behavioral therapy is applied at the family level, the most important elements are family rules and communication processes. The goals and the problems to be treated are defined as specific observable behaviors. What the therapist has to do is guide the family in learning more effective ways of relating; the work focuses mainly on communication issues and problem solving.

A great deal of research is being done on the causes, symptoms, and treatments of the psychosocial disorders that frequently emerge. So far, the best treatment remains psychosocial nursing diagnosis and therapy.
NASA’s Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) now has a comet discovery. Officially named “C/2014 C3 (NEOWISE)”, the first comet discovery of the renewed mission came on Feb. 14, when the comet was about 143 million miles (230 million kilometers) from Earth. The odd thing about this comet is that it is in a retrograde orbit. Amy Mainzer, the mission’s principal investigator at NASA’s Jet Propulsion Laboratory in Pasadena, Calif., said: “This comet is a weirdo – it is in a retrograde orbit, meaning that it orbits the sun in the opposite sense from Earth and the other planets.” Check out the story at the WISE website.
African Americans in the Revolutionary War

In the American Revolution, gaining freedom was the strongest motive for black slaves who joined the Patriot or British armies. Free blacks may have been drafted or may have enlisted of their own volition. Nash says that they enlisted more often than did whites. Additional motives for those who joined the rebel American forces could have been a desire for adventure, belief in the goals of the Revolution, or the possibility of receiving a bounty. Bounties were both monetary payments and the chance to be given freedom; they were promised to those who joined either side of the war. Free blacks in the North and South fought on both sides of the Revolution; slaves were recruited to weaken those masters who supported the opposing cause. Nash reports that most blacks fought on the Patriot side; recent research concludes there were about 9,000 black Patriot soldiers, counting the Continental Army and Navy, state militia units, privateers, wagoneers in the Army, servants to officers, and spies. Ray Raphael notes that while thousands joined the Loyalists, many more, free and slave, sided with the Patriots.

- 1 African American Patriots
- 2 African American sailors
- 3 Patriot resistance to using African Americans
- 4 African American Loyalists in British military service
- 5 Dunmore's proclamation
- 6 Patriot military response to Dunmore's proclamation
- 7 Black Regiment of Rhode Island
- 8 Fate of Black Loyalists
- 9 In popular culture
- 10 See also
- 11 Footnotes
- 12 Bibliography

African American Patriots

Prior to the revolution, many free African Americans supported the anti-British cause, most famously Crispus Attucks, believed to be the first person killed at the Boston Massacre. At the time of the American Revolution, some blacks had already been enlisted as Minutemen. Both free and enslaved Africans had served in local militias, especially in the North, defending their villages against attacks by Native Americans. In March 1775, the Continental Congress assigned units of the Massachusetts militia as Minutemen. They were under orders to become activated if the British troops in Boston took the offensive. Peter Salem, who had been freed by his owner to join the Framingham militia, was one of the blacks in the militia. He served for seven years.

In the Revolutionary War, slave owners often let their slaves enlist in the war with promises of freedom, but many were put back into slavery after the conclusion of the war. In April 1775, at Lexington and Concord, blacks responded to the call and fought with Patriot forces. Prince Estabrook was wounded some time during the fighting on 19 April, probably at Lexington. The Battle of Bunker Hill also had African-American soldiers fighting along with white Patriots, such as Peter Salem, Salem Poor, Barzillai Lew, Blaney Grusha, Titus Coburn, Alexander Ames, Cato Howe, and Seymour Burr. Many African Americans, both enslaved and free, wanted to join with the Patriots. They believed that they would achieve freedom or expand their civil rights. In addition to the role of soldier, blacks also served as guides, messengers, and spies.

American states had to meet quotas of troops for the new Continental Army, and New England regiments recruited black slaves by promising freedom to those who served in the Continental Army. During the course of the war, about one fifth of the northern army was black.
At the Siege of Yorktown in 1781, Baron Closen, a German officer in the French Royal Deux-Ponts Regiment, estimated the American army to be about one-quarter black.

African American sailors

Because of manpower shortages at sea, both the Continental Navy and the Royal Navy signed African Americans into their navies. Even southern colonies, which worried about putting guns into the hands of slaves for the army, had no qualms about using blacks to pilot vessels and to handle the ammunition on ships. Some African Americans had been captured from the Royal Navy and were used by the Patriots on their vessels.

Patriot resistance to using African Americans

Revolutionary leaders began to be fearful of using blacks in the armed forces. They were afraid that slaves who were armed would rise against them. Slave owners became concerned that military service would eventually free their people. In May 1775, the Massachusetts Committee of Safety stopped the enlistment of slaves in the armies of the colony. The action was adopted by the Continental Congress when it took over the Patriot army. Horatio Gates in July 1775 issued an order to recruiters, ordering them not to enroll "any deserter from the Ministerial army, nor any stroller, negro or vagabond. . ." in the Continental Army. Most blacks were integrated into existing military units, but some segregated units were formed.

African American Loyalists in British military service

The British regular army had some fears that, if armed, blacks would start slave rebellions. Trying to placate southern planters, the British used African Americans as laborers, skilled workers, foragers, and spies. Except for those blacks who joined Lord Dunmore's Ethiopian Regiment, only a few blacks, such as Seymour Burr, served in the British army while the fighting was concentrated in the North. It was not until the final months of the war, when manpower was low, that Loyalists used blacks to fight for Britain in the South. In Savannah, Augusta, and Charleston, when threatened by Patriot forces, the British filled gaps in their troops with African Americans. In October 1779, about 200 Black Loyalist soldiers assisted the British in successfully defending Savannah against a joint French and rebel American attack.

Dunmore's proclamation

Lord Dunmore, the royal governor of Virginia, was determined to maintain British rule in the southern colonies. On November 7, 1775, he issued a proclamation: "I do hereby further declare all indented servants, Negroes, or others, (appertaining to Rebels,) free, that are able and willing to bear arms, they joining His Majesty's Troops." By December 1775 the British army had 300 slaves wearing a military uniform. Sewn on the breast of the uniform was the inscription "Liberty to Slaves". These slaves were designated as "Lord Dunmore's Ethiopian Regiment."

Patriot military response to Dunmore's proclamation

Dunmore's Black soldiers aroused fear among some Patriots. The Ethiopian unit was used most frequently in the South, where the African population was oppressed to the breaking point. As a response to the fear that armed blacks might pose, in December 1775 Washington wrote a letter to Colonel Henry Lee III stating that success in the war would come to whatever side could arm the blacks the fastest. Washington issued orders to the recruiters to re-enlist the free blacks who had already served in the army; he worried that some of these soldiers might cross over to the British side. Congress in 1776 agreed with Washington and authorized the re-enlistment of free blacks who had already served.
Patriots in South Carolina and Georgia resisted enlisting slaves as armed soldiers. African Americans from northern units were used to fight in southern battles. In some Southern states, southern black slaves substituted for their masters in Patriot service.

Black Regiment of Rhode Island

In 1778, Rhode Island was having trouble recruiting enough white men to meet the troop quotas set by the Continental Congress. The Rhode Island Assembly decided to pursue a suggestion made by General Varnum and enlist slaves in the 1st Rhode Island Regiment. Varnum had raised the idea in a letter to George Washington, who forwarded the letter to the governor of Rhode Island. On February 14, 1778, the Rhode Island Assembly voted to allow the enlistment of "every able-bodied negro, mulatto, or Indian man slave" that chose to do so, and that "every slave so enlisting shall, upon his passing muster before Colonel Christopher Greene, be immediately discharged from the service of his master or mistress, and be absolutely free...." The owners of slaves who enlisted were to be compensated by the Assembly in an amount equal to the market value of the slave.

A total of 88 slaves enlisted in the regiment over the next four months, joined by some free blacks. The regiment eventually totaled about 225 men; probably fewer than 140 of them were blacks. The 1st Rhode Island Regiment became the only regiment of the Continental Army to have segregated companies of black soldiers. Under Colonel Greene, the regiment fought in the Battle of Rhode Island in August 1778. The regiment played a fairly minor but still-praised role in the battle, suffering three killed, nine wounded, and eleven missing. Like most of the Continental Army, the regiment saw little action over the next few years, since the focus of the war had shifted to the south. In 1781, Greene and several of his black soldiers were killed in a skirmish with Loyalists. Greene's body was mutilated by the Loyalists, apparently as punishment for having led black soldiers against them.

Fate of Black Loyalists

On July 21, 1782, as the final British ship left Savannah, more than 5,000 enslaved African Americans were transported with their Loyalist masters for Jamaica or St. Augustine. About 300 blacks in Savannah did not evacuate, fearing that they would be re-enslaved. They established a colony in the swamps of the Savannah River. By 1786, many were back in bondage.

The British evacuation of Charleston in December 1782 included many Loyalists and more than 5,000 blacks. More than half of these were slaves held by the Loyalists; they were taken by their masters for resettlement in the West Indies, where the Loyalists started or bought plantations. The British also settled freed slaves in Jamaica and other West Indian islands, eventually granting them land. Another 500 slaves were taken to east Florida.

Many of the Loyalist slaves who left rebels to side with the British were promised their freedom. In New York City, which the British occupied, thousands of refugee slaves had crowded into the city to gain freedom. The British created a registry of escaped slaves, called the Book of Negroes. The registry included details of their enslavement, escape, and service to the British. If accepted, the former slave received a certificate entitling transport out of New York. By the time the Book of Negroes was closed, it had the names of 1,336 men, 914 women, and 750 children, who were resettled in Nova Scotia. They were known in Canada as Black Loyalists.
About 200 former slaves were taken to London with British forces as free people. Blacks living in London and Nova Scotia struggled with discrimination and, in Canada, with the more severe climate. Supporters in England organized to establish a colony in West Africa for the resettlement of the Poor Blacks of London, most of whom were former American slaves. Freetown was the first settlement established of what became the colony of Sierra Leone. Black Loyalists in Nova Scotia were also asked if they wanted to relocate. Many chose to go to Africa, and on January 15, 1792, 1,193 blacks left Halifax for West Africa and a new life. Later the African colony was supplemented by Afro-Caribbeans from Jamaica, as well as Africans who were liberated by the British in their intervention in the slave trade after Britain prohibited it in 1807.

The African-American Patriots who served in the Continental Army found that the postwar military held no rewards for them. It was much reduced in size, and state legislatures such as Connecticut and Massachusetts, in 1784 and 1785 respectively, banned all blacks, free or slave, from military service. Southern states also banned all slaves from their militias. North Carolina was among the states that allowed free people of color to serve in their militias and bear arms. In 1792, the United States Congress formally excluded African Americans from military service, allowing only "free able-bodied white male citizens" to serve.

At the time of the ratification of the Constitution in 1789, free black men could vote in five of the thirteen states, including North Carolina. That made them citizens not only of their states but of the United States. Many slaves who fought gained freedom, but others did not. Some owners reneged on their promise to free them after their service in the military. Some African-American descendants of Revolutionary War veterans have documented their lineage. Professor Henry Louis Gates and Judge Lawrence W. Pierce, for example, have joined the Sons of the American Revolution based on documenting male lines of ancestors who served.

In the first two decades following the Revolution, numerous slaves were freed. In the US as a whole, by 1810 the number of free blacks reached 186,446, or 13.5 percent of all blacks. Northern states abolished slavery by law or in their new constitutions. By 1810, 75 percent of all African Americans in the North were free. By 1840, virtually all African Americans in the North were free. However, in the Upper South especially, numerous slaveholders were also inspired by the Revolution to free their slaves, and Methodist, Baptist, and Quaker preachers also urged manumission. The proportion of free blacks in the Upper South increased markedly, from less than 1 percent of all blacks to more than 10 percent, even as the number of slaves was increasing overall. More than half of the free blacks in the United States were concentrated in the Upper South. In Delaware, nearly 75 percent of blacks were free by 1810. After that period, few were freed, as the development of plantations growing short-staple cotton in the Deep South drove up the internal demand for slaves in the domestic slave trade.

In popular culture

The 2000 film The Patriot features an African American character named Occam (played by Jay Arlen Jones), a slave who fights in the war in place of his master. After serving a year in the Continental Army, he becomes a free man but still serves with the militia until the end of the war.
In 2010, conservative talk-show host and columnist Glenn Beck held a series of "Founders' Fridays" shows, one of which was about African Americans. It was the highest-rated prime-time cable news show among viewers aged 25 to 54.

See also
- National Liberty Memorial - proposed memorial to commemorate African Americans who fought in the Revolutionary War
- The Colored Patriots of the American Revolution (history book)

Footnotes
- Gary B. Nash, "The African Americans' Revolution," in Oxford Handbook of the American Revolution (2012), edited by Edward G. Gray and Jane Kamensky, pp. 250-70
- Nash, "The African Americans' Revolution," p. 254
- Ray Raphael, A People's History of the American Revolution (2001), p. 281
- Thomas H. O'Connor, The Hub: Boston Past and Present (Boston: Northeastern University Press, 2001), p. 56. ISBN 1555535445.
- "Fighting... Maybe for Freedom, but probably not?"
- "SALEM, April 25". Essex Gazette. Essex, Massachusetts. 25 April 1775. Retrieved 19 April 2015.
- Foner, 43.
- Liberty! The American Revolution (documentary), Episode II: Blows Must Decide: 1774–1776. ©1997 Twin Cities Public Television, Inc. ISBN 1-4157-0217-9
- "The Revolution's Black Soldiers" by Robert A. Selig, Ph.D., American Revolution website, 2013-2014
- Foner, 70.
- "Continental Army". United States History. Retrieved 7 August 2016.
- Lanning, 145.
- Lanning, 148.
- White, Deborah; Bay, Mia; Martin, Waldo (2013). Freedom on My Mind. Boston: Bedford/St. Martin's. p. 129.
- Nell, William C. (1855). "IV, Rhode Island". The Colored Patriots of the American Revolution. Robert F. Wallcut.
- Foner, 205.
- Foner, 75–76.
- Lanning, 76–77.
- Lanning, 79.
- Lanning, 161–162.
- Lanning, 181.
- Abraham Lincoln's Speech on the Dred Scott Decision, June 26, 1857. Archived September 8, 2002, at the Wayback Machine.
- Peter Kolchin (1993), American Slavery, p. 81.
- Peter Kolchin (1993), American Slavery, pp. 77–78, 81.
- Kolchin (1993), American Slavery, p. 78.
- Kolchin (1993), American Slavery, p. 87.
- "'Glenn Beck': Founders' Friday: African-American Founders". Fox News. 2010-03-28.
- "Glenn Beck's African American Founders Special #1 On Cable News Friday". Mediaite. 2010-06-02.

Bibliography
- Blanck, Emily. "Seventeen eighty-three: the turning point in the law of slavery and freedom in Massachusetts." New England Quarterly (2002): 24-51. In JSTOR.
- Carretta, Vincent. Phillis Wheatley: Biography of a Genius in Bondage (University of Georgia Press, 2011).
- Foner, Philip. Blacks in the American Revolution. Westport, Conn.: Greenwood Press, 1976. ISBN 0837189462.
- Frey, Sylvia R. Water from the Rock: Black Resistance in a Revolutionary Age (1992).
- Gilbert, Alan. Black Patriots and Loyalists: Fighting for Emancipation in the War for Independence (University of Chicago Press, 2012).
- Lanning, Michael. African Americans in the Revolutionary War. New York: Kensington Publishing, 2000. ISBN 0806527161.
- Nash, Gary B. "The African Americans' Revolution," in Oxford Handbook of the American Revolution (2012), edited by Edward G. Gray and Jane Kamensky, pp. 250–70.
- Piecuch, Jim. Three Peoples, One King: Loyalists, Indians, and Slaves in the Revolutionary South, 1775-1782 (Univ. of South Carolina Press, 2008).
- Quarles, Benjamin. The Negro in the American Revolution. Chapel Hill: University of North Carolina Press, 1961. ISBN 0807846031.
- Whitfield, Harvey Amani. "Black Loyalists and Black Slaves in Maritime Canada." History Compass 5.6 (2007): 1980-1997.
- Wood, Gordon. The American Revolution: A History. New York: Modern Library, 2002. ISBN 0679640576.
- Kearse, Gregory S. "The Bucks of America & Prince Hall Freemasonry." Prince Hall Masonic Digest Newspaper, Washington, D.C. (2012): 8.
- Nell, William C. The Colored Patriots of the American Revolution (1855). Full text.
1. Introduction

The widely known Brownian motion was first introduced in 1828 by a Scottish botanist named Robert Brown. Brown used this concept to describe the irregular patterns of movement of pollen grains suspended in liquid. In 1900, Louis Bachelier considered Brownian motion as a possible solution to modeling stock market prices. In 1923, Norbert Wiener was the first to give Brownian motion a rigorous definition, reconstructing the model; this use of Brownian theory is often referred to as the Wiener process. Finally, after a series of efforts by a number of scientists, Black and Scholes (1973) introduced the famous Black-Scholes option pricing formula. Later that year, Robert Merton published a follow-up paper that integrated the no-arbitrage condition, generalizing in this way the Black-Scholes formula.

After Wiener's work, Brownian motion was considered, until very recently, the most appropriate process for describing asset returns when operating within a continuous-time framework. Nevertheless, a large number of relatively recent studies conclude that Brownian motion may not be the most appropriate process. Three principal observations in real-life data support this view. First, the volatility of returns is not constant; it changes stochastically over time. Secondly, asset prices in real data are not continuous, but instead demonstrate jumps, which in turn leads to the non-normality of returns. Finally, returns and their volatilities are not actually independent; rather, they often show signs of correlation, and sometimes the correlation may even be negative. In particular, Fama (1963) stated that returns are actually more leptokurtic than the normal distribution implies, especially when the holding period is small. In addition, option prices demonstrate the famous volatility smile and are actually higher than what the Black-Scholes formula predicts.

It is also important to note that the Black-Scholes formula assumes normality of log-increments. However, empirical evidence has shown that this is actually false, leading researchers to initiate studies of other classes of processes. One of the most famous such families is the Lévy processes. Their greatest advantage is that they allow the flexibility of not requiring returns to follow a normal distribution. Finally, the growth of the market has led to demand for new and more complex financial derivatives for which the Black-Scholes model cannot be used for pricing. As an example, exotic options illustrate this difficulty in pricing very clearly, since most of the time it is impossible to derive a closed-form solution for pricing them. Nonetheless, there are other factors that amplify the complication of pricing an option, as stated in Anyaoku (2005). One such factor is early exercise, seen in American options or credit derivatives constructed from a mixture of correlated underlying assets. Path-dependent derivatives such as Lookback or Asian options should also be included, since their payoff depends on past values of the asset. These kinds of options have actually increased in the Over-The-Counter (OTC) market, and in order to be traded and hedged, they first need to be priced.
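For reference, the Black-Scholes benchmark criticized above can be written in a few lines of code. The sketch below is a minimal illustration of the closed-form call price, assuming a constant volatility sigma, a constant risk-free rate r, and no dividends; the parameter values at the bottom are arbitrary examples, not data used in this study.

```python
# Minimal Black-Scholes European call price (constant volatility and rate assumed).
from math import log, sqrt, exp
from scipy.stats import norm

def black_scholes_call(S0, K, T, r, sigma):
    """Call price under geometric Brownian motion (Black-Scholes 1973)."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Example with purely illustrative parameters.
print(black_scholes_call(S0=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2))
```

A constant sigma is exactly the assumption that the volatility smile and the leptokurtic returns discussed above call into question.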
The aim of this paper is to exhibit the use of Lévy processes. In particular, we will examine the use of a Variance-Gamma (VG) process (Madan et al. (1998)) in overcoming all the issues of Brownian motion mentioned above when pricing options, in particular American and Asian options. It will turn out that the empirically examined properties of real-world data fit Lévy processes a lot better than the Black-Scholes model. Jump-diffusion processes and Lévy models have been widely related to option pricing, since they take into account implied volatility smiles and generally capture all the flaws of the Black-Scholes pricing model mentioned above. The pricing will be carried out using the Monte Carlo simulation method, since it has so far proved an excellent tool for solving problems of higher dimension.

2. Literature Review

2.1. Construction of Lévy processes

There is an enormous number of studies illustrating different option pricing models, as well as different processes describing asset prices, that were produced as alternatives to the flawed Black-Scholes model. The Lévy process was one of the most widespread alternatives, and researchers are still studying its different versions. The first studies of Lévy processes go back to the late 1920s, but their final structure was gradually discovered by a number of researchers such as De Finetti, Kolmogorov, Lévy, Khintchine, and Itô. Broadly speaking, a Lévy process is a stochastic process in continuous time with independent and stationary increments, the continuous-time analogue of the independent and identically distributed increments of discrete-time models. The two most famous Lévy processes are the Brownian motion with drift, used in the Black-Scholes model (1973), and the compound Poisson process, which underlies Merton's (1976) jump diffusion model.

Carr and Wu (2004) mentioned in their paper on time-changed Lévy processes that a pure jump Lévy process generates non-normal innovations, and in order to capture stochastic volatility they applied a stochastic time change to the process. In addition, they mentioned that in order for the correlation between returns and their volatilities to appear, they had to allow innovations in the process to be correlated with innovations in the random clock on which it is run. When the latter correlation becomes negative, it means that when the Lévy process falls the clock tends to run faster. This is what captures the leverage effect described in Black (1976).

There is a huge variety of different types of Lévy processes that are used in pricing options, and researchers hold differing opinions about which one is actually most suitable. This variety of opinions was clearly illustrated by Carr and Wu (2004). One strand consists of those who believe that compound Poisson processes are appropriate for fitting jumps, as first mentioned by Merton (1976). Similar to this was Heston's (1993) idea of using a mean-reverting square-root process when modeling stochastic volatility. Among those who undertook the same research path were Andersen et al. (2002), Bates (2000), and Pan (2002). The second strand considers general jump structures which allow an infinite number of jumps to arise within a finite time interval. One of the most commonly used models in this strand is the inverse Gaussian model, introduced by Barndorff-Nielsen (1998), as well as the Variance-Gamma (VG) model, introduced by Madan et al. (1998). This strand basically supports the use of infinite-activity processes when modeling returns, which leads to the recognition of stochastic volatility.
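To make the VG construction concrete, the sketch below simulates VG increments as a Brownian motion with drift theta and volatility sigma evaluated at a gamma time change with variance rate nu, which is the representation used in Madan et al. (1998). It is only a minimal illustration: the parameter values are placeholders rather than calibrated values, and the step grid is an assumption of this sketch.

```python
# Sketch: simulate paths of a Variance-Gamma process X(t) = theta*G(t) + sigma*W(G(t)),
# where G is a gamma subordinator with unit mean rate and variance rate nu.
import numpy as np

def simulate_vg_paths(T, n_steps, n_paths, sigma, nu, theta, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Gamma time increments with mean dt and variance nu*dt => shape dt/nu, scale nu.
    dG = rng.gamma(shape=dt / nu, scale=nu, size=(n_paths, n_steps))
    Z = rng.standard_normal((n_paths, n_steps))
    dX = theta * dG + sigma * np.sqrt(dG) * Z          # conditionally Gaussian increments
    X = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dX, axis=1)], axis=1)
    return X  # shape (n_paths, n_steps + 1), X[:, 0] = 0

# Illustrative parameters only (not calibrated to any data set).
paths = simulate_vg_paths(T=1.0, n_steps=252, n_paths=5, sigma=0.2, nu=0.2, theta=-0.14)
print(paths[:, -1])
```

Because the gamma clock makes many small jumps in every interval, the simulated increments are skewed and leptokurtic rather than Gaussian, which is exactly the infinite-activity behavior this strand advocates.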
This strand essentially supports the use of infinite-activity processes when modeling returns, which leads to the recognition of stochastic volatility. The best elements of the two strands mentioned above are suitably combined in time-changed Lévy processes, creating an even better fit. In particular, the time-changed framework relaxes the affine requirement and allows greater generality for the jump structure. In addition, it allows volatility changes to be correlated arbitrarily with asset returns. This makes it possible to obtain both the leverage effect missing from the Black-Scholes model and the high jump activity, as seen in Carr and Wu (2004).
2.2. Recent work on pricing using Lévy Processes
Option pricing is an issue that has concerned researchers for decades, and it will likely continue to be an issue for years to come. The methodology used for studying pricing behavior varies widely, depending on the type of option and process being dealt with. Most of the time, pricing an option requires numerical integration or solving a partial differential equation (PDE). However, this is not always feasible. When the dimension of the problem is large, both numerical integration and solving PDEs become hard to implement, since the formulas become intractable and a large amount of accuracy is lost (Jia (2009)). This is where estimation methods are introduced, in order to offer more accurate results. Several groups of numerical methods, implemented in recent studies, exist for analyzing option pricing. The most commonly used are Monte Carlo simulation and Fast Fourier Transforms (FFT). Monte Carlo simulation, in particular, can be applied to evaluating options that involve multiple sources of uncertainty or complicated features. A number of other analytical approximations are also used, as well as formulations based on integral equations. One approach was that of Madan et al. (1998), who examined a stochastic process in which financial information arrives via jumps. In their study, they applied a high-activity process with infinitely many small jumps, combined with lower-frequency large jumps. Following their work, Carr and Wu (2003) examined the necessity of a diffusion component when using high-activity pure jump processes. However, they did not manage to reach an exact conclusion, since jump processes in the limit often imitate the behavior of a diffusion process. Nonetheless, they found that when pricing short-term at-the-money index options, a diffusion component offers some contribution. Benhamou (2000), in his effort to model the smile effect, implemented both Fourier and Laplace transforms in a semi-parametric method. The only assumption required was that Lévy processes are appropriate for modeling the underlying price processes; no other constraints were imposed on the price process. In this way, he managed to broaden the Black-Scholes model to a variety of Lévy processes, since the Lévy family includes both continuous-time diffusions and jump processes. Another approach was that of Lord et al. (2007). They examined the pricing of early-exercise options by introducing a novel quadrature-based method, called the Convolution method, which relies mainly on Fast Fourier Transforms. Their work was a combination of the recent quadrature pricing methods of Andricopoulos et al. (2003) and O'Sullivan (2005) and the Fourier transformation methods initiated by Carr and Madan (1999), Raible (2000) and Lewis (2001).
The main idea of the procedure was to reformulate the commonly used risk-neutral valuation formula by exploiting the fact that it forms a convolution. The Convolution method is then used for pricing American and Bermudan options. In order to implement this method, they had to impose only one restriction, namely that the conditional characteristic function of the underlying asset is known. However, since the method was applied within an exponential Lévy framework, which includes the exponential affine jump-diffusion models, the constraint of a known conditional characteristic function was fulfilled. In their paper, the flexibility of choosing the asset price process from a range of jump processes was illustrated in numerical examples by examining three different processes, namely the Geometric Brownian Motion (GBM), the VG and the CGMY (Carr et al. (2002)) processes. Finally, a clear example of the use of Monte Carlo simulation in pricing exotic options, making use of general pricing techniques for vanilla options, is given by Schoutens and Symens (2002). Their work deserves close attention, since this report will use the same procedure in pricing American and Asian options. The processes used when pricing the options were the VG process, the Normal Inverse Gaussian (NIG) process and the Meixner process, which were then used to implement a Lévy Stochastic Volatility (Lévy-SV) model. Broadly speaking, in a stochastic volatility model the time of the process becomes stochastic: time runs slowly in periods of low volatility and runs fast in periods of high volatility. As the rate of time change they used the Cox-Ingersoll-Ross (CIR) model, and they calibrated the Lévy-SV model to fit their data set, which consisted of mid-prices of a set of European call options on the S&P 500 index. The calibration produced the risk-neutral parameters for each model, with which they carried out simulations and obtained the option prices for all proposed models. Finally, the technique of control variates was used in order to reduce the standard error of the simulations to a minimum. As a result, they found that although in the Black-Scholes framework prices of exotic options depend strongly on the volatility parameter chosen, which is not directly observable, in the Lévy-SV models the prices are almost equal. This is what led them to the conclusion that pricing exotic options with a Lévy-SV model is more reliable than with the Black-Scholes model.
3. Data Requirements and Methodologies to be used
Following the procedure of Schoutens and Symens (2002) described above, we will price an American option and an exotic option, specifically an Asian fixed strike call option with arithmetic average, using Monte Carlo simulations. Generally, there are two types of research approach: quantitative and qualitative. The main difference between the two is that the first is objective in nature, i.e. it is not influenced by the researcher's judgement, whilst the second is subjective in nature, i.e. it is so influenced. The data we will use to implement our model are FTSE 100 index prices for a 5-year period. The payoffs of the American and the Asian option are given in the following way. American option: the value of the contract is sup over stopping times τ of E[e^{-rτ} Φ(S_τ)], where τ is the stopping time representing the optimal exercise time and Φ(S_τ) is the payoff of the American contract to be priced. Asian Fixed Strike Call Option: let A(0, T) be the average of the asset price over [0, T]. Assume stock prices are recorded every Δt periods from time 0 to time T; then there are n = T/Δt periods over [0, T].
The payoff of the Asian fixed strike call option is given by max(A(0, T) − K, 0), where K is the strike price. Using a VG model, we will calibrate model prices in order to try to match them to market prices by minimizing the squared error of their differences. In particular, for computing the average absolute error we will use the formula AAE = (1/N) Σ_{i=1}^{N} |market price_i − model price_i|, where N is the number of options in our data set. The pricing procedure begins by simulating different paths of our stock price process (in our case the VG process) and then, for each path j, evaluating the payoff function Π_j. From these, we calculate the Monte Carlo estimate of the expected payoff as the sample mean (1/m) Σ_{j=1}^{m} Π_j, where m is the number of simulated paths. The above estimate is then discounted to give the final option price, i.e. P = e^{-rT} (1/m) Σ_{j=1}^{m} Π_j. The standard error of this estimate is also required; it is given by the sample standard deviation of the simulated payoffs divided by √m. Generally, it is observed that in order to decrease the standard error of the estimate, the number of simulations has to be increased. However, one should note that increasing the number of simulations slows down the procedure. We continue by simulating our VG process up to the maturity T. The final step involves rescaling our path according to the path of the stochastic business time of our choice and then inserting it into the formula for the behavior of the stock price. Finally, we should note that the stochastic time we chose to use is the Cox-Ingersoll-Ross (CIR (1985a/b)) model. In this report, our aim is to illustrate a way of pricing American and Asian options that captures all the key features of real-world financial securities, mainly jumps, stochastic volatility and the leverage effect. By using a VG process we obtain a framework that allows jumps, and with the help of the CIR model as the stochastic time we allow stochastic volatility to be introduced. Finally, the leverage effect is captured through the correlation between the VG innovations and the time change. Generally, the Black-Scholes model will always remain a point of reference against which all other models or extensions are measured, and it will continue to be seen as a paradigm of option pricing regardless of the amount of research proving otherwise. However, recent studies have provided clear evidence of the flaws of the Black-Scholes model when applied to a real-world framework. In addition, Lévy processes have been shown to provide one of the most effective fits to actual market data so far. Nevertheless, future research may yet revolutionize the field.
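To make the Monte Carlo procedure described in Section 3 concrete, the following is a minimal C++ sketch of a pricer for an Asian fixed strike call under a plain Variance-Gamma model. It illustrates the mechanics only, not the calibrated Lévy-SV model of the full study: the CIR stochastic clock, the calibration to market prices and the control-variate variance reduction are all omitted, and every parameter value (initial price, strike, rate, VG parameters, number of paths) is a placeholder chosen for the example.

```cpp
// Monte Carlo pricing of an Asian fixed-strike call under a Variance-Gamma model.
// Illustrative sketch only: parameters are placeholders, and the CIR time change
// and market calibration described in the text are omitted.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>

int main() {
    // Assumed (placeholder) contract and model parameters.
    const double S0 = 100.0;    // initial asset price
    const double K = 100.0;     // strike price
    const double r = 0.05;      // risk-free rate
    const double T = 1.0;       // maturity in years
    const double sigma = 0.20;  // VG volatility parameter
    const double theta = -0.14; // VG drift (skew) parameter
    const double nu = 0.20;     // VG variance rate of the gamma clock
    const int nSteps = 252;     // monitoring dates for the arithmetic average
    const int nPaths = 100000;  // number of simulated paths

    const double dt = T / nSteps;
    // Mean (martingale) correction so that E[S_T] = S0 * exp(rT).
    const double omega = std::log(1.0 - theta * nu - 0.5 * sigma * sigma * nu) / nu;

    std::mt19937_64 rng(42);
    std::gamma_distribution<double> gammaInc(dt / nu, nu); // gamma business-time steps
    std::normal_distribution<double> normal(0.0, 1.0);

    double sumPayoff = 0.0, sumPayoffSq = 0.0;
    for (int p = 0; p < nPaths; ++p) {
        double X = 0.0;   // VG process value
        double avg = 0.0; // running arithmetic average of S
        for (int i = 1; i <= nSteps; ++i) {
            const double g = gammaInc(rng);                    // business-time increment
            X += theta * g + sigma * std::sqrt(g) * normal(rng);
            const double S = S0 * std::exp((r + omega) * i * dt + X);
            avg += S / nSteps;
        }
        const double payoff = std::max(avg - K, 0.0);          // Asian fixed-strike call
        sumPayoff += payoff;
        sumPayoffSq += payoff * payoff;
    }

    const double meanPayoff = sumPayoff / nPaths;
    const double price = std::exp(-r * T) * meanPayoff;        // discounted MC estimate
    const double var = sumPayoffSq / nPaths - meanPayoff * meanPayoff;
    const double stdErr = std::exp(-r * T) * std::sqrt(var / nPaths);

    std::cout << "Asian call price estimate: " << price
              << "  (standard error " << stdErr << ")\n";
    return 0;
}
```

Doubling the number of paths should roughly divide the reported standard error by the square root of two, which illustrates the accuracy-versus-speed trade-off noted above; replacing the gamma-time increment with a plain Gaussian increment recovers a standard GBM (Black-Scholes) simulator and provides a convenient sanity check.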
Parts of the nervous system
- The nervous system consists of the brain, spinal cord, spinal nerves and receptors.
- It allows organisms to react to their surroundings and to coordinate their behavior.
Types of neurones
- Motor neurone: Impulse travels away from the cell body.
- Sensory neurone: Impulse travels towards the cell body.
- Relay neurone: Impulse travels first towards and then away from the cell body.
Neurones are elongated to make connections between parts of the body.
Connections between neurones
Neurones do not touch each other; there is a gap between them called a synapse.
Types of receptor
Receptors detect stimuli.
- Light: receptors in the eyes.
- Sound: receptors in the ears.
- Change in position: receptors in the ears.
- Taste: receptors in the tongue.
- Smell: receptors in the nose.
- Touch, pressure, pain and temperature: receptors in the skin.
- Reflex actions speed up the response time by missing out the brain completely.
- The spinal cord acts as the coordinator and passes impulses directly from a sensory neurone to a motor neurone via a relay neurone.
- Reflex actions are automatic and quick.
Humans need to keep their internal environment relatively constant.
- Temperature: Increased by shivering and narrowing skin capillaries; Decreased by sweating and expanding skin capillaries.
- Water content: Gained by drinking; Lost by breathing via the lungs and by sweating; Any excess is lost via the kidneys in urine.
- Ion content: Gained by eating and drinking; Lost via sweating; Excess is lost via the kidneys in urine.
- Blood sugar (glucose) levels: Glucose provides the cells with a constant supply of energy; Gained by eating and drinking.
How are conditions controlled?
- Many processes within the body are coordinated by hormones.
- Hormones are chemical substances.
- Hormones are produced by glands.
- Hormones are transported by the bloodstream.
Hormones and fertility
- Hormones regulate the functions of many organs and cells.
- A woman naturally produces hormones that cause the release of an egg from her ovaries.
- These hormones are produced by the pituitary gland and the ovaries.
Natural control of fertility
- Follicle stimulating hormone (FSH): Produced in the pituitary gland; Causes the ovaries to produce oestrogen and an egg to mature.
- Oestrogen: Produced in the ovaries; Inhibits the production of FSH and causes the production of luteinising hormone (LH).
- LH: From the pituitary gland; Stimulates the release of an egg in the middle of the menstrual cycle.
Artificial control of fertility
FSH and oestrogen can be given to women in order to achieve opposing results.
- Increasing fertility: FSH is given as a fertility drug to women who don't produce enough naturally, to stimulate eggs to mature.
- Reducing fertility: Oestrogen is given as an oral contraceptive to inhibit FSH production. This means that eggs don't mature in the ovary.
Thursday, October 1, 2009 Every grade has been working on this short unit on abstract composition the past two weeks. I don't usually do the same unit across the grade levels, but this one is quick and every grade has LOVED it and been very successful. This is our first art unit this year using recycled materials; for this unit we incorporated them as tools for art-making! 1. Arrange a composition by tracing the bottle caps using markers. Students are encouraged to make choices about what size caps they will use, whether the circles will be separate, touching, overlapping, inside or outside, or even going off the page. They can also choose whether or not they will make their composition symmetrical or asymmetrical, or even draw just a partial circle. 2. Add more color, detail, design and pattern to the composition by coloring with construction paper crayons. Students could color inside, outside, around, and on top of the circles, add other lines, shapes and patterns, etc. It's important to encourage the children to push hard with the crayons so that the colors are more vivid. The only rule is that students did not make anything "recognizable" like a person, place, animal, car, etc. since they were doing an entirely abstract composition. This unit is great for a number of reasons. One, it ties in with our Eco-friendly art theme for the year, and two, it allows for a LOT of creative self-expression and individual problem-solving. I can't even begin to describe the beautiful variety of the work! The students felt such a sense of ownership over their assignment! The artwork I'm showing here was created by students K-5th grade! Every grade was super successful! I modified the unit for grades 3-5 (and some 2nd grade classes) so that the assignment is a little more challenging. We reviewed cool colors (greens, blues and purples) and warm colors (reds, oranges, and yellows.) Then, we discussed how they are opposites of one another and when placed against each other, they have a lot of contrast. In other words, these colors placed next to each other really "POP"! Students were directed to choose the color paper they wanted, and then figure out which color family (warm or cool) that it belongs to. Then, students had to use only colors from the opposite color family. For example, if a student used a blue paper (part of the cool family) then they could only use warm colors on their paper. We are working on artistic concentration (which means staying on task and working quietly) so that we can get more work done in a short amount of time. The past two weeks, we listened to music during work time (to help us focus on the creative vibe), and practiced silent communication skills. Students were nodding their heads to the beat, and really focused well and used their time wisely! This also challenged the students working with warm or cool colors to figure out whether or not each color on their table was warm or cool without help from me or one another. When the students realized they could solve the problem on their own (by looking up the colors on the color wheel) they felt good about it! Totally engaging, fun and creative! What a wonderful week!
TOMAH (Tomas, Tomer, Tomma), PIERRE, Malecite chief; fl. 1775–80 in the Saint John valley (N.B.). During the American revolution the Malecite Indians seemed important to the European conquerors of North America for the last time. The governments of rebellious Massachusetts and loyal Nova Scotia believed these inhabitants of the Saint John valley and their neighbouring tribes held the balance of power north of the Bay of Fundy. Leaders of both colonies remembered earlier struggles with the Indians and French and, anticipating similarly devastating raids, vied with each other for Indian support. The Malecites, however, were reluctant to enter combat. During the preceding century they had watched Massachusetts destroy tribe after tribe. Demoralized by these defeats and economically depressed by the decline of the fur trade, they sought to preserve what remained of their traditional way of life. The diplomatic situation was difficult for they needed to balance between Massachusetts, with its genocidal methods of warfare against Indians, and the British in Nova Scotia, with their growing presence on the Saint John. The disputes that caused the war were of no concern to the tribe, but after years of fighting colonial neglect it desperately needed the provisions that the warring colonies would furnish in return for support. In response to a Massachusetts initiative of May 1775 the Malecites moved to establish closer relations with the Americans. Pierre Tomah and Ambroise Saint-Aubin arrived at the Penobscot truck house (Bangor, Maine) in September and dispatched a letter of support to the rebel government. The chiefs asked that goods be sent them and stated that they had no other place to trade. The Massachusetts government responded, and more than a year of close cooperation followed. Tomah and Saint-Aubin led a Malecite contingent which accompanied Jonathan Eddy*’s attack on Fort Cumberland (near Sackville, N.B.) in the fall of 1776. In December Tomah and others met with George Washington on the banks of the Delaware River. Massachusetts did its best to supply the tribe with provisions. Early in 1777 it even attempted to establish a truck house on the Saint John at Maugerville. The British, however, drove the Americans from the Saint John in July. This evidence that the rebels could not protect the tribe on its ancestral territory caused a rift among the Malecites. Tomah’s group was willing to swear allegiance to Britain to forestall hostilities and to try to accommodate both sides. Most of the tribe, however, fled with Saint-Aubin to Machias (Maine). From this time on, Tomah travelled freely between the British and the Americans, performing occasional services for both. He delivered letters for the American agent John Allan* and in 1778 helped him avoid a split among the Indians at Machias, some of whom, excited by France’s entry into the war, wished to give her their immediate support. He also stopped a threatened assault on James White, the British deputy Indian agent for the area, who was attempting to prevent a Malecite attack on settlements near Fort Howe (Saint John). In September 1778 at a major conference at Menagouèche, near Fort Howe, Tomah signed a treaty with the British and a letter forbidding Allan to interfere with the Indians east of Machias. A year later, however, he was back in Machias, assuring Allan that he had acted out of fear and offering to renounce all connection with the British if Allan would provision the tribe. 
When the Americans could not meet his demands, he led the Malecites eastward to Passamaquoddy Bay. On 31 May or 1 June 1780 he told the American agent that the tribe appreciated his efforts but that poverty and religious zeal required them to meet Michael Francklin, Nova Scotia’s superintendent of Indian affairs, who was waiting on the Saint John with supplies and an Acadian priest (Joseph-Mathurin Bourg). Tomah’s name subsequently disappears from the records but he probably led the Malecites until after the end of the war. In any case, the policies he devised must have guided them since they “lived at the joint expense of the contending parties.” Traditional Canadian and American writers saw Tomah’s activities as evidence of the manipulation of the Malecites by the government. That ethnocentric view did not admit that the Indians were capable of designing and executing a policy to meet their own purposes, and it led to castigation of the Malecites for their “weaknesses of Indian nature” and their failure to rally to the proper cause. Tomah’s ability to protect his people and make the war serve their ends, however, clearly confounds such a low opinion of Indian capabilities.
The National Curriculum for design and technology aims to ensure that all pupils: Develop the creative, technical and practical expertise needed to perform everyday tasks confidently and to participate successfully in an increasingly technological world Build and apply a repertoire of knowledge, understanding and skills in order to design and make high-quality prototypes and products for a wide range of users Critique, evaluate and test their ideas and products and the work of others Understand and apply the principles of nutrition and learn how to cook Key approaches we use to achieve this are: Using a variety of teaching and learning styles in Design and Technology lessons. The principal aim is to develop children’s knowledge, skills and understanding in Design and Technology. Teachers ensure that the children apply their knowledge and understanding when developing ideas, planning and making products, and then evaluating them. We do this through a mixture of whole-class teaching and individual or group activities. Within lessons, we give children the opportunity both to work on their own and to collaborate with others, listening to other children’s ideas and treating these with respect. Children critically evaluate existing products, their own work and that of others. They have the opportunity to use a wide range of materials and resources, including ICT. In all classes, there are children of differing ability. We recognise this fact and provide suitable learning opportunities for all children by matching the challenge of the task to the ability of the child. We achieve this through a range of strategies.
HEPA filters are great at filtering particles such as dust, PM2.5 and PM10. They also do an incredible job of capturing nanoparticles including bacteria, COVID-19, and other viruses. What HEPA filters don't do, though, is destroy the particles. This has led to a lot of marketing hype and confused air breathers. Some companies claim they have superior technology that never re-releases captured particles such as viruses back into the air, or even eliminates particles altogether. The logic seems reasonable. If a HEPA filter is like a sieve, then when you turn it upside down the particles can fall out. Here's a super simple rendition of how we think air filters work, using a sieve and tea leaves.
HEPA Filters are Not Sieves
The truth is, HEPA filters don't work like this. At least, not for tiny particles like viruses and bacteria. These particles are so small that chemical forces stick them to the HEPA filter. These forces are called van der Waals forces. It's the same chemical magic that makes geckos stick to surfaces.
What The Data Says on HEPA Filters Releasing Viruses and Other Particles Back into the Air
Okay, so that's the theory, but where's the data to prove it? Good news. Scientists in Japan have tested this. They shot plutonium particles that were 100-200nm in size at HEPA filters. For reference, 100-200nm is roughly the same size as the COVID-19 virus (60-140nm) and many other viruses. They then measured how many were re-released back into the air over a 20-day period. They did this while blowing air forwards through the filter, and in a reverse direction. The scientists even measured whether 'jolting' the filter would dislodge more particles. They found that when blowing air forwards through the filter, there was a very minimal re-release of particles back into the air. Much lower than the 0.03% of particles that would normally get through a 99.97% filter anyway. Things got interesting when they blew air through a really full (well used) filter in a reverse direction. In this case, more particles were dislodged. This makes sense: clogged or full HEPAs are more likely to release viruses and other particles back into the air, but only when air is blown through the filter in a reverse direction. This is an extreme case, unlikely to occur unless a used filter is re-inserted into an air purifier in the wrong direction. However, the scientists concluded that filters should be handled with care when removing and replacing them.
Are Particles and Viruses Dislodged from HEPA Filters Really a Health Risk?
Data shows that some particles can become dislodged from HEPA filters during use. Despite this, the health risk may be minimal, since:
- The number of particles dislodged in the forward-flow direction is far less than what would get through the HEPA filter anyway.
- If we're considering viruses, they typically die within 3-24hrs on dry surfaces. That means that any viruses that do escape the HEPA filter will most likely be dead by then.
Prototyping with Microcontrollers, Sensors, and Materials (Virtual) The objective of this lab is to utilize electrical components, an Arduino board, and the Arduino IDE (Integrated Development Environment) to control an LED without and with a button, to take temperature readings, and to evaluate the design of a prototype for a product. The Arduino IDE will be used to program the Arduino board. The prototyping will focus on the design of a thermal insulation device, the testing of the device, and the analysis of the resulting data. The designs will be measured by their capacity to slow the rate of heat loss from hot wax placed inside them. The design will be entered in a competition that will be judged by a ratio that uses the cost of the device, its insulating capacity, the final temperature of the melted wax inside the device, and the room temperature. The lowest design ratio wins. For the virtual semester, the competition will not be conducted, but a predetermined thermal insulation device design and its associated data will be given to you to analyze. You will still have to conduct the data analysis for the design that is given to you. Extra credit will be offered in the form of constructing your thermal insulation device with materials at home. Prototyping is the process of designing and building an early model of a product to test a concept or process. Any system or device that will be sold to consumers, government agencies, or businesses will begin as a prototype that typically does not have all of the components or functions that will be used in the product that is eventually brought to market. A prototype can serve as a proof of concept showing that the system or device can be built and will perform correctly. In this lab, a prototype of a thermal insulation device will be built. Its function will be to reduce the rate of heat loss from a container of heated wax placed in the device. The prototype insulating device will use a TMP 36 sensor to measure heat loss and it will be operated by a microcontroller. Electricity is the movement of electrons. Electrons flow through a conductive wire when there is a difference in charge between two points in the wire. This flow of electrons is the electrical current and it is measured in amperes (A). Electrical current flows in the opposite direction of the electrons. The difference in charge is the electrical voltage and is measured in volts (V). Certain materials resist the flow of electrons. This property is electrical resistance and is measured in ohms (Ω). Ohm’s Law (1) describes the relationship of the voltage across a resistor, the current (I) flowing through a resistor, and the resistance (R) of the resistor. Variations in voltage and current are also used in digital signal processing to operate complex systems and devices and simpler devices, such as microcontrollers and household digital instruments. Several basic electrical components are used to build simple circuits. Some of these components are polarized, which means the way they are connected affects their functionality. Breadboards (Figure 1) are small boards that are commonly used for circuit prototyping. They allow the circuit’s components to be connected without making permanent connections. The red and blue strips on the sides of the board (sections A and D) called power rails are connected down the board and are usually used for powering and grounding. 
The non-colored rows between the power and ground strips (sections B and C) are connected across and are usually used for making the connections between components and sensors. Sections B and C are not connected to each other across the bridge in the middle of the board.
DC (Direct Current) Voltage Sources
DC voltage sources are used to power circuits because they have a voltage difference across their terminals. DC voltage sources are usually batteries (e.g. AA, AAA). Arduino boards can be powered by a battery, a USB cable, or an AC adapter. When the Arduino is powered, it can be used as a 5V DC voltage source. They are polarized.
Resistors (Figure 2) are components that reduce the current flowing through a circuit and convert the excess current to thermal energy. Resistors can be used to control the voltage and current of circuits. Resistors are color coded to indicate their resistance (Figure 3). They are not polarized.
Figure 2: Symbol for a Resistor
Push-Buttons and Switches
Push-buttons and switches (Figure 4) are mechanical devices that interrupt or divert current. Basic push-buttons are polarized while basic switches are not.
Diodes and Transistors (BJT/MOSFETs)
A TMP36 sensor (Figure 5) uses the voltage-temperature relationship of diodes to measure temperature. Essentially, the voltage across the diode of the sensor will change proportionally to the temperature. The output voltage can then be converted to a temperature reading. Diodes and transistors (Figure 6) allow current to pass in only one direction. MOSFETs are electrically controlled switches. They can also be used to amplify signals. They are polarized.
Light Emitting Diodes
A light emitting diode (LED) is a small electric light that uses low voltage and current (Figure 7). The orientation of an LED is important since it is a diode and only allows current to flow in one direction. Most LEDs also require a resistor (typically 470 Ω) in series because they will burn out almost instantly when they encounter high current. They are polarized.
Figure 7: Symbol for an LED
A microcontroller is an inexpensive, programmable computer without any peripherals, such as a mouse, keyboard, or screen. Microcontroller boards have direct access to the input and output pins of their processing chips so that the user can directly read from sensors and perform actions. Microcontrollers are used in many electrical appliances, such as microwaves. Arduino boards (Figure 8), which use a microcontroller, were designed to be easily programmed and assembled into larger projects. These boards come in many shapes and sizes, and some contain additional features, such as WiFi or Bluetooth connectivity. Different boards can also differ in features such as processing speed and memory. This lab will use an Arduino UNO board created by SparkFun called a RedBoard (Figure 9). A RedBoard has many of the basic functions of a computer. All Arduino boards have a general layout similar to that shown in Figure 10. Not all the sections and pins will be used in this lab or for the Semester Long Design Projects.
- Reset Button: Restarts the board
- USB Connector: Provides power and connects the board to the computer
- Pin 13 LED: Usable LED without making an LED circuit
- Serial LEDs: Show whether the Arduino is transmitting or receiving data from pins 0, 1, or the USB connection
The power pins are used to supply voltage to other pins, and are also used to ground pins (Figure 11).
- 3.3V: Usually used to power low-voltage sensors
- 5V: Most common power pin, used to power circuits
- GND: Ground pin, which is 0V
- VIN: Voltage-in, can be used to power the board using a battery or other external voltage source
The digital and analog pins are used for input and output commands to the microcontroller and electrical components (Figure 12). They can be used with both analog and digital devices, as the Arduino board converts analog inputs to digital values.
- A0-A5: Identical analog pins that can be used to read sensors or control analog devices. Pins A0-A3 are more stable than A4 and A5
- Pins 0-1: Transmitter and receiver pins. Do not use these pins for this lab
- Pins 2-12: Digital pins that can be switched between HIGH and LOW states
- Pin 13: Connected to the onboard LED; use it only as an output pin
The Arduino IDE is a program that can be used to edit, compile, and upload code to a supported microcontroller. Figure 13 is a screenshot of the program.
- Verify: Checks code for errors and points to the errors after it finishes
- Upload: Verifies code and then uploads it to the Arduino board if there are no errors
- Console: Shows any errors found when the code is verified or uploaded
- Serial Monitor: A tool used to send messages to and receive messages from the board
Programs written in Arduino are called sketches. A basic sketch can be broken up into three different areas: global, setup, and loop. These areas are pictured in Figure 14.
- Global: Constants and imported libraries go here
- Setup: Activate the pins and sensors used. This code only runs once
- Loop: The code that runs continuously for reading sensors and turning pins HIGH or LOW
The Arduino programming language is based on C/C++, but it is designed to be simpler and easier to learn. The most intuitive way to think about programming is like building with LEGO blocks: certain rules must be followed and different building blocks can be used to build bigger parts. Every line must end with a semicolon (;) unless it is a conditional, loop, or function. Comments start with two forward slashes (//). Comments are text that the program ignores and are used to label and explain code. Datatypes are the different kinds of data values that can be used, manipulated, and stored using C++. Table 1 shows the most basic and widely used datatypes.
|Datatype||What It Stores (Examples)||Default Value||Notes|
|boolean||True value (1, HIGH) or false value (0, LOW)||0, FALSE, LOW||-|
|int||Integer number (e.g. -5, 15, 1047)||0||Positive or negative|
|float||Decimal number (e.g. -0.5, 123.77)||0||Positive or negative|
|char||Single character (e.g. ‘c’, ‘A’, ‘5’, ‘?’)||Indeterminate||Enclosed in single quotes|
|String||Sequence of characters (e.g. “Hello World!”, “10”, “157+5”)||Empty (“”)||Enclosed in quotes|
Operators perform operations on variables and constants. The results of these operations are usually stored in a variable. Table 2 displays common operators.
|Operator||What it Does||Notes|
|=||Assigns a value to a variable||-|
|+||Adds two or more values||-|
|-||Subtracts two or more values||-|
|*||Multiplies two or more values||-|
|/||Divides two or more values||-|
|++||Increment by 1||Usually used in loops|
|--||Decrement by 1||Usually used in loops|
|==||Checks if two values are equal||Usually used in conditionals|
|!=||Checks if two values are not equal||Usually used in conditionals|
|> or <||Less than, greater than comparison||Usually used in conditionals|
|<= or >=||Less than or equal to, greater than or equal to comparison||Usually used in conditionals|
|&& or ||||Boolean AND or Boolean OR, used to cascade multiple Boolean operations||Usually used in conditionals|
Constants and Variables
Constants and variables (Figure 15) hold data according to their datatype. They must be given a name so they can be referred to later. Constants hold data that will not change while a program is running. Constants usually contain pin numbers or sensor threshold values. Variables contain data that will change while a program is running. Variables usually contain sensor values and other values that must have mathematical operations done on them. Figure 15 shows how to create different constants and variables.
Conditional statements (Figure 16) run the code enclosed by their curly brackets when a condition is met. Loops (Figure 17) run the code enclosed by their curly brackets a specific number of times or until a condition is met. While loops are used to perform a task until a condition is met. In Figure 17, the while loop runs only if the state of the button is HIGH. When the state of the button becomes LOW, the while loop will immediately stop running. For loops are used when an action must run a specific number of times. Although they seem complicated at first, the structure of most for loops is the same. In Figure 17, the first part of the for loop sets a variable (usually i for index) to a value used to begin a count, the middle part sets the condition that makes the loop stop, and the third part is where the variable is incremented or decremented with each run of the loop. The first time the loop runs, i = 0, which does not meet the loop end condition (i > 10), and the code will run. At the end of the loop code, i is incremented by 1 and becomes i = 1. The second time the loop runs, i = 1, which again does not meet the loop end condition, and the code runs once again. i then becomes i = 2 at the end of the loop. This repeats until i = 10.
Commonly Used Arduino Functions
Table 3 shows commonly used functions in the Arduino IDE that are specifically used to work with the digital and analog pins of the board.
|Function||What it Does|
|pinMode(pin, mode)||Sets a pin as an input or output|
|digitalWrite(pin, value)||Sets a digital output pin to HIGH or LOW|
|digitalRead(pin)||Reads a digital input pin as HIGH or LOW|
|analogWrite(pin, value)||Sets an analog (PWM) output pin to a value 0-255|
|analogRead(pin)||Reads an analog input pin as a value 0-1023|
|delay(milliseconds)||Pauses the program for a certain amount of time|
|Serial.print(value)||Prints the value (variable) to the Serial Monitor|
Tinkercad is a cloud-based, in-browser software that will be used for the simulation of microcontrollers and associated electrical components. Additionally, code can be programmed in-browser in the Arduino programming language and simulated with the virtual circuit. A link to the Tinkercad website can be found here.
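Before walking through the Tinkercad sign-in and simulation steps, it may help to see how the constructs above (constants, pinMode(), digitalRead(), digitalWrite(), a conditional, Serial.print() and delay()) fit together in a complete sketch. The example below lights an LED while a push-button is pressed; it is a generic illustration rather than the official lab solution, and the pin numbers (13 for the LED, 2 for the button) and the assumed wiring (button read as HIGH when pressed) are placeholder choices.

```cpp
// Illustrative sketch: light an LED while a push-button is pressed.
// Pin numbers and wiring (button input read as HIGH when pressed) are
// assumptions for the example, not the official lab circuit.

const int LED_PIN = 13;    // digital pin driving the LED
const int BUTTON_PIN = 2;  // digital pin reading the push-button

void setup() {
  pinMode(LED_PIN, OUTPUT);    // LED is controlled by the program
  pinMode(BUTTON_PIN, INPUT);  // button state is read by the program
  Serial.begin(9600);          // open the Serial Monitor connection
}

void loop() {
  int buttonState = digitalRead(BUTTON_PIN);  // HIGH when pressed (assumed wiring)

  if (buttonState == HIGH) {
    digitalWrite(LED_PIN, HIGH);  // turn the LED on
  } else {
    digitalWrite(LED_PIN, LOW);   // turn the LED off
  }

  Serial.print("Button state: ");
  Serial.println(buttonState);    // prints 1 or 0 to the Serial Monitor
  delay(100);                     // short pause to keep the output readable
}
```

The same read-then-decide-then-write pattern is what the LED-with-button procedure in Part 1 below asks for; only the pin constants and the wiring of the actual (or Tinkercad) circuit need to match.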
You can sign into Tinkercad with your Autodesk account, which you should have created in Lab 1. Once a new circuit has been created, the interface appears like that in Figure 18. Components can be dragged and dropped from the sidebar into the work area. To simulate the program, the Code block can be opened and the code typed into it. To run the simulation with the code and virtual circuit, click Start Simulation (Figure 19). Heat and Heat Transfer Heat is a form of energy. Heat can be beneficial and it can be destructive. For example, the heat created by the combustion of fossil fuels in a combustion engine or the heat generated by the friction between parts of an engine in contact can destroy an engine. A heating, ventilation, and air conditioning (HVAC) system in a home keeps the home warm at cooler temperatures and cool at warmer temperatures. Many systems and devices use components and materials that are designed to remove heat from the system or retain heat in the system. Heat transfer is the process of thermal energy moving from one body to another as a result of a temperature difference. Temperature is the measure of the average kinetic energy of atomic motion. The faster the atoms are moving, the higher the temperature. The main mechanisms of heat transfer are conduction, convection, and radiation. Conduction occurs when there is a temperature difference within a solid body or between solid bodies in contact. During conduction, energy (heat) will flow from the region of higher temperature to the region of lower temperature. Imagine a metal rod that is heated at one end. The atoms of the rod collide at the point where the temperature differs, transferring heat until the temperature of the rod becomes uniform. Convection is the transfer of heat within a fluid medium (fluids consist of gasses and liquids). Convection can occur as natural or forced convection. Forced convection (Figure 20) occurs when the main mechanism for heat transfer is due to an outside force causing the fluid to move. Natural convection results from the natural difference in density between fluids, causing a liquid or gas to rise. Radiation is the process by which energy in the form of electromagnetic radiation is emitted by a heated surface in all directions. It does not require an intervening medium to carry it. The heating of the Earth by the Sun is an example of heat transfer by radiation. Electromagnetic radiation is a means of energy transfer that occurs when an atom absorbs energy. This electromagnetic wave can propagate as heat, light, ultraviolet, or other electromagnetic waves depending on the type of atom and the amount of energy absorbed. Color is a property of light. When an object appears white, it virtually reflects all the electromagnetic waves coming to it while a black object absorbs the waves. Color (reflectivity) should be considered when choosing materials for thermal insulation. The container that will be built in the lab is a thermodynamic system (Figure 21), which is a part of the Universe separated from the surroundings by an imaginary boundary. There are three types of systems: open systems, closed systems, and isolated systems. Open systems allow the transfer of mass and heat. For example, an open pot of boiling water is an open system. It exchanges heat and water vapor with the air around it. If collected, the water vapor can be condensed back to liquid water that has some mass. Closed systems only allow heat to be transferred to the surroundings. 
A hermetically sealed bottle of soda is a closed system. If placed in a hot environment, it absorbs energy in the form of heat, but the amount of liquid within does not change. Isolated systems do not interact with the surroundings at all. No exchange of heat or mass is possible. An ideal Thermos® is an isolated system. If hot chocolate is poured within, the same amount at the same temperature will be poured out later. No such system can be fabricated, but they can be approximated. Understanding how to minimize heat loss is the key to designing a successful insulating container. The first consideration is the materials chosen. Using materials that are poor conductors of heat, such as glass, will minimize heat loss. Plastic is also a poor conductor of heat. Foam cups are made of plastic that has tiny air bubbles suspended in it. Air is among the poorest conductors of heat. A vacuum, or the complete lack of air, is the best insulator of all. This is the principle employed in the Thermos® design. The other important consideration in creating the container is its cost. Minimal design uses the fewest resources while maintaining the safety and efficacy of a product. Materials and Equipment - Computer with internet access - Arduino IDE - The following are the materials that can be used in Tinkercad: - Arduino UNO microcontroller - Tinkercad code block with Arduino IDE - 220 Ω - 10 kΩ - TMP 36 Sensor IN-PERSON STUDENTS will be following the standard procedure for this lab, and should not be following the instructions on this page. 1. LED with Button Circuit - Copy the Part 1 template to the Tinkercad workspace using the steps in Starting a New Circuit in Tinkercad. - Go to tinkercad.com and sign in with an Autodesk account. - Open the Tinkercad link for the part of the lab you want to work on. The links are provided in the procedure below. - Select the Copy & Tinker option. This will copy the template to the workspace so it can be edited. - A circuit using an LED and a button will be made in Part 1. The LED should be on when the button is pressed, and off when the button is not pressed. The programming flowchart and circuit diagram are shown in Figures 22 and 23. - Before breadboarding the circuit, look at the button to determine which pins are connected. With an actual button, this is done by checking the bottom side (pin side) of the button (Figure 24). Note which pins are for power, ground, and digital input into the Arduino. In Tinkercad, the button is depicted with the pins extending from the sides, which shows the direction of the pin orientation. - Place the button on the breadboard and make sure it straddles the bridge down the center of the breadboard (Figure 24). - Wire the circuit diagram as shown in Figure 25. - Write the Arduino program to implement the flowchart. Code can be added by clicking the “Code” button at the top of the workspace. Include comments explaining what the code is doing. The code must have comments to be approved by a TA - In the Global Area, create the constants used to hold the pin numbers for the button and the LED. - In the Setup Area, use pinMode() to set up the LED and Button. The LED will be set to OUTPUT since it will be controlled by the program. The button will be set to INPUT since the program will receive data from the button. 
- In the Loop Area, use digitalRead(), digitalWrite(), and a conditional to write the code that controls the button - Create a variable named buttonState, that will hold the input from the button - Use digitalRead() to set the buttonState variable to the current state of the button - Next, set up the conditional statement that will be used to control the LED. In this code, the conditional is controlled by the button state. - Last, use digitalWrite() to control the LED. Remember, we want the LED to be on when the button is pressed, and off otherwise - This line will turn the LED on - This line will turn the LED off - Click Start Simulation to check your code. Fix any errors and ensure the code is working correctly before moving on to the next step. - Screenshot the final code, circuit, and results (the LED state when the button is pressed and is not pressed). 2. Prototyping a Thermal Insulation Device The following rules must be observed at all times during the competition. Violation of any of these rules will result in disqualification. - The placement of the thermistor with respect to the lid may not be changed - The thermal insulating device may not be held while testing - External heat sources are prohibited - The jar of beeswax must be inside the container within 30 seconds of it being received - The jar of beeswax cannot be returned (no restarts) - At least one material must be used in the design of the insulating device - The competition will be judged by the minimal design ratio (MDR) (2) of the design. The design with the lowest MDR wins. - In (2), IC is the insulating capacity, Cost is the cost of the container, TR is the temperature of the room, and TF is the final temperature read by the thermistor. Ask a TA for the room temperature - The insulating container will be built using the materials in Table 4. Select the materials carefully. Consider their cost and their use as an insulator. Review the competition ratio before purchasing materials. |Material||Unit||Cost Per Unit| |Large foam cup||1||$0.50| |Pack of clay||1 bag||$0.20| |Wool fabric||2 pieces||$0.10| |Cotton balls||3 balls||$0.05| |Aluminum foil||1 ft2||$0.30| |Plastic wrap||1 ft2||$0.02| Note that in a non virtual lab, a thermistor is used instead of a TMP 36 sensor. A thermistor also converts an analog reading to a temperature, similar to the TMP 36 sensor. A TMP 36 sensor will now be built in the following section. 3. Competition Procedure Using a Temperature Sensor Using the Arduino, the temperature change of the hot wax will be recorded over 10 minutes. The Arduino will read the temperature through the TMP 36 sensor and print the values into the Serial Monitor in the Arduino IDE. Because this lab is performed virtually, you will create the TMP 36 circuit virtually in Tinkercad, but will not be able to measure a thermal insulation device. You will complete the circuit for collecting data and test it as if you were to collect the data for 15 minutes in-person and then receive the Serial Monitor readings from a Lab TA. - Copy the Part 3 template to the Tinkercad workspace using the steps in Starting a New Circuit in Tinkercad. - Wire the circuit according to the configuration in Figure 29. Make sure that pin A0 is used, and the circuit is powered to 5V and grounded. - There are missing components in the code. Read through the comments and insert code where necessary. After the program is completed, click Start Simulation and open the Serial Monitor to check if the readings are correct. 
- Clicking the TMP sensor shows a slider for the temperature of the “surroundings” in degrees Celsius.
- The Serial Monitor reading is in Fahrenheit. To compare it with the slider, comment out the line for converting Celsius to Fahrenheit (type // before the line that has the conversion) and check that the Serial Monitor reading matches the number on the slider (please be aware that a small error of ± 1°C is expected).
- The TMP 36 sensor should take a temperature reading every 5 seconds. The analog voltage value read through pin A0 should then be converted to degrees Fahrenheit. The temperature reading should be printed to the Serial Monitor along with the seconds timestamp of the reading. A sketch illustrating this logic is given at the end of this manual.
Insulating Device Design
- Analyze the materials and consider the design options, keeping in mind the lab's specifications. Make preliminary sketches during this process.
- Sketch the design. Label the drawings clearly. Prepare a price list and write down the total cost for the insulating container based on the materials chosen. Receive approval of the design sketch from a TA.
- The TA will build the insulating device based on the completed sketch. They will send the data received from testing the device, which will then be used in Part 4.
4. Data Analysis
- The data pasted from the Serial Monitor into Excel is not correctly separated into columns. To present the data in the correct format, highlight all the data, go to the Data tab, and select “Text to Columns”.
- In the Text-to-Columns wizard, select Delimited as the data type and click Next. Then select “Comma” as the delimiter and hit “Finish”. Ensure that the data has been separated into two separate columns.
- Create a graph of the data using the X, Y Scatter template. First, select the data starting from where the temperature readings stop increasing (example below). Next, go to the Insert tab and click on the first Scatter option under the Charts group. This should create a new scatter plot, as seen below.
- Next, add axis labels and a trendline. First click on the graph, and then the green “+” icon next to it. Then, check off “Axis Titles, Chart Titles, and Trendline”. Click on the added labels to edit the text. When finished, the graph should look similar to Figure 29.
Figure 29: Temperature vs. Time Graph of the Thermal Insulation Device
- Calculate the insulating capacity (IC) of the design. This will be done using the trendline equation. Right-click on the trendline and select “Format Trendline”. On the “Trendline Options” tab, ensure that the trendline is linear. Select the “Display Equation on Chart” option. The slope of this linear equation is the IC.
- Give the IC, final temperature, and total cost to the TA to enter the design's performance into the competition score sheet and calculate the MDR.
The lab work is now complete. Please clean up the workstation. Return all unused materials to a TA.
Individual Lab Report
Follow the lab report guidelines laid out in the EG1004 Writing Style Guide in the Technical Writing section of the manual. Use the outline below to write this report.
- Describe the basics of Arduino and its application
- Explain heat, heat transfer, and all the mechanisms that perform heat transfer. Discuss which of these mechanisms applied to the design
- Define thermal insulation and the different types of thermodynamic systems
- Discuss minimal design and its importance
- Describe the container's design. Explain the choices made. Include a discussion of the materials chosen and why.
Talk about the strategy for winning the competition - What changes would have increased/decreased the MDR or IC? - How was the IC value derived? - Should the Temperature vs. Time graph be smooth or should it have spikes? Explain how closely the curve approximates the ideal and what would affect the data recorded - Describe how the design succeeded or failed. Discuss design improvements - Include the spreadsheet with the competition results. Describe the results and talk about other designs in the class - Discuss what part of the lab each individual member completed for the group and how it was important to the overall experiment. Note: It is not unusual to experience instrumentation errors in this lab, leading to incorrect temperatures being recorded. Be sure to read How to Handle Unusual Data in the manual to learn how to handle this. Team PowerPoint Presentation Follow the presentation guidelines laid out in the EG1004 Lab Presentation Format in the Technical Presentations section of the manual. When preparing the presentation, consider the following points. - What is the importance of prototyping and using Arduino? - What is the importance of minimal design? - What is the importance of materials in prototyping? - Why is it important to minimize heat loss? - How can the design be improved? There were no references used.
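Referring back to the temperature-logging step of Part 3, the sketch below is one way the TMP 36 reading loop could look. It is a sketch of the approach rather than the exact code in the Tinkercad template: the sensor is assumed to be on pin A0 with a 5 V supply, and the conversion uses the TMP 36 relationship of a 500 mV offset at 0 °C and 10 mV per °C before converting to Fahrenheit. Readings are printed every 5 seconds as comma-separated "seconds, temperature" pairs, which is the format the Text-to-Columns step in Part 4 expects.

```cpp
// Illustrative TMP36 logging sketch (assumed pin and timing, per Part 3):
// read the sensor every 5 seconds and print "seconds, temperature (F)"
// to the Serial Monitor in a comma-separated format that pastes into Excel.

const int SENSOR_PIN = A0;           // TMP36 output connected to analog pin A0
const unsigned long INTERVAL = 5000; // 5 seconds between readings

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(SENSOR_PIN);      // 0-1023 from the 10-bit ADC
  float voltage = reading * (5.0 / 1024.0);  // convert to volts (5 V supply assumed)
  float tempC = (voltage - 0.5) * 100.0;     // TMP36: 500 mV offset, 10 mV per deg C
  float tempF = tempC * 9.0 / 5.0 + 32.0;    // Celsius to Fahrenheit
  // To compare with the Tinkercad slider, print tempC instead of tempF.

  unsigned long seconds = millis() / 1000;   // timestamp in seconds since reset

  Serial.print(seconds);
  Serial.print(", ");
  Serial.println(tempF);                     // e.g. "15, 98.42"

  delay(INTERVAL);                           // wait 5 seconds before the next reading
}
```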
For use with the Nikkei Center Suitcase, I wanted to create a lesson that could be used in upper elementary. This is the age range that most often checks out the suitcases. However, one has to be discerning of what one can discuss with students at that age, and for the most part, I left that up to the person teaching the lesson. Therefore, this lesson focuses more on the changes in physical features. It is a compare and contrast lesson on the past and future. There are elements of history, current day features, discussion and writing. This is a lesson that focuses on Japan town before Japanese Incarceration during World War II. Students will gain an understanding of change over a long period of time as well as some of the causes for that change. Students will also gain some writing practice. It was very interesting learning about a part of this city’s history. I feel that this lesson can be used and modified across the elementary grades and that it will give a foundation of deeper learning about the situation in the future. Having written several lessons at the third grade level at this point, I believe that this is both appropriate and useful at that grade.This lesson is merely a guide to a deeper lesson. Teachers may chose to modify it as they see fit for their classroom or grade level. Class/Topic: Social Studies Time: approx. 35 minutes for discussion and some writing time. Might be a good supplement for regular writing lesson. Grade Level: 3-5 Date: This lesson would be a basic overview of how things have changed in what used to be Japantown (and is now partly Chinatown) and what might have caused physical features as well as themes and ideas to change. To do this, students will look at old images; find similarities and differences to their own experiences and time. Then students will discuss and write about these themes. This is a lesson on the general Japanese-American experience before Japanese Incarceration occurred during WWII. - Students will gain a familiarity with how and why things change over time. - Students will be able to discuss things stay the same across time and how things change. Students will be gaining knowledge of how to work with images and documents as well as gaining familiarity over their own community as it was in the past and how it is now. This will also give them experience in finding similarities and differences as well as some writing skills. More: Download Incarceration Lesson PDF version of the complete lesson (81kb) Image Credit: http://www.oregonencyclopedia.org/entry/view/oregon_nikkei_legacy_center/
There’s no doubt that ice can reshape mountains. Researchers in South Africa now suggest adding another major player: lightning. They argue that the intense, momentary heat that lightning produces can crack apart mountain surfaces. This can slowly sculpt peaks, leaving rubble behind. Some critics, however, challenge whether lightning is really more than a bit player. Jasper Knight and Stefan Grab of the University of the Witwatersrand in Johannesburg propose that lightning may be a major reshaper of mountains in the Jan. 1 issue of Geomorphology. They make this claim after studying a rugged area called Drakensberg. It’s a mountainous region in Lesotho, a country in southern Africa. There the pair found substantial amounts of rocky debris on the slopes. In many areas, especially in polar regions and at high elevations, glaciers carve away parts of mountains. As these long-lived ice streams flow, they grind away at the bedrock below. This can break loose boulders that the glaciers then carry with them. Along the way, the glaciers and the rocky debris can gouge out broad, U-shaped valleys. Ice can play a big role in reshaping rocks even in cold areas without glaciers. There, water seeps into cracks and crevasses in rocks. When it freezes, the ice expands. This can sometimes exert enough force to bust large rocks into jagged pieces. But the mountain sites that Knight and Grab studied seldom freeze. They do, however, get plenty of thunderstorms. That got the researchers thinking: Could lightning be blasting the region’s rocks to pieces? After all, some regions midway between the poles and tropics may get hit with 150 bolts of lightning per square kilometer each year. (That’s about 390 strikes per square mile.) On upwind mountain slopes where thunderstorms are especially common, rates of lightning strikes can be even higher. And lightning is quite powerful. A bolt can heat the surrounding air, as well as any object it strikes, to a temperature approaching 30,000° Celsius in less than 1 millisecond (one-thousandth of a second). It can explode into steam any water on a rock. If that water is inside the rock, either soaked into its pores or stuck in a crack, the pressure increase as it flashes to steam could shatter rock. Another side effect: Lightning’s intense heat can reset the magnetic field locked within a rock’s magnetic minerals. Knight and Grab now report evidence of both at Drakensberg. The pair studied jagged rocks at dozens of sites. Eventually, they focused on eight sites scattered across several peaks. All were within 1 kilometer of each other. Throughout these areas were many, many signs of lightning strikes, notes Knight. Many rocks, for instance, seemed to be freshly broken. Also, no lichens grew on them. That last point makes sense, Knight says, because lightning can create temperatures high enough to quickly incinerate anything growing on a rock. (Lichens are not plants, but rather a blend of a fungus and an alga. They come in many shapes, colors and sizes, and they can grow in harsh environments. You may be most familiar with the flat, frilly types that grow on tree trunks, rocks and old tombstones.) These seemingly fresh surfaces also were harder than similar rocks nearby. But that’s not because the rocks were “cooked” by lightning, Knight notes. It’s because chemicals in the air or dissolved in rainwater can soften exposed rock surfaces over time. Finally, the researchers held a compass near these freshly broken rocks. It was the type of compass that hikers and explorers use. 
And at once, its needle moved. That was a sign that the rocks had a magnetic field. The closer the compass was held to the place where lightning had supposedly struck, the farther the needle moved, says Knight. That shows the area near the strike zone was magnetized differently than surrounding rocks. “The new findings are intriguing,” says Olav Slaymaker at the University of British Columbia in Vancouver. As a geomorphologist, he studies Earth’s landscapes and the processes that shape them. Knight and Grab have shown without a doubt that lightning strikes have an effect on rocks, he says. However, he adds, “there are a lot of questions still to be answered.” For instance, he notes, lightning is rare in some places but common in others. So maybe the small area that this team studied is unusual. Further field studies would be needed to make sure that lightning strikes are no more common here than at other sites in the area. Still, he argues, “geologists have not taken [lightning] as seriously as they might have.” Perhaps. But geomorphologist Ian Meiklejohn does not buy the new study’s conclusions. He works at Rhodes University in Grahamstown, South Africa. Having studied the rocks in the Drakensberg area for many years, he argues that simple water erosion can produce the same results. “There’s no need to look at exotic explanations when regular ones will do,” he says. What’s more, Knight and Grab focused on a tiny fraction of the Drakensberg region to explain what’s happening throughout these mountains generally. Finally, he contends that it is hard to tell whether recent lightning caused the magnetic changes noted by Knight and Grab. In fact, the magnetic changes could be very, very old. If that’s so, lightning may be shaping the mountains more slowly than Knight and Grab propose. Considering all the evidence, “I think it’s okay to say lightning makes a few jagged rocks here and there,” Meiklejohn says. But when it comes to mountain sculpting, he suspects that lightning’s role “is minuscule.”
- algae: Single-celled organisms, once considered plants, that grow in water and depend on sunlight to make their food.
- erosion: The process that removes rock and soil from one spot on Earth’s surface and then deposits the material elsewhere. Erosion can be exceptionally fast or exceedingly slow. Causes of erosion include wind, water (including rainfall and floods), the scouring action of glaciers, and the repeated cycles of freezing and thawing that often occur in some areas of the world.
- fungus (plural: fungi): Any of a group of unicellular or multicellular, spore-producing organisms that feed on organic matter, both living and decaying. Molds, yeast and mushrooms are all types of fungi.
- geology: The study of Earth’s physical structure and substance, its history and the processes that act on it. People who work in this field are known as geologists. Planetary geology is the science of studying the same things about other planets.
- geomorphology: The study of Earth’s landscapes and the processes that shape them.
- glacier: A slow-moving river of ice hundreds or thousands of meters deep. Glaciers are found in mountain valleys and also form parts of ice sheets.
- lichen: A blend of a fungus and an alga. Neither of these organisms is a plant; the lichen isn’t a plant either.
- magnetic field: A region around a magnetic material where the force of magnetism acts.
- pressure: Force applied uniformly over a surface, measured as force per unit of area.
The hip is an important joint that helps us walk, run and jump. The ball-and-socket joint of the hip is formed by the round end of the femur (thighbone) and the cup-shaped socket of the acetabulum (part of the pelvis). Stability of the hip joint is achieved by the labrum (a strong fibrous cartilage that covers the acetabulum and seals it), ligaments (tissues connecting bone to bone) and tendons (tissues connecting muscle to bone) that encase the hip and support the hip movements.
What is Snapping Hip Syndrome?
Snapping hip syndrome is a condition in which you hear or feel a snapping sound in the hip when you swing your legs, run, walk or get up from a chair. The sound can be experienced in the back, front or side of the hip.
Symptoms of Snapping Hip Syndrome
Snapping hip syndrome is usually harmless, but may be accompanied by pain and weakness. Sometimes, the syndrome can lead to bursitis, a painful swelling of the fluid-filled sacs called bursae that cushion the hip joint.
Causes of Snapping Hip Syndrome
The movement of muscles or tendons over a bony protrusion in the hip region gives rise to the snapping sound. The most common cause of snapping hip syndrome is tightness in the muscles and tendons surrounding the hip. Sometimes, a loose piece of cartilage, a cartilage tear or pieces of broken cartilage or bone in the joint space can cause the snapping sound. This may also lock the hip, causing disability along with the pain. However, this is less common. Sports or dance activities that involve repeated bending make you vulnerable to snapping hip syndrome. It may affect your performance.
Diagnosis of Snapping Hip Syndrome
Your doctor will review your medical history and symptoms and conduct a physical examination to detect the exact cause of snapping. You may be asked to reproduce the snapping sound by moving your hip in different directions. Imaging tests may be ordered by your doctor to rule out bone and joint problems.
Treatments for Snapping Hip Syndrome
Rest and modification of activities may be suggested initially by your doctor, followed by conservative therapeutic options. The therapeutic strategies for snapping hip include home remedies, stretching exercises guided by a physical therapist, corticosteroid injections and, rarely, surgery.
A few home remedies can be followed if you experience minor snapping hip pain, which include:
- Applying ice to the affected area
- Using NSAIDs to reduce discomfort
- Avoiding repetitive hip movements by changing your activities
Consult your doctor if the discomfort persists even after following the home remedies. Your doctor may teach you certain exercises to strengthen and stretch the musculature surrounding the hip. You may be guided by a physical therapist. Tendon stretching exercises such as the iliotibial band stretch and the piriformis stretch will be indicated depending on the type of snapping you experience. Your doctor may recommend a corticosteroid to be injected into the bursa to reduce the pain and inflammation in the hip joint in case you have hip bursitis. Surgery is recommended when conservative approaches do not resolve the snapping hip syndrome (which is rare). The type of surgery will depend on the factors that cause the syndrome. Surgical procedures include:
- Open procedure: An open incision of several centimeters will be made to resolve the issue of snapping hip. The open surgery can help your surgeon gain better access to the hip problem.
- Hip arthroscopy: This procedure is usually performed to remove or repair the torn labrum.
Your surgeon will insert an arthroscope (a small camera) into your hip joint so that minute surgical instruments can be guided with the help of a real-time image displayed on a large screen. Only very small cuts are required for this procedure because the arthroscope and surgical instruments are small. Your surgeon will discuss the best surgical option depending on your condition.
Fever in children is a necessary and healthy part of every child’s growth. As a first-time parent, it’s only normal to be concerned when your kid has a fever. For small children and newborns, a slightly elevated temperature may signal a significant illness. Although the level of fever will not specifically reveal the seriousness of the disease, any severe sickness could potentially cause a high fever or a minimal one. There is often some confusion with regard to what constitutes a fever in children. However, in this post, you will get to understand why your kid has a fever and how to reduce it.
Causes of Fever in Children
Fever by itself is not an illness but a symptom of an underlying problem. Children tend to develop fever more easily than adults, mainly due to their relatively immature immune systems. Potential causes of fever include the following:
- Infection: Infections such as the flu, chickenpox, colds, etc., are some of the most common causes of fever in children. A child gets a fever when the body is trying to kill the bacteria or viruses causing an infection, but sometimes the fever can be too high for the body. To avoid serious infections, you need to take care of your child properly.
- Overdressing: Newborns are not able to regulate their body temperatures properly. They may get a fever if they are overdressed or are left in hot environments for long periods.
- Reactions: Children may develop a fever if their bodies react to certain medications or foods. These reactions can raise the temperature and affect parts of the body such as the stomach, throat, nose and lungs.
- Immunization: Immunizations given to infants help to build the immune system that fights against diseases and protects them while growing up. When the immune system is at work, kids might experience symptoms such as swelling or fever. So, if your child has a fever after immunization, do not panic, because it is normal.
Other illnesses that can cause fever in children include malaria, pneumonia, diarrhea, tumors, and other viral infections.
Temperature in Children: When to Worry
Usually, when children experience fever, it does not affect their actions. They play, eat, drink, and perform other activities normally. You don’t have to worry if their temperature is around 98°F, especially if they are between 3 months and 3 years old. If your child is older than 3 years old, a temperature of around 100°F can still be expected. However, you would only need to be concerned when:
- Fever persists for more than five days: This could be a sign of a serious illness, so you need to reach out to a doctor for an expert examination.
- An infant develops a fever: Infants are young, and their immune systems are not strong. So when you notice that your infant has a lower or higher temperature than usual, you need to see a doctor.
- Your child is not acting fine: Infants and older children often have difficulty eating when they have a fever. So when you notice that your child is not acting fine, you need to visit a doctor. Other signs are if your baby wets fewer diapers than usual or if older children easily get dehydrated.
How to Reduce Fever in Children
Not all fevers should get you rushing to the emergency room at night. However, when the body temperature in your children is over 101°F, we can call it a high fever. Controlling high fever in kids is important to prevent many complications in the body.
Practising the tips below will help you reduce fever in your kids:
- Let them take a bath with lukewarm water.
- Give them lots of fluids.
- Check their body temperature with a thermometer at 30-minute intervals.
- Dress them in light clothing only.
- Avoid giving them heavy meals.
- Ensure they rest.
- Make use of fever reducers.
It is difficult to prevent fever in children. So, understanding that fever in children is a healthy immune response will keep you less worried. While you don’t have to worry much about fever, consult a doctor if the temperature does not come down within 24 hours.
- DESCRIPTION OF SCHISTOSOMIASIS
Schistosomiasis, also known as bilharzia, snail fever, and Katayama fever, is a disease caused by parasitic flatworms of the Schistosoma type. The urinary tract or the intestines may be infected. Signs and symptoms may include abdominal pain, diarrhea, bloody stool, or blood in the urine (Akpinar, 2012). In those who have been infected a long time, liver damage, kidney failure, infertility, or bladder cancer may occur. In children, it may cause poor growth and learning difficulties (Antoun et al., 2005). The disease is spread by contact with fresh water contaminated with the parasites. These parasites are released from infected freshwater snails. The disease is especially common among children in developing countries, as they are more likely to play in contaminated water (Akpinar, 2012). Other high-risk groups include farmers, fishermen, and people using unclean water during daily living. It belongs to the group of helminth infections. Diagnosis is by finding eggs of the parasite in a person’s urine or stool. It can also be confirmed by finding antibodies against the disease in the blood (Duke, 2002). Methods to prevent the disease include improving access to clean water and reducing the number of snails (Duke, 2002). In areas where the disease is common, the medication praziquantel may be given once a year to the entire group. This is done to decrease the number of people infected and, consequently, the spread of the disease. Praziquantel is also the treatment recommended by the World Health Organization (WHO) for those who are known to be infected (Akpinar, 2012; Antoun et al., 2005). Schistosomiasis affected almost 210 million people worldwide as of 2012. An estimated 12,000 to 200,000 people die from it each year. The disease is most commonly found in Africa, as well as Asia and South America. Around 700 million people, in more than 70 countries, live in areas where the disease is common. In tropical countries, schistosomiasis is second only to malaria among parasitic diseases with the greatest economic impact. Schistosomiasis is listed as a neglected tropical disease (Akpinar, 2012).
1.1 STUDY AREA
This study was carried out at START RIGHT MODEL SCHOOL, Sango Ota, Ado-Odo Ota Local Government Area, Ogun State. Schistosomiasis is noticed in this area of Ogun State, Nigeria, and in some neighboring areas, including Ado-Odo, Owode, etc. This is a result of factors such as:
- Poor drainage systems
- Poor waste disposal
- Flooding, etc.
1.2 PURPOSE OF STUDY
This study focuses on creating awareness among the citizens of Sango Ota, Ado-Odo Ota Local Government Area, Ogun State, of the presence of intestinal schistosomiasis and its possible prevention and control.
1.3 SIGNIFICANCE OF THE STUDY
The importance of this study is to identify the presence of intestinal schistosomiasis and its possible prevention and control at Sango Ota, Ado-Odo Ota Local Government Area, Ogun State.
1.4 SCOPE OF STUDY
This research focuses on the prevalence of intestinal schistosomiasis among pupils in START RIGHT MODEL SCHOOL, Sango Ota, Ado-Odo Ota Local Government Area, Ogun State, and possible prevention and control methods.
1.5 LIMITATIONS OF THE STUDY
Some of the limitations faced in the course of carrying out this project are:
- Illiteracy of parents in the area
- Poor finances
- Difficulty in the collection of samples
1.6 DEFINITION OF TERMS
Helminths: large multicellular organisms which, when mature, can generally be seen with the naked eye.
They are often referred to as intestinal worms, even though not all helminths reside in the intestines; for example, schistosomes are not intestinal worms, but rather reside in blood vessels.
Mutualism: a relationship between two species of organisms in which both benefit from the association.
Commensalism: a type of relationship between two species of a plant, animal, fungus, etc., in which one lives with, on, or in another without damage to either.
Scientists and authorities are concerned about recent reports of a dangerous fissure and thinning of the Doomsday or Thwaites Glacier in Antarctica. Thwaites contains a colossal amount of ice, enough to gradually raise the sea level by more than two feet, although its collapse in a warming climate could destabilize neighboring glaciers and release many more feet of sea level rise. The Antarctic glacier has destabilized, retreating nearly nine miles since the 1990s. If much of it gradually melts over the coming decades and centuries, large swaths of coastal cities and towns around the world could be flooded and easily destroyed by storms. The latest 2023 study, straight from the source in West Antarctica, also shows how the glacier is melting. The critical point is under the ice shelf of the Doomsday Glacier, which is the end that extends over the ocean. It is important to note that ice shelves connect to the ocean floor, acting like a “cork in a bottle” and preventing the rest of the colossal glacier from flowing unhindered into the sea. So if the ice shelf eventually disappears, so will the glacier. A recent study published in the scientific journal Nature shows two main findings:
- The glacier continues to melt underwater, but in the flat areas that make up most of this ice shelf, this thinning is slower (about six to 16 feet, or two to five meters, per year) than expected.
- Still, Thwaites is melting faster than expected in fissures below the critical floating ice shelf. Scientists suspect that relatively warmer water seeps into natural cracks and crevices, amplifying melting in those weaker spots.
“Our results came as a surprise, but the glacier is still in trouble,” Peter Davies, an oceanographer with the British Antarctic Survey who took some of the recent measurements at Thwaites, said in a statement. “If the ice shelf and the glacier are in balance, the ice leaving the continent will match the amount of ice lost through the melting and calving of icebergs. We found that despite a small amount of melt, the glaciers are still rapidly retreating, so it doesn’t seem like it takes long for a glacier to get out of balance.” On a recent excursion to West Antarctica, researchers set up camp on the remote Thwaites Ice Shelf and dropped the Icefin robot into the water below. Rare images, shown in the British Antarctic Survey video below, show what is happening with the thinning ice. Melting in the crevasses left “ladder” formations at the bottom of the Doomsday Glacier. Source: Digital Trends
The Linux Programming Interface
Kernel: The central software that manages and allocates computer resources (i.e., the CPU, RAM, and devices). Although it is possible to run programs on a computer without a kernel, the presence of a kernel greatly simplifies the writing and use of other programs, and increases the power and flexibility available to programmers. The kernel does this by providing a software layer to manage the limited resources of a computer.
- Process scheduling
  - A computer has one or more CPUs, which execute the instructions of programs.
  - Like other UNIX systems, Linux is a preemptive multitasking OS.
  - Multitasking means that multiple processes (i.e., running programs) can simultaneously reside in memory and each may receive use of the CPUs.
  - Preemptive means that the rules governing which processes receive use of the CPU and for how long are determined by the kernel process scheduler (rather than by the processes themselves).
- Memory management
  - Like most modern operating systems, Linux employs virtual memory management, a technique that confers two main advantages.
  - Advantage 1: processes are isolated from one another and from the kernel, so that one process can't read or modify the memory of another process or the kernel.
  - Advantage 2: only part of a process needs to be kept in memory, thereby lowering the memory requirements of each process and allowing more processes to be held in RAM simultaneously. This leads to better CPU utilization, since it increases the likelihood that, at any moment in time, there is at least one process that the CPUs can execute.
- Provision of a file system
  - The kernel provides a file system on disk, allowing files to be created, retrieved, updated, deleted, and so on.
- Creation and termination of processes
  - The kernel can load a new program into memory, providing it with the resources (e.g., CPU, memory, and access to files) that it needs in order to run.
  - Such an instance of a running program is termed a process.
  - Once a process has completed execution, the kernel ensures that the resources it used are freed for subsequent reuse by later programs.
- Access to devices
  - The devices (mice, monitors, keyboards, disk and tape drives, and so on) attached to a computer allow communication of information between the computer and the outside world, permitting I/O.
  - The kernel provides programs with an interface that standardizes and simplifies access to devices, while at the same time arbitrating access by multiple processes to each device.
- Networking
  - The kernel transmits and receives network messages (packets) on behalf of user processes. This task includes routing of network packets to the target system.
- Provision of a syscall API
  - Processes can request the kernel to perform various tasks using the kernel entry points known as syscalls.
Modern processor architectures typically allow the CPU to operate in at least two different modes: user mode and kernel mode. Hardware instructions allow switching from one mode to the other.
- When running in user mode, the CPU can access only memory that is marked as being in user space; attempts to access memory in kernel space result in a hardware exception.
- When running in kernel mode, the CPU can access both user and kernel memory space.
- Certain operations can be performed only while the processor is operating in kernel mode. Examples include:
  - executing the halt instruction to stop the system
  - accessing the memory-management hardware
  - initiating device I/O operations.
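To make the user-mode/kernel-mode split above concrete, here is a minimal sketch in C (my own illustration, not an example from the book). Both calls below are ordinary library wrappers that trap into the kernel, which then does the privileged work of driving the output device and reporting the process ID.

```c
/* Sketch: every privileged action below is a request to the kernel,
 * issued through a thin system-call wrapper in the C library. */
#include <stdio.h>      /* printf() - stdio, layered over write()      */
#include <string.h>     /* strlen()                                    */
#include <unistd.h>     /* write(), getpid() - direct syscall wrappers */

int main(void)
{
    const char *msg = "hello from user mode\n";

    /* write() traps into kernel mode; descriptor 1 is standard output,
     * and the kernel arbitrates access to the underlying device. */
    write(STDOUT_FILENO, msg, strlen(msg));

    /* getpid() also enters the kernel: only the kernel knows which
     * process is currently scheduled on the CPU. */
    printf("my PID is %ld\n", (long) getpid());

    return 0;
}
```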
The kernel knows and controls everything:
- A process can create another process => A process can request the kernel to create another process.
- A process can create a pipe => A process can request the kernel to create a pipe.
- A process can write data to a file => A process can request the kernel to write data to a file.
- A process can terminate by calling exit() => A process can request the kernel to terminate it by calling _exit().
Universality of I/O: The same syscalls (open(), read(), write(), close(), and so on) are used to perform I/O on all types of files, including devices. Thus, a program employing these syscalls will work on any type of file. The kernel essentially provides one file type: a sequential stream of bytes, which, in the case of disk files, disks, and tape devices, can be randomly accessed using the lseek() system call.
The I/O syscalls refer to open files using a file descriptor, a (usually small) non-negative integer. A file descriptor is typically obtained by a call to open(), which takes a pathname argument specifying a file upon which I/O is to be performed. Normally, a process inherits 3 open file descriptors when it is started by the shell:
- Descriptor 0 - standard input (stdin) - The file from which the process takes its input
- Descriptor 1 - standard output (stdout) - The file to which the process writes its output
- Descriptor 2 - standard error (stderr) - The file to which the process writes error messages and notification of exceptional or abnormal conditions.
To perform file I/O, C programs typically employ I/O functions contained in the standard C library. This set of functions, referred to as the stdio library, includes fopen(), fclose(), scanf(), printf(), fgets(), fputs(), and so on. The stdio functions are layered on top of the I/O syscalls (open(), close(), read(), write(), and so on).
Process: A process is an instance of an executing program. A process is logically divided into the following parts, known as segments:
- Text (.text): the instructions of the program.
- Data (.data for initialized and .bss for uninitialized): the static variables used by the program.
- Heap: an area from which programs can dynamically allocate extra memory.
- Stack: a piece of memory that grows and shrinks as functions are called and return and that is used to allocate storage for local variables and function call linkage information.
A process can create a new process using the fork() system call. The process that calls fork() is referred to as the parent process, and the new process is referred to as the child process. The kernel creates the child process by making a duplicate of the parent process. The child inherits copies of the parent's data, stack, and heap segments, which it may then modify independently of the parent's copies. The program text, which is placed in memory marked as read-only, is shared by the two processes.
The child process goes on either to execute a different set of functions in the same code as the parent, or, frequently, to use the execve() syscall to load and execute an entirely new program. An execve() call destroys the existing text, data, stack, and heap segments, replacing them with new segments based on the code of the new program.
Each process has a unique integer process identifier (PID). Each process also has a parent process identifier (PPID) attribute, which identifies the process that requested the kernel to create this process.
A process can terminate in one of two ways:
- 1. By requesting its own termination using the _exit() system call (or the related exit() library function)
- 2. By being killed by the delivery of a signal.
In either case, the process yields a termination status, a small nonnegative integer value that is available for inspection by the parent process using the wait() system call. In the case of a call to _exit(), the process explicitly specifies its own termination status. If a process is killed by a signal, the termination status is set according to the type of signal that caused the death of the process. By convention, a termination status of 0 indicates that the process succeeded, and a nonzero status indicates that some error occurred. Most shells make the termination status of the last executed program available via a shell variable named $?.
Each process has a number of associated user IDs (UIDs) and group IDs (GIDs). These include:
- Real user ID (rUID) and real group ID (rGID): These identify the user and group to which the process belongs. A new process inherits these IDs from its parent. A login shell gets its real user ID and real group ID from the corresponding fields in the system password file.
- Effective user ID (eUID) and effective group ID (eGID): These two IDs (in conjunction with the supplementary group IDs discussed in a moment) are used in determining the permissions that the process has when accessing protected resources such as files and interprocess communication objects. Typically, the process's effective IDs have the same values as the corresponding real IDs. Changing the effective IDs is a mechanism that allows a process to assume the privileges of another user or group, as described in a moment.
- Supplementary group IDs: These IDs identify additional groups to which a process belongs. A new process inherits its supplementary group IDs from its parent. A login shell gets its supplementary group IDs from the system group file.
Privileged process => eUID == 0. Such a process bypasses the permission restrictions normally applied by the kernel. By contrast, the term unprivileged (or nonprivileged) is applied to processes run by other users. Such processes have a nonzero eUID and must abide by the permission rules enforced by the kernel. A process may be privileged because it was created by another privileged process — for example, by a login shell started by root. Another way a process may become privileged is via the Set-UID mechanism, which allows a process to assume an eUID that is the same as the UID of the program file that it is executing.
Since kernel 2.2, Linux divides the privileges traditionally accorded to the superuser into a set of distinct units called capabilities. Each privileged operation is associated with a particular capability, and a process can perform an operation only if it has the corresponding capability. root process = all capabilities enabled. Granting a subset of capabilities to a process allows it to perform some of the operations normally permitted to the superuser, while preventing it from performing others.
When booting the system, the kernel creates a special process called init, the "parent of all processes", which is derived from the program file /sbin/init. All processes on the system are created (using fork()) either by init or by one of its descendants. The init process always has the process ID 1 and runs with superuser privileges. The init process can't be killed (not even by the superuser), and it terminates only when the system is shut down. The main task of init is to create and monitor a range of processes required by a running system.
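To tie together fork(), execve() (here via its execl() front end), _exit(), and wait() with the termination status just described, here is a minimal sketch in C. It is my own illustration (the use of /bin/echo is an arbitrary choice), not code taken from the book.

```c
/* Sketch of the process lifecycle: fork() -> exec -> _exit() -> wait(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();               /* kernel duplicates the caller    */

    if (child == -1) {
        perror("fork");
        exit(EXIT_FAILURE);
    }

    if (child == 0) {                   /* child: replace the program text */
        execl("/bin/echo", "echo", "hello from the child", (char *) NULL);
        _exit(127);                     /* reached only if execl() failed  */
    }

    int status;
    waitpid(child, &status, 0);         /* parent: collect the termination status */

    if (WIFEXITED(status))
        printf("child %ld exited with status %d\n",
               (long) child, WEXITSTATUS(status));
    else if (WIFSIGNALED(status))
        printf("child %ld was killed by signal %d\n",
               (long) child, WTERMSIG(status));

    return 0;
}
```

Run from a shell, the parent's own exit status would afterwards be available in $?, as noted above.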
Read the "Processes" section.
The mmap() syscall creates a new memory mapping in the calling process's virtual address space. Mappings fall into two categories:
- A file mapping maps a region of a file into the calling process's virtual memory. Once mapped, the file's contents can be accessed by operations on the bytes in the corresponding memory region. The pages of the mapping are automatically loaded from the file as required.
- By contrast, an anonymous mapping doesn't have a corresponding file. Instead, the pages of the mapping are initialized to 0.
The memory in one process's mapping may be shared with mappings in other processes. This can occur either because two processes map the same region of a file or because a child process created by fork() inherits a mapping from its parent. When two or more processes share the same pages, each process may see the changes made by other processes to the contents of the pages, depending on whether the mapping is created as private or shared:
- When a mapping is private, modifications to the contents of the mapping are not visible to other processes and are not carried through to the underlying file.
- When a mapping is shared, modifications to the contents of the mapping are visible to other processes sharing the same mapping and are carried through to the underlying file.
Read the "Memory Mappings" section.
A running Linux system consists of numerous processes, many of which operate independently of each other. Some processes, however, cooperate to achieve their intended purposes, and these processes need methods of communicating with one another and synchronizing their actions. One way for processes to communicate is by reading and writing information in disk files. However, for many applications, this is too slow and inflexible. Therefore, Linux, like all modern UNIX implementations, provides a rich set of mechanisms for interprocess communication (IPC), including the following:
- Signals, which are used to indicate that an event has occurred;
- Pipes (familiar to shell users as the | operator) and FIFOs, which can be used to transfer data between processes;
- Sockets, which can be used to transfer data from one process to another, either on the same host computer or on different hosts connected by a network;
- File Locking, which allows a process to lock regions of a file in order to prevent other processes from reading or updating the file contents;
- Message Queues, which are used to exchange messages (packets of data) between processes;
- Semaphores, which are used to synchronize the actions of processes;
- Shared Memory, which allows two or more processes to share a piece of memory. When one process changes the contents of the shared memory, all of the other processes can immediately see the changes.
Read the "Interprocess Communication" section.
Signals are often described as "software interrupts". The arrival of a signal informs a process that some event or exceptional condition has occurred. There are various types of signals, each of which identifies a different event or condition. Each signal type is identified by a different integer, defined with a symbolic name of the form SIGxxxx. Signals are sent to a process by the kernel, by another process (with suitable permissions), or by the process itself.
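Stepping back to the mmap() discussion above for a moment (the signal notes continue below), here is a minimal sketch of a shared anonymous mapping used to pass a value from a child process back to its parent. It is my own illustration of the mapping categories described above, not code from the book.

```c
/* Sketch: a shared anonymous mapping survives fork(), so parent and
 * child see each other's writes to the mapped page. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* One shared, anonymous mapping: no backing file, initialized to 0. */
    int *value = mmap(NULL, sizeof(*value), PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (value == MAP_FAILED) {
        perror("mmap");
        exit(EXIT_FAILURE);
    }

    if (fork() == 0) {      /* child inherits the shared mapping */
        *value = 42;        /* visible to the parent because MAP_SHARED */
        _exit(EXIT_SUCCESS);
    }

    wait(NULL);             /* make sure the child has written first */
    printf("parent read %d from the shared mapping\n", *value);

    munmap(value, sizeof(*value));
    return 0;
}
```

With MAP_PRIVATE instead of MAP_SHARED, the child's write would not be visible to the parent, matching the private/shared distinction above.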
For example, the kernel may send a signal to a process when one of the following occurs:
- The user typed the interrupt character (usually Ctrl-C) on the keyboard
- One of the process's children has terminated
- A timer (alarm clock) set by the process has expired
- The process attempted to access an invalid memory address
Within the shell, the kill command can be used to send a signal to a process. The kill() system call provides the same facility within programs.
When a process receives a signal, it takes one of the following actions, depending on the signal:
- It ignores the signal
- It is killed by the signal
- It is suspended until later being resumed by receipt of a special-purpose signal
For most signal types, instead of accepting the default signal action, a program can choose to ignore the signal (useful if the default action for the signal is something other than being ignored), or to establish a signal handler. A signal handler is a programmer-defined function that is automatically invoked when the signal is delivered to the process. This function performs some action appropriate to the condition that generated the signal.
In the interval between the time it is generated and the time it is delivered, a signal is said to be pending for a process. Normally, a pending signal is delivered as soon as the receiving process is next scheduled to run, or immediately if the process is already running. However, it is also possible to block a signal by adding it to the process's signal mask. If a signal is generated while it is blocked, it remains pending until it is later unblocked (i.e., removed from the signal mask).
Read the "Signals" section.
Threads are a set of processes that share the same virtual memory, as well as a range of other attributes. Each thread is executing the same program code and shares the same data area and heap. However, each thread has its own stack containing local variables and function call linkage information. Threads can communicate with each other via the global variables that they share. The threading API provides condition variables and mutexes, which are primitives that enable the threads of a process to communicate and synchronize their actions, in particular, their use of shared variables. Threads can also communicate with one another using IPC and synchronization mechanisms.
The primary advantages of using threads are that they make it easy to share data (via global variables) between cooperating threads and that some algorithms transpose more naturally to a multithreaded implementation than to a multiprocess implementation. Furthermore, a multithreaded application can transparently take advantage of the possibilities for parallel processing on multiprocessor hardware.
Read the "Threads" section.
A syscall is a controlled entry point into the kernel, allowing a process to request that the kernel perform some action on the process's behalf. The kernel makes a range of services accessible to programs via the syscall API. These services include, for example, creating a new process, performing I/O, and creating a pipe for interprocess communication. Before going into the details of how a system call works, we note some general points:
- A syscall changes the processor state from user mode to kernel mode, so that the CPU can access protected kernel memory.
- The set of system calls is fixed. Each system call is identified by a unique number.
- Each system call may have a set of arguments that specify information to be transferred from user space (i.e., the process's virtual address space) to kernel space and vice versa.
From a programming point of view, invoking a syscall looks much like calling a C function. However, behind the scenes, many steps occur during the execution of a system call. To illustrate this, we consider the steps in the order that they occur on a specific hardware implementation, the x86-32. The steps are as follows:
- 1. The application program makes a syscall by invoking a wrapper function in the C library.
- 2. The wrapper function must make all of the syscall arguments available to the syscall trap-handling routine (described shortly). These arguments are passed to the wrapper via the stack, but the kernel expects them in specific registers. The wrapper function copies the arguments to these registers.
- 3. Since all syscalls enter the kernel in the same way, the kernel needs some method of identifying the system call. To permit this, the wrapper function copies the syscall number into a specific CPU register (%eax).
- 4. The wrapper function executes a trap machine instruction (int 0x80), which causes the processor to switch from user mode to kernel mode and execute code pointed to by location 0x80 (128 decimal) of the system’s trap vector.
- 5. In response to the trap to location 0x80, the kernel invokes its system_call() routine to handle the trap. This handler:
  - Saves register values onto the kernel stack.
  - Checks the validity of the syscall number.
  - Invokes the appropriate syscall service routine, which is found by using the syscall number to index a table of all syscall service routines (the kernel variable sys_call_table). If the syscall service routine has any arguments, it first checks their validity; for example, it checks that addresses point to valid locations in user memory. Then the service routine performs the required task, which may involve modifying values at addresses specified in the given arguments and transferring data between user memory and kernel memory (e.g., in I/O operations). Finally, the service routine returns a result status to the system_call() routine.
  - Restores register values from the kernel stack and places the syscall return value on the stack.
  - Returns to the wrapper function, simultaneously returning the processor to user mode.
- 6. If the return value of the syscall service routine indicated an error, the wrapper function sets the global variable errno using this value. The wrapper function then returns to the caller, providing an integer return value indicating the success or failure of the syscall.
The following diagram illustrates the above sequence using the example of the execve() syscall.
Steps in the execution of a syscall
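As a closing sketch for the signal and syscall-mechanics notes above, the short C program below installs a SIGINT handler with sigaction(), provokes a failing open() to show the wrapper reporting the kernel's error through errno, and then waits for a signal with pause(). It is my own illustration; the nonexistent path name is deliberate.

```c
/* Sketch: a signal handler plus errno reporting from a failed syscall. */
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

/* Keep the handler minimal: just record that the signal arrived. */
static void on_sigint(int sig)
{
    (void) sig;
    got_sigint = 1;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_sigint;          /* our handler replaces the default action */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);

    /* A syscall that fails: the wrapper returns -1 and stores the
     * kernel's error code in errno (step 6 above). */
    if (open("/no/such/file", O_RDONLY) == -1)
        printf("open failed: errno=%d (%s)\n", errno, strerror(errno));

    printf("press Ctrl-C to deliver SIGINT...\n");
    while (!got_sigint)
        pause();                        /* sleep until a signal is delivered */

    printf("caught SIGINT; exiting\n");
    return 0;
}
```

The errno value printed here is exactly the error status handed back by the service routine and propagated by the wrapper in step 6 above.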
This activity can be used in a co-design or other group design session to encourage participants to work together toward more creative and resourceful solutions, by thinking beyond the current or obvious. You will need a deck of object cards. You can create your deck using any available materials. You might print images of different objects on paper or cardstock, use language learning flashcards, cut out magazine images and paste them onto cards, or find other ways to create a collection of random objects. If desired and available you can provide additional tools and materials that can be used together with the object cards to build prototypes (e.g. scissors, glue, tape, pipe cleaners, etc.). - Break into smaller groups (3 to 5 people per group is usually best) - Keeping the objects on the cards hidden, each group selects the same number of cards from the main deck (this number can be decided by the larger group). - Each group then chooses or is assigned one function from a collection of functions – these can be determined by the facilitator ahead of time or collectively by the group. The functions can be realistic or can be more experimental and exploratory. For example: communicating with a neighbour during a power failure, sending money to relatives overseas, taking your newborn baby to work, walking your dog in an airplane, looking after your parents remotely, etc. - The challenge for each group is to combine and use all of the objects on their cards to create something that fulfills the selected function. One way to approach this might be to tell a story about how your objects come together to fulfill the function. Tip: Brainstorm different approaches and don’t limit your ideas to what is technically possible. Tip: If there are members in your group that require audio description, be sure to describe the object cards and the process as you move forward. Allow for tactile exploration where appropriate. Tip: You can prototype or communicate your solution/idea/story in whatever way that works best for your group (e.g. sketching, building, describing, etc). Tip: In addition to the object cards, you can use any available materials to prototype your idea. The goal of this activity is to promote creative thinking over immediate feasibility. It encourages participants to dig deeper to find unexpected and creative ways of addressing a problem.
Bark beetles are difficult to manage with pesticide treatments. Good cultural practices such as pruning and timely watering seem to offer the best methods of control of bark beetles. With over 600 species of bark beetles, the good news is that bark beetles do not usually kill healthy trees. The 5 mm insects tend to attack trees that are already under environmental stress caused by drought, infection or old age, or trees that have recently been transplanted. The bad news is, if a healthy tree becomes infested there is little to do except cut off affected limbs. Bark beetles, of which there are many varieties, damage trees by burrowing under the bark to feed on the inner layer of wood and to lay eggs. When the eggs hatch, the bark beetle larvae dig tunnels into the wood beneath the bark. The beetle develops from the larva into a pupa and finally into an adult before it leaves the chamber and flies to another tree to start the cycle over again. Bark beetles have several generations each year. You can take measures to protect trees by knowing what kind of beetle is infesting trees in your area and when they are flying. The type of tree species attacked will help distinguish the kind of beetle you are battling. On the positive side, bark beetles are ecologically helpful because they feed on dead wood to assist in decomposition. They also benefit the overall health of a forest because they feed on already weakened trees and help make room for new, healthy trees. Healthy trees repel bark beetles by leaking sap that contains insecticidal chemicals, driving off burrowing beetles. However, large outbreaks of bark beetles can overwhelm even healthy trees. In recent years bark beetles have been spreading through British Columbia because there has not been enough freezing weather to reduce their population. Researchers fear global climate change might create a growing bark beetle population problem, a potential disaster for the lumber industry. If the bark beetles' burrowing were not bad enough, they also carry fungal spores that can infect trees. Dutch elm disease, for example, is spread by elm bark beetles and can easily kill a healthy tree, sometimes in as little as two months. The disease is spread when the beetles emerge as adults from an infected tree and carry the spores to a new tree by burrowing into it. The fungus actually helps the beetle by slowing the flow of tree sap, but the spores interfere with the distribution of water and nutrients, clogging up the tree's interior system. There are ongoing forest management efforts by the US Forest Service and state agencies to save bark beetle infested trees and forests. Infested trees are marked and removed. Root connections are severed to help save trees from spreading fungi. Pruning dead branches or branches where bark beetle larvae have begun burrowing can help save a healthier tree. Insecticides do not kill the bark beetle larvae tunneling beneath the bark. If pesticides are used, the bark must be sprayed so that adult bark beetles are killed as they attempt to bore into the bark to lay eggs. Sprays wear off in the rain and sun, so fresh treatments are required. Plus, treating badly infested trees will have no effect at all, but may kill beneficial insects. Improving the overall health of a tree by giving plenty of water in the summer and by pruning dead branches will often be enough to prevent a bark beetle predicament.
How Do Scientists Know If You Have COVID-19?
In this exhibit, we are showing how to test for COVID-19 using saliva instead of nose swabs, using a method originally developed at the University of Colorado that relies on the virus's genetic code to see if you're infected. It's also important to note that the test is non-diagnostic, which means it can't be used to diagnose whether or not you have COVID, but it can be helpful for figuring out if COVID-19 is spreading in a certain community. In the first step of the test, the viruses are broken open by boiling to release their RNA; the sample is then added to a special mix of chemicals that turns the saliva pink. The chemicals include special proteins called polymerases, which are used to check whether or not certain genes are present. For this test, we are looking for the gene for the spike protein found on the surface of the COVID-19 virus. If you are not infected (and thus do not have the COVID-19 virus in you), the saliva will remain pink. If you are infected, the test will detect the virus and turn your saliva yellow.
WHAT ARE BLACK HOLES?: A friend wanted to know what black holes are made of. If they form from the collapse of a huge star that runs out of hydrogen and helium, and after it explodes in a supernova the rest of the gases are blown away, then what remains collapses into a nucleus of stored heavy elements. What happens to matter under such extreme conditions, which we cannot create on Earth to watch closely? Very high pressures and very high and low temperatures are generated. Let us keep certain observations in mind.
1) Black holes are known to have intense magnetic fields. It is known that when heated above 176° Fahrenheit (80° Celsius), magnets will quickly lose their magnetic properties. The magnet will become permanently demagnetized if exposed to these temperatures for a certain length of time or heated at a significantly higher temperature (the Curie temperature). Modern magnet materials do lose a very small fraction of their magnetism over time. For Samarium Cobalt materials, for example, this has been shown to be less than 1% over a period of ten years. Thus, to have an intense magnetic field, the interior of a black hole cannot be hot. However, the modern notion is that it must be tremendously hot - millions of degrees Kelvin - which is self-contradictory.
2) The event horizon of a black hole is said to be a one-way filter: anything can enter it, but nothing can leave it. The concept of the event horizon itself is wrong. A light pulse expanding in two dimensions plus time will sketch concentric circles, not the three-dimensional cone structure usually shown. If we add the third dimension, then it will sketch concentric spheres of increasing radius, not a time cone. This has misguided science for a long time and needs correction. We see when light from a radiating body reaches our eyes. It has no colors. Light itself is not visible unless it meets our eye to show its source. We see only the source and not the light proper. We see color only when a reflected light meets our eyes. All radiation moves in curved paths, i.e., waves within a fixed band. But once it is received in an apparatus, including the eye, it behaves as a straight line. In both cases, it does not behave like a cone.
3) But that notion is now changing. Black holes do not have such "event horizons" according to Hawking, confirmed by NASA. In that case, the event horizon would, in theory, become smaller than the apparent horizon. Hawking's new suggestion is that the apparent horizon is the real boundary. The absence of event horizons means that there are no black holes — in the sense of regimes from which light cannot escape to infinity. This was suggested earlier by Abhas Mitra, but his solution - the Magnetically Eternally Collapsing Object (MECO) - is again wrong. If it is magnetic, it cannot be hot, and nothing can collapse eternally.
4) Black holes are detected indirectly by the intense X-rays coming from them. When material falls into a black hole from a companion star, it gets heated to millions of degrees Kelvin and accelerated. The superheated material emits X-rays, which can be detected by X-ray telescopes. But the difference in origin between X-rays and gamma rays is that X-rays are emitted by the negatively charged outer electron shells of atoms, whereas gamma rays are emitted by the positively charged nucleus. There is no reason to believe that in a black hole it happens otherwise. The nature of negative charge is to confine positive charge.
Thus, there must be a positive charge in the core of the black hole, which should not be hot. The only possibility is that it has to be antimatter.
5) A black hole is said to be a very simple object: it has only three properties: mass, spin and electrical charge. Because of the way in which black holes form, their electrical charge is said to be probably zero. But charge-neutral objects do not emit X-rays. If the radiation were coming from the positively charged core, it should be gamma rays and not X-rays.
6) An object with immense mass (hence gravitational pull) like a galaxy or black hole between the Earth and a distant object could bend the light from the distant object into a focus – gravitational lensing. If a visible star or disk of gas has a "wobbling" or spinning motion with no visible reason for it, and the invisible reason has an effect that appears to be caused by an object with a mass greater than three solar masses (too big to be a neutron star), then it is possible that a black hole is causing the motion. Scientists then estimate the mass of the black hole by looking at its effect on the visible object. For example, at the center of the Milky Way, we see an empty spot where all of the stars are circling around as if they were orbiting a really dense mass. That is where the black hole is.
7) Black holes have spin. But since a black hole is constituted of an antimatter core, it cannot have normal spin, only internal spin, which would make entry into a black hole a winding path. Interestingly, our ancients have described some such object, which has a winding path to the core, which is lighted but not hot, and any matter coming into contact with it gets annihilated. They describe the object by various names like Shilocchaya (meaning compact object) and Guha (meaning visibility from within), whose center was described as negatively charged and called Swayamprabha (meaning self-illuminated). If we compare this description with the fact that the centers of galaxies have black holes, we come to an interesting conclusion.
Think of a charge-neutral object coming near a positively charged object. The part of the charge-neutral object facing the positively charged object suddenly develops a negative charge and the other end a positive charge, so that the negative charge, which generally confines the positive charge, is now itself confined between positive charges. This in turn leads to a reaction involving the release of high energy and further realignment restoring balance. Such a thing happens inside atoms continuously, involving protons and neutrons. The extra energy released appears as W bosons with a mass of 80.385 +/- 0.015 GeV/c^2, even though the masses of protons and neutrons are of the order of 938.28 MeV/c^2 and 939.57 MeV/c^2 respectively.
Conclusion: Black holes are macro equivalents of neutrons.
The Rodrigues fruit bat is only found on the island of Rodrigues in the western Indian Ocean. Head and body length of adults is about 35 cm, wingspan is about three feet (roughly 90 cm), and they can weigh up to 250 grams. Their fur is thick and dark brown in color, and their heads are covered with golden brown mantles that vary in size and color. The face is similar to that of a fox, and the ears are short. The Rodrigues fruit bat prefers large, contiguous tracts of woodland with mature trees for its habitat. The trees are needed for roosts and protection against the frequent cyclones that occur in the area. They are also a source of food: the bat's diet consists of leaves, flowers, and fruit. These bats are nocturnal and social, preferring to live in groups of eight to 15 bats. The groups mainly consist of one male and the rest females, but some have been known to allow more than one male in the group. Females produce only one pup after a gestation period of 120 to 180 days. Rodrigues fruit bats were once abundant in the wild, but their numbers decreased significantly between 1968 and 1972 because of cyclones, which are natural climatic disasters. A number of bats were thrown into the sea, and some of the bats that survived the storms later died because of lack of food and shelter. Natural recovery of the species has been slow, but future cyclones and loss of habitat due to deforestation still threaten the species.
The gene-editing technology CRISPR has the potential to change everything about medicine. With CRISPR, scientists and doctors can potentially edit a person’s genome on the fly, fixing all manner of genetic diseases with a simple, non-invasive procedure. At least, that’s the plan. In reality, CRISPR is pretty complicated, and any attempt to use it in human patients inevitably leads to some complex engineering. A recent paper from a group of Stanford researchers found that most humans may even be immune to CRISPR altogether. The findings were reported in a preprint paper, which means they haven’t been peer reviewed or published in a journal yet. Nevertheless, the paper has received a significant amount of attention from experts in genetics. Part of the CRISPR system comes from bacteria, which often use it as a defensive weapon to attack viruses and other bacteria. CRISPR was adapted into a gene-editing tool in 2012 and modified to deliver custom DNA to human cells, but at its core it’s still something that came from bacteria. That means that our immune systems have probably evolved to fight it. At the center of the issue is a protein called Cas9, which is used as part of CRISPR-Cas9 to target and cut out specific sections of DNA. Without Cas9, CRISPR wouldn’t work, but it’s this specific protein that our bodies have learned to fight. Cas9 is typically found in harmful bacteria like Staphylococcus aureus and Streptococcus pyogenes, which cause staph and strep infections, respectively, so it’s usually a good thing that our bodies fight the protein. However, this does present a hurdle for researchers learning how to use CRISPR in human trials. There are already some tricks that researchers are using to circumvent our immune systems, like only using CRISPR outside of the body or in places that immune cells can’t reach, but ultimately we may have to abandon Cas9 for a different protein that doesn’t set off our bodies’ defenses. And if that’s the case, CRISPR researchers could be facing a serious setback. Source: The Atlantic
An anecdote, subsequently mentioned frequently by reviewers, exemplifies the topic that Hirsch addresses. A secondary school pupil, informed that Latin is a dead language, reacted disbelievingly: “What do they speak in Latin America?” This vignette reinforces a widely shared observation that American schools are largely unsuccessful in instilling in students the information and skills required to be effective in the contemporary world. Hirsch contends that deficiencies in skill and information (cultural literacy) are inseparable; skill (reading) is dependent upon having information, and not merely that pertaining to the skill itself. Briefly, reading comprehension is a product, in part, of cultural literacy: possession of the knowledge needed to thrive in the modern world. By the late 1960’s, young Americans were weak in both areas, and national scores for successive classes of high school students continued to decline. How did this occur? Hirsch finds the root of the problem in the educational theories of France’s Jean-Jacques Rousseau and America’s John Dewey. Rousseau proposed that children should be allowed to develop and learn naturally, unrestrained by adult preferences, or nearly so. Dewey, the most influential figure in American education, adapted Rousseau’s ideas in promoting progressive education, a curriculum that assumed content (information) to be distinctly secondary to skill. Moreover, skill, which could be acquired in a few direct experiences, was considered to be readily transferable from one context to another. Thus, the hallmark of nineteenth century education, the memorization of information, including poetry and prose, sharply declined in American schools. Memorization was replaced by educational formalism as the dominant instructional method. From an examination of research on reading and memory, Hirsch concludes that the use of prototypes or schemata is crucial to comprehension and retention of what is read. He finds supporting data in research from the fields of cognitive psychology and artificial intelligence. Scholarly findings thus confirm what some say common sense suggests: that young people enjoy memorization, whether it deals with baseball statistics, popular music, or history. Reading and memorization are particularly important in the first years of school. Both processes are contextual, or reliant upon cultural literacy. Hirsch adds a new justification for cultural literacy: It is essential to the development and maintenance of nationhood. He argues that the larger governmental entities that make up the contemporary international community cannot maintain themselves by military or economic power alone. Regional dialects that sufficed when nations were small geographic units become divisive as national boundaries expand. Those nations that have thrived in the modern era have, in one manner or another, promoted a national culture, including a national language. Nations that failed to do this, such as China and Russia before their twentieth century revolutions, have been frustrated in attaining an effective national presence, despite often-superior physical resources. Hirsch does not conclude that a national language and culture, the precursors of cultural literacy, are sufficient in themselves to assure national integration, but he insists that they are essential to it.
In other words, nations of any considerable size are likely either never to coalesce effectively or to fragment without a national culture that can be transmitted to subsequent generations through cultural literacy. This literacy is one glue that holds together the diverse elements of a large nation and enables them to function cohesively. Yet is the advocacy of a body of knowledge that everyone in a nation should learn not a form of cultural elitism? Does it not promote the values in the United States of the dominant white Anglo-Saxon Protestant life-style to the detriment of cultural minorities, black and Hispanic ones, for example? Hirsch marshals two arguments to buttress his negative response to these questions. An examination of the writings and speeches of leaders of minority communities reveals that, perhaps unintentionally, these persons draw upon cultural literacy. The “I Have a Dream” speech delivered by Martin Luther King, Jr., during the 1963 March on Washington is full of references to the Bible and to the ideas of white American political figures. Moreover, in a pluralistic nation such as the United States, elements of minority cultures are, unavoidably, part of cultural literacy. In these and other ways, the literate culture is less elitist than is a subcultural vocabulary (ethnic community, youth culture, pop culture) because it is not restricted to any generational group, geographic region, or in-group. How is the lack of cultural literacy to be overcome, especially among younger Americans? Hirsch offers a two-step solution. Initially, a list of terms must be compiled. These words and phrases would constitute the shared vocabulary of cultural literacy, those terms that an American needs in order to be effective in the modern world. In collaboration with two colleagues, a historian and a physicist, Hirsch has drawn up a list of these terms, noting that the list will change over time, with some items being dropped and others added. It is desirable but certainly not necessary for one to be fully conversant with each term. In most situations, for example, knowing that Hamlet (c. 1600-1601) is a play by William Shakespeare is sufficient; the details of the play are not required. The second step is more complex. In the United States’ federal system of government, policy for precollegiate education is set by local school boards under guidelines enacted by each state government; there is no single authority that makes curricular decisions. As concern mounts over low performances by high school students, and as national testing spreads, however, Hirsch hopes that educators and state government officials will see the wisdom in his approach and adopt measures to implement his remedy for cultural illiteracy. He does not expect a single nationwide syllabus but hopes for a state-by-state revision of public school curriculum to include more factual information and traditional lore.
Last time we learned about the SAS Theorem, which is used as a shortcut to show that two triangles are congruent. In fact, this is not the only “shortcut” of this type — there are three others, which we will learn about today. They are called angle side angle (ASA), angle angle side (AAS), and side side side (SSS). This video gives an overview of the congruence theorems above, and also discusses a little bit why the other combinations (AAA and SSA) are not theorems. Figuring out which theorem applies can sometimes be a challenge — this is usually one step in a larger problem, but the following video focuses just on this step. It presents a collection of triangles and uses them to discuss when each of the theorems can be applied. In your homework, you will see that we often are not given all the information required — we have to rely on other facts (such as facts about parallel lines, and so on). This video gives an example of how we use these theorems in practice. Example: Showing that a point is the midpoint of a line segment. The final topic of the day is isosceles triangles, or triangles with two equal sides. Even though the definition speaks about sides, it is related to angles by the following theorems: - If two sides of a triangle are equal, then the angles opposite those sides are equal. - If two angles of a triangle are equal, then the sides opposite those angles are equal. Why is this true? We can prove it, using what we have learned about congruent triangles: What about having all three sides equal? Such a triangle is called equilateral. Once again, we can use what we have learned before to show the following: - If all three sides of a triangle are equal, then all three angles are also equal (“an equilateral triangle is equiangular”) - If all three angles of a triangle are equal, then all three sides are also equal (“an equiangular triangle is equilateral”) Many of the homework problems that you will encounter will require you to combine various facts from geometry and also algebra. Here is an example that involves parallel lines and isosceles triangles:
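Separately from the worked example referred to above (which is in the accompanying video), the congruence shortcuts can also be checked numerically. Below is a minimal Python sketch, added here purely as an illustration; the function names and triangles are my own. It tests SSS congruence by comparing sorted side lengths and confirms the isosceles base-angle theorem with the law of cosines.

```python
import math

def side_lengths(tri):
    """Return the three side lengths of a triangle given as [(x, y), (x, y), (x, y)]."""
    p0, p1, p2 = tri
    return (math.dist(p0, p1), math.dist(p1, p2), math.dist(p2, p0))

def congruent_sss(tri1, tri2, tol=1e-9):
    """SSS: two triangles are congruent if their sorted side lengths match."""
    return all(math.isclose(a, b, abs_tol=tol)
               for a, b in zip(sorted(side_lengths(tri1)), sorted(side_lengths(tri2))))

def angle_at(tri, i):
    """Interior angle (degrees) at vertex i, from the law of cosines."""
    a, b, c = side_lengths(tri)            # a = |P0P1|, b = |P1P2|, c = |P2P0|
    opposite = [b, c, a][i]                # side opposite vertex i
    adj1, adj2 = [(c, a), (a, b), (b, c)][i]
    return math.degrees(math.acos((adj1**2 + adj2**2 - opposite**2) / (2 * adj1 * adj2)))

# A 3-4-5 triangle and a shifted copy are congruent by SSS.
t1 = [(0, 0), (3, 0), (3, 4)]
t2 = [(1, 1), (4, 1), (4, 5)]
print(congruent_sss(t1, t2))                                   # True

# Isosceles check: two equal sides imply equal base angles.
iso = [(0, 0), (4, 0), (2, 3)]                                 # |P0P2| == |P1P2|
print(round(angle_at(iso, 0), 6) == round(angle_at(iso, 1), 6))  # True
```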
What is diabetes? Diabetes mellitus is a group of metabolic diseases characterized by high blood sugar (glucose) levels that result from defects in insulin secretion, or its action, or both. Diabetes mellitus, commonly referred to as diabetes (as it will be in this article), was first identified in the ancient world as a disease associated with “sweet urine” and excessive muscle loss. Elevated levels of blood glucose (hyperglycemia) lead to spillage of glucose into the urine, hence the term sweet urine. Normally, blood glucose levels are tightly controlled by insulin, a hormone produced by the pancreas. Insulin lowers the blood glucose level. When blood glucose rises (for example, after eating food), insulin is released from the pancreas to normalize the glucose level. In patients with diabetes, the absence of insulin, insufficient production of insulin, or a lack of response to insulin causes hyperglycemia. Diabetes is a chronic medical condition, meaning that although it can be controlled, it lasts a lifetime.
- SLO1: Develop the ability to analyze everyday experience from a sociological perspective (Chap 1-3) - SLO2: Demonstrate an understanding of the scientific nature of social research (Chap 4-7) This course is a study of human behavior in society, including social groups, culture, personality, social stratification, social change, collective behavior and social institutions. - 1: Understand how social forces influence people. - 2: Apply sociological imagination to recognize inequality of race, gender and class. - 3: Evaluate how institutions and organizations impact individuals. - 4: Understand the components of research. - 5: Develop an ability to interpret facts through critical thinking and the use of the inquiry method. - 6: Recognize the various methods of research. - 7: Compare and contrast the three sociological theories. Open Educational Resources - https://2012books.lardbucket.org/books/sociology-comprehensive-edition/index.html (resource for this class)
When one thinks about the genetic makeup of a human, or indeed any organism, it is natural to focus on the protein-coding genes. After all, that is the part of the genome that controls biochemical activities of cells and the processes of growth and development. But the protein-coding genes whose function is summarized in the "Central Dogma" (DNA ↔ mRNA → polypeptide) account for only about 3% of the DNA in a human cell. The genome also contains a large array of DNA sequences that have other functions (Figure 4-1) or that perhaps have no function at all. Some sequences represent the no-longer functional copies of duplicated genes, pseudogenes, produced at an earlier time in a species' history. In other cases, the regulatory functions of regions like microRNAs have only recently been recognized. Thus, the genome must be understood as a package of informational, historical, and noncoding DNA along with regions that hold secrets that researchers continue to unravel with the tools of molecular biology. Overview of the kinds of DNA sequences found in the human genome (after Stracham and Read, Garland Science, NCBI Bookshelf). For additional details, see Tables 4-1 and 4-2. (Reprinted with permission from Brooker RJ: Genetics: Analysis & Principles, 3rd ed. New York: McGraw-Hill, 2008.) In Chapter 1 we saw that the chromosomes of eukaryotes (Figure 4-2) are made up of DNA complexed with proteins to form a nucleoprotein structure. The DNA molecule in each chromosome is a single, very long double helix. If one took each of the 23 chromosomes in one haploid set of human chromosomes, removed the protein, and stretched the DNA molecules out end-to-end, they would reach about a meter in total length. On average, then, each human chromosome's DNA strand is about 4.3 cm long (100 cm/23 linkage groups) and can be composed of as many as several hundred million nucleotide base pairs. Within this molecule, some genes follow the diploid organization we have assumed to this point, with one copy of each gene per haploid genome. But many genes are actually found in multi-gene families that often have large numbers of copies, and in fact the number of copies can change over time. A typical eukaryotic chromosome showing some of the genetic structures and activities it can carry. (Reprinted with permission from Brooker RJ: Genetics: Analysis & Principles, 3rd ed. New York: McGraw-Hill, 2008.) The first step is to understand the kinds of sequences present in the genome and their functions, if any, for the cell or their use to researchers, which is not necessarily the same thing. We will then explore how this vast amount of DNA is packed within the tiny confines of a nucleus ...
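As a quick numerical aside on the figures quoted above, the short sketch below (not part of the original chapter) redoes the arithmetic; the 3.2 billion base-pair figure for a haploid human genome is an assumed approximate value added for illustration.

```python
# Rough scale of the haploid human genome (illustrative figures only).
total_dna_cm = 100          # ~1 metre of DNA per haploid set, stretched end to end
chromosomes = 23            # one haploid set
haploid_bp = 3.2e9          # assumed ~3.2 billion base pairs (approximate figure)

avg_length_cm = total_dna_cm / chromosomes
avg_bp = haploid_bp / chromosomes

print(f"Average DNA length per chromosome: {avg_length_cm:.1f} cm")   # ~4.3 cm
print(f"Average base pairs per chromosome: {avg_bp:.2e}")             # ~1.4e8
```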
The tremendous power of a river is diminished when it is fragmented into little streams. However, when the streams are channelled together, it can then develop a deep flow. So it is with learning. Children are free thinkers. They tend to make free associations in their minds, linking experience to knowledge through memory as they come across the “new” in life. This is particularly true for pre-school children and largely explains why these early years are so full of the wonder of discovery. Good primary schools can prolong this golden age of learning, but often when the child arrives at secondary school the river of learning is split into many unconnected channels. Students want to learn in a way that engages them by making connections between the theoretical and the practical, between one subject and another. They want to feel that learning is actually useful, fun, enchanting, inspiring and relevant. Our curriculum provides an academic framework that encourages pupils to embrace and understand the connections between traditional subjects and the real world, enabling them to become analytical, reflective, and creative thinkers who are able to realise the interconnectedness of all things. We teach through a combination of cross-subject learning and single-subject learning. This means part of the curriculum is learnt through topics that embrace a variety of subjects, while other aspects of the curriculum are taught through traditional classes to make sure students cover everything they need to prepare them for their GCSEs. They are not forced to think in boxes but are instead supported to think openly and freely, being encouraged to express their thoughts and feelings. Even when being assessed, students are given points for creativity, originality, interconnected thinking and expressing complex emotions and ideas. The cross-subject curriculum is planned by all the teachers together and taught as a whole. Each teacher is aware of what the others are doing, and every effort is made to connect the learning in one class to that of another. The teachers themselves broaden and deepen their practice by linking their subject to others, and the excitement of learning spreads all around the school. This collaborative approach to planning, teaching, assessing and evaluating allows us to be holistic in the true sense of the word, as it is genuinely a whole-school approach.
20 SES 09, Bringing You Inspiring Practice For Inclusive Education: Teaching diverse learners in (school)subjects / TdiverS (Part 1). Symposium to be continued in 20 SES 10 A.
Introduction: Germany signed the UN Convention and ratified it in 2009. Since then, almost all 16 federal states have changed their school legislation towards inclusive education. Still, about 5% of all pupils are sent to special schools because of their classification with so-called “special educational needs”. The quota differs between the states, ranging between 2% and 7%, which points to an inconsistent diagnostic process and differing criteria. Only a few of the school laws are congruent with the UN CRPD and the goal of developing an inclusive educational system. The UN Convention on the Rights of Persons with Disabilities presents the Federal Republic of Germany, and especially the individual states, with great challenges in the area of education. Recognising the human right to inclusive education, Article 24 UN CRPD states that no person may be excluded from mainstream education because of a disability; persons with a disability have equal rights to others living in the same community; they should have access to inclusive, high-quality and free education at primary and secondary level; and appropriate provisions must be taken in individual cases (GIHR, 2011; Merz-Atalik, 2013).
Methods: The new generation of teachers and teacher students have mainly been socialised in the selective educational system (the tripartite secondary school system) and do not have their own experience of learning and teaching in heterogeneous learning groups with an inclusive pedagogy and didactics. The project was therefore dedicated to collecting and validating, in a dialogic approach together with the classroom teachers, examples of inspiring practice of inclusive teaching in school subjects. In cooperation with two schools and three teachers in inclusive classes, we worked out the theoretical basics of an inclusive didactics and jointly developed the concrete teaching in the classroom. Over a period of one year we carried out videographic observations in the classroom and used the videos to reflect on the development of inclusive approaches in group discussions with the teachers.
Results: The videos were discussed and annotated together with the teachers. One exemplary video, from a physical education (PE) lesson in a secondary school with children with special needs, will be shown. The film shows modifications of movement and sports games as examples of teaching. Such modifications are not only important in inclusive settings, as the students in every class are quite diverse. The teacher explains, by reference to five different games, which strategies he uses so that all children are able to participate in his PE lessons.
Amrhein, Bettina et al. (2014): Fachdidaktik inklusiv. Auf der Suche nach Leitlinien für den Umgang mit Vielfalt in der Schule. Waxmann: Münster/New York.
Merz-Atalik, Kerstin (2013): Inklusion / Inklusiver Unterricht an der Gemeinschaftsschule. In: Bohl, Thorsten/Meissner, Sybille (Hrsg.): Expertise Gemeinschaftsschule: Forschungsergebnisse und Handlungsempfehlungen für Baden-Württemberg. Beltz: Weinheim und Basel, 61-76.
Merz-Atalik, Kerstin (2013): Der Forschungsauftrag aus der UN-Behindertenrechtskonvention – Probleme, Erkenntnisse und Perspektiven einer Inklusionsforschung im schulischen Feld. In: Trumpa, Silke/Seifried, Stefanie/Franz, Eva/Klauß, Theo (Hrsg.): Inklusive Bildung: Erkenntnisse und Konzepte aus Fachdidaktik und Sonderpädagogik. 24-46.
Prengel, Annedore (2012): Humane entwicklungsförderliche und leistungsförderliche Strukturen im inklusiven Unterricht. In: Moser, Vera (Hrsg.): Die inklusive Schule. Standards für die Umsetzung. Kohlhammer: Stuttgart, 175-183.
Reich, Kersten (2014): Inklusive Didaktik. Bausteine für eine inklusive Schule. Beltz: Weinheim/Basel.
Fact Sheet #7 – Feb 2004 John Leys, David Roget and Gupta Vadakattu, CSIRO Fallowing of a paddock in preparation for a crop is a major land management activity in the Mallee. Fallowing has been used to conserve moisture, mineralise nitrogen, control weeds and prepare the soil for sowing. Fallow lengths vary from 3 to 12 months, with approximately two thirds of fallows longer than 6 months. Long fallow management varies from full chemical fallow to full mechanical tillage ranging from 0 to 12 cultivations (with an average of 3.3 across the Mallee Sustainable Farming (MSF) Inc. area) when preparing for a wheat crop. Fallowing has been cited as a major cause of environmental problems because: • it can reduce the number of growing plants in a paddock, and therefore the amount of deep-draining soil water used; and • it can reduce groundcover and soil aggregation levels to below 50%, which greatly increases the risk of wind erosion in the Mallee.
Measles, mumps, and rubella are viral diseases that can be life-threatening in extreme cases. These three diseases are all very contagious and mostly affect infants, children and young adults. The measles, mumps, rubella vaccine (MMR vaccine) is one vaccine that protects against all three diseases. What is measles? Measles, also known as morbilli, English measles or rubeola (not to be confused with rubella or roseola) is an infection of the respiratory system caused by the measles virus. The measles virus, which is a paramyxovirus (from Greek para-, beyond, -myxo-, mucus or slime) of the Morbillivirus genus, normally grows in the cells that line the back of the throat and lungs. Complications from measles include ear infections, pneumonia, encephalitis and death. The disease is very rare in countries and areas where vaccination coverage is high; yet, it still kills an estimated 164,000 people per year. What is mumps? Mumps or epidemic parotitis is an infectious disease that leads to painful swelling of the salivary glands. The disease is caused by the mumps virus which is a paramyxovirus of the Rubulavirus genus. Mumps normally subsides by itself, while there is no specific treatment available. The outcome is normally good, but complications may include infection of the testes, spontaneous abortion in pregnant women, meningitis, pancreatitis and encephalitis. Before routine vaccination programs were introduced, mumps was a common illness among infants, children and young adults. Nowadays, it is very rare in the United States. What is rubella? Rubella, also known as German measles or three-day measles, is an infectious disease caused by the rubella virus, a togavirus of the genus Rubivirus, that leads to fever and rash. Rubella is derived from Latin, meaning “little red”. It mostly affects children and young adults. This disease is often mild and attacks often pass unnoticed, while typically not lasting longer than 3 days. However, infection of a pregnant woman can be very serious: if the woman is within the first 20 weeks of pregnancy, the child may be born with congenital rubella syndrome (CRS) which includes multiple serious ailments. Where does measles come from? The earliest report of measles came from an Arab physician of the 9th century who described the differences between measles and smallpox in his medical report. In 1757, Scottish physician Francis Home showed that measles was caused by an infectious agent present in patients’ blood. In 1954 the virus that causes measles was isolated in Boston, Massachusetts, by John F. Enders and Thomas C. Peebles. Where does mumps come from? The first descriptions of mumps were provided by Hippocrates in the 5th century BC, when the Greek physician described the typical symptoms of the disease, such as swelling of the face and throat, as well as the occasional swelling of the testes. Mumps outbreaks occurred widely during the 18th and 19th century around the world, especially in places where people lived in close spaces like military barracks, boarding schools or prisons. In 1934, Claud D. Johnson and Ernest W. Goodpasture discovered that mumps is caused by a virus they found in saliva samples. Where does rubella come from? Rubella was first described in 1740 century by Friedrich Hoffmann, a discovery later confirmed by other German physicians in the 1750s. In 1814, George de Maton first suggested that the disease is different from both measles and scarlet fever. 
As all these physicians were German, the disease was known as Rötheln (contemporary German Röteln, derived from the German word for red, rot), hence the common name of “German measles”. Henry Veale coined the term rubella in 1866. In 1914, Alfred Fabian Hess hypothesized that the disease is caused by a virus, which was confirmed in 1938. In 1962, the virus was first isolated by Paul D. Parkman and Thomas H. Weller. How is measles transmitted? Measles is spread to others from 4 days before to 4 days after the rash appears. In fact, measles is so contagious that if one person has it, 90% of the people close to that person who are not immune will also become infected. The disease is spread by contact with droplets from the nose, mouth, or throat of an infected person. The measles virus lives in the mucus of the nose and throat of the infected person. Direct contact with the virus typically occurs through sneezing and coughing, which sprays contaminated droplets into the air. These droplets can infiltrate other people’s noses and throats when they breathe, or when they touch their mouth or nose after touching a contaminated surface (where the virus can live for up to 2 hours). How is mumps transmitted? Mumps is spread through droplets of saliva or mucus from the mouth, nose, or throat of an infected person. This usually occurs when the infected person coughs, sneezes, or talks, and the other person breathes in the droplets containing the mumps virus. A person can also get infected if they share an item, like eating or drinking utensils, with an infected person. Moreover, a person can get infected if they touch contaminated surfaces and subsequently touch their mouth or nose. Most mumps transmission likely occurs before the salivary glands begin to swell and up to 5 days after the swelling begins. How is rubella transmitted? Rubella is spread through the air or by close contact. The rubella virus replicates in the nasopharyngeal mucosa and local lymph nodes of an infected person, and usually infects people who breathe in droplets sprayed into the air through coughing and sneezing. Rubella is most contagious from 1 week before to 1 week after the rash appears. Infants who have congenital rubella syndrome can shed the virus in urine and fluid from the nose and throat for a year or more and may pass the virus to people who have not been immunized. What are measles symptoms? Symptoms usually begin 8-12 days after exposure to the virus and include bloodshot eyes, cough, fever, photophobia, muscle pain, rash, conjunctivitis, runny nose, sore throat and Koplik’s spots (tiny white spots inside the mouth). The rash normally appears 3-5 days after the first symptoms of sickness and lasts about 4-7 days. It typically starts on the head and spreads down to other parts of the body. The rash can appear as flat, discolored areas (macules) and as solid, red, raised areas (papules), and it is itchy. What are mumps symptoms? Mumps symptoms include face pain, fever, headache, sore throat, swelling of the parotid glands (the largest salivary glands, located between the ear and the jaw), and swelling of the temples or jaw (temporomandibular area). Swelling of the parotid glands (parotitis), while the most recognizable mumps symptom, actually only affects 30-40% of cases. In male patients, symptoms may also include testicular lumps, testicle pain and scrotal swelling. Other patients may have nonspecific symptoms, and up to 20% of infected individuals may experience no symptoms at all.
Symptoms normally appear 2 to 3 weeks after exposure to the virus. What are rubella symptoms? Rubella symptoms normally include low-grade fever, respiratory problems, and most remarkably a rash of pink or light red spots that typically starts on the face and spreads downward. The rash occurs about two to three weeks after exposure to the virus. While children usually experience mild symptoms, adults may suffer from complications like arthritis, encephalitis, and neuritis. A woman who contracts rubella during pregnancy and passes the virus to the fetus can experience complications like a spontaneous abortion or premature birth. If the fetus survives, the child may have a number of birth defects like deafness, eye defects, cardiac defects, mental retardation and bone lesions. What is the MMR vaccine? The MMR vaccine is a combination vaccine that protects against the three viral diseases. The vaccine contains live, attenuated viruses of each disease, and is given via subcutaneous injection. The MMR vaccine cannot cause measles, mumps or rubella. MMR vaccination is recommended in 2 doses for children, with the first dose administered at ages 12-15 months and the second dose administered at ages 4-6 years. Nearly 10 million doses of MMR vaccine are distributed each year in the United States alone. The first measles vaccine was introduced in 1963, while the first mumps vaccine became available in 1967 and the first rubella vaccine in 1969. The three vaccines were combined in 1971. What are the MMR vaccine side effects? The MMR vaccine side effects are normally rare and very mild. They may include fever, mild rash and swelling of glands in the cheek or neck. Moderate symptoms may include seizure, pain and stiffness in the joints and low platelet count. The symptoms occur usually within 6-14 days after the shot, if at all. Some people who are allergic to any of the components in the vaccine might be at risk for experiencing a severe allergic reaction like deafness, long-term seizures, coma and permanent brain damage. It is also not clear whether these severe symptoms are caused by the vaccine. However, generally speaking, getting MMR vaccine is much safer than getting measles, mumps or rubella. The MMR-vaccine-autism relation has been extensively studied and there is no scientific evidence that MMR vaccine causes autism. Who should not get the MMR vaccine? Anybody with a life-threatening reaction to the antibiotic neomycin, or any other component of MMR vaccine, should not get the vaccine. People who had a life-threatening allergic reaction to a previous dose of MMR or MMRV vaccine should not get another dose. People who are sick at the time of the scheduled vaccination should wait. Pregnant women should not get MMR vaccine and should wait until after giving birth. Anyone with HIV/AIDS, under treatment with immunosuppressive drugs like steroids, who has cancer and/or is treated for cancer, who has had low platelet count, who has had a recent blood transfusion or received other blood products or who has received another vaccine within the last 4 weeks should consult with a doctor before getting the MMR vaccine.
Rearrangements (Alkyl shift) Definition: A rearrangement is a reaction in which an atom or bond moves or migrates, having been initially located at one site in a reactant molecule and ultimately located at a different site in a product molecule. Migrations of a carbon atom with its bonding pair of electrons are called alkyl shifts. Rearrangements (Alkyl shift) Explained: Rearrangement reactions form a broad class of organic reactions, and they can accompany many other reactions, such as substitution, addition, and elimination reactions. In the example above, we can see that if we have a quaternary carbon next to a secondary carbocation, the most common outcome is migration of an alkyl group (an alkyl shift). A carbocation is a carbon atom bearing a positive charge and only six valence electrons. The carbocation is electron deficient and needs two more electrons to complete its octet. This means that the carbocation has an empty p orbital, and an adjacent C-C bond can donate its pair of electrons into this orbital. In the transition state, there are partial bonds between the carbon being transferred and each of the two adjacent carbon atoms. Then, as one bond shortens and the other lengthens, we end up with a (more stable) tertiary carbocation. Reactions that go through carbocations can sometimes undergo rearrangements. The stability of carbocations increases as you go from primary to secondary to tertiary. A carbocation is also stabilized by resonance, and wherever possible an unstable carbocation will rearrange to a more stable carbocation. Take a look at this example of an SN1 reaction with an alkyl shift. In the first step, we have loss of the leaving group and we get a secondary carbocation. But, as in the first example, a quaternary carbon is adjacent to the secondary carbocation, and this is a good way to get a more stable carbocation. The tertiary carbocation is then attacked by the nucleophile (water in this case), which is then deprotonated to give the neutral alcohol. In the following example, we can see that the methyl group does not always have to be the group that moves. Even though the CH3 could potentially migrate in this case, it is more favorable to shift one of the alkyl groups in the ring, which leads to ring expansion and the formation of a less strained, five-membered ring.
The CBSE Class 9 Maths sample papers for SA2 are a way for students to check how well prepared they are for the Summative Assessment 2 exam. The SA2 exam does not cover the complete syllabus, so students can score good marks relatively easily, but that requires daily practice. They also have to be aware of the types of questions asked in the exam. To help them, we have provided the CBSE sample paper for Class 9 Maths SA2 here. CBSE Class 9 Maths Sample Papers SA2 The CBSE Class 9 Maths SA2 sample papers are designed by experts. Solving different types of questions will give students good practice before the exam. They can download this sample paper for free from the link shown below. Students are highly advised to solve the sample papers for fruitful results. The CBSE Class 9 Maths Sample Papers SA2 give a good idea of the actual question paper. Students should start solving the sample papers once they have completed their syllabus. It will help them manage their time efficiently. Moreover, they get to know the important topics of the maths syllabus. We hope students have found this information on “CBSE Class 9 Maths Sample Papers SA2” useful for their exam preparation. Keep learning and stay tuned for further updates on CBSE and other competitive exams. Download BYJU’S App and subscribe to the YouTube channel to access interactive Maths and Science videos.
The difference between radio waves and microwaves is often blurred. Within the telecommunications industry, for example, all microwaves are generally called radio waves. Note that both are types of electromagnetic radiation. However, radio waves and microwaves have some distinctive technical characteristics or properties and specific applications that give them their respective places in the electromagnetic spectrum. Radio waves vs. microwaves: A comparison The following are the key differences between radio waves and microwaves: 1. Frequency and Wavelength: Electromagnetic radiation with frequencies ranging from 3 kHz to 300 GHz is generally classified as radio waves. However, radio waves with frequencies between 300 MHz and 300 GHz, or those falling within the ultra-high frequency (UHF) to extremely high frequency (EHF) range, are technically classified as microwaves. Hence, microwaves are essentially radio waves with higher frequencies. The wavelength of radio waves ranges from between 100 kilometers and 100 meters for low to medium frequencies, down to between 10 meters and 1 millimeter for high frequencies. Within this range, microwaves have wavelengths between 1 meter and 1 millimeter. From a technical standpoint, “radio waves” have longer wavelengths while “microwaves” have shorter wavelengths. To understand the difference between radio waves and microwaves further, it is important to note that as the frequency of electromagnetic radiation increases, its wavelength decreases. The energy of the radiation also increases as the frequency goes higher. Hence, the low-frequency to mid-frequency waves commonly referred to as radio waves essentially have longer wavelengths, while microwaves have higher frequencies and shorter wavelengths. Both radio waves and microwaves are used in wireless communication. However, radio waves are used for long-distance terrestrial communication due to their longer wavelengths. On the other hand, microwaves are used for short-distance communication, such as in mobile broadband and GSM standards, WLAN via Wi-Fi networks, and wireless personal area network protocols such as Bluetooth and Wi-Fi Direct. Note that microwaves are also used in long-distance communication, especially through relay links. Radio waves at lower frequencies (roughly below 30 MHz) can be reflected by the ionosphere. Microwaves, however, pass through the ionosphere because of their shorter wavelengths, making them suitable for satellite communication between the ground and stations beyond the atmosphere. Another essential difference between microwaves and lower-frequency radio waves centers on applications beyond wireless communication. Because of their higher frequencies, microwaves carry more energy. This is demonstrated in the use of microwaves for heating, as in microwave ovens, as well as in wireless power transmission and directed-energy weaponry. A note on the difference between radio waves and microwaves The term “radio waves” has historically been applied to all types of electromagnetic radiation used in wireless communication, particularly in radio broadcasting and telecommunication. The prefix “micro” was later added to the term “wave” to refer to specific radio waves with higher frequencies and shorter wavelengths, thus giving birth to the term “microwave.” Nevertheless, these two terms are not mutually exclusive. Microwaves are essentially radio waves.
This ambiguity warrants the use of qualifiers, particularly by mentioning the frequency or frequency range instead of simply calling a particular type of electromagnetic radiation either a “radio wave” or a “microwave.”
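To make the inverse relationship between frequency and wavelength concrete, here is a short Python sketch (added for illustration, not part of the original article) that computes wavelength as the speed of light divided by frequency; the example frequencies are typical band values rather than figures taken from the text.

```python
# Wavelength = speed of light / frequency (in vacuum).
C = 299_792_458  # speed of light, m/s

def wavelength_m(frequency_hz):
    return C / frequency_hz

examples = {
    "AM broadcast (1 MHz)": 1e6,
    "FM broadcast (100 MHz)": 100e6,
    "Wi-Fi (2.4 GHz)": 2.4e9,
    "Microwave oven (2.45 GHz)": 2.45e9,
    "Satellite Ka-band (30 GHz)": 30e9,
}

for name, f in examples.items():
    print(f"{name:28s} -> wavelength {wavelength_m(f):10.4f} m")
# The output shows the trend the article describes: as frequency rises,
# wavelength falls from hundreds of metres down to about a centimetre.
```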
Arc Welding 101 What is arc welding? Arc welding fuses parts permanently together by using a power supply to generate an electric arc between an electrode mounted in a torch and the metal workpiece. The arc is an electric current flowing between two electrodes through an ionized column of gas, and it produces heat intense enough to melt the metal. The heat is produced between a negatively charged cathode and a positively charged anode, as the negative and positive ions are bounced off each other in the plasma column at an accelerated rate. The electrode is typically a specially prepared rod or wire that not only conducts the current but also melts and supplies filler metal to the joint. The arc is formed between the actual work and an electrode (stick or wire) that is guided along the joint. It provides the heat needed to melt the electrode and the base metal, and sometimes also supplies the means to transport the molten metal from the tip of the electrode to the work. This intense heat (around 6500ºF at the tip) is applied at the joint between two parts; the base metal there is melted and caused to intermix with an intermediate molten filler metal. The heat melts both the base metal and the electrode, creating molten droplets that are detached and transported to the work through the arc column when the electrode is consumable. In carbon or TIG welding there are no molten droplets to be forced across the gap and onto the work, so filler metal is melted into the joint from a separate rod or wire. Regardless, this pool of molten metal is sometimes called a "crater." The crater solidifies behind the electrode (see "slag" below) as it is moved along the joint. The result is a metallurgical fusion bond that produces a weldment with a strength similar to that of the metal of the parts. This is different from non-fusion joining processes, where the mechanical and physical properties of the base materials cannot be duplicated at the joint. This metal transfer can occur in one of two ways: Surface Tension Transfer, where a drop of molten metal touches the molten metal pool and is drawn into it by surface tension, or Spray Arc, where the drop is ejected from the molten metal at the electrode tip by an electric pinch propelling it to the molten pool (great for overhead welding). Powering Arc Welding: The basic arc-welding circuit includes a power source. There must be an ionized path to conduct electricity across a gap and then some sort of ignition in order to start the arc. This is usually achieved by supplying an initial voltage high enough to cause a discharge, or by touching the electrode to the work and then withdrawing it as the contact area becomes heated. When welding, one can use direct current (DC) with the electrode either positive or negative, or alternating current (AC). The choice of current and polarity depends on the process, the type of electrode, the arc atmosphere, and the metal being welded. The power source is fitted with the necessary controls and is connected by a work cable to the workpiece. Arc shielding is a necessary part of arc welding, as any metal at high temperatures is more reactive to chemical elements in the air. So the process includes a means of covering the arc and molten pool with a protective shield of gas. Shielding gas is used while the torch is joining parts together in order to prevent contamination. This also helps ensure the strength of welds and minimizes post-weld cleanup of a part.
Robotic Arc Welding: Since the 1980s, welding automation has become a much more robust and mature process. It has advanced quickly and has become extremely effective at meeting tedious production demands at fast rates, while also being economical, efficient and dependable, and further protecting workers from dangerous tasks. 6-axis robot systems are able to do more than mimic a human arm’s movement within a cell; they allow the torch to be placed exactly where necessary in order to work in the most efficient, repeatable position. In just about every industry, big or small, welding is used as a principal means of fabrication and repair for practically any application. Robotic welding includes processes like arc welding, MIG welding, TIG welding, laser welding and spot welding. These benefits are causing the welding robot market to grow faster than any other industrial robot segment. This is evident in the estimate that the automated welding and accessories division alone will be worth $1.9 billion in 2017.
- Speed: Robotic welding systems are masters at getting the job done quickly. Robots don’t require lunch breaks or vacation time; they will work continuously without interruption, 24/7. There is less handling compared to a manual weld cycle, so the robot achieves much higher levels of arc-on time and will typically increase output by a factor of two to four. Subsequently you will see a dramatic increase in your throughput and productivity that improves your company’s overall efficiency.
- Accuracy: Robotic systems have controls that enable them to make fewer errors than a manual welding system, which helps to decrease wasted material and time. There are also designs that help reach difficult or slim places, such as a slender robotic arm.
- Consistency: These controls and the resulting accuracy produce weldments that are extremely consistent. This increases confidence in product quality, every single time.
- Safety: Welding can be a very dangerous application, as it deals with flash, fumes, sparks and heat. When you automate, you are able to protect workers because the robots can endure these hazards. Companies can also reduce the risk of their employees claiming compensation for being put in harm’s way by the hazardous working environment.
- Cost Reduction: The cost of manual welding can be steep, as it requires time, skill, and concentration. Robotic welding takes less time and allows you to cut the costs of direct labor, conserve energy (fewer start-ups), and conserve materials. Insurance and accident-related costs are also reduced. The cost savings that robot welding brings can help companies be more competitive with low-cost manufacturing countries in Eastern Europe or China.
- Quality: A robot has excellent path-following accuracy and can present a welding gun at the correct welding angle, speed, and distance with very high repeatability (± 0.04 mm). This allows optimum welding conditions to be used for each and every joint, which results in consistent high-quality output, 24/7, with reduced costs for rework, scrap, wire consumption, or removal of weld spatter.
- Labor: Manual welding will always be required to some degree, but it brings its own set of challenges… training, recruitment, and dealing with a high turnover rate all come at high costs.
Robotic automation brings much more stability and dependability to your company’s welding operation.
- Flexibility: The robot can be used for many different welding processes, such as MIG, TIG, SMAW, FCAW, SAW, and PAW. This also allows companies to produce a variety of products with a very quick turnaround.
- Floor space: Compared to the same output from manual welding bays, the robot requires less floor space.
Get Started with Robotic Arc Welding Today: The demand for arc welding automation systems is rising as companies become aware of the low availability of qualified welders, paralleled by the increasing competitive demands of the global market. After viewing all of the benefits and looking at your projected ROI, you shouldn't need any more convincing. Don't waste any more time; RobotWorx is ready to find you the perfect arc welding automation system. We have over 25 years of experience, especially with arc welding automation. We have many different new and used arc welding robots and workcells in stock, ready for integration.
Since we were talking about grocery shopping this week, I thought it would be nice to do something fun with identifying and counting fruits. Give them simple and clear instructions, and it will be such fun and a pleasure to have them around. Give them each a shopping list with very specific requirements, such as 5 ripe tomatoes (which means soft), 1 litre of milk, 500g of broccoli, 6 unripe (meaning hard) pears (learning vocabulary)… you get the idea. So are you ready for shopping with your preschoolers?
- Step 1 Plan your shopping day, i.e. Wednesday.
- Step 2 Plan the time to leave for the grocery shopping. Let them know in advance the time you are leaving. Point out the long and short hands of the clock and let your child know that when the short hand points to a certain number they have to come and get you.
- Step 3 Get your child to help you with the shopping list. Let them know the things that you would like on the shopping list. For example, give your children the responsibility for getting apples and milk. If your child is not reading yet, use old catalogues, cut out pictures of the apples and milk and paste them on the shopping list. If you would like a richer exercise, you can be a bit more specific; you can say SIX RED APPLES, for example. So your child will cut out 6 apples he/she can find in the catalogue; if your child is able to write and identify the number 6, then he/she (or you) can write down 6 next to the apples. If not, get your child to cut out six apples or draw apples to make 6. As for the milk, do the same exercise as above. If you want to create more of a challenge, state that you want 2 litres of milk but you want it in a bottle and not a carton. Note you are stating “volume – 2 litres” and they have to identify “bottle”.
So let’s sum up what your child will learn from this exercise:
- Step 1 – Your child will learn the days of the week.
- Step 2 – Your child will learn about time and reading numbers.
- Step 3 – Your child will learn about quantity, identifying numbers, colour, volume, and the terms carton and bottle.
You can use this concept with your older primary school children. Change the task and make it a bit more challenging. They will love the challenge… and it’s a fun and very inexpensive outing! By involving your children, you can create connections with them through this fun challenge! One more very important thing: always finish off a challenge with a REWARD! My reward is that they can each choose ONE item that is no more than $2 from the shelf (or save the money as pocket-money). Happy counting and learn through FUN!
Alpine tundra is located around the world on high altitude mountains above the tree line. These windswept areas are characterised by an abundance of barren rock and thin soils. Frequent drops in temperature to freezing, followed by thaws, cause the rocks to break up. In the well-drained soil, the growing season is only 180 days, and temperatures drop below freezing at night. There are very few wildlife species found in the alpine tundra, and these include: Mammals – pikas, marmots, mountain goats, sheep, elk. Birds – ptarmigan, kea parrots and other grouselike birds. Insects – springtails, beetles, grasshoppers, butterflies. Yellow-bellied marmots and ground squirrels hibernate for eight months of the year. Pikas (which are related to rabbits) hide from the weather under rocks, storing food in piles to graze on during the cold winters. They have evolved small ears and tails to protect them from the harsh winter. Ptarmigans manage to survive year-round, whereas the New Zealand kea parrot will move to lower ground. Kea are able to eat leaf buds, roots, berries, fruit, seeds, blossoms, nectar and insects. Mountain goats, elk and bears move further down the mountain to take advantage of the available food supply there. Alpine plants are generally dark in colour in order to absorb more heat, and they grow low to the ground. Only low-growing shrubs, cushion plants and small forbs survive here, producing large bright flowers (but only every 4-5 years), as well as lush meadows of sedges and grasses. Most are slow-growing and long-lived perennials. Ninety percent of the total plant structure is in the roots, where nutrients and energy are stored. Some plants have a waxy coating or hairs in order to retain heat and water. There is a direct relationship to the duration of snow cover (e.g. dense willow bushes will survive on the lee side of ridges, where a deep snow cover protects the fragile buds during the winter). A small difference in elevation will produce special plants that are adapted to these differences.
During the Fourth Crusade in 1204, much of Constantinople and its priceless architecture was destroyed by the crusading Christian armies from western Europe. Following a lengthy siege, the Latin Christian armies broke through the city’s defences and sacked some of its most important sites, including the Hagia Sophia and Justinian’s tomb. Although the attackers were fellow Christians, the cultural differences that had developed during the long separation of the eastern and western churches meant that the sacking of the city was especially brutal. After Constantinople briefly became the centre of the Latin Empire, its prized monuments and buildings were largely neglected because funds were severely limited and had to be spent on the tenuous defence of the crusader empire. The emperor of Nicaea, John III, reportedly sent his own funds to the Latin Crusaders to ensure that they did not further desecrate important churches in their search for saleable materials.
How commercial rooftop solar works Solar panels are primarily made from silicon, with other materials added to enable an electrical reaction when in contact with sunlight. This reaction creates an electrical charge in the form of DC (direct current) power. This is why solar power systems are also known as PV – photovoltaic, which essentially means ‘sunlight-generated electricity’. Panels are fixed to the roof using rails and clamps, which predominantly make use of existing screw or nail holes and are sealed with silicone for waterproofing. This power is channelled via cables to an inverter, typically located in the building but sometimes on the roof, which converts the electricity to AC (alternating current), which is what appliances within the building run off. The AC power is fed from the inverter to the main switchboard and out into the building. The inverter is the brains of the system and records all solar generation. All systems come with optional monitoring software that allows you to check generation at a particular time, across periods, and from inception. The switchboard prioritises solar-generated power over grid power. If the amount of solar electricity being generated exceeds the level of consumption (usage) in the building at that moment, it is fed back into the network (‘the grid’) and the power retailer pays a fixed amount for this (in New Zealand most power companies pay 8c/kWh, but you’d need to check with your retailer first or confirm with us).
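The self-consumption versus export logic described above can be sketched in a few lines of Python. This is an illustration only, not part of the original page; the 25 c/kWh retail rate and the function name are assumptions, while the 8 c/kWh feed-in figure comes from the text.

```python
def hourly_value(generation_kwh, consumption_kwh,
                 retail_rate=0.25, feed_in_rate=0.08):
    """Value (NZ$) of one hour of solar generation.

    Solar is used on site first (offsetting power bought at the retail rate);
    any surplus is exported to the grid and credited at the feed-in rate.
    """
    self_consumed = min(generation_kwh, consumption_kwh)
    exported = max(generation_kwh - consumption_kwh, 0.0)
    return self_consumed * retail_rate + exported * feed_in_rate

# Example: a midday hour where the array produces 40 kWh but the building
# only uses 25 kWh -- 25 kWh offsets purchases, 15 kWh is exported.
print(round(hourly_value(40, 25), 2))   # 25*0.25 + 15*0.08 = 7.45
```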
DNA computing, the performing of computations using biological molecules, rather than traditional silicon chips. The idea that individual molecules (or even atoms) could be used for computation dates to 1959, when American physicist Richard Feynman presented his ideas on nanotechnology. However, DNA computing was not physically realized until 1994, when American computer scientist Leonard Adleman showed how molecules could be used to solve a computational problem. Solving problems with DNA molecules A computation may be thought of as the execution of an algorithm, which itself may be defined as a step-by-step list of well-defined instructions that takes some input, processes it, and produces a result. In DNA computing, information is represented using the four-character genetic alphabet (A [adenine], G [guanine], C [cytosine], and T [thymine]), rather than the binary alphabet (1 and 0) used by traditional computers. This is achievable because short DNA molecules of any arbitrary sequence may be synthesized to order. An algorithm’s input is therefore represented (in the simplest case) by DNA molecules with specific sequences, the instructions are carried out by laboratory operations on the molecules (such as sorting them according to length or chopping strands containing a certain subsequence), and the result is defined as some property of the final set of molecules (such as the presence or absence of a specific sequence). Adleman’s experiment involved finding a route through a network of “towns” (labeled “1” to “7”) connected by one-way “roads.” The problem specifies that the route must start and end at specific towns and visit each town only once. (This is known to mathematicians as the Hamiltonian path problem, a cousin of the better-known traveling salesman problem.) Adleman took advantage of the Watson-Crick complementarity property of DNA—A and T stick together in pairwise fashion, as do G and C (so the sequence AGCT would stick perfectly to TCGA). He designed short strands of DNA to represent towns and roads such that the road strands stuck the town strands together, forming sequences of towns that represented routes (such as the actual solution, which happened to be “1234567”). Most such sequences represented incorrect answers to the problem (“12324” visits a town more than once, and “1234” fails to visit every town), but Adleman used enough DNA to be reasonably sure that the correct answer would be represented in his initial pot of strands. The problem was then to extract this unique solution. He achieved this by first greatly amplifying (using a method known as polymerase chain reaction [PCR]) only those sequences that started and ended at the right towns. He then sorted the set of strands by length (using a technique called gel electrophoresis) to ensure that he retained only strands of the correct length. Finally, he repeatedly used a molecular “fishing rod” (affinity purification) to ensure that each town in turn was represented in the candidate sequences. The strands Adleman was left with were then sequenced to reveal the solution to the problem. Although Adleman sought only to establish the feasibility of computing with molecules, soon after its publication his experiment was presented by some as the start of a competition between DNA-based computers and their silicon counterparts. Some people believed that molecular computers could one day solve problems that would cause existing machines to struggle, due to the inherent massive parallelism of biology. 
Because a small drop of water can contain trillions of DNA strands and because biological operations act on all of them—effectively—in parallel (as opposed to one at a time), it was argued that one day DNA computers could represent (and solve) difficult problems that were beyond the scope of “normal” computers. However, in most difficult problems the number of possible solutions grows exponentially with the size of the problem (for example, the number of solutions might double for every town added). This means that even relatively small problems would require unmanageable volumes of DNA (on the order of large bathtubs) in order to represent all possible answers. Adleman’s experiment was significant because it performed small-scale computations with biological molecules. More importantly, however, it opened up the possibility of directly programmed biochemical reactions. Biochemistry-based information technology Programmable information chemistry will allow the building of new types of biochemical systems that can sense their own surroundings, act on decisions, and perhaps even communicate with other similar forms. Although chemical reactions occur at the nanoscale, so-called biochemistry-based information technology (bio/chem IT) is distinct from nanotechnology, due to the reliance of the former on relatively large-scale molecular systems. Although contemporary bio/chem IT uses many different types of (bio) chemical systems, early work on programmable molecular systems was largely based on DNA. American biochemist Nadrian Seeman was an early pioneer of DNA-based nanotechnology, which originally used this particular molecule purely as a nanoscale “scaffold” for the manipulation and control of other molecules. American computer scientist Erik Winfree worked with Seeman to show how two-dimensional “sheets” of DNA-based “tiles” (effectively rectangles made up of interwoven DNA strands) could self-assemble into larger structures. Winfree, together with his student Paul Rothemund, then showed how these tiles could be designed such that the process of self-assembly could implement a specific computation. Rothemund later extended this work with his study of “DNA origami,” in which a single strand of DNA is folded multiple times into a two-dimensional shape, aided by shorter strands that act as “staples.” Other experiments have shown that basic computations may be executed using a number of different building blocks (for example, simple molecular “machines” that use a combination of DNA and protein-based enzymes). By harnessing the power of molecules, new forms of information-processing technology are possible that are evolvable, self-replicating, self-repairing, and responsive. The possible applications of this emerging technology will have an impact on many areas, including intelligent medical diagnostics and drug delivery, tissue engineering, energy, and the environment.
What: The telegraph key, or sending key, was an important part of the telegraph system. When the telegrapher pressed the key, an electrical impulse was sent through wires to the next station. The system used a mixture of short impulses called dots and long impulses called dashes. Each letter of the alphabet was assigned its own combination of dots and dashes, called Morse Code. For example, the letter A is a dot and a dash (A · —). The short and long sounds were then heard by the next operator on their sounder. The receiving telegrapher had to listen carefully and write down the message they heard. Take a look inside the telegraph room in Winsor Castle and learn more about the history of the Deseret Telegraph Company at Pipe Spring.

Who: The pioneers used the telegraph system to send and receive messages. This was an important communication tool for people living so far away from others.
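As a rough illustration of the dot-and-dash encoding described above, the short sketch below turns a message into Morse code. The table is deliberately abbreviated to a handful of letters, and it uses International Morse codes; the Deseret Telegraph operators would most likely have used the American Morse variant, which differs for some letters (the code for A shown in the text, a dot and a dash, is the same in both).

```python
# Abbreviated Morse table -- just enough letters for the demo message below.
MORSE = {
    "A": ".-", "E": ".", "G": "--.", "I": "..",
    "N": "-.", "P": ".--.", "R": ".-.", "S": "...",
}

def encode(message):
    """Turn text into dots and dashes; '/' marks the gap between words."""
    words = message.upper().split()
    return " / ".join(" ".join(MORSE[letter] for letter in word) for word in words)

print(encode("pipe spring"))
# .--. .. .--. . / ... .--. .-. .. -. --.
```

A real key operator memorized the full alphabet and numerals and sent the impulses by hand; the receiving operator translated the clicks of the sounder back into letters in exactly the reverse direction.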
Economy of the German Democratic Republic

Like other states which were members of the Comecon between 1949 and 1991, the German Democratic Republic (GDR – East Germany) had a centrally planned economy (CPE) similar to the one in the former Soviet Union (in contrast to the market economies or mixed economies of capitalist states). The state established production targets and prices and also allocated resources, codifying these decisions in comprehensive plans. The means of production were almost entirely state-owned. The East German economy was the Soviet Bloc's largest economy and one of the most stable economies in the "Second World" until Communism in Eastern Europe started to collapse from 1990 and the Soviet Union disintegrated in 1991.

Each occupation power assumed authority in its respective zone by June 1945. The Allied powers originally pursued a common German policy, focused on denazification and demilitarization in preparation for the restoration of a democratic German nation-state. Over time, however, the western zones and the Soviet zone drifted apart economically, not least because of the Soviets' much greater use of disassembly of German industry under its control as a form of reparations.

Military industries and those owned by the state, by Nazi party members, and by war criminals were confiscated. These industries amounted to approximately 60% of total industrial production in the Soviet zone. Most heavy industry (constituting 20% of total production) was claimed by the Soviet Union as reparations, and Soviet joint stock companies (German: Sowjetische Aktiengesellschaften, or SAG) were formed. The remaining confiscated industrial property was nationalized, leaving 40% of total industrial production to private enterprise. The reparations seriously hindered the ability of East Germany to compete with West Germany economically. According to one disputed estimate, the 100 billion worth of reparations taken from the East, had it been invested in the East German economy at East Germany's average 18% rate of return on investments, would have compounded to give East Germans a per-capita income 15 times the level of West Germans.

Retail trade in the east was slowly being absorbed by two state-controlled organizations (Konsum and Handelsorganisation, or HO) which were given special preferences. On 2 January 1949, a two-year plan of economic reconstruction was launched, aiming at 81% of the 1936 production level and, by cutting 30 per cent off production costs, hoping to raise the general wage level by 12% to 15%. The plan also called for an increase in the daily food ration from 1,500 to 2,000 calories.

There were several reasons behind the backward economic situation in East Germany. Its lack of essential natural resources and the separation from its traditional West German market were probably the most important.
Furthermore, while large sums had been poured into West Germany, especially by the United States, the Soviet Union not only put nothing into the economy of its zone but actually took out large amounts in reparations and occupation costs (c. 6 billion marks per year). It was estimated that by 1949, 100 per cent of the automotive, between 90% and 100% of the chemical, and 93% of the fuel industries were in Soviet hands. By the end of 1950 East Germany had paid $3.7 bn of the Soviet Union's $10 bn reparations demand. After the death of Joseph Stalin and the June 1953 uprising, the Soviet Union began to return the East German factories it had taken as reparations.

The following case is typical of the economic relationship between the two Eastern Bloc countries. According to documents supplied by a shipbuilding official who escaped from the East, ships ordered by the Soviet Union for 1954 cost $148 million to build, but the Soviets paid only $46 million for them; the difference of $102 million was absorbed by the GDR.

The agrarian reform ("Bodenreform") expropriated all land belonging to former Nazis and war criminals and generally limited ownership to 1 km². Some 500 Junker estates were converted into collective people's farms (German: Landwirtschaftliche Produktionsgenossenschaft, or LPG), and more than 30,000 km² were distributed among 500,000 peasant farmers, agricultural laborers, and refugees. Compensation was paid only to active anti-Nazis. In September 1947 the Soviet military administration announced the completion of agrarian reform throughout the Soviet zone. This report listed 12,355 estates, totaling 6,000,000 acres (24,000 km²), which had been seized and redistributed to 119,000 families of landless farmers, 83,000 refugee families, and some 300,000 in other categories. State farms were also established, called Volkseigenes Gut ("People's Owned Property").

Introducing the planned economy

The Third Party Congress of the Socialist Unity Party of Germany (Sozialistische Einheitspartei Deutschlands—SED) convened in July 1950 and emphasized industrial progress. The industrial sector, employing 40% of the working population, was subjected to further nationalization, which resulted in the formation of the "People's Enterprises" (German: Volkseigene Betriebe, or VEB). These enterprises incorporated 75% of the industrial sector. The First Five Year Plan (1951–55) introduced centralized state planning; it stressed high production quotas for heavy industry and increased labor productivity. The pressures of the plan caused an exodus of GDR citizens to West Germany.

The second SED Party Conference (less important than a party congress) convened from 9–12 July 1952. 1,565 delegates, 494 guest delegates, and over 2,500 guests from the GDR and from many countries in the world participated in it. At the conference a new economic policy was adopted, "Planned Construction of Socialism". The plan called for strengthening the state-owned sector of the economy. Further goals were to implement the principles of uniform socialist planning and to use the economic laws of socialism systematically.

Joseph Stalin died in March 1953. In June 1953, the SED, hoping to give workers an improved standard of living, announced the New Course, which replaced the "Planned Construction of Socialism." The New Course in the GDR was based on the economic policy initiated by Georgi Malenkov in the Soviet Union.
Malenkov's policy, which aimed at improvement in the standard of living, stressed a shift in investment toward light industry and trade and a greater availability of consumer goods. The SED, in addition to shifting emphasis from heavy industry to consumer goods, initiated a program for alleviating economic hardships. This led to a reduction of delivery quotas and taxes, the availability of state loans to private business, and an increase in the allocation of production material.

While the New Course increased the availability of consumer goods, there were still high production quotas. When work quotas were raised in 1953, it led to the uprising of June 1953. Strikes and demonstrations occurred in major industrial centers, and workers demanded economic reforms. The Volkspolizei and the Soviet Army suppressed the uprising, in which approximately 100 participants were killed.

Economic growth and improvement

In 1956, at the 20th Congress of the Communist Party of the Soviet Union, First Secretary Nikita Khrushchev repudiated Stalinism. Around this time, an academic intelligentsia within the SED leadership demanded reform. To this end, Wolfgang Harich issued a platform advocating radical changes in the GDR. In late 1956, he and his associates were quickly purged from the SED ranks and imprisoned.

An SED party plenum in July 1956 confirmed Walter Ulbricht's leadership and presented the Second Five-Year Plan (1956–60). The plan employed the slogan "modernization, mechanization, and automation" to emphasize the new focus on technological progress. At the plenum, the regime announced its intention to develop nuclear energy, and the first nuclear reactor in the GDR was activated in 1957. The government increased industrial production quotas by 55% and renewed its emphasis on heavy industry.

The Second Five-Year Plan committed the GDR to accelerated efforts toward agricultural collectivization and nationalization and completion of the nationalization of the industrial sector. By 1958 the agricultural sector still consisted primarily of the 750,000 privately owned farms that comprised 70% of all arable land; only 6,000 LPGs had been formed. In 1958–59 the SED placed quotas on private farmers and sent teams to villages in an effort to encourage voluntary collectivization. In November and December 1959 some law-breaking farmers were arrested by the Stasi. By mid-1960, nearly 85% of all arable land was incorporated in more than 19,000 LPGs; state farms comprised another 6%. By 1961 the socialist sector produced 90% of the GDR's agricultural products.

An extensive economic management reform by the SED in February 1958 included the transfer of a large number of industrial ministries to the State Planning Commission. In order to accelerate the nationalization of industry, the SED offered entrepreneurs 50-percent partnership incentives for transforming their firms into VEBs. At the close of 1960, private enterprise controlled only 9% of total industrial production. Production Cooperatives (Produktionsgenossenschaften, or PGs) incorporated one-third of the artisan sector during 1960–61, a rise from 6% in 1958.

The Second Five-Year Plan encountered difficulties, and the government replaced it with the Seven-Year Plan (1959–65). The new plan aimed at achieving West Germany's per capita production by the end of 1961, set higher production quotas, and called for an 85% increase in labor productivity.
Emigration again increased, totaling 143,000 in 1959 and 199,000 in 1960. The majority of the emigrants were white-collar workers, and 50% were under 25 years of age. The labor drain exceeded a total of 2.5 million citizens between 1949 and 1961.

New Economic System (1963–1968)

The annual industrial growth rate declined steadily after 1959. The Soviet Union therefore recommended that East Germany implement the reforms of Soviet economist Evsei Liberman, an advocate of the principle of profitability and other market principles for communist economies. In 1963 Ulbricht adapted Liberman's theories and introduced the New Economic System (NES), an economic reform program providing for some decentralization in decision-making and the consideration of market and performance criteria. The NES aimed at creating an efficient economic system and transforming the GDR into a leading industrial nation.

Under the NES, the task of establishing future economic development was assigned to central planning. Decentralization involved the partial transfer of decision-making authority from the central State Planning Commission and National Economic Council to the Associations of People's Enterprises (Vereinigungen Volkseigener Betriebe, or VVBs), parent organizations intended to promote specialization within the same areas of production. The central planning authorities set overall production goals, but each VVB determined its own internal financing, utilization of technology, and allocation of manpower and resources. As intermediary bodies, the VVBs also functioned to synthesize information and recommendations from the VEBs. The NES stipulated that production decisions be made on the basis of profitability, that salaries reflect performance, and that prices respond to supply and demand.

The NES brought forth a new elite in politics as well as in management of the economy, and in 1963 Ulbricht announced a new policy regarding admission to the leading ranks of the SED. Ulbricht opened the Politbüro and the Central Committee to younger members who had more education than their predecessors and who had acquired managerial and technical skills. As a consequence of the new policy, the SED elite became divided into political and economic factions, the latter composed of members of the new technocratic elite. Because of the emphasis on professionalization in the SED cadre policy after 1963, the composition of the mass membership changed: in 1967 about 250,000 members (14%) of the total 1.8 million SED membership had completed a course of study at a university, technical college, or trade school.

The SED emphasis on managerial and technical competence also enabled members of the technocratic elite to enter the top echelons of the state bureaucracy, formerly reserved for political dogmatists. Managers of the VVBs were chosen on the basis of professional training rather than ideological conformity. Within the individual enterprises, the number of professional positions and jobs for the technically skilled increased. The SED stressed education in managerial and technical sciences as the route to social advancement and material rewards. In addition, it promised to raise the standard of living for all citizens. From 1964 until 1967, real wages increased, and the supply of consumer goods, including luxury items, improved considerably.

In 1968, Ulbricht launched a spirited campaign to convince the Comecon states to intensify their economic development "by their own means."
Domestically, the East German regime replaced the NES with the "Economic System of Socialism" (ESS), which focused on high-technology sectors to make self-sufficient growth possible. Centralized planning was reintroduced in the so-called structure-determining areas, which included electronics, chemicals, and plastics. Industrial combines were formed to vertically integrate industries involved in the manufacture of vital final products. Price subsidies were restored to accelerate growth in favored sectors.

The annual plan for 1968 set production quotas in the structure-determining areas 2.6% higher than in the remaining sectors in order to achieve industrial growth in these areas. The state set the 1969–70 goals for high-technology sectors even higher. Failure to meet ESS goals resulted in the conclusive termination of the reform effort in 1970.

The Main Task

The Main Task, introduced by Honecker in 1971, formulated domestic policy for the 1970s. The program re-emphasized Marxism-Leninism and the international class struggle. During this period, the SED launched a massive propaganda campaign to win citizens over to its Soviet-style socialism and to restore the "worker" to prominence. The Main Task restated the economic goal of industrial progress, but this goal was to be achieved within the context of centralized state planning. Consumer socialism—the new program featured in the Main Task—was an effort to magnify the appeal of socialism by offering special consideration for the material needs of the working class. The state extensively revamped wage policy and gave more attention to increasing the availability of consumer goods.

The regime accelerated the construction of new housing and the renovation of existing apartments; 60% of new and renovated housing was allotted to working-class families. Rents, which were subsidized, remained extremely low. Because women constituted nearly 50% of the labor force, child-care facilities, including nurseries and kindergartens, were provided for the children of working mothers. Women in the labor force received salaried maternity leave which ranged from six months to one year. Retirement annuities were increased.

The "Coffee Crisis" of 1976 to 1979

Due to the strong German tradition of drinking coffee, imports of this commodity were important for consumers. A massive rise in coffee prices in 1976–77 led to a quadrupling of the amount of hard currency needed to import coffee. This caused severe financial problems for the GDR, which perennially lacked sufficient hard currency to pay for imports from the West. As a result, in the summer of 1977 the Politbüro withdrew most cheaper brands of coffee from sale, limited the use of coffee in restaurants, and effectively withdrew its provision in public offices and state enterprises. In addition, an infamous new type of coffee was introduced, Mischkaffee (mixed coffee), which was 51% coffee and 49% a range of fillers including chicory, rye, and sugar beet. Unsurprisingly, the new coffee was generally detested for its comparatively poor flavour, and the whole episode came to be informally known as the "Coffee Crisis." The crisis passed after 1978 as world coffee prices began to fall again and as supply increased through an agreement between the GDR and Vietnam. The episode vividly illustrated the GDR's structural, economic and financial problems.
External debt crisis

Although in the end political circumstances led to the collapse of the SED regime, the GDR's growing hard currency debts were a cause of domestic instability. Debts continued to grow in the course of the 1980s to over 40 billion Deutsche Marks owed to western institutions, a sum not astronomical in absolute terms (the GDR's GDP was perhaps 250 billion DM) but large in relation to the GDR's capacity to export sufficient goods to the west to provide the hard currency to service these debts. The problem was by no means unique to the GDR or to socialist economies in general, but with support from its Soviet ally failing by 1989–90, the gravity of these otherwise manageable issues became far more severe for the GDR. In October 1989, a paper prepared for the Politbüro (the Schürer-Papier, after its principal author) projected a need to increase the export surplus from around 2 billion DM in 1990 to over 11 billion DM in 1995 to stabilize debt levels. It is doubtful whether such a Herculean effort could have succeeded without political turmoil, or indeed at all.

Much of the debt originated from attempts by the GDR to export its way out of its international debt problems, which required imports of components, technologies and raw materials, as well as from attempts to maintain living standards through imports of consumer goods. The GDR was internationally competitive in some sectors such as mechanical engineering and printing technology. However, the attempt to achieve a competitive edge in microchips against the research and development resources of the entire western world – in a state of just 16 million people – was perhaps always doomed to failure, but it swallowed increasing amounts of internal resources and hard currency. A significant factor was also the elimination of a ready source of hard currency through re-export of Soviet oil, which until 1981 was provided below world market prices; the resulting loss of hard currency income produced a noticeable dip in the otherwise steady improvement of living standards. (It was precisely this continuous improvement which was at risk due to the impending debt crisis; the Schürer-Papier's remedial plans spoke of a 25–30% reduction.)

Comparison with the West German economy

|                                  | East Germany | West Germany |
| GNP/GDP¹ ($ billion)             | 159.5        | 945.7        |
| GNP/GDP per capita ($)           | 9,679        | 15,300       |
| Budget revenues ($ billion)      | 123.5        | 539          |
| Budget expenditures ($ billion)  | 123.2        | 563          |

¹ GNP, used for the GDR, includes income earned by its citizens abroad, minus income earned by foreigners from domestic production. GDP, used for the Federal Republic of (West) Germany, is not so modified. The two figures are therefore not strictly comparable, but they serve reasonably well for the purposes of this table.

Socialist Unity Party of Germany (SED)

The ultimate directing force in the economy, as in every aspect of the society, was the SED, particularly its top leadership. The party exercised its leadership role formally during the party congress, when it accepted the report of the general secretary and when it adopted the draft plan for the upcoming five-year period. More important was the supervision of the SED's Politbüro, which monitored and directed ongoing economic processes.
That key group, however, could concern itself with no more than the general, fundamental, or extremely serious economic questions, because it also had to deal with many other matters.

At the head of the state apparatus responsible for formally adopting and carrying out policies elaborated by the party congress and Politbüro was the Council of Ministers, which had more than forty members and was in turn headed by a Presidium of sixteen. The Council of Ministers supervised and coordinated the activities of all other central bodies responsible for the economy, and it played a direct and specific role in important cases. The State Planning Commission, sometimes called the "Economic General Staff of the Council of Ministers," advised the Council of Ministers on possible alternative economic strategies and their implications, translated the general targets set by the council into planning directives and more specific plan targets for each of the ministries beneath it, coordinated short-, medium-, and long-range planning, and mediated interministerial disagreements.

The individual ministries had major responsibility for the detailed direction of the different sectors of the economy. The ministries were responsible within their separate spheres for detailed planning, resource allocation, development, implementation of innovations, and generally for the efficient achievement of their individual plans. In addition to the basic structure of the industrial sector, a supplementary hierarchy of government organs reached down from the Council of Ministers and the State Planning Commission to territorial rather than functional subunits. Regional and local planning commissions and economic councils, subordinate to the State Planning Commission and the Council of Ministers, respectively, extended down to the local level. They considered such matters as the proper or optimal placement of industry, environmental protection, and housing.

The fact that the GDR had a planned economy did not mean that a single, comprehensive plan was the basis of all economic activity. An interlocking web of plans having varying degrees of specificity, comprehensiveness, and duration was in operation at all times; any or all of these may have been modified during the continuous process of performance monitoring or as a result of new and unforeseen circumstances. The resultant system of plans was extremely complex, and maintaining internal consistency between the various plans was a considerable task.

Planning by timescale

In May 1949, Soviet Foreign Minister Andrei Vishinsky claimed that production in the Soviet occupation zone in March 1949 had reached 96.6 per cent of its 1936 level and that the government budget of the Soviet zone showed a surplus of 1 billion East German Marks, despite a 30% cut in taxes. When the 1953 budget was introduced in the Volkskammer on 4 February, economic exploitation in the Soviet interest was still the dominant trend. The budget envisaged expenditures of 34.688 bn East marks, an increase of about 10% over the 31.730 bn of the 1952 budget. Its main object was to provide investments for strengthening the economy and for national defense.

Operationally, short-term planning was the most important for production and resource allocation. It covered one calendar year and encompassed the entire economy.
The key targets set at the central level were the overall rate of growth of the economy, the volume and structure of the domestic product and its uses, the use of raw materials and labor and their distribution by sector and region, and the volume and structure of exports and imports. Beginning with the 1981 plan, the state added an assessment of the ratio of raw material use to the value and quantity of output in order to promote more efficient use of scarce resources.

Medium-range (five-year) planning used the same indicators, although with less specificity. Although the five-year plan was duly enacted into law, it is more properly seen as a series of guidelines rather than as a set of direct orders. It was typically published several months after the start of the five-year period it covered, after the first one-year plan had been enacted into law. More general than a one-year plan, the five-year plan was nevertheless specific enough to integrate the yearly plans into a longer time frame. Thus it provided continuity and direction. In the early 1970s, long-term, comprehensive planning began. It too provided general guidance, but over a longer period, fifteen or twenty years, long enough to link the five-year plans in a coherent manner.

In the first phase of planning, the centrally determined objectives were divided and assigned to appropriate subordinate units. After internal consideration and discussion had occurred at each level and suppliers and buyers had completed negotiations, the separate parts were reaggregated into draft plans. In the final stage, which followed the acceptance of the total package by the State Planning Commission and the Council of Ministers, the finished plan was redivided among the ministries, and the relevant responsibilities were distributed once more to the producing units.

The production plan was supplemented by other mechanisms that controlled supplies and established monetary accountability. One such mechanism was the System of Material Balances, which allocated materials, equipment, and consumer goods. It acted as a rationing system, ensuring each element of the economy access to the basic goods it needed to fulfill its obligations. Since most of the goods produced by the economy were covered by this control mechanism, producing units had difficulty obtaining needed items over and above their allocated levels.

Another control mechanism was the assignment of prices for all goods and services. These prices served as a basis for calculating expenses and receipts. Enterprises had every incentive to use these prices as guidelines in decision-making. Doing so made plan fulfillment possible and earned bonus funds of various sorts for the enterprise. These bonuses were not allocated indiscriminately for gross output but were awarded for such accomplishments as the introduction of innovations or reduction of labor costs.

The system functioned smoothly only when its component parts were staffed with individuals whose values coincided with those of the regime or at least complemented regime values. Such a sharing took place in part through the integrative force of the party organs, whose members occupied leading positions in the economic structure. Efforts were also made to promote a common sense of purpose through mass participation of almost all workers and farmers in organized discussion of economic planning, tasks, and performance.
An East German journal reported, for example, that during preliminary discussion concerning the 1986 annual plan, 2.2 million employees in various enterprises and work brigades of the country at large contributed 735,377 suggestions and comments. Ultimate decision-making, however, came from above.

State industrial sector

On 1 January 1954, the Soviet government turned over to the GDR thirty-three industrial concerns, including the Leuna and Buna chemical works, and the GDR became the owner of all the enterprises in its territory.

Directly below the ministries were the centrally directed trusts, or Kombinate. Intended to be replacements for the Associations of Publicly Owned Enterprises—the largely administrative organizations that previously served as a link between the ministries and the individual enterprises—the Kombinate resulted from the merging of various industrial enterprises into large-scale entities in the late 1970s, based on interrelationships between their production activities. The Kombinate included research enterprises, which the state incorporated into their structures to provide better focus for research efforts and speedier application of research results to production. A single, united management directed the entire production process in each Kombinat, from research to production and sales. The reform also attempted to foster closer ties between the activities of the Kombinate and the foreign trade enterprises by subordinating the latter to both the Ministry of Foreign Trade and the Kombinate. The goal of the Kombinat reform measure was to achieve greater efficiency and rationality by concentrating authority in the hands of midlevel leadership. The Kombinat management also provided significant input for the central planning process.

By the early 1980s, the establishment of Kombinate for both centrally managed and district-managed enterprises was essentially complete. Particularly from 1982 to 1984, the government established various regulations and laws to define more precisely the parameters of these entities. These provisions tended to reinforce the primacy of central planning and to limit the autonomy of the Kombinate, apparently to a greater extent than originally planned. As of early 1986, there were 132 centrally managed Kombinate, with an average of 25,000 employees per Kombinat. District-managed Kombinate numbered 93, with an average of about 2,000 employees each.

At the base of the entire economic structure were the producing units. Although these varied in size and responsibility, the government gradually reduced their number and increased their size. The number of industrial enterprises in 1985 was only slightly more than a fifth of that of 1960. Their independence decreased significantly as the Kombinate became fully functional.

The agricultural sector of the economy had a somewhat different place in the system, although it too was thoroughly integrated. It was almost entirely collectivized except for private plots. The collective farms were formally self-governing. They were, however, subordinate to the Council of Ministers through the Ministry of Agriculture, Forestry, and Foodstuffs. A complex set of relationships also connected them with other cooperatives and related industries, such as food processing. By 1 July 1954, there were 4,974 collective farms with 147,000 members, which cultivated 15.7 per cent of the arable land of the territory.

Legal private enterprise

The private sector of the economy was small but not entirely insignificant.
In 1985 about 2.8 percent of the net national product came from private enterprises. The private sector included private farmers and gardeners; independent craftsmen, wholesalers, and retailers; and individuals employed in so-called freelance activities (artists, writers, and others). Although self-employed, such individuals were strictly regulated. In 1985, for the first time in many years, the number of individuals working in the private sector increased slightly. According to East German statistics, in 1985 there were about 176,800 private entrepreneurs, an increase of about 500 over 1984.

Certain private sector activities were quite important to the system. The SED leadership, for example, encouraged private initiative as part of an effort to upgrade consumer services. In addition to those East Germans who were self-employed full-time, there were others who engaged in private economic activity on the side. The best known and most important examples were families on collective farms who also cultivated private plots (which could be as large as 5,000 m², or 53,820 ft²). Their contribution was significant; according to official sources, in 1985 farmers privately owned about 8.2 percent of the pigs, 14.7 percent of the sheep, 32.8 percent of the horses, and 30 percent of the laying hens in the country. Professionals such as commercial artists and doctors also worked privately in their free time, subject to separate taxes and other regulations. Their impact on the economic system, however, was negligible.

Informal economic activity

More difficult to assess, because of its covert and informal nature, was the significance of that part of the private sector known as the "second economy." As used here, the term includes all economic arrangements or activities that, owing to their informality or their illegality, took place beyond state control or surveillance. The subject received considerable attention from Western economists, most of whom were convinced that it was important in CPEs. In the mid-1980s, however, evidence was difficult to obtain and tended to be anecdotal in nature. These irregularities did not appear to constitute a major economic problem. However, the East German press occasionally reported prosecutions of egregious cases of illegal "second economy" activity, involving what were called "crimes against socialist property" and other activities that were in "conflict and contradiction with the interests and demands of society", as one report described the situation.

One kind of informal economic activity included private arrangements to provide goods or services in return for payment. An elderly woman might have hired a neighbour boy to haul coal up to her apartment, or an employed woman might have paid a neighbour to do her washing. Closely related were instances of hiring an acquaintance to repair a clock, tune up an automobile, or repair a toilet. Such arrangements take place in any society, and given the deficiencies in the East German service sector, they may have been more necessary than in the West. They were doubtless common, and because they were considered harmless, they were not the subject of any significant governmental concern.

Another common activity that was troublesome, if not disruptive, was the practice of offering a sum of money beyond the selling price to individuals selling desirable goods, or giving something special as partial payment for products in short supply, the so-called Bückware ("bend-down goods", sold from under the counter).
Such ventures may have been no more than offering someone Trinkgeld (a tip), but they may also have involved Schmiergeld (bribes; literally, money used to "grease" a transaction) or Beziehungen (special relationships). Opinions in East Germany varied as to how significant these practices were. But given the abundance of money in circulation and the frequent shortages of luxury items and durable consumer goods, most people were perhaps occasionally tempted to provide a "sweetener", particularly for such things as automobile parts or furniture.

See also
- Eastern Bloc economies
- New Course
- New Economic System
- Economic System of Socialism
- Volkseigener Betrieb
- Volkseigenes Gut
- Landwirtschaftliche Produktionsgenossenschaft

This article incorporates public domain material from websites or documents of the Library of Congress Country Studies.
The cost of energy is rising at an alarming rate, and the current infrastructure is costly, inefficient and damaging to the environment. Efforts to change this are being made in the form of renewable energy incentives, but not always in the most beneficial way or in the time frame agreed upon at various climate change summits. Solar power is being pushed by many governments, but the incentives keep changing to the detriment of consumers, and there are many areas of the planet that do not receive enough sunshine to make it a viable option. Many wind farm projects have been realised, but these are dependent on a healthy supply of wind to stop them from being just a blight on the landscape.

Many people here in rural France have enough spare land to embrace alternative, ground-breaking solutions to the energy crisis.

- The introduction of plantations of rapid-growing coppiced hardwood trees would help in the fight against global warming.
- Turning these plantations into wood chip to provide heating, hot water and heat energy is a carbon-neutral solution.
- Technology is available to produce electricity from this heat energy:
  - Organic Rankine Cycle
  - Stirling Engine – an old idea but still an emerging technology

Organic Rankine Cycle

The Rankine Cycle is that used by most power generation systems, but in its basic form it requires high temperatures and high pressures to work. The Organic Rankine Cycle (ORC) uses alternative working fluids to water/steam (such as refrigerant gases) which require lower temperatures and pressures to achieve the required results. This technology permits small-scale systems to become viable alternative energy producers at a local level.

The Stirling engine has been around since the early nineteenth century, but it is only recently that the need for alternative energy sources has sparked renewed interest and research into producing efficient, viable models for small-scale electricity production. Watch this space!
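To see why a working fluid with a low boiling point matters, it helps to look at the temperature limits involved. The sketch below is a back-of-the-envelope estimate, not a design calculation: it computes the Carnot ceiling on efficiency for a modest wood-chip heat source and then applies an assumed fraction of that ceiling that a small ORC unit might plausibly achieve. The 120 °C and 20 °C temperatures, the 50 % fraction, and the 50 kW boiler output are illustrative assumptions, not measured figures.

```python
def carnot_limit(t_hot_c, t_cold_c):
    """Theoretical upper bound on heat-engine efficiency between two temperatures."""
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15  # convert to kelvin
    return 1.0 - t_cold / t_hot

# Illustrative assumptions (not measured figures): an ORC loop fed at 120 C by a
# wood-chip boiler, rejecting heat at 20 C, and reaching about half of the
# Carnot limit; the boiler is assumed to supply 50 kW of heat.
limit = carnot_limit(120.0, 20.0)          # roughly 0.25
orc_efficiency = 0.5 * limit               # roughly 0.13
boiler_heat_kw = 50.0
electric_kw = boiler_heat_kw * orc_efficiency

print(f"Carnot limit:           {limit:.1%}")
print(f"Assumed ORC efficiency: {orc_efficiency:.1%}")
print(f"Electrical output:      {electric_kw:.1f} kW from {boiler_heat_kw:.0f} kW of heat")
```

Even with optimistic assumptions the electrical yield is modest, which is why small ORC and Stirling installations are usually promoted as combined heat and power: the remaining heat still goes into the heating and hot water mentioned above.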
Our Key Stage 1 and Key Stage 2 interactive shows allow children to explore the Solar System and then to take part in a competitive quiz about the planets and moons that they have just seen. The theatre show is followed up with workshops which are tailored to the ability level of the children. These involve discussion and problem solving, and simple tasks that can be carried out relating to the workshop topics. So, for example, we have tasks which challenge children to safely land an “Eggstronaut” from a great height; to make and discuss their own impact craters; and to inspect the properties of asteroids and meteorites.

Key Stage 1: Years 3 & 4

During their Planetarium visit, Year 3 and 4 primary school pupils take part in an interactive Solar System show. The show includes a visual explanation of how the Moon's appearance changes in the sky as it waxes and wanes. After the show, the children then take part in a workshop which tells them all about the planet Mars and then asks them to design their own Martian Mission Patch. Other workshops include Rocks from Space, where they are shown how the Moon became covered in craters and get to make some craters of their own.

Key Stage 2: Years 5, 6 and 7

Key Stage 2 pupils will take part in an interactive show which explores the Solar System and then take part in a workshop which asks them to design a landing system which will allow a safe descent for an “Eggstronaut”. They can also explore the Exhibition Areas, where there are detailed images of the planets of the Solar System and models of robotic spacecraft and other space vehicles which collect the latest scientific data from space. They can see and touch a large 140 kg nickel-iron meteorite from Campo del Cielo in Argentina. This iron meteorite probably hit the Earth around 4,000 years ago. It is the largest meteorite on display in Ireland. Pupils will be invited to hold smaller meteorites and to discuss how they formed and what might happen if a meteoroid was on a collision course with the Earth. They can try making their own impact craters and learn how a huge asteroid struck the Earth 65 million years ago, ending the reign of the dinosaurs.

Our workshops are designed to make the children think about the consequences of celestial events, like an asteroid impact. These topics can be explored more fully back in the classroom, and teachers will be able to emphasise the cross-curricular nature of such an investigation.

More information on a typical school visit and prices.
Most of the digestive organs (like the stomach and intestines) are tube-like and contain the food as it makes its way through the body. The digestive system is essentially a long, twisting tube that runs from the mouth to the anus, plus a few other organs (like the liver and pancreas) that produce or store digestive chemicals.

The Digestive Process:

The start of the process - the mouth: The digestive process begins in the mouth. Food is partly broken down by the process of chewing and by the chemical action of salivary enzymes (these enzymes are produced by the salivary glands and break down starches into smaller molecules).

On the way to the stomach: the esophagus - After being chewed and swallowed, the food enters the esophagus. The esophagus is a long tube that runs from the mouth to the stomach. It uses rhythmic, wave-like muscle movements (called peristalsis) to force food from the throat into the stomach. This muscle movement gives us the ability to eat or drink even when we're upside-down.

In the stomach - The stomach is a large, sack-like organ that churns the food and bathes it in a very strong acid (gastric acid). Food in the stomach that is partly digested and mixed with stomach acids is called chyme.

In the small intestine - After being in the stomach, food enters the duodenum, the first part of the small intestine. It then enters the jejunum and then the ileum (the final part of the small intestine). In the small intestine, bile (produced in the liver and stored in the gall bladder), pancreatic enzymes, and other digestive enzymes produced by the inner wall of the small intestine help in the breakdown of food.

In the large intestine - After passing through the small intestine, food passes into the large intestine. In the large intestine, some of the water and electrolytes (chemicals like sodium) are removed from the food. Many microbes (bacteria like Bacteroides, Lactobacillus acidophilus, Escherichia coli, and Klebsiella) in the large intestine help in the digestion process. The first part of the large intestine is called the cecum (the appendix is connected to the cecum). Food then travels upward in the ascending colon. The food travels across the abdomen in the transverse colon, goes back down the other side of the body in the descending colon, and then through the sigmoid colon.

The end of the process - Solid waste is then stored in the rectum until it is excreted via the anus.

Digestive System Glossary:

abdomen - the part of the body that contains the digestive organs. In human beings, this is between the diaphragm and the pelvis.

alimentary canal - the passage through which food passes, including the mouth, esophagus, stomach, intestines, and anus.

anus - the opening at the end of the digestive system from which feces (waste) exits the body.

appendix - a small sac located on the cecum.

ascending colon - the part of the large intestine that runs upwards; it is located after the cecum.

bile - a digestive chemical that is produced in the liver, stored in the gall bladder, and secreted into the small intestine.

cecum - the first part of the large intestine; the appendix is connected to the cecum.
chyme - food in the stomach that is partly digested and mixed with stomach acids. Chyme goes on to the small intestine for further digestion.

descending colon - the part of the large intestine that runs downwards after the transverse colon and before the sigmoid colon.

digestive system - (also called the gastrointestinal tract or GI tract) the system of the body that processes food and gets rid of waste.

duodenum - the first part of the small intestine; it is C-shaped and runs from the stomach to the jejunum.

epiglottis - the flap at the back of the tongue that keeps chewed food from going down the windpipe to the lungs. When you swallow, the epiglottis automatically closes. When you breathe, the epiglottis opens so that air can go in and out of the windpipe.

esophagus - the long tube between the mouth and the stomach. It uses rhythmic muscle movements (called peristalsis) to force food from the throat into the stomach.

gall bladder - a small, sac-like organ located by the duodenum. It stores and releases bile (a digestive chemical which is produced in the liver) into the small intestine.

gastrointestinal tract - (also called the GI tract or digestive system) the system of the body that processes food and gets rid of waste.

ileum - the last part of the small intestine before the large intestine begins.

intestines - the part of the alimentary canal located between the stomach and the anus.

jejunum - the long, coiled mid-section of the small intestine; it is between the duodenum and the ileum.

liver - a large organ located above and in front of the stomach. It filters toxins from the blood, and makes bile (which breaks down fats) and some blood proteins.

mouth - the first part of the digestive system, where food enters the body. Chewing and salivary enzymes in the mouth are the beginning of the digestive process (breaking down the food).

pancreas - an enzyme-producing gland located below the stomach and above the intestines. Enzymes from the pancreas help in the digestion of carbohydrates, fats and proteins in the small intestine.

peristalsis - rhythmic muscle movements that force food in the esophagus from the throat into the stomach. Peristalsis is involuntary - you cannot control it. It is also what allows you to eat and drink while upside-down.

rectum - the lower part of the large intestine, where feces are stored before they are excreted.

salivary glands - glands located in the mouth that produce saliva. Saliva contains enzymes that break down carbohydrates (starch) into smaller molecules.

sigmoid colon - the part of the large intestine between the descending colon and the rectum.

stomach - a sack-like, muscular organ that is attached to the esophagus. Both chemical and mechanical digestion takes place in the stomach. When food enters the stomach, it is churned in a bath of acids and enzymes.

transverse colon - the part of the large intestine that runs horizontally across the abdomen.
Dr. George Baker, editor of Dental Times in 1865, concluded that “the very form and structure of woman unfits her for its [dental surgery] duties.” Unbeknownst to the good doctor, Emeline Roberts Jones had already established herself as the first woman to practice dentistry in the United States by tending to the teeth of numerous residents of northeastern Connecticut in the years prior to the Civil War.

At age 18, Emeline Roberts married a dentist, Dr. Daniel Jones, who had acquired his knowledge of the field from Dr. R.B. Curtiss in Winsted. There were at that time only a handful of dental colleges in the country. When Emeline displayed an interest in her husband’s profession, she was met with resistance from him, for he accepted the contemporary belief that dentistry was no occupation for the “frail and clumsy fingers” of a woman. Not to be denied, she pursued her interest in dentistry clandestinely. It was only after she had secretly filled and extracted several hundred teeth and demonstrated her skill and ability that her husband finally permitted her to work on some of his patients. Grudgingly, he allowed her to practice with him at his office in Danielsonville in 1855. Four years later, she became his partner and enjoyed a reputation as a skilled dentist.

When her husband died in 1864, Emeline Jones was left with two young children. Nevertheless, she bravely carried on alone in order to support her family, traveling with her portable dentist’s chair to eastern Connecticut and Rhode Island. In 1876, she moved to New Haven, where she established a successful practice, which she maintained until her retirement in 1915.

In a career that spanned six decades, Emeline Roberts Jones received numerous awards and honors. In 1893, she served on the Woman’s Advisory Council of the World’s Columbian Dental Conference. She was elected to the Connecticut State Dental Society in 1883, and in 1914, she was made an honorary member of the National Dental Association.

During This Time
1800 - 1920: Industrialization & Reform

When the Declaration of Independence announced that all men are created equal, the path to citizenship for both blacks and women had begun. Despite not having the right to vote, women had long petitioned governors and legislatures to articulate a family grievance. Activist women presented to the U.S. Congress a large-scale innovative petition on behalf of abolition. Harriet Beecher Stowe’s novel, Uncle Tom’s Cabin, is widely credited with stirring public opinion, especially among women, to anti-slavery sentiments. Helped by women’s efforts, abolitionists eventually secured their goal in the three post-Civil War amendments.

Work in activities such as anti-slavery, temperance and moral reform led some middle-class women to the cause of women’s rights. They challenged the ideal of “separate spheres,” insisting on the same rights to life, liberty, property, and happiness as men. Through Elizabeth Cady Stanton’s lobbying, New York gave women control over their property, wages, and children. In 1848, Stanton and others met in Seneca Falls to discuss the “repeated injuries and usurpations on the part of man toward woman.” The resulting Declaration of Sentiments asserted that “woman is man’s equal.”

The early industrial economy had changed women’s lives. Textile production shifted from home to factory towns, where farm daughters hoped wage labor would open new opportunities. Arrangements seemed ideal, until declining profits caused owners to slash wages to reduce costs.
Lowell women walked out in 1836, and later petitioned the legislature to investigate deteriorating conditions. The labor force also changed as more immigrants arrived and were relegated to poorly paid factory and domestic work.

In the Progressive Era, some benevolent women were committed to helping working-class women, and their needs received increased publicity after the tragic Triangle Shirtwaist Company fire in 1911. Others addressed civic concerns, established settlement houses, worked for protective labor legislation, and tried to ban child labor. Clerical work in offices opened up as a desirable field for women, and some gained greater entry into various professions, including medicine, law, social work, nursing, and teaching.

Determined suffragists persisted in their political protests even after World War I broke out in 1914. Finally, seventy-two years after the Seneca Falls convention, through new tactics and strategies and a long, hard struggle at the state and national levels, the elusive goal was reached. The 19th Amendment, proposed by Congress and ratified by the states in 1920, prohibited restrictions on the right to vote based on sex. It was one of the most successful mass movements in the expansion of political democracy in American history.

An important expression of feminism (calling for change in women’s private lives, not in their public roles) was the campaign in favor of access to birth control. Nurse Margaret Sanger spoke and wrote on its behalf, though her mailings were judged to violate the anti-obscenity Comstock laws. In 1916, after opening the first birth control clinic in the country, she was arrested and sentenced briefly to jail. For forty years she promoted contraception as an alternative to abortion, foreshadowing Planned Parenthood.

Special thanks to Barbara E. Lacey, Ph.D., Professor Emeritus of History, St. Joseph's College (Hartford, CT) for preparing these historical summaries.
1. 800: Charlemagne is crowned Holy Roman Emperor. Often called the “Father of Europe,” Charlemagne was a Frankish warrior king who united much of the continent under the banner of the Carolingian Empire. Beginning in the late 700s, Charlemagne forged a vast kingdom through extensive military campaigns against the Saxons, the Lombards and the Avars. A devout Catholic, he also aggressively converted his subjects to Christianity and instituted strict religious reforms. On Christmas Day 800, Pope Leo III crowned Charlemagne “emperor of the Romans” during a ceremony at St. Peter’s Basilica. This controversial coronation restored the Western Roman Empire in name and established Charlemagne as the divinely appointed leader of most of Europe. More importantly, it placed him on equal footing with the Byzantine Empress Irene, who ruled over the Eastern Empire in Constantinople. Charlemagne would serve as emperor for 13 years, and his legal and educational reforms sparked a cultural revival and unified much of Europe for the first time since the fall of the Roman Empire. 2. 1066: William the Conqueror is crowned king of England. The 1066 holiday season played host to an event that permanently changed the course of European history. On Christmas Day, William, Duke of Normandy—better known as William the Conqueror—was crowned king of England at Westminster Abbey in London. This coronation came in the wake of William’s legendary invasion of the British Isles, which had ended in October 1066 with a victory over King Harold II at the Battle of Hastings. William the Conqueror’s 21-year rule would see many Norman customs and laws find their way into English life. After consolidating his power by building famous structures such as the Tower of London and Windsor Castle, William also gave copious land grants to his French-speaking allies. This not only permanently changed the development of the English language—nearly one-third of modern English is derived from French words—but it also contributed to the rise of the feudal system of government that characterized much of the Middle Ages. 3. 1776: George Washington and the Continental Army cross the Delaware River. At the end of 1776, the Revolutionary War looked like it might be lost for colonial forces. A series of defeats by the British had depleted morale, and many soldiers had deserted the Continental Army. Desperate to strike a decisive victory, on Christmas Day General George Washington led 2,400 troops on a daring nighttime crossing of the icy Delaware River. Stealing into New Jersey, on December 26 the Continental forces launched a surprise attack on Trenton, which was held by a force of German soldiers known as Hessians. General Washington’s gamble paid off. Many of the Hessians were still disoriented from the previous night’s holiday bender, and colonial forces defeated them with minimal bloodshed. While Washington had pulled off a shock victory, his army was unequipped to hold the city and he was forced to re-cross the Delaware that same day—this time with nearly 1,000 Hessian prisoners in tow. Washington would go on to score successive victories at the Battles of the Assunpink Creek and Princeton, and his audacious crossing of the frozen Delaware served as a crucial rallying cry for the beleaguered Continental Army. 4. 1814: The Treaty of Ghent ends the War of 1812. On December 24, 1814, while many in the western world celebrated Christmas Eve, the United States and Great Britain sat down to sign a famous peace agreement ending the War of 1812. 
Negotiations had begun in Ghent, Belgium, earlier that August—the same month that British forces burned the White House and the U.S. Capitol in Washington. After more than four months of debate, the American and British delegations agreed to a settlement that essentially ended the war as a draw. All conquered territories were relinquished, and captured soldiers and vessels were returned to their respective nations. While the Treaty of Ghent effectively ended the 32-month conflict, it did not take effect in the United States until it was ratified in February 1815. In fact, one of the greatest American victories of the war—at the Battle of New Orleans in January 1815—came more than a week after the Treaty of Ghent had been signed. 5. 1868: President Andrew Johnson issues a final pardon to Confederate soldiers. At the tail end of his term as president, Andrew Johnson gave a handful of former Confederate rebels a famous Christmas present. By way of Proclamation 179, on December 25, 1868, Johnson issued amnesty to "all and every person" who had fought against the United States during the Civil War. Johnson's blanket pardon was actually the fourth in a series of postwar amnesty orders dating back to May 1865. Earlier agreements had restored legal and political rights to Confederate soldiers in exchange for signed oaths of allegiance to the United States, but these pardons exempted 14 classes of people including certain officers, government officials and those with property valued over $20,000. The Christmas pardon stood as a final and unconditional act of forgiveness for unreconstructed Southerners, including many former Confederate generals. 6. 1914: The World War I Christmas Truce is reached. The year 1914 saw the Christmas spirit manifest itself in the most unlikely of places: a World War I battlefield. Starting on the evening of December 24, scores of German, British and French troops in Belgium laid down their arms and initiated a spontaneous holiday ceasefire. The truce was reportedly instigated by the Germans, who decorated their trenches with Christmas trees and candles and began singing carols like "Silent Night." British troops responded with their own rendition of "The First Noel," and the weary combatants eventually ventured into "no man's land"—the treacherous, bombed-out space that separated the trenches—to greet one another and shake hands. According to accounts from the men involved, the soldiers shared cigarettes and pulls of whiskey, and some exchanged Christmas presents with men they had been shooting at only hours before. Taking advantage of the brief lull in combat, some Scottish, English and German troops even played a pick-up game of soccer on the frozen battlefield. The truce was not sanctioned by the officers on either side, and eventually the men were called back to their respective trenches to resume fighting. Later attempts at holiday meetings were mostly forbidden, but as the war dragged on, the "Christmas Truce" would stand as a remarkable example of shared humanity and brotherhood on the battlefield. 7. 1968: Apollo 8 orbits the moon. As part of 1968's Apollo 8 mission, astronauts Frank Borman, Jim Lovell and William Anders spent the night before Christmas orbiting the moon. The operation was originally planned to test out the lunar module—later used in the Apollo 11 moon landing—in Earth's orbit. But when work on the module fell behind schedule, NASA ambitiously changed the mission plan to a lunar voyage.
Apollo 8 went on to result in a series of breakthroughs for manned space flight: the three astronauts became the first humans to leave low Earth orbit, the first to orbit the moon, the first to view all of Earth from space and the first to see the far side of the moon. Apollo 8 is perhaps best remembered today for the broadcast the three astronauts made when they entered the moon's orbit on Christmas Eve. As viewers were shown pictures of the moon and Earth from lunar orbit, Borman, Lovell and Anders read the opening lines of the book of Genesis from the Bible. The broadcast—which ended with the famous line "Merry Christmas, and God bless all of you, all of you on the good Earth"—became one of the most watched television events in history.
The objective case is used for nouns and pronouns which function as objects. There are three types of object: a direct object, an indirect object, and an object of a preposition. In English, the objective case only affects personal pronouns (e.g., I, he, she, we, they). For example, he becomes him, and they becomes them. Examples of the Objective Case (Direct Object): The direct object of a verb is the thing being acted upon by the verb. In other words, the direct object is the receiver of the action. The direct object can be found by locating the verb and asking "what?" or "whom?". For example: "She saw him." (In this example, the pronoun him is in the objective case. It has changed its form from he to him. He is the subjective case version.) "I have signed this letter." (In this example, the noun phrase this letter is in the objective case. However, it does not change. Remember, only some personal pronouns change their forms in the objective case in English.) Examples of the Objective Case (Indirect Object): The indirect object is the recipient of the direct object. The indirect object can be found by locating the direct object (see above) and then asking who or what received it. For example: "Please give me the letter." (Q: Who (or what) received the letter? A: me) "I can lend you it." (Q: Who (or what) received it? A: you) (Not all personal pronouns change their forms in the objective case. In this example, you is in the objective case, which is the same spelling as the subjective case version.) Examples of the Objective Case (Object of a Preposition): The noun or pronoun after a preposition is known as the object of a preposition. For example: "Come with me." and "Send the letter to her." (Here, me and her follow the prepositions with and to.) The Objective Case: Objects (i.e., direct objects, indirect objects, and objects of prepositions) are always in the objective case. In English, this only affects pronouns (but not all pronouns). Here is a list of subjective pronouns and their objective counterparts: I/me, you/you, he/him, she/her, it/it, we/us, they/them, who/whom. The Accusative and Dative Cases: When studying other languages, you might encounter the accusative case (for direct objects) and the dative case (for indirect objects). These two cases are used for the objects of prepositions too. In English, there is no distinction between the forms of the accusative case and dative case. The objective case covers both. WHO IS NEVER AN OBJECT: Objects are put into the objective case, and the objective case of who is whom. Therefore, who is never an object. For example: "With whom did you go?" (not "With who did you go?").
3.1. The Structure of Synapses Figure 3-1. The structure of a synapse. [Source: Wikipedia] A synapse is the contact point between an axon terminal and another neuron. In fact, the two cells do not actually touch: there is a gap of about 20 nanometers between them, called the "synaptic cleft". The neuron transmitting signals is called the "presynaptic neuron", and the receiving neuron is called the "postsynaptic neuron". The presynaptic axon terminal contains synaptic vesicles which store neurotransmitters. When a nerve impulse reaches the axon terminal, it causes membrane depolarization, thereby opening voltage-gated calcium channels and allowing the entry of calcium ions into the cell (Figure 3-1). Calcium ions are a master regulator of cellular operations, because they control the activities of many enzymes. The entry of calcium ions into the axon terminal leads to a series of chemical reactions, resulting in the fusion of synaptic vesicles with the cell membrane and consequently the release of neurotransmitters into the synaptic cleft. These neurotransmitters may bind to their receptors on the postsynaptic membrane, triggering a variety of changes in the postsynaptic neuron. Figure 3-2. Dendritic shaft and dendritic spines. [Source: Wikipedia] Most synapses are formed between an axon terminal and the dendrites of another neuron. The contact point on a dendrite is typically a raised structure called a "dendritic spine" (Figure 3-2). Neurotransmitters may affect the chemical composition of dendritic spines and even their number, leading to changes in neuronal circuits. Examples related to memory will be given in Appendix B. The following sections describe only the basics required for discussion in later chapters.
The purpose of this article is to discuss the main course of the normal development of the human dentition, together with the concept of the evolution of tooth development, the clinical features of the dentition and the most common developmental disturbances. Knowledge of the normal development of the dentition and an ability to detect deviations from the normal are essential prerequisites for pedodontic diagnosis and a treatment plan. Dentition means a set of teeth. "Teeth in the dental arch" is used to designate the natural teeth in position in their alveoli. During evolution several significant changes took place in the jaws and teeth. When reptiles evolved into mammals, the dentition went from "polyphyodont" (many sets of teeth) to "diphyodont" (only two sets of teeth), and from "homodont" (all teeth of the same type) to "heterodont" (different types of teeth, i.e. incisors, canines, premolars and molars). There also arose a necessity for the teeth and bones to develop somewhat synchronously in order that the function of occlusion could be facilitated. Finally, the number of cranial and facial bones has been reduced by loss or fusion, and the dental formula has undergone changes. Stages of tooth evolution: There are four stages of tooth evolution. i) The reptilian stage (Haplodont) ii) Early mammalian stage (Triconodont) iii) Triangular stage (Tritubercular molars) iv) Quadritubercular molars. The reptilian stage: This stage is represented by the simplest form of dentition, the single cone type. It includes many teeth in both jaws, which limits jaw movement; thus the jaw movement is confined to that of a single hinge. Early mammalian stage: This stage exhibits three cusps in the line of development of the posterior dentition. The larger, anthropologically original cusp is centered, with one smaller cusp located anteriorly and another posteriorly. Tritubercular stage: According to the recognized theories explaining evolutionary dentition development, the triconodont lines changed to a three-cone (triangular) form, with the teeth still by-passing each other more or less when the jaw opened or closed. These types are found in dogs and other carnivorous animals. Quadritubercular stage: The next stage of development created a projection on the triangular form that finally occluded with the antagonist of the opposing jaw. Over time, as an accommodation to the changes in dentition form and anatomy, the articulation of the jaws changed accordingly. The animals with dentition most similar to that of humans are the anthropoid apes: the chimpanzee, gibbon, gorilla and orangutan. The shapes of individual teeth in these animals are very close to their counterparts in the human mouth. Nevertheless, the development of the canines, the arch form and jaw development are quite different. Common evolutionary trends in the primates: 1. There was shortening of the jaw due to the decrease in the size of the olfactory organs, the upright body position and the wide angle of the head to the body. 2. There was a decrease in tooth size to be accommodated in these jaws, with subsequent elimination of some teeth from the dentition. 3. There was progressive shortening of the arch (in front) and relative widening. 4. Canines reduced in size. 5. Lower premolar crowns became more symmetrical from oval. 6. First molars became the dominant cheek teeth. 7. In the upper second and third molars, the distolingual cusp reduced and often disappeared. 8.
Third molars, which were larger than the first molars, were reduced in size and often eliminated. In modern man: There is a decrease in the tooth-bearing portion of the face, partly due to a reduction in tooth size. • In some primitive and prehistoric human skulls the second permanent molars usually succeed the first molars. • The occlusal length of the lower molars is reduced in modern man, and the ramus width is reduced even more than the occlusal length of the lower molars. Characteristics of the human dentition: The teeth of vertebrates are characterized depending upon: • Mode of attachment • Number of successive sets • Shape of teeth. By the way teeth are attached to the jaws: a) Acrodont: teeth attached to the jaw by connective tissue. b) Pleurodont: teeth set inside the jaws. c) Thecodont: teeth inserted in a bony socket. By the number of successive sets of teeth: a) Polyphyodont: teeth replaced throughout life, e.g. shark. b) Diphyodont: two sets of teeth, e.g. human beings. c) Monophyodont: one set of teeth, e.g. sheep, goat. By the type or shape of teeth: a) Homodont: a single type of teeth. b) Heterodont: various types of teeth, e.g. human beings. Origin of teeth (theories): Each tooth, whether primary or permanent, is believed to develop from the epithelial primary germ cell. Various theories regarding the mammalian dentition have been reported. 1. The theory of concrescence: the mammalian dentition was produced by the fusion of two or more primitive conical teeth, and each tubercle with its corresponding root originated as a single tooth. 2. The theory of trituberculy: each of the mammalian teeth was derived from a single reptilian tooth by a secondary differentiation of tubercles and roots. This theory is widely accepted. 3. The theory of multituberculy: the mammalian dentition is the result of reduction and condensation of primitive tuberculate teeth. In the developmental process, specialized structures such as teeth differentiate as part of a closely integrated pattern of events. These progress from the initial genetic potential of the fertilized ovum and are influenced later by the prenatal and postnatal environment. The complex mechanisms concerned with oro-dental development involve a series of interactions not only between specific cell components, but also between the different varieties of cells which arise during the organization of the various tissues. The formation of the primitive oral cavity or stomatodeum and the perforation of the bucco-pharyngeal membrane depend upon the contact between the oral ectoderm and the pharyngeal endoderm. The odontogenic epithelium is derived from this ectoderm. It is believed that the ectoderm, the ecto-mesenchyme (which is contributed by the primitive streak through the notochord and adjacent tissue) and the mesoderm are involved in dentition formation. The inductive differentiation of the cell layers of a tooth germ results from both ectodermal-ectomesenchymal and ectomesenchymal-mesodermal interactions. In humans, the odontogenic epithelium, which is the anlage of the dentition, can be identified in the 28-30 day (ovulation age) embryo. 28-30 days: The epithelium proliferates, giving the appearance of an epithelial thickening located on the inferior border of the maxillary process and the superior borders of the mandibular arches, in the area forming the lateral margins of the stomatodeum. 30-32 days: The odontogenic epithelium is 3-4 cells thick, the cells being ovoid to cuboid with little cytoplasm.
32-34 days: The mesenchyme immediately beneath the odontogenic epithelium can be distinguished from the adjacent mesenchyme. The odontogenic epithelium becomes invaginated relative to the underlying mesenchyme, forming a dental lamina from which the individual tooth buds arise. The formation of the dental lamina commences around the 4th week, and the tooth buds for the deciduous dentition begin to form about two weeks later. The dental lamina marks out the position of the future dental arches. The tooth buds for the corresponding permanent teeth develop from the same arch. Stages of tooth bud development: • Phase of the deciduous teeth - 5th month in utero • Phase of the permanent teeth - 6th month • Phase of the accessional teeth - spaced from the 4th month in utero to 4-5 years. A tooth germ (tooth bud) consists of three parts: an enamel organ, which is derived from the oral ectoderm, a dental papilla and a dental sac, the latter two being derived from the mesenchyme. Each swelling of the lamina which is destined to be a tooth germ proliferates and differentiates, passing through various histological and morphological differentiation stages, namely the bud, cap and bell stages. The basic configuration of the future tooth crown is fixed at the morphological differentiation stage. The enamel organ produces the enamel by a process of cell proliferation, cell differentiation and later mineralization, and the dental papilla produces the dentin and pulp of the tooth in a similar way. The dental sac produces the cementum and the periodontal ligament. Enamel formation ceases once the tooth crown is complete, but dentine formation continues with root development. A layer of cementum is laid down on the surface of the root dentine and incorporates periodontal fibers that support the tooth through its attachment to the bony wall of the tooth socket. Once histo-differentiation of the cells has progressed sufficiently far, mineralization commences. This occurs in the deciduous dentition during the 14th intrauterine week on average and begins with the central incisors. The permanent tooth buds appear in the fourth and fifth intrauterine months, at about the same age at which mineralization of the deciduous teeth commences. Mineralization of the permanent dentition is initiated around the time of birth on average, beginning with the first permanent molar. The original chronology for the dentition is based on the data of Logan and Kronfeld reported in 1933. Later, careful reviews of the data were carried out by various researchers, such as Massler and Schour in 1941, Moorrees, Fanning and Hunt in 1963, Kraus and Jordan in 1965, and Nystrom in 1977. Factors affecting development of the dentition: A. Systemic factors (a retarding effect): Delayed eruption in both the primary and the permanent dentition, but especially in the latter, has been attributed to many diseases, syndromes and systemic factors; the most common are: • Cleidocranial dysostosis • Down syndrome • Hypovitaminosis (A and D) • Amelogenesis imperfecta. B. Local factors: Some local factors which may influence the developing dentition are: • Aberrant tooth position • Lack of space in the arch • Very early loss of the predecessor • Ectopic eruption • Congenital absence of teeth • Ankylosis of the predecessor • Retained tooth or persisting deciduous root remnants • Arrested tooth formation (trauma) • Supernumerary tooth • Abnormal habit exerting muscular forces
Electric "Thinking Cap" Could Actually Help Generate Ideas Vanderbilt University researchers have found that sending mild stimulation to the brain can enhance or depress learning effects. A Vanderbilt University team of researchers, led by psychologist and PhD candidate Robert Reinhart and assistant professor of psychology Geoffrey Woodman, has conducted a study showing that it is possible to selectively enhance or depress a person's ability to learn by sending a mild electrical current to the brain. Previous studies have shown that a spike of negative voltage originates from a certain part of the brain immediately after a person makes a mistake. Reinhart and Woodman wanted to explore the idea that this brain activity has an impact on learning because it allows the brain to learn from mistakes. The key objectives of the study were to find out whether it was possible to control the brain's electrophysiological response to mistakes and whether its effect can be enhanced or depressed depending on the direction of the current applied to the brain. The researchers also wanted to find out how long the effects lasted and whether they carry over to other tasks, not just learning. An elastic headband with two electrodes was used in the experiments: one electrode was attached to the cheek and the other to the top of the head. The researchers applied 20 minutes of transcranial direct current stimulation, a very mild electrical current, to the research participants. Three conditions were tested: a cathodal condition with the current running from the cheek to the crown of the head, an anodal condition with the current running from the crown of the head to the electrode on the cheek, and a "sham" condition in which the physical tingling sensation was replicated without an actual current being applied. After they received the electrical stimulation, the participants were given learning tasks and their brain electrical activity was measured. The researchers found that when an anodal current was applied, participants had a significantly higher spike in brain activity, made fewer errors and learned more quickly. A cathodal current showed the opposite results. The effects were not noticed by the participants, but they could be seen clearly on the EEG. The study, which was published in the Journal of Neuroscience, can have implications beyond learning and can be explored further in the treatment of conditions like schizophrenia and ADHD. This type of research could also have major implications for the wearable tech industry.
ENDANGERED SPECIES: Any species which is in danger of becoming extinct, or dying out entirely. A species can become endangered from being few in numbers or from being threatened by a changing environment or increased predation. WHAT IS EXTINCTION: Animal populations are all classified by biologists down to groups capable of reproducing fertile offspring (as well as into bigger groups of classification, such as a genus or family). When no more individuals of a species can be found anywhere on earth, the species is considered extinct. Many animals have been added to the endangered species list because their populations are close to becoming extinct. If one animal relies on an endangered animal for its food or protection, it too can become part of the extinction chain. Possibly the most famous extinction happened at the end of the Cretaceous period, about 65 million years ago, when most of the species on Earth were wiped out after a large asteroid's impact with the Earth. That was when all the non-bird-like dinosaurs went extinct.
The status of species in the drylands remains largely unknown, as no comprehensive assessment exists to date. About 8% of the drylands are protected, which is comparable to an average of about 10% in other ecosystems. The Millennium Ecosystem Assessment reports that 8 of the 25 global hotspots are in the drylands. These are areas where at least 0.5% of the plant species are endemic to the region but habitat loss exceeds 70%. The drylands ecosystems have a large and diverse heritage of flora and fauna, including major domesticated agricultural crops, with Africa alone being home to more than 50 000 known plant species, 1 000 mammal species, and 1 500 bird species. With desertification taking its toll, the biological diversity of the drylands ecosystems is steadily deteriorating, with some of the world's highest rates of loss of forests, rangelands, wetlands, and fish and wildlife populations (World Bank 2004). Africa, for example, lost 39 million hectares of tropical forest during the 1980s, and another 10 million hectares by 1995. More than 50% of wetlands in the U.S. were destroyed in the last 200 years. In Europe, between 60% and 70% of wetlands have been completely destroyed (Stein et al., 2000). A critical challenge facing most countries is to halt and reverse the present extent of DLDD (desertification, land degradation and drought) impacts and the subsequent loss of biological diversity resulting from excessive exploitation of natural resources, especially as manifested in desertification and scarcity of water. With the increase in population, the situation in desertification-affected developing countries in the next few decades is likely to have the following characteristics: (a) Continuing loss in forest cover, while progress in achieving sustainable forest management will be slow; (b) Illegal logging will remain a major problem and many countries will not be in a position to produce wood competitively; (c) Wood will continue to be the main source of energy, with wood fuel consumption expected to increase, while increased urban demand for charcoal will result in further degradation of forests; (d) Effective resolution of land use conflicts will be critical in taking full advantage of the potential of wildlife; (e) Loss of biodiversity, land degradation and deterioration of watersheds and underdevelopment of rural areas will remain critical problems.
Photo: Casey Myers (Flickr) The German cockroach — the most abundant kind of roach in the US — provides an example of evolution in action. No, they aren't getting bigger or growing any new appendages. But, equally alarming, they are becoming much harder to kill. Over the past few decades, increasingly large numbers of German cockroaches have evolved to avoid roach bait that contains sugar. Most roach-bait poisons contain sugar to appeal to the cockroach's sweet tooth. Roach baits typically contain high fructose corn syrup as well as trace amounts of glucose. And it's the glucose that's causing the aversion to bait, because it sets off a reaction that stimulates the bug's bitter taste receptors. So while those roaches that don't mind glucose are still chowing down on the lethal poison, the glucose-averse roaches are breeding and producing another generation of glucose-averse vermin. New Breed of Bugs The roaches are adapting not just by becoming resistant to pesticides, but also by developing a distaste for the foods more likely to contain the poison. That's evolution happening right before our eyes. The bug's newly evolved traits come with trade-offs, however. Glucose-averse cockroaches are smaller and reproduce more slowly than their sugar-fiend friends. Thus, environmental conditions—such as the presence or absence of bait—can determine which evolutionary course these cockroaches will take. If we stop using glucose in bug bait, it may cut down on the survivalist cockroach population. That is, until some other mutation arises.
By Margaret A. Wissman, DVM, DABVP - Avian Practice In the October 2009 issue of BIRD TALK Magazine, you learned about the science of molting. Now learn more about bird feather growth and health. Feathers are actually complex branched skin appendages. A feather develops from a follicle, which is found in the dermal and epidermal layers of the skin. To me, the way the bird's skin cells are able to develop a large, primary feather, complete with shaft, barbs and barbules, is one of those miracles of nature. While a feather is actively growing from the follicle, there is one artery and one vein that runs through the feather to support the growth. Once the feather has reached its full size, the blood supply is no longer needed and the vessels shrivel up. However, the follicle maintains a blood supply in the skin. A newly emerging feather is called a pin feather. As the new feather emerges from the follicle, there is a sheath covering and surrounding the new feather. You will notice a silvery, shiny sheath on the new pin feathers. These are most obvious on the top of the head and can appear as little silver spikes. Some birds are experts at removing their own sheaths, either with the beak or by scratching at the feathers with a foot. Birds housed together often preen and groom each other to remove each other's sheaths. Many owners enjoy preening their pet birds by gently rolling the pin feathers through their fingertips, assisting in the removal process. Some of the bird's follicles and pin feathers are very sensitive, and the bird may resist any help in removing the sheaths. In some cases, the entire sheath or a portion of a sheath might not flake off in a timely fashion, and can become retained. If this occurs, when the sheath is removed, it might result in an abnormal appearance to the feather, which can be mistaken for a stress bar. Some birds seem to ignore sheaths on the long tail feathers and might require help in removing them by gently rolling the feather between fingertips or by gently pinching the shaft. If a feather is plucked, instead of falling out normally, some of the dead cells of the follicle remain attached to the plucked feather. Cells in the follicle are also damaged or destroyed by the plucking process. When the feather is plucked out, bleeding occurs into the empty follicle and stops when a clot forms in the follicle. Unless the bird has liver damage, psittacosis or another disease that results in clotting problems, the amount of bleeding that occurs when plucking a feather out is inconsequential. However, if a blood feather is plucked out, there is a chance that the bleeding can become more significant. Also, damage to the follicle can be more serious. Repeated plucking of a feather from the same follicle can eventually result in the follicle becoming damaged to the point that it can no longer replace a feather. We have all seen chronic feather-picking birds with a naked chest, no visible feathers, and no signs of active follicle activity. This is the result of continual plucking leaving the follicles no longer able to replace feathers normally. For this reason, I always think twice before plucking any feather from a bird's body, especially the large primary and secondary wing and tail feathers.
|The BioenergyWiki is no longer being actively updated.| [Note: This BioenergyWiki page was developed based on a Creative Commons-licensed page copied ("forked") from Wikipedia, by BioenergyWiki User:Vortexrealm (Wikipedia User Vortexrealm) on 12 December 2007.] Anaerobic digestion (AD) is the biological degradation of organic material in the absence of air. An anaerobic digester is a man-made system that harnesses this natural process to treat waste, produce biogas that can be converted to heat and electricity, and produce anaerobic digestate, a soil-improving material. Anaerobic digestion is the preferred stabilisation process for the treatment of wastewater sludges and organic wastes. The process provides volume and mass reduction and delivers valuable renewable energy through biogas production. A biogas powerplant is an anaerobic digestion system that is designed and operated specifically for the purpose of generating energy. Anaerobic digestion has a long history dating back to the 10th century BC. It is presently used to treat many biodegradable wastes including sewage, industrial effluents, farm waste and the organic component of municipal solid waste. The four key stages of anaerobic digestion are hydrolysis, acidogenesis, acetogenesis and methanogenesis. These stages result from the biological treatment of organic waste by two key groups of microorganisms: acetogens and methanogens. A simplified overall chemical reaction of the process can be summarised as: C6H12O6 → 3CO2 + 3CH4 There are a number of different configurations of anaerobic digestion systems that will include either: - Batch or continuous loading - One-stage or multi-stage digestion tanks - Mesophilic or thermophilic operational temperatures - High-solids or low-solids feedstock content Anaerobic digestion can be considered to be a sustainable technology and has many environmental benefits, contributing to the reduction of emissions of greenhouse gases to the atmosphere. History of anaerobic digestion Biogas was first recorded to have been used for heating bath water in Assyria during the 10th century BC and then in Persia during the 16th century. In the 17th century, Jan Baptista van Helmont found that decaying organic material produced flammable gases. In 1776, Count Alessandro Volta concluded that there was a direct connection between how much organic material was used and how much gas the material produced. In 1808, Sir Humphry Davy determined that methane was present in the gases produced by cattle manure. The first anaerobic digester was built by a leper colony in Bombay, India in 1859. In 1895 the technology was developed in Exeter, England, where a septic tank was used to generate gas for street lighting. Also in England, in 1904 the first dual-purpose tank for both sedimentation and sludge treatment was installed in Hampton. In 1907, in Germany, a patent was issued for the Imhoff tank. In the 1930s, people began to recognise anaerobic digestion as a science, and research led to the identification of anaerobic bacteria and to further work on the conditions required to grow methane-producing bacteria. This work was developed further during World War II, when both Germany and France increased the digestion of manure. In an aerobic system using free gaseous oxygen (or air), the end products are primarily CO2 and H2O, which are the stable or oxidised forms of carbon and hydrogen. If the organic waste contains nitrogen, phosphorus and sulphur, then the end products may also include NO3−, PO43− and SO42−.
In contrast to this, in an anaerobic system there is an absence of free gaseous oxygen. In the case of anaerobic digestion, oxygen is prevented from entering the system through physical containment and isolation from the atmosphere in sealed digestion tanks. The oxygen source may be the organic waste itself or it may be supplied by inorganic oxides in the waste. When the oxygen source in an anaerobic system is derived from the organic waste itself, the 'intermediate' end products are primarily alcohols, aldehydes and organic acids, plus CO2. In the presence of specialised methanogens, the intermediates are converted to the 'final' end products of CH4 and CO2, with trace levels of H2S. Uses of anaerobic digestion Anaerobic digesters are commonly used for effluent and sewage treatment or for managing animal waste. Anaerobic digestion is a simple process that can greatly reduce the amount of organic matter which might otherwise end up in landfills or waste incinerators. In developing countries, simple home and farm-based anaerobic digestion systems offer the potential for low-cost energy from biogas. Increasing environmental pressures on solid waste disposal in developed countries have increased the use of anaerobic digestion as a process for reducing waste volumes and generating useful byproducts. Here anaerobic digestion may either be used to process the source-separated fraction of biodegradable waste or alternatively be combined with mechanical sorting systems to process mixed municipal waste. These facilities fall under the category of mechanical biological treatment. Almost any organic material can be processed with anaerobic digestion. This includes biodegradable waste materials such as waste paper, grass clippings, leftover food, sewage and animal waste. Anaerobic digesters can also be fed with specially grown energy crops to boost biodegradable content and hence increase biogas production. After sorting or screening to remove inorganic or hazardous materials such as metals and plastics, the material to be processed is often shredded, minced, or hydrocrushed to increase the surface area available to microbes in the digesters and hence increase the speed of digestion. The material is then fed into an airtight digester where the anaerobic treatment takes place. Stages of anaerobic digestion - The first is the chemical reaction of hydrolysis, where complex organic molecules are broken down into simple sugars, amino acids, and fatty acids with the addition of hydroxyl groups. - The second stage is the biological process of acidogenesis, where a further breakdown by acidogens into simpler molecules, volatile fatty acids (VFAs), occurs, producing ammonia, carbon dioxide and hydrogen sulfide as byproducts. - The third stage is the biological process of acetogenesis, where the simple molecules from acidogenesis are further digested by acetogens to produce carbon dioxide, hydrogen and mainly acetic acid. - The fourth stage is the biological process of methanogenesis, where methane, carbon dioxide and water are produced by methanogens. A simplified generic chemical equation of the overall process is as follows: C6H12O6 → 3CO2 + 3CH4 Products of anaerobic digestion Biogas is a gaseous mixture comprising mostly methane and carbon dioxide, but also containing a small amount of hydrogen and trace levels of hydrogen sulfide. The methane in biogas can be burned to produce electricity, usually with a reciprocating engine or microturbine.
The gas is often used in a cogeneration arrangement, where electricity is generated and the waste heat is used to warm the digesters or to heat buildings. Excess electricity can be sold to suppliers or put into the local grid. Electricity produced by anaerobic digesters is considered to be green energy and may attract subsidies. Since the gas is not released directly into the atmosphere and the carbon dioxide comes from an organic source with a short carbon cycle, biogas does not contribute to increasing atmospheric carbon dioxide concentrations; because of this, it is considered to be an environmentally friendly energy source. The production of biogas is not a steady stream; it is highest during the middle of the reaction. In the early stages of the reaction, little gas is produced because the number of bacteria is still small. Toward the end of the reaction, only the hardest-to-digest materials remain, leading to a decrease in the amount of biogas produced. Digestate can come in three forms: fibrous, liquor, or a sludge-based combination of the two fractions. In two-stage systems the different forms of digestate come from different digestion tanks. In single-stage digestion systems the two fractions will be combined and, if desired, separated by further processing. Fibrous (acidogenic) digestate The second by-product (acidogenic digestate) is a stable organic material composed largely of lignin and chitin, but also of a variety of mineral components in a matrix of dead bacterial cells; some plastic may be present. This resembles domestic compost and can be used as compost or to make low-grade building products such as fibreboard. Liquor (methanogenic) digestate The third by-product is a liquid (methanogenic digestate) that is rich in nutrients and can be used as a fertiliser, depending on the quality of the material being digested. Levels of potentially toxic elements (PTEs) should be chemically assessed. This will depend upon the quality of the original feedstock. In the case of most clean and source-separated biodegradable waste streams the levels of PTEs will be low. In the case of wastes originating from industry, the levels of PTEs may be higher and will need to be taken into consideration when determining a suitable end use for the material. The final output from anaerobic digestion systems is water. This water originates both from the moisture content of the original waste that was treated and from water produced during the microbial reactions in the digestion systems. This water may be released during the dewatering of the digestate or may be inherently separate from the digestate. It will typically contain high levels of BOD and COD that will require further treatment prior to being released into water courses or sewers. This can be achieved by oxygenation of the end effluent in tanks associated with the digesters. The first and foremost issue when considering the implementation of anaerobic digestion systems is the feedstock. Digesters can typically accept any biodegradable material; however, the level of putrescibility is the key factor. The more putrescible the material, the higher the gas yields possible from the system. The anaerobes can break down material with varying degrees of success, from readily in the case of short-chain carbohydrates such as sugars, to over longer periods of time in the case of cellulose and hemicellulose. Anaerobic microorganisms are unable to break down long-chain woody molecules such as lignin.
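To put rough numbers on the process described above, the following minimal sketch in Python chains the simplified glucose equation quoted earlier with the biogas-to-electricity conversion performed by a reciprocating gas engine. The molar mass and molar gas volume are standard physical constants; the methane heating value and the engine's electrical efficiency are assumed, plant-specific figures that are not taken from this article, so treat the output as an order-of-magnitude illustration rather than a design calculation.

```python
# Order-of-magnitude sketch: theoretical methane yield from glucose using the
# simplified overall reaction C6H12O6 -> 3 CO2 + 3 CH4, followed by a rough
# conversion of that methane to electricity in a reciprocating gas engine.
# Real digesters convert only part of the feedstock and divert some carbon to
# new biomass, so actual yields are lower than this theoretical figure.

MOLAR_MASS_GLUCOSE_G = 180.16   # g/mol
MOLAR_VOLUME_L = 22.414         # litres of ideal gas per mol at 0 degC, 1 atm
CH4_LHV_MJ_PER_M3 = 36.0        # assumed lower heating value of methane
ENGINE_ELECTRICAL_EFF = 0.38    # assumed electrical efficiency of the engine


def theoretical_ch4_m3(glucose_kg: float) -> float:
    """Theoretical CH4 volume (m3 at STP) from a mass of glucose (kg)."""
    moles_glucose = glucose_kg * 1000.0 / MOLAR_MASS_GLUCOSE_G
    moles_ch4 = 3.0 * moles_glucose              # 3 mol CH4 per mol glucose
    return moles_ch4 * MOLAR_VOLUME_L / 1000.0   # litres -> m3


def electricity_kwh(ch4_m3: float) -> float:
    """Rough electrical output (kWh) from burning a volume of methane (m3)."""
    electrical_mj = ch4_m3 * CH4_LHV_MJ_PER_M3 * ENGINE_ELECTRICAL_EFF
    return electrical_mj / 3.6                   # 3.6 MJ per kWh


if __name__ == "__main__":
    feed_kg = 100.0  # hypothetical glucose-equivalent feed
    ch4 = theoretical_ch4_m3(feed_kg)
    print(f"{feed_kg:.0f} kg glucose-equivalent -> about {ch4:.1f} m3 CH4 (theoretical)")
    print(f"which could yield roughly {electricity_kwh(ch4):.0f} kWh of electricity")
```

In practice, yields are usually reported per tonne of volatile solids and measured empirically, since real feedstocks are far more heterogeneous than pure glucose.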
Anaerobic digesters were typically designed for operation using sewage sludge and manures. Sewage and manure are not, however, the materials with the most potential for anaerobic digestion, as much of the energy content of the biodegradable material has already been taken out by the animal which produced it. A second consideration related to the feedstock is moisture content. The wetter the material, the more suitable it will be for handling with pumps instead of screw presses and other physical means of movement. Also, the wetter the material, the more volume and area it takes up relative to the levels of gas that are produced. The level of contamination of the feedstock material is a key consideration. If the feedstock to the digesters has significant levels of physical contaminants such as plastic, glass or metals, then pre-processing will be required in order for the material to be used. If this material is not removed, the digesters can become blocked and will not function efficiently. It is with this logic in mind that mechanical biological treatment plants based on anaerobic digestion are designed. Anaerobic digestion systems can be designed to operate in a number of different configurations: - Batch or continuous - Temperature: Mesophilic or thermophilic - Solids content: High solids or low solids - Complexity: Single stage vs multistage Batch or continuous A batch system is the simplest form, where the biomass is added to the reactor at the beginning and the reactor is sealed for the duration of the process. Batch reactors can suffer from odour issues, which can be a severe problem during emptying cycles. Typically, biogas production will follow a roughly normal distribution pattern over time. The operator can use this fact to determine when they believe the process of digestion of the organic matter has been completed. Bioreactor landfills and anaerobic lagoons can take this form. In the continuous process, which is the more common type, organic matter is constantly added, or added in stages, to the reactor. Here the end products are constantly or periodically removed, resulting in constant production of biogas. Examples of this form of anaerobic digestion include UASB, EGSB and IC reactors. There are two conventional operational temperature levels for anaerobic digesters, which are determined by the species of methanogens in the digesters: - Mesophilic digestion, which takes place optimally around 37°-41°C, or at ambient temperatures between 20° and 45°C, where mesophiles (mesophilic archaea) are the primary microorganisms - Thermophilic digestion, which takes place optimally around 50°-52°C, at elevated temperatures up to 70°C, where thermophiles (thermophilic archaea) are the primary microorganisms Methanogens belong to the archaea, a primitive group of microorganisms. This group includes species that grow in the hostile conditions of hydrothermal vents. These species are more resistant to heat and can therefore operate at thermophilic temperatures, a tolerance that is unusual among microorganisms. Typically there are a greater number of species of mesophiles present in mesophilic digestion systems. These organisms are more tolerant of changes in environmental conditions than thermophiles. Mesophilic systems are therefore considered to be more stable than thermophilic digestion systems. As mentioned above, thermophilic digestion systems are considered to be less stable, but the increased temperatures facilitate faster reaction rates and hence faster gas yields. Operation at higher temperatures also facilitates greater sterilisation of the end digestate.
In countries where legislation, such as the Animal By-Products Regulations in the European Union, requires end products to meet certain levels of bacteria in the output material, this may be a benefit. A drawback of operating at thermophilic temperatures is that more heat energy input is required to achieve the correct operational temperatures. This increase in energy may not be outweighed by the increase in biogas output from the system, hence it is important to consider an energy balance for these systems. Typically there are two different operational parameters associated with the solids content of the feedstock to the digesters: High-solids digesters process a thick slurry that requires more energy input to move and process the feedstock. They will typically have a lower land requirement due to the lower volumes associated with the moisture. Low-solids digesters can transport material through the system using pumps that require significantly lower energy input. Low-solids digesters will require a larger amount of land than high-solids digesters due to the increased volumes, but there are benefits associated with operation in a liquid environment, which enables more thorough circulation of materials and contact between the bacteria and their food. Digestion systems can be configured with different levels of complexity: - One stage or single stage - Two stage or multistage A single-stage digestion system is one in which all of the biological reactions occur within a single sealed reactor. This gives benefits associated with lower construction costs, but there is less control of the reactions occurring within the system. In a two-stage or multi-stage digestion system, different digestion vessels are optimised to bring maximum control over the bacterial communities living within the digesters. Typically hydrolysis, acetogenesis and acidogenesis occur within the first reaction vessel. The organic material is then heated to the required operational temperature (either mesophilic or thermophilic) prior to being pumped into a methanogenic reactor. Acidogenic bacteria produce organic acids and grow and reproduce more quickly than methanogenic bacteria, whereas methanogenic bacteria require a stable pH and temperature in order to optimise their performance. The residence time in a digester varies with the amount and type of feed material, the configuration of the digestion system and whether it is one-stage or two-stage. In the case of single-stage thermophilic digestion, residence times may be in the region of 14 days, which is relatively fast. The plug-flow nature of some of these systems means that the full degradation of the material may not have been realised in this timescale; in this event, digestate exiting the system will be darker in colour and will typically have more odour. In two-stage mesophilic digestion, residence time may vary between 15 and 40 days. In the case of mesophilic UASB digestion, hydraulic residence times can be short (1 hour to 1 day) and solids retention times can be up to 90 days. In this manner the UASB system is able to separate solids and hydraulic retention times through the use of a sludge blanket. Continuous digesters have mechanical or hydraulic devices, depending on the level of solids in the material, to mix the contents, enabling the bacteria and the food to be in contact. They also allow excess material to be continuously extracted to maintain a reasonably constant volume within the digestion tanks.
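The residence times quoted above translate directly into reactor size. The sketch below is a minimal first-pass sizing calculation, assuming a simple continuously fed tank where working volume equals daily feed flow multiplied by hydraulic retention time; the flow rate used is a hypothetical example value, not a figure from this article, and real designs add headspace, mixing allowances and checks on organic loading rate.

```python
# First-pass digester sizing from hydraulic retention time (HRT):
#   working volume = daily feed flow x HRT
# A simple continuously fed, fully mixed tank is assumed.

def working_volume_m3(feed_m3_per_day: float, hrt_days: float) -> float:
    """Return the working volume (m3) needed for a given flow and HRT."""
    return feed_m3_per_day * hrt_days

if __name__ == "__main__":
    feed = 25.0  # m3 of slurry per day, hypothetical farm-scale example
    for label, hrt in (("single-stage thermophilic (~14 d)", 14),
                       ("two-stage mesophilic (~30 d)", 30)):
        print(f"{label}: about {working_volume_m3(feed, hrt):,.0f} m3 working volume")
```

The short hydraulic retention times quoted for UASB reactors are possible precisely because the sludge blanket decouples solids retention from hydraulic retention, so the same flow needs far less tank volume.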
Many digestion plants have ancillary processes to treat and manage the by-products. These systems can include: - Biogas refinement - Digestate maturation - Effluent treatment Biogas may require further treatment, cleaning or 'scrubbing' to refine it for other uses. Hydrogen sulphide is a toxic product of the anaerobic decomposition of sulphates contained within the input feedstock, and is released as a trace component of the biogas. National environmental enforcement agencies such as the US EPA or the English and Welsh Environment Agency put strict limits on the levels of gases containing hydrogen sulphide. The US EPA has mandated that industrial facilities may not burn any fuel gas that contains more than 160 ppm by volume (0.016 percent by volume) of hydrogen sulfide. Therefore, if the levels of hydrogen sulphide in the gas are high, gas scrubbing and cleaning equipment (such as amine gas treating) will be needed to process the biogas to within regionally accepted levels (a simple check of a measured concentration against this kind of limit is sketched at the end of this passage). If siloxanes are present in the gas, they will adversely affect gas engines. The siloxane forms mineralised deposits on the physical elements of the engine, which increase wear and tear. Therefore, increased levels of siloxane will require greater attention to the maintenance of the gas engine; over certain threshold levels the gas will not be suitable for processing in the gas engine at all. In countries such as Switzerland, Germany and Sweden the methane in the biogas may be concentrated in order for it to be used as a vehicle transportation fuel, or alternatively input directly into the gas mains. In countries where the driver for the utilisation of anaerobic digestion is renewable electricity subsidies, this route of treatment is less likely, as energy is required in this processing stage, reducing the overall amount available to sell. Digestate typically contains elements such as lignin that cannot be broken down by the anaerobic microorganisms. The digestate may also contain ammonia, which is phytotoxic and will hamper the growth of plants if it is used as a soil-improving material. For these two reasons a maturation or composting stage may be employed after digestion. Lignin and other materials are then available for degradation by aerobic microorganisms such as fungi, helping to reduce the overall volume of the material for transport. During this maturation the ammonia will be broken down into nitrates, improving the fertility of the material and making it more suitable as a soil improver. The wastewater exiting the anaerobic digestion facility will typically have elevated levels of BOD and COD. Some of this material is termed 'hard COD', meaning it cannot be accessed by the anaerobic bacteria for conversion into biogas. If this effluent were put directly into watercourses, it would negatively affect them by causing eutrophication. As such, further treatment of the wastewater is often required. This treatment will typically be an oxidation stage in which air is passed through the water in sequencing batch reactors or similar aeration tanks. Consideration of suitability As with all industrial systems, to be economically viable, there must be a use, market or acceptable disposal point for the outputs of anaerobic digestion. Biogas can be sold or used in almost all parts of the world, where it can offset demand on fossil fuel stocks. Alternatively, biogas can be used to provide cheap sources of energy in the developing world and help reduce methane emissions to the atmosphere.
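For the hydrogen sulphide limit quoted earlier, the only arithmetic involved is a unit conversion. The sketch below simply converts a measured concentration in ppm by volume to percent by volume and compares it with the 160 ppm figure cited above; the measured value shown is a made-up example, and real compliance decisions would follow the applicable regulation and measurement standard rather than a script like this.

```python
# Convert a measured H2S concentration to percent by volume and compare it
# with the 160 ppmv limit quoted above (160 ppmv = 0.016 % by volume).

H2S_LIMIT_PPMV = 160.0

def ppmv_to_percent(ppmv: float) -> float:
    """Parts per million by volume -> percent by volume."""
    return ppmv / 10_000.0

def needs_scrubbing(measured_ppmv: float, limit_ppmv: float = H2S_LIMIT_PPMV) -> bool:
    """True if the raw biogas exceeds the limit and needs H2S removal."""
    return measured_ppmv > limit_ppmv

if __name__ == "__main__":
    measured = 450.0  # ppmv, hypothetical raw-biogas measurement
    print(f"{measured:.0f} ppmv = {ppmv_to_percent(measured):.4f} % by volume")
    print("scrubbing required" if needs_scrubbing(measured) else "within limit")
```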
Digestate liquor can be used as a fertiliser, supplying vital nutrients to soils. The solid, fibrous component of digestate can be used as a soil conditioner. This material can help boost the organic content of soils. In some countries, such as Spain, where there are many organically depleted soils, the markets for the digestate can be just as important as those for the biogas. Compared with alternatives such as composting, anaerobic digestion performs well, with higher renewable energy production and lower carbon emissions. Contribution to prevention of climate change Methane produced in anaerobic digestion facilities can be utilised to replace methane derived from fossil fuels. The carbon in biodegradable waste is part of the carbon cycle; as such, the carbon released from the combustion of biogas can be thought of as having been removed from the atmosphere by plants in the recent past, for instance within the last decade, and typically within the last growing season. If these plants are re-grown, as is the case with crops, it can be argued that such systems can be considered to be carbon neutral. This contrasts with the carbon in fossil fuels, which has been sequestered in the earth for millions of years. Furthermore, if the putrescible waste feedstock to the digesters were landfilled, it would break down naturally, and often anaerobically; in this case the gas may escape into the atmosphere. As methane is about twenty times more potent a greenhouse gas than carbon dioxide, this would be considerably more harmful. In this way, correctly engineered and utilised anaerobic digestion can be considered to be sustainable, and biogas can be considered to be a renewable fuel. - Circle Biodiesel & Ethanol Corporation - Activities include manufacturing and consulting related to methane digesters for production of biogas. - SEaB Energy Ltd - SEaB Energy Ltd is a designer, manufacturer & installer of renewable energy micro generation systems, specialising in anaerobic digestion & wind energy for small local installations. - Scientists Question EPA's Greenhouse Gas Emission Estimates, 28 June 2010 by azocleantech.com: "The approach the U.S. Environmental Protection Agency (EPA) uses to estimate greenhouse gas emissions from agricultural anaerobic lagoons that treat manure contains errors and may underestimate methane emissions by up to 65%, according to scientists". - "An interdisciplinary team of scientists from the University of Missouri evaluated the EPA and IPCC [Intergovernmental Panel on Climate Change] approach to estimate greenhouse emissions from anaerobic lagoons." They "documented errors in the approach, which the EPA and IPCC adapted from a method used to estimate methane production from anaerobic digesters." Additionally, the team "found that uncovered anaerobic lagoons were more efficient at converting waste to methane than predicted using literature based on digesters." - See the paper, An Evaluation of the USEPA Calculations of Greenhouse Gas Emissions from Anaerobic Lagoons. - USDA Makes a Move on Methane, 12 December 2009 by CQ Politics: "Agriculture Secretary Tom Vilsack said in a conference call from Copenhagen that his department and the dairy industry have reached an agreement to accelerate efforts to reduce the industry's greenhouse gas emissions 25 percent by 2020. The announcement is part of the Obama administration's continuing campaign to convince farmers they can benefit from an international agreement on climate change."
- 21-22 June 2011, Chicago, Illinois, USA: Biogas East & Midwest. (Themes: anaerobic digestion, biogas, co-digestion, farm waste, landfill gas)
- 14-16 September 2011, Leipzig, Saxony, Germany: I. International Conference on Biogas Microbiology. Organized by the Helmholtz-Zentrum für Umweltforschung - UFZ - Leipzig. (Themes: anaerobic digestion, biogas)
- 31 October 2011-2 November 2011, Madison, Wisconsin, USA: 11th Annual BioCycle Renewable Energy Conference. (Themes: anaerobic digestion, biogas, waste)
- 12-15 April 2010, San Diego, California, USA: 25th Annual BioCycle West Coast Conference 2010. (Themes: anaerobic digestion, composting, municipal solid waste)
- 5-6 May 2010, Bremen, Germany: Waste to Energy: International Exhibition & Conference for Energy from Waste and Biomass. (Themes: anaerobic digestion, biogas, biomass, bio-methane gas distribution, pyrolysis, sewage, waste-to-energy)
- 29-30 September 2010, Lyon, France: Biogaz Europe. (Themes: anaerobic digestion, biogas, biomethane, waste)
- Anaerobic digestion forum
- US Government Information Sheet: Methane from anaerobic digesters
- Anaerobic biodigester design for small tropical producers
- Low cost biodigester, Vietnam
- Appropedia article on home biogas systems
- Biogas Community on WikiSpaces
By M. Richard Eley

In my previous article, we explored the wacky life of hyphens. As we found out, a hyphen's main use is to break or join multiple-word compound terms. This time, we'll look at the eN dash. The eN dash, like its brother the eM dash, is named for the letter length of the dash in the same font. An eN dash is supposed to be as long as an upper case N, but this can vary due to font design, character proportional spacing, and Web browsers. In any case, an eN dash will fall somewhere between a hyphen and the longer eM dash (explored in Part 3 of this series). An eN dash is a little-used symbol, mainly due to its specific purpose, which is to break up a range of numbers, contest results, or scores. Many folks aren't even aware of the eN dash and often use hyphens instead—so you will see variation in where and how it is used, even among major publishers and publications. The eN dash should be used for number ranges, contest results, or scores. It looks like this: pages 14–20, 2010–2012, the Senate voted 68–32, Bears over Raiders 47–28. Note: If a range is introduced with words like "from," "between," or "during," do not use any form of dash: "The years between 1967 and 1970 were exciting." It's a one-or-the-other rule—when dealing with numbers, either use an eN dash OR a range preposition. "The crates between 13–16 are the most valuable" is incorrect because it uses both an eN dash and "between." Sometimes, even though the usage is correct, the sentence's meaning becomes confusing. Take, for example: "Crates 13–16 are the most valuable." Does that mean we are talking about four crates: 13, 14, 15 and 16? Does it mean only two crates, 14 and 15, which are the two crates between 13 and 16? Or is 13–16 a lot number—with a dozen crates on pallet #13–16? I've seen this type of ambiguity show up more often than you'd imagine—even in science books, college textbooks, and technical manuals. If a tech manual says, "To disable unit, cut the wires to the terminals between 44 and 48," does that include terminals 44 and 48? Do you think a bomb disposal technician might be wondering the same thing? Express the correct meaning in clear and certain phrasing. Often when discussing a range of physical "things," the sentence will read smoother and clearer if you forego eN dashes in favor of a preposition: "The book volumes #13 through #16 have the most useful data," or "The contestants numbered between one and two hundred." Using a prepositional word looks better than "There were 180–220 people out there." It's easy to misconstrue "180–220" as an adjective modifier of "people" instead of a numeric range. However, when identifying a pair of numbers, not a range, the eN dash is usually clearer and less verbose. "Last night the Bears beat the Raiders: 47–28" or "The vote for adjournment failed, 55–11." In those examples, the reader is likely to understand from the context that we are discussing a comparison between two specific numbers, and not a range. An eN dash is the best choice for this use, unless your particular style is to craft the phrase in a more literal manner: "The vote failed: 55 were in favor, and 11 against." The important thing is to construct the sentence such that the meaning is unmistakable, no matter who reads it, or if it is used out of context or quoted. If you follow a particular stylebook—and this is highly suggested—it might have a rule about eN dash versus hyphen use.
It appears acceptable in most cases to use either an eN dash or a hyphen for compound adjectives before nouns: note the difference between "hyphen-separated" and "eNdash–separated." But hyphens place words slightly closer together than eN dashes do, and hyphens are considered the standard for compounding words. In Microsoft Word: hold down the CTRL key, then press the Minus key on the numeric keypad to create an eN dash. You must use the numeric keypad's Minus key, not the Hyphen key near the Backspace key on the top row. Using HTML/ALT codes, you can also hold down a PC's ALT key and type 8211 or 0150 on the keypad. (8211 is the HTML character code; 0150 is the Windows ANSI code, since the en dash is not part of the basic ASCII character set.) *Note: all codes must be typed on a regular numeric keypad, so they probably won't work using a laptop's embedded keypad.* On Macs, try OPT-HYPHEN to create an eN dash. In the next part of this series, we'll look at eM dashes. Until then, write on!
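For writers working in plain text or HTML rather than a word processor, the same three marks can also be reached through their Unicode code points. The short Python snippet below is purely illustrative; it prints the code point, the HTML numeric entity, and the Unicode name for the hyphen, the eN dash, and the eM dash.

```python
import unicodedata

# Quick reference for the three marks discussed in this series.
for ch in ("-", "\u2013", "\u2014"):          # hyphen-minus, en dash, em dash
    print(f"U+{ord(ch):04X}  &#{ord(ch)};  {unicodedata.name(ch)}  '{ch}'")

# In HTML the named entities &ndash; and &mdash; produce the same characters.
print("pages 14\u201320, the Senate voted 68\u201332")   # en dashes in a range and a score
```

Running it confirms that ALT+8211 and &#8211; both point at U+2013, the EN DASH.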
Some asteroids may have been like “molecular factories” cranking out life’s ingredients and shipping them to Earth via meteorite impacts, according to scientists who’ve made discoveries of molecules essential for life in material from certain kinds of asteroids and comets. Now it appears that at least one may have been less like a rigid assembly line and more like a flexible diner that doesn’t mind making changes to the menu. In January, 2000, a large meteoroid exploded in the atmosphere over northern British Columbia, Canada, and rained fragments across the frozen surface of Tagish Lake. Because many people witnessed the fireball, pieces were collected within days and kept preserved in their frozen state. This ensured that there was very little contamination from terrestrial life. “The Tagish Lake meteorite fell on a frozen lake in the middle of winter and was collected in a way to make it the best preserved meteorite in the world,” said Dr. Christopher Herd of the University of Alberta, Edmonton, Canada, lead author of a paper about the analysis of the meteorite fragments published June 10 in the journal Science. “The first Tagish Lake samples – the ones we used in our study that were collected within days of the fall – are the closest we have to an asteroid sample return mission in terms of cleanliness,” adds Dr. Michael Callahan of NASA’s Goddard Space Flight Center in Greenbelt, Md., a co-author on the paper. The Tagish Lake meteorites are rich in carbon and, like other meteorites of this type, the team discovered the fragments contained an assortment of organic matter including amino acids, which are the building blocks of proteins. Proteins are used by life to build structures like hair and nails, and to speed up or regulate chemical reactions. What’s new is that the team found different pieces had greatly differing amounts of amino acids. “We see that some pieces have 10 to 100 times the amount of specific amino acids than other pieces,” said Dr. Daniel Glavin of NASA Goddard, also a co-author on the Science paper. “We’ve never seen this kind of variability from a single parent asteroid before. Only one other meteorite fall, called Almahata Sitta, matches Tagish Lake in terms of diversity, but it came from an asteroid that appears to be a mash-up of many different asteroids.” By identifying the different minerals present in each fragment, the team was able to see how much each had been altered by water. They found that various fragments had been exposed to different amounts of water, and suggest that water alteration may account for the diversity in amino acid production. “Our research provides new insights into the role that water plays in the modification of pre-biotic molecules on asteroids,” said Herd. “Our results provide perhaps the first clear evidence that water percolating through the asteroid parent body caused some molecules to be formed and others destroyed. The Tagish Lake meteorite provides a unique window into what was happening to organic molecules on asteroids four-and-a-half billion years ago, and the pre-biotic chemistry involved.” If the variability in Tagish Lake turns out to be common, it shows researchers have to be careful in deciding whether meteorites delivered enough bio-molecules to help jump-start life, according to the team. “Biochemical reactions are concentration dependent,” says Callahan. “If you’re below the limit, you’re toast, but if you’re above it, you’re OK. 
One meteorite might have levels below the limit, but the diversity in Tagish Lake shows that collecting just one fragment might not be enough to get the whole story." Although the meteorites were the most pristine ever recovered, there is still some chance of contamination through contact with the air and surface. However, in one fragment, the amino acid abundances were high enough that the team could show, by analyzing their isotopes, that the amino acids were made in space. Isotopes are versions of an element with different masses; for example, carbon 13 is a heavier, and less common, variety of carbon. Since the chemistry of life prefers lighter isotopes, amino acids enriched in the heavier carbon 13 were likely created in space. "We found that the amino acids in a fragment of Tagish Lake were enriched in carbon 13, indicating they were probably created by non-biological processes in the parent asteroid," said Dr. Jamie Elsila of NASA Goddard, a co-author on the paper who performed the isotopic analysis. The team consulted researchers at the Goddard Astrobiology Analytical Lab for their expertise with the difficult analysis. "We specialize in extraterrestrial amino acid and organic matter analysis," said Dr. Jason Dworkin, a co-author on the paper who leads the Goddard laboratory. "We have top-flight, extremely sensitive equipment and the meticulous techniques necessary to make such precise measurements. We plan to refine our techniques with additional challenging assignments so we can apply them to the OSIRIS-REx asteroid sample return mission."
The Edict of Expulsion was an act of Edward I which expelled all Jews from the kingdom of England. To understand why Edward acted in this way, you have to go back in history. Biblical exhortations against the lending of money led to an attitude among the inhabitants of Christian Europe that the lending of money at interest was, at best, un-Christian and, at worst, sinful and evil. The Jewish religion attached no such stigma to lending money, and as a result many Jews offered that service to Christians. In the years following the Conquest of 1066 the Jews were an important part of Norman English society. The nobility of England were constantly in need of money, and as a result, they borrowed heavily from Jewish moneylenders. William the Conqueror recognized the importance of the Jewish moneylenders to Norman society, and offered them special protection under law. Jews were declared to be direct subjects of the king, not subjects of their local feudal lord. Because of this special status, however, English kings saw the Jewish moneylenders as a convenient source of funds. The king could levy taxes against Jews without needing the prior approval of Parliament. So when a king needed money - as they often did - he could simply levy a special tax on the Jews. This system would work as long as the Jews were allowed to accumulate money, but that was about to change. Throughout the period following the Norman invasion the medieval world underwent a gradual shift towards religious orthodoxy (emphasis on a single belief system), epitomized by the Fourth Lateran Council of 1215. The council, among other measures, required Jews and Muslims to wear special dress so that they could easily be distinguished from Christians. England enforced this proclamation by requiring Jews to wear a special badge. Church proclamations like those of the Fourth Lateran Council really gave official approval to attitudes that were already prevalent in medieval society. The large landowners resented their indebtedness to the moneylenders. Attitudes of religious persecution became more and more evident. Even before the Lateran Council, outbreaks of mob violence aimed at Jews were not uncommon in England; for example, in 1190 a mob killed hundreds of Jews in York. At the same time as attitudes of intolerance were becoming more common - and more acceptable to both the Church and the state - the emergence of the Italian system of merchant banking made the Jewish moneylenders less vital to the nobility. Measures of punitive taxation against the Jews became more common, with the result that there were fewer Jewish moneylenders with ready cash to lend. In 1275 the Statute of the Jewry banned all usury, even by Jews, and gave Jews 15 years to end their practice. Unfortunately, given prevailing attitudes towards Jews in trade, few avenues of livelihood were open to those affected by the Statute.

The Edict of Expulsion

These matters came to a head in 1287 when Edward I peremptorily seized all Jewish property and transferred all debts to his name. In other words, everyone who had previously owed money to a Jewish moneylender now owed it directly to Edward himself. On 18 July, 1290, Edward I issued what came to be called the Edict of Expulsion. The same day that the Edict was proclaimed, writs were sent to the sheriffs of most counties advising that all Jews in their counties had until 1 November to leave the realm. Any Jews remaining after this date were liable to be seized and executed.
To rub salt into the wound, a special tax on the Jews was agreed in Parliament. How many people were affected by the Edict of Expulsion? Records are inexact for this period, but it seems likely that about 3,000 Jews were forced to leave England. Edward's edict banishing the Jews was followed sixteen years later by his fellow Christian monarch in France, Philip le Bel, who expelled the Jews from his own kingdom. It was not until 1656 that Jews were allowed back into England. In the intervening period Jews were required to obtain a special license to visit the realm, though it seems very likely that some Jews resettled in England while keeping their religion secret.
Freezing rain, precipitation that melts as it falls and then passes through a shallow layer of freezing temperatures before freezing upon impact, makes regular, transitory appearances during Canadian winters. But, on Jan. 5, 1998, freezing rain began to fall and continued for six days without letup, crippling eastern Ontario, southern Quebec, and the Maritimes. Trees snapped, roofs collapsed, and high-voltage towers crumpled under the weight of a 5-to-7.5-cm veneer of ice. The electrical system failed, leaving 4 million people in frozen darkness for at least 36 hours. Thousands took refuge in emergency shelters, where many stayed for weeks. In the largest peacetime troop deployment in Canadian history, the army was called in to help. Storm-related claims totaled at least $2 billion. The ice storm caused 22 deaths in Quebec and 4 in Ontario. This animation shows the progression of the 1998 ice storm across southern Ontario, southern Quebec and into the Maritimes. A red line depicts the boundary between the warm and cold air masses. Labels show the dates described, while blue shading moves across the map to indicate the areas affected by freezing rain through the duration of the ice storm of January 1998. A caption at the end of the animation states "One meteorologist called it a bayou storm in an eastern Canadian winter." A menu leads viewers to specific information about the cost of the ice storm. In early January 1998, a southern jet stream picked up warm, moist air around the Gulf of Mexico, and then turned north toward Canada. There it collided with a stagnant cold air mass, forcing the warm air to rise. On January 5th, rain began to fall — cooling as it descended through the cold air and freezing upon impact. The ice storm continued unabated for six straight days, crippling much of eastern Ontario, southern Quebec and the Maritimes.
Diabetic peripheral neuropathy is a type of nerve damage that affects the nerves of the arms, legs, hands, and feet, causing symptoms such as pain, numbness, and tingling in the affected areas. According to the National Diabetes Information Clearinghouse, as many as 70% of people with diabetes eventually develop neuropathy. Pain from this condition is often difficult to treat, but researchers at the University of Virginia have recently made a discovery in mice that may shed light on how to effectively reduce nerve pain. Previous studies have indicated that a certain type of calcium channel (a structure that allows cells to communicate with one another) plays a role in the development of peripheral neuropathy pain. To investigate how these calcium channels contribute to neuropathy pain, researchers at the University of Virginia School of Medicine examined mice with neuropathy, Type 2 diabetes, and morbid obesity. They found that high levels of blood glucose change the structure of the calcium channels in such a way that the channels are forced open and calcium is released into the nerve cells. This overload of calcium causes the cells to become hyperactive, which in turn causes the characteristic symptoms of neuropathy such as tingling and pain. “Normally pain is useful information because it alerts us that there is a damaging effect — something happening to tissues. But this pain is typically without any obvious reason,” says researcher Slobodan M. Todorovic, MD, PhD. “It’s because nerves are being affected by high levels of glucose in the blood. So nerves start working on their own and start sending pain signals to the brain. It can be a debilitating condition that severely affects quality of life.” Dr. Todorovic and his colleague, Vesna Jevtovic-Todorovic, MD, PhD, showed that the pain from neuropathy could be reduced in the mice through the use of neuraminidase, a substance that naturally occurs in both animals and humans. The researchers note that this finding may help with the development of treatments not only for neuropathy pain, but for other conditions that cause chronic pain such as combat wound injuries or nerve damage from accidents. For more information, read the article “Discovery Shows the Way to Reverse Diabetic Nerve Pain” or see the study in the journal Diabetes. And for more on dealing with neuropathy pain, click here.
Policy, Practice and Pedagogy

Recent years have seen a rapid policy transformation from segregation to inclusion in the education of children with special educational needs in Ireland. This book investigates how resource teachers and class teachers interpret the policy and principles of inclusion and enact these in their practice. Based on a study of nine resource teachers and nine class teachers, each paired in a particular school, it includes material from both interviews and observations of practice, providing a detailed qualitative account of the actions and interactions of teaching/learning experiences. The findings provide valuable insights into how inclusion is understood, interpreted and experienced in the classroom. They will be of interest to all those who are active in the field of education for inclusion, particularly teachers and policymakers.

Chapter 3: Teachers' Practices and Pedagogy for Inclusion

Introduction

Devising policy through the formulation of legislation, directives and guidelines, defining inclusion and identifying principles of inclusive education are part of the process of educating people with special educational needs. An equally significant part of that process are the practices that provide appropriate education for the diversity of learners, and it is to teachers' constructions of such practices that this chapter turns. The purpose of the chapter is to present a review of the research on teachers' practices of inclusion for teaching children with special educational needs in mainstream settings. As such, the focus progresses from the what, how and why of policies, concepts and principles of inclusion to the what, how and why of teaching practices for inclusion. The chapter is structured in five sections to reflect key and interrelated aspects of teachers' practices and pedagogy as follows: special education specific pedagogies; co-teaching; teaching for collaborative learning; differentiation; and pedagogy for inclusion. Although each of these sections is discussed separately, their impact is intended to be cumulative in articulating perspectives from which the data garnered in the study relating to teachers' interpretations, intentions and enactments of inclusion might be analysed. A final section provides a brief account of the methodology employed in the study.

Special Education Specific Pedagogies

Regarding the pedagogical repertoire required by teachers to teach children with special educational needs in the mainstream setting, a synthesis of critiques provided by leading experts...
Commencement Level Lessons | Portrait of a Hero
One class period
Summer 1812: The Americans Invade (18 ½ minutes)
Spring 1813: The British Invade (7 minutes)
September 1813: Showdown on the Great Lakes (9 ½ minutes)
September 1813: The Americans Invade Canada – Again (7 ½ minutes)
Autumn 1814: Secession Threat in New England (9 minutes)
III: People, Places, and Environments
IV: Individual Development and Identity
V: Individuals, Groups, and Institutions
Canadian (Ontario) Concepts: Interactions and Interdependence
Canadian (Ontario) Specific Expectations – Seventh Grade:
- Describe the major causes and personalities of the War of 1812
- Explain key characteristics of life in English Canada from a variety of perspectives
- Describe the different groups of people
Students will be able to:
- understand and define the meaning of the words "hero" and "heroic"
- describe the characteristics of the heroes from American/Canadian history by observing details in pictures and listening to brief biographies of each figure
- What is a hero?
- What does a hero look like?
- What characteristics must someone have to be a hero?
- Who are some heroes from American/Canadian history and in what ways are they heroic? What can they teach you about being a hero?
The War of 1812 DVD
The War of 1812 Heroes worksheet (57.4 KB)
- The teacher will post a picture of a family member or mentor in the center of the chalkboard or somewhere visible so that all students can see.
- The teacher will explain to students why this person is a hero to them. Show the speech on Heroes, in the first program segment.
- While explaining, the teacher will add bits of information about their hero in a spider-web fashion around the image, detailing the reasons why heroism is warranted. Do this to model what the teacher will want the students to accomplish using historical figures.
- If students understand the teacher's example, move on; if not, the teacher might ask the students to complete a web of their own particular hero.
- The teacher will hand out the War of 1812 Heroes worksheet and ask students to begin filling in information about these three figures based on their pictures.
- After all thoughts are recorded, watch The War of 1812 segments that cover these three figures. After each section, allow writing time for each figure so students will not forget their thoughts.
- When all three figures are finished, allow for some sharing of ideas about each figure.
- Conclude the lesson by having the students write on the back of the paper which one historical figure is their hero and why. Collect each paper and grade it.
- For a scoring rubric, explain to students that they should have at least ten different ideas written about each figure totaling one point apiece, and five points for the final question. Full credit equals 35 points.
After the students view The War of 1812 segments on Sir Isaac Brock, Tecumseh, and General Andrew Jackson, they will fill in a chart on the circumstances in which each historical figure was made a hero.
Related PBS Resources
Explain how everyday heroes are just as heroic as famous ones and list heroic things they do. Design and present awards to everyday heroes you know, including yourself. www.pbs.org/parents/arthur/activities/acts/everyday_heroes.html
Honoring Heroes and History
Explore public sculptures and memorials. Discuss how artists' interpretations of history can influence perceptions of the past.
Investigate and define valor in the context of the Medal of Honor and extend the defining parameters of the award to incorporate a broader group of recipients. Explore historic controversies surrounding minority recipients of the award.
It started simply enough when life was simpler. If a crime were committed in Saxon times, a group of 11 freeholders might be called to consider the matter, witnesses called and the law applied. Rome refined the system, introducing a magistrate and a citizen judex. Laws were codified. Perhaps the only constant in the development of a jury system has been the number 12. The Scandinavians carried on with tribunals called Things, which met in groups of 12 (or multiples of 12) to administer or invent the laws. The Magna Carta of 1215 guaranteed the right to jury trial (though juries of 12 were well established by then). Today, finding a consensus is much more complex and difficult, where the object is to find a group of strangers who know as little as possible about the case and the parties involved. Enlightened spokesmen now argue for a "collection of wisdom" among people of diverse ages, classes and experience, thus fairly representing we, the people.
Irradiation is a technique used in food production. It can be used to kill bacteria that cause food poisoning, such as salmonella, campylobacter and E. coli. It also helps to preserve food and reduce food waste. During irradiation, food is exposed to electron beams, X-rays or gamma rays. The effect is similar to other preservation methods, such as pasteurisation or cooking. The appearance and texture of the food change less during irradiation than with other preservation methods. Irradiated food has been exposed to radiation but does not become radioactive itself.

Safety of irradiated food
Decades of research worldwide have shown that irradiation of food is a safe and effective way to:
- kill bacteria in foods
- extend the shelf life of food
In 2011, the European Food Safety Authority reviewed the evidence and confirmed again that food irradiation is safe.

How irradiation changes food
Irradiation changes food in similar ways to other preservation techniques, such as cooking, canning and pasteurisation. Some vitamins may be reduced, but this happens whenever foods are preserved or stored long-term. There is no evidence that any of the changes caused by food irradiation are a risk to the health of consumers. The law covering food irradiation states that irradiation can only be used where it is of benefit to the consumer. A company that wants to irradiate a food product has to be able to show that the benefits of irradiation outweigh any negative aspects. An example of the benefits of irradiation is reducing the risk of foodborne illness. This will vary between different foods and will mean that the use of food irradiation is more suitable for some foods than others.

Categories of foods that could be irradiated and sold
There are seven categories of food which may be irradiated in the UK:
- fruit
- vegetables
- cereals
- bulbs and tubers
- dried aromatic herbs, spices and vegetable seasonings
- fish and shellfish
- poultry
These categories of food can also be irradiated and used as ingredients in other food products.

Knowing that a food has been irradiated
Foods which have been irradiated must have one of the following on the food label:
- irradiated
- treated with ionising radiation
Where an irradiated food is used as an ingredient in another food, these words must appear next to the ingredient in the list of ingredients. If irradiated food is not pre-packed, these words must appear on a display or notice above or beside the container in which the food is placed.

How irradiation works
When food is irradiated, it absorbs energy. This absorbed energy kills the bacteria that can cause food poisoning, in a similar way that heat energy kills bacteria when food is cooked. Irradiation can also delay fruit ripening and help stop vegetables from sprouting. Once the irradiation treatment has stopped, the food quickly loses this absorbed energy, in the same way that cooked food quickly cools down.
Salt weathering involves the progressive loss of material on surfaces that are subjected to the cyclic re-crystallization of salts. It is a naturally occurring phenomenon that affects rocks most commonly in desert and coastal environments. Salt weathering also acts on man-made monuments and structures in arid as well as more temperate inter-tidal settings. The crystallization of salts on exposed surfaces produces expansive pressures and material flakes off of these surfaces to relieve the stress. Cyclic heating and cooling as well as wetting and drying drive and exacerbate the flaking, which is why salt weathering is the preferred term for this process. Salt weathering does not typically produce significant cracking within the mass of the element. However, salt weathered surfaces may be aesthetically unpleasing and if left unchecked, could result in significant loss of mass from an element. Photograph showing sodium sulfate mineralization (white deposits/red arrows) on a foundation stemwall affected by salt weathering. The green arrow indicates the original surface; the scale is in inches.
In any mechanical entity, whether it’s a bicycle or a biplane, the components most likely to fail are those subject to continuous wear and tear. It is the parts that endure the stress of motion, often located where two parts intersect. In the human body, it’s no different. While skin and bone can break from acute impact, the hinges of the human body where bones connect are at risk of little more than normal function and time. As a result, joint replacement is one of the more common types of prosthetic surgeries available in the United States. Surgeons have been working to replace missing or ailing body parts for millennia. Egyptian mummies have been found with wooden toes, fingers, and limbs attached with leather straps. Joint replacement, or arthroplasty as it is known in the surgical community, was experimented with but not successful until the 19th Century. In 1822, Dr. Anthony White, working out of Liverpool in the United Kingdom, was responsible for the first excision operation that removed the leg portion of a patient’s hip socket. While this preserved mobility, it left the connecting tissue perilously unstable. The first artificial implants and replacements were experimented with only a generation after Dr. White, but septic complications and infection rendered the vast majority of these operations unsuccessful. It wasn’t until 1890 that Dr. Themistocles Gluck completed the first successful joint replacement when he implanted the first artificial knee. In 1891, Dr. Gluck successfully performed a hip replacement. In both cases, the implants were made of ivory and fixed to the bone with nickel plates and screws. A seminal figure in surgery, largely responsible for the modernization of both the materials and methods used today, is Sir John Charnley. Charnley was an army surgeon during the Second World War, where he served in Cairo. His experiences during the war spurred his interest in prosthetics to improve the mobility and well-being of recovering soldiers. In the early 1960s, Charnley was given control of the surgical center at Wrightington Hospital in Lancashire. One of his major breakthroughs was to disprove the prevailing belief at the time that friction, which greatly inhibited the long-term viability of joint replacement, could only be reduced by fluids lubricating the interface of bones. Charnley showed that it was actually the friction coefficient of the bones themselves that reduced wear. With this knowledge, he sought out an ideal material, eventually settling on High-Molecular-Weight Polyethylene, an early form of plastic. This advancement not only allowed for longer-lasting implants, but it also enabled them to be manufactured mechanically at a lower cost. Charnley was knighted in 1977 for his contributions to medicine. Today, hip replacement surgery is one of the most commonly performed medical procedures in first-world countries. Joint replacement procedures allow millions of people around the globe to continue mobility into old age. Without the pioneering efforts of men like John Charnley, these advancements would not be possible.
Ship Trails Over the Eastern Pacific December 27, 2007 This visible satellite image, acquired at 2230Z (aka 22:30 UTC) on December 5, 2007, shows ship trails off the coast of California. Ship trails are low-level clouds that typically form within a layer of ocean stratus (high relative humidity) as exhaust from ships leave trails of cloud condensation nuclei -- the particles on which water vapor condenses. Over the eastern Pacific Ocean, condensation nuclei are limited since the marine air is relatively unpolluted. It follows that generic ocean stratus typically consist of relatively large water drops. That's because there's not much competition for available water vapor, so the diameters of cloud drops tend to be larger. Within the exhaust plume of ships, however, there are many more condensation nuclei that compete for available water vapor. Thus, ship trails contain abundant cloud drops that have smaller diameters compared to the surrounding ocean stratus. Clouds containing these small water drops backscatter (reflect) more sunlight than clouds comprised of larger drops. Thus, on visible satellite imagery, ship trails tend to be brighter than the surrounding low clouds. In other words, ship trails stand out from the surrounding ocean stratus.
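The droplet-size argument above can be made roughly quantitative. The sketch below is illustrative only: it assumes a fixed amount of liquid water shared among a smaller or larger number of droplets and uses the common two-stream estimate A ≈ τ/(τ + 7.7) for cloud albedo; none of the numbers come from the satellite image itself.

```python
import math

RHO_W = 1000.0     # liquid water density, kg/m^3
LWC = 0.3e-3       # assumed liquid water content, kg/m^3 (same for both clouds)
DEPTH = 300.0      # assumed cloud depth, m

def droplet_radius(n_per_cm3):
    """Droplet radius (m) when the fixed water content is shared among N droplets per cm^3."""
    n_per_m3 = n_per_cm3 * 1.0e6
    return (3.0 * LWC / (4.0 * math.pi * RHO_W * n_per_m3)) ** (1.0 / 3.0)

def albedo(n_per_cm3):
    r = droplet_radius(n_per_cm3)
    lwp = LWC * DEPTH                        # liquid water path, kg/m^2
    tau = 3.0 * lwp / (2.0 * RHO_W * r)      # cloud optical depth
    return tau / (tau + 7.7)                 # rough two-stream albedo estimate

for n in (50, 300):   # clean marine air vs. air seeded by ship-exhaust nuclei
    print(f"{n:4d} drops/cm^3 -> r ~ {droplet_radius(n)*1e6:4.1f} um, albedo ~ {albedo(n):.2f}")
```

With the same water content, going from about 50 to about 300 droplets per cubic centimetre shrinks the drops and raises the estimated albedo from roughly 0.6 to roughly 0.74, which is why the ship trails appear brighter than the surrounding stratus.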
The second post in this special thread of 3 devoted to Neil Armstrong's memory has to do with rocketry. Firstly, for completeness, we are going to study the motion of a rocket in "vacuum" according to classical physics. Then, we will deduce the relativistic rocket equation and its main properties.

CLASSICAL NON-RELATIVISTIC ROCKETS

The fundamental law of Dynamics, following Sir Isaac Newton, reads:

$$F = \frac{dp}{dt}$$

Suppose a rocket with initial mass $m_0$ and initial velocity $v_0 = 0$. It ejects propellant "gas" with "gas speed" $v_e$ relative to the rocket (particles of gas have a velocity $v - v_e$ with respect to the rest observer when the rocket moves at speed $v$; note that the relative speed of the gas with respect to the rocket is $-v_e$). Generally, this speed is also called "exhaust velocity" by engineers. The motion of a variable-mass body, or rocket, is given by the so-called Metcherski equation:

$$m\frac{dv}{dt} = -v_e\frac{dm}{dt} + F_{ext}$$

where $m = m(t)$ is the instantaneous mass of the rocket, $v = v(t)$ its velocity and $F_{ext}$ the external force. The Metcherski equation can be derived as follows: in a time $dt$ the rocket changes its mass and velocity, so $m \rightarrow m + dm$ and $v \rightarrow v + dv$, and the change in momentum of the rocket is equal to $(m+dm)(v+dv) - mv$, plus an additional term $(-dm)(v - v_e)$ carried away by the ejected gas. Therefore, the total change in momentum is:

$$dP = m\,dv + v_e\,dm = F_{ext}\,dt$$

Neglecting second-order differentials, and using the conservation of mass (we are in the non-relativistic case, so the mass lost by the rocket equals the mass of the ejected gas), this represents (with due care about the sign of the relative speed) the Metcherski equation we have written above. Generally speaking, the "force" due to the change in "mass" is called thrust. With no external force, the remaining equation relating thrust and velocity,

$$m\,dv = -v_e\,dm$$

can be easily integrated and thus we get Tsiolkovsky's rocket equation:

$$\Delta v = v - v_0 = v_e\ln\frac{m_0}{m}$$

Engineers usually speak about the so-called mass ratio $R = \frac{m}{m_0}$, although sometimes the reciprocal definition is also used for such a ratio, so be aware, and in terms of this the Tsiolkovsky equation reads:

$$\Delta v = -v_e\ln R$$

We can invert this equation as well, in order to get

$$R = \frac{m}{m_0} = \exp\left(-\frac{\Delta v}{v_e}\right)$$

Example: Calculate the fraction of mass of a one-stage rocket to reach the Earth's orbit. Typical values of $\Delta v \approx 8\,\mathrm{km/s}$ and $v_e \approx 3\text{–}4.5\,\mathrm{km/s}$ show that the mass ratio is roughly $R \approx 0.07\text{–}0.17$. Then, only on the order of 10% of the initial mass reaches the orbit, and the remaining mass is fuel.

Multistage rockets offer a good example of how engineering minds work. Engineers have discovered that a multistage rocket is more effective than a one-stage rocket in terms of maximum attainable speed and mass ratios. For the final n-stage launch system, the final velocity is the sum of the gains in velocity from each stage, so we obtain

$$\Delta v = \sum_{i=1}^{n}\Delta v_i$$

After the i-th step, the change in velocity reads

$$\Delta v_i = -v_{e,i}\ln R_i$$

where the i-th mass ratios are defined recursively as

$$R_i = \frac{m_{f,i}}{m_{0,i}}$$

that is, the final mass in the i-th step over the initial mass in that step, and we define the total mass ratio:

$$R = \prod_{i=1}^{n} R_i$$

If the average effective rocket exhaust velocity is the same in every step/stage, e.g. $v_{e,i} = v_e$ for all $i$, we get

$$\Delta v = -v_e\sum_{i=1}^{n}\ln R_i = -v_e\ln\left(\prod_{i=1}^{n}R_i\right) = -v_e\ln R$$

The influence of the number of stages, for a given exhaust velocity, on the final attainable velocity can be seen by evaluating these formulas for different numbers of stages; a short numerical sketch of the classical results is given a few paragraphs below.

We proceed now to the relativistic generalization of the previous rocketry. Total momentum is conserved, of course; writing the conservation law in the instantaneous rest frame of the rocket (where, to first order, the non-relativistic form applies), we get

$$M'\,dv' = -v_e\,dM'$$

where $dv'$ is the velocity increase of the rocket, with rest mass $M'$, in the instantaneous reference frame of the moving rocket S'. It is NOT equal to its velocity increase measured in the unprimed reference frame, $du$. Due to the addition theorem of velocities in SR, we have

$$u + du = \frac{u + dv'}{1 + \dfrac{u\,dv'}{c^2}}$$

where $u$ is the instantaneous velocity of the rocket with respect to the laboratory frame S.
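As promised above, here is a minimal numerical sketch of the classical Tsiolkovsky and staging formulas. The exhaust velocity and the stage mass ratios are illustrative values chosen for the example, not figures from the text.

```python
import math

def stage_delta_v(v_e, mass_ratio):
    """Classical Tsiolkovsky equation for one stage.

    v_e        : effective exhaust velocity (km/s here)
    mass_ratio : R = m_final / m_initial for that stage (R < 1)
    """
    return -v_e * math.log(mass_ratio)

V_E = 4.5                                          # km/s, illustrative chemical exhaust velocity

# Single-stage rocket in which 10% of the lift-off mass reaches orbit:
print(stage_delta_v(V_E, 0.10))                    # ~10.4 km/s

# Three-stage rocket, each stage with R_i = 0.3 (total mass ratio 0.3**3 = 0.027):
stages = [0.3, 0.3, 0.3]
print(sum(stage_delta_v(V_E, R) for R in stages))  # ~16.2 km/s
print(stage_delta_v(V_E, 0.3 ** 3))                # same number, as the summation formula predicts
```

With that classical baseline in hand, we return to the velocity-addition relation above and expand it for small $dv'$.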
We can perform a Taylor expansion of the denominator in the velocity-addition formula, in order to obtain:

$$u + du = (u + dv')\left(1 - \frac{u\,dv'}{c^2} + \cdots\right)$$

and finally, keeping only first-order differentials, we get

$$du = \left(1 - \frac{u^2}{c^2}\right)dv'$$

Plugging this equation into the above equation for mass (momentum), and integrating, we deduce that the relativistic version of Tsiolkovsky's rocket equation, the so-called relativistic rocket equation, can be written as:

$$\int_0^u \frac{du}{1 - u^2/c^2} = -v_e\int_{M'_0}^{M'}\frac{dM'}{M'} \quad\Longrightarrow\quad c\tanh^{-1}\left(\frac{u}{c}\right) = v_e\ln\frac{M'_0}{M'}$$

We can suppress the primes if we remember that all the data refer to the S'-frame (instantaneously), and rewrite the whole equation in the more familiar way:

$$\Delta v = c\tanh\left(\frac{v_e}{c}\ln\frac{M_0}{M}\right)$$

where the mass ratio is defined as before, $R = M/M_0$. Now, comparing the above equation with the rapidity/maximum velocity in the uniformly accelerated motion,

$$v = c\tanh\left(\frac{a\tau}{c}\right)$$

we get that the relativistic rocket equation can also be written in the next manner:

$$M = M_0\exp\left(-\frac{a\tau}{v_e}\right)$$

since we have in this case

$$\frac{a\tau}{c} = \frac{v_e}{c}\ln\frac{M_0}{M}$$

If the propellant particles move at the speed of light, e.g., they are "photons" or ultra-relativistic particles that move close to the speed of light, we have the celebrated "photon rocket". In that case, setting $v_e = c$, we would obtain that:

$$\frac{\Delta v}{c} = \tanh\left(\ln\frac{M_0}{M}\right)$$

and

$$\frac{\Delta v}{c} = \frac{M_0^2 - M^2}{M_0^2 + M^2}$$

where for the photon rocket (or the ultra-relativistic rocket) we have as well

$$\frac{M_0}{M} = \sqrt{\frac{1 + \Delta v/c}{1 - \Delta v/c}} = \exp\left(\frac{a\tau}{c}\right)$$

Final remark: instead of the mass ratio, it is sometimes more useful to study the ratio fuel mass/payload. In that case, we set $M_0 = M + m$ and the final mass equal to $m$, where M is the fuel mass and m is the payload. So, we would write

$$\frac{\Delta v}{c} = \tanh\left(\frac{v_e}{c}\ln\frac{M + m}{m}\right)$$

so then the ratio fuel mass/payload will be

$$\frac{M}{m} = \exp\left(\frac{c}{v_e}\tanh^{-1}\frac{\Delta v}{c}\right) - 1$$

which, for the photon rocket ($v_e = c$), reduces to $\frac{M}{m} = \gamma\left(1 + \frac{\Delta v}{c}\right) - 1$. We are ready to study the interstellar trip with our current knowledge of Special Relativity and rocketry. We will study the problem in the next and final post of this fascinating thread. Stay tuned!

Hi, everyone! This is the first article in a thread of 3 discussing accelerations in the background of special relativity (SR). They are dedicated to Neil Armstrong, first man on the Moon! Indeed, accelerated motion in relativity has some interesting and sometimes counterintuitive results, in particular those concerning interstellar journeys whenever their velocities are close to the speed of light (i.e. they "are approaching" c). Special relativity is a theory considering the equivalence of every inertial frame (reference frames moving with constant relative velocity are said to be inertial frames), as should be clear by now, after my relativistic posts! So, in principle, there is nothing said about the relativity of accelerations, since accelerations are not relative in special relativity (they are not relative even in Newtonian physics/Galilean relativity). However, this fact does not mean that we cannot study accelerated motion in SR. The kinematical framework of SR itself allows us to solve that problem. Therefore, we are going to study uniformly (a.k.a. constantly) accelerating particles in SR in this post!

First question: What does "constant acceleration" mean in SR? A constant acceleration in the S-frame would give any particle/object a superluminal speed after a finite time, as in non-relativistic physics! So, of course, that cannot be the case in SR. And it is not, since we studied how accelerations transform according to SR! They transform in a non-trivial way! Moreover, a force growing beyond all limits would be required for a "massive" particle (rest mass $m \neq 0$). Suppose this massive particle (e.g. a rocket, an astronaut, a vehicle,…) is at rest at the initial time $t = 0$, and it accelerates in the x-direction (to keep the analysis and the equations simple!). In addition, suppose there is an observer left behind on Earth (the S-frame), so Earth is at rest in the S-frame while the moving particle defines the instantaneous S'-frame.
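Returning for a moment to the rocket post that closes above: the classical and relativistic rocket equations are easy to compare numerically. This is a minimal sketch, using the text's mass-ratio convention $R = M/M_0$; the value $R = 0.1$ and the sample exhaust velocities are illustrative assumptions, not numbers from the text.

```python
import math

C = 299_792_458.0   # speed of light, m/s

def dv_classical(v_e, R):
    """Classical Tsiolkovsky delta-v, with mass ratio R = M/M0 (< 1)."""
    return -v_e * math.log(R)

def dv_relativistic(v_e, R):
    """Relativistic rocket equation: delta-v = c * tanh(-(v_e/c) * ln R)."""
    return C * math.tanh(-(v_e / C) * math.log(R))

R = 0.1   # illustrative: only 10% of the initial mass remains at burnout
for v_e in (4.5e3, 0.1 * C, C):   # chemical exhaust, mildly relativistic exhaust, photon rocket
    print(f"v_e = {v_e / C:6.4f} c : classical dv = {dv_classical(v_e, R) / C:6.3f} c, "
          f"relativistic dv = {dv_relativistic(v_e, R) / C:6.3f} c")
```

For the photon-rocket line ($v_e = c$, $M_0/M = 10$) the script reproduces $\Delta v/c = (M_0^2 - M^2)/(M_0^2 + M^2) = 99/101 \approx 0.98$, while the classical formula would (impossibly) give $\Delta v = c\ln 10 \approx 2.3\,c$.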
The main answer of SR to our first question is that we can only have a constant acceleration in the so-called instantaneous rest frame of the particle. We will call that acceleration "proper acceleration", and we will denote it by the letter $\alpha$. In fact, in many practical problems, especially those studying rocket-ships, the acceleration is generally given the same magnitude as the gravitational acceleration on Earth ($\alpha = g \approx 9.8\,\mathrm{m/s^2}$).

Second question: What are the observed accelerations in the different frames? If the instantaneous rest frame S' is an inertial reference frame during some tiny time interval, at the initial moment it has the same velocity as the particle (rocket,…) in the S-frame, but it is not accelerated, so the velocity in the S'-frame vanishes at that time:

$$v' = 0$$

Since the acceleration of the particle is, in the S'-frame, the proper acceleration, we get:

$$\frac{dv'}{dt'} = \alpha$$

Using the transformation rules for accelerations in SR we have studied, we get that the instantaneous acceleration in the S-frame is given by

$$a = \left(1 - \frac{v^2}{c^2}\right)^{3/2}\alpha$$

Since the relative velocity between S and S' is always the same as the moving particle's velocity in the S-frame, the following equation holds:

$$v_{rel} = v$$

We do know that

$$a = \frac{dv}{dt}$$

Due to time dilation,

$$dt' = dt\sqrt{1 - \frac{v^2}{c^2}}$$

so in the S-frame the particle moves with the velocity satisfying

$$\frac{dv}{dt} = \left(1 - \frac{v^2}{c^2}\right)^{3/2}\alpha$$

We can now integrate this equation

$$\int_0^v \frac{dv}{\left(1 - v^2/c^2\right)^{3/2}} = \int_0^t \alpha\,dt$$

The final result is:

$$v(t) = \frac{\alpha t}{\sqrt{1 + \left(\dfrac{\alpha t}{c}\right)^2}}$$

We can check some limit cases from this relativistic result for uniformly accelerated motion in SR.

1st. Short time limit: $\alpha t \ll c$ gives $v \approx \alpha t$. This is the celebrated non-relativistic result, with initial speed equal to zero (we required that hypothesis in our discussion above).

2nd. Long time limit: $\alpha t \gg c$ gives $v \approx c$. In this case, the number one inside the root is very tiny compared with the term depending on the acceleration, so it can be neglected to get $v \approx c$. So, we see that you cannot get a velocity higher than the speed of light within the SR framework at constant acceleration!

Furthermore, we can use the definition of relativistic velocity in order to integrate the associated differential equation, and to obtain the travelled distance as a function of $t$, i.e. $x(t)$, as follows

$$\frac{dx}{dt} = \frac{\alpha t}{\sqrt{1 + \left(\dfrac{\alpha t}{c}\right)^2}}$$

We can perform the integral with the aid of the following known result (see, e.g., a mathematical table, or use a symbolic calculator, or calculate the integral by yourself):

$$\int \frac{t\,dt}{\sqrt{1 + \left(\alpha t/c\right)^2}} = \frac{c^2}{\alpha^2}\sqrt{1 + \left(\frac{\alpha t}{c}\right)^2} + C$$

From this result, and the previous equation, we get the so-called relativistic path-time law for uniformly accelerated motion in SR:

$$x(t) = x_0 + \frac{c^2}{\alpha}\left(\sqrt{1 + \left(\frac{\alpha t}{c}\right)^2} - 1\right)$$

For consistency, we observe that in the limit of short times, the terms in the big brackets approach $1 + \frac{1}{2}\left(\frac{\alpha t}{c}\right)^2$, in order to get $x(t) \approx x_0 + \frac{1}{2}\alpha t^2$, so we obtain the non-relativistic path-time relationship with $v_0 = 0$. In the limit of long times, the term inside the brackets can be approximated by $\frac{\alpha t}{c}$, and then the final result becomes $x(t) \approx x_0 + ct - \frac{c^2}{\alpha} \approx ct$. Note that the velocity is never exactly equal to the speed of light; this result is a good approximation whenever the time is "big enough", i.e., it only works for "long times" asymptotically! And finally, we can write out the transformation of the acceleration between the two frames in an explicit way:

$$a = \left(1 - \frac{v^2}{c^2}\right)^{3/2}\alpha = \frac{\alpha}{\left[1 + \left(\dfrac{\alpha t}{c}\right)^2\right]^{3/2}}$$

Check 1: For short times, $a \approx \alpha$, i.e., the non-relativistic result, as we expected!

Check 2: For long times, $a \rightarrow 0$ while $v \rightarrow c$. As we could expect, the velocity increases in such a way that its rate of increase "saturates" and the speed of light is not surpassed. The fact that the speed of light cannot be surpassed or exceeded is the unifying "theme" throughout special relativity, and it rests on the "noncompact" nature of the Lorentz group due to the $\gamma$ factor, since it would become infinite at v=c for massive particles.
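The two laws just derived are easy to evaluate numerically. The sketch below assumes a proper acceleration of 1g and the 365-day year used later in this thread; the sample times are illustrative.

```python
import math

C = 299_792_458.0        # m/s
ALPHA = 9.8              # proper acceleration, m/s^2 (a "1 g" rocket)
YEAR = 365 * 86400.0     # seconds in a 365-day year
LY = C * YEAR            # one light-year in metres, with this year length

def v_of_t(t):
    """Velocity in the S-frame (Earth) after coordinate time t."""
    return ALPHA * t / math.sqrt(1.0 + (ALPHA * t / C) ** 2)

def x_of_t(t):
    """Distance travelled in the S-frame, starting from rest at x0 = 0."""
    return (C ** 2 / ALPHA) * (math.sqrt(1.0 + (ALPHA * t / C) ** 2) - 1.0)

for years in (0.1, 0.5, 1.0, 2.0, 5.0, 10.0):
    t = years * YEAR
    print(f"t = {years:5.1f} yr : v/c = {v_of_t(t)/C:6.4f}, x = {x_of_t(t)/LY:8.3f} ly")
```

Note how $v/c$ approaches 1 after a couple of years of Earth time, which is exactly the saturation discussed above; the naive Newtonian law $v = \alpha t$ would exceed $c$ after roughly one year.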
It is inevitable: as time passes, a relativistic treatment is indispensable, as the numbers above show. A table of values is also remarkable (it can easily be built with the formulae we have seen till now with any available software; a short sketch doing exactly that is given at the end of this post). Let us review the 3 main formulae up to this moment:

$$v(t) = \frac{\alpha t}{\sqrt{1 + \left(\dfrac{\alpha t}{c}\right)^2}} \qquad x(t) = x_0 + \frac{c^2}{\alpha}\left(\sqrt{1 + \left(\frac{\alpha t}{c}\right)^2} - 1\right) \qquad a(t) = \frac{\alpha}{\left[1 + \left(\dfrac{\alpha t}{c}\right)^2\right]^{3/2}}$$

We have calculated these results in the S-frame; it is also important and interesting to calculate the same quantities in the S'-frame of the moving particle. The proper time is defined as:

$$d\tau = dt\sqrt{1 - \frac{v^2}{c^2}}$$

We can perform the integral as before:

$$\tau = \int_0^t \frac{dt}{\sqrt{1 + \left(\dfrac{\alpha t}{c}\right)^2}}$$

Finally, the proper time (time measured in the S'-frame) as a function of the elapsed time on Earth (S-frame) and the acceleration is given by the very important formula:

$$\tau = \frac{c}{\alpha}\sinh^{-1}\left(\frac{\alpha t}{c}\right)$$

And now, let us recall that $\sinh^{-1} y = \ln\left(y + \sqrt{1 + y^2}\right)$; therefore we can write the above equation in the following way:

$$\tau = \frac{c}{\alpha}\ln\left(\frac{\alpha t}{c} + \sqrt{1 + \left(\frac{\alpha t}{c}\right)^2}\right)$$

Remember now, from our previous math survey, that $\sinh x = \frac{e^x - e^{-x}}{2}$, so we can invert the equation in order to obtain t as a function of the proper time, since:

$$t = \frac{c}{\alpha}\sinh\left(\frac{\alpha\tau}{c}\right)$$

Inserting this last equation into the relativistic path-time equation for the uniformly accelerated body in SR, we obtain:

$$x = x_0 + \frac{c^2}{\alpha}\left(\cosh\left(\frac{\alpha\tau}{c}\right) - 1\right)$$

Similarly, we can calculate the velocity-proper time law. The previous equations yield

$$v = \frac{\alpha t}{\sqrt{1 + \left(\dfrac{\alpha t}{c}\right)^2}} = \frac{c\sinh(\alpha\tau/c)}{\cosh(\alpha\tau/c)}$$

and thus the velocity-proper time law becomes

$$v = c\tanh\left(\frac{\alpha\tau}{c}\right)$$

Remark: this last result is compatible with a rapidity factor $\varphi = \frac{\alpha\tau}{c}$. Remark (II):

$$a = \frac{\alpha}{\cosh^3(\alpha\tau/c)}$$

From this, we can read the reason why we said before that constant acceleration is "meaningless" unless we mean or fix a certain proper time in the S'-frame: whenever we select a proper time, this last relationship gives us the "constant" acceleration observed from the S-frame after the transformation. Of course, from the S-frame, as this function shows, the acceleration is not "constant"; it is only "instantaneously" constant. We have to take care in relativity with the meaning of the words. Mathematics is easy and clear and, generally speaking, more precise than "words"; common language is generally fuzzy unless we can explain what we mean! As the final part of this log entry, let us summarize the time-proper time, velocity-proper time, acceleration-proper time-proper acceleration and distance-proper time laws for the S'-frame:

$$t = \frac{c}{\alpha}\sinh\left(\frac{\alpha\tau}{c}\right) \qquad v = c\tanh\left(\frac{\alpha\tau}{c}\right) \qquad a = \frac{\alpha}{\cosh^3(\alpha\tau/c)} \qquad x = x_0 + \frac{c^2}{\alpha}\left(\cosh\left(\frac{\alpha\tau}{c}\right) - 1\right)$$

My last paragraph in this post is related to expressing the acceleration in a system of units where space is measured in lightyears (we take c = 300000 km/s) and time in years (we take 1 yr = 365 days). It will be useful in the next 2 posts:

$$g \approx 9.8\,\mathrm{m/s^2} \approx 1.03\,\mathrm{ly/yr^2}$$

Another choice you can make is to use slightly more precise values of c and of the length of the year; the result is again about $1.03\,\mathrm{ly/yr^2}$, so there is not a big difference between these two cases with terrestrial-like gravity/acceleration.
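As promised above, here is the sketch that rebuilds the table from the proper-time laws. It assumes g = 9.8 m/s² and a 365-day year, as in the post; the particular proper times chosen for the rows are illustrative.

```python
import math

C = 299_792_458.0
ALPHA = 9.8              # proper acceleration, m/s^2
YEAR = 365 * 86400.0
LY = C * YEAR

def t_of_tau(tau):
    """Earth (S-frame) time as a function of proper time: t = (c/a) sinh(a tau / c)."""
    return (C / ALPHA) * math.sinh(ALPHA * tau / C)

def x_of_tau(tau):
    """Distance in the S-frame: x = (c^2/a) (cosh(a tau / c) - 1)."""
    return (C ** 2 / ALPHA) * (math.cosh(ALPHA * tau / C) - 1.0)

def v_of_tau(tau):
    """Velocity in the S-frame: v = c tanh(a tau / c)."""
    return C * math.tanh(ALPHA * tau / C)

print(f"{'tau (yr)':>8} {'t (yr)':>12} {'x (ly)':>12} {'v/c':>10}")
for yrs in (1, 2, 5, 10, 20):
    tau = yrs * YEAR
    print(f"{yrs:8.0f} {t_of_tau(tau)/YEAR:12.2f} {x_of_tau(tau)/LY:12.2f} {v_of_tau(tau)/C:10.6f}")
```

After ten years of proper time at 1g, for example, both the elapsed Earth time and the distance covered are on the order of $10^4$ years and light-years, which is the dramatic effect exploited in the interstellar-trip post.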
New study in Geology shows humans erode soil 100 times faster than nature

A new study shows that removing native forest and starting intensive agriculture can accelerate erosion so dramatically that in a few decades as much soil is lost as would naturally occur over thousands of years.

(Photo: Paul Bierman, UVM) These are floodwaters laden with suspended sediment during the peak discharge of the Hurricane Isabel flood on the Potomac River at Great Falls, Virginia, September 2003. Over 160,000 cubic feet per second of runoff, carrying sediment eroded from Piedmont riverbanks and farm fields upstream, submerged the falls. Floods of this magnitude recur about once a decade at Great Falls. New research by scientists at the University of Vermont and Imperial College, London, published in the February 2015 issue of the journal Geology, shows that the amount of eroded soil carried in rivers like this one increased dramatically in the wake of European forest-clearing and intensive agriculture in North America.

Had you stood on the banks of the Roanoke, Savannah, or Chattahoochee Rivers a hundred years ago, you'd have seen a lot more clay soil washing down to the sea than before European settlers began clearing trees and farming there in the 1700s. Around the world, it is well known that deforestation and agriculture increase erosion above its natural rate. But accurately measuring the natural rate of erosion for a landscape—and, therefore, how much human land use has accelerated this rate—has been a devilishly hard task for geologists. And that makes environmental decision-making—such as setting allowable amounts of sediment in fish habitat and land use regulation—also difficult. Now research on these three rivers, and seven other large river basins in the US Southeast, has, for the first time, precisely quantified this background rate of erosion. The scientists made a startling discovery: rates of hillslope erosion before European settlement were about an inch every 2,500 years, while during the period of peak land disturbance in the late 1800s and early 1900s, rates spiked to an inch every 25 years. "That's more than a hundred-fold increase," says Paul Bierman, a geologist at the University of Vermont who co-led the new study with his former graduate student and lead author Luke Reusser, and geologist Dylan Rood at Imperial College, London. "Soils fall apart when we remove vegetation," Bierman says, "and then the land erodes quickly." Their study was presented online on January 7, 2015, in the February issue of the journal Geology. Their work was supported by the National Science Foundation. "Our study shows exactly how huge an effect European colonization and agriculture had on the landscape of North America," says Dylan Rood. "Humans scraped off the soil more than 100 times faster than other natural processes!" Along the southern Piedmont from Virginia to Alabama—that stretch of rolling terrain between the Appalachian Mountains and the coastal plain of the Atlantic Ocean—clay soils built up for many millennia. Then, in just a few decades of intensive logging, and cotton and tobacco production, as much soil eroded as would have happened in a pre-human landscape over thousands of years, the scientists note. "The Earth doesn't create that precious soil for crops fast enough to replenish what the humans took off," Rood says. "It's a pattern that is unsustainable if continued." The scientists collected twenty-four sediment samples from these rivers—and then applied an innovative technique to make their measurements.
From quartz in the sediment, Bierman and his team at the University of Vermont's Cosmogenic Nuclide Laboratory extracted a rare form of the element beryllium, an isotope called beryllium-10. Formed by cosmic rays, the isotope builds up in the top few feet of the soil. The slower the rate of erosion, the longer soil is exposed at Earth's surface, and the more beryllium-10 it accumulates. Using an accelerator mass spectrometer at the Lawrence Livermore National Laboratory, the geologists measured how much beryllium-10 was in their samples—giving them a kind of clock to measure erosion over long time spans. These modern river sediments revealed rates of soil loss over tens of thousands of years. This allowed the team to compare these background rates to post-settlement rates of both upland erosion and downriver sediment yield that have been well documented since the early 1900s across this Piedmont region. While the scientists concluded that upland erosion was accelerated by a hundred-fold, the amount of sediment at the outlets of these rivers increased only about five to ten times above pre-settlement levels, meaning that the rivers were only transporting about 6% of the eroded soil. This shows that most of the material eroded over the last two centuries still remains as "legacy sediment," the scientists write, piled up at the base of hillslopes and along valley bottoms. "There's a huge human thumbprint on the landscape, which makes it hard to see what nature would do on its own," Bierman says, "but the beauty of beryllium-10 is that it allows us to see through the human fingerprint to see what's underneath it, what came before." "This study helps us understand how nature runs the planet," he says, "compared to how we run the planet." And this knowledge, in turn, can "help to inform land use planning," Bierman says. "We can set regulatory goals based on objective data about how the landscape used to work." Often, it is difficult to know whether conservation strategies—for example, regulations about TMDLs (total maximum daily loads) of sediment—are well fitted to the geology and biology of a region. In other words, an important unsolved mystery is: "How do the rates of human removal compare to 'natural' rates, and how sustainable are the human rates?" Rood asks. While this new study shows that erosion rates were unsustainable in the recent past, "it also provides a goal for the future," Rood says. "We can use the beryllium-10 erosion rates as a target for successful resource conservation strategies; they can be used to develop smart environmental policies and regulations that will protect threatened soil and water resources for generations to come."

Senior Communications Officer Joshua Brown | newswise
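The article describes the beryllium-10 logic qualitatively: slower erosion means longer exposure and more accumulated isotope. A standard way to turn a measured concentration into an erosion rate uses the steady-state relation ε = PΛ/(ρN). The sketch below is illustrative only; the production rate, attenuation length, density, and example concentrations are assumed textbook-style values, not numbers from the Geology paper.

```python
# Steady-state cosmogenic erosion-rate estimate (illustrative sketch only).
# Standard relation, neglecting radioactive decay:  N = P * L / (rho * eps)
# so  eps = P * L / (rho * N).

P_RATE = 4.0      # assumed 10Be production rate in quartz, atoms per gram per year
ATTEN = 160.0     # assumed attenuation length, g/cm^2
DENSITY = 2.7     # assumed rock density, g/cm^3

def erosion_rate_cm_per_yr(n_atoms_per_g):
    """Hillslope erosion rate (cm/yr) implied by a measured 10Be concentration."""
    return P_RATE * ATTEN / (DENSITY * n_atoms_per_g)

for n in (1.0e5, 5.0e5, 1.0e6):     # hypothetical concentrations, atoms per gram of quartz
    eps = erosion_rate_cm_per_yr(n)
    print(f"N = {n:8.1e} atoms/g -> {eps:10.2e} cm/yr  (one inch in ~{2.54/eps:9.0f} years)")
```

The inverse relationship is the key point: higher beryllium-10 concentrations in the river quartz imply slower long-term hillslope erosion, which is how the team recovered the slow pre-settlement background rates described above.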
Mathematics is a creative and highly interconnected discipline that has developed over centuries and provides the solution to some of history's most intriguing questions. It is essential to everyday life, critical to science, technology and engineering, and necessary for financial literacy. An education in mathematics therefore provides a foundation for understanding the world, the ability to reason mathematically, an appreciation of the beauty and power of mathematics, and a sense of enjoyment and curiosity about the subject. The national curriculum for mathematics sets out aims for all pupils.
Oxidation is one of the key factors that determine what a particular tea looks and tastes like. While all true teas are made from the same plant, how much oxidation they undergo determines how dark they are. For example, green teas undergo very minimal oxidation, while black teas are fully oxidized. This article will cover what oxidation is, why it happens, and how it affects different types of tea. What is Oxidation? Oxidation is a process through which tea leaves are exposed to the air in order to dry and darken, contributing to the flavor, aroma, and strength of different teas. Just as other fruits and plants, like apples or avocados, brown when exposed to oxygen, tea leaves go through a similar process after they are harvested. As tea leaves are oxidized, they undergo unique changes that influence their chemical composition. This is a key step in processing tea, with different levels of oxidation resulting in different varieties, including black, green, white, and oolong. Tea leaves that are fully oxidized will turn brown and black, while tea leaves that are not oxidized at all will remain green. Tea leaves that are partially oxidized, like the leaves of white and oolong teas, can vary in color from green to grey to black depending on their level of oxidation, along with other characteristics of the leaves such as size and harvest date. Oxidation vs. Fermentation While it's common for people to use the terms oxidation and fermentation interchangeably, they're actually different processes. Fermentation involves microbial activity, during which tea leaves begin to break down and decompose. Aged teas like pu-erh are an example of fermented tea. Other fermented foods and drinks include beer, yogurt, and kombucha. Oxidation, meanwhile, refers to the process of exposing tea leaves to oxygen in order to dry and darken them. Many types of tea undergo some form of oxidation, most notably black and oolong teas. Tea makers often use precise methods to start and stop the oxidation process, in order to finely control the level of oxidation present in each individual tea. Oxidation and Tea Different kinds of tea have different levels of oxidation, resulting in a wide variety of appearances and flavor profiles. Tea leaves begin to oxidize as soon as they are plucked, and the level of oxidation is a key factor in tea processing that results in different categories of tea. Oxidation begins when the tea leaves are harvested, and continues when the leaves are crushed, rolled, or tumbled, putting pressure on the tea leaves and allowing greater exposure to air on all parts of the tea leaf. Once the tea is oxidized to the appropriate level, the leaves are then "fixed" by exposing them to heat. This stops the oxidation process and prevents the tea from darkening further. Tea leaves can be pan-fired, steamed, baked, or sun-dried in order to provide the heat necessary to halt oxidation. While all teas are made from the same plant, Camellia sinensis, oxidation is largely responsible for the differences between different types of tea. In general, the longer tea leaves are allowed to oxidize, the darker and stronger the tea made from those leaves will be. Along with varietal, harvest date, and leaf size, oxidation is one of the primary factors that goes into determining different types of tea. Black Tea Oxidation Black teas are fully oxidized, resulting in a dark, rich cup of tea that is high in caffeine.
Black teas are often macerated during the oxidation process, allowing all parts of the tea leaves to be exposed to air and fully darken. Black teas are high in tannins, and brew up a reddish amber color. These teas are typically grown in countries such as India and China, but are popular all over the world. Because of their full oxidation and hearty, robust body, black teas often pair well with milk and sugar. Common types of black tea include unflavored teas such as English Breakfast and Assam, and flavored black teas such as Earl Grey and Masala Chai. Oolong Tea Oxidation Oolong teas are partially oxidized, and their oxidation level can vary widely between that of black and green teas. Lighter oolongs more similar to green teas and darker oolongs more similar to black teas are both common. Oolong teas can also have a wide variety of different flavor characteristics depending on their level of oxidation. Many oolongs are processed into a distinctive shape in which whole tea leaves are tightly rolled into small balls that gradually unfurl as the leaves are steeped. Because of this particular shape, many oolong teas can be infused multiple times, offering subtle differences of flavor with each successive infusion. Green Tea Oxidation Green teas are largely unoxidized, and undergo a heating process sometimes known as "killing the green" soon after harvesting in order to halt oxidation, resulting in a lighter, more mellow cup of tea. Green tea leaves are typically bright green, reflecting the original color of tea leaves after harvest. Chinese green teas are typically pan-fired in order to halt oxidation, while Japanese teas are typically steamed. White Tea Oxidation White teas, because of their minimal processing, undergo a small amount of oxidation as they dry. White teas are often composed of the finest downy buds and tips of the tea plant, which makes them particularly prized. Although white teas do not undergo the intentional oxidation processes of black or oolong teas, they do oxidize slightly as they are exposed to air during the drying process. These teas have a delicate floral character, and are usually low in caffeine. Purple Tea Oxidation Purple tea refers to a specific varietal of the tea plant, one whose leaves are purple instead of green. Many purple teas are processed in a way similar to oolong tea, resulting in a partially oxidized tea with a light, floral flavor that brews up a beautiful reddish-purple. Purple teas are very high in beneficial compounds found in purple and blue foods known as anthocyanins, which help to promote cellular health and protect the body against disease. Pu-erh Tea Oxidation Unlike other teas made from the Camellia sinensis plant, pu-erh teas are unique in that they are fermented and aged. While oxidation refers to the exposure of teas to air, fermentation refers to an aging process where tea leaves are broken down by microbial activity. Pu-erh teas may be aged anywhere from a few months to several years, and develop a distinctive rich, earthy taste through this process.
On February 27, 2007, the Stromboli Volcano underwent a strong eruption. According to the BBC News, two new craters opened on the volcano's summit, producing twin lava flows. One of those lava streams reached the sea the same day, sending up plumes of steam as the scalding lava touched the cool water. Although authorities did not anticipate an evacuation of the volcanic island, they restricted access to high-risk areas. According to Volcano Discovery, seismic activity, including rockfalls, continued for several days. On March 8, 2007, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA's Terra satellite captured these images. The larger image is an infrared-enhanced, "false-color" image in which bare ground is gray, water is dark blue, and vegetation is red. The inset image is a thermal infrared image showing energy that humans can't see but can sense as heat. Clouds, possibly mixed with steam, cover the new lava flow in the larger image, but in the inset, the hot flow makes a yellow glow northwest of the summit. The dark flow down the northwest side of the volcano separates the two areas of human settlement seen in this image. The towns of Piscita, Ficogrande, San Vincenzo, and Scari form the bright silver dots on the northeastern shore, while Ginostra is the small town on the western shore. Stromboli is a stratovolcano composed of alternating layers of hardened ash, lava, and volcanic rocks. Strong eruptions have been recorded at the volcano for more than 1,000 years. In 2002, a major eruption caused a small tsunami and damaged Stromboli Village on the north side of the island. Eruptions at Stromboli are not unusual. In fact, mild explosions and glowing lava flows are so frequent that the volcano has earned the name "Lighthouse of the Mediterranean," says the Smithsonian's Global Volcanism Program. The island of Stromboli is the tip of a massive underwater volcano. The island grows as the volcano continues to pump out fresh lava. As of March 6, the ongoing eruption had added a new 200-meter-wide delta that stretches 50-100 meters into the ocean, said Volcano Discovery. In early September 2007, Tanzania's Ol Doinyo Lengai Volcano erupted, sending a cloud of ash into the atmosphere. The volcanic plume appears pale blue-gray, distinct near the summit, and growing more diffuse to the south. The charcoal-colored stains on the volcano's flanks appear to be lava, but they are actually burn scars left behind by fires that were spawned by fast-flowing, narrow rivers of lava ejected by the volcano.
The process by which red blood cells are produced in the body is called erythropoiesis. Nutritional deficiencies resulting in a lack of iron, folic acid, vitamin B12, vitamin A or even vitamin C mean erythropoiesis is either less effective or results in the production of red blood cells that don't function properly. In particular, folic acid, vitamin B12 and iron play crucial roles in erythropoiesis.5 Importance of micronutrients as blood health builders Micronutrients are dietary components, often referred to as vitamins and minerals, which, although only required by the body in small amounts, are absolutely vital to development and overall well-being. The human body cannot produce micronutrients itself, so they must be derived from our diets. Iron is a key component of hemoglobin, the protein that transports oxygen through the blood in our bodies. If one doesn't have enough iron, the body cannot produce enough healthy oxygen-carrying red blood cells. Folic acid, on the other hand, is a type of B vitamin that aids in the production and repair of DNA, and works closely with vitamin B12 in producing red blood cells and in helping iron to function properly in the body. Vitamin B12 is also an essential nutrient in the B-complex. It is integral to numerous physiological processes such as red blood cell formation and bone marrow health. Another micronutrient integral to blood health is vitamin A, as it helps support red blood cell development. Vitamin A plays a role in the development of erythroblasts from progenitor stem cells, ensuring that the body can produce enough new red blood cells to replace those that die due to age. It also ensures that the developing red blood cells have access to the iron needed for hemoglobin. Though not regularly mentioned in the family of micronutrients essential to building blood health, vitamin C supports the body's ability to absorb iron. In individuals with poor dietary intakes of iron, vitamin C taken alongside an iron-rich diet helps to enhance iron absorption. Whether you are a developing child, a pregnant woman or an otherwise healthy adult, there is no better time than now to ensure you are getting the essential micronutrients you need to support excellent blood health and to help keep you fit and healthy overall. The information contained in this article is not intended or designed to diagnose, prevent, treat or provide a cure for any condition or disease, to ascertain the state of your health or be substituted for medical care. P&G encourages you to seek the advice of your doctor or healthcare professional if you have any questions or concerns arising from the information in this article. 5 Tremblay, S. "What Nutrients are needed for red blood cell production?" SFGate Healthy Eating website. https://healthyeating.sfgate.com/nutrients-needed-red-blood-cell-production-5131.html
Despite its name, water in the West Texas town of Big Spring is scarce. The town’s namesake spring dried up decades ago, and droughts in recent years have made the water situation there even worse. So, Big Spring had to get creative. In 2013, the city became the first in the U.S. to start treating, and then reusing, its waste water. Some people call this “toilet to tap” – it’s, understandably, not a favored term among water engineers. In regular systems, water is flushed down the toilet, or flows down the sink or shower drain, and ends up in a lake, a river, or out in the ocean. Southern Cal environmental engineer Amy Childress says those bodies of water are called “environmental buffers” because they’re a kind of safety net: if something goes wrong in treatment, officials can catch it because it hangs out for a while in some kind of reservoir. “The buffer is also talked about as a psychological barrier because it does distract us from where … the water came from,” Childress says. But not in Big Spring. After the initial treatment most cities do before dumping wastewater, Big Spring’s water goes through a second, special cleansing, then, it goes straight back to the tap without ever seeing the light of day in a reservoir. Water engineers call it “direct potable reuse.” One reason Big Spring was the first to do it on a citywide scale is because reservoirs there are constantly losing water in the West Texas heat. John Womack is operations manager at the The Colorado River Municipal Water District, and says, “Mother nature would take it away from you just as easy as she gave it to you. … We can’t control that.” Womack says the water district’s biggest enemy is evaporation. “Especially with all the wind we have out here, and all the heat we have in the summertime; this is a very arid climate,” Womack says. By skipping the reservoir stage, the district now saves 1.7 million gallons of water every day. So here’s how it works: The city first removes all the big, chunky stuff from the wastewater. From there, instead of going back into nature, the water goes through its second, special cleansing during which it goes through several other stages. The first is microfiltration. “The beauty of a microfilter is that they have the ability to filter down to one-tenth of a micron, which is a very small particulate,” Womack says. That’s about one-five hundredth the width of a human hair; this step helps get out most bacteria. But it can’t remove viruses, which is where the next step – reverse osmosis – comes in. “Reverse Osmosis is different than filtration. Water is being diffused by a cellulose membrane, so you are getting down to the atomic level. Individual molecules of water are going through that membrane and not allowing other molecules to go through,” Womack says. The final step of this “cleansing” is super-intense ultraviolet processing. The system adds in a small stream of hydrogen peroxide – the same stuff you might put on a cut that makes it sting. The peroxide gets shaken up and spread through the water. Then, the water enters the UV light chamber. “Inside each one of those reactors are 72 light tubes; they look like a fluorescent light bulb. So, no matter where you are, wherever the water is, it’s getting exposed to this UV light, and that reacts with the hydrogen peroxide, and it will kill or destroy anything left in the water,” Womack says. What comes out is pure H2O – no minerals, just good ol’ water. “You wanna taste some?” Womack asks, offering a sample of the final product. 
"It's 99.9 percent pure water. Cheers!" It tastes like … nothing. Some people say water doesn't have any flavor, but it often does taste like whatever minerals are in it. This water, though, truly has no taste at all. Ironically, it isn't what will ultimately flow through the faucets in Big Spring. "In all actuality, you could not drink this all the time, it is too pure," Womack says. "It would leach minerals out of your body." So, that super-pure water gets mixed with some of the reservoir water, gets treated again by the city, and then pumped to the taps. Of course, there's another complicating factor: the city's pipes are old and made out of iron. By the time the water gets to Brandi Mayo's restaurant, she says it's not so "super."
STRATEGIES TO LINK ESL AND TECHNOLOGY The use of technology can enhance the quality of teaching and learning in ESL classes, but there are important stages to follow with ESL learners. Teachers need to keep in mind that there is an additional level of language to cover in order to make ESL learners 'technology literate'. PRE-TECHNOLOGY LESSON This stage involves building the field and explicitly teaching the context as a means of preparation for the lessons to follow. Emphasis is placed on the use of language features, especially the use of nouns and proper nouns specific to the topic and the software application. * Teachers need to be mindful of the outcomes of a particular topic. * Allow the technology to support the content. * Keep in mind that ESL/technology lessons need to reinforce speaking, listening, reading, and writing skills or a combination of these skills. * Choose appropriate applications according to the outcomes that need to be achieved. * Prepare students prior to technology lessons by introducing vocabulary associated with the topic and exploring the field of the topic in the classroom by explicitly teaching the content, e.g. are the students reading a book and writing a book review prior to using Glogster to make an online poster for their book? * Explicitly teach vocabulary and computer jargon associated with the particular application to be used, e.g. glog (noun), Glogster (proper noun). * Show a completed work sample to the class. * Show step-by-step processes by using joint construction and teacher/peer modelling. * Explicitly teach instructional vocabulary, e.g. upload, save, post. TECHNOLOGY LESSON This stage is reflective of the students' ability to apply, remember, create and understand. * Revise and reinforce concepts taught during the pre-technology stage. * Reinforce language in terms of the content of the topic and the technology/software terms. * Ensure that students are familiar with verbal and written instructions. * Encourage peer teaching during lessons. * Incorporate aspects such as PURPOSE, AUDIENCE and CONTEXT so that students understand that there is a specific response expected from them. Start with small projects so that students can master the use of the application and use the associated language confidently. * Always have a back-up. Planning for technological failure is critical to success (David R W, Seven Technology Tips for the Classroom). The outcomes/aims to be achieved will determine the type of application to be used by the teacher. AIM: To reinforce speaking, listening and reading; to enable students to speak confidently without face-to-face interaction with an audience; to help students focus on pronunciation, tone, pitch etc. Suitable applications: VoiceThread, Audacity, Flip Share, Photo Story 3. AIM: To reinforce written skills, editing and proofreading; to reinforce use of correct language features. Suitable applications: Glogster, blogs, My-classes Journal, Photo Story 3, PowerPoint, Prezi. AIM: To reinforce sharing, group-related tasks and brainstorms. Applications: PiratePad, Bubbl.us.
AIM: To back up a traditional oral presentation (use of palm cards, eye contact etc.) with pictures, slides, photos, videos and other visual stimuli. Applications: Prezi, PowerPoint, Photo Story 3, Flickr, Flip Share, digital photos, Glogster. POST-TECHNOLOGY LESSONS This stage is reflective of the students' ability to evaluate and analyse the completed work sample (self or peer evaluation). * Link the technology lesson to an oral presentation or some other sharing session. * Get students to self-evaluate/evaluate their composition in terms of specific outcomes or marking criteria. * Encourage students to evaluate using language-specific vocabulary, e.g. "My glog has too many animated graphics." * This stage involves a focus on responding to the text within a specific context.
Social psychologists unfortunately do not agree on the precise definition of an attitude. In fact, there are more than 100 different definitions of the concept. However, four definitions are more commonly accepted than others. One conception is that attitude is how positive or negative, favorable or unfavorable, or pro or con a person feels toward an object. This definition views attitude as a feeling or an evaluative reaction to objects. A second definition represents the thoughts of Allport, who views attitudes as learned predispositions to respond to an object or class of objects in a consistently favorable or unfavorable way. This definition is slightly more complicated than the first because it incorporates the notion of a readiness to respond toward objects. A third definition of attitude, popularized by cognitively oriented social psychologists, is: an enduring organization of motivational, emotional, perceptual and cognitive processes with respect to some aspect of the individual's world. This views attitudes as being made up of three components: (1) the cognitive or knowledge component, (2) the affective or emotional component, and (3) the conative or behavioral tendency component. More recently, theorists have given attention to a fourth definition of attitude, which has generated much research and has been useful in predicting behavior. This definition explicitly treats attitude as being multidimensional in nature, as opposed to the unidimensional emphasis taken by earlier definitions. Here, a person's overall attitude toward an object is seen to be a function of (1) the strength of each of a number of beliefs the person holds about various aspects of the object and (2) the evaluation he or she gives to each belief as it relates to the object. A belief is the probability a person attaches to a given piece of knowledge being true. This last definition has considerable appeal because it has been shown that consumers perceive a product (object) as having many attributes and they form beliefs about each of these attributes. For example, a consumer may believe strongly that Listerine mouthwash kills germs, helps prevent colds, gives people clean, refreshing breath and prevents sore throats. If this consumer evaluates all of these attributes as favorable qualities, then according to the definition he would have a strongly favorable overall attitude toward the brand. On the other hand, a second consumer might believe just as strongly as the first consumer that Listerine possesses all of these traits; however, she may not evaluate all the attributes as favorably as the first consumer does. Therefore her overall attitude toward the brand would be less favorable. It has been important to provide all four attitude definitions because the majority of attitude studies have been based on them. Characteristics of attitudes Attitudes have several important characteristics or properties, namely they (1) have an object; (2) have direction, intensity and degree; (3) have structure; and (4) are learned. Attitudes have an object By definition, attitudes must have an object, that is, they must have a focal point – whether it be an abstract concept, such as ethical behavior, or a tangible item, such as a motorcycle.
The object can be a physical thing, such as a product, or it can be an action, such as buying a lawnmower. In addition, the object can be either one item, such as a person, or a collection of items, such as a social group; it can also be either specific (Deutschmacher bologna) or general (imported meats).
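The multiattribute idea in the last definition above can be written compactly as: overall attitude = the sum over attributes of (belief strength × evaluation). The sketch below works through a Listerine-style comparison of two consumers with made-up numbers; the attribute list, belief scores and evaluation scale are illustrative assumptions, not survey data.

```java
/**
 * Toy illustration of the multiattribute (belief x evaluation) attitude model
 * described above: overallAttitude = sum_i beliefStrength_i * evaluation_i.
 * All scores are invented for illustration; they are not measured consumer data.
 */
public class AttitudeModelSketch {

    /** Computes the overall attitude score from parallel arrays of belief strengths and evaluations. */
    static double overallAttitude(double[] beliefStrengths, double[] evaluations) {
        double attitude = 0.0;
        for (int i = 0; i < beliefStrengths.length; i++) {
            attitude += beliefStrengths[i] * evaluations[i];
        }
        return attitude;
    }

    public static void main(String[] args) {
        // Hypothetical attributes: kills germs, prevents colds, freshens breath, prevents sore throats.
        // Belief strengths (0 = certainly false .. 1 = certainly true) are the same for both consumers.
        double[] beliefs = {0.9, 0.7, 0.95, 0.6};

        // Evaluations (-3 = very bad .. +3 = very good) differ between the two consumers.
        double[] evaluationsConsumerA = {+3, +3, +3, +3}; // rates every attribute as highly desirable
        double[] evaluationsConsumerB = {+3, +1, +2, +1}; // same beliefs, less favorable evaluations

        System.out.println("Consumer A attitude: " + overallAttitude(beliefs, evaluationsConsumerA));
        System.out.println("Consumer B attitude: " + overallAttitude(beliefs, evaluationsConsumerB));
        // Consumer A's score is higher, matching the point above that equal beliefs
        // combined with less favorable evaluations produce a less favorable overall attitude.
    }
}
```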
AIs are augmenting the capabilities of human decision-makers in several sectors. A key question is: when should a person trust an AI? To help people better understand when they should trust an AI's predictions, MIT scientists have created an onboarding technique that guides humans to develop an accurate understanding of the situations in which the machine is likely to make correct predictions and those in which it is not. The technique shows how the AI complements people's capabilities, thereby helping them make better decisions or reach conclusions faster while working with the AI. Hussein Mozannar, a graduate student in the Clinical Machine Learning Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Institute for Medical Engineering and Science, said, "We propose a teaching phase where we gradually introduce the human to this AI model so they can, for themselves, see its weaknesses and strengths. We do this by mimicking the way the human will interact with the AI in practice, but we intervene to give them feedback to help them understand each interaction they are making with the AI." Humans make decisions on complex tasks based on past interactions and experiences. That's why the scientists designed an onboarding process that provides representative examples of the human and the AI working together, which act as reference points the human can draw on in the future. The scientists first created an algorithm that can identify the examples that will best teach the human about the AI. Mozannar says, "We first learn a human expert's biases and strengths, using observations of their past decisions unguided by AI. We combine our knowledge about humans with what we know about AI to see where it will be helpful for humans to rely on AI. Then we obtain cases where we know the human should rely on the AI and similar cases where the human should not rely on the AI." The team tested their technique on a passage-based question-answering task: the user receives a written passage and a question whose answer is contained in the passage. The user then has to answer the question, or can click a button to "let the AI answer." The AI's answer is not visible in advance, so users need to rely on their mental model of the AI. The onboarding process they developed begins by showing these teaching examples to the user, who tries to make a prediction with the help of the AI system. The human may be right or wrong, and the AI may be right or wrong, but in either case, after solving the example, the user sees the correct answer and an explanation for why the AI chose its prediction. Mozannar said, "To help the user retain what they have learned, the user then writes down the rule they inferred from the teaching example. The user can later refer to these rules while working with the agent in practice. These rules also constitute a formalization of the user's mental model of the AI." When testing their technique on three groups of participants, the scientists found that: - 50 percent of the people who received training wrote accurate lessons of the AI's abilities. - Those who had accurate lessons were right on 63 percent of the examples. - Those who didn't have accurate lessons were right on 54 percent. - Those who didn't receive teaching but could see the AI answers were right on 57 percent of the questions. Mozannar said, "When teaching is successful, it has a significant impact. That is the takeaway here. When we can teach participants effectively, they can do better than if you gave them the answer." - Hussein Mozannar et al.
Teaching Humans When To Defer to a Classifier via Exemplars. arXiv: 2111.11297v2
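The selection idea Mozannar describes can be illustrated with a toy sketch: given rough estimates of how often the human and the AI get each kind of case right, rank the cases by how much deferring to the AI changes the expected outcome, in either direction, so the onboarding set contains both "rely on the AI" and "don't rely on the AI" examples. This is only an illustration of the stated idea, not the authors' actual algorithm; the case labels and accuracy numbers are invented.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/**
 * Toy illustration (not the paper's algorithm) of choosing teaching examples:
 * rank case types by the absolute gap between estimated human and AI accuracy,
 * so the most instructive cases in either direction come first.
 * The case labels and accuracy estimates below are invented for illustration.
 */
public class TeachingExampleSketch {

    record CaseType(String name, double humanAccuracy, double aiAccuracy) {
        double gainFromDeferring() { return aiAccuracy - humanAccuracy; }
    }

    public static void main(String[] args) {
        List<CaseType> cases = new ArrayList<>(List.of(
                new CaseType("short factual passages", 0.70, 0.92),
                new CaseType("questions needing outside knowledge", 0.80, 0.55),
                new CaseType("long multi-step passages", 0.60, 0.85),
                new CaseType("ambiguous wording", 0.65, 0.62)));

        // Sort by absolute accuracy gap so the most instructive cases (either direction) come first.
        cases.sort(Comparator.comparingDouble((CaseType c) -> Math.abs(c.gainFromDeferring())).reversed());

        for (CaseType c : cases) {
            String advice = c.gainFromDeferring() > 0 ? "rely on the AI" : "do not rely on the AI";
            System.out.printf("%-40s gap=%+.2f -> teach: %s%n", c.name(), c.gainFromDeferring(), advice);
        }
    }
}
```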
In this article, we will discuss HTML attributes, which are one of the essential topics in HTML. In HTML, attributes are used to provide additional information about elements. For most HTML elements, attributes are optional, but there are some elements where we have to provide them. Attributes are always specified within the opening tag of the HTML element, and they are specified in name and value pairs.
<img src="cat.jpg" alt="cat image">
In this example src="cat.jpg" and alt="cat image" are two attributes, where src and alt are attribute names and "cat.jpg" and "cat image" are attribute values. The alt attribute is optional, but src is mandatory because src specifies which image to show. There should be at least one space between two attributes, and the value of an attribute should be enclosed in quotes.
Some of the most important HTML element attributes:
<a> Element href Attribute
<a> anchor tags are used to create links on the web page, and the href attribute specifies the address of the link.
<img> src Attribute
src is a mandatory attribute that must be passed along with the <img> tag; it specifies the location of the image that is supposed to be displayed on the web page.
<img src="dog.jpg">
img Width and Height Attributes
In the image <img> tag, we can also pass the width and height attributes to size the image.
<img src="dog.jpg" width="500" height="700">
Here width="500" means the image will be 500 pixels wide.
alt Attribute
The alt attribute can be used with various HTML elements, but it is mostly used with the <img> element. If the browser fails to load the image, the text of alt will be displayed on the screen. A screen reader app can also read the alt information, so a visually impaired person can understand what the image shows.
<img src="cat.png" alt="black cat" height="200" width="300">
style Attribute
The style attribute is used to provide inline styling to HTML elements. It is an inline alternative to CSS. The style attribute can be applied to any HTML element, and it is mostly used to change an element's font size, colour, style, etc.
<h1 style="color:red">Welcome to TechGeekBuzz.</h1>
lang Attribute
This attribute is defined inside the <html> tag and describes the language of the document content.
title Attribute
This attribute can be used with various HTML elements, and you can see its value when you hover your mouse over the element content.
<h1 title="techgeekbuzz"> Hover over me! </h1>
Points to remember
While writing attributes, there are some points we need to keep in mind to keep our code clean:
- Always use quotes for attribute values. Bad: <a href=http://www.techgeekbuzz.com> Good: <a href="https://www.techgeekbuzz.com">
- Use lowercase characters for attribute names. Bad: <a HREF="https://www.techgeekbuzz.com"> Good: <a href="https://www.techgeekbuzz.com">
- Always leave one space before the next attribute. Bad: <img src="img.png"width="500"height="200"> Good: <img src="img.png" width="500" height="200">
If you run the bad code, it will give the same result as the good code, but it is always good practice to keep your code clean and systematic so you and other developers can read and understand it.
- HTML attributes provide additional information about HTML elements.
- Every HTML element can have attributes.
- There are some attributes which are specific to certain tags, and there are some attributes which can be used with multiple tags.
- Always use double or single quotes to represent attribute values.
- Always use lowercase characters for attribute names.
Solar panels are used extensively for generating electricity and power throughout the world. Though the initial investment is quite high, the fuel is cheap and abundant and produces hardly any pollution, unlike the burning of fossil fuels. Solar cell technology is quite old, and work on it started way back in the 1800s. The French physicist Antoine-César Becquerel is credited with early solar panel research, in 1839. While he was experimenting with a solid electrode dipped in an electrolyte solution he was able to see a photovoltaic effect: he saw a voltage develop when sunlight fell on the electrode. The creation of the first solar cell is credited to Charles Fritts, who used junctions created by coating a semiconductor with a very thin layer of gold. The earliest solar cells and panels were extremely inefficient, and the energy conversion achieved from sunlight stood under 1%. Russell Ohl was the inventor who created the first silicon solar cell in 1941. In 1954, three American researchers, Gerald Pearson, Calvin Fuller and Daryl Chapin, were able to create a solar panel that had an efficiency level of 6% in direct sunlight.
1.2 How CNC Machining Works CNC machining controls the motion of the machine's axes through a computer program. In the early days, engineers calculated and wrote the program instructions (G code) by hand. Nowadays, CAM (Computer Aided Manufacturing) software is used in conjunction with CAD (Computer Aided Design) to create the G code. The G code controls the movements and speeds of the machine's axes (up to five on a 5-axis machine) during processing. As a result, CNC machines can run faster and with high precision. CNC machining is a subtractive manufacturing process in which various precision cutting tools are used to remove raw material to make parts or products.
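As a rough illustration of what CAM output looks like, the sketch below emits a few lines of generic G code that trace a simple square path at a fixed depth. It is only a toy: real CAM systems account for tool geometry, feeds and speeds, and controller-specific dialects. The commands shown (G0 rapid move, G1 linear feed move, F feed rate) follow common RS-274 conventions, and all coordinates and rates are made-up values.

```java
/**
 * Toy CAM-style sketch: emit generic G code (RS-274 style G0/G1 moves) that traces
 * a small square at a fixed depth. Coordinates, feed rate and depth are invented;
 * a real post-processor would be tuned to a specific machine controller.
 */
public class GcodeSketch {

    public static void main(String[] args) {
        double size = 40.0;   // square side length in mm (illustrative)
        double depth = -1.5;  // cutting depth in mm (illustrative)
        int feed = 300;       // feed rate in mm/min (illustrative)

        System.out.println("G21 ; use millimetres");
        System.out.println("G90 ; absolute positioning");
        System.out.println("G0 Z5.0 ; lift tool to safe height");
        System.out.println("G0 X0.0 Y0.0 ; rapid move to start corner");
        System.out.printf("G1 Z%.1f F%d ; plunge to cutting depth%n", depth, feed);

        // Trace the four sides of the square with linear feed moves.
        double[][] corners = {{size, 0.0}, {size, size}, {0.0, size}, {0.0, 0.0}};
        for (double[] c : corners) {
            System.out.printf("G1 X%.1f Y%.1f F%d%n", c[0], c[1], feed);
        }

        System.out.println("G0 Z5.0 ; retract");
        System.out.println("M2 ; end of program");
    }
}
```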
Information is a powerful tool that can be used to create, destroy and heal. During the Covid19 pandemic, we urgently needed our public information to coordinate better between agencies and inform citizens. In every country, the need for access to precise and relevant information has never been greater. On the one hand, we have governments collecting more data than ever before; on the other, in times of crisis like we are experiencing now, people still don’t feel adequately informed about what is going on in their country. The dramatic increase in the number of people who need government data motivated software developers to help in the process of digitalization, but companies need open data, and most of the time, they couldn’t get access to it. The significance of access to information has been recognized by the UN and UNESCO, who have proclaimed 28 September as International Day for Universal Access to Information. This day was first established globally in 2015. Still, the actual focus on the importance of open data sets started last year when developers wanted to help the general public be more informed about recent events. That’s when some of them realized that their hands were tied. But let’s go a few steps back. What is open data? Open data is the idea that government-held data should be made available for free to all without restriction. Governments are making some of their information open by default. Usually, it can include anything from national statistics like GDP and unemployment rates to city-specific crime reports. Around the world, countries are opening up more and more datasets on an ongoing basis, but there is still a long way to go before the public can access everything they want or need. Why do we need open access to government data? Open access means that anyone can use or share government data without restrictions on usage, including commercial entities and individuals who don’t have a connection with any specific organization. Without open access, agencies are limited in developing new products for research, education, and innovation because they may be unable to efficiently reuse existing datasets due to copyright restrictions. What is the main problem with getting access to open data? When adopted, access to information laws play an essential role in upholding and protecting the public’s right to information, especially in situations of uncertainty when the demand for data is high. However, in countries where such laws are inadequate or limited, various implementation aspects are delayed, and governments fail to respond proactively to information requests. There may also be instances of inadequacy in terms of transparency. That’s why the theme of the 2021 International Day for Universal Access to Information will highlight the role of access to information laws and their implementation to build back trustworthy institutions for the public good and sustainable development.
Oh Charlie Workshop By Rosemary Martin This workshop is ideal for excluded children or children at risk of exclusion. Topics covered include: - Behaviour choices & others - Exploring Charlie's Behaviour - People Affected by Charlie's Behaviour - Reading Body Language - Behaviour Choices Impacting Others - Effects of Pretending - Behaviour Choices - Identifying Bullying - Identifying Inappropriate Behaviour Individuals will participate in a range of interactive activities, discussions and games that will equip them with the key components and attributes needed to develop and maintain solid emotional health and wellbeing. This workshop covers 2 full days. 1. Understanding choices and consequences 2. Labelling behaviours 3. Separating behaviour from person 4. Discovering the effects of following the crowd and lack of control over outcomes 5. The consequences of pretending not to care 6. Reading body language 7. Foreseeing possible outcomes Oh Charlie has been used in whole-school consultation and revision of behaviour and anti-bullying policy.
Wetter and colder winter weather means cases of head lice increase as we spend more time indoors in close proximity. Head lice (Pediculus capitis) affect only humans, and cannot be passed on to, or caught from, animals. Head lice are common in schoolchildren, particularly between the ages of 4 and 11, but anyone with hair can catch them. The lice are small wingless insects that feed on blood obtained by biting the scalp. These bites tend to be itchy, and this itchiness is caused by an allergy to the lice. Female lice lay their eggs at the hair roots, particularly near warm areas behind the ears and at the back of the neck. These eggs appear as tiny white grains and are best removed with a fine comb while the hair is wet. Infestation often causes itching of the scalp, but may also go unnoticed. If you suspect head lice, check the base of hairs for eggs and comb the hair over a piece of white paper; the lice will appear as pink or brown specks. Sometimes an infestation is marked by tiny red spots on the scalp. Lice may be visible in the hair behind the ears and at the nape of the neck, as these are favourite spots for infestations. Head lice are transferred by close hair-to-hair contact. They cannot jump, fly or swim, but walk from one hair to another. It is a misconception that head lice infestation is a result of dirty hair and poor hygiene. Lice can be a persistent and recurring nuisance, so it is important to treat them quickly and thoroughly. If one member of the family needs treatment, it is important to check the rest of the family and treat them if live lice are seen. WET COMBING METHOD Wet combing is used to remove lice without using chemical treatments. This method is helpful because head lice are growing increasingly resistant to the insecticides used to remove them. The following steps are best to follow: - Wash the hair as normal using an ordinary shampoo. - Apply conditioner liberally to wet hair (this causes lice to lose their grip on hair). - Comb the hair through with a normal comb first. With a fine-tooth nit comb (available in pharmacies), comb from the roots along the complete length of the hair and after each stroke check the comb for lice and wipe it clean. Work methodically over the whole head for at least 30 minutes. - Rinse the hair as normal. - Repeat every three days for at least two weeks. MEDICATED LOTION OR RINSE Only use a lotion if you find a living (moving) head louse. Apply the preparation according to the instructions, and remove the lice and eggs with a fine-toothed nit comb. Treatment should only be done once and then repeated seven days later. There is no need to wash clothing or bedding if they have come into contact with head lice. This is because head lice quickly die without a host to provide warmth and food. There is a vast array of treatments available in pharmacies. In recent years easy-to-use treatments such as Lyclear® Crème Rinse (which contains a lice comb in the pack) have proved popular as they involve only a 10-minute treatment. Lyclear® Crème Rinse can be used on children over 6 months. Lyclear has a pleasant smell and is suitable for asthmatics. ELECTRONIC LICE COMB RobiComb® is an example of an electronic lice comb. It detects and kills lice without the need for chemicals. It uses an AA battery. The best prevention is normal hair care and checking your and your family's hair and scalp periodically. If your child has long hair, tie it back as this helps to reduce the likelihood of contact between their hair and that of an infected child.
Regular combing of hair using the “wet combing” method (see above) can help with early detection as well as treatment. Repellants (available in pharmacies) may help to prevent head lice but effectiveness is unclear.
Question: History of the United Kingdom Introduction, timeline, one of the most important turning points in the history of Great Britain – the Magna Carta, conclusion - The history of the United Kingdom is full of interesting facts. - First, I will summarize important facts of British history. - Later on, I would like to talk about my favourite historic event from the history of the United Kingdom, the Magna Carta, and about the act of Habeas Corpus that has arisen from this document. - Before Celtic tribes arrived in Britain, the area had been inhabited by the Iberians. They came to England about 5000 BC and they were just small groups of hunters and gatherers. - On the other hand, the Celts, who arrived about 3000 years ago, were well organized. - On the territory of Great Britain lived around 150-200 different tribes. - Each tribe had an organization similar to the pattern of a five-levelled pyramid. On the top there was the King. The second level belonged to the Druids, who were divided into 3 types. - Priests – took care of religious matters (sacrifices etc.) - Magicians – basically carried out the function of healers - Bards – poets and singers - The third level belonged to the warriors. Important role – safety and expansion - The fourth level belonged to the commoners – daily needs – food preparation etc. - Lowest ranked in Celtic society were slaves. Usually they were captured warriors from other tribes. - Celtic society had a big influence on the language. Today, there are four old Celtic languages spoken in Great Britain and its neighbours – Welsh, Irish, Scottish and Breton. - It was approximately 43 AD when the Romans, carrying Latin with them, invaded the country and stayed there until the 5th century AD, when the Anglo-Saxons from Northern Europe pushed them out, bringing German with them. - In the 9th century Vikings invaded Great Britain and were fighting against the Anglo-Saxons until Alfred the Great, originally the king of the West Saxons, negotiated peace with them. - The reign of the Anglo-Saxons was ended by William the Conqueror, who defeated the Anglo-Saxons at the Battle of Hastings in 1066 and negotiated peace with the Vikings. - When William the Conqueror became King of England, feudalism and the French language became the dominant influence. - William the Conqueror also founded a lot of castles, for example the Tower of London, and was the first to be crowned in Westminster Abbey. - In 1215, King John signed the Magna Carta. This document restricted the power of the ruler and recognized the rights of the barons and freemen. According to some, it is considered to be the foundation of human rights in Great Britain. The Magna Carta became an essential part of English law. Centuries later, it formed the basis of the American Bill of Rights. - In the 15th century, two rival English noble houses wanted the crown. The symbol of both houses – the Lancastrians and the Yorkists – was a rose. Therefore, the conflicts between them were called the Wars of the Roses. After 30 years of warfare, the war was ended at the Battle of Bosworth, where the Lancastrian Henry Tudor defeated the Yorkist Richard III. However, by marrying the Yorkist princess Elizabeth, Henry brought the warring families together and brought peace to the country. - In the 16th century, Henry VIII wanted to divorce his first wife. Nevertheless, the Pope didn't allow this divorce. Therefore, Henry founded the Church of England, a church separate from the Catholic Church.
- In the end Henry VIII had six wives throughout his life. - The Tudor dynasty ended with Elizabeth I of England on the throne. Despite the fact that she was put on the throne to produce an heir and continue the Tudor dynasty, she never produced one and soon became known as "The Virgin Queen". - The Elizabethan Era was famous above all for the flourishing of English drama, led by playwrights such as William Shakespeare and Christopher Marlowe, and for the seafaring prowess of English adventurers such as Sir Francis Drake. - Thanks to massive colonization, the global influence of Britain was increasing and the area of the British Empire was getting bigger and bigger. Successful battles with France and Spain reinforced the power of the British Empire. Furthermore, the Industrial Revolution fuelled the prosperity of Britain. - The Victorian Era, from 1837 (named after the reign of Queen Victoria), continued this period of stability and economic growth. - During the Victorian Era, Queen Victoria was also the Empress of India. - In the 20th century Great Britain was involved in both World Wars and in both stood on the victorious side. - However, through World War I Britain suffered huge economic losses, and it suffered major bombing damage in World War II, but held out against Germany after the fall of France in 1940. - Nevertheless, despite the fact that Britain lost many of its colonies and its influence declined in the 20th century, it has gained political and financial stability and is one of the richest countries in the world today. The Magna Carta and Habeas Corpus - Even though the Magna Carta is among the publicly well-known documents, only a few people know what it is really about and what Habeas Corpus is – a very important act that has arisen from this document. - In the 13th century, due to the combination of high taxes, unsuccessful wars and a conflict with the Pope, King John was unpopular with his barons. Some barons began to conspire against him and a few years later some of the most important barons engaged in open rebellion. Finally King John was forced to sign a document later known as the 'Articles of the Barons'. In return, the barons renewed their oaths of fealty to the King. A formal document to record the agreement was created by the royal chancery on 15th July 1215, and this document is now called Magna Carta. - This first version of Magna Carta was valid only a few months (when the barons left London, the king annulled it); nevertheless, from this document arose the act called Habeas Corpus (in Latin "you may have the body"). Habeas Corpus is very important for us because it is a writ, or legal action, through which a prisoner can be released from unlawful detention. It is used in many countries today. - It was King John's death in 1216 which secured the future of Magna Carta. Magna Carta was later reissued in the 13th century in modified versions. - I've chosen Magna Carta because it is a very important historic turning point. Since then, the king wasn't an absolute monarch. He was bound by certain laws that he had to abide by. Oliver Cromwell (If I had to talk about my favourite personality from British history, I would choose Oliver Cromwell) - a military and political leader best known for his overthrow of the monarchy. Thanks to him, England was temporarily a republican Commonwealth. Later on, he became the Lord Protector.
- one of the most controversial personalities of British history - a radical leader who overthrew the tyranny of the king and the nobles - a tyrant who misused the idea of a republic to establish a political and religious dictatorship - In my opinion, the topic of the history of the United Kingdom is very interesting even for people who are not much into history. It is very varied and full of amusing facts. I think that we can learn a lot by studying British history.
Adaptive radiation? Firstly, when discussing evolutionary biology, Schluter defined adaptive radiation as "the differentiation of a single ancestor into an array of species that inhabit a variety of environments and that differ in traits used to exploit those environments". An example of adaptive radiation would be the cichlid fish. Cichlids are a family of fish found in the lakes of the East African Rift, mainly Lakes Malawi, Tanganyika and Victoria, with radiations producing between 250 and 500 species per lake (Brawand et al, 2014); other smaller African lakes contain small numbers of endemic cichlids (Table 1). They all differ in their body shapes and sizes, pigmentation patterns and social behaviors. Even though the lakes haven't been thoroughly sampled, the main species are haplochromines, and molecular phylogenetic studies have shown that the cichlids evolved more recently than the origin of the lakes. Cichlid fish have a large species richness, which can be seen when investigating specific lakes, e.g. Lake Malawi, which encompasses the largest radiation. It comprises different species such as several species that feed on eggs and larvae carried by mouth-brooding female cichlids, a species that clears parasites from the skins of fish, and another that feeds on pieces of their skin. Additionally, there are various species that are scale eaters, fin biters, rock scrapers, sediment sifters and zooplankton feeders. There are numerous streamlined silver offshore shoaling fish and their sharp-toothed predators. There are crab and snail eaters and one particular species that feeds mainly on flies that rest on rocks just above the surface of the water; another flips over sediment to look for hidden insect larvae. These are all behavioral and ecological adaptations that the cichlid fish have taken on, and they are associated with morphological changes to the body shape and size, head shape, jaw size, shape and orientation, and the shape and number of cusps of the teeth. For example, body shape evolution is strongly affected by feeding habits. Piscivorous fish have a much larger head and benthivorous fish tend to have a slender body. Thus, body shape is not independent of trophic morphology. Body shapes are generally associated with swimming modes in fish, suggesting that the divergent body shapes of the cichlids also relate to other ecological factors, such as the efficiency of escaping from predators. Some species have lateral line canals to detect movements of prey hidden in the mud and huge eyes to enable them to see in the dim light at depths of over 100 meters. Phylogenetic studies have shown that cichlid adaptive radiations have a tendency for similar adaptations to appear in different lakes; they also demonstrate that cichlids have failed to develop some forms. The restrictions on cichlid adaptive evolution allow insight into the reasons for variation and shed light on the considerable diversification among cichlid lineages in their affinity to undergo adaptive radiation. E.g. the genera Pseudocrenilabrus and Tilapia are widely distributed in Africa, yet have shown no adaptive radiation in Malawi, Tanganyika or Victoria, the three main African lakes. Even though in the three largest African lakes cichlids have filled the niches of small fast-moving plankton feeders, eel-shaped species and truly nocturnal forms, there are no really large predatory cichlids. Even the largest cichlid predators only weigh 3-3.5 kg, compared to other fish, e.g.
60-200 kg for the largest catfish that live in the same waters. Cichlid fishes exhibit complex physiological behavior, which makes them a suitable vertebrate model for the study of reproductive strategies. They demonstrate high levels of parental care, even after their offspring have hatched. This is uncommon among fishes, as cichlids continue to guard the larvae and then the independent offspring. In some shell-brooding species, females are small enough to fit inside empty snail shells and rear their young there. Dominant males are large enough to pick up the shells, and their reproductive success is related to the size of their shell collection. Smaller males show alternative strategies, either hunting for food and mates in packs or even mimicking females to sneak into the snail shells. Paternal care is not uncommon in fishes, but maternal care is, yet cichlids seem to have evolved maternal mouthbrooding on several different occasions. In Lakes Victoria and Malawi, adaptive radiations in ecomorphology have been accompanied by radiations in social behavior – the haplochromines are maternal mouthbrooders. One driver of change in the evolution of cichlid fish could be the aggressive behavior of male cichlids driving the differentiation of species: individuals of other species then try to avoid this behavior or strive to compete for critical resources that both species use. Thus, the increase in frequency in the population helps to drive sympatric speciation. An example of a role for allopatric speciation is geography. Over time in the larger lakes, water levels have fluctuated, which led to isolation and reconnection of different species around the main lake, sub-basins and patches of habitat within a continuous water body. Even though molecular studies have shown that speciation takes place within surrounding lakes, there have been theories of the creation of hybrids from multiple colonisations of a lake. The hybrids contain greater adaptive genetic variation than any of the individual original species, allowing a larger number of genetic combinations. Evidence for this is found in molecular phylogenetic studies and in the fertile offspring that hybrids can produce.
Desert Tortoise Facts The desert tortoise has a large, domed shell with no easily recognizable pattern. When it is young, its shell is a light brown or tan hue, but the color changes as the tortoise grows and generally darkens into a dark brown or gray by adulthood. However, the underside of the average desert tortoise’s shell is usually a different color than the top; it often remains yellow or light brown for life. The desert tortoise spends a lot of time digging in the hard desert earth, so it has long, sharp claws and its front limbs are coated in a protective layer of scales. These scales make it easier for the desert tortoise to retain water. Desert Tortoise Habitat Approximately one hundred thousand desert tortoises are currently in existence around the world. This number was once much higher, but poaching and other destructive human activity has greatly decreased the species’ population. Desert tortoises generally live throughout the desert in bush scrub habitats. They are common in many parts of the United States, scattered throughout the Mojave and Sonoran Deserts in the states of Arizona, Utah, Nevada, and California. Desert Tortoise Diet Desert tortoises are herbivores; they do not eat other animals, but rather survive by consuming plants. Because desert tortoises are, true to their name, native to the desert, their diet consists of foods which can easily be found in a desert habitat. These foods include various types of vegetation which can be found in the desert, including flowers such as the primrose, hibiscus, and dandelion. Cactus, which is very common in the desert, is also a preferred treat for the desert tortoise. Desert climates are very dry and hot. It doesn’t rain very often, and water is often hard to come by for long periods of time. Over the millions of years it has been in existence, the desert tortoise has adapted to this lack of available water. Most of the flowers, cactus, and fruits that desert tortoises eat are filled with moisture, which the animal stores inside its bladder and absorbs over time. Incredibly, the tortoise’s ability to retain and absorb water from the foods it eats allows it to survive without access to fresh water for up to a full year. Desert Tortoise Breeding Desert tortoises generally do not reproduce until they reach approximately fifteen years old. Some even wait to breed until the age of twenty. The mating season of these tortoises ranges from late summer to early fall. After mating, females lay four to six eggs at a time and tend to them until they hatch after a period of ten to twelve months. At birth, each hatchling measures only about two inches long. Desert Tortoise Care Desert tortoises are common pets, especially throughout the United States. They need to be kept outside in a climate which is in accordance with that of their native habitat. The dry, hot weather of California, Arizona, and other states with similar weather is acceptable for this type of tortoise. The desert tortoise requires a large outdoor space, such as a fenced-in backyard, to roam and should be provided with multiple artificial shelters in order to find relief from the hot sun when needed. They also need sufficient natural shade from trees and abundant grass to munch while they roam their enclosure. Although the water-retaining capabilities of desert tortoises are quite remarkable, they still need to be provided with access to fresh water at all times in order to remain healthy and thrive in captivity. 
A Self-Sustaining Industry Former NASA researcher and current University of Central Florida professor Dr. Phil Metzger presented a proposal stating that mining and manufacturing outside of our planet are currently possible with the technology we have today and would be advantageous to Earth rather than a drain on its money. Surprisingly enough, the biggest challenge to humanity taking this next step isn't money or a lack of technology. According to Dr. Metzger, “The main challenge for this concept, is neither technology nor cost but simply convincing people it is realistic.” He estimates making this goal a reality would take up about 3-12% of NASA's budget each year for the next few decades. Space mining of the Moon and nearby asteroids would allow for greater access to hydrogen, carbon, silicon, metals, and other materials that may be overmined on this planet. Metzger's vision does not involve launching entire mining and manufacturing infrastructures to these distant bodies. Fortune reports, “He projects that only 12 tons (<11 tonnes) of initial assets on the Moon could build themselves out into 150 tons (136 tonnes) of equipment (close to the amount that has been deemed necessary for a lunar colony) using local resources.” This method is referred to as a Self-sufficient Replicating Space Industry (SRSI). More Than Mining The Earthly benefits of an SRSI go far beyond industrial gains. Space mining and manufacturing would also ease some of the burden current mining practices put on the planet: taking resources from space would help avoid depleting the limited resources left on Earth. Some space mining critics claim an economic focus of such a plan would take important funding and attention away from scientific pursuits. However, Metzger explains that there are more celestial bodies near Earth than we would ever have funding to reach. Bringing along industrial interests could multiply the monetary resources dedicated to space study. This way, mining in space can benefit humans both economically and scientifically.
The Java Language Environment
To be truly considered "object oriented", a programming language should support at a minimum four characteristics:
- Encapsulation--implements information hiding and modularity (abstraction)
- Polymorphism--the same message sent to different objects results in behavior that's dependent on the nature of the object receiving the message
- Inheritance--you define new classes and behavior based on existing classes to obtain code re-use and code organization
- Dynamic binding--objects could come from anywhere, possibly across the network. You need to be able to send messages to objects without having to know their specific type at the time you write your code. Dynamic binding provides maximum flexibility while a program is executing.
Java meets these requirements nicely, and adds considerable run-time support to make your software development job easier. At its simplest, object technology is a collection of analysis, design, and programming methodologies that focuses design on modelling the characteristics and behavior of objects in the real world. True, this definition appears to be somewhat circular, so let's try to break out into clear air. What are objects? They're software programming models. In your everyday life, you're surrounded by objects: cars, coffee machines, ducks, trees, and so on. Software applications contain objects: buttons on user interfaces, spreadsheets and spreadsheet cells, property lists, menus, and so on. These objects have state and behavior. You can represent all these things with software constructs called objects, which can also be defined by their state and their behavior. In your everyday transportation needs, a car can be modelled by an object. A car has state (how fast it's going, in which direction, its fuel consumption, and so on) and behavior (starts, stops, turns, slides, and runs into trees). You drive your car to your office, where you track your stock portfolio. In your daily interactions with the stock markets, a stock can be modelled by an object. A stock has state (daily high, daily low, open price, close price, earnings per share, relative strength), and behavior (changes value, performs splits, has dividends). After watching your stock decline in price, you repair to the cafe to console yourself with a cup of good hot coffee. The espresso machine can be modelled as an object. It has state (water temperature, amount of coffee in the hopper) and it has behavior (emits steam, makes noise, and brews a perfect cup of java).
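To ground the car and stock examples in actual Java, here is a minimal, hedged sketch; the class and method names are our own illustrative choices rather than anything from the Java specification. State lives in fields, behavior in methods, inheritance ties the classes to a common parent, and dynamic binding picks the right describe() at run time.

    // Illustrative only: a tiny model of the objects discussed in the text.
    abstract class Asset {
        abstract String describe();              // same message, behavior depends on the object
    }

    class Car extends Asset {
        private double speedKmh;                 // state (encapsulated in private fields)
        private double fuelLitres;

        void start()                  { speedKmh = 0; }        // behavior
        void accelerate(double delta) { speedKmh += delta; }
        void refuel(double litres)    { fuelLitres += litres; }

        @Override String describe() { return "Car at " + speedKmh + " km/h"; }
    }

    class Stock extends Asset {
        private double lastPrice;                // state: latest traded price

        void changeValue(double price) { lastPrice = price; }  // behavior

        @Override String describe() { return "Stock trading at " + lastPrice; }
    }

    public class ObjectsDemo {
        public static void main(String[] args) {
            Car car = new Car();
            car.accelerate(50);
            Stock stock = new Stock();
            stock.changeValue(101.25);

            Asset[] portfolio = { car, stock };
            for (Asset a : portfolio) {
                // dynamic binding: the JVM selects the subclass's describe() at run time
                System.out.println(a.describe());
            }
        }
    }

Encapsulation shows up in the private fields, which outside code can only affect through the methods each class chooses to expose.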
The history of Classical Metal Casting methods dates back to the Middle Ages. The process involves several steps. First of all, you need to produce a replica of your final product. To create replicas you can use different materials; one of them is wood, as shown in the example video. The design of the replica also has to include a filling system and some additional supports. Then, you need to create a mold based on this replica. There are several techniques for working with molding materials; the most popular is sand. The first step is the preparation of the replica, and this process will differ for different materials. Once you have the copy of your final product, the filling system has to be designed. It's important to remember that the replica has to come out of the sand mold, so it can't be wider at the bottom than at the top. A shape that violates this rule is called an undercut and makes the copy unremovable. While designing your parts and the filling system, you also have to keep in mind that liquid metal hardens fast and might not reach the higher levels of your design before it solidifies. To avoid that, you have to design a feed head, which is later removed from your part. Then it all has to be copied into a mold. It's necessary to prepare two sand molds. The next step is to press the object into a special sand to create one mold, leaving a mirrored impression of its shape. The second mold is used for the filling system, where the metal will be poured, and if necessary it can also be used for the second half of your object. The molds are then stacked one on top of the other so the filling system matches the shape of your object. Once the two molds are connected, so that the metal doesn't leak between the two layers of sand, the metal is heated up and liquefied. The casting process involves the metal being poured into the mold through the filling system very quickly, as it hardens fast. The last step is to clean the sand off the object and remove the filling system. Now the object is ready for any additional polishing and removal of extra material if needed. Selective Laser Melting (SLM) is an Additive Manufacturing technology that uses metal powder to create your parts. The metal 3D printer spreads a thin layer of the metal powder on the bed, then a laser melts the metal, creating the shape of your 3D model. The next layer of powder is placed and the process repeats itself. The metal is melted at a high temperature, and the 3D printed parts require a cooling time. With SLM technology you can integrate multiple components into a single object, which reduces costs and saves time on assembling your parts. Choosing Selective Laser Melting for your production will also provide you with very strong parts, which at the same time can have thin walls, lowering the weight of your parts. They will also have high-temperature resistance. For SLM we offer Aluminium AlSi7Mg0.6, composed mainly of aluminum (90%), silicon (7%) and magnesium (0.6%). This material has good mechanical properties and can be used for parts subjected to high mechanical stresses. Direct Metal Laser Sintering (DMLS) technology also uses metal powder to 3D print your functional parts. The process is the same as for SLM: the metal 3D printer lays down a layer of metal powder, then a laser beam sinters the powder in the shape of your 3D model.
The DMLS process is highly beneficial for those who need to produce their metal parts for prototyping or low-volume production, as it eliminates time-consuming tooling. It also allows for creating complicated and highly detailed designs that wouldn't be possible with any other technology, due to the limitations of traditional manufacturing processes. The properties of parts metal printed using SLM and DMLS will be similar. They will have good mechanical characteristics; your models can be quite detailed and also fully functional, ready to be used or integrated into a larger object. Like the technologies mentioned above, Binder Jetting is a powder-based 3D printing method. However, with this Additive Manufacturing method, there is no laser involved. The metal powder is fused with a binding liquid agent and lightly cured between each layer. Binder Jetting is the fastest and cheapest metal 3D printing process. The main benefits of Binder Jetting are high customization and fast production time; however, this technology is more suitable for prototyping needs, ornamental and decorative parts, or jewelry. Lost-Wax Metal Casting is the only metal 3D printing method covered here that doesn't involve metal powder. The technology is based on injecting metal into a mold. The master model, typically built in wax, is thanks to 3D printing a perfect replica of the finished product. Once the master model is 3D printed, a plaster mold is poured over it. When the plaster mold is ready, liquid metal is injected into the mold to replace the wax pattern, which is drained away through a treelike structure, to create the object. Additive Manufacturing doesn't necessarily have to be a competitor to traditional metal casting. It can very well complement classical methods of production and improve them. The best example of combining Additive Manufacturing and Classical Foundry is to 3D print the replicas of the master objects. 3D printing allows a high level of detail, which was not reachable before the use of Additive Manufacturing methods, and it also speeds up the pre-production process of traditional Metal Casting. A good example of the two technologies working together is Lost-Wax Metal Casting. Thanks to Additive Manufacturing, a high level of customization can be achieved for the 3D printed wax copy of the original design. The process is faster thanks to 3D printing, and the wax replica is easily removable. Another way of combining Additive Manufacturing and classical Metal Casting is to 3D print plastic copies of the final product. As with Lost-Wax Metal Casting, the 3D printed replicas melt out, creating the perfect, custom-made mold that can be used for traditional foundry methods. The process is very well explained in the video below. Moreover, 3D printed models are also great for the most commonly used Metal Casting molds, made with sand. Thanks to Additive Manufacturing, the model will be detailed and will leave the exact shape of your design in the sand to produce accurate metal products. A great example of combining both technologies is shown in the video of a 3D printed metal hammer. If you're planning to produce large mechanical parts, such as engine components or big gears for machines, Classical Foundry is great for that purpose. Additive Manufacturing won't be as effective with large-sized parts due to the dimensional limits a 3D printer can reach. The main goal of producing mechanical parts is for them to be functional.
Looks are not important at all; the parts have to have good mechanical properties, and if big size is also essential for your production, traditional Metal Casting will provide you with great parts. Classical foundry will also be beneficial if you need multiple copies of your parts: the cost of production decreases with the number of parts, whereas with Additive Manufacturing the cost per part stays the same. As the process of metal 3D printing with Lost-Wax Metal Casting technology is quite similar to traditional Metal Casting, in this chapter we will talk more about the benefits of using metal powder-based Additive Manufacturing. 3D printing also allows for much more design freedom, a high level of detail, and customization, so if precision is important for your design, choose Additive Manufacturing. Moreover, with 3D printing you can design your parts to be articulated, which is impossible with traditional Metal Casting. This not only saves assembly time but also gives you totally new design opportunities. If time plays a big factor in your production process, Additive Manufacturing is the right solution for you. Metal 3D printing is much faster than traditional Metal Casting for several reasons. Starting with pre-production: to metal 3D print your parts you just need a 3D model. Compared to Metal Casting, where you not only need to design your parts, the filling system, and the feed head but also create a mold, 3D printing saves you a lot of time. For powder-based Additive Manufacturing technologies, all you need to do is upload the 3D model to our website. The post-production process is also much quicker for metal 3D printed parts. Keep in mind that Metal Casting involves machining to detach the filling system and feed head, as well as manual removal of metal that spilled between the molds, before you even move on to surface finishing. For 3D printed parts, the supports have to be removed, but that's taken care of by our production team. We also provide you with several options for surface finishes, such as polishing and plating, which allows your designs to reach another level of customization. The production itself might seem faster for Metal Casting, as the liquid metal hardens quickly, but you have no control over the process. Additive Manufacturing has a much more stable production process. There is a possibility for the 3D printer to crash, but we keep an eye on all our parts in production, and if such a situation happens we can immediately react and stop the process. The Metal Casting method doesn't give you that option, as you can't see what's happening inside the molds. With traditional Metal Casting, there is also a risk of the liquid metal solidifying before it reaches the higher parts of your object. To avoid it, an additional part of the design, called the feed head, has to be created. It later has to be removed from the final product, which slows down the post-production process. Also, during the filling, oxidation can cause bubbles to form in the metal, and small pieces of the sand mold can get inside your part, which will affect the object's properties. If you need your parts to be light, Additive Manufacturing gives you lots of options to achieve that. To reduce the weight of your parts, your 3D printed parts can have walls filled with lattice structures. Your 3D model can also be hollowed, which is not possible with the Metal Casting process.
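To make the cost comparison above tangible, here is a small, hedged Java sketch of a break-even calculation between casting (a one-time tooling cost plus a low per-part cost) and metal 3D printing (a roughly flat per-part cost). All figures are invented for illustration only; they are not quotes from any provider.

    // Illustrative break-even comparison: casting amortizes a one-time tooling
    // cost over many parts, while metal 3D printing has a flat per-part cost.
    // All numbers below are assumptions for demonstration, not real prices.
    public class CostBreakEven {
        public static void main(String[] args) {
            double toolingCost  = 5000.0;  // assumed cost of pattern, molds and setup
            double castPerPart  = 20.0;    // assumed per-part cost once tooling exists
            double printPerPart = 120.0;   // assumed per-part metal 3D printing cost

            for (int qty : new int[] {1, 10, 50, 100}) {
                double casting  = toolingCost + castPerPart * qty;
                double printing = printPerPart * qty;
                System.out.printf("qty=%3d  casting=%7.0f  printing=%7.0f%n",
                                  qty, casting, printing);
            }
            // Casting wins only above the break-even quantity:
            double breakEven = toolingCost / (printPerPart - castPerPart);
            System.out.println("break-even quantity ~ " + breakEven + " parts");
        }
    }

With these assumed numbers, 3D printing is cheaper below roughly 50 parts and casting is cheaper above that, which matches the rule of thumb in the text.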
With Additive Manufacturing methods such as Selective Laser Melting and Direct Metal Laser Sintering, your metal printed parts will also be very strong and have high heat resistance. Metal objects 3D printed with SLM and DMLS will have better mechanical properties, as they are produced at a higher temperature. It is true that traditional methods of Metal Casting are beneficial if your parts have to be huge. However, if you require custom-made and precise parts for your object to be fully functional, 3D printing is the way to go. Additive Manufacturing will give you the freedom to design exactly the model you need, provide you with fast results, and can greatly improve your production system. With the variety of 3D technologies, your metal printed parts can be designed and manufactured from your computer; just upload your 3D model to our website. It's very important to choose the right material and the metal 3D printing technology that will give the best results and empower your manufacturing process. To help you with that, we have prepared a blog post explaining each of our metal materials in detail, and you can always ask one of our consultants.
Water is THE most important nutrient that our body needs to stay healthy and in balance. H2O is truly the foundation of life. No amount of vitamins or supplements can substitute for the vital life-sustaining properties found in water. (see Proper Hydration Is Important!) Unfortunately, however, it can often be challenging to have access to good-tasting, healthful, and uncontaminated drinking water. Below are a few of the water options we can choose from as consumers. Municipal Tap Water Municipal tap water is generally treated with chlorine to make it safe for human consumption. However, the unpleasant taste and odor associated with tap water often deter people from drinking sufficient quantities of water (see Dangers of Dehydration). Also, chlorine reacts with various organic and inorganic elements present in water to create byproducts, such as trihalomethanes, that can be toxic when consumed in large doses. Many cities also add fluoride as a public health measure to reduce tooth decay. However, many studies have shown that long-term ingestion of fluoride can have a negative impact on our health, including effects on the thyroid, bones, digestive system, brain, and even the tooth enamel it's supposed to help (a condition known as fluorosis). Municipal tap water can also be laced with varying levels of unwanted chemicals and contaminants including heavy metals, volatile organic compounds (VOCs), sulfates, nitrates, pharmaceuticals and hormones, and radioactive elements. Tap water can also become unfit for human consumption during boil water advisories, when a community's drinking water is known or suspected to be contaminated. That being said, municipal tap water is also the most environmentally friendly way to consume water. The solution to rising concerns about water safety is not to turn to bottled water, which is polluting and not necessarily healthful, but to find a cost-effective way to safely decontaminate water. Berkey systems are the most ecological and cost-effective way to produce safe, pure and refreshing drinking water from any treated or untreated water source. Carbon Filtration Systems There are two primary types of carbon filtration on the market: granular activated carbon (GAC), such as the small granules found in filtering pitchers, and block carbon, which is a finer, denser material typically found in cartridges such as those used in under-counter filters and refrigerator filters. There are many factors that affect filtration efficiency, including the amount of carbon in the unit, the pore size (measured in microns) and the flow rate, which affects the length of time contaminants spend in contact with the carbon. Carbon is effective in removing the bad tastes and odors associated with tap water as well as certain organic compounds. However, carbon performs rather poorly in removing dissolved inorganic contaminants and heavy metals such as minerals, salts, antimony, arsenic, asbestos, barium, beryllium, cadmium, chromium, copper, fluoride, lead, mercury, nickel, nitrates, selenium, sulfate, and thallium. Carbon block filters rated at a pore size of 0.5 microns or smaller can remove some larger, dangerous microorganisms, such as giardia and cryptosporidium, but filters with a larger pore size of 1 or 5 microns will be ineffective for bacteria removal. Furthermore, viruses are too small to be removed by carbon, as they usually range between 20 and 400 nanometers in size. As for GAC filters, they can suffer from a phenomenon called channeling, where the water pressure forces channels to open up in the loose carbon granules.
Some of the water, following the path of least resistance, will flow through the channel and not come into contact with the carbon filtration medium. Consequently, some of the water flowing through a GAC filter may not have been filtered at all, and there is no way of knowing the extent to which the water still contains harmful contaminants. Moreover, most carbon filter companies are unable or unwilling to provide consumers with independent lab test reports showing the detailed list of contaminants removed as well as the removal efficiency, expressed as a percentage. On the other hand, New Millennium Concepts Ltd., the manufacturer of Berkey water purifiers, is one of the only companies on the market that publishes independent lab test reports showing a 99+% removal rate for hundreds of organic and inorganic contaminants. Finally, a little-known fact is that carbon filters lower the pH of water by about 1 point on the pH scale, making the water more acidic than the unfiltered tap water. Reverse Osmosis Systems Reverse osmosis (RO) filtration systems are widely available, and their popularity has increased significantly over the past 10 or 15 years with growing concerns over the safety of drinking water. RO systems are extremely effective water purification systems that will remove contaminants found in water with an efficiency of 95-99%, including fluoride, heavy metals, chemicals, pharmaceuticals and more. RO water, however, can also have a number of undesirable long-term effects on our health and the environment:
- In addition to removing contaminants, RO systems also remove all dissolved minerals naturally present in water. Water acts as a natural solvent in our body. As RO water contains no minerals at all, it becomes a more aggressive solvent that robs the body of essential minerals with every glass of water you drink (see pH Balanced Healthful Water).
- Treating water through an RO system drops the pH of water to about 5, which is more than 100 times more acidic than our body's natural pH of 7.35.
- Reverse osmosis treatment removes all the vibrational frequency or energy that is naturally present in water and that our body thrives on. This is why water treated by this process is often referred to as “dead” water.
- Reverse osmosis treatment used alone is not an effective treatment for untreated water sources that may contain bacteria and viruses.
- RO systems waste a lot of water: for every liter of filtered water produced, RO systems can generate anywhere between 4 and 10 liters of wastewater that simply goes down the drain.
Bottled water is expensive and has a huge environmental impact, not only due to the plastic waste generated by the bottles themselves but also due to the extraction and consumption of the petroleum products used to produce the plastic bottles and to transport the bottled water to the point of sale. Furthermore, it takes 3 liters of water to produce a 1-liter bottle of water. The other 2 liters go down the drain! (see Environmental Concerns) The bottled-water industry has seen skyrocketing growth in recent years, and the range of consumer products keeps expanding in response to people's growing concerns about the safety of tap water. Here are some of the main types of bottled water on the market:
- Spring water Spring water can vary enormously depending on the source. Variables include the type of minerals found in the source water, the mineral concentration (in mg/l or ppm) and the pH value, which generally ranges from 7 to 8.
Few brands indicate the pH value on the label.
- Reverse Osmosis Reverse osmosis (RO) water, also referred to as demineralized water, is produced using a multi-stage filtration system that includes a semi-permeable membrane capable of filtering out 95-99% of the contaminants found in water. However, the treatment process produces a huge amount of wastewater, as the RO system's membrane must be continuously flushed.
- Distilled water Distilled water contains only H2O molecules and is the purest form of water. The distillation process reproduces the natural cycle of water through evaporation and condensation of water molecules. Distilled water acts as an excellent solvent, but if used continuously without compensating with mineral supplementation, it can have a demineralizing effect on our body. Distilled water is dead water, devoid of minerals and vibrational frequencies.
- Carbonated mineral water Carbonated water is generally rich in alkaline minerals, but the carbonated gas lowers the pH to around 5, or 100 times more acidic than neutral.
Living Raw Water Water harvested directly from a spring is often referred to as living or raw water. Find out how you can transform your Berkey water into pure, living, structured, alkaline water with the Vitalizer Plus Vortex Water Revitalizer.
2.3 Koryo Dynasty - Temporary Private Control of Land The new Koryo Dynasty (918 - 1392) tried to reestablish order by reducing the tax paid by peasants to 1/3 of the harvest. As in former times, the king's loyal followers received land (actually, land tax) grants, which led to the emergence of wealthy families whose wealth was based on the controlled land and, later, on moneylending as well. As a precautionary measure, the king concentrated all aristocrats in the capital. They became dependent on the king because only he could grant land, the basis of their wealth. Thus, the preservation of the dynasty was in their interest. In theory, all land belonged to the king, but administrative control was in the hands of the aristocrats, who increased their estates by appropriating public lands, thereby undermining the basis of the state: the ownership of all land and the right to divide income from the land according to its needs. The fourth king therefore implemented a land reform and redistributed the land according to rank and grade, but he was not successful. Because offices were held by a small group of aristocrats, in practice they became hereditary. In addition, the fact that land was granted according to rank and status often allowed a retired official to retain the land he had held because of his rank, now holding it because of his status. Consequently, large estates were under the permanent control of officials, which made them independent of the state. Loss of office had no immediate effect, and that had consequences for loyalty. Aristocrats used their independence from the state to illegally oppress the peasants, who again left the land, and the end of the dynasty, just like its beginning, was marked by reforms to reduce the peasants' hardships.
Students learn about how biomedical engineers aid doctors in repairing severely broken bones. They explore using pins, plates, rods and screws to repair fractures, and then design, create and test their own prototype devices to repair broken turkey bones. Each TeachEngineering lesson or activity is correlated to one or more K-12 science, technology, engineering or math (STEM) educational standards. All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standard Network (ASN), a project of JES & Co. (www.jesandco.org). In the ASN, standards are hierarchically structured: first by source, e.g., by state; within source by type, e.g., science or mathematics; within type by subtype, then by grade, etc.
- Colorado: Math - c. Solve for unknown quantities in relationships involving perimeter, area, surface area, and volume (Grades 9 - 12)
- Colorado: Science - a. Discuss how two or more body systems interact to promote health for the whole organism (Grades 9 - 12)
- International Technology and Engineering Educators Association: Technology - R. Evaluate final solutions and communicate observation, processes, and results of the entire design process, using verbal, graphic, quantitative, virtual, and written means, in addition to three-dimensional models. (Grades 9 - 12)
- K. Medical technologies include prevention and rehabilitation, vaccines and pharmaceuticals, medical and surgical procedures, genetic engineering, and the systems within which health is protected and maintained. (Grades 9 - 12)
- Next Generation Science Standards: Science - Design a solution to a complex real-world problem by breaking it down into smaller, more manageable problems that can be solved through engineering. (Grades 9 - 12)
Learning objectives:
- Describe how engineers aid doctors in repairing severe bone fractures.
- Create prototype devices to aid in the healing of bone fractures and test them for strength.
- Evaluate the strengths and weaknesses of a prototype medical device based on model testing.
Materials:
- Bone Repair Challenge (ppt)
- computer and LCD projector to show a PowerPoint presentation (or make overhead transparencies of the PPT file and use an overhead projector)
- 1 turkey femur (drumstick)
- safety glasses or goggles, one per student
- other supplies, depending on group design (see below)
- Repairing Broken Bones Design Worksheet, one per person
- ~1 yard (~1 m), half-inch diameter steel or aluminum rod
- ~20 metal screws (suggestion: 10 half-inch long plus 10 one-inch long)
- metal strip (sold in coils at hardware stores, usually with plumbing supplies; already has screw holes in it)
- 1-2 extra turkey bones
- other materials or supplies that students include in their designs
- drill (a drill press is preferred, but a hand drill is okay)
- hack saw
- screw driver
- (optional) tile drill bit (makes drilling into bone easier and less likely to crack)
Vocabulary:
biocompatibility: A characteristic of some materials that, when inserted into the body, do not produce a significant rejection or immune response.
bone graft: Bone taken from a patient during surgery, or a bone substitute, used to take the place of removed bone or to fill a bony defect.
external fixation: The process of installing temporary repair supports outside of the skin to stabilize and align bone while the body heals.
Examples: screws in bone, metal braces, casts, slings.
fracture: An injury to a bone in which the tissue of the bone is broken.
internal fixation: The process of fastening together pieces of bone in a fixed position for alignment and support, using pins, rods, plates, screws, wires, grafting, and other devices, all under the skin. Can be temporary or permanent fixtures.
prototype: An original, full-scale, and usually working model of a new product, or new version of an existing product.
Before the Activity
- Purchase enough turkey drumsticks to equal the number of groups plus one or two extras; the bigger the bones, the better. Ask if a butcher or meat plant might donate them. Eat the turkey or remove the meat from the bones.
- To make the turkey femurs as clean as possible, boil them and remove any remaining meat and other tissue. If necessary, soak the bones in a solution of 90% warm water and 10% bleach or ammonia to make cleaning them easier.
- Let the bones dry for ~24 hrs.
- Gather materials and make copies of the Repairing Broken Bones Design Worksheet, one per person.
Day 1: Bone Breaking
- Divide the class into groups of three students each.
- Break the turkey femurs, keeping track of the maximum weight each bone could bear before breaking. Two suggested methods:
- Use a stress tester, such as an Instron universal testing machine, to break the bone from the side and/or in compression. Universities often have stress testing equipment.
- Bridge a bone across two desk edges and hang enough weight from the center of the bone until it breaks. Expect a turkey femur to bear up to 200 lbs (91 kg), depending on its size. (This approach is described in more detail as part of the Sticks and Stones Will Break That Bone! activity.)
Days 2-4: Bone Repair
- Challenge teams to design and build repairs for their fractured bones. Encourage them to try to make the bone even stronger than before. Have students follow along with the design worksheet during this process.
- Have students carefully examine the extent and nature of the bone fracture(s), and brainstorm possible ways to repair their broken turkey bones.
- Have students draw two or three engineering designs (sketches) to repair the bone. Make sure they label the parts and materials that they intend to use in each prototype design.
- Have students choose their best design and present it to the class. Require presentations to include reasons for their design choices, and engineering advantages and disadvantages (and other factors, as described in the Assessment section). Encourage the rest of the students to provide constructive feedback and suggestions.
- Direct students to begin fixing their bone, as designed. Encourage careful work; bones can be brittle and are not replaceable.
- Have students present their final prototype "products" to the class again. In these presentations, have them explain what steps they took as well as what they would improve upon.
Day 5: Bone Testing
- Have each group predict the performance of their repaired bone.
- Break each reinforced bone using the same method used before.
- Have students record how well their bone resisted the weight compared to its unbroken state. Record how much weight the reinforced bone withstood, and any other observations during the test. Have students complete their worksheets while others complete the stress testing.
- When testing is complete, discuss and compare all results as a class.
- Have students give final presentations, answering questions as described in the Assessment section.
Safety
- Have students wear eye protection throughout the activity, as bone fragments may splinter and fly.
- Provide proper training and safety measures when using any power tools.
- What are different ways to reinforce a broken bone?
Activity Embedded Assessment
- How does the design support the weight and movement of the patient?
- Is it minimally invasive (easy for a doctor to implant)? Why or why not?
- Are the materials biocompatible?
- Is it realistic?
- What are the design strong points and weaknesses?
- Which design did you choose? Why?
- How did your repair handle the load during testing?
- Where on the bone did the repair fail? Why do you think it failed there?
- How could you have improved your device?
Activity Scaling
- For lower grades, have students work on smaller bones, such as chicken wings. Or conduct the fourth-grade Sticks and Stones Will Break That Bone! activity, which includes a class demonstration to break a chicken bone by applying a load until it fails (fractures), followed by student teams acting as biomedical engineers, designing (on paper) their own splints or casts to help mend fractured bones.
- For lower grades, have students repair their bones solely with external fixation, such as bracing, casts or splints.
References
Bone fracture repair-series, Procedure. Last updated September 21, 2009. MedlinePlus Medical Encyclopedia, US National Library of Medicine, National Institutes of Health. Accessed October 29, 2009. http://www.nlm.nih.gov/medlineplus/ency/presentations/100077_3.htm
Prototype. The American Heritage® Dictionary of the English Language, Fourth Edition. Houghton Mifflin Company. Accessed November 2, 2009, from Dictionary.com website. http://dictionary.reference.com/browse/Prototype
Contributors: Todd Curtis, Malinda Schaefer Zarske, Janet Yowell, Denise W. Carlson
© 2008 by Regents of the University of Colorado. Integrated Teaching and Learning Program, College of Engineering, University of Colorado Boulder
Last modified: May 25, 2015
1. Set I contains six consecutive integers. Set J contains all integers that result from adding 3 to each of the integers in set I and also contains all integers that result from subtracting 3 from each of the integers in set I.
2. How many different positive four-digit integers can be formed if the first digit must be 2, the last digit cannot be zero, and digits may be repeated? I know the answer is 900, but I'm not sure how I would get the answer. Thanks!
3. Is it possible to find a sequence with the rule "add four" for which (a) all terms are multiples of four and eight, (b) all terms are even numbers, (c) all terms are negative numbers, (d) none of the terms are whole numbers? If so, tell me the sequence.
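For the four-digit question, one way to see where the 900 comes from (assuming the usual reading that the digits are chosen independently): the first digit is fixed at 2, so there is 1 choice; each of the two middle digits can be any of 0-9, giving 10 choices each; and the last digit can be anything except 0, giving 9 choices. Multiplying the independent choices gives 1 × 10 × 10 × 9 = 900.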
Electric power system operators must consider the frequency response of renewable energy generation. Wind and solar generation use significantly different technologies from conventional power plants; therefore, their electrical characteristics and performance are different. When a large generating plant shuts down, the frequency of the electric power system drops because of the imbalance between generation and load. The frequency decline is checked in the first few seconds by conventional synchronous machines, which contribute stored inertial energy to the system. Over the next few tens of seconds, synchronous machines equipped with governors increase their power output in an effort to return the system frequency to normal. Synchronous machines can also respond to frequency increases caused by large losses of load. This frequency response—both inertial and governor—could change with significant levels of variable generation. Most modern wind turbines and solar arrays connect to the grid via power electronics-based converters. These converters isolate the wind and solar generation from the grid and its frequency excursions. When equipped with governor-like controls, the converters can also allow the renewable generation to contribute to grid frequency maintenance. Wind and solar generation respond best to grid frequency increases, which require a drop in power generation. They can provide governor response to frequency drops only when they are operating in a curtailed condition. In addition, wind turbines can provide an inertia-like response by contributing power to the grid from their own stored kinetic energy or by capturing more energy from the wind. Solar arrays, which lack a large rotating mass, would need auxiliary storage to provide inertial response. Because most wind and solar plants are not equipped with frequency response capability, NREL is researching the effects of displacing conventional generation with significant quantities of wind and solar generation. NREL is addressing frequency response issues on the transmission system through its work on active power controls.
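As a rough illustration of the inertial and governor behavior described above, here is a hedged Java sketch of the classic per-unit swing equation with a simple droop governor and load damping. Every constant (inertia, droop, time constant, size of the generation loss) is an illustrative assumption, not a value from NREL or any real interconnection.

    // A rough, illustrative sketch of grid frequency after a sudden loss of
    // generation, using the per-unit swing equation
    //     df/dt = f0 * (P_mech - P_elec) / (2 * H)
    // plus frequency-dependent load damping and a simple droop governor.
    // Every constant here is an assumption chosen for demonstration only.
    public class FrequencyResponseSketch {
        public static void main(String[] args) {
            final double f0 = 60.0;      // nominal frequency, Hz
            final double H = 5.0;        // aggregate inertia constant, s
            final double droop = 0.05;   // 5% governor droop
            final double damping = 1.5;  // load damping, per-unit power per per-unit frequency
            final double tGov = 8.0;     // governor/turbine time constant, s
            final double dt = 0.05;      // integration step, s

            double f = f0;
            double governor = 0.0;             // per-unit power added by governors
            final double imbalance = -0.05;    // lose generation equal to 5% of load

            for (int step = 0; step * dt <= 30.0; step++) {
                double t = step * dt;
                if (step % 100 == 0) {                       // print every 5 s
                    System.out.printf("t=%4.1f s  f=%6.3f Hz%n", t, f);
                }
                double dfPu = (f - f0) / f0;                 // per-unit frequency deviation
                // droop governor: raise output in proportion to the frequency drop,
                // with a first-order lag approximating turbine and governor delays
                double target = Math.max(0.0, -dfPu / droop);
                governor += (target - governor) * dt / tGov;

                double accel = f0 * (imbalance + governor - damping * dfPu) / (2.0 * H);
                f += accel * dt;
            }
        }
    }

Running it shows the familiar shape: frequency falls quickly at a rate set by the inertia, then recovers partway as governors pick up load, settling below nominal until slower reserves restore the balance.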
November 28, 2013 In many cases, the causes of epilepsy can’t be identified – but in other cases, researchers know what causes epilepsy, which means that some people can reduce their risk of developing epilepsy, or seizure disorder. At the Centers for Disease Control and Prevention, Rosemarie Kobau says some cases result from head injury. So, she says: “Safety measures, such as wearing seat belts in cars, wearing helmets when riding a bike or a motorcycle or playing competitive sports, can prevent head injuries, which in turn can prevent epilepsy.” Kobau also says treatment of high blood pressure or infections in a pregnant woman can prevent brain damage in the developing baby that could lead to epilepsy. Strokes are another cause of epilepsy. Not smoking, as well as controlling weight, can reduce stroke risk. Learn more at healthfinder.gov. HHS HealthBeat is a production of the U.S. Department of Health and Human Services. I’m Ira Dreyfuss.
What would assessment look like, if you could reinvent it using 21st-century tools? The question was posed during a Jan. 25 presentation by Harvard University professor Chris Dede at the Florida Educational Technology Conference (FETC) in Orlando. And his answer revealed a vision for assessment that is much richer and potentially more useful than what currently exists in schools. Dede’s vision for the future of assessment relies on two guiding principles: that formative, or diagnostic, assessment provides a much more valuable snapshot of students’ abilities than an end-of-semester exam; and that technology gives schools access to an incredible amount of data that can be used to gauge students’ understanding of key concepts. Any time students have some kind of mediated interaction involving technology–a video conference, for example, or an online chat session–this interaction can be logged, saved, and analyzed at a later date to reveal important information about students’ thought processes, Dede explained. And this information, in turn, can be used to help guide instruction. Setting aside the obvious privacy and security concerns this practice would raise, “we’re missing a huge opportunity to capture these data and use them to enhance assessment,” he said. Dede, who is the Timothy E. Wirth Professor of Learning Technologies at Harvard’s Graduate School of Education, used a research project at the university to illustrate his vision. The project, called River City, is a multi-user virtual environment (MUVE) that immerses students in an online scenario in which they are asked to apply scientific inquiry skills to solve a problem. Students travel back in time to the 19th century, working together in small research teams to help discover why residents of a virtual world called River City are becoming ill. Students use technology to keep track of clues that hint at causes of illnesses, form and test hypotheses, develop controlled experiments to test their hypotheses, and make recommendations based on the data they collect. Every time students interact with a resident of River City, this interaction is logged in a database, Dede said. This means that, besides formal assessment data, researchers also have access to observational data based on these event logs: information about where students went (and in what sequence), which artifacts they examined, who they talked with, and what they said in these interactions. The problem now facing researchers is how to make sense of all this information–and how to use it to improve instruction. Some kinds of analysis are rather simple and can be quite revealing, Dede said. For example, educators can look at the logs and see fairly easily what proportion of scientific data were contributed by a given team member, or how much time students spent gathering information. More complex types of analysis, such as trying to reconstruct which sequence of events led to a given student becoming more engaged in the lesson, are problematic–but Dede said researchers are working on developing data-mining techniques to help solve these challenges. Although few schools are using tools as technologically sophisticated as the River City MUVE, Dede said, most are using some form of mediated interaction with students–and such techniques could apply equally well in these cases, too. And while this vision of how technology can help enhance assessment “isn’t going to happen tomorrow morning,” he acknowledged, it’s something educators should be thinking of as they move forward.
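To make the log-analysis idea concrete, here is a small, hedged Java sketch of the simpler kind of analysis Dede describes, such as computing what share of the scientific data each team member contributed. The LogEvent structure, field names, and sample actions are hypothetical illustrations, not River City's actual log format.

    import java.util.*;

    // Hypothetical event-log analysis: count data-gathering events per student
    // and report each student's share of the team's data points. The log format
    // below is an assumption for illustration, not the real River City schema.
    public class EventLogSketch {
        record LogEvent(String student, String action, long timestampMillis) {}

        public static void main(String[] args) {
            List<LogEvent> log = List.of(
                new LogEvent("ana",   "collect_water_sample", 1_000),
                new LogEvent("ben",   "talk_to_resident",     4_000),
                new LogEvent("ana",   "collect_water_sample", 9_000),
                new LogEvent("carla", "record_observation",  12_000));

            Map<String, Long> counts = new HashMap<>();
            long total = 0;
            for (LogEvent e : log) {
                // treat "collect_*" and "record_*" actions as data gathering
                if (e.action().startsWith("collect") || e.action().startsWith("record")) {
                    counts.merge(e.student(), 1L, Long::sum);
                    total++;
                }
            }
            for (Map.Entry<String, Long> entry : counts.entrySet()) {
                System.out.printf("%s contributed %.0f%% of the data points%n",
                        entry.getKey(), 100.0 * entry.getValue() / total);
            }
        }
    }

The same pass over the log could just as easily total the time between events to estimate how long each student spent gathering information, which is the other diagnostic the article mentions.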