NAT sets up mapping rules that translate source and destination IP addresses into other Internet or intranet addresses. These rules modify the source and destination addresses of incoming or outgoing IP packets and send the packets on. You can also use NAT to redirect traffic from one port to another port. NAT maintains the integrity of the packet during any modification or redirection done on the packet. Use the ipnat command to work with NAT rule lists. For more information, see the ipnat(1M) man page. You can create NAT rules either at the command line, using the ipnat command, or in a NAT configuration file. NAT configuration rules reside in the ipnat.conf file. If you want the NAT rules to be loaded at boot time, create a file called /etc/ipf/ipnat.conf in which to put the rules. If you do not want the NAT rules loaded at boot time, put the ipnat.conf file in a location of your choice and manually activate packet filtering with the ipnat command. Use the following syntax to create NAT rules: command interface-name parameters. Each rule begins with one of the following commands:
- map — Maps one IP address or network to another IP address or network in an unregulated round-robin process.
- rdr — Redirects packets from one IP address and port pair to another IP address and port pair.
- bimap — Establishes a bidirectional NAT between an external IP address and an internal IP address.
- map-block — Establishes static IP address-based translation. This command is based on an algorithm that forces addresses to be translated into a destination range.
Following the command, the next word is the interface name, such as hme0. Next, you can choose from a variety of parameters, which determine the NAT configuration. Some of the parameters include:
- ipmask — Designates the network mask.
- dstipmask — Designates the address that ipmask is translated to.
- mapport — Designates tcp, udp, or tcp/udp protocols, along with a range of port numbers.
The following example illustrates how to put the NAT rule syntax together to create a NAT rule. To rewrite a packet that goes out on the de0 device with a source address of 192.168.1.0/24 so that it externally shows a source address of 10.1.0.0/16, you would include the following rule in the NAT rule set: map de0 192.168.1.0/24 -> 10.1.0.0/16 For the complete grammar and syntax used to write NAT rules, see the ipnat(4) man page.
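Building on the rule above, a hypothetical /etc/ipf/ipnat.conf might combine a map rule with port remapping and a redirection rule. The interface name and all addresses below are illustrative placeholders, not values from the original text:

```
# /etc/ipf/ipnat.conf -- illustrative sketch; hme0 and the addresses
# are placeholders for your own interface and networks.

# Translate outbound TCP/UDP traffic from the internal network,
# remapping source ports into the 1025-65000 range:
map hme0 192.168.1.0/24 -> 10.1.0.0/16 portmap tcp/udp 1025:65000

# Redirect inbound connections on port 80 to an internal web server:
rdr hme0 10.1.0.1/32 port 80 -> 192.168.1.5 port 8080
```

After editing the file, the rule list can be reloaded with ipnat -CF -f /etc/ipf/ipnat.conf, which removes the current rules and flushes active entries before loading the new set.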
Emotion is complex, and the term has no single universally accepted definition. Emotion is, however, closely related to motivation and can sometimes provide motivation (as, for example, a student's fear of failing provides motivation for studying). Psychologists do agree that emotions are reaction patterns that include physiological, behavioral, and cognitive components. Theorists differ on the order of appearance of the reaction patterns. The autonomic nervous system. The autonomic nervous system (ANS) has two components, the sympathetic nervous system (SNS) and the parasympathetic nervous system (PNS). When activated, the SNS prepares the body for emergency actions; it controls glands of the neuroendocrine system (thyroid, pituitary, and adrenal glands). Activation of the SNS causes the production of epinephrine (adrenaline) from the adrenal glands, increased blood flow to the muscles, increased heart rate, and other readiness reactions. Conversely, the PNS functions when the body is relaxed or at rest and helps the body store energy for future use. PNS effects include increased stomach activity and decreased blood flow to the muscles. The reticular activating system. The reticular activating system (RAS) is a network of neurons that runs through the core of the hindbrain and into the midbrain and forebrain. It has been demonstrated that electrical stimulation of the RAS causes changes in the electrical activity of the cortex (as measured by an electroencephalogram) that are indistinguishable from changes in electrical activity seen when external stimuli (such as loud sounds) are present. The RAS is believed to first arouse the cortex and then to stimulate its wakefulness so that it may more effectively interpret sensory information. The limbic system. The limbic system includes the anterior thalamus, the amygdala, the septal area, the hippocampus, the cingulate gyrus, and structures that are parts of the hypothalamus (Figure ).
The word limbic means “border” and describes this system because its structures seem to form a rough border along the inner edge of the cerebrum. Studies have associated the limbic system with such emotions as fear and aggression, as well as with drives, including those for food and sex. Lie detectors (polygraphs). Lie detectors, or polygraphs, rely upon the physiological arousal of the emotions. Concomitant measurements are taken of the heart rate, blood pressure, respiration rate, and galvanic skin response (GSR). (The GSR is a measure of the skin's electrical conductivity, which changes as the sweat glands increase their activity.) Polygraph recordings are used to see if a person is lying, which usually creates emotional arousal. Because of polygraphs' high error rates, however, their findings are generally not accepted as evidence in the courts.
In this very unusual process, the normal DNA replication process is seriously flawed. The result is that instead of making a single copy of a region of a chromosome, many copies are produced. This leads to the production of many copies of the genes that are located on that region of the chromosome. Sometimes, so many copies of the amplified region are produced that they can actually form their own small pseudo-chromosomes called double-minute chromosomes. The genes on each of the copies can be transcribed and translated, leading to an overproduction of the mRNA and protein corresponding to the amplified genes. While this process is not seen in normal cells, it occurs quite often in cancer cells. If an oncogene is included in the amplified region, then the resulting overexpression of that gene can lead to deregulated cell growth. Examples of this include the amplification of the myc oncogene in a wide range of tumors and the amplification of the ErbB-2 or HER-2/neu oncogene in breast and ovarian cancers. In the case of the HER-2/neu oncogene, clinical treatments have been designed to target cells overexpressing the protein product. Gene amplification also contributes to one of the biggest problems in cancer treatment: drug resistance. Drug-resistant tumors can continue to grow and spread even in the presence of chemotherapy drugs. A gene commonly involved is called MDR, for multiple drug resistance. The protein product of this gene acts as a pump located in the membrane of cells. It is capable of selectively ejecting molecules from the cell, including chemotherapy drugs. This removal renders the drugs ineffective. This is discussed in more detail in the section on Drug Resistance. The amplification of different genes can render other chemotherapy drugs ineffective.
How does a child's physician diagnose otitis media? Ear infections require immediate attention from a pediatrician, primary care physician, or an otolaryngologist (ear, nose, and throat specialist). In addition, evaluation by an audiologist and a speech-language pathologist is important if a child has repeated episodes of infection and/or chronic fluid in the middle ear. If otitis media is suspected, the child's ears are examined with an otoscope to check for redness or fluid behind the eardrum. Pneumatic otoscopy may be performed to check for middle ear fluid: during this procedure a puff of air is blown into the ear and movement of the eardrum is observed. An eardrum with fluid behind it does not move as well as an eardrum with air behind it. An audiogram, or hearing test, is performed to measure the degree of hearing loss. Tympanometry measures eardrum motion and middle ear pressure to determine how well the Eustachian tube is functioning. If speech delay is suspected, a speech and language evaluation should be considered. A swab of any ear discharge may be taken for culture and sensitivity testing of the infecting microorganism, based on which the appropriate antibiotic is prescribed. In certain situations, a CT scan of the head may be helpful to determine if the infection has spread beyond the middle ear.
AKA demodicosis, Demodex, demodectic acariasis, follicular mange, and red mange. Demodectic mange is the result of mites of the genus Demodex multiplying out of control. Small populations of these mites are present in healthy animals (dogs, cats, pigs, humans, rabbits, cattle, etc.), but occasionally a population will rocket out of control. This is most likely due to a problem with the host's immune system. The result is irritation, hair loss, inflammation, and thickening of the skin. The Demodex mite, at 0.1-0.4 mm long, is too small to be seen without a microscope. Demodex infests a number of mammals, each type of animal having a different species of Demodex to infest it. It lives in hair follicles or the secretory ducts of sebaceous glands connected to the hair follicles; the female lays her eggs there, and when they hatch the larvae and nymphs are swept by the sebaceous flow to the mouth of the follicle. Here they mature and travel to other hair follicles to begin the cycle again. The entire life cycle takes about three weeks. Since all members of a host's species tend to have some Demodex mites, demodectic mange is not considered contagious. It cannot move between species.
Demodex folliculorum = Humans (in the hair follicles)
Demodex brevis = Humans (in the sebaceous glands)
Demodex phylloides = Hog mange (Pigs)
Demodex canis = (Dogs)
Demodex bovis = (Cattle)
Demodex ovis = (Sheep)
Demodex caprae = (Goats)
Demodex cati = (Cats)
Speaking of Mathematics
When the recognizer has done its work, the second component of AsTeR takes over to render the parsed expression in sound. It does so by applying rules written in AFL, the audio formatting language. The rules determine not only what words are spoken but also how they are spoken, controlling the pitch and speed of the voice and a variety of other qualities such as breathiness and smoothness. The rules also invoke nonspeech audio cues. AsTeR's standard rule for rendering fractions reads a simple expression such as a/b as "a over b," but a more complicated instance such as (x + y)/(x - y) is given as "the fraction with numerator x + y and denominator x - y." A few special cases are recognized, so that 1/2 can be rendered as "one-half" rather than "one over two." All of the AFL rules are subject to modification. The rendering of superscripts and subscripts is an area where changes in voice quality provide an intuitive vocal analogue of the visual rendering. Superscripts are read at a higher pitch and subscripts at a lower pitch. Such voice cues can help to resolve ambiguities in an audio rendering. For instance, x^(n+1) is readily distinguished from x^n + 1, even without an explicit verbal marker of where the exponent ends. An even more direct mapping from visual space to auditory space helps the listener to discern the structure of tables and matrices. With stereophonic output, AsTeR can vary the relative loudness of the left and right sound channels while reading the rows of a matrix, so that the voice seems to be moving through the structure. Nonspeech sounds provide a concise and unobtrusive way of conveying certain other textual features. In a bulleted list, a brief tone can announce each new item, rather than repeating the word "bullet." Sounds played continuously in the background while speech continues can serve to emphasize or highlight a passage of text, providing an audio equivalent of italic type and boldface.
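The fraction rule described above can be sketched as a small program. AsTeR's real rules are written in AFL, not Python, and the function names here are hypothetical; this is only a minimal illustration of the simple-versus-complex rendering logic:

```python
def is_simple(numerator, denominator):
    """A fraction counts as 'simple' when both parts are single tokens."""
    return all(len(part.split()) == 1 for part in (numerator, denominator))

def render_fraction(numerator, denominator):
    """Render a fraction as spoken text, in the spirit of AsTeR's rule."""
    # Special cases are recognized first, e.g. 1/2 -> "one-half".
    if (numerator, denominator) == ("1", "2"):
        return "one-half"
    # Simple fractions use the terse "a over b" form.
    if is_simple(numerator, denominator):
        return f"{numerator} over {denominator}"
    # Complicated fractions are spelled out explicitly.
    return (f"the fraction with numerator {numerator} "
            f"and denominator {denominator}")

print(render_fraction("a", "b"))          # a over b
print(render_fraction("x + y", "x - y"))  # the fraction with numerator x + y and denominator x - y
print(render_fraction("1", "2"))          # one-half
```

Because all AFL rules are subject to modification, a listener could swap in a different render_fraction to change, say, when the verbose form kicks in.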
The aim of these various devices is to create a true audio notation for mathematics. In written mathematics, succinct notation allows the overall structure of an expression to be taken in at a glance, whereas the same concepts expressed in words would have to be laboriously parsed. AsTeR seeks in a similar way to shift some of the work of listening from the cognitive to the perceptual domain.
The healthcare industry follows several best practices to ensure the delivery of the highest standards of patient care. Sterilization of medical devices is one of these best practices, because the use of unsterilized devices can result in infections, which can be life-threatening in some cases. Therefore, several regulatory bodies in the healthcare domain enforce stringent compliance requirements to guarantee the safety of medical devices. Sterilization measures for these devices are to be used during manufacturing as well as between procedures. The medical industry has taken up intensive medical device sterilization research to establish the best techniques for the process. Let us look at each of these techniques:
- Steam Sterilization: Steam sterilization involves the use of high temperatures and pressures to eliminate the microorganisms that breed on the surface of the instruments. The technique is best suited for devices made of heat-resistant materials such as steel. It enables quick sterilization, taking only 3-15 minutes to complete, though several hours are needed to cool and dry the instruments afterward. This second stage is extremely important because any moisture left on the device surface can result in corrosion and impair the functioning of the devices. The technique is unsuitable for equipment with plastic and electronic components.
- Dry Heat Sterilization: Dry heat sterilization is a more sophisticated technique that requires higher temperatures and takes longer as compared to steam sterilization. However, it is highly effective at killing biological contaminants as well as their spores. It is suitable for instruments made of metal and glass. Another benefit of this technique is that it prevents moisture-induced damage, as it uses hot air rather than steam to sterilize the devices. Vaccine vials are sterilized with this method before being filled with the liquid.
- Radiation Sterilization: As the name suggests, this technique uses gamma or electron beam (E-beam) radiation to kill any microorganisms on the device surface. It is best suited for single-use devices such as catheters, implants, and syringes. Radiation also works effectively for dense materials and can penetrate product packaging. Another benefit of this method is that it is less time-consuming. However, it should not be used for heat-sensitive plastics and other materials. Moreover, it can result in some cosmetic issues such as discoloration of the devices.
- Ethylene Oxide Sterilization: Another widely used sterilization technique for medical instruments is ethylene oxide sterilization. It is a chemical process that involves exposure to ethylene oxide gas, which reacts with enzymes, proteins, and DNA to prevent cell division and kill the microorganisms that live on the device surface. Unlike the other techniques, it is suited for devices made of delicate materials and those with plastic and electronic components. The gas is capable of penetrating small spaces inside devices and also works for instruments packaged in plastic. On the downside, the gas is highly reactive at low temperatures and toxic to humans, which makes the process complicated. It is also more time-consuming than the other techniques.
The choice of sterilization technique is based on the kind of material the device is made of and the standards being followed in the industry. It has to be ensured that the sterilization technique does not have any detrimental impact on the quality or integrity of the equipment.
Middle childhood brings many changes to a child's life. By this time, children can dress themselves, catch a ball more easily with only their hands, and tie their shoes. Developing independence from family becomes more important now. Events such as starting school bring children this age into regular contact with the larger world. Friendships become more and more important. Physical, social, and mental skills develop rapidly at this time. This is a critical time for children to develop confidence in all areas of life, such as through friends, schoolwork, and sports. Here are some changes your child may go through during middle childhood: - More independence from parents and family. - Stronger sense of right and wrong. - Beginning awareness of the future. - Growing understanding about one's place in the world. - More attention to friendships and teamwork. - Growing desire to be liked and accepted by friends. - Rapid development of mental skills. - Greater ability to describe experiences and talk about thoughts and feelings. - Less focus on one's self and more concern for others. (Adapted with permission from Bright Futures: Green M, Palfrey JS, editors. Bright Futures Family Tip Sheets: Middle childhood. Arlington (VA): National Center for Education in Maternal and Child Health; 2001) Last Updated: 3/3/2008
NASA's Cassini spacecraft has spotted methane lakes in the so-called "tropics" of Saturn's moon Titan, where temperatures reach a balmy −179 °C, or −290 °F. The lakes were a bit of a surprise to researchers, who had assumed that long-standing liquid bodies would exist only at the poles. The discovery also raises the intriguing possibility that life could exist in this bizarre environment. Top image: Ron Miller, who wants to call this "Lake Bonestell" after famous Titan artist Chesley Bonestell. To make this discovery, researchers used Cassini's visual and infrared mapping spectrometer, which detected the dark areas in the tropical region known as Shangri-La — an area very close to the spot where the European Space Agency's Huygens probe landed in 2005. After Huygens had landed, the heat from its lamp vaporized some methane from the ground, giving researchers a good indication that it had settled in a moist area. One of the tropical lakes appears to be the size of Utah's Great Salt Lake, with a depth of at least three feet (one meter). NASA announced the liquid lakes as part of the ongoing Cassini mission. The question now facing the researchers is: where did the liquid for these lakes come from? Caitlin Griffith, a Cassini team associate at the University of Arizona, speculates that the lake is being fed by an underground aquifer. "In essence," she says, "Titan may have oases." It's important that scientists study these lakes so that they can get a better handle on Titan's weather. While the Earth has a hydrological cycle, Titan has a methane cycle, with methane rather than water circulating. Ultraviolet light pierces Titan's atmosphere and breaks the methane apart on contact, setting off a complicated chain of organic chemical reactions. Because methane is continually destroyed this way, the aquifer theory supports the idea that a subterranean source continuously replenishes it.
And because it rains so infrequently on Titan, there's no way that precipitation could account for the large, waist-deep lakes of continually evaporating methane. What's of particular interest to the researchers are the organic chemical reactions that are likely producing interesting molecules such as amino acids, the building blocks of life. As a result of this finding, astrobiologists cannot rule out the possibility that Titan might be able to spark and harbour primitive lifeforms. NASA's findings will appear in an upcoming issue of the journal Nature. Image via NASA.
All mosquitoes lay eggs in water, which can include large bodies of water, standing water (like swimming pools) or areas of collected standing water (like tree holes or gutters). Females lay their eggs on the surface of the water, except for Aedes mosquitoes, which lay their eggs above water in protected areas that eventually flood. The eggs can be laid singly or as a group that forms a floating raft of mosquito eggs (see Mosquito Life Cycle for a picture of an egg raft). Most eggs can survive the winter and hatch in the spring. The mosquito eggs hatch into larvae or "wigglers," which live at the surface of the water and breathe through an air tube or siphon. The larvae filter organic material through their mouth parts and grow to about 0.5 to 0.75 inches (1 to 2 cm) long; as they grow, they shed their skin (molt) several times. Mosquito larvae can swim and dive down from the surface when disturbed (see Mosquito Life Cycle for a Quicktime movie of free-swimming Asian tiger mosquito larvae). The larvae live anywhere from days to several weeks depending on the water temperature and mosquito species. After the fourth molt, mosquito larvae change into pupae, or "tumblers," which live in the water anywhere from one to four days depending on the water temperature and species. The pupae float at the surface and breathe through two small tubes (trumpets). Although they do not eat, pupae are quite active (see Mosquito Life Cycle for a Quicktime movie of free-swimming Asian tiger mosquito pupae). At the end of the pupal stage, the pupa transforms into an adult inside the pupal case. The adult uses air pressure to break the pupal case open, crawls to a protected area, and rests while its external skeleton hardens, spreading its wings out to dry. Once this is complete, it can fly away and live on the land. One of the first things that adult mosquitoes do is seek a mate, mate, and then feed.
Male mosquitoes have short mouth parts and feed on plant nectar. In contrast, female mosquitoes have a long proboscis that they use to bite animals and humans and feed on their blood (the blood provides proteins that the females need to lay eggs). After they feed, females lay their eggs (they need a blood meal each time they lay eggs). Females continue this cycle and live anywhere from many days to weeks (longer over the winter); males usually live only a few days after mating. The life cycles of mosquitoes vary with the species and environmental conditions.
The following procedures have been developed to equip parents, extended family members, friends, personal care assistants, teachers, group home staff, and day treatment staff to be more effective with children or adults who engage in challenging behavior. General Interaction Strategies (GIS) provide caregivers with a solid foundation of skills that are frequently used when interacting with children or adults who may engage in challenging behavior. In many situations, once caregivers have learned the basic skills trained in GIS, specific behavior plans can be developed for improving skill acquisition, reducing problem behavior, and building independence. There are a variety of options available for structuring the training sessions. Training may occur in 1:1 sessions with a family member or staff. An entire family or team of staff can be trained. One-, two-, or multiple-day workshop training is also available for staff working in schools, group homes, daycares, day treatment centers, or agencies in which people interact with individuals who engage in problematic behavior. Caregivers will be trained to use the following procedures, which will be addressed using the actual behaviors, activities, and situations that are specific to each client. These procedures will be taught to caregivers before moving on to the development of behavior plans to deal with specific behaviors. The procedures to be taught include:
1. Reinforcement Strategies
- Identifying behavior to reinforce
- Identifying reinforcing items and activities
- Noticing and responding/praising/reinforcing appropriate behaviors
2. Offering Choices
- Presenting choices of tasks that must get done
- Giving choices to maintain appropriate behavior
- Presenting options when you want to help your child structure their free play, while maintaining appropriate behaviors
3. Giving Instructions
- Delivering instructions in the best way at the best time
- Following through with instructions and providing reinforcement
4. Responding to Requests
- Honoring appropriate requests as often and as quickly as possible
- Responding when you are unable to honor a request
5. Redirection Techniques
- Offering choices of more appropriate alternative behaviors when your child is off-task or behaving inappropriately
- Presenting appropriate alternatives without interacting with the individual
6. Responding to Upsets
- Knowing when and how to ignore inappropriate behaviors
- Knowing when and how to re-engage with your child when they are behaving appropriately
7. Additional notes
- Additional topics may be added as needed on an individual basis
For each of the topics covered, a specific set of steps will be followed:
1. Information provided via written materials, discussions, and questions and answers
2. Modeling, which may include staff demonstrating the techniques or video clips
3. Caregiver rehearsal with feedback from staff
4. Caregiver practicing with the individual and receiving feedback from the trainer
The skills that are taught in the GIS are skills that can be used throughout an individual’s life. Our goal is to empower you to be able to identify skills that you can use to encourage appropriate behaviors in all situations. Not only will you be able to use these skills with the people with whom you live or work, but you will also be able to teach other people in the individual’s life (e.g., grandparents, siblings, PCAs, babysitters) to use these same skills to create a very consistent environment for the individual – so you are all sending the same message regarding what behaviors are important and appropriate! Funding for the GIS is specific to the client’s situation and goals of the team. Medical Assistance may be billed in some cases. In other situations, individual or group rates are available. For more information, please contact us.
Use these resources to introduce students to how the American people elect national leaders, the laws that govern the nation, and the three branches of government. The Congress of the United States is the legislative, or lawmaking, branch of the federal government. It is a bicameral legislature, which means that it is made up of two chambers, or houses. They are the House of Representatives and the Senate. The U.S. Constitution gives the two houses similar powers. The most important of these is that no law can be adopted unless it is first passed in identical form by a majority (more than half) of the members of each house. So what makes them different, and why are there two? The House and the Senate There are two main reasons why the Congress has two houses. The first is in keeping with historical tradition. The framers of the Constitution were most familiar with the British Parliament, which consists of two houses. In fact, at the time of the Constitutional Convention of 1787, the legislatures of 11 of the 13 states of the United States were made up of two houses. The second is that a bicameral (made of two houses) legislature offered a way of resolving a major conflict in the writing of the Constitution. Delegates to the Constitutional Convention from the heavily populated states wanted a state's representation in the new Congress to be based on population. Delegates from the less heavily populated states feared that the larger states would dominate the Congress if this were done. They insisted that each state receive equal representation. This obstacle was overcome by the Great Compromise. It provided for equal representation for each state in the Senate, and for the House of Representatives to be elected on the basis of population. Furthermore, a legislature made up of two chambers supports the system of checks and balances that is built into the American form of government. Either house is able to block legislation approved by the other. 
Therefore, the two houses must often cooperate with each other and compromise on their differences in writing the nation's laws. The Big House The House of Representatives has 435 members, or one elected from each congressional district. It is thus more than four times the size of the Senate, which has 100 members, or two elected from each state. The House of Representatives (commonly known as the House) is presided over by the Speaker of the House, who is nominated by the majority political party in that chamber. The vice president of the United States presides over the Senate. Terms of Office Members of the House are elected to 2-year terms of office. Senators are elected to 6-year terms. Members of the House must thus seek re-election much more frequently than senators and have to pay especially close attention to the needs and opinions of their constituents — the people in the districts they represent. While a senator represents an entire state, a member of the House represents a congressional district, which is usually only a small part of a state. A senator's constituency (the body of citizens he or she represents) is therefore likely to be more diverse than a House member's. For example, states have urban (city), suburban, and rural (country) areas, all of whose voters a senator must represent. A House member's district, on the other hand, may be largely urban or suburban or rural. The Senate has special responsibility for the ratification, or approval, of treaties with foreign countries. The Constitution requires that "two thirds of the Senators present concur" (agree) for a treaty to be ratified. This gives the Senate more influence than the House in foreign policy matters. In addition, candidates nominated by the president for such positions as cabinet members, ambassadors, and federal judges require approval by the Senate. On the other hand, the House has a special role in tax legislation. 
All bills for raising revenue must originate in the House of Representatives. Congress also has a number of other responsibilities and powers. It can propose amendments to the Constitution and declare war. The House of Representatives has the power to impeach, or bring charges against, federal officials for misconduct. If no candidate in a presidential election wins a majority in the electoral college, the president is elected by the House of Representatives. The Congress also determines if a president is disabled and thus unable to continue in office. The Congress can conduct investigations into any matter that affects its powers under the Constitution. It also reviews the actions of federal agencies to see that programs authorized by law are carried out.
Despite its overwhelming success, the human brain peaked about two million years ago. Lucky for us, computers are helping us understand our brains better, but there may be some consequences to giving AI a skeleton key to our minds. A team of Japanese researchers recently conducted a series of experiments in creating an end-to-end solution for training a neural network to interpret fMRI scans. While previous work achieved similar results, the difference in the new method lies in how the AI is trained. An fMRI is a non-invasive and safe brain scan similar to a normal MRI; what differs is that the fMRI shows changes in blood flow. The images from these scans can be interpreted by an AI system and ‘translated’ into a visual representation of what the person being scanned was thinking about. This isn’t totally novel; we reported on the team’s previous efforts a couple of months ago. What’s new is how the machine gets its training data. In the earlier research, the group used a neural network that had been pre-trained on regular images. The results it produced were interpretations of brain scans based on other images it had seen. The images accompanying the study show what a human saw and then three different ways an AI interpreted fMRI scans from a person viewing that image; each of those images was created by a neural network trained on image recognition using a large data set of regular images. Now the network is trained solely on images of brain scans. Basically, the old way was like showing someone a bunch of pictures and then asking them to interpret an inkblot as one of them. Now, the researchers are just using the inkblots, and the computers have to try to guess what they represent. The fMRI scans represent brain activity as a human subject looks at a specific image. Researchers know the input, the computer doesn’t, so humans judge the machine’s output and provide feedback.
Perhaps most amazing: this system was trained on about 6,000 images – a drop in the bucket compared to the millions some neural networks use. The scarcity of brain scans makes it a difficult process, but as you can see even a small sample data-set produces exciting results. But, when it comes to AI, if you’re not scared then you’re not paying attention. We’ve already seen machine learning turn a device no more complex than a WiFi router into a human emotion detector. With the right advances in non-invasive brain scanning it’s possible that information similar to that provided by an fMRI could be gleaned by machines through undetectable means. AI could hypothetically interpret our brainwaves as we conduct ourselves in, for example, an airport. It could scan for potentially threatening mental imagery like bombs or firearms and alert security. And there’s also the possibility that this technology could be used by government agencies to circumvent a person’s rights. In the US, this means a person’s right not to be “compelled in any criminal case to be a witness against himself” may no longer apply. With artificial intelligence interpreting even our rudimentary yes/no thoughts we could effectively be rendered indefensible to interrogation – the implications of which are unthinkable. Then again, maybe this technology will open up a world of telekinetic communication through Facebook Messenger via cloud AI translating our brainwaves. Perhaps we’ll control devices in the future by dedicating a small portion of our “mind’s eye” to visualizing an action and thinking “send” or something similar. This could lay the groundwork for incredibly advanced human-machine interfaces. Still, if you’re the type of person who doesn’t trust a polygraph, you’re definitely not ready for the AI-powered future where computers can tell what you’re thinking. H/t: MIT’s Technology Review
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
Why do we teach this? Why do we teach it in the way we do? Mathematics teaches children how to make sense of the world around them through developing their ability to calculate, reason and solve problems. Mathematics is essential to everyday life, critical to science, technology and engineering, and necessary for financial literacy and most forms of employment. A high-quality mathematics education therefore provides a foundation for understanding the world, the ability to reason mathematically, an appreciation of the beauty and power of mathematics, and a sense of enjoyment and curiosity about the subject. We aim to support children to achieve economic well-being and equip them with a range of computational skills and the ability to solve problems in a variety of contexts. At Mobberley, children are encouraged to make mistakes in a safe and supportive environment. They are supported to discuss these misconceptions with their peers and staff alike. Here at Mobberley, we place oracy at the heart of our learning through shared work and class discussions. Use of appropriate vocabulary is modelled throughout lessons by both staff and children, allowing everyone to ‘talk like a mathematician’. Once a child can articulate their understanding of a concept, they can truly begin to make connections within their learning. At our school, the majority of children will be taught the content from their year group only. They will spend time becoming true masters of content, applying and being creative with new knowledge in multiple ways. We aim for all pupils to:
- Become fluent in the fundamentals of mathematics, so that they develop conceptual understanding and the ability to recall and apply knowledge rapidly and accurately, through varied and regular practice of increasingly complex problems over time.
- Reason mathematically by following a line of enquiry and develop and present a justification, argument or proof using mathematical language.
- Solve problems by applying their mathematics to a variety of problems with increasing sophistication, including breaking down problems into a series of simpler steps and persevering in seeking solutions, including in unfamiliar contexts and real-life scenarios.
Sequence & structure
How does the maths curriculum plan set out the sequence and structure of how we’ll teach the knowledge and skills? We follow the National Curriculum, which sequences and structures the teaching into the year groups. In order to ensure this curriculum is covered in full and in manageable and logical steps, we follow the White Rose planning in EYFS, KS1 and KS2. The progression is clearly structured and available to see under this ‘progression’ link or within the maths curriculum web page.
An international team of scientists has determined the evolutionary family tree for one of the most strikingly diverse and endangered bird families in the world, the Hawaiian honeycreepers. Using one of the largest DNA datasets for a group of birds and employing next-generation sequencing methods, the team which included Professor Michi Hofreiter, of the University of York, determined the types of finches from which the honeycreeper family originally evolved, and linked the timing of that rapid evolution to the formation of the four main Hawaiian Islands. The research, which will be published in the latest edition of Current Biology on 8 November, also involved scientists from the Smithsonian Institution and Earlham College in the USA and the Max Planck Institute for Evolutionary Anthropology in Leipzig. There were once more than 50 species of these colourful songbirds that were so diverse that historically it was unclear that they were all part of the same group. Professor Hofreiter, of the Department of Biology at the University of York, said: “Honeycreepers probably represent the most impressive example of an adaptive radiation in vertebrates that has led to a number of beak shapes unique among birds. In our study we are, for the first time, able to resolve the relationships of the species within this group and thereby understand their evolution." Heather Lerner, an assistant professor of biology at Earlham College, added: “Some eat seeds, some eat fruit, some eat snails, some eat nectar. Some have the bills of parrots, others of warblers, while some are finch-like and others have straight, thin bills. So the question that we started with was how did this incredible diversity evolve over time?” The answer is unique to the Hawaiian Islands, which are part of a conveyor belt of island formation due to volcanic activity, with new islands popping up as the conveyor belt moves northwest. 
Each island that forms represents a blank slate for evolution, so as one honeycreeper species moves from one island to a new island, those birds encounter new habitat and ecological niches that may cause them to adapt and branch off into distinct species. The researchers examined the evolution of the Hawaiian honeycreepers after the formation of Kauai-Niihau, Oahu, Maui-Nui and Hawaii. The largest burst of evolution into new species, called a radiation, occurred between 4 million and 2.5 million years ago, after Kauai-Niihau and Oahu formed but before the remaining two large islands existed, and resulted in the evolution of six of 10 distinct types of species. Co-author Helen James, a research zoologist at the Smithsonian National Museum of Natural History, said: “This radiation is one of the natural scientific treasures that the archipelago offers out in the middle of the Pacific. It was fascinating to be able to tie a biological system to geological formation, and it allowed us to become the first to offer a full picture of these birds’ adaptive history.” Using genetic data from 28 bird species that seemed similar to the honeycreepers morphologically or genetically, or that shared geographic proximity, the researchers determined that the various honeycreeper species evolved from Eurasian rosefinches. Unlike most other ancestral bird species that came from North America and colonized the Hawaiian Islands, the rosefinch likely came from Asia, the scientists found. Rob Fleischer, head of the Smithsonian Conservation Biology Institute’s Center for Conservation and Evolutionary Genetics, said: “There is a perception that there are no species remaining that are actually native to Hawaii, but these are truly native birds that are scientifically valuable and play an important and unique ecological function. 
I’m thrilled that we finally had enough DNA sequence and the necessary technology to become the first to produce this accurate and reliable evolutionary tree.” The diversity of Hawaiian honeycreepers has taken a huge hit, with more than half of the known 56 species already extinct. The researchers focused on the 18 surviving honeycreeper species; of those, six are considered critically endangered by the International Union for Conservation of Nature, four are considered endangered and five are vulnerable. Professor Hofreiter said: “It is a tragedy that most species from this unique group of birds, one of the best examples of the power of natural selection we have on earth, are extinct or on the brink of extinction. We still have time to take actions to conserve the diversity that is left.” The next step in the research is to use museum specimens and subfossil bones to determine where the extinct species fit into the evolutionary family tree, or phylogeny, to see if the new lineages fit into the overall pattern found in the current study. DNA analysis for the current study used specialized protocols developed by Professor Hofreiter and colleagues at the Max Planck Institute.
Max Planck was a German theoretical physicist, considered to be the initial founder of quantum theory, and one of the most important physicists of the 20th Century. Around the turn of the century, he realized that light and other electromagnetic waves were emitted in discrete packets of energy that he called "quanta" - "quantum" in the singular - which could only take on certain discrete values (multiples of a certain constant, which now bears the name “Planck constant”). This is generally regarded as the first essential stepping stone in the development of quantum theory, which has revolutionized the way we see and understand the sub-atomic world. Karl Ernst Ludwig Marx Planck, better known as Max, was born in Kiel in Holstein, northern Germany on 23 April 1858. His family was traditional and intellectual (his father was a law professor and his grandfather and great-grandfather had been theology professors). In 1867, the family moved to Munich, where Planck attended the Ludwig Maximilians gymnasium school. There, he came under the tutelage of Hermann Müller, who taught him astronomy and mechanics as well as mathematics, and awoke Planck’s early interest in physics. Although a talented musician (he sang, played the piano, organ and cello, and composed songs and even operas), he chose to study physics at the University of Munich in 1874, soon transferring to theoretical physics, before going on to Berlin for a year of further study in 1877. Having completed his habilitation thesis on heat theory in 1880, Planck became an unpaid private lecturer in Munich, waiting until he was offered an academic position. In April 1885, the University of Kiel appointed him as an associate professor of theoretical physics, and he continued to pursue work on heat theory and on Rudolph Clausius’ ideas about entropy and its application in physical chemistry. In 1889, Planck moved to the University of Berlin, becoming a full professor in 1892. 
He had married Marie Merck in 1887, and they went on to have four children, Karl (1888), the twins Emma and Grete (1889) and Erwin (1893), of whom only Erwin was to survive past the First World War. The Planck home in Berlin became a social and cultural center for academics, and many well-known scientists, including Albert Einstein, Otto Hahn and Lise Meitner, were frequent visitors. In 1894, Planck turned his attention to the problem of black body radiation, the observation that the greatest amount of energy being radiated from a “black body” (or any perfect absorber) falls near the middle of the electromagnetic spectrum, rather than in the ultraviolet region as classical theory would suggest. In particular, he investigated how the intensity of the electromagnetic radiation emitted by a black body depends on the frequency of the radiation (e.g. the color of the light) and the temperature of the body. After some initial frustrations, he derived the first version of his black body radiation law in 1900. However, although it described the experimentally observed black body spectrum well, he realized that it was not perfect. The previous year, though, in 1899, he had noted that the energy of photons could only take on certain discrete values which were always a full integer multiple of a certain constant, which is now known as the “Planck constant”. Thus, light and other waves were emitted in discrete packets of energy that he called "quanta". Defining the Planck constant enabled him to go on to define a new universal set of physical units or Planck units (such as the Planck length, the Planck time, the Planck temperature, etc), all based on five fundamental physical constants: the speed of light in a vacuum, the gravitational constant, the Coulomb force constant, the Boltzmann constant and his own Planck constant. 
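The Planck units mentioned above follow directly from those constants. As an illustrative sketch (the constant values below are standard CODATA figures, not taken from this text), the Planck length, time and mass can be computed as:

```python
import math

# Fundamental constants (standard CODATA values)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light in a vacuum, m/s

# Planck units built from combinations of these constants
planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
planck_time = planck_length / c              # ~5.4e-44 s
planck_mass = math.sqrt(hbar * c / G)        # ~2.2e-8 kg
```

These "natural" units depend on no human convention, which is exactly what made the system attractive to Planck.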
Later in 1900, then, he revised his black body theory to incorporate the supposition that electromagnetic energy could be emitted only in “quantized” form, so that the energy could only be a multiple of an elementary unit E = hv (where h is the Planck constant, previously introduced by him in 1899, and v is the frequency of the radiation). Although quantization was a purely formal assumption in Planck’s work at this time and he never fully understood its radical implications (which had to await Albert Einstein’s interpretations in 1905), its discovery has come to be regarded as effectively the birth of quantum physics, and the greatest intellectual accomplishment of Planck's career. It was in recognition of this accomplishment that he was awarded the Nobel Prize in Physics in 1918. Planck was among the few who immediately recognized the significance of Einstein’s 1905 Special Theory of Relativity, and he used his influence in the world of theoretical physics (he was president of the newly formed German Physical Society from 1905 to 1909) to ensure that the theory was soon widely accepted in Germany, as well as making his own contributions to extending the theory. After Planck had been appointed dean of Berlin University, it became possible for him to call Einstein to Berlin and to establish a new professorship specifically for him in 1914, and the two scientists soon became close friends and met frequently to play music together. Planck's wife Marie died in 1909, possibly from tuberculosis, and, in 1911, he married his second wife, Marga von Hoesslin, who bore him a third son, Hermann, the same year. By the outbreak of the First World War in 1914 (which Planck initially welcomed, but later argued against), he was effectively the highest authority of German physics, as one of the four permanent presidents of the Prussian Academy of Sciences, and a leader in the influential umbrella body, the Kaiser Wilhelm Society. 
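Planck's relation E = hv is easy to evaluate numerically. The sketch below (constant values are standard CODATA figures; the 500 nm wavelength is just an illustrative choice) computes the energy of a single quantum of green light and its first few allowed multiples:

```python
h = 6.62607015e-34          # Planck constant, J*s (exact by SI definition)
c = 2.99792458e8            # speed of light, m/s
wavelength = 500e-9         # green light, 500 nm (illustrative choice)

nu = c / wavelength         # frequency of the radiation, Hz
E = h * nu                  # energy of one quantum, ~4e-19 J
E_eV = E / 1.602176634e-19  # the same energy in electron-volts, ~2.5 eV

# Energy can only be emitted in integer multiples of the elementary unit
allowed = [n * E for n in range(1, 4)]
```

The tiny size of each quantum (a few times 10^-19 joules) is why the granularity of light went unnoticed by classical physics.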
By the end of the 1920s, Niels Bohr, Werner Heisenberg and Wolfgang Pauli had worked out the so-called "Copenhagen interpretation" of quantum mechanics, and the quantum theory which Planck’s work had triggered became ever more established, even if Planck himself (like Einstein) was never quite comfortable with some of its philosophical implications. When the Nazis seized power in 1933, Planck was an old man of 74, and he generally avoided open conflict with the Nazi regime, although he did organize a somewhat provocative official commemorative meeting after the death in exile of fellow physicist Fritz Haber. He also succeeded in secretly enabling a number of Jewish scientists to continue working in institutes of the Kaiser Wilhelm Society for several years. The “Deutsche Physik” movement attacked Planck, Arnold Sommerfeld and Werner Heisenberg among others for continuing to teach the theories of Einstein, calling them "white Jews". When his term as president of the Kaiser Wilhelm Society ended in 1936, the Nazi government pressured him to refrain from seeking another term. At the end of 1938, the Prussian Academy of Sciences lost its remaining independence and was taken over by Nazis, and Planck protested by resigning his presidency. He steadfastly refused to join the Nazi party, despite being under significant political pressure to do so. Allied bombing campaigns against Berlin during the Second World War forced Planck and his wife to leave the city temporarily to live in the countryside, and his house in Berlin was completely destroyed by an air raid in 1944. He continued to travel frequently, giving numerous public lectures, including talks on Religion and Science (he was a devoted and persistent adherent of Christianity all his life), and at the advanced age of 85 he was still sufficiently fit to climb 3,000-meter peaks in the Alps. 
At the end of the Second World War (during which his youngest son Erwin was implicated in the attempt on Hitler's life in 1944 and hanged), Planck, his second wife and his remaining son moved to Göttingen. He died there on 4 October 1947, aged 89, from the consequences of a fall and several strokes.
This exhibit features a coronal section of an infant brain with an enlarged detail depicting the meninges of the brain. The full coronal section illustrates the normal anatomy of the brain, including: the cerebellum, midbrain, tentorium cerebelli, dural membrane, ventricles, skull, venous sinuses, and cerebrum. The schematic cross section further details the meningeal layers surrounding the brain. Skin, subcutaneous fat, galea aponeurotica, and pericranium cover the outer skull. The inner skull is lined with endocranium and dura mater. The dura mater is a tough, membranous sac that protects the brain. It also contains cerebrospinal fluid and the venous sinuses. Beneath the dura lies the leptomeninges (arachnoid mater and pia mater), which cover the brain, and also contain cerebrospinal fluid. Cerebral arteries travel through the subdural spaces and supply blood to the brain tissues.
Common Types of Pond Water Algae There are four common types of pond water algae and a toxic alga that is found in some larger ponds. The greenish and brownish alga will make ponds look dirty and not suitable for swimming or fishing. However, one type of algae helps oxygenate the water in a pond and keep fish alive. Sometimes algae in a pond is necessary. The alga that makes the water look green on the top is called planktonic. The green alga will help oxygenate a pond and when it dies off, it can cause a depletion of oxygen in the water and the fish can die as a result. Some strains of this alga can give off a foul smell. Pond scum is called filamentous algae. It starts growing around the bottom and edges of the pond and can rise to the surface as the pond begins to fill with it. The thread-like alga is greenish in color and attaches to turtles, rocks and logs. There are two forms of resistant algae that can grow in ponds. The lyngbya alga is a bluish green and the pithophora is a darkish green alga that looks similar to an S.O.S. scrubbing pad. The pithophora grows mainly on the bottom of ponds and sometimes it will appear on the surface of the water, but that is rare. The lyngbya will lie on the bottom of a pond, but can float to the top. The most common form of algae is the string alga. It looks like stringy hair floating in the water and can become long and slimy to the touch. The alga normally grows by the shallow waters around plants and rocks. Toxic algae can grow in large ponds and can cause harm to humans, wildlife and pets. The U.S. Department of the Interior Federal Water Pollution Control Administration analyzed medical case histories for a 120-year period for humans who were exposed to toxic algae and it caused throat, eye and nose irritation, muscle pain, fever, diarrhea, vomiting, nausea and skin rashes. Animals that drink the contaminated water died, as noted by the Water Quality Criteria Handbook. 
Pamela Gardapee is a writer with more than seven years experience writing Web content. Being functional in finances, home projects and computers has allowed Gardapee to give her readers valuable information. She studied accounting, computers and writing before offering her tax, computer and writing services to others.
The dynamics of water near solid surfaces play a critical role in numerous technologies, including water filtration and purification, chromatography, and catalysis. One well-known way to influence those dynamics, which, in turn, affects how water “wets” a surface, is to modify the surface hydrophobicity, or the extent to which the surface appears “oily” and repels water. Such modifications can be achieved by altering the average coverage, or surface density, of hydrophobic chemical groups on the interface. Now, in a paper published in the Proceedings of the National Academy of Sciences, lead author Jacob Monroe, a fifth-year PhD student in the lab of UCSB chemical engineering professor M. Scott Shell, describes a new perspective on the factors that control water dynamics at interfaces. The findings could have important ramifications for membranes, especially those used in water filtration. “What we’re seeing is that just changing the patterning alone — the distribution of those hydrophobic and hydrophilic groups, without changing the average surface densities — produces fairly large effects at an interface,” Monroe said. “That’s valuable to know if I want water to flow through a membrane optimally.” Monroe and Shell found that if they arranged all the hydrophobic groups together and made the surface very patchy, the water moved faster, but if they spread them all apart, the water slowed down. “If the membrane were for water filtration, you might want the water to move quickly across it,” Monroe notes, “but you might also want it to sit at the surface to repel particles that stick to it and foul the membrane.” Monroe's finding about patterning holds immediate relevance for interpreting experiments, because it means that assessing the surface density of hydrophobic groups alone is not enough to characterize the material. 
Monroe and Shell discovered the distribution effect by combining simulations of molecular dynamics with a genetic algorithm optimization, which is simply an algorithm that emulates natural evolution — here used to identify surface patterns that either increase or decrease surface-water mobility. “It’s kind of like a breeding program,” Monroe explains. “If you had a pool of dogs and wanted a certain kind of dog, say one that’s bigger or has a shorter tail, you would breed the dogs that have those characteristics. We do the same thing on a computer, but our goal is to design a surface having specific characteristics that allow it to perform how we want it to. You need the fitness metric, and then you can tune the genetic algorithm to optimize specific performance characteristics, for instance, to have water move quickly across a membrane or to adsorb on a surface. “We run molecular dynamics simulations to assess those properties,” he adds. “We assign a level of fitness to each individual, and then hybridize the most fit individuals spatially and drive the systems toward the desired properties. “This work is exciting because it shows for the first time that nanoscale patterning on surfaces is an effective means of engineering materials that give rise to unique water dynamics,” Shell says. “It has long been thought that biological molecules, like proteins, use surface chemical patterning to influence water dynamics toward functional ends, such as accelerating binding events that underlie many biomolecular processes. We have now used a computational optimization algorithm to 'learn' what these patterns should look like in synthetic materials having target performance characteristics. The results suggest a new way to design surfaces to precisely control water dynamics near them, which becomes widely important to chemical separations and catalysis tasks.” The research will also be useful in a new Energy Frontier Research Center (EFRC) project. 
In that effort, the researchers will be taking a materials approach to “design and perfect” revolutionary new materials that can be used as membranes for filtering chemically contaminated water for re-use.
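The optimization loop Monroe describes can be sketched in miniature. In the toy version below (an illustration of the general technique, not the authors' code), the molecular-dynamics-derived water-mobility score is replaced by a simple "patchiness" count of like-neighbor pairs on a small grid, and mutation swaps a hydrophobic site with a hydrophilic one so that the average surface density never changes:

```python
import random

def patchiness(pattern, n):
    """Toy fitness: count adjacent like-group pairs on an n x n grid.
    Stands in for the water-mobility score that the real workflow gets
    from a molecular dynamics simulation."""
    score = 0
    for i in range(n):
        for j in range(n):
            if j + 1 < n and pattern[i * n + j] == pattern[i * n + j + 1]:
                score += 1
            if i + 1 < n and pattern[i * n + j] == pattern[(i + 1) * n + j]:
                score += 1
    return score

def mutate(pattern):
    """Swap one hydrophobic (1) and one hydrophilic (0) site, which
    preserves the overall surface density of hydrophobic groups."""
    p = pattern[:]
    ones = [k for k, v in enumerate(p) if v == 1]
    zeros = [k for k, v in enumerate(p) if v == 0]
    a, b = random.choice(ones), random.choice(zeros)
    p[a], p[b] = p[b], p[a]
    return p

def evolve(n=6, density=0.5, pop_size=30, generations=200, seed=0):
    """Evolve surface patterns toward maximum patchiness."""
    random.seed(seed)
    n_ones = int(n * n * density)
    base = [1] * n_ones + [0] * (n * n - n_ones)
    pop = [random.sample(base, len(base)) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fittest half, refill the population with their mutants
        pop.sort(key=lambda p: patchiness(p, n), reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=lambda p: patchiness(p, n))
```

Running `evolve()` drives the population toward maximally patchy surfaces; substituting a simulation-derived score for `patchiness`, and adding spatial crossover between fit individuals, recovers the spirit of the published workflow.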
Influenza, commonly known as “the flu”, is an infectious disease caused by the influenza virus. Symptoms can be mild to severe. The most common symptoms include: a high fever, runny nose, sore throat, muscle pains, headache, coughing, and feeling tired. These symptoms typically begin two days after exposure to the virus and most last less than a week. The cough, however, may last for more than two weeks. In children there may be nausea and vomiting, but these are not common in adults. Nausea and vomiting occur more commonly in the unrelated infection gastroenteritis, which is sometimes inaccurately referred to as “stomach flu” or “24-hour flu”. Complications of influenza may include viral pneumonia, secondary bacterial pneumonia, sinus infections, and worsening of previous health problems such as asthma or heart failure. Usually, the virus is spread through the air from coughs or sneezes. This is believed to occur mostly over relatively short distances. It can also be spread by touching surfaces contaminated by the virus and then touching the mouth or eyes. A person may be infectious to others both before and during the time they are sick. The infection may be confirmed by testing the throat, sputum, or nose for the virus. Influenza spreads around the world in a yearly outbreak, resulting in about three to five million cases of severe illness and about 250,000 to 500,000 deaths. In the Northern and Southern parts of the world outbreaks occur mainly in winter, while in areas around the equator outbreaks may occur at any time of the year. Death occurs mostly in the young, the old and those with other health problems. Larger outbreaks known as pandemics are less frequent. In the 20th century three influenza pandemics occurred: Spanish influenza in 1918, Asian influenza in 1957, and Hong Kong influenza in 1968, each resulting in more than a million deaths. 
The World Health Organization declared an outbreak of a new type of influenza A/H1N1 to be a pandemic in June of 2009. Influenza may also affect other animals, including pigs, horses and birds. Frequent hand washing reduces the risk of infection because the virus is inactivated by soap. Wearing a surgical mask is also useful. Yearly vaccination against influenza is recommended by the World Health Organization for those at high risk. The vaccine is usually effective against three or four types of influenza. It is usually well tolerated. A vaccine made for one year may not be useful in the following year, since the virus evolves rapidly. Antiviral drugs such as the neuraminidase inhibitor oseltamivir, among others, have been used to treat influenza. Their benefits in those who are otherwise healthy do not appear to be greater than their risks. No benefit has been found in those with other health problems.
Nowadays, climate change threats are rapidly increasing, making sustainability a necessary tool to reduce the pressure on the environment. Every day, we are exposed to various environmental risks including pollution, exhaustion of natural resources, depletion of the ozone layer, generation of e-waste, poor air quality, poor water quality, deforestation, and more. These environmental parameters pose a challenge to sustainability. With no focus on curbing emissions, US CO2 emissions will reach 4,807 million metric tons. Every day, we hear about green technology or sustainability. With increased industrialization, there is a need for a workable solution that tackles environmental problems with green technology. Sustainable technology is important because it brings innovation while considering the natural resources of the environment, and thus fosters social and economic development. Green technology helps to reduce the negative impacts on human health by conserving resources and nature.
Why is Green Technology Important to us?
We need green technology to slow down global warming and to reduce the greenhouse gases in the environment. Implementing green technology to preserve natural resources is beneficial for human well-being and the planet’s health. This technology ensures the use of renewable energy sources instead of non-renewable sources of energy. Sustainable technology comprises recycling, health concerns, energy efficiency, and renewable energy resources. Shifting towards green technology has a positive impact on the environment in the long run. One of the foremost objectives of green technology is to preserve and protect the environment. A good example of green technology that helps to reverse the adverse effects of existing damage is “Bioenergy with Carbon Capturing & Storage”, which converts crops into biofuels and captures the remaining CO2. 
Eco-friendly technology or green technology follows some basic principles that contribute to the betterment of the earth’s natural environment.
- Waste Reduction
- Efficient Power Use
- Limit Hazardous Materials
- Eco-Friendly Products
- Reduce Consumption of Resources
- Improve the quality of life
- Increase product’s life cycle
What is the Application of Sustainable/Green Technology in Life?
With the passing years, green technology has come within reach, giving more opportunities for households and individuals to protect the natural environment. Some of the benefits of sustainable technology in our daily life are:
1. Energy-Efficient Lighting
Smart light bulbs are a clear green-technology upgrade over the incandescent bulbs that were more common in the past. LED and smart bulbs consume less electricity and have changed the lighting game thanks to their longer lifespan, lower energy use, and safer materials. Smart LED bulbs boast roughly 80% improvements over incandescent bulbs, and they do not contain harmful chemicals such as mercury, eliminating the risk of inhaling toxic fumes.
2. Solar Power
Replacing your power source with a solar-powered system is another sustainable way to help the environment. Shifting towards solar energy reduces electricity bills as well as carbon emissions. Solar panels are a great alternative to traditional coal plants: electricity generation through solar systems reduces the carbon footprint and saves on massive bills. Just like solar panels, there are now solar batteries that can store extra solar energy to be used later.
3. Renewable Sources of Energy
Alternative energy sources other than solar energy are also available, known as clean energy sources. Clean energy includes biogas, wind, biomass, geothermal, and hydropower energy. These sources generate zero GHGs and do not deplete the natural resources of the earth. 
Renewable energy broadens sustainable living and supports large-scale employment compared to fossil fuels. Harnessing wind energy is a great example of green technology.
4. Composting
Among the easiest and greenest technologies is composting, which can be started almost anywhere. Recycling organic and food waste at home through composting improves nutrient recycling and soil health, mitigates drought impacts, and reduces GHGs. You can make your own worm bin at home, and many people are now moving towards household composting.
5. Electric Vehicles
Electric cars are more efficient than gasoline cars: you can readily charge them with electricity, with zero to minimal carbon emissions. Their electric motors convert energy without producing toxic fumes, which benefits a healthy environment.
6. Vertical Farming
Vertical farming is another eco-friendly technology that benefits human living. Growing plants vertically in stacked layers rather than horizontally increases sustainability. This type of farming reduces water use and does not require soil for plant growth. Vertical farming maintains consistent crop production year-round without harming Mother Nature; crops grown within a controlled climate remain unaffected by adverse weather. Environmentally friendly farming also reduces the need for fossil fuels and eliminates the need for pesticide and weed control in crop harvesting.
Benefits of Sustainable Living
Sustainable living reduces the demand for the earth's natural resources, replacing them with the best resources available to mankind. It benefits both human beings and the planet. A healthy diet is a sustainable diet, produced with less impact on the ecosystem and the soil. Taking a sustainable approach to your life makes you feel better and protects the planet's resources. Waste reduction is the first step towards sustainable living, benefiting both present and future generations.
Sustainability leads to less air pollution in the environment. Planting more trees cleans the air, so the WBM Foundation also works to plant more trees and plays its role in conserving Mother Nature for future generations. Being green is not easy, but today there is a real need to be sustainable and eco-friendly: using renewable sources of energy, energy-efficient appliances and lighting systems, and electric cars for transportation. Green technology is important in combating climate change, so the WBM Foundation addresses environmental causes by organizing activities and campaigns in institutions to work for the prosperity of humans and nature. Green technology was once limited to large-scale applications, but today these technologies help bring the environment back to breathable conditions. Thankfully, there is now enough awareness of environmental issues to increase the demand for eco-friendly technologies. Of course, not everyone can afford every option, but moving to simple and easy green choices lets you live safely on the planet while protecting Mother Nature.
Flooded soils occur with complete water saturation of soil pores, and generally result in anoxic conditions in the soil environment. Flooded soil environments may include such ecosystems as rice paddies; wetlands (swamps, marshes, and bogs); compacted soils; and post-rain soils (Scow, 2008). Additionally, similar redox conditions (where oxygen is lacking) can also be found within soil aggregates and along pollutant plumes, and thus many of the concepts discussed in this section may be applied to those environments. Oxygen is only sparingly soluble in water and diffuses much more slowly through water than through air (Schlesinger, 1997). What little oxygen is present in saturated soils in the form of dissolved O2 is quickly consumed through metabolic processes. Oxygen is used as a terminal electron acceptor via respiration by roots, soil microbes, and soil organisms (Sylvia, 2005), and is lost from the soil system in the form of carbon dioxide (CO2). Heterotrophic respiration may completely deplete oxygen in flooded soils; these effects may be observed within only a few millimeters of the soil surface (Schlesinger, 1997). Due to the deficiency of oxygen in flooded soils, the organisms inhabiting them must be able to survive with little to no oxygen. Although energy yields are much greater with oxygen than with any other terminal electron acceptor (see #Electron tower theory, section 2.1.1), under anoxic conditions anaerobic and facultative microbes can use alternative electron acceptors such as nitrate, ferric iron (Fe III), manganese (IV) oxide, sulfate, and carbon dioxide to produce energy and build biomass. Microbial transformations of elements in anaerobic soils play a large role in the biogeochemical cycling of nutrients and in greenhouse gas emissions. Changes in the oxidation state of terminal electron acceptors may result in nutrient loss from the system via volatilization or leaching.
Anaerobic microbial processes, including denitrification, methanogenesis, and methanotrophy, are responsible for releasing greenhouse gases (N2O, CH4, CO2) into the atmosphere (Schlesinger, 1997). In general, flooded soil conditions occur due to seasonal flooding or agricultural activity, and flooded soils can often be converted to a non-flooded condition by water level fluctuation and drainage. Through this variation in soil condition, various gases are emitted into the atmosphere, and environmental factors such as redox potential (Eh), pH, acidity, alkalinity, and salinity change continuously. As explained in the #introduction, microorganisms can use alternative terminal electron acceptors (nitrate, perchlorate, sulfate, carbon dioxide) when dissolved oxygen is absent. They use these electron acceptors successively, in the order of utilization predicted by the electron tower. This progression of electron acceptor utilization is observed in soil aggregates and pollutant plumes. In redox reactions, one molecule (the reducing agent) loses electrons and the other molecule (the oxidizing agent) accepts electrons. A classic example is cellular respiration, in which glucose (the reducing agent) reacts with oxygen (the oxidizing agent) and is oxidized to carbon dioxide, while oxygen is reduced to water. Oxygen is the most common electron acceptor, and some organisms cannot live long without it (6). In flooded soils, oxygen is typically not available. Facultative and strict anaerobic bacteria have the ability to use other oxidizing agents/electron acceptors to carry out respiration. Anaerobic and facultative bacteria will use the electron acceptor that yields the highest energy, or the acceptor that is most available. The availability and concentration of electron acceptors change with depth in the soil profile. Electron tower theory explains the order in which electron acceptors are used for respiration.
Depending on the type of electron acceptors used, microbes can be classified into strict aerobes, obligate anaerobes, and facultative anaerobes. Strict aerobes cannot live under anoxic conditions; conversely, obligate anaerobes can never use oxygen as an electron acceptor. Facultative anaerobes can live in both aerobic and anaerobic conditions. If oxygen is plentiful, they tend to use it, because microorganisms gain much more energy from reducing oxygen than from other electron acceptors. When no more oxygen is available in solution, they start to use nitrate as an electron acceptor. Thus, obligate and facultative anaerobes use alternative electron acceptors in order of decreasing energy yield. Oxygen is the most efficient electron acceptor, while carbon dioxide yields the least energy. Gleyed Soils and Recovery from Flooding. Soil Gleying: Gleying is a phenomenon in which waterlogged soils are discolored by the accumulation of Fe(II) due to the reduction of ferric iron into ferrous iron (Lovley 1991). Although ferric iron exists in an insoluble form in flooded soils, ferrous iron accumulates through the reduction of ferric iron over time. This results in a greenish, blue, or grey soil color. In general, Fe(III)-reducing fermentative bacteria can be readily isolated from gleyed soil. A black color of soils/solution is frequently observed in flooded soil; this may result from the formation of iron sulfides (FeS) and pyrite (FeS2) (Wenk and Bulakh 2004). Recovery to aerobic conditions: When waterlogged soils drain, the Eh starts to increase as oxygen enters. Plentiful oxygen represses the activity of anaerobes, so the population of aerobes increases. If oxygen diffuses into the deep soil, the production of H2S ceases. Under aerobic conditions, ferrous iron is oxidized by iron-oxidizing bacteria, and ferric oxide and ferric hydroxide minerals increase.
The gray color in soil is converted to a red, yellow, or brown color as these minerals are oxidized. In the high-Eh zone (> 500 mV), undecomposed soil organic matter is used as an electron donor by aerobes and converted to water and CO2 (Richardson and Vepraskas 2000). Variation of pH and Eh. Neutral pH soil: When soil is saturated with water, pH drops at first due to organic acids produced by fermentation. Then pH gradually starts to rise because H+ is consumed via the respiration of aerobes and anaerobes. The half reactions of hydrogen consumption are as follows:
Aerobic respiration: ½ O2 + 2e- + 2H+ -> H2O (by aerobes and facultative anaerobes)
Denitrification: 2NO3- + 12H+ + 10e- -> N2 + 6H2O (by denitrifiers)
Manganese reduction: MnO2 + 4H+ + 2e- -> Mn2+ + 2H2O (by manganese-reducing bacteria)
Iron reduction: Fe(OH)3 + 3H+ + e- -> Fe2+ + 3H2O (by iron-reducing bacteria)
Sulfate reduction: SO42- + 10H+ + 8e- -> H2S + 4H2O (by sulfate-reducing bacteria)
Methane production: CO2 + 8H+ + 8e- -> CH4 + 2H2O (by methanogens)
During the succession of anaerobic oxidation processes, the redox potential (Eh) of the flooded soil decreases as reduced products form. Approximate redox potential values that mark the start and end of specific reduction-oxidation processes are as follows:
| Process | Approximate Eh (mV) |
| Disappearance of oxygen | +330 |
| Disappearance of nitrate | +220 |
| Appearance of manganese ions | +200 |
| Appearance of ferrous ions | +120 |
| Disappearance of sulfate | -150 |
| Appearance of methane | -250 |
Solubility/mobility of minerals: Since the toxicity, solubility, mobility, and bioavailability of a given element or compound are mainly influenced by the redox potential and pH of the soil solution, flooded soil conditions play an important role in the mobility of trace metals, nutrients, and minerals. Plant nutrient availability: Flooded soils can prevent efficient gas exchange between the plant root and the soil. pH plays a main role in healthy plant growth.
In flooded soils, under anaerobic conditions the pH value will tend to rise. Denitrification of soil nitrate to nitrogen gas plays a major role in this rise in pH. Flooding results in poor soil aeration because the supply of oxygen to flooded soil is severely limited. Oxygen deficiency is likely the most important environmental factor triggering growth inhibition and injury in flooded plants. Microorganisms will begin to use available plant nutrients, such as sulfate, nitrate, and iron(III), as alternative electron acceptors. Experiments on soybean plants have shown the effects of flooded soils: flood duration effects were manifested in yellowing and abscission of leaves at the lower nodes, stunting, and reduced dry weight and seed yield. Canopy height and dry weight decreased linearly with flood duration at both growth stages, and growth rates were 25 to 35% lower when soybean was flooded (3). Key microbial processes and organisms involved. The role of microorganisms under flooded soil: In anaerobic respiration, oxygen is replaced by other compounds as terminal electron acceptors (TEAs). Some important terminal electron acceptors include iron, nitrate, sulfate, and manganese. These processes occur through microbial activity. Energy yields from these alternative electron acceptors are lower than from aerobic respiration, so in flooded anaerobic conditions microorganisms must use lower-energy-yielding compounds instead of oxygen. As available oxygen drops, organisms that thrive under anoxic conditions begin to grow using alternative electron acceptors. The order in which the available electron acceptors are consumed can be partially predicted by the electron tower. Changes in oxidation-reduction status over time reflect the activities of a succession of microorganisms able to use these alternative electron acceptors. Flooding alters the microbial flora in soil by decreasing the O2 concentration.
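The approximate Eh thresholds tabulated above suggest a simple way to estimate which reduction processes have begun by the time a flooded soil reaches a given redox potential. The sketch below is illustrative only; the function name and structure are mine, and the thresholds are the approximate values from the table:

```python
# Approximate Eh thresholds (mV), ordered from highest (oxygen) to
# lowest (methane appearance), following the electron tower.
EH_THRESHOLDS = [
    (330, "aerobic respiration"),
    (220, "denitrification"),
    (200, "manganese reduction"),
    (120, "iron reduction"),
    (-150, "sulfate reduction"),
    (-250, "methanogenesis"),
]

def active_processes(eh_mv):
    """Return the reduction processes expected to have begun once Eh (mV)
    has fallen to the given value, mirroring the successive use of
    terminal electron acceptors down the electron tower."""
    return [name for threshold, name in EH_THRESHOLDS if eh_mv <= threshold]

# Example: a moderately reduced flooded soil at +100 mV has progressed
# through oxygen, nitrate, manganese, and iron reduction, but sulfate
# reduction and methanogenesis have not yet started.
print(active_processes(100))
```

The thresholds are rough field indicators, not exact switch points; in real soils several of these processes overlap in microsites.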
Fermentation is one of the major biochemical processes responsible for organic matter decomposition in flooded soils. Eh levels can affect which compounds are fermented, and these levels tend to drop gradually in flooded soils under anoxic conditions. There are many types of fermentative bacteria in soils, such as members of the genera Bacillus, Clostridium, and Lactobacillus. Fermentation produces 4 ATP molecules per molecule of glucose, while aerobic respiration yields 38 ATP molecules. Although energy production by fermentation is less efficient than oxidative phosphorylation, fermentation plays an important role in the anaerobic respiration of obligate and facultative anaerobic bacteria, including denitrifiers; Fe3+, Mn4+, and SO42- reducers; and methanogens. Sugars (glucose or fructose) are broken down into simple compounds (e.g. formate, acetate, and ethanol) during fermentation. Numerous other fermentation products, such as carbon dioxide, fatty acids, lactate, and alcohols, are also released into soils. These compounds serve as substrates for other anaerobic bacteria; thus, the low-molecular-weight organic compounds produced by fermentation influence the reduction of Fe(III), Mn(IV), SO42-, and CO2 (Richardson and Vepraskas 2000). Organisms involved in flooded soil. Nitrate-reducing bacteria: When available oxygen is depleted, denitrification, the reduction of NO3- to NO, N2O, or N2, primarily occurs. Denitrification is carried out by obligate respiratory bacteria belonging to the genera Agrobacterium, Alcaligenes, Bacillus, Paracoccus, Pseudomonas, and Thiobacillus (Knowles, 1982). Nitrate ammonification is found in facultative anaerobic bacteria belonging to the genera Bacillus, Citrobacter, and Aeromonas, and in members of the Enterobacteriaceae (Cole and Brown, 1980; Smith and Zimmerman, 1981; MacFarlane and Herbert, 1982).
Strictly anaerobic bacteria belonging to the genus Clostridium are also able to reduce nitrate to ammonia (Hasan and Hall, 1975). Pure culture studies show evidence that nitrate reduction can occur even in the presence of oxygen (Kuenen and Robertson, 1987). Iron/Manganese-reducing bacteria: Many microorganisms can reduce Mn4+ and Fe3+. Ferric iron is used as an electron acceptor by iron-reducing bacteria such as Geobacter (Geobacter metallireducens and Geobacter sulfurreducens), Shewanella putrefaciens, Desulfovibrio, Pseudomonas, and Thiobacillus (Lovley 1993). Bacillus, Geobacter, and Pseudomonas are representative manganese-reducing bacteria. Different forms of ferric oxides exist in aerobic drained as well as in waterlogged soils, and not all of these ferric oxides are equally suitable for reduction by ferric oxide-reducing bacteria (Gotoh and Patrick, 1974; Schwertmann and Taylor, 1977). In general, amorphous forms are more readily reduced than crystalline forms (Lovley and Phillips, 1986). The reduction of ferric oxide may release phosphate and trace elements adsorbed to the amorphous ferric oxide and thus enhance the availability of these compounds in the soil (Lovley and Phillips, 1986). Sulfate-reducing bacteria: Bacteria can use acetate as an electron donor and sulfate as an electron acceptor. The reaction is as follows:
CH3COO- + SO42- + 3H+ ---> 2CO2 + H2S + 2H2O
This reaction is carried out by sulfate-reducing bacteria of the orders Desulfobacterales, Desulfovibrionales, and Syntrophobacterales (Langston and Bebiano 1998). The hydrogen sulfide gas produced via this anaerobic respiration causes a rotten egg odor. Methanogenesis yields less energy than the other reducing reactions because the reduction of carbon dioxide occurs under the most anaerobic and reduced conditions (see #Electron tower section). Thus, the activity of methanogens is repressed until the alternative terminal electron acceptors, such as Fe(III), NO3-, and SO42-, have been depleted.
Methanogens (e.g. Methanobacterium formicicum, Methanobacterium bryantii, and Methanobacterium thermoautotrophicum) can use CO2 and produce methane (Langston and Bebiano 1998). Greenhouse Gas Emissions from Flooded Soils: Flooded soils are dynamic ecosystems that play an important role in biogeochemical cycling and in the production of greenhouse gases. Methane (CH4) and nitrous oxide (N2O) are produced as byproducts of anaerobic metabolism in the low-redox zones characteristic of flooded soils, where oxygen is lacking. Carbon dioxide (CO2), which receives widespread attention as a greenhouse gas and potential source of global warming, may also be produced at the interface of the anaerobic and aerobic zones through the consumption of methane gas. It should be noted that, on a per-molecule basis, methane and nitrous oxide have the potential to contribute 25x and 300x more to global warming over the next century than carbon dioxide, respectively (Schlesinger, 1997). Thus the conversion of methane gas to carbon dioxide essentially reduces the greenhouse effect by 25x per molecule per 100 years. According to Matthews and Fung (1987), an estimated 3.6% of terrestrial land is classified as wetlands, and although this number continues to decline (Schlesinger, 1997), the effect of flooded soils on the global climate is clear. Methane production (methanogenesis): Methane production occurs exclusively under anaerobic conditions, carried out by a group of Archaea known as methanogens. These microbes are obligate anaerobes and require extremely low redox conditions, in the range of -100 mV (see #electron tower theory, section 2.1.1) (Sylvia, 2005). If oxygen is introduced into the system, methanogenesis ceases; thus the process of methanogenesis depends on saturated soil conditions. Methanogenesis can occur via one of two pathways: either by 1) CO2 reduction or by 2) acetate fermentation.
1) CO2 + 4H2 --> CH4 + 2H2O (CO2 reduction)
2) CH3COOH --> CH4 + CO2 (acetate fermentation)
Both acetate and hydrogen are byproducts of anaerobic fermentation. Because methanogenesis is "fed" byproducts of a complex series of degradation processes that are themselves "fed" complex organic matter, rates of methane production are highly sensitive to changes in temperature. Methanogenesis has a Q10 value in the range of 30-40, which is substantially higher than most biochemical processes (Sylvia, 2005). Despite the clear effect of increasing temperature on the rate of methanogenesis, the actual impact of global warming on methane production rates in wetlands and permafrost regions is highly unpredictable. Because methanogenesis requires anoxic conditions, any drying of flooded soil environments would both decrease methane production and increase methane oxidation, reducing overall methane emissions. Alternatively, warmer climates could lengthen growing seasons, which would increase methane emissions (Sylvia, 2005). CO2 production via methane consumption (methanotrophy): Some of the methane produced via methanogenesis in flooded soils may be consumed and oxidized to CO2 at the interface of the anaerobic and aerobic zones. This process is carried out primarily by a group of bacteria known as methanotrophs. These microbes can be found in surface layers of wetland soils and in unsaturated upland soils, and may be exposed to very high concentrations of methane gas, sometimes amounting to 10% or more of the dissolved gases. Methane is thought to be the only source of carbon and energy for these bacteria. Methanotrophy proceeds by the following reaction:
CH4 + 2O2 --> CO2 + 2H2O
Methane is similar in size and shape to ammonium, and there is some evidence that nitrifiers (ammonium oxidizers) can also oxidize methane (Sylvia, 1998). Because the molecules are similar, NH4+ competes at the enzyme's active site, inhibiting methane oxidation.
As a result, methanotrophy is generally inhibited by the addition of fertilizer or excess nitrogen in the system, when ammonium levels are high. Alternatively, if nitrogen is extremely limiting, the addition of nitrogen will stimulate methanotrophy and actually increase methane consumption. So although adding N-fertilizer is generally expected to decrease CH4 consumption and increase global warming potential, sometimes the opposite effect may occur (Sylvia, 2005). Nitrous oxide (denitrification): Denitrification is an anaerobic process in which nitrate serves as the terminal electron acceptor and generally some source of organic carbon is the electron donor (H2 may also serve as a donor). In this process, nitrate is reduced to nitrite, then nitric oxide, then nitrous oxide, and finally to dinitrogen:
NO3- --> NO2- --> NO --> N2O --> N2
However, under certain conditions the full reduction of NO3- to N2 does not occur, and nitrous oxide (N2O) is released. The microbes responsible include both organotrophs and lithotrophs, and the process is carried out primarily by facultative anaerobes. Although a low redox potential is important for denitrification to occur (oxygen must be absent or it will "out-compete" nitrate as a terminal electron acceptor), the redox requirements are not so low that this process cannot occur within anaerobic microsites of soil aggregates. Factors affecting nitrous oxide production include oxygen, pH, and the ratio of nitrate to available C. Although denitrification rates decrease with increasing oxygen, the proportion of N evolved as nitrous oxide actually increases with increasing oxygen. Low pH generally inhibits the reduction of N2O to N2; thus at low pH, N2O will likely dominate. However, highly acidic soils have low N availability and low nitrification and denitrification rates. Thus, the highest rate of nitrous oxide production from denitrification occurs in moist soils that cycle N rapidly (Sylvia, 2005). (1) Lecture 5 of Kate Scow. 2008.
Microbial Metabolism. Unpublished, University of California, Davis. (2) Schlesinger, W.H. 1997. Biogeochemistry: An Analysis of Global Change. 2nd ed. Elsevier Academic Press, Amsterdam. (6) Flood Duration Effects on Soybean Growth and Yield http://agron.scijournals.org/cgi/content/abstract/81/4/631 (7) Knowles 1982 (8) Cole and Brown 1980 (9) Smith and Zimmerman 1981 (10) MacFarlane and Herbert 1982 (11) Hasan and Hall 1975 (12) Kuenen and Robertson 1987 (14) Gotoh and Patrick 1974 (15) Schwertmann and Taylor 1977 (18) Lovley, D.R. and E.J.P. Phillips. 1986. Availability of Ferric Iron for Microbial Reduction in Bottom Sediments of the Freshwater Tidal Potomac River. Appl. Environ. Microbiol., pp. 751-757. (20) Langston, W.J., M.J. Bebianno, and G.R. Burt. 1998. "Metal handling strategies in molluscs." In: Langston, W.J. and M.J. Bebianno (eds.), Metal Metabolism in the Aquatic Environment. Chapman and Hall, London, United Kingdom, pp. 219-272. (21) Matthews, E. and I. Fung. 1987. Methane Emission from Natural Wetlands: Global Distribution, Area, and Environmental Characteristics of Sources. Global Biogeochemical Cycles 1: 61-86. Edited by students of Kate Scow
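The Q10 value cited above for methanogenesis (30-40, versus roughly 2 for typical biochemical processes) can be turned into a quick worked example using the standard Q10 relation. The specific temperatures and the midpoint Q10 below are illustrative values of my own choosing, not figures from the cited sources:

```python
def rate_ratio(q10, delta_t_celsius):
    """Relative rate change for a temperature change, given a Q10 value.

    Uses the standard relation: R2/R1 = Q10 ** (delta_T / 10).
    """
    return q10 ** (delta_t_celsius / 10.0)

# A 5 degree C warming roughly multiplies a typical enzymatic rate
# (Q10 ~ 2) by about 1.4x...
typical = rate_ratio(2, 5)

# ...while methanogenesis (Q10 ~ 35, midpoint of the 30-40 range)
# would speed up by roughly 6x for the same warming.
methanogenesis = rate_ratio(35, 5)

print(round(typical, 2), round(methanogenesis, 2))
```

This disparity is why methane emissions from wetlands are considered so sensitive to temperature, even though, as the text notes, drying effects may offset the rate increase.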
How Do Diesel Engines Work? There isn't much of a difference between gasoline and diesel engines. Both are internal combustion engines designed to change chemical energy into mechanical energy. In all combustion engines, energy is released to move pistons back and forth within cylinders; the moving pistons turn the crankshaft, and the wheels of the car begin to move. Small combustions power both diesel and gasoline engines. There is a difference, however, in how those combustions occur. Gasoline engines mix fuel with air, which is then compressed by pistons and ignited by spark plugs. Diesel engines compress the air first and then inject the fuel. Because air becomes hotter when it is compressed, the heat of compression itself ignites the fuel, so no spark is needed. Diesel Engines Today: Diesels in cars, as mentioned above, aren't very popular in the United States, while Europe manufactures several models of diesel-powered commuter vehicles. As diesel technology improves, it's becoming more and more popular in the United States. Collierville Christian Brothers: If you have a diesel engine in Collierville, take it into Christian Brothers for repairs, tune-ups, and maintenance. We have been thoroughly trained to repair diesel engines.
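The compression-ignition principle described above can be sketched with the ideal-gas adiabatic relation T2 = T1 * r^(gamma - 1). The numbers below (intake temperature, compression ratio, autoignition point) are typical illustrative values, not the specifications of any particular engine:

```python
# Adiabatic compression temperature estimate (ideal gas), illustrating
# why a diesel engine needs no spark plug. Assumes gamma for air ~1.4,
# intake air at 300 K, and a compression ratio of ~18:1 (typical diesel).
GAMMA = 1.4

def compressed_temp_kelvin(t_intake_k, compression_ratio):
    """T2 = T1 * r^(gamma - 1) for reversible adiabatic compression."""
    return t_intake_k * compression_ratio ** (GAMMA - 1)

t2 = compressed_temp_kelvin(300.0, 18.0)
print(f"{t2:.0f} K")
```

The result is on the order of 950 K, comfortably above diesel fuel's approximate autoignition temperature (~210 °C, or ~483 K), which is why injecting fuel into the hot compressed air is enough to start combustion.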
Aquatic algae (thought of by many as pond scum) are microscopic plants that grow in sunlit water containing phosphates, nitrates, and other nutrients. Algae, like all aquatic plants, add oxygen to the water and are important in the fish food chain. They share many characteristics with plants, although they lack true stems and roots and do not flower. Common algae that form in ponds include planktonic algae (green water algae) and filamentous algae (string algae). Algae are actually important and beneficial to a pond or water garden. They are part of the eco-system we want to establish in the pond because they help maintain good water quality. If the pond's filtration and circulation systems are properly designed, nutrients and toxins can be controlled, thereby controlling algae growth. The perceived algae problem begins when algae grow in abundance, but this condition is really a symptom, an indicator of excessive nutrients and/or toxins in the pond water. Excess nutrients are typically caused by feeding the fish too much, too often, or both. In addition, leaves, grass, or other organic material find their way into the pond, settle to the bottom, and begin decaying and releasing nutrients into the water. Excess toxins are typically generated directly by fish and decomposing matter. When fish breathe, they release ammonia into the water from their gills. If the pond is overstocked, either with too many fish or with fish that are too large, you may find the water looks like pea soup. In addition, decomposing organic material such as fish waste, leaves, sticks, and grass can generate toxins. As a general rule, if you are experiencing an algae problem, adding more of the 'right kind' of filtration will help you reduce and manage the amount of algae in your pond.
- Plants, such as lilies, shade the pond and reduce the amount of sunlight available for algae growth.
- Fish, especially koi, will eat a tremendous amount of algae.
- Rocks and gravel provide surface areas for bacteria to colonize in and between the rocks, which is like having an additional biological filter in the pond.
- Skimmers act as a mechanical filter by removing leaves and other debris from the surface of the pond before they can sink to the bottom, decompose, and turn into either nutrients or toxins.
- Biological filters provide an area for bacteria and enzymes to colonize; these consume nutrients and help break down organic debris and fish waste that would otherwise contribute to water quality problems.
Algae are a part of nature, just like the other parts of the eco-system. The main goal in keeping clean water is not to attempt to completely rid your pond of algae, but to keep it in balance with nature.
Item Link: Access the Resource Media Type: Article - Recent Date of Publication: July 12, 2017 Year of Publication: 2017 Publisher: IOP Publishing Author(s): Seth Wynes, Kimberly A. Nicholas Journal: Environmental Research Letters There's a gap between the individual lifestyle choices that the numbers indicate are most effective at reducing greenhouse gas emissions and the mitigation strategies mentioned in educational and government resources. Seth Wynes and Kimberly A. Nicholas quantitatively consider the potential for a range of individual lifestyle choices and find four high-impact actions: having one fewer child, living car-free, avoiding airplane travel, and eating a plant-based diet. However, when the authors reviewed educational and government resources on climate change mitigation, they found limited mention of these particular actions. ABSTRACT: Current anthropogenic climate change is the result of greenhouse gas accumulation in the atmosphere, which records the aggregation of billions of individual decisions. Here we consider a broad range of individual lifestyle choices and calculate their potential to reduce greenhouse gas emissions in developed countries, based on 148 scenarios from 39 sources. We recommend four widely applicable high-impact (i.e. low emissions) actions with the potential to contribute to systemic change and substantially reduce annual personal emissions: having one fewer child (an average for developed countries of 58.6 tonnes CO2-equivalent (tCO2e) emission reductions per year), living car-free (2.4 tCO2e saved per year), avoiding airplane travel (1.6 tCO2e saved per roundtrip transatlantic flight) and eating a plant-based diet (0.8 tCO2e saved per year). These actions have much greater potential to reduce emissions than commonly promoted strategies like comprehensive recycling (four times less effective than a plant-based diet) or changing household lightbulbs (eight times less).
Though adolescents poised to establish lifelong patterns are an important target group for promoting high-impact actions, we find that ten high school science textbooks from Canada largely fail to mention these actions (they account for 4% of their recommended actions), instead focusing on incremental changes with much smaller potential emissions reductions. Government resources on climate change from the EU, USA, Canada, and Australia also focus recommendations on lower-impact actions. We conclude that there are opportunities to improve existing educational and communication structures to promote the most effective emission-reduction strategies and close this mitigation gap. Read the full article. The views and opinions expressed through the MAHB Website are those of the contributing authors and do not necessarily reflect an official position of the MAHB. The MAHB aims to share a range of perspectives and welcomes the discussions that they prompt.
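The relative impacts quantified in the abstract can be compared side by side. The sketch below uses the annual figures reported by Wynes and Nicholas; the values for recycling and lightbulbs are derived by applying the "four times" and "eight times less effective" ratios stated above, and the dictionary labels are my own:

```python
# Emission savings (tCO2e/year) for individual actions, as reported by
# Wynes & Nicholas (2017). Note the flight figure is per avoided
# roundtrip transatlantic flight, not strictly per year.
actions = {
    "one fewer child": 58.6,
    "living car-free": 2.4,
    "one avoided transatlantic roundtrip flight": 1.6,
    "plant-based diet": 0.8,
    # Derived from the paper's stated ratios relative to a
    # plant-based diet:
    "comprehensive recycling": 0.8 / 4,
    "upgrading lightbulbs": 0.8 / 8,
}

# Rank actions by impact and express each as a multiple of the least
# effective option (lightbulb upgrades).
baseline = min(actions.values())
for name, saving in sorted(actions.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {saving:.1f} tCO2e ({saving / baseline:.0f}x baseline)")
```

The ordering makes the paper's point concrete: the single highest-impact choice saves hundreds of times more emissions than the lightbulb swaps most commonly recommended in the textbooks surveyed.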
4- to 5-Year-Olds: Developmental Milestones Your child is growing up. Have you noticed that your 4- to 5-year-old is becoming more independent and self-confident? If not, you will in the coming year. Most children this age begin to develop greater independence, self-control, and creativity. They are content to play with their toys for longer periods of time, are eager to try new things, and when they get frustrated, are better able to express their emotions. Although children grow and develop at their own pace, your child will likely achieve most of the following developmental milestones before he or she turns 6 years old. 4- to 5-Year-Old Development: Language and Cognitive Milestones Your curious and inquisitive child is better able to carry on a conversation. In addition, your child's vocabulary is growing -- as is his or her thought process. Not only is your child able to answer simple questions easily and logically, but he or she should be able to express feelings better. Most children at this age enjoy singing, rhyming, and making up words. They are energetic, silly, and, at times, rowdy and obnoxious. Other language and cognitive milestones your child may achieve in the coming year include being able to: - Speak clearly using more complex sentences - Count ten or more objects - Correctly name at least four colors and three shapes - Recognize some letters and possibly write his or her name - Better understand the concept of time and the order of daily activities, like breakfast in the morning, lunch in the afternoon, and dinner at night - Have a greater attention span - Follow two- to three-part commands. For example, "Put your book away, brush your teeth, and then get in bed." - Recognize familiar words, such as "STOP" - Know his or her address and phone number, if taught 4- to 5-Year-Old Development: Movement Milestones and Hand and Finger Skills Children learn through play, and that is what your 4- to 5-year-old should be doing. 
At this age, your child should be running, hopping, throwing and kicking balls, climbing, and swinging with ease. Other movement milestones and hand and finger skills your child may achieve in the coming year include being able to: - Stand on one foot for more than 9 seconds - Do a somersault and hop - Walk up and down stairs without help - Walk forward and backward easily - Pedal a tricycle - Copy a triangle, circle, square, and other shapes - Draw a person with a body - Stack 10 or more blocks - Use a fork and spoon - Dress and undress, brush teeth, and take care of other personal needs without much help
In these globalised times, people are interdependent as never before. Inspiring young people to think about their roles and responsibilities as global citizens has become very important. Oxfam Hong Kong facilitates Global Citizenship Education in Hong Kong, Macau and Taiwan, to help young people to observe carefully, think critically, reflect conscientiously and act responsibly about local and global poverty issues. Through participatory and empowering learning processes, learners are enabled to learn through action about the interdependence between different peoples or countries. Learners will gain skills to discern, in everyday life, the linkages between the world, one’s nation, one’s community and oneself, and also to think critically and to respond proactively to issues of global poverty and injustice. 1. To be aware that each person is a member of the world: To treasure the interdependence between peoples, countries and species, and be willing to explore and reflect, in everyday life, on the relationship between oneself, one’s community, one’s nation and the world, and to respond in a responsible way. 2. To respect oneself and others: To respect the dignity, rights and value of every person. 3. To build and to live with positive values: To embrace values such as justice, diversity, love, peace and sustainability, and to be willing to practice these values in everyday life. 4. To develop critical thinking: To learn to read the world critically and be able to understand where one’s ideas and those of others come from; to know their limitations, and to realise that no one person’s understanding of the world is comprehensive and that no one person is most correct and knows best. 5. To develop a sense of responsibility and a sense of mission: To be aware that every choice and every action has its consequences and that every person is able to bring positive or negative changes to the world through his/her action.
Video Solutions to help grade 6 students learn how to calculate the median of the given data and estimate the percent of values above and below the median value. New York State Common Core Math Grade 6, Module 6, Lesson 12 Lesson 12 Student Outcomes • Given a data set, students calculate the median of the data. • Students estimate the percent of values above and below the median value. Lesson 12 Summary In this lesson, you learned about a summary measure for a set of data called the median. To find a median you first have to order the data. The median is the midpoint of a set of ordered data; it separates the data into two parts with the same number of values below as above that point. For an even number of data values, you find the average of the two middle numbers; for an odd number of data values, you use the middle value. It is important to note that the median might not be a data value and that the median has nothing to do with a measure of distance. Medians are sometimes called a measure of the center of a frequency distribution but do not have to be the middle of the spread or range (maximum-minimum) of the data. Lesson 12 Classwork How do we summarize a data distribution? What provides us with a good description of the data? The following exercises help us to understand how a numerical summary answers these questions. Example 1: The Median – A Typical Number Suppose a chain restaurant (Restaurant A) advertises that a typical number of french fries in a large bag is 82. The graph shows the number of french fries in selected samples of large bags from Restaurant A. 1. You just bought a large bag of fries from the restaurant. Do you think you have 82 french fries? Why or why not? 2. How many bags were in the sample? 3. Which of the following statements would seem to be true given the data? Explain your reasoning. a.
Half of the bags had more than 82 fries in them. b. Half of the bags had fewer than 82 fries in them. c. More than half of the bags had more than 82 fries in them. d. More than half of the bags had fewer than 82 fries in them. e. If you got a random bag of fries, you could get as many as 93 fries. Example 2: The Median Sometimes it is useful to know what point separates a data distribution into two equal parts, where one part represents the larger “half” of the data values and the other part represents the smaller “half” of the data values. This point is called the median. When the data are arranged in order from smallest to largest, the same number of values will be above the median as are below the median. 4. Suppose you were trying to convince your family that you needed a new pair of tennis shoes. After checking with your friends, you argued that half of them had more than four pairs of tennis shoes, and you only had two pairs. Give another example of when you might want to know that a data value is a half-way point. Explain your thinking. 5. Use the information from the dot plot in Example 1. The median number of fries was 82. a. What percent of the bags have more fries than the median? Less than the median? b. Suppose the bag with 93 fries was miscounted and there were only 85 fries. Would the median change? Why or why not? c. Suppose the bag with 93 fries really only had 80 fries. Would the median change? Why or why not? Exercises 6–7: A Skewed Distribution 6. The owner of the chain decided to check the number of french fries at another restaurant in the chain. Here is the data for Restaurant B: 82, 83, 83, 79, 85, 82, 78, 76, 76, 75, 78, 74, 70, 60, 82, 82, 83, 83, 83. a. How many bags of fries were counted? b. Sallee claims the median is 75 as she sees that 75 is the middle number in the data set listed above. She thinks half of the bags had fewer than 75 fries. Do you think she would change her mind if the data were plotted in a dot plot? Why or why not?
c. Jake said the median was 83. What would you say to Jake? d. Betse argued that the median was halfway between 60 and 85, or 72.5. Do you think she is right? Why or why not? e. Chris thought the median was 72. Do you agree? Why or why not? 7. Calculate the mean and compare it to the median. What do you observe about the two values? If the mean and median are both measures of center, why do you think one of them is lower than the other? Exercises 8–10: Finding Medians from Frequency Tables 8. A third restaurant (Restaurant C) tallied a sample of bags of french fries and found the results below. a. How many bags of fries did they count? b. What is the median number of fries for the sample of bags from this restaurant? Describe how you found it. 9. Robere decided to divide the data into four parts. He found the median of the whole set. a. List the 13 values of the bottom half. Find the median of these 13 values. b. List the 13 values of the top half. Find the median of these 13 values. 10. Which of the three restaurants seems most likely to really have 82 fries in a typical bag? Explain your thinking. 1. The amount of precipitation in each of the western states in the United States is given in the table. a. How do the amounts vary across the states? b. Find the median. What does the median tell you about the amount of precipitation? c. Do you think the mean or median would be a better description of the typical amount of precipitation? Explain your thinking. 2. Identify the following as true or false. If a statement is false, give an example showing why. a. The median is always equal to one of the values in the data set. b. The median is halfway between the least and greatest values in the data set. c. At most, half of the values in a data set have values less than the median. d. In a data set with 25 different values, if you change the two smallest values in the data set to smaller values, the median will not be changed. e.
If you add 10 to every value in a data set, the median will not change. 3. Make up a data set such that the following is true: a. The data set has 11 different values, and the median is 5. b. The data set has 10 values, and the median is 25. c. The data set has 7 values, and the median is the same as the least value. 4. The dot plot shows the number of landline phones that a sample of people have in their homes. a. How many people were in the sample? b. Why do you think three people have no landline phones in their homes? c. Find the median number of phones for the people in the sample. 5. The salaries of the Los Angeles Lakers for the 2012–2013 basketball season are given below. The salaries in the table are ordered from largest to smallest. a. Just looking at the data, what do you notice about the salaries? b. Find the median salary, and explain what it tells you about the salaries. c. Find the median of the lower half of the salaries and the median of the upper half of the salaries. d. Find the width of each of the following intervals. What do you notice about the size of the interval widths, and what does that tell you about the salaries? i. Minimum salary to the median of the lower half: ii. Median of the lower half to the median of the whole data set: iii. Median of the whole data set to the median of the upper half: iv. Median of the upper half to the highest salary: 6. Use the salary table from above to answer the following. a. If you were to find the mean salary, how do you think it would compare to the median? Explain your reasoning. b. Which measure do you think would give a better picture of a typical salary for the Lakers, the mean or the median? Explain your thinking. Rotate to landscape screen format on a mobile phone or small tablet to use the Mathway widget, a free math problem solver that answers your questions with step-by-step explanations. You can use the free Mathway calculator and problem solver below to practice Algebra or other math topics. 
Try the given examples, or type in your own problem and check your answer with the step-by-step explanations. We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page.
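The procedure in the lesson summary (order the data, then take the middle value for an odd count, or the average of the two middle values for an even count) can be sketched in Python. The data below are the Restaurant B counts from Exercise 6, and the standard-library `statistics.median` follows exactly this rule:

```python
import statistics

# Restaurant B fry counts from Exercise 6 (19 bags, an odd count).
fries_b = [82, 83, 83, 79, 85, 82, 78, 76, 76, 75, 78, 74, 70, 60,
           82, 82, 83, 83, 83]

# statistics.median sorts internally; with 19 values it returns the
# 10th value of the ordered list.
med = statistics.median(fries_b)

# Estimate the percent of values strictly above and strictly below the median.
pct_above = 100 * sum(1 for x in fries_b if x > med) / len(fries_b)
pct_below = 100 * sum(1 for x in fries_b if x < med) / len(fries_b)
```

Note that the two percentages need not be 50/50 when several values tie with the median, which is exactly the trap behind Sallee's and Jake's claims in Exercise 6.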
Posted on May 25, 2020, 4 p.m. Imagine, if you will, being able to send signals to an immune cell within the human body to generate antibodies that could fight against a virus, bacteria, cancer or other pathogens. While this may seem like it comes out of science fiction horror movies, this fictional possibility has taken a step closer to becoming scientific reality with the development of bio-compatible transistors about the size of a tiny virus. Professor Charles Lieber and colleagues have used nanowires to create a transistor so small it can be used to enter and probe cells without disrupting the intracellular machinery; these nanoscale semiconductor switches could even enable two-way communication with individual cells. Lieber has been working for over a decade on the development, design and synthesis of nanoscale parts that have enabled the creation of these tiny electronic devices. Creating this biological interface, a nanoscale device capable of communicating with a living organism, has been a tricky project. The problem was inserting a transistor constructed on a flat plane into a 3D object such as a cell perhaps 10 microns in size; piercing the cell wasn't enough, because such a transistor needs a source wire from which electrons flow and a drain wire through which they are discharged. The key was to figure out how to introduce two 120-degree bends into a linear wire to create a hairpin configuration with the transistor near the tip. The nanowire probes were integrated with a pair of bimetallic layered interconnects, strips of two different metals that expand at different rates when the temperature changes, as in thermostats, to lift the transistor up and out of the flat plane in which it was created. Inserting the tiny device into a cell was not easy, as pressing hard enough to disrupt the cell membrane killed the cell fairly quickly.
When the hairpin nanowire device was coated with a fatty lipid layer, it was easily pulled into the cell via membrane fusion, a process related to the one cells use to engulf pathogens. Lieber explains this innovation is important because it indicates that when a man-made structure is as small as a virus, it can behave the way biological structures do. Preliminary testing of the tiny nanodevice indicates that it could be used to measure activity within neurons, heart cells and muscle fibers, among others, and it could also measure two distinct signals within a single cell simultaneously, or even the workings of intracellular organelles. Because a transistor allows for the application of a voltage pulse, this tiny device may even one day provide a base for hybrid biological-digital computation, or deep brain stimulation for those with conditions such as Parkinson's disease, or it may serve as an interface for a prosthetic that requires information processing at the point where it attaches to the user. As with any innovation such as this, the uses for such a device are limited only by imagination, for better or worse, whatever the case may be. Safety regulations will undoubtedly need to be established before it is used in the future. “Digital electronics are so powerful that they dominate our daily lives. When scaled down, the differences between digital and living systems blur, so that you have an opportunity to do things that sound like science fiction -- things that people have only dreamed about,” says Lieber.
To fully understand the basics of robotics, CR102 Robotics and Mechanics II introduces students to more robotics ideas. Students will learn to use the infrared sensor, ultrasonic sensor, photoresistors, and motors. They will also learn advanced robotics theories, including obstacle detection and avoidance. In doing so, students will use advanced programming logic such as functions. They will complete projects designed to challenge their ability to create interesting new things, as well as their problem solving skills – which in turn helps to build their logical thinking, independence, and confidence. During this course, students will: - Have a more in-depth understanding of everything they learned in CR101 - Learn how to view and use the serial monitor - Learn how to use ultrasonic and light sensors to improve their rovers
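The obstacle-detection idea the course builds on, measuring distance with an ultrasonic sensor, boils down to one calculation: the sensor reports how long its sound pulse took to echo back, and the distance is half that round trip at the speed of sound. A minimal sketch (the function names and the 20 cm threshold are illustrative assumptions, not taken from the course materials):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # speed of sound in air at ~20 °C

def echo_to_distance_cm(echo_duration_us: float) -> float:
    """Convert an echo round-trip time (microseconds) to distance in cm."""
    # The pulse travels to the obstacle and back, so halve the round trip.
    return echo_duration_us * SPEED_OF_SOUND_CM_PER_US / 2

def obstacle_ahead(echo_duration_us: float, threshold_cm: float = 20.0) -> bool:
    """Simple avoidance rule: anything closer than the threshold is an obstacle."""
    return echo_to_distance_cm(echo_duration_us) < threshold_cm
```

Wrapping the conversion and the decision in separate functions mirrors the course's emphasis on functions: the rover's main loop only needs to call `obstacle_ahead` and steer away when it returns true.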
Galaxy clusters have been fascinating astronomers for decades. Often consisting of thousands of galaxies, the clusters are the largest known structures held together by gravitational forces. At their centers, astronomers have found some of the biggest and most powerful black holes ever discovered, and high-energy jets of extremely hot particles emanating from these black holes were found to be preventing the formation of stars — which, of course, raised a galactic mystery: where are all the stars coming from? But now, thanks to data collected by NASA’s Chandra X-ray Observatory and the Hubble Space Telescope, a team of scientists has found that a galaxy cluster called the Phoenix Cluster, some 5.8 billion light years from Earth, is birthing stars at a “furious rate.” Its black hole seemed to be far weaker than other clusters’ black holes, with trillions of Suns’ masses worth of hot gas cooling around it, allowing the formation of a vast number of stars. Usually, black holes keep those gases from cooling — thereby stopping the formation of stars — by continuously spewing out high-energy jets of particles. The research could help us understand the life cycle of galaxy clusters and how the supermassive black holes at their centers interfere with — and sometimes, seemingly, aid — the formation of stars within them. A paper of the results was published in The Astrophysical Journal last month. “Imagine running an air-conditioner in your house on a hot day, but then starting a wood fire. Your living room can’t properly cool down until you put out the fire,” co-author Brian McNamara from the University of Waterloo, Canada, said in a statement. “Similarly, when a black hole’s heating ability is turned off in a galaxy cluster, the gas can then cool.” In fact, they found that the hot gas was cooling at the same rate as when a black hole stops injecting energy.
And that means a huge number of stars are allowed to be born in regions where the hot gas has cooled sufficiently — in fact, the Phoenix Cluster is forming new stars at 500 times the rate of the Milky Way galaxy, according to X-ray observations made by the Chandra Observatory. This effect won’t go on forever, though. “These results show that the black hole has temporarily been assisting in the formation of stars, but when it strengthens, its effects will start to mimic those of black holes in other clusters, stifling more star birth,” co-author Mark Voit from Michigan State University said in the statement.
Monarch butterflies (Danaus plexippus) are insects that are fascinating not only because of their attractive appearances, but also because of their far-flung annual migration habits -- think journeys of thousands of miles. Male and female monarch butterflies are both intensely colorful creatures, although they can often easily be distinguished by mere quick glances. Monarch Butterfly Background Monarch butterflies are prevalent all throughout North America. Many of them travel to more pleasant weather conditions during the coldest times of the year, whether to Mexico, California or even South America. Some monarch butterflies in milder geographical locations -- like Texas and Florida -- never migrate. Color-wise, monarch butterflies are reddish-orange. This dazzling coloration functions as a handy "back off" sign to predators, as these butterflies are actually poisonous. Monarch butterflies are herbivorous, and their food intake consists of nectar. As youngsters, they eat milkweed, which is the reason why they're poisonous -- their bodies are full of "stashed away" cardenolides, which are plant compounds. The primary difference between male and female monarch butterflies involves conspicuous blots on both of the males' hindwings, specifically the interior parts. These blots, which are situated over the veins, are noticeably absent in individuals of the fairer sex. These blots are composed of scales, and are actually scent glands. If you observe the sides of monarch butterflies' stomachs, you also might notice another key difference between the sexes. The sides of the males' stomachs branch out, unlike those of the females. The wings of male and female monarch butterflies also aren't exactly the same. The veins on the females' wings are notably broader than the ones that adorn the males' wings. Male monarch butterflies are also just a tad larger than the females.
Monarch butterflies in general are practically light as air any way you slice it -- they typically weigh no more than a slight 0.026 of an ounce. If you ever observe the wooing activities of monarch butterflies prior to mating, you might notice that the males attempt to attract the females as they are flying. They start the whole process out by prodding against the females. They then seize them and begin mating with them on terra firma. The breeding season for monarch butterflies takes place in the spring each year.
*Note: THIS IS A STUDY GUIDE!! I did not write it to be considered as an essay, rather to assist any confused Bio students, help out with homework, or any other purpose that people see fit. Please do not critique my format, it's not an essay! Enzymes are proteins that act as catalysts in organisms. A catalyst is a substance that speeds up reactions without the addition of heat; it decreases the activation energy needed for the reaction to take place by bringing molecules into contact with each other, no longer relying on chance collisions. Most enzyme names end in 'ase'. The active site of the enzyme is the part that comes in contact with the substrate, which is the molecule acted on by the enzyme. There are 2 models that show how they work: -Lock and Key Model- the active site only allows certain molecules to fit in; the molecule separates and leaves, and the enzyme is unchanged and can be reused. -Induced Fit Model- the active site is not rigid; it changes as the substrate enters. Properties of Enzymes- A single enzyme molecule can catalyze thousands of substrate reactions per second; each enzyme only works on one type of reaction; enzymes are not changed or used up in a reaction; the enzyme does not determine the direction of the reaction; lower temperature = rate of reaction will be reduced; higher = more effective; too high = denaturation (the enzyme begins to break down and changes shape, ceasing to function). Optimum temperature = 25-40 C. An enzyme works best at a certain pH (7). Rate of reaction depends on concentration of enzyme and substrate, temperature, and pH. However, at high substrate concentrations, the rate will not increase further upon adding more substrate, which is known as saturability. Coenzymes- organic cofactors, some vitamins. Irreversible Inhibitors- bond to the active site and permanently cripple the enzyme. Competitive Inhibitors- substances that...
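The saturability described above, where adding more substrate eventually stops increasing the rate, is conventionally modeled by the Michaelis–Menten equation, v = Vmax·[S] / (Km + [S]). The guide doesn't name this equation, and the Vmax and Km values below are purely illustrative:

```python
def reaction_rate(substrate: float, v_max: float = 100.0, k_m: float = 5.0) -> float:
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
    return v_max * substrate / (k_m + substrate)

# At [S] = Km the rate is exactly half of Vmax; at [S] >> Km it plateaus
# just below Vmax -- the saturation behavior the guide describes.
half_max = reaction_rate(5.0)    # [S] equal to Km
near_max = reaction_rate(500.0)  # [S] far above Km
```

No matter how large the substrate concentration gets, the rate never exceeds Vmax, which is why adding more substrate past saturation has no effect.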
Billions of years ago, in the center of a galaxy cluster far, far away (15 billion light-years, to be exact), a black hole spewed out jets of plasma. As the plasma rushed out of the black hole, it pushed away material, creating two large cavities 180 degrees from each other. In the same way you can calculate the energy of an asteroid impact by the size of its crater, Michael Calzadilla, a graduate student at the MIT Kavli Institute for Astrophysics and Space Research (MKI), used the size of these cavities to figure out the power of the black hole’s outburst. In a recent paper in The Astrophysical Journal Letters, Calzadilla and his coauthors describe the outburst in galaxy cluster SPT-CLJ0528-5300, or SPT-0528 for short. Combining the volume and pressure of the displaced gas with the age of the two cavities, they were able to calculate the total energy of the outburst. At greater than 10^54 joules of energy, equivalent to about 10^38 nuclear bombs, this is the most powerful outburst reported in a distant galaxy cluster. Coauthors of the paper include MKI research scientist Matthew Bayliss and assistant professor of physics Michael McDonald. The universe is dotted with galaxy clusters, collections of hundreds and even thousands of galaxies that are permeated with hot gas and dark matter. At the center of each cluster is a black hole, which goes through periods of feeding, where it gobbles up plasma from the cluster, followed by periods of explosive outburst, where it shoots out jets of plasma once it has reached its fill. “This is an extreme case of the outburst phase,” says Calzadilla of their observation of SPT-0528. Even though the outburst happened billions of years ago, before our solar system had even formed, it took around 6.7 billion years for light from the galaxy cluster to travel all the way to Chandra, NASA’s X-ray emissions observatory that orbits Earth.
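The crater analogy can be made concrete: a standard way to estimate an outburst's energy from an X-ray cavity is the cavity enthalpy E ≈ 4pV (for a cavity filled with relativistic gas), divided by the cavity's age to get the average power. The sketch below uses that relation with illustrative numbers; the pressure, cavity radius, and age are assumptions chosen only for scale, not values from the paper:

```python
import math

# Illustrative values only (not from the paper): a typical intracluster
# pressure, a ~100 kpc cavity radius, and a ~100-million-year cavity age.
PRESSURE_PA = 6e-11
CAVITY_RADIUS_M = 3.086e21  # 100 kiloparsecs in metres
AGE_S = 3.156e15            # ~100 million years in seconds

# Treat the cavity as a sphere and apply the 4pV enthalpy relation.
volume = (4 / 3) * math.pi * CAVITY_RADIUS_M ** 3
energy = 4 * PRESSURE_PA * volume  # joules
power = energy / AGE_S             # watts, averaged over the cavity age
```

With these round numbers the energy already lands above 10^54 joules, the scale reported for SPT-0528, which is why even rough cavity measurements constrain the outburst so strongly.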
Because galaxy clusters are full of gas, early theories about them predicted that as the gas cooled, the clusters would see high rates of star formation, which need cool gas to form. However, these clusters are not as cool as predicted and, as such, weren’t producing new stars at the expected rate. Something was preventing the gas from fully cooling. The culprits were supermassive black holes, whose outbursts of plasma keep the gas in galaxy clusters too warm for rapid star formation. The recorded outburst in SPT-0528 has another peculiarity that sets it apart from other black hole outbursts. It’s unnecessarily large. Astronomers think of the process of gas cooling and hot gas release from black holes as an equilibrium that keeps the temperature in the galaxy cluster — which hovers around 18 million degrees Fahrenheit — stable. “It’s like a thermostat,” says McDonald. The outburst in SPT-0528, however, is not at equilibrium. According to Calzadilla, if you look at how much power is released as gas cools onto the black hole versus how much power is contained in the outburst, the outburst is vastly overdoing it. In McDonald’s analogy, the outburst in SPT-0528 is a faulty thermostat. “It’s as if you cooled the air by 2 degrees, and the thermostat’s response was to heat the room by 100 degrees,” McDonald explains. Earlier in 2019, McDonald and colleagues released a paper looking at a different galaxy cluster, one that displays a completely opposite behavior to that of SPT-0528. Instead of an unnecessarily violent outburst, the black hole in this cluster, dubbed Phoenix, isn’t able to keep the gas from cooling. Unlike all the other known galaxy clusters, Phoenix is full of young star nurseries, which sets it apart from the majority of galaxy clusters. “With these two galaxy clusters, we’re really looking at the boundaries of what is possible at the two extremes,” McDonald says of SPT-0528 and Phoenix.
He and Calzadilla will also characterize more normal galaxy clusters, in order to understand the evolution of galaxy clusters over cosmic time. To explore this, Calzadilla is characterizing 100 galaxy clusters. The reason for characterizing such a large collection of galaxy clusters is that each telescope image captures the clusters at a specific moment in time, whereas their behaviors unfold over cosmic time. These clusters cover a range of distances and ages, allowing Calzadilla to investigate how the properties of clusters change over cosmic time. “These are timescales that are much bigger than a human timescale or what we can observe,” explains Calzadilla. The research is similar to that of a paleontologist trying to reconstruct the evolution of an animal from a sparse fossil record. But, instead of bones, Calzadilla is studying galaxy clusters, ranging from SPT-0528 with its violent plasma outburst on one end to Phoenix with its rapid cooling on the other. “You’re looking at different snapshots in time,” says Calzadilla. “If you build big enough samples of each of those snapshots, you can get a sense of how a galaxy cluster evolves.”
Cassini observed a giant ice cloud on Titan’s south pole as winter is setting in on Saturn’s moon. NASA released the images captured by the Cassini spacecraft just recently. The giant ice cloud, formed of frozen compounds, is looming above Titan’s south pole, occupying part of the low and mid stratosphere of the moon. In 2012, Cassini revealed another giant cloud looming over the same location, at 186 miles altitude. Now, the giant ice cloud is hovering at 124 miles altitude, much lower than the previously imaged cloud. The Composite Infrared Spectrometer (CIRS), the infrared camera on NASA’s Cassini spacecraft, was used to image the giant ice cloud at thermal wavelengths that are invisible otherwise. Cassini is the first spacecraft to have the capacity to image the seasonal transition on Titan. One season on Titan lasts approximately seven and a half years as we measure them on Earth. When Cassini’s mission comes to an end in 2017, Titan’s south pole will still be in winter. Carrie Anderson, a scientist with NASA’s Goddard Space Flight Center (Maryland), stated that thanks to CIRS and the data collected, the giant ice cloud became extraordinarily visible. It was unexpected, she stated. Nonetheless, it is a great opportunity to understand the processes shaping the seasonal transition on Saturn’s moon. The findings were presented on November 11th during the Meeting of the Division of Planetary Sciences of the American Astronomical Society. Thanks to the data collected by Cassini, researchers are able to understand how ice clouds form on Titan’s south pole. The process bears some similarity to how rain clouds form above Earth. As water evaporates from the surface of the planet, it meets cooler temperatures as it reaches the troposphere. When the altitude is high enough for condensation to occur, rain clouds are formed. The methane clouds hovering in the troposphere of Titan form in the same manner.
The polar clouds form above the troposphere due to circulation that takes gases from the warm pole of Saturn’s moon to the opposite pole. Warm air thus sinks, and the gases forming the air condense at the right cool temperatures found at different altitudes. Thus, several blankets of clouds form a giant ice cloud above the pole. The data collected by Cassini also enabled scientists to determine the temperature that would enable the formation of the giant ice cloud over Titan’s south pole. At this point, it should reach -238 degrees Fahrenheit.
The image is built up by grooves and areas which lie below the surface of a metal plate, usually copper or zinc. To take a print, ink is pushed into the grooves, and the surface of the plate wiped clean. The plate is put onto a press bed with dampened paper on top, then run through the press under pressure, drawing the ink out of the grooves and onto the paper. Intaglio prints are often characterised by an embossed line around the image, which is made by the edges of the plate. An intaglio print taken from a metal plate into which the lines forming the image are cut with a wedge shaped tool called a burin. A print taken from a plate into which the image has been bitten with acid. The plate is covered with a wax or resin ground, which is scratched away to reveal areas of metal. Acid bites into these exposed areas leaving a surface that holds ink. A process where the plate is etched through a porous ground of powdered and melted resin, so as to produce a texture when printed. The surface of the plate is worked by rocking a serrated tool, which forces the metal to sit on the surface of the plate. When inked the surface prints a rich black. In carborundum printmaking, the areas in the plate which are to print black are covered with a mixture of carborundum, an industrially produced substance, and a binding agent. When dry that area retains ink just as in any other intaglio process. Carborundum printing gives a rich velvety surface. The plate is covered with glue, and drawn into with any implement. When dry, it is inked, wiped and printed.
Child development would be a fascinating process to observe on fast-forward, but for most parents it is a laborious process fraught with many fears and concerns. The stages of childhood development are fairly concrete, but the milestones of child development that occur in each stage are loosely defined as each child develops at his or her own natural pace. The stages of childhood development are infancy, early childhood, middle childhood, and adolescence. Infancy is the stage of child development that actually begins in the prenatal stage as the baby is developing in utero. A baby can be monitored for normal growth and development even before being delivered into the world. Once the child is born, he or she is considered an infant until the age of one year. Many physical developments happen during this time, some of which include developing teeth, learning mobility, strengthening of muscles, development of eyesight, and the beginnings of communication. Development through this stage should be monitored by a physician through routine well-child visits. Early childhood is perhaps the most important of all the stages of childhood development. This is the time between approximately one year and five years of age. During this time, children use more of their brain capacity than at any other time in life. Mental stimulation is important and can be achieved through both free and structured play. During this stage, communication and language development, motor skill development and behavior responses occur, as well as milestones such as learning to walk, talk, self-feed, and toilet train. Middle childhood is the period of time between five years and ten years of age. Children during this stage attend school, develop more advanced social skills and improve on their learning skills. Self-control, coordination and advanced levels of self-care occur during this stage.
This is a period in development when cognitive abilities will be assessed, and often the stage during which any learning disabilities are likely to manifest. Finally, perhaps the most difficult of all the stages of childhood development is adolescence. This period is the longest stage of child development, spanning ages 11 to 21. During this time, puberty occurs and, as a consequence, all the behavior concerns that come with raging hormones and an overwhelming desire for independence. This can certainly be a difficult stage of development for parents, but it can be equally difficult for the child. Children in this stage experience both physical and emotional upheaval. Growing pains, hormonal imbalances, and awkward physical changes that accompany puberty are compounded by social complexities and often strained relationships with parents and authority figures. Regardless of where a child is in the stages of childhood development, attentive parents and caregivers can ensure that development is occurring normally. Any concerns about normal development, whether physical, mental, or emotional, can be collectively addressed by parents, medical professionals, and educators who are professionally trained in child development. The sooner a concern is addressed, the easier it will be for both the parent and the child.
Each day has its own significance. The importance of each day is signified by the many happenings that took place on that particular day. What is the importance of June 12 in Indian history? We have listed some important events that happened in Indian history on June 12. Read on to know more about this day: - 1761: Balaji Baji Rao (Nanasaheb), also known as Nana Saheb Peshwa, died after losing the battle of Panipat. He had also contributed to the development of Pune. In the Battle of Panipat, he was held responsible for the defeat of the Marathas - 1952: The J&K Assembly decided to terminate hereditary monarchy - 1972: Dinanath Gopal Tendulkar, a documentary filmmaker and writer, died this day. He was famously known as the author of the eight-volume biography of Mahatma Gandhi, Mahatma: Life of Mohandas Karamchand Gandhi - 1975: Indira Gandhi's election to the Lok Sabha was declared void by the High Court of Allahabad on the basis of electoral malpractice - 1990: The Indian National Satellite (INSAT-1D) was launched. This was an operational multi-purpose communication and meteorology satellite - 1996: H. D. Deve Gowda, 12th Prime Minister of India, won the vote of confidence in the Lok Sabha and ruled as the head of the United Front coalition
Giardia (say gee-ar-dee-ah) is the name of a microscopic parasite that can live in the human bowel. The sickness that this parasite causes is called giardiasis (say gee-ar-dye-a-sis). Some symptoms of giardiasis are diarrhea, belching, gas and cramps. Although these problems are very unpleasant, the illness isn't usually dangerous. Giardiasis is easy to catch if you drink untreated water. Many animals carry giardia in their feces and may introduce this parasite into rivers, streams and springs in rural areas. Infected stream water may look clean and safe when it really isn't. City water may also be infected if sewer lines flood or leak. If you travel overseas, you may get giardiasis by drinking water (even tap water) that hasn't been boiled or treated. Some people who get giardiasis don't become ill, but they may spread the parasite to other people. Giardiasis may be spread in day care centers if workers aren't careful to wash their hands each time after changing diapers. Your doctor can usually diagnose giardiasis by looking at stool samples under a microscope, although several samples may have to be checked before the diagnosis can be made. Sometimes other tests may be necessary. Giardiasis is usually treated with a medicine called metronidazole. It's usually taken 3 times a day for 5 to 10 days. Side effects may include a metallic taste in the mouth or nausea. If you take metronidazole, you should not drink any alcohol. This medicine shouldn't be taken in the early stages of pregnancy. Children younger than 5 years of age may be treated with furazolidone. This medicine has fewer side effects and comes in a liquid form, but it shouldn't be given to babies younger than 1 month of age. It's usually best if a whole family is treated at the same time, because giardiasis is so easily spread. In most cases, your doctor will want to check a stool sample after the treatment to be sure the medicine worked. 
Sometimes you may need to take medicine for a longer time, or your doctor may want you to take another medicine for a complete cure. If you are traveling or camping, be very careful about the water you drink. If someone in your family gets giardiasis, it's likely that this problem will spread to everyone in your home--especially to the children. When camping, take bottled water or boil water before you use it. Wash your hands carefully with soap and water several times a day. When traveling, don't brush your teeth or wash dishes with water that hasn't been boiled. Peel raw fruits and vegetables before you eat them, and don't eat undercooked food. Written by familydoctor.org editorial staff
What are volatiles? Volatiles are chemical species that under normal conditions of pressure and temperature exist in the gaseous phase. The main volatiles associated with volcanoes are water (H2O), in the form of steam, and carbon dioxide (CO2) gas. Volatiles from the Mantle There are three possible reservoirs that can contribute volatiles to subduction zone volcanoes: 1) the subducting slab, 2) the mantle wedge, and 3) the crust. The subducting slab has a veneer of marine sediments on its surface, which are taken into the mantle as part of the subduction process. The sediments, in particular, tend to be very rich in water and carbonate. Thermal breakdown of the carbonate as the subducting plate enters the mantle produces CO2. Another source of volatiles is the mantle trapped between the downgoing plate and the over-riding plate or crust upon which the volcano is built. This region of the mantle is called the mantle wedge. Finally, the arc crust itself can release volatiles through thermal contact with ascending magmas from the mantle. Distinguishing between these possible sources of volatiles in subduction zone volcanoes is a primary objective of our study in Costa Rica.
The large false serotine is known to occur in protected areas, such as the Sapagaya Forest Reserve and the Tabin Wildlife Reserve in Sabah, Malaysia (1). Its only record in Thailand is also from a protected area, the Kangkachan Nature Reserve (6). Logging will continue to be a problem for the large false serotine, particularly the logging of dipterocarp forest. Suggested conservation measures include the sustainable management of forests, which would help to conserve the large false serotine and other threatened forest species. Sustainable logging will also benefit humans, as it ensures timber production for the future while preserving biodiversity (9). Very little is known about the distribution and biology of the large false serotine, and, as with other species, more research is needed in order to better understand how to conserve it. This would lead to the best possible “Species Action Plan” for its conservation (11). In the past, the most successful bat conservation projects have worked with a long-term view of the species and its habitat. The support of local people and the government is also crucial in any conservation programme (10).
Clouds with a chance of warming Researchers from Argonne's Environmental Science division participated in one of the largest collaborative atmospheric measurement campaigns in Antarctica in recent decades. On May 13, 1887, the journal Science published a brief history of Antarctic exploration in which it outlined scientific achievements thus far and expressed a hope that new exploration would soon be undertaken. The article makes apparent that, by the late 19th century, scientists already understood the influence of the region's geography on meteorology and the regulation of ocean currents. "… the meteorological phenomena of the southern hemisphere depend on those of the Antarctic region, and our knowledge of the meteorology of the earth will be incomplete until such phenomena of the south polar region are thoroughly studied." While the hope for further exploration of Antarctica has come to fruition, such exploration has come in fits and starts, due in part to the huge investment of time and money required to transport, install and maintain delicate instrumentation and a small host of scientists. The primary reason, perhaps, is what atmospheric research engineer Maria Cadeddu delicately refers to as the region's "prohibitive conditions." It's a tough place. In 2015, Cadeddu and colleagues from the U.S. Department of Energy's (DOE) Argonne National Laboratory participated in a collaborative atmospheric measurement campaign to understand the impact of regional and large-scale events on Antarctic warming. The team comprised a number of academic institutions and national laboratories, including Argonne, Los Alamos and Brookhaven. The research focused on the micro- and macro-physical properties of Antarctic clouds, such as the average size of droplets or the total amount of liquid or ice contained in a cloud. The goal was to determine the amount of radiation the clouds will transmit based on such parameters.
Based at McMurdo Station and on the West Antarctic Ice Sheet (WAIS), the campaign was part of the DOE Atmospheric Radiation Measurement (ARM) West Antarctic Radiation Experiment (AWARE), led by Principal Investigator Dan Lubin from the Scripps Institution of Oceanography. The one-year study deployed the largest assemblage of instrumentation for ground-based Antarctic atmospheric measurements since 1957, and details from that study are emerging in a number of scientific journals, including Nature Communications and the Journal of Geophysical Research: Atmospheres. "The whole idea was to try to figure out how atmospheric dynamics, like air masses that come from the sea, for example, can affect cloud properties and how changes in cloud properties affect the energy balance of the region," said Cadeddu, who works in Argonne's Environmental Science division. "And understanding how clouds affect a system can help with future climate projections." Antarctica is an important region for climate models, she noted, but models rely on data, the more accurate the better. To date, Antarctic climate models have been less than accurate because science lacks quantitative observations of the region; the observations that are available come from satellites, which have issues in very high and low latitudes. But given the time and the instrumentation provided by AWARE, researchers have begun to fill in many missing pieces in Antarctica's overall climate puzzle. At home, where temperatures are less frigid, Cadeddu is part of the Argonne cloud and radiation research team that includes Virendra Ghate, a radar meteorologist, and Donna Holdridge, the ARM mentor for the radio-sounding systems. 
The cloud and radiation research team contributed their expertise in remote-sensing equipment, including LiDAR (light detection and ranging) and radar devices, short-wave spectrometers and microwave radiometers for measuring radiation, and radiosondes (balloon-borne instrument packages that measure upper atmospheric conditions). Because remote sensors transmit raw data, researchers must process and interpret the information to obtain direct measurements or physical quantities. For example, signals sent from LiDAR and radar devices return as scattered signals that correlate to cloud-altering mechanisms like radiation. "These sensors use knowledge of how radiation propagates through a medium, as well as how cloud and rain drops interact with radiation. When we examine these signals, we can estimate specific cloud properties, such as particle sizes or the amount of vapor, liquid water or ice they contain," explained Cadeddu. Cloud phase is relevant to radiative properties, that is, how much radiation the clouds transmit, absorb or scatter. Argonne researchers used this information, in part, to understand differences between cloud conditions in the Arctic and Antarctic and their effect on regional climate. Among the major differences, Antarctica exhibits much less anthropogenic pollution than the Arctic. While this offers more pristine conditions for studying clouds, the lower pollution levels also affect the amount of liquid water present in clouds at very low temperatures. Models typically convert all the liquid to ice when clouds reach temperatures near -20 degrees C, but the team found that the liquid layer persists at temperatures as low as -35 degrees C in the clouds above McMurdo. Even small amounts of liquid can have a warming effect on the surface of the Arctic, so the team is trying to determine what climate-related effects these liquid-saturated clouds might have in the south.
The AWARE campaign made headlines in 2016, when scientists conducting measurements along the West Antarctic Ice Sheet Divide captured one of the largest surface melt events on record. Traditionally, surface melt events at the ice sheet are attributed to warm ocean water beneath coastal ice shelves, but extensive observations showed external factors at work as well. Scientists attribute some of the melting to a strong El Niño event combined with regional conditions, some of which relate back to liquid-bearing clouds. "Clouds exert an important influence on the balance of incoming and outgoing energy at the surface, and these low-level optically thin clouds can have a determinant role in either causing or prolonging melting conditions over ice sheets," said Cadeddu. Cloud characteristics may not have been part of the larger consideration of "meteorological phenomena of the southern hemisphere" when the Science article appeared in 1887. Whatever the factors, the author made clear that 19th century science was looking at a larger, more forward-thinking picture that left room for the potential role of clouds when they wrote the following: " … The important bearing of these problems on practical questions cannot be overrated. The seaman cannot dispense with the knowledge of the currents, winds, and magnetic elements, and there is hardly a class of people who will not be benefited by the progress of meteorology." Research papers used for this article include "Antarctic cloud macrophysical, thermodynamic phase, and atmospheric inversion coupling properties at McMurdo Station. Part I: Principal data processing and climatology" and "Cloud optical properties over West Antarctica from shortwave spectroradiometer measurements during AWARE," in the Journal of Geophysical Research: Atmospheres, May 22, 2018, and September 3, 2018, respectively; and "January 2016 extensive summer melt in West Antarctica favoured by strong El Niño," in Nature Communications, June 15, 2017.
A. Wilson et al., "Cloud Optical Properties Over West Antarctica From Shortwave Spectroradiometer Measurements During AWARE," Journal of Geophysical Research: Atmospheres (2018). DOI: 10.1029/2018JD028347. Julien P. Nicolas et al., "January 2016 extensive summer melt in West Antarctica favoured by strong El Niño," Nature Communications (2017). DOI: 10.1038/ncomms15799.
Those who live near wooded areas grow familiar with the woodpecker's distinctive sound. Many birds remain recognisable because of unique calls and melodious songs. The woodpecker announces its presence by less delicate means. The loud hammering of its bill makes this bird one of nature's favourite drummers. When drawing the woodpecker, pay special attention to the crown and bill of this bird. Strong diagonal lines on the head of this bird will suggest the signature strength of its beak.
- Skill level: Moderately Easy
Things you need
- Straight edge
1. Using a straight edge, draw a long, diagonal line. Orient this line so that it points toward the top left corner of your page.
2. On top of the diagonal line, draw a large egg shape. Draw this shape vertically so that the narrow tip points downward. Make this shape cover about two-thirds of the diagonal line. This forms the body of the woodpecker.
3. Draw a circle on top of the egg shape. Draw this circle so that it is evenly bisected by the diagonal line. Draw a small triangle on the right side of the circle. Join these two shapes by tracing around the arms of the triangle and extending these lines over the circle. This forms the head of the woodpecker.
4. Draw a small circle in the centre of the woodpecker's head. Shade this circle darkly, using the tip of your pencil. This forms the eye of the woodpecker.
5. Extend a narrow triangle from the right side of the woodpecker's head. This forms the beak. Draw a short, triangular shape on the top of the woodpecker's head. Make the top of this triangle curve slightly to form the crest.
6. Extend two long, vertical oval shapes over the back and bottom of the woodpecker's body. Elongate the bottoms of these ovals into pointed tips to form the back and tail feathers.
7. Draw a short, thick line extending from the right side of the woodpecker's body to form the leg. Draw a rough triangular shape at the bottom of this line to form the feet.
Tips and warnings - Study images of woodpeckers to prepare for your artwork.
Phenotype is an organism's observable properties that are the result of the interaction of the organism's genotype and the environment. Observable properties include things such as eye color, height and hair color. Phenotype is derived from the Greek words phainein, which means to show, and typos, meaning type. Different environments result in different influences of inherited traits and can lead to a different expression of similar genotypes. For example, height is affected by how much food is available to an organism. Because of environmental changes and differences, the phenotype of a person can change throughout his or her life. Some possible variations of characteristics are never even exhibited in the phenotype because the genes are recessive or inhibited.
In linear algebra, real numbers or other elements of a field are called scalars and relate to vectors in a vector space through the operation of scalar multiplication, in which a vector can be multiplied by a number to produce another vector. More generally, a vector space may be defined over any field instead of the real numbers, such as the complex numbers. The scalars of that vector space are then the elements of the associated field. A scalar product operation – not to be confused with scalar multiplication – may be defined on a vector space, allowing two vectors to be multiplied to produce a scalar. A vector space equipped with a scalar product is called an inner product space. The real component of a quaternion is also called its scalar part. The term is also sometimes used informally to mean a vector, matrix, tensor, or other usually "compound" value that is actually reduced to a single component. Thus, for example, the product of a 1×n matrix and an n×1 matrix, which is formally a 1×1 matrix, is often said to be a scalar. The word scalar derives from the Latin word scalaris, an adjectival form of scala (Latin for "ladder"). The English word "scale" also comes from scala. The first recorded usage of the word "scalar" in mathematics occurs in François Viète's Analytic Art (In artem analyticem isagoge) (1591): - Magnitudes that ascend or descend proportionally in keeping with their nature from one kind to another may be called scalar terms. - (Latin: Magnitudines quae ex genere ad genus sua vi proportionaliter adscendunt vel descendunt, vocentur Scalares.) W. R. Hamilton later applied the term to the real part of a quaternion: - The algebraically real part may receive, according to the question in which it occurs, all values contained on the one scale of progression of numbers from negative to positive infinity; we shall call it therefore the scalar part.
Definitions and properties Scalars of vector spaces A vector space is defined as a set of vectors, a set of scalars, and a scalar multiplication operation that takes a scalar k and a vector v to another vector kv. For example, in a coordinate space, the scalar multiplication k(v1, v2, ..., vn) yields (kv1, kv2, ..., kvn). In a (linear) function space, kƒ is the function x ↦ k(ƒ(x)). Scalars as vector components According to a fundamental theorem of linear algebra, every vector space has a basis. It follows that every vector space over a scalar field K is isomorphic to a coordinate vector space where the coordinates are elements of K. For example, every real vector space of dimension n is isomorphic to n-dimensional real space Rn. Scalars in normed vector spaces Alternatively, a vector space V can be equipped with a norm function that assigns to every vector v in V a scalar ||v||. By definition, multiplying v by a scalar k also multiplies its norm by |k|. If ||v|| is interpreted as the length of v, this operation can be described as scaling the length of v by k. A vector space equipped with a norm is called a normed vector space (or normed linear space). The norm is usually defined to be an element of V's scalar field K, which restricts the latter to fields that support the notion of sign. Moreover, if V has dimension 2 or more, K must be closed under square root, as well as the four arithmetic operations; thus the rational numbers Q are excluded, but the surd field is acceptable. For this reason, not every scalar product space is a normed vector space. Scalars in modules When the requirement that the set of scalars form a field is relaxed so that it need only form a ring (so that, for example, the division of scalars need not be defined, or the scalars need not be commutative), the resulting more general algebraic structure is called a module. In this case the "scalars" may be complicated objects.
For instance, if R is a ring, the vectors of the product space Rn can be made into a module with the n×n matrices with entries from R as the scalars. Another example comes from manifold theory, where the space of sections of the tangent bundle forms a module over the algebra of real functions on the manifold. Scalar operations (computer science) Operations that apply to a single value at a time. - Mathwords.com – Scalar - Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.). Addison–Wesley. ISBN 0-321-28713-4. - Strang, Gilbert (2006). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 0-03-010567-6. - Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.). Springer. ISBN 0-387-98258-2. - Vieta, Franciscus (1591). In artem analyticem isagoge seorsim excussa ab Opere restitutae mathematicae analyseos, seu Algebra noua [Guide to the analytic art [...] or new algebra] (in Latin). Tours: apud Iametium Mettayer typographum regium. Retrieved 2015-06-24. - Lincoln Collins. Biography Paper: François Viète. http://math.ucdenver.edu/~wcherowi/courses/m4010/s08/lcviete.pdf - Hazewinkel, Michiel, ed. (2001), "Scalar", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4 - Weisstein, Eric W. "Scalar". MathWorld.
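The vector-space operations described above can be made concrete with a minimal sketch in plain Python, using lists as coordinate vectors; the function names (scalar_multiply, dot, norm) are illustrative choices for this sketch, not from any particular library.

```python
import math

def scalar_multiply(k, v):
    """Scalar multiplication: k(v1, ..., vn) = (k*v1, ..., k*vn)."""
    return [k * vi for vi in v]

def dot(u, v):
    """Scalar (inner) product: maps two vectors to a single scalar."""
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(v):
    """Euclidean norm: the scalar ||v|| assigned to the vector v."""
    return math.sqrt(dot(v, v))

v = [3.0, 4.0]
w = scalar_multiply(-2.0, v)   # a new vector, [-6.0, -8.0]

# Multiplying v by a scalar k multiplies its norm by |k|:
assert math.isclose(norm(w), abs(-2.0) * norm(v))

# The scalar product of two vectors is a single number, not a vector:
assert dot(v, v) == 25.0
```

The assertions check the two properties stated in the text: the norm scales by |k| under scalar multiplication, and the scalar product returns an element of the scalar field rather than another vector.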
Great for warm, sunny locations in the vegetable garden, pepper plants (Capsicum annuum) are short-lived tropical perennials that are usually grown as annual (one growing season) plants. They are killed by frost, but where frosts do not occur, the pepper will continue to grow for two or even three years, continuing to flower and produce fruits filled with seeds. Generally, the seeds that drop to the ground sprout and readily overgrow and replace the older plants, perpetuating the pepper plant. Pepper plant seeds readily germinate in moist soil that is warm (at least 70 degrees Fahrenheit). Colder, drier soils retard or prevent sprouting altogether. Seedlings that are exposed to cold, drought or overly wet conditions can succumb to fungal rot or wilt. As long as temperatures remain above 70 degrees, soil is fertile and moist, and sunlight is abundant, a young pepper plant will grow unabated. With longer days in summer, increased ambient humidity and temperatures above 75 degrees increase the rate of stem elongation and the unfurling of new leaves. According to research published in the Oxford journal Annals of Botany, these conditions promote many leaves that are small compared to leaves grown in the cooler, shorter days of spring. Deep green foliage converts energy from the sun into carbohydrates to strengthen the plant and prepare it for flowering. At the stem branch tips, small singular white or yellow flowers that are star- or bell-shaped appear in summer when humidity and heat are high. Sometimes the blossoms appear in clusters of two or three at the base of leaf petiole stems. Insects facilitate pollination, which allows for formation of the pepper fruit. Flowers are continually produced as the stems grow, although flowering wanes when temperatures drop below 70 degrees during the day. If nighttime temperatures drop below 55 degrees, flowers drop from the plant.
Development and Maturation Pollinated flowers' ovaries expand and ripen the seeds inside. The fruits at first are green and small and, depending on the type of pepper, gradually enlarge and become more fleshy and juicy. They often become rounded or elongated and any shade of red, orange, yellow, purple or green when they mature. A ripened fruit lingers on the mother plant until the tissue in the stem holding the fruit collapses, dropping the fruit to the ground. Eventually the fruit itself rots and exposes its many seeds to the soil. As with flowering, the development of fruit continues as long as growing conditions are favorable. In autumn or winter, the shorter days and cooler temperatures slow the growth of pepper plants. Cool nighttime temperatures, dry air or dry soil can cause flowers, fruits and leaves to stop growing. A frost will kill the pepper plant, and subfreezing temperatures will kill even the underground roots. Even if the plant survives for a few years in a frost-free environment, it will slowly decline after fruits mature and seeds ripen. These plants are genetically predisposed to grow and yield seeds before dying. Plucking off flowers and fruits will prolong the plant's life, as it continues to grow merely to make seeds and ensure that the species endures in a subsequent generation.
General Relativity fights off competing theory Einstein famously said that no amount of experimentation would prove him right. But two new studies have backed up his General Theory of Relativity, and one significantly undermines a rival theory. Both teams took advantage of observations from the Chandra X-ray Observatory of galaxy clusters, the largest objects in the universe bound together by gravity. The first finding significantly weakens a competitor to General Relativity known as 'f(R) gravity'. "If General Relativity were the heavyweight boxing champion, this other theory was hoping to be the upstart contender," said Fabian Schmidt of the California Institute of Technology in Pasadena, who led the study. "Our work shows that the chances of its upsetting the champ are very slim." In recent years, some physicists have suggested competing theories to General Relativity to explain the accelerated expansion of the universe. Currently, the most popular explanation is the so-called cosmological constant, which can be understood as energy that exists in empty space. This energy is referred to as dark energy as it can't be directly detected. In the f(R) theory, the cosmic acceleration comes not from an exotic form of energy but from a modification of the gravitational force. This also affects the rate at which massive clusters of galaxies form, opening up the possibility of a sensitive test of the theory. Schmidt and colleagues used mass estimates of 49 galaxy clusters and compared them with theoretical model predictions and studies of supernovas, the cosmic microwave background, and the large-scale distribution of galaxies. They found no evidence that gravity is different from General Relativity on scales larger than 130 million light years. A second study also bolsters General Relativity by directly testing it across cosmological distances and times. 
Until now, General Relativity had been verified only up to solar system scales, leaving the possibility that it breaks down on much larger scales. To check this, a group at Stanford University compared Chandra observations of how rapidly galaxy clusters have grown over time to the predictions of General Relativity. The result is nearly complete agreement between observation and theory. "Einstein's theory succeeds again, this time in calculating how many massive clusters have formed under gravity's pull over the last five billion years," said David Rapetti of Stanford University and SLAC National Accelerator Laboratory, who led the study. "Excitingly and reassuringly, our results are the most robust consistency test of General Relativity yet carried out on cosmological scales."
Monitoring Volcano Seismicity Moving Magma and Volcanic Fluids Trigger Earthquakes Earthquake activity beneath a volcano almost always increases before an eruption because magma and volcanic gas must first force their way up through shallow underground fractures and passageways. When magma and volcanic gases or fluids move, they will either cause rocks to break or cracks to vibrate. When rocks break, high-frequency earthquakes are triggered; when cracks vibrate, either low-frequency earthquakes or a continuous shaking called volcanic tremor is triggered. Most volcano-related earthquakes are smaller than magnitude 2 or 3 and occur less than 10 km beneath a volcano. The earthquakes tend to occur in swarms consisting of dozens to hundreds of events. During such periods of heightened earthquake activity, scientists work around the clock to detect subtle and significant variations in the type and intensity of seismic activity and to determine whether an eruption is occurring, especially when a volcano cannot be directly observed. Networks of Seismometers are Needed to Monitor Volcanoes A seismometer is an instrument that measures ground vibrations caused by a variety of processes, primarily earthquakes. To keep track of a volcano's changing earthquake activity, we typically must install between 4 and 8 seismometers within about 20 km of a volcano's vent, with several located on the volcano itself. Seismic networks are made up of several instruments; having enough of the right instruments located in strategic places is especially important for detecting earthquakes smaller than magnitude 1 or 2. Sometimes these tiny earthquakes represent the only indication that a volcano is becoming restless, and if the nearest seismometer is located more than 50 km away, they can go undetected.
Advances in Volcano Seismology Lead to Better Eruption Warnings Dramatic improvements in computer technology and increased scientific experience with volcano seismicity from around the world have improved our ability to provide eruption warnings and to characterize eruptions in progress. New technologies have enabled us to locate earthquakes beneath a volcano faster and with greater accuracy than we could in the past. We can determine in real time the changing character of a volcano's earthquake activity. And we can better "map" subsurface structures such as fault zones and magma reservoirs. More About Volcano Seismicity - Earthquakes and other ground vibrations "write" unique seismic signatures - Current earthquake activity beneath selected volcanoes - Magnitude 7.2 earthquake on November 29, 1975, on the Island of Hawai`i triggers a tsunami, remarkable ground movement, and an eruption in the summit caldera of Kilauea Volcano.
fish and other marine creatures. The impacts of aquaculture on the environment are an increasingly important issue as aquaculture operations expand globally. These impacts largely depend on the intensity of production, the species farmed, and the farm location¹.

Negative impacts of aquaculture on the environment

In South-East Asia, where finfish and shellfish are heavily produced and poorly managed, the environmental impacts are fairly heavy. Finfish production here is usually quite intensive and involves adding solids and nutrients to the marine environment to help fish grow. This process is generally recognised as potentially degrading to the environment, as such a rapid, unnatural build-up of organic material can negatively affect the localised flora and fauna. In some cases this can cause major changes to the sediment chemistry and affect the overlying water column¹. The effect of farmed fish on local wild fisheries is also a real environmental concern in South-East Asia and elsewhere. Outbreaks of disease on farms can spread quickly because of the high concentrations in which fish are kept, and can easily spread into wild fish populations if uncontrolled. Aquaculturalists used to tackle these outbreaks with antibiotics in fish feed until concern mounted over the effect of the drugs on local aquatic ecosystems as well as on consumers. Vaccinations are now readily available for farmed fish, however, and the practice of using drugs to tackle disease is seldom used in Western aquaculture². Additional impacts related to aquaculture may also occur as a result of other farm discharges and waste products. These can include discharges from shore-based stun and bleed operations, the escape of non-resident species, transmission of disease, and (lack of) control of predatory species.
Where species such as shellfish compete with other organisms, such as native seagrass, for survival, displacement can occur, which has a potentially spiralling effect on the native wildlife¹,².

Positive impacts of aquaculture on the environment

Despite this negative outlook, there are some fairly positive environmental impacts to be recognised from aquaculture. These can be found in (artificially or naturally) nutrient-enriched areas, where the farming of filter feeders such as shellfish improves water quality. Farmed fish are also generally free of environmental contaminants such as mercury and heavy metals, as they exclusively eat human-processed feed whose toxin levels are regulated¹,².

Room for improvement

Achieving the sustainable use of aquacultural techniques and aquatic ecosystems has been the predominant objective of fisheries managers for decades; however, it has arguably failed due to lack of governance. Natural variability and climate change have also had significant implications for the productivity and management of aquaculture, and catastrophic natural events continue to have significant impacts on resources, infrastructure and people³. Understanding, predicting and accounting for this is going to be a significant challenge for the next decade³. Aquaculture can be made more sustainable by producing the fishmeal and fish oil used in feeds from seafood waste; this type of recycling in the feed industry has been on the rise in recent years². Regulatory bodies have also recognised the problems with algal blooms and implemented measures to prevent them, such as the use of cages in locations with strong currents so that effluent is washed away².
Septicemia is the presence of bacteria in the blood (bacteremia) and is often associated with severe disease.

Blood poisoning; Bacteremia with sepsis

Causes, incidence, and risk factors:

Septicemia is a serious, life-threatening infection that gets worse very quickly. It can arise from infections throughout the body, including infections in the lungs, abdomen, and urinary tract. It may come before or at the same time as infections of the bone (osteomyelitis), central nervous system (meningitis), or other tissues. Septicemia can begin with spiking fevers, chills, rapid breathing, and rapid heart rate. The person looks very ill. The symptoms rapidly progress to shock with decreased body temperature (hypothermia), falling blood pressure, confusion or other changes in mental status, and blood clotting problems that lead to a specific type of red spots on the skin (petechiae and ecchymosis). There may be decreased or no urine output. Septicemia is a serious condition that requires a hospital stay. You may be admitted to an intensive care unit (ICU). Fluids and medicines are given by IV to maintain the blood pressure. Oxygen will be given. Antibiotics are used to treat the infection. Plasma or other blood products may be given to correct any clotting abnormalities. Septic shock has a high death rate, exceeding 50%, depending on the type of organism involved. The organism involved and how quickly the patient is hospitalized will determine the outcome.

Calling your health care provider:

Septicemia is not common but is devastating. Early recognition may prevent progression to shock. Seek immediate care if:
- A person has a fever, shaking chills, and looks acutely ill
- There are signs of bleeding into the skin
- Any person who has been ill has changes in mental status

Call your health care provider if your child is not current on vaccinations. Appropriate treatment of localized infections can prevent septicemia.
The Haemophilus influenzae type b (Hib) vaccine has already reduced the number of cases of Haemophilus septicemia and is a routine part of the recommended childhood immunization schedule. Children who have had their spleen removed or who have diseases that damage the spleen (such as sickle cell anemia) should receive pneumococcal vaccine. Pneumococcal vaccine is not part of the routine childhood immunization schedule. Persons who are in close contact with someone with septicemia may be prescribed preventive antibiotics.
Laboratories use a variety of methodologies to test the countless analytes that are of interest to the medical community. Understanding the method used for a test provides a broader context for understanding your test results. Below are explanations of several common laboratory methods mentioned on this site. Laboratory methods are based on established scientific principles involving biology, chemistry, and physics, and encompass all aspects of the clinical laboratory from testing the amount of cholesterol in your blood to analyzing your DNA to growing microscopic organisms that may be causing an infection. Such methods are much like the recipes in a cookbook, defining the procedures or processes that are used to test biological samples for particular analytes or substances. The laboratory scientist follows step-by-step procedures until the end product, a test result, is achieved. Some methods, like some recipes, are much more complicated and labor-intensive than others and require varying degrees of expertise. Often, there may be more than one method that can be used to test for the same substance. Consequently, the same analyte may be tested differently in different laboratories, a fact that is crucial when comparing test results. The descriptions of the methods listed below attempt to give some insight into the scientific principles used and the steps that are required to produce a result. Explanations of the methods – and their differences – are provided to give you a better understanding of some of the tests that you may undergo. These items are not intended to be a comprehensive list of available methodologies, but do represent some of those that are mentioned on this web site. Immunoglobulins are proteins produced by the immune system to recognize, bind to, and neutralize foreign substances in the body. 
Immunoassays are tests based on the very specific binding that occurs between an immunoglobulin (called an antibody) and the substance that it specifically recognizes (the foreign molecule, called an antigen). Immunoassays can be used to test for the presence of a specific antibody or a specific antigen in blood or other fluids. When immunoassays are used to test for the presence of an antibody in a blood or fluid sample, the test contains the specific antigen as part of the detection system. If the antibody being tested for is present in the sample, it will react with or bind to the antigen in the test system and will be detected as positive. If there is no significant reaction, the sample tests negative. Examples of immunoassay tests for antibodies include Rheumatoid Factor (which tests for the presence of autoimmune antibodies seen in patients with rheumatoid arthritis), West Nile Virus (which tests for antibodies that a person made in response to an infection with that virus) or antibodies made in response to a vaccination (such as tests for antibodies to Hepatitis B to assure that the vaccination was successful). When immunoassays are used to test for the presence of antigens in a blood or fluid sample, the test contains antibodies to the antigen of interest. The reaction of the antigen that is present in the person's sample to the specific antibody is compared with reactions of known concentrations and the amount of antigen is reported. Examples of immunoassay tests for antigens include drug levels (like digoxin, vancomycin), hormone levels (like insulin, TSH, estrogen), and cancer markers (like PSA, CA-125, and AFP). This testing method is a type of immunoassay. It is based on the principle that antibodies will bind to very specific antigens to form antigen-antibody complexes, and enzyme-linked antigens or antibodies can be used to detect and measure these complexes. 
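When an immunoassay reports a quantity, the sample's signal is compared against calibrators of known concentration, as described above. The sketch below shows that comparison using simple linear interpolation; the function name, calibrator concentrations, and signal values are all invented for illustration, and real assays typically fit non-linear calibration curves (such as a four-parameter logistic) rather than interpolating linearly.

```python
# Hypothetical immunoassay calibration. The concentrations, signal
# values, and units below are invented for illustration only.

def concentration_from_signal(signal, calibrators):
    """Linearly interpolate a concentration from (concentration, signal)
    calibrator pairs, which must be sorted by increasing signal."""
    for (c_lo, s_lo), (c_hi, s_hi) in zip(calibrators, calibrators[1:]):
        if s_lo <= signal <= s_hi:
            fraction = (signal - s_lo) / (s_hi - s_lo)
            return c_lo + fraction * (c_hi - c_lo)
    raise ValueError("signal is outside the calibrated range")

# Invented calibrators: (concentration in ng/mL, measured signal)
calibrators = [(0, 0.05), (10, 0.40), (50, 1.60), (100, 2.90)]

print(concentration_from_signal(1.00, calibrators))  # between 10 and 50 ng/mL
```

A sample whose signal falls outside the calibrated range cannot be reported this way; in practice such samples are diluted and re-run, which is one reason the same analyte can behave differently across laboratories and methods.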
To detect or measure an antibody in a person's blood, a known antigen is attached to a solid surface. A solution containing the patient sample is added. If the patient's sample contains antibody, it will bind to the antigen. A second antibody (against human antibodies) that is labeled with an enzyme is then added. If the enzyme-linked antibody binds to human antibodies, the enzyme will create a detectable change that indicates the presence and amount of the antibody in the patient sample. Gerostamoulos, J. et al. (2001). The Use of ELISA (Enzyme-Linked Immunosorbent Assay) Screening in Postmortem Blood. TIAFT, The International Association of Forensic Toxicologists [On-line information]. Available online at http://www.tiaft.org/tiaft2001/lectures/l13_gerostamoulos.doc through http://www.tiaft.org. Clarke, W. and Dufour, D. R., Editors (2006). Contemporary Practice in Clinical Chemistry, AACC Press, Washington, DC. Harris, N. and Winter, W. This is an immunoassay test method that detects specific proteins in blood or tissue. It combines an electrophoresis step with a step that transfers (blots) the separated proteins onto a membrane. Western blot is often used as a follow-up test to confirm the presence of an antibody and to help diagnose a condition. An example of its use includes Lyme disease testing. To perform a western blot test, a sample containing the protein is applied to a spot along one end of a layer of gel. Multiple samples and a control may be placed side by side along one end of the gel in separate "lanes." An electrical current causes the proteins in the sample(s) to move across the gel, separating the proteins by size and shape and forming bands that resemble the steps of a ladder. These sample and control ladders are then "blotted" (transferred) onto a thin membrane that is put in contact with the gel. Labeled or tagged antibodies are then used in a one- or two-step process to detect the proteins bound to the membrane.
For example, to confirm HIV or Lyme antibody tests, the proteins separated are those of the causative organism. A patient's sample is then added to the blot, and any antibodies to the organism are bound and later detected by labeled antibodies to human immunoglobulins. The presence of certain proteins is interpreted by comparison with known negative or positive control samples in the other lanes. Khalsa, G. Western blotting. Arizona State University, School of Life Sciences, Mama Ji's Molecular Kitchen [On-line information]. Available online at http://lifesciences.asu.edu/resources/mamajis/western/western.html through http://lifesciences.asu.edu. Tietz Textbook of Clinical Chemistry and Molecular Diagnostics. Burtis CA, Ashwood ER, Bruns DE, eds. 4th edition. St. Louis: Elsevier Saunders; 2006. This molecular testing method uses fluorescent probes to evaluate genes and/or DNA sequences on chromosomes. Humans normally have 23 pairs of chromosomes: 22 pairs of non-sex-determining chromosomes (autosomes) and 1 pair of sex chromosomes (XX for females and XY for males). Chromosomes are made up of DNA, repeating sequences of four bases that form the thousands of genes that direct protein production in the body and determine our physical characteristics. DNA consists of two strands bound together in a double helix structure (like a spiral staircase). Each half of the helix is a complement of the other. For a FISH test, a sample of a person's cells containing DNA is fixed to a glass slide. Samples can include blood, bone marrow, amniotic fluid, or tumor cells, depending on the clinical indication. The slides with the "target" (person's) DNA are heated to separate the double strands of DNA into single strands. Fluorescent probes are then added to the sample. Fluorescent probes are sections of single-stranded DNA that are complementary to the specific portions of DNA of interest. The probe, which is labeled with a fluorescent dye, attaches to the specific piece of DNA.
When the slides are examined using a special microscope, the genes that match the probe can be seen as areas of fluorescence, which will appear as bright spots on a dark background. This technique can be used to show the presence of extra gene copies (duplicated or amplified genes), and genetic sequences that are missing (gene deletions) or have been moved (translocated genes). Increased numbers of chromosomes, as seen in certain genetic disorders, are also diagnosed using FISH technologies (trisomy 21 or Down syndrome, for example). The targeted area(s) or sequences of DNA are determined by the probes that are used. Multiple targeted areas in the DNA can be assessed at the same time using FISH probes labeled with a number of different fluorescent dyes. The following photographs show cells that have been evaluated using the FISH methodology. These are just a few examples of the use of FISH technique. In Figure 1, FISH testing is applied to cells in amniotic fluid, obtained from a pregnant woman carrying a baby suspected of having Down syndrome (trisomy 21). Three copies of chromosome 21 are observed (red signals). The green signals (two copies) are for chromosome 13; these are for control purposes and show that the test is working properly. FISH supports a clinical diagnosis of trisomy 21. The doctors and genetic counselors will work with the woman to help her understand the results of the test. In Figure 2, FISH is used to assess breast tumor cells for the presence of an amplified gene, HER-2/neu (red signals). In approximately 25% of breast cancers, HER-2/neu is amplified. Women with amplified HER-2/neu tumors are treated with a drug (Herceptin) that targets the protein that is the product of the abnormal gene. If a woman is NOT positive for HER-2/neu amplification, she is not likely to receive any therapeutic benefit from Herceptin therapy and other drugs are considered. 
Figure 3 shows FISH used in a particular type of chronic leukemia, chronic myelogenous leukemia (CML). The specific probes used in this case detect BCR-ABL, an abnormal gene sequence formed by the translocation of a portion of chromosome 22 (BCR, a green probe) with a portion of chromosome 9 (ABL1, a red probe). The areas of yellow fluorescence signify the abnormal fusion gene (joining of red and green probes). Finding the BCR-ABL fusion confirms a diagnosis of CML. BCR-ABL positive patients benefit from molecular-targeted drugs, such as imatinib. (August 16, 2010) Fluorescence In Situ Hybridization (FISH). National Human Genome Research Institute [On-line information]. Available online at http://www.genome.gov/10000206 through http://www.genome.gov. Accessed March 2011. (August 16, 2010) Frequently Asked Questions about Genetic Testing. National Human Genome Research Institute [On-line information]. Available online at http://www.genome.gov/19516567 through http://www.genome.gov. Accessed March 2011. (March 6, 2006) Genetics Home Reference. Fluorescent in situ hybridization. Available online at http://ghr.nlm.nih.gov/glossary=fluorescentinsituhybridization through http://ghr.nlm.nih.gov. Accessed March 2011. (June 29, 2011) Hiller B, Bradtke J, Balz H and Rieder H (2004). CyDAS Online Analysis Site. Available online at http://www.cydas.org/OnlineAnalysis/ through http://www.cydas.org. Accessed July 2011. PCR is a laboratory method used for making a very large number of copies of short sections of DNA from a very small sample of genetic material. This process is called "amplifying" the DNA and it enables specific genes of interest to be detected or measured. DNA is made up of repeating sequences of four bases – adenine, thymine, guanine, and cytosine. These sequences form two strands that are bound together in a double helix structure by hydrogen bonds (like a spiral staircase). Each half of the helix is a complement of the other.
In humans, it is the difference in the sequence of these bases on each strand of DNA that leads to the uniqueness of each person's genetic makeup. The arrangement of the bases in each gene is used to produce RNA, which in turn produces a protein. There are about 25,000 genes in a human genome, and expression of these genes leads to the production of a large number of proteins that make up our bodies. The DNA of other organisms such as bacteria and viruses is also composed of thousands of different genes that code for their proteins.

How is the method performed?

PCR is carried out in several steps or "cycles" in an instrument called a thermocycler. This instrument increases and decreases the temperature of the specimen at defined intervals during the procedure. The first step or cycle of PCR is to separate the strands of DNA into two single strands by increasing the temperature of the sample that contains the DNA of interest. This is called "denaturing" the DNA. Once the strands separate, the sample is cooled slightly and forward and reverse primers are added and allowed to bind to the single DNA strands. Primers are short sequences of bases made specifically to recognize and bind to the section of DNA to be amplified, which is the very specific sequence of bases that is part of the gene or genes of interest. Primers are called "forward" and "reverse" in reference to the direction in which the bases within the section of DNA are copied. After the two primers attach to each strand of the DNA, a DNA enzyme (frequently Taq polymerase) then copies the DNA sequence on each half of the helix from the forward to the reverse primer, forming two double-stranded sections of DNA, each with one original half and one new half. Taq polymerase is an enzyme found in a bacterium (Thermus aquaticus) that grows in very hot water, such as in geysers or hot springs. Polymerases copy DNA (or RNA) to make new strands.
The Taq polymerase is especially helpful for laboratory testing because (unlike many other enzymes) it does not break down at the very high temperatures needed for PCR. When heat is applied again, each of the two double strands separates to make four single strands and, when cooled, the primers and polymerase act to make four double-stranded sections. The four strands become eight in the next cycle, eight become sixteen, and so on. Within 30 to 40 cycles, as many as a billion copies of the original DNA section can be produced and are then available to be used in numerous molecular diagnostic tests. This process has been automated so that a billion copies of the original DNA can be produced within a few hours.

How is it used?

This method can be used, for example, to detect certain genes in a person's DNA, such as those associated with cancer or genetic disorders, or it may be used to detect genetic material of bacteria or viruses that are causing an infection. These are just a few examples of laboratory tests that use PCR:

Real-time PCR is similar to PCR except that data are obtained as the amplification process is taking place (i.e., in "real time") rather than at a prescribed endpoint, which shortens the time for the test from overnight to a few hours. This method is used to measure the amount of DNA that is present in a sample.

RT-PCR (Reverse Transcriptase PCR)

This method uses PCR to amplify RNA. RNA is a single-stranded nucleic acid molecule and needs to be made into DNA before it can be amplified. The addition of a new strand that is the complement of the RNA is achieved by the enzyme called reverse transcriptase (RT) and an antisense (reverse) primer. The primer binds to the single-stranded RNA and the enzyme RT copies the RNA strand to make a single-stranded DNA, which it then copies to make a double-stranded DNA molecule. The double-stranded molecule can now be amplified by PCR. Detection can also be by real-time methods.
Here are two examples of laboratory tests that use RT-PCR: (February 27, 2012). Polymerase Chain Reaction (PCR). National Human Genome Research Institute [On-line information]. Available online at http://www.genome.gov/10000207 through http://www.genome.gov. Accessed August 2012. Tietz Textbook of Clinical Chemistry and Molecular Diagnostics. Burtis CA, Ashwood ER, Bruns DE, eds. St. Louis: Elsevier Saunders; Fifth edition, 2011, Pp 1412-1413. Clarke, W. and Dufour, D. R., Editors (2006). Contemporary Practice in Clinical Chemistry, AACC Press, Washington, DC. Pp 135-137.
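The amplification arithmetic described above (each cycle ideally doubling the number of copies, so 30 to 40 cycles yield around a billion copies from a single starting molecule) can be sketched in a few lines. The function name is my own, and real reactions are less than 100% efficient per cycle, so this is an upper bound rather than a measured yield:

```python
# Idealized PCR amplification: every cycle doubles the number of copies
# of the target sequence, giving start * 2**cycles after n cycles.

def pcr_copies(start_copies, cycles):
    """Copy count after the given number of ideal doubling cycles."""
    return start_copies * 2 ** cycles

# 30 cycles from a single molecule already exceeds a billion copies:
print(pcr_copies(1, 30))  # 1073741824
```

This exponential growth is also why quantitative real-time PCR works: the cycle at which the signal crosses a threshold reflects how much template was present at the start.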
How Much Is A Billion?

Understanding The Federal Budget: What Is A Billion?

How much is a billion? What does it mean to say that something costs a billion dollars ($1B)? Politicians, at both the state and federal level, talk in numbers that most folks can't visualize, whether we're talking budgets or campaign spending. First, we have to clarify that we're using the US system of numbers, not the British system (short scale vs. long scale). If we're talking about things relating to the US political system, we are using the US system. A billion, in this system, is one thousand million (1,000,000,000). A chart of raw numbers, however accurate, doesn't put "a billion" into perspective. Most of us know that the numbers are big; we just don't know how to think of them in the context of our own lives. Here are some attempts, collected from around the Net:
- If we wanted to pay down a billion dollars of the US debt, paying one dollar a second, it would take 31 years, 259 days, 1 hour, 46 minutes, and 40 seconds. To pay off a trillion dollars of debt, at a dollar a second, would take about 32,000 years. The current U.S. federal debt is $18.1 trillion.
- About a billion minutes ago, the Roman Empire was in full swing. (One billion minutes is about 1,900 years.)
- About a billion hours ago, we were living in the Stone Age. (One billion hours is about 114,000 years.)
- About a billion months ago, dinosaurs walked the earth. (One billion months is about 82 million years.)
- A billion inches is 15,783 miles, more than halfway around the earth (circumference).
- A billion pennies, stacked one atop the other, would reach nearly 1,000 miles. The Space Shuttle orbits about 225 miles above the Earth's surface.
- A person counting at the rate of two numbers per second would need almost 16 years to reach a billion.
- The earth is about 8,000 miles wide (diameter), and the sun is about 800,000 miles wide, not quite a million.
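Figures like these are easy to sanity-check yourself. The sketch below assumes a flat 365-day year, which is why it lands slightly off the article's day-level precision (the quoted "31 years, 259 days" accounts for leap days):

```python
# Converting a billion (and a trillion) seconds into years,
# using a simplified 365-day year.

SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

years_per_billion_seconds = 1_000_000_000 / SECONDS_PER_YEAR
years_per_trillion_seconds = 1_000_000_000_000 / SECONDS_PER_YEAR

print(round(years_per_billion_seconds, 1))   # ~31.7 years at $1 per second
print(round(years_per_trillion_seconds))     # ~31710 years, i.e. "about 32,000"
```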
Remember that a trillion is one thousand billion, and today's federal budget numbers are in trillions. For a really mind-blowingly large number, think about the googol, which is 1 followed by 100 zeros (10^100). To try to wrap your mind around that, envision a diamond that weighs as much as the earth. It would contain only 10^50 carbon atoms. I originally wrote this article for About.com in 2004.
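That diamond figure checks out from rough physical constants (Earth's mass of about 5.97 x 10^24 kg, carbon's molar mass of about 12 g/mol, and Avogadro's number); all values below are rounded, so only the order of magnitude is meaningful:

```python
# Estimating the number of carbon atoms in a diamond with the mass
# of the Earth. Constants are rounded approximations.

EARTH_MASS_GRAMS = 5.97e27      # ~5.97e24 kg
CARBON_MOLAR_MASS = 12.0        # grams per mole of carbon
AVOGADRO = 6.022e23             # atoms per mole

atoms = EARTH_MASS_GRAMS / CARBON_MOLAR_MASS * AVOGADRO
print(f"{atoms:.1e}")           # ~3.0e+50, i.e. on the order of 10^50
```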
To find the value of each angle of the triangle, use the property that the sum of the angles of a triangle is equal to 180 degrees. As angle C is half as large as angle A, and A = B, we can write the three angles of the triangle in terms of C as C, 2C and 2C.

C + 2C + 2C = 5C = 180 => C = 36

A = 36*2 = 72 = B

The angles of the triangle are A = 72 degrees, B = 72 degrees and C = 36 degrees.

The three angles in a triangle add up to 180 degrees. So if two of the angles are equal and the third is half as much as one of them, the equation is x + x + 0.5x = 180. Combining like terms, 2.5x = 180. Divide both sides by 2.5 and x = 72. Angles A and B will be 72 degrees and angle C is 36 degrees.
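The steps above can be verified numerically with a few lines:

```python
# Solving for the angles: A = B, C = A / 2, and A + B + C = 180.
# Writing A and B as 2C gives C + 2C + 2C = 5C = 180.

C = 180 / 5
A = B = 2 * C

print(A, B, C)            # 72.0 72.0 36.0
assert A + B + C == 180   # the three angles sum to 180 degrees
```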
Learning outside the home begins early in life. More than one-third of all U.S. children under the age of five are cared for outside of their homes by individuals not related to them.1 Research on early childhood education shows that high-quality child care experiences support the development of social and academic skills that facilitate children's later success in school. There is also mounting evidence that close relationships between teachers and children are an important part of creating high-quality care environments and positive child outcomes. As most parents and teachers know, children gain increasing control over their emotions, attention, and behavior across the early years. These growing abilities allow them to face and overcome new developmental challenges, from getting along with others to learning novel academic skills.2 Despite their growing abilities, preschoolers sometimes find it difficult to regulate their thoughts and emotions in ways that allow them to succeed at new tasks. At these times, close relationships with meaningful adults, including teachers, can help children learn to regulate their own behavior. The sense of safety and security afforded by close relationships with teachers provides children with a steady footing to support them through developmental challenges. This support may help the child work through a new academic challenge, such as learning to write a new letter of the alphabet; or the close relationship may help the child maintain a previously learned skill when confronted with a challenging new context. For instance, a child who is quite socially adept during circle time (a prior skill) might have more difficulty navigating these social interactions when he or she is over-tired from a missed nap (a challenging context). In either case, when children "internalize" their teachers as reliable sources of support, they are more successful at overcoming challenges. 
In fact, having emotionally close relationships with child-care providers as a toddler has been linked with more positive social behavior and more complex play later as a preschooler.3 Kindergartners with close teacher relationships have been shown to be more engaged in classroom activities, have better attitudes about school, and demonstrate better academic performance.4 Thus, teacher-child relationships appear to be an important part of children's social and academic success in school. Harvard Graduate School of Education Lecturer Jacqueline Zeller's applied work in the Boston Public Schools and her research have been informed by this literature on teacher-student relationships. In the following interview, Zeller discusses the importance of teacher-student relationships for building students' sense of security and facilitating their readiness to learn at school. What led you to study and consult regarding building positive teacher-student relationships? Before beginning graduate school in psychology, my experiences teaching in elementary schools led me to believe that the relationships between children and teachers are powerful mechanisms for change. When students felt that I believed in them and supported their growth, they felt more confident both academically and socially at school. This belief was further strengthened in my graduate studies, as I began to apply attachment theories to teacher-child relationships. I decided to study how teachers' characteristics and children's characteristics work together to predict relationship quality, incorporating an attachment perspective. At that same time, I was working in schools, which was a natural venue for me to apply attachment theories to my consultation work, as I tried to help teachers in their efforts to join effectively with their students. Why do you think socio-emotional development is important to discuss with regard to schools? 
Often, we discuss social and emotional development very distinctly from academic growth. However, these ideas are very much intertwined. When children feel more secure at school, they are more prepared to learn. Children who feel this level of security are also generally more open to sharing how their lives outside of school are connected with ideas introduced in their classrooms. Educators have noted that these personal anecdotes help children build the foundations for literacy. What do you think is important to think about when reflecting on teacher-student relationships? Earlier research examining teacher-student relationships has tended to focus on how students' individual characteristics affect their relationships with teachers. While the individual characteristics that students bring to their relationships are very important, we know that as adults, we also bring experiences, beliefs, and characteristics that affect the quality of relationships. It is important to consider what each individual brings to the relationship and how the relationship is affected by the contexts in which it is embedded. Most people relate more easily to some children than to others, but as adults in relationships with youth it is important that we reflect on what we bring to the table and seek support when we need it to most effectively help children and adolescents. How do you feel that these principles match with your training of students in HGSE's Risk and Prevention and School Counseling program? A primary goal of the Risk and Prevention and School Counseling Program at HGSE is to train future practitioners who practice prevention and intervention in school settings. We know that children and adolescents do not exist in a vacuum, but rather are bound by their contexts, including their homes, schools, and neighborhoods. Students in our program are encouraged to understand how children's experiences are a function of these contexts.
A major part of children's school contexts is their classroom environments and relationships with their teachers. Currently, in addition to teaching at Harvard, I work as a clinician at an elementary school. I try to bring perspectives from my practice work to my courses at HGSE to provide some examples of how these theories are applied in real-world settings. Similarly, at their practicum sites, our students are encouraged to partner with children's teachers to foster safe and supportive relationships between teachers and children. What are your hopes for where research and practice is heading in this field? My hope is that researchers continue to examine these relationships contextually and reciprocally, acknowledging the complexity of these relationships. Reflective practice is important to understand how we as adults can help shape children's and adolescents' contexts to facilitate their healthy development. Schools have increasing demands placed upon them with each passing year, so providing time for teachers and school staff to discuss and reflect on their relationships can be very difficult. However, I hope that as we continue to understand the powerful implications of these relationships for children, schools will protect time for teachers to discuss these relationships with colleagues, school psychologists, mentors, and consultants. 1 Johnson, J. O. (2005). Current population report: Who's minding the kids? Child care arrangements: Winter 2002. Washington, DC: U.S. Census Bureau. Available online at http://www.census.gov/prod/2005pubs/p70-101.pdf. 3 Howes, C., Matheson, C.C., & Hamilton, C.E. (1994). Maternal, teacher, and child care history correlates of children's relationships with peers. Child Development, 65, 264-273. 4 Birch, S. H., & Ladd, G. W. (1997). The teacher-child relationship and children's early school adjustment. Journal of School Psychology, 35, 61-79.
Jacqueline Zeller's clinical interests focus on prevention and intervention efforts in schools and promoting resiliency in children. She has worked as a therapist in a variety of settings, including residential treatment centers, day treatment centers, outpatient clinics, hospitals, and schools.
Fixed joints, also known as fibrous joints, are places where two bones come together in the body but are unable to move. This type of joint is held together by fibrous connective tissue rather than ligaments and tendons. Examples of fixed joints include the joints between the bones in the skull and the joint where the radius and ulna bones meet in the lower arm. There are three different types of fixed joints in the body: sutures, syndesmoses and gomphoses. Sutures are the junctions between the skull bones. These joints are slightly mobile while a person is an infant, allowing the skull to expand as the brain grows. The sutures become completely rigid by the time a child is a toddler, thus protecting the brain from damage. Syndesmoses are fixed joints between two long bones. There are two places where syndesmoses are found in the body: between the radius and ulna in the arm and between the fibula and tibia in the leg. A small amount of motion does actually occur in syndesmoses joints, but since they are connected by fibrous tissue, they are still technically considered fixed joints. The final type of fixed joint, gomphoses, are the joints between the tooth roots and the mandible or maxillary bones. These joints are completely immobile, and each tooth is attached to the bone by fibrous connective tissue.
Algae are a promising source of biofuels: besides being easy to grow and handle, some varieties are rich in oil similar to that produced by soybeans. Algae also produce another fuel: hydrogen. They make a small amount of hydrogen naturally during photosynthesis, but Anastasios Melis, a plant- and microbial-biology professor at the University of California, Berkeley, believes that genetically engineered versions of the tiny green organisms have a good shot at being a viable source for hydrogen. Melis has created mutant algae that make better use of sunlight than their natural cousins do. This could increase the hydrogen that the algae produce by a factor of three. It would also boost the algae's production of oil for biofuels. The new finding will be important in maximizing the production of hydrogen in large-scale, commercial bioreactors. In a laboratory, Melis says, "[we make] low-density cultures and have thin bottles so that light penetrates from all sides." Because of this, the cells use all the light falling on them. But in a commercial bioreactor, where dense algae cultures would be spread out in open ponds under the sun, the top layers of algae absorb all the sunlight but can only use a fraction of it. Melis and his colleagues are designing algae that have less chlorophyll so that they absorb less sunlight. That means more light penetrates into the deeper algae layers, and eventually, more cells use the sunlight to make hydrogen. The researchers manipulate the genes that control the amount of chlorophyll in the algae's chloroplasts, the organelles where photosynthesis takes place. Each chloroplast naturally has 600 chlorophyll molecules. So far, the researchers have reduced this number by half. They plan to reduce the number further, to 130 chlorophyll molecules. At that point, dense cultures of algae in big bioreactors would make three times as much hydrogen as they make now, Melis says.
“If you can increase the productivity by means of thinning out the [chlorophyll], it’s going to affect any product that you make,” says Rolf Mehlhorn, an energy technologist at the Lawrence Berkeley National Laboratory. Algae that use sunlight more effectively would produce more oil, he says. Startups such as Solix Biofuels, based in Fort Collins, CO, and LiveFuels, based in Menlo Park, CA, are trying to extract oil from algae; the oil can be refined to make diesel and jet fuel. The process is still at least five years from being used for hydrogen generation. Researchers will first have to increase the algae’s capacity to produce hydrogen. During normal photosynthesis, algae focus on using the sun’s energy to convert carbon dioxide and water into glucose, releasing oxygen in the process. Only about 3 to 5 percent of photosynthesis leads to hydrogen. Melis estimates that, if the entire capacity of the photosynthesis of the algae could be directed toward hydrogen production, 80 kilograms of hydrogen could be produced commercially per acre per day.
Austen's technique for imputing two meanings to her narratorial statements through verbal irony--remember that in the text, it is the narrator we hear, not Jane Austen (although one of the charms of Austen's works is that we believe the narrator's voice is identical to Jane's)--is complex and involves syntax, grammar and subject matter. In order to answer your question, an analysis of these three elements in ironic statements is in order. As always, Austen introduces the story of Emma with a brilliantly ironic statement from the narrator: The real evils, indeed, of Emma's situation were the power of having rather too much her own way, and a disposition to think a little too well of herself; these were the disadvantages which threatened alloy to her many enjoyments. The danger, however, was at present so unperceived, that they did not by any means rank as misfortunes with her. Let's analyze this to see how Austen uses verbal irony to imply two attitudes. The subject matter of these two statements can be reduced to its simplest form and stated as: the evils of Emma's life. This is a serious subject, especially when coupled with the vocabulary word (vocabulary use is a subcategory of grammar: words and word usage) "danger." With this topic looming from the first phrase, "The real evils," we expect a serious danger to obtrude into Emma's life. Instead, we encounter verbal irony as we are told that the dangerous evil of Emma's life is that she is over-indulged, pampered, unguided and spoiled. The technique Austen uses here is juxtaposition of the serious with the ridiculous (though too true) in one statement. When we read that the evil facing Emma is that she is spoiled, we have to laugh and understand the ironic tale about to unfold (or we are confused because we have never encountered a heroine who is maligned and painted in unpleasant shades at the outset). We've already noted one point in grammar, that being the vocabulary choice, "danger."
The phrase "threatened alloy" is a second vocabulary choice that reinforces the verbal irony of the two ideas in unexpected juxtaposition: being spoiled juxtaposed to danger and threat. Aside from some differences between 18th and 19th century punctuation and contemporary punctuation (e.g., the now unneeded commas in "at present so unperceived, that they did not" and "her own way, and a disposition"), Austen's grammar is perfect; thus, grammatically, vocabulary analysis is our best tool for understanding Austen's technique, though it is most likely Austen chose her punctuation to reinforce and emphasize her ironic statements. Syntax--the arrangement of grammatical parts for style and emphasis of communication--adds strongly to the duality of Austen's ironic technique. Let's analyze part of the above quotation for syntax. The real evils, indeed, of Emma's situation were ...; these were the disadvantages which threatened ... Austen might have written the above thoughts using this syntax: - The disadvantages that threatened were the real evils of Emma's situation which were .... This is a straightforward statement that combines the subjects of the two semicolon coordinated sentences into one sentence. This syntax takes itself and its communication very seriously; there is no room for verbal irony in this syntax. This is a serious statement, and if Austen had written this instead of her two semicolon coordinated sentences, we would have had a very different image of Emma, and our ironic tale of meddlesomeness and love would have been a serious didactic tale about a troublemaker. Thus, it is through subject matter and unexpectedly juxtaposed subject matter; grammar and vocabulary; and syntax that Austen implies a different attitude than the essence of her narrative statements using verbal irony that she creates with these techniques. To find other examples, apply these steps of analysis to other ironic statements.
This novel is notable in the way that it deliberately tries to confuse the reader, echoing the confusion of the various characters as they, and the audience, try to establish who is in love with whom. What is so excellent about Austen's narrative voice is the way that she uses verbal irony so brilliantly to capture this confusion and suggest other attitudes that are perhaps more accurate than the actual words used suggest. A classic example of this is when Emma reflects on her action in trying to bring Harriet and Mr. Elton together after Mr. Elton has actually proposed to her: The first error, and the worst, lay at her door. It was foolish, it was wrong, to take so active a part in bringing any two people together. It was adventuring too far, assuming too much, making light of what ought to be serious--a trick of what ought to be simple. She was quite concerned and ashamed, and resolved to do such things no more. As Emma reflects on her faults, she, typically and rather impetuously, "resolved to do such things no more," declaring that she will never try to matchmake again. However, if the reader has understood her character, they will detect a touch of verbal irony in this strong declaration. Emma will find it impossible to stop matchmaking altogether, as her natural sense of arrogance and desire to interfere is so strong that she will find it all but impossible to desist from trying to matchmake again. Her character development will not occur after such a relatively minor misunderstanding. Verbal irony in this example therefore works in the way that Austen records the thoughts of a character in such a way as to mock their resolutions and show them to be rather extreme reactions made in the heat of the moment that they have no intention of following.
Social anxiety is effectively an extreme type of shyness that can result in people feeling very self-conscious, inferior, and subject to judgement from others in social situations. It’s the third most common form of psychological disorder, after depression and alcoholism, and doesn’t always occur in isolation.

There are two types of social anxiety:

Specific social phobia: This is when someone may feel able to mix with people socially in most situations but may struggle in particular circumstances, such as when they need to speak publicly, eat in front of others, etc. In these instances they may feel they are being scrutinised and judged and may worry about what could go wrong.

General social phobia: People with this phobia become anxious whenever they are around people. They may feel they are being looked at and judged. It can be very disabling and people often cope with this by avoiding social situations that they feel will make them anxious. This may prevent them from forming long-term relationships.

The symptoms of social anxiety

Once recognised, social anxieties can usually be successfully treated. Therefore if you think you may have a social anxiety it is important to look out for common signs and symptoms so you can seek help.

Physical signs: Sweating, palpitations (a feeling of an increased or irregular heartbeat), dry mouth, blushing and trembling are all common signs that you may be anxious. You might also have difficulty breathing, which may escalate into a panic attack. Sometimes people fear the symptoms themselves, as they worry these may result in them being judged harshly by others.

Psychological signs: If you have a social anxiety, you may worry excessively about a social situation or ruminate about situations in the past. You may be aware that you have a problem, but look for ways around it. You may choose to avoid certain situations or use alcohol and drugs to relax and reduce your anxiety so you can function normally.
If you regularly use alcohol in this way, it can become an additional problem, so it is important to seek help.

Causes of social anxiety

It is not fully understood why some people develop a social anxiety while others may not. Sometimes it runs in families, so there may be a biological factor, but often it can stem from past experiences. Bad treatment at school (bullying and teasing) or how we were treated by our families and friends can affect us in later life too.

Dealing with a social anxiety

If you think you may have a social phobia, it’s important to discuss this with your GP, who will be able to refer you on to someone in your area who can help. You may find this hard to do, but treatment for this type of anxiety can be very successful.

Cognitive Behavioural Therapy (CBT): CBT has been shown to be highly successful in treating social anxieties.

Social skills: Sometimes re-learning social skills, such as starting a conversation, can help you feel more equipped to deal with social situations.

Medication: There are medications that can be prescribed to ease the symptoms of anxiety.

Self-help groups: Joining a self-help group can be really helpful.

Self-help publications: There are many good self-help books that guide you through the causes of social anxiety and the things you can do to help yourself. A book our therapists and clients have found helpful is:

- Overcoming Social Anxiety and Shyness by Gillian Butler

This is an accessible self-help book based on Cognitive Behavioural Therapy. It provides a good explanation of social anxiety and ways to challenge it. There are many exercises and techniques for those who like to work in a structured way.

Feel free to contact us to ask about psychological therapies available at First Psychology Aberdeen that may help with social anxiety.
Intro to Multiplication Arrays and Repeated Addition using manipulative guided worksheet: With this, I give each student some sort of manipulative that they can visualize the repeated addition with. This is in a cup usually (hence the vocabulary chosen in the worksheet - fully editable!). They then follow the directions to build an array and write out the repeated addition on the worksheet. Very easy to follow and self-explanatory, as well as a great visual. Helped my students to understand the concept much better than anything in our curriculum! ENJOY!! EXAMPLE - First part says this: There are 4 groups. Each group has 3 in it. Show these groups on your desk using the cup given to you. Then they are to fill in the blanks: There are _____ groups of 3 I can show this by doing ____ + ____ + ____ = ____
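For teachers who want to check answers or generate new versions of the worksheet, the example above (4 groups of 3) can be sketched in a few lines of Python. The variable names here are illustrative, not part of the worksheet:

```python
# Model the worksheet example: 4 groups with 3 counters in each.
groups = 4
per_group = 3

# The array, as counters laid out on a desk: one entry per group.
array = [per_group] * groups          # [3, 3, 3, 3]

# Repeated addition: 3 + 3 + 3 + 3
repeated_addition = sum(array)

# Multiplication gives the same total: 4 x 3
product = groups * per_group

print(repeated_addition)              # 12
print(repeated_addition == product)   # True
```

Changing `groups` and `per_group` produces new fill-in-the-blank problems, and the final comparison confirms that repeated addition and multiplication always agree.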
This indispensable book presents a wealth of concrete ways to promote children's intrinsic motivation to read. It provides 30 practical strategies and activities—such as "Citizen of the Month," "High Five," and "Your Life in Books"—that are ready to implement in the K–6 classroom. Teachers get step-by-step instructions for creating a motivating classroom environment, nurturing children's self-concepts as literacy learners, and fostering appreciation of the value of reading and writing. More than a dozen reproducibles include two helpful assessment tools; the large-size format facilitates photocopying. The Publisher grants individual book purchasers non-assignable permission to reproduce selected materials in this book for professional use. For details and limitations, see copyright page. > Classroom ready: gives K–6 teachers step-by-step strategies they can start using right away. > Motivating readers early lays the foundation for literacy success. > Includes 30 concisely described activities and over a dozen reproducibles, including two assessment tools. > From prominent experts with experience as teachers, researchers, and professional developers. Introduction: Myths and Truths I. Motivating Classroom Communities 1. Book Blessing 2. Citizen of the Month 3. Class Spirit 4. Literacy Centers Plus 5. Literacy Workshop Plus 6. Read-and-Think Corner 7. Star of the Week 8. Teacher’s Reading Log 9. Happy Happenings Box 10. Lifeline: Past–Present–Future II. Promoting Self-Concept as a Reader 11. Experts Teaching 12. Every-Pupil-Response Techniques 13. High Five 14. I Can, You Can, We Can 15. Specific Praise 16. Now–Next–Quick Reads 17. Alternatives to Cold, Round Robin Reading 18. Word Sorting for Younger Students 19. Word Sorting for Older Students III. Promoting the Value of Reading 20. Be a Reading Role Model 21. Wall of Fame 22. Honor All Print 23. Personal Invitation to Read 24. Make a Real-World Connection 26. Textbook Top Ten 27. Rewarding Reading 28. 
Your Life in Books 29. Vote for the Read-Aloud 30. Promoting the Value of Literacy at Home IV. Against All Odds: A Case Study of Small Changes and Big Differences V. Assessing Motivation: Instruments Assessing Motivation to Read The Teacher, the Text, and the Context: Factors That Influence Elementary Students' Motivation to Write VI. Conclusion: Myths and Truths Revisited "This informative, easy-to-use resource is full of practical ideas that fit into the constraints of the busy elementary classroom and mesh well with Common Core standards. The book addresses a very common pitfall to student achievement—lack of motivation. I will definitely use the read-aloud text selection suggestions and the concept of 'honoring all print' in my classroom." —Denise Ashe Devine, MS, fourth-grade teacher, Chittenango (New York) Central Schools "This book is a 'must read' for all those who are (or will be) teaching reading. Without motivation, we cannot teach children to become lifelong voluntary readers. The book illustrates how to motivate children with choices, suitable challenges, social interaction, and success—and how to make literacy more interesting and relevant for them. I thoroughly enjoyed the book and learned many excellent strategies."—Lesley Mandel Morrow, PhD, Professor and Chair, Department of Learning and Teaching, Rutgers, The State University of New Jersey Barbara A. Marinak, PhD, is Associate Professor in the School of Education and Human Services at Mount St. Mary’s University. Before coming to Mount St. Mary's, she spent more than two decades in public education. She co-chairs the Response to Intervention (RTI) Task Force of the International Reading Association and serves on the National Joint Commission on Learning Disabilities. Dr. Marinak is a recipient of the J. Estill Alexander Future Leaders in Literacy Dissertation Award from the Association of Literacy Educators and Researchers.
Her research and publications address reading motivation, intervention practices, and the use of informational text. Linda B. Gambrell, PhD, is Distinguished Professor in the Eugene T. Moore School of Education at Clemson University. A past president of the International Reading Association (IRA) and the National Reading Conference (NRC), she is a recipient of numerous awards, including the Outstanding Teacher Educator in Reading Award and the William S. Gray Award from the IRA, the Albert J. Kingston Award from the NRC, and the Oscar Causey Award from the Literacy Research Association, and is a member of the Reading Hall of Fame. Dr. Gambrell's research and publications focus on comprehension and cognitive processing, literacy motivation, and the role of discussion in teaching and learning. Susan A. Mazzoni, MEd, is an independent literacy consultant who works with administrators and teachers to improve literacy practices in elementary school classrooms. For the past 15 years, she has worked with teachers on implementing phonics, phonemic awareness, fluency, comprehension, and vocabulary instruction in ways that promote student engagement and literacy motivation. Ms. Mazzoni has taught reading courses at the University of Maryland, College Park, and served as a research assistant for the National Reading Research Center. Her research and publications address reading motivation, reading engagement, emergent literacy, and discussion. Pub USA 2013 Pbk 184 pages
The prokaryotes (pronounced /proʊˈkæri.oʊts/ or /proʊˈkæriəts/) are a group of organisms that lack a cell nucleus (= karyon), or any other membrane-bound organelles. They differ from the eukaryotes, which have a cell nucleus. Most are unicellular, but a few prokaryotes such as myxobacteria have multicellular stages in their life cycles. The word prokaryote comes from the Greek πρό- (pro-) "before" + καρυόν (karyon) "nut or kernel". The prokaryotes are divided into two domains: the bacteria and the archaea. Archaea were recognized as a domain of life in 1990. These organisms were originally thought to live only in inhospitable conditions such as extremes of temperature, pH, and radiation but have since been found in all types of habitats.
There are various definitions of an inertial reference frame out there, but only one is really accepted by the physics community. In some places, you will see an inertial reference frame defined as a reference frame in which Newton's law of inertia is valid. In some places, you will see an inertial reference frame defined simply as a reference frame which isn't accelerating. In some places, you will see an inertial reference frame defined as a reference frame in which all three of Newton's laws are valid. (This is the standard definition) In some places, you will see an inertial reference frame defined as follows: An inertial reference frame is defined as a reference frame in which an object at rest will remain at rest, and an object in motion will remain in motion in a straight line at a constant speed, and if a repulsive force acts between two bodies of the same mass, they will acquire equal velocities in equal amounts of time. What I would like to do in this thread is discuss inertial reference frames. The reason is this: If someone does not know exactly what an inertial reference frame is, then they certainly don't understand the theory of special relativity, precisely because its fundamental postulate is that the speed of light is c in any inertial reference frame. To begin with, consider Newton's laws: Law of inertia: An object at rest will remain at rest, and an object in motion will remain in motion in a straight line at a constant speed, unless acted upon by an outside force. Newton's second law: An external force will accelerate an object, and the change in the momentum of the object will be directly proportional to the applied force, and in the direction of the applied force. Newton's third law: Every action is accompanied by an equal and opposite reaction.
In other words, if some object in the universe is acted upon by an external force of magnitude F, then some other object in the universe is simultaneously acted upon by a force of the same magnitude F, but in the opposite direction. Now, Newton's first and second law can be combined into a single mathematical equation which is this: [tex] F = dP/dt [/tex] In the above equation, F is a vector quantity, and P = momentum is also a vector quantity. The definition of momentum is as follows: P = momentum = mass times velocity = mV where velocity is a vector quantity. The magnitude of an object's velocity, is its speed in a frame. This is all you need in order to understand the definition of an inertial reference frame. An interesting consequence of the definition of an inertial reference frame, is that if frame 1 is an inertial reference frame, and frame 2 is moving at a constant speed relative to frame 1, then frame 2 is also an inertial reference frame, but if frame 2 is accelerating with respect to frame 1, then frame 2 is a non inertial reference frame. The first issue which I wish to address, is how is it shown that Newton's first and second laws are contained in the single mathematical statement given above?
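One way to answer this (a standard argument, assuming the mass m of the object is constant):

```latex
F = \frac{dP}{dt} = \frac{d(mV)}{dt} = m\,\frac{dV}{dt}
```

Setting F = 0 gives dV/dt = 0, so V is constant in both magnitude and direction: an object at rest stays at rest, and an object in motion continues in a straight line at a constant speed. The law of inertia is therefore the zero-force special case of F = dP/dt, while the proportionality to the applied force and the direction of the acceleration demanded by the second law can be read off directly from the equation itself.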
Venezuela's air force was first established in 1920 and used the national flag - a horizontal tricolour of yellow, blue and red - as rudder striping and roundels for wings and fuselage. Since about 1956 bars in three colours have been added to the roundels and it has been normal practice to mark them above the port wing and below the starboard. The blue, central area of the rudder marking bears a semi-circle of seven white stars for the original seven provinces of the country. Low-visibility markings on camouflaged aircraft are reduced in size, while the rudder markings have been discontinued. Some aircraft now bear the national flag as a fuselage marking. Naval aircraft have reverted to the pre-1956 roundel and also carry a black anchor on a white panel. Once part of French Indo-China, Vietnam formed its first air arm, under French supervision, in 1950. Aircraft were marked with an orange disc bearing three concentric red circles. The fin marking was an orange square with three red lines, the national flag. After the French withdrawal, in 1954, the country was divided into Northern and Southern zones. The Southern Zone began to receive aid from the United States and, by 1962, the wing and fuselage marking was changed to a U.S. type. The fin marking remained unchanged. The new roundel was a white star on a blue disc with side bars of orange and a red stripe. The whole insignia was surrounded with a red border. The Northern Zone adopted a national flag of plain red with a yellow five pointed star. The initial aircraft marking was a red bordered yellow star. Some aircraft have been reported with a plain yellow star, often as a fin marking. Occasionally captured American aircraft carried the Viet Cong flag, red over blue with a yellow star. By about 1970 North Vietnam placed the yellow star on a red disc with red side bars, all surrounded with yellow. Unified Vietnam, from 1975, used the North Vietnamese marking or the national flag.
Summary

Students investigate decomposers and the role of decomposers in maintaining the flow of nutrients in an environment. Students also learn how engineers use decomposers to help clean up wastes in a process known as bioremediation. This lesson concludes a series of six lessons in which students use their growing understanding of various environments and the engineering design process to design and create their own model biodome ecosystems.

Bioremediation is any process that uses natural living things to return an environment altered by contaminants to its original condition. Engineers use decomposers such as earthworms, fungi and bacteria in environmental clean-up efforts through bioremediation, for example, to clean up oil and chemical spills. Bioremediation technology examples include bioventing, landfarming, bioreactors and biostimulation. This use of biological agents to restore damaged ecosystems to healthier states is especially beneficial because it has less impact on the natural environment than other processes. Also, some engineers design new products so they decompose on purpose, such as dissolvable stitches, packing material, temporary tear duct plugs, plastic grocery bags, biodegradable plates and flatware.

Some basic information about food chains, plants and animals, as covered in the Biodomes unit, Lessons 3, 4 and 5, is assumed.

After this lesson, students should be able to:
- Define decomposers.
- List two examples of decomposers and how they affect the environment.
- Explain how engineers use decomposers for bioremediation of the environment.

More Curriculum Like This

Students gain an understanding of the parts of a plant, plant types and how they produce their own food from sunlight through photosynthesis. They learn how plants play an important part in maintaining a balanced environment in which the living organisms of the Earth survive. This lesson is part of ...
By studying key processes in the carbon cycle, such as photosynthesis, composting and anaerobic digestion, students learn how nature and engineers "biorecycle" carbon. Students are exposed to examples of how microbes play many roles in various systems to recycle organic materials and also learn how ...

Students design and conduct experiments to determine what environmental factors favor decomposition by soil microbes. They use chunks of carrots for the materials to be decomposed, and their experiments are carried out in plastic bags filled with soil.

Students look at the components of cells and their functions. The lesson focuses on the difference between prokaryotic and eukaryotic cells.

Each TeachEngineering lesson or activity is correlated to one or more K-12 science, technology, engineering or math (STEM) educational standards. All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standards Network (ASN), a project of D2L (www.achievementstandards.org). In the ASN, standards are hierarchically structured: first by source; e.g., by state; within source by type; e.g., science or mathematics; within type by subtype, then by grade, etc.

- Waste must be appropriately recycled or disposed of to prevent unnecessary harm to the environment. (Grades 3 - 5)
- A subsystem is a system that operates as a part of another system. (Grades 3 - 5)
- Compare and contrast different habitat types. (Grade 4)
- Create and evaluate models of the flow of nonliving components or resources through an ecosystem. (Grade 4)

Let's think about different plants and animals. We have learned about the basic needs of plants and animals — food, water and energy. We also know that animals can be classified by how and what they eat. Who can tell me where plants get their energy to make food? (Answer: The sun.) What are the three words we use to classify what animals eat? (Answer: Carnivores, herbivores and omnivores.) Let's play a game. Let's see how many different plants and animals we can name in two minutes. (Ask students to raise their hands and give a plant or animal name. Do not allow any repeats.) Did we mention worms? How about fungus or mold? Today we are going to learn about a special group of organisms that includes earthworms and fungi and the cool things that they do for the environment and us. Does anyone know what group I am talking about? Well, earthworms and fungi fall into a category called decomposers. Decomposers are living organisms that break down other living and non-living things into smaller parts. When plants and animals die, they become food for these decomposers. Decomposers can recycle dead plants and animals into chemical nutrients such as carbon and nitrogen that are released back into the soil, air and water as food for living plants and animals. So, decomposers recycle dead plants and animals and help keep the flow of nutrients available in the environment.
Earthworms are animal decomposers that eat dead plants and animals. When they eat, they take in nutrients from microorganisms as well as soil and tiny pebbles. Worms then deposit wastes that are rich in nutrients such as nitrogen and phosphorus, which enrich the soil. As worms move through the soil, they also help loosen the soil so air can circulate; this helps plants to grow. One thing to remember is that earthworms need moist environments to survive. If they dry out, they have trouble burrowing into the soil and they die. Fungi are another type of decomposer. Fungi include things like mushrooms, mildew, mold and toadstools. Fungi are not actually plants because they do not make their own food using photosynthesis. Instead, they contain chemicals that help them break down and absorb food from dead plants and animals around them. Many fungi are helpful to humans. Penicillin and other antibiotic medicines are made from fungi. Mushrooms, truffles and yeast are edible or used in making meals that we eat. Some fungi are used to make industrial chemicals that, in turn, make things like stonewashed denim for jeans. Other fungi are highly poisonous and should never be eaten by people. Both scientists and engineers need to know about decomposers and how they interact with the environment. Why do you think engineers might want to know something about decomposers? What kind of things might engineers need to break down into smaller parts? Well, many engineers are concerned with pollution and how it affects an environment. Toxic oil spills in the ocean and even sewage in a city are things that engineers want to break down so they are no longer harmful to humans or ecosystems. Environmental engineers help to clean up toxic areas through a process called bioremediation. Who thinks they know what bioremediation is? Well, let's break down the word. "Bio" is the study of life. What word does "remediation" sound close to?
Remedy is a treatment or cure that corrects something that is wrong or makes you sick. So, bioremediation is basically fixing something that is wrong by using living things. Bioremediation uses decomposers and green plants to return an environment that has been damaged by contaminants or pollution back to its original, healthy state. When might bioremediation have problems? Well, decomposers are unable to break down some substances, such as metals, so those substances remain harmful to the environment. Engineers are working on ways to recycle those materials and use them for new things. So, sometimes bioremediation is a lot like recycling — helping to break down old things and make new things!

Lesson Background and Concepts for Teachers

Worldwide, more than 5,500 named species of earthworms are known. They exist almost everywhere except in polar and arid climates. They range in size from 2 cm to more than 3 m. With no eyes or legs, worms tunnel through the soil, breathing through their skin. They travel underground by moving their segmented bodies through muscular contractions that shorten and lengthen their body. Worms can replace or replicate lost segments, although this ability varies between species and depends on the extent of the damage. Earthworms are helpful when we want to compost dead organic matter, add nutrients to soils, and aerate the soil. As earthworms move through the organic material in soils, they eat. Like humans, earthworms cannot digest everything they eat. What they cannot digest is released as waste called castings, which add important nutrients to the soil. Also, as earthworms move from place to place, they create burrows that mix up the soil and aerate it so air and water can penetrate. The more than 50,000 known species of fungi vary from unicellular yeasts to multicellular mushrooms. They are all eukaryotes that digest their food externally and absorb nutrients into their cells.
Fungi are typically found in warm, moist places that provide the conditions they require to live and grow. Fungi absorb their food through hyphae, which are threadlike tubes that make up their bodies. Hyphae actually grow into the desired food source and release digestive juices that help to break down the food source into smaller particles that can be absorbed. Fungi feed on both living and dead organisms, and can even be responsible for the death of an organism. Some fungi also serve as decomposers — organisms that break down the dead plant and animal matter that would otherwise cover our planet into chemical nutrients. Fungi can also be used to produce food, such as bread, cheese and plain mushrooms. Fungi can be responsible for both the cause and cure of various diseases. Example human diseases caused by fungi include athlete's foot and ringworm. Penicillin, a well-known antibiotic that has saved millions of lives, is made from a fungus.

biodome: A human-made, closed environment containing plants and animals existing in equilibrium.

bioremediation: The use of natural living things, such as plants, worms, bacteria and fungi, to clean up polluted soil or water in a more environmentally friendly way.

decomposer: A living organism that breaks down living and non-living matter into smaller parts.

decomposition: The breakdown of a substance into different parts or simpler compounds. Decomposition can occur due to heat, chemical reaction, decay, etc.

earthworm: An example animal decomposer. Worms burrow in soil and feed on soil nutrients and decaying organic matter. In compost piles, worms break down food wastes into healthy soil.

ecosystem: A functional unit consisting of all the living organisms (plants, animals and microbes) in a given area, and all the nonliving physical and chemical factors of their environment, linked together through nutrient cycling and energy flow.
An ecosystem can be of any size — a log, pond, field, forest or the Earth's biosphere — but it always functions as a whole unit.

engineer: A person who applies scientific and mathematical principles to creative and practical ends such as the design, manufacture and operation of efficient and economical structures, machines, processes and systems.

environment: The surroundings in which an organism lives, including air, water, land, natural resources, flora, fauna, humans and their interrelationships. (Examples: Tundra, coniferous forest, deciduous forest, grassland prairie, mountains and rain forest.)

fungus: An example eukaryote decomposer. Fungi break down plant matter into nutrients that make soil healthier. Includes mushrooms, molds and mildews. Fungus (singular), fungi (plural).

- Biodomes Engineering Design Project: Lessons 2-6 - Students continue to engage in the engineering design process as they design and create model biodomes of a particular environment. In Part 6, they consider decomposers and their roles in the environment they are designing. They finish their biodome projects by adding decomposers into their manufactured environments, following the instructions provided in the Procedure section of this activity.

Today we learned about decomposers. What are decomposers? (Answer: Any living organism that breaks down other living and non-living things into smaller parts.) Who can name an example of a decomposer? (Possible answers: Earthworms, bacteria and fungi.) How do these decomposers help the environment? (Answer: Decomposers can help break down dead plants and animals into nutrients, creating food for living plants and animals.) Environmental engineers must learn about decomposers; they use decomposers to help them with bioremediation. Who remembers what bioremediation is? It is using decomposers to help fix an environment that has been damaged by contaminants or pollution. Now, let's think like engineers.
Could we use bioremediation to clean up an oil spill? Yes! How about some land next to a factory that dumped chemicals on the soil? Yes! These are both good places to use bioremediation. When might it be hard for engineers to use bioremediation? Well, maybe in a situation with substances, such as metals, that cannot be broken down by decomposers. Or, in situations when no decomposers are available to use.

Discussion Topic: Ask the class: What do worms and mushrooms have in common? Working in small groups, have students think of an answer and write it down, then share during a class discussion. After soliciting answers, explain that these questions will be answered during the lesson.

Idea Web: Ask students to brainstorm a list of pollutants. What effects do these pollutants have on our environment and us? Do they know of possible solutions for reducing these types of pollutants? Tell students that today we will be learning about another way to clean up pollutants in the environment.

Food Web Connection: Have students think about the role of worms and fungi in a food chain or food web. Do the decomposers link the food chain or web back towards the beginning (that is, provide food for seeds and plants)? Have them explain how worms and fungi contribute to keeping the environment healthy.

Can You Find the Decomposers?: Show the students pictures or make a list on the board of animals and plants, some of which are decomposers. Ask the students to identify which animals and plants they think help to break things down, or decompose. Possible examples: Cats, snakes, turtles (animals); maple tree, rose, tomato (plants); earthworms, mushrooms and mold (decomposers).

Lesson Summary Assessment

Bioremediation Engineers: To help clean up an oil spill at an airport, engineers could use bioremediation, which is the use of decomposers to help clean up an environment that has been damaged by contaminants or pollution.
Ask the students to think of another pollution scenario in which engineers could use bioremediation. Have them write or draw how an engineer would use decomposers (and which ones) in that scenario to return the environment to a healthier condition. Decomposition Send-a-Problem: Have students write their own questions about decomposers. Each student on a team creates a flashcard with a question on one side and the answer on the other. If the team cannot agree on an answer they should consult the teacher. Pass the flashcards to the next team. Each member of the team reads a flashcard and everyone attempts to answer it. If they are right, they pass the card on to another team. If they feel they have another correct answer, they can write it on the back of the flashcard as an alternative answer. Once all teams have tested themselves on all the flashcards, clarify any questions. (Example questions: True or false, worms are decomposers. True or false, decomposition is when plants produce fruit.) Post-Unit Quiz: If you administered the Pre-Unit Quiz before beginning the Biodomes curricular unit, conclude the overall pre/post assessment of the unit (six lessons, with associated activities), by administering the Post-Unit Quiz to the class after concluding this lesson and activity. Compare pre- to post- scores to gauge the impact of the curricular unit on students' learning. Engineered to Decompose: Some engineers design new products so they decompose on purpose. Why? Have students research and report on examples. (Possible ideas: Dissolvable stitches, decomposable packing material, temporary tear duct plugs, decomposable plastic grocery bags, biodegradable picnic plates and flatware, potting containers.) What types of materials were used to make the items? (Usually natural materials, such as corn starch, collagen, silk, sugar cane fiber, etc.) Why might these items have different decomposition rates from each other? 
Thinking like an engineer, brainstorm to come up with other ideas for human-made items that would be helpful if they decomposed.

Lesson Extension Activities

Ask each group to make a poster about decomposition. The groups can choose to focus on natural or human-made situations. Make the poster colorful with at least three key words labeled on the diagram. If students are interested, have them investigate recycling or bioremediation and ask them to find real-world examples to share with the class. Ask students to find out about composting. What is it? How does composting fit into the energy and nutrient cycles? How can composting help reduce our household waste that goes into a landfill and improve our garden soil?

Dictionary.com. Lexico Publishing Group, LLC. Accessed November 6, 2006. (Source of some vocabulary definitions, with some adaptation.) http://www.dictionary.com

O'Neil, Dennis. Classification of Living Things: Linnaean Classification of Kingdoms. Last updated March 8, 2005. Dr. Dennis O'Neil, Behavioral Sciences Department, Palomar College, San Marcos, CA. Accessed October 11, 2006. http://anthro.palomar.edu/animal/table_kingdoms.htm

Padilla, Michael J. Science Explorer: From Bacteria to Plants. Upper Saddle River, NJ: Prentice Hall, 2002.

What are dissolvable stitches? How Stuff Works, Media Network. Accessed November 29, 2006. http://health.howstuffworks.com/question611.htm

Contributors: Katherine Beggs; Malinda Schaefer Zarske; Denise W. Carlson

Copyright © 2005 by Regents of the University of Colorado.

Supporting Program: Integrated Teaching and Learning Program, College of Engineering, University of Colorado Boulder

The contents of this digital library curriculum were developed under a grant from the Fund for the Improvement of Postsecondary Education (FIPSE), U.S. Department of Education and National Science Foundation GK-12 grant no. 0338326.
However, these contents do not necessarily represent the policies of the Department of Education or National Science Foundation, and you should not assume endorsement by the federal government. Last modified: March 20, 2018
The main difference between endothermic reactions and exothermic reactions is that in endothermic reactions heat is absorbed, while in exothermic reactions heat is released.

| Basis of Comparison | Endothermic Reactions | Exothermic Reactions |
| --- | --- | --- |
| Introduction | A chemical reaction that absorbs energy as the products form | A chemical reaction that releases energy as the products form |
| Form of Energy | Heat | Heat, electricity, sound or light |
| Results | Energy is absorbed | Heat is released |
| Change in free energy | Small positive | Large negative |
| Energy of Products vs. Reactants | Products have more energy than reactants | Products have less energy than reactants |
| End Result | Increase in chemical potential energy | Decrease in chemical potential energy |
| Examples | Cooking an egg, photosynthesis, and evaporation | Fireplace, respiration, and combustion |

Endothermic reactions are those chemical reactions where energy is absorbed by the system from the surroundings, mostly in the form of heat. The concept is applied in the physical sciences, for example to chemical reactions where heat is converted to chemical bond energy. Common examples of endothermic reactions are cooking an egg, photosynthesis, and evaporation. This description accounts for the enthalpy change of a reaction only; the overall energy analysis of any reaction is the Gibbs free energy, which includes temperature and entropy in addition to the enthalpy. The point to note here is that endothermic reactions always absorb energy in the form of heat only. Moreover, the products have more energy than the reactants, so the end result of any endothermic reaction is an increase in chemical potential energy. An endothermic reaction always needs more energy to break the existing bonds in the reactants than is released when the new bonds form in the products.
In a nutshell, in an endothermic reaction, less energy is released to the environment than the amount of energy absorbed to initiate and maintain the reaction.

An exothermic reaction is a chemical reaction that releases energy in the form of heat, light, sound or even electricity. It can be expressed as a reaction where the reactants yield products plus energy; overall, it adds energy to the surroundings. The energy needed to start the reaction is always less than the energy released. It is hard to measure directly the amount of energy released during the chemical process. However, the enthalpy change of a chemical reaction is easier to work with: it always equals the change in internal energy of the system plus the work required to change the volume of the system against constant ambient pressure. The concept of exothermic reactions is applied in the physical sciences to chemical reactions where chemical bond energy is converted into thermal energy. Together with endothermic reactions, it describes the two kinds of chemical reactions found in nature. In a nutshell, in the overall process, more energy is released to the environment than the amount of energy that was absorbed to initiate and maintain the reaction.

- Endothermic reactions absorb heat, while exothermic reactions give out heat.
- In endothermic reactions, the energy content of the reactants is always less than that of the products, while the reverse is true for exothermic reactions.
- The change in enthalpy (ΔH) is always positive for endothermic reactions and negative for exothermic reactions.
- Endothermic reactions involve a small positive change in free energy, while exothermic reactions involve a large negative change in free energy.
- All endergonic reactions are endothermic, while all exergonic reactions are exothermic.
- Common examples of endothermic reactions are cooking an egg, photosynthesis, and evaporation. Common examples of exothermic reactions are a fireplace, respiration, and combustion.
- Endothermic reactions result in an increase in chemical potential energy, while exothermic reactions result in a decrease in chemical potential energy.
- Exothermic reactions make their surroundings hotter, while endothermic reactions make their surroundings cooler.
- In endothermic reactions, energy is always absorbed in the form of heat, while in exothermic reactions, energy can be released in the form of heat, electricity, sound or light.
- In an endothermic reaction, less energy is released to the environment than the amount of energy absorbed to initiate and maintain the reaction. In an exothermic reaction, more energy is released to the environment than the amount of energy that was absorbed to initiate and maintain the reaction.
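The sign convention above (positive ΔH for endothermic, negative for exothermic) and the stated relation between enthalpy, internal energy, and pressure-volume work can be sketched in a few lines of Python. The function names and the numbers are illustrative assumptions, not measured data:

```python
def enthalpy_change(delta_u, pressure, delta_v):
    """Enthalpy change at constant pressure: dH = dU + p * dV,
    i.e. the change in internal energy plus the pressure-volume work."""
    return delta_u + pressure * delta_v

def classify(delta_h):
    """Positive dH means heat is absorbed (endothermic);
    negative dH means heat is released (exothermic)."""
    if delta_h > 0:
        return "endothermic"
    if delta_h < 0:
        return "exothermic"
    return "thermoneutral"

# Illustrative values only (arbitrary energy units):
print(classify(enthalpy_change(-80.0, 1.0, -5.0)))  # exothermic
print(classify(enthalpy_change(40.0, 1.0, 2.0)))    # endothermic
```

The classification depends only on the sign of ΔH, which is why the table lists "small positive" and "large negative" free-energy changes as typical, rather than specific magnitudes.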
Pitch accent is a linguistic term of convenience for a variety of restricted tone systems that use variations in pitch to give prominence to a syllable or mora within a word. The placement of this tone or the way it is realized can give different meanings to otherwise similar words. The term has been used to describe the Scandinavian languages, Serbian, Croatian, Ancient Greek, Japanese, some dialects of Korean, and Shanghainese. Pitch accent is often described as being intermediate between tone and stress, but it is not a concept that is required to describe any language, nor is there a coherent definition for pitch accent. Proto-Indo-European accent is usually reconstructed as a free pitch-accent system, preserved in Ancient Greek, Vedic, and Proto-Balto-Slavic. The Greek and Indic systems were lost: pitch developed into stress accent in Modern Greek, and accent was lost entirely from Indic by the time of the Prākrits. Balto-Slavic retained the Proto-Indo-European pitch accent, reworking it into the opposition of "acute" (rising) and "circumflex" (falling) tone, which, following a period of extensive accentual innovations, yielded a pitch-accent-based system that has been retained in modern-day Lithuanian and the West South Slavic languages (in most dialects). Some other modern IE languages have pitch accent systems, like Swedish and Norwegian, which derive theirs from a stress-based system inherited from Old Norse, and Panjabi, which developed tone distinctions that maintained lexical distinctions as consonants were conflated. Pitch accent is not a coherently defined term, but is used to describe a variety of systems that are on the simple side of tone (simpler than Yoruba or Mandarin) and on the complex side of stress (more complex than English or Spanish).
Firstly, while the primary indication of accent is pitch (tone), there is only one tonic syllable or mora in a word, or at least in simple words, the position of which determines the tonal pattern of the whole word. Pitch accent may also be restricted in distribution, being found for example only on one of the last two syllables. This is unlike the situation in typical tone languages, where the tone of each syllable is independent of the other syllables in the word. For example, comparing two-syllable words like [aba] in a pitch-accented language and in a tonal language, both of which make only a binary distinction, the tonal language has four possible patterns:
- low-low [àbà],
- high-high [ábá],
- high-low [ábà],
- low-high [àbá].
The pitch-accent language, on the other hand, has only three possibilities:
- accented on the first syllable, [ába],
- accented on the second syllable, [abá], or
- no accent [aba].
The combination *[ábá] does not occur. With longer words, the distinction becomes more apparent: eight distinct tonal trisyllables [ábábá, ábábà, ábàbá, àbábá, ábàbà, àbábà, àbàbá, àbàbà], vs. four distinct pitch-accented trisyllables [ábaba, abába, ababá, ababa]. Secondly, there may be more than one pitch possible for the tonic syllable. For example, for some languages the pitch may be either high or low. That is, if the stress is on the first syllable, it may be either [ába] or [àba] (or [ábaba] and [àbaba]). In stress-accent systems, on the other hand, there is no such variation: accented syllables are simply louder. (If there is secondary stress in a stress-accent language, as is sometimes claimed for English, there must always be a primary stress as well; such languages do not contrast [ˈaba] with primary stress only from [ˌaba] with secondary stress only.)
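The pattern counts above (four tonal patterns versus three pitch-accent patterns for a two-syllable word, eight versus four for trisyllables) can be verified with a short Python sketch; the function names are illustrative:

```python
from itertools import product

def tone_patterns(n_syllables):
    """In a binary tone language, each syllable independently takes
    H or L, giving 2**n possible patterns."""
    return ["".join(p) for p in product("HL", repeat=n_syllables)]

def pitch_accent_patterns(n_syllables):
    """In a simple pitch-accent language, at most one syllable is
    accented (or none), giving n + 1 possible patterns.
    'A' marks the accented syllable, 'a' an unaccented one."""
    patterns = []
    for accented in range(n_syllables):
        patterns.append("".join("A" if i == accented else "a"
                                for i in range(n_syllables)))
    patterns.append("a" * n_syllables)  # wholly unaccented word
    return patterns

print(len(tone_patterns(2)), len(pitch_accent_patterns(2)))  # 4 3
print(len(tone_patterns(3)), len(pitch_accent_patterns(3)))  # 8 4
```

The gap widens with word length: tonal inventories grow exponentially (2^n), while pitch-accent inventories grow only linearly (n + 1), which is the sense in which pitch accent is the "simple side" of tone.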
In addition, many lexical words may have no tonic syllable at all, whereas normally in stress-accent languages every lexical word must have a stressed syllable; also, whereas non-compound words may have more than one stress-accented syllable, as in English, words with multiple pitch accents are not normally found. In a wider and less common sense of the term, "pitch accent" is sometimes also used to describe intonation, such as methods of conveying surprise, changing a statement into a question, or expressing information flow (topic-focus, contrasting), using variations in pitch. A great number of languages use pitch in this way, including English as well as all other major European languages. They are often called intonation languages.

Norwegian and Swedish

- Main article: Swedish phonology#Stress and pitch

Most dialects differentiate between two kinds of accents. Often referred to as the acute and grave accents, they may also be referred to as accent 1 and accent 2, or tone 1 and tone 2. Hundreds of two-syllable word pairs are differentiated only by their use of either the grave or the acute accent. Accent 1 is, generally speaking, used for words whose second syllable is the definite article, and for words that were monosyllabic in Old Norse. (Although some dialects of Danish also use tonal word accents, in most Danish dialects the so-called stød serves the same purpose.) These are described as tonal word accents by Scandinavian linguists, because there is a set number of tone patterns for polysyllabic words (in this case, two) that is independent of the number of syllables in the word; in more prototypical pitch-accent languages, the number of possible tone patterns is not set but increases in proportion to the number of syllables. For example, in many East Norwegian dialects, the word "bønder" (farmers) is pronounced using tone 1, while "bønner" (beans or prayers) uses tone 2.
Though the difference in spelling occasionally allows the words to be distinguished in written language, in most cases the minimal pairs are written alike. A Swedish example would be the word "tomten," which means "Santa Claus" (or "the house gnome") when pronounced using tone 2, and means "the plot of land," "the yard," or "the garden" when pronounced using tone 1. Thus, the sentence "Är det tomten på tomten?" ("Is that Santa Claus out in the yard?") uses both pronunciations right next to each other. Although most dialects make this distinction, the actual realizations vary and are generally difficult for non-natives to distinguish. In some dialects of Swedish, including those spoken in Finland, this distinction is absent. There are significant variations in the realization of pitch accent between dialects. Thus, in most of western and northern Norway (the so-called high-pitch dialects) accent 1 is falling, while accent 2 is rising in the first syllable and falling in the second syllable or somewhere around the syllable boundary. The word accents give Norwegian and Swedish a "singing" quality that makes it fairly easy to distinguish them from other languages.

West South Slavic languages

The Late Proto-Slavic accentual system was based on the fundamental opposition of a short/long circumflex (falling) tone and an acute (rising) tone, the position of the ictus being free, as inherited from Proto-Balto-Slavic. Common Slavic accentual innovations significantly reworked the original system, primarily with respect to the position of the ictus (Dybo's law, Illič-Svityč's law, Meillet's law etc.), and further developments yielded some new accents—e.g. the so-called neoacute (Ivšić's law), or the new rising tone in Neoštokavian idioms (the so-called "Neoštokavian retraction").
As opposed to other Slavic dialect subgroups, West South Slavic idioms have largely retained the Proto-Slavic system of free and mobile tonal accent (including the dialect used as the basis for the codification of modern standard Slovene, as well as the Neoštokavian used as the basis of standard Croatian and Serbian), though the discrepancy between the codified norm and actually spoken speech may vary significantly.

Serbian and Croatian languages

The Neoštokavian idiom used as the basis of standard Croatian and Serbian distinguishes four types of pitch accents: short falling < ̏>, short rising <̀>, long falling < ̑> and long rising <´>. The accent is said to be relatively free, as it can be manifested on any syllable but the last one. The long accents are realized by a pitch change within the long vowel; the short ones are realized by the pitch difference from the subsequent syllable. Accent alternations are very frequent in inflectional paradigms, both by quality and by placement in the word (the so-called "mobile paradigms", which were present in PIE itself but became much more widespread in Proto-Balto-Slavic). Different inflected forms of the same lexeme can exhibit all four accents: lònac 'pot' (nominative sg.), lónca (genitive sg.), lȏnci (nominative pl.), lȍnācā (genitive pl.). Restrictions on the distribution of the accent depend, besides the position of the syllable, also on its quality, as not every kind of accent can be manifested on every syllable.
- Falling tone generally occurs in monosyllabic words or on the first syllable of a word (pȃs 'belt', rȏg 'horn'; bȁba 'old woman', lȃđa 'river ship'; kȕćica 'small house', Kȃrlovac). The only exception to this rule are interjections, i.e.
words uttered in a state of excitement (ahȁ, ohȏ).
- Rising tone generally occurs on any syllable of a word except the last, and not in monosyllabics (vòda 'water', lȗka 'harbour'; lìvada 'meadow', lúpānje 'slam'; siròta 'female orphan', počétak 'beginning'; crvotòčina 'wormhole', oslobođénje 'liberation').

Thus, monosyllabics generally have falling tone, whilst polysyllabics generally have falling or rising tone on the first syllable, and rising on all the other syllables but the last one. The tonal opposition rising ~ falling is hence generally only possible in the first accented syllable of polysyllabic words, while the opposition by length, long ~ short, is possible even in the non-accented syllable as well as in the post-accented syllable (but not in the pre-accented position). Proclitics (clitics which latch on to a following word), on the other hand, may "steal" a falling tone (but not a rising tone) from the following mono- or disyllabic word. This stolen accent is always short, and may end up being either falling or rising on the proclitic. This phenomenon (accent shift to the proclitic) is most frequent in the spoken idioms of Bosnia; in Serbian it is more limited (normally with the negation proclitic ne), and it is almost absent from Croatian Neoštokavian idioms. A short rising accent resists such a shift better than the falling one (as seen in the example /ʒěliːm/ → /ne‿ʒěliːm/).

| | in isolation | gloss | with proclitic | gloss |
| --- | --- | --- | --- | --- |
| rising | /ʒěliːm/ | I want | /ne‿ʒěliːm/ | I don't want |
| | /zǐːma/ | winter | /u‿zîːmu/ or /û‿ziːmu/ | in the winter |
| | /nemɔgǔːtɕnɔst/ | inability | /u‿nemɔgǔːtɕnɔsti/ | not being able to |
| falling | /vîdiːm/ | I see | /ně‿vidiːm/ | I can't see |
| | /grâːd/ | city | /u‿grâːd/ or /û‿graːd/ | to the city (stays falling) |
| | /ʃûma/ | forest | /u‿ʃûmi/ or /ǔ‿ʃumi/ | in the forest (becomes rising) |

In Slovenian, there are two concurrent standard accentual systems: the older, tonal system, with three "pitch accents", and the younger, dynamic (i.e. stress-based) system, with distinctive length only.
The stress-based system was introduced because two thirds of Slovenia no longer has tonal accent. In practice, however, even the stress-based accentual system is just an abstract ideal, and speakers generally retain their own organic idiom even when trying to speak standard Slovenian (e.g. speakers of the urban idioms in the west of Slovenia, which lack distinctive length, do not introduce that kind of quantitative opposition when speaking the standard language). The older accentual system, as noted, is tonal by quality and free (jágoda 'strawberry', malína 'raspberry', gospodár 'master, lord'). There are three kinds of accents: short falling <̀>, long falling < ̑> and long rising <´>. Non-final syllables always have long accents ( ̑ or ´), e.g. rakîta 'crustacea', tetíva 'sinew'. A short falling accent can come only on the final (or the only, as is the case in monosyllabics) syllable, e.g. bràt 'brother'. It is only there that a three-way opposition among accents is present: deskà 'board' : blagọ̑ 'goods, ware' : gospá 'lady'. The accent can be mobile throughout the inflectional paradigm: dȃr — darȃ, góra — gorẹ́ — goràm, bràt — bráta — o brȃtu, kráva — krȃv, vóda — vodọ̑ — na vọ̑do. A distinction is made between open -e- and -o- (either long or short) and closed -ẹ- and -ọ- (always long).

Japanese

- Main article: Japanese pitch accent

Japanese is often described as having pitch accent. However, it is found in only about 20% of Japanese words; 80% are unaccented. This "accent" may be characterized as a downstep rather than as pitch accent. The pitch of a word rises until it reaches a downstep, then drops abruptly. In a two-syllable word, this results in a contrast between high-low and low-high; accentless words are also low-high, but the pitch of following enclitics differentiates them.

[Example table: words contrasting accent on the first mora, accent on the second mora, and accentless words]

Korean

Standard Seoul Korean uses only pitch for prosodic purposes.
However, several dialects outside Seoul retain a Middle Korean pitch accent system. In the dialect of North Gyeongsang, in southeastern South Korea, any one syllable may have pitch accent in the form of a high tone, as may the initial two syllables. For example, in trisyllabic words, there are four possible tone patterns.

Shanghainese

The Shanghai dialect of Wu Chinese is marginally tonal, with characteristics of pitch accent. Not counting closed syllables (those with a final glottal stop), a Shanghainese word of one syllable may carry one of three tones: high, mid, or low. (These tones have a contour in isolation, but for our purposes that can be ignored.) However, low always occurs after voiced consonants, and only there. Thus the only tonal distinction is after voiceless consonants and in vowel-initial syllables, and then there is only a two-way distinction between high and mid. In a polysyllabic word, the tone of the first syllable determines the tone of the entire word. If the first tone is high, following syllables are mid; if mid or low, the second syllable is high, and any following syllables are mid. Thus a mark for high tone is all that is needed to write tone in Shanghainese:

| | romanization | characters | tone pattern | gloss |
| --- | --- | --- | --- | --- |
| No voiced initial (mid tone) | aodaliya | 澳大利亚 | mid-high-mid-mid | Australia |
| No voiced initial (high tone) | kónkonchitso | 公共汽車 | high-mid-mid-mid | bus |

Autosegmental-metrical theory

"Pitch accent" is a term used in autosegmental-metrical theory for local intonational features that are associated with particular syllables. Within this framework, pitch accents are distinguished from both the abstract metrical stress and the acoustic stress of a syllable. Different languages specify different relationships between pitch accent and stress placement. Languages vary in terms of whether pitch accents must be associated with syllables that are perceived as prominent or stressed.
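The Shanghainese tone-spreading rule described above is fully deterministic, so it can be sketched as a small Python function; the function name and tone labels are illustrative, following the prose:

```python
def shanghainese_word_tones(first_tone, n_syllables):
    """Sketch of the tone-spreading rule for open syllables:
    the tone of the first syllable determines the whole word.
    first_tone is one of 'high', 'mid', 'low'."""
    if n_syllables == 1:
        return [first_tone]
    if first_tone == "high":
        # High first syllable: every following syllable is mid.
        return ["high"] + ["mid"] * (n_syllables - 1)
    # Mid or low first syllable: second syllable is high, rest mid.
    return [first_tone, "high"] + ["mid"] * (n_syllables - 2)

print(shanghainese_word_tones("mid", 4))   # ['mid', 'high', 'mid', 'mid']
print(shanghainese_word_tones("high", 4))  # ['high', 'mid', 'mid', 'mid']
```

The two printed patterns match the cited examples (mid-high-mid-mid for "aodaliya", high-mid-mid-mid for "kónkonchitso"), which is why a single high-tone mark on the first syllable suffices to write tone in Shanghainese.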
For example, in French and Indonesian, pitch accents may be associated with syllables that are not acoustically stressed, while in English and Swedish, syllables that receive pitch accents are also stressed. Languages also vary in terms of whether pitch accents are assigned lexically or post-lexically. Lexical pitch accents are associated with particular syllables within words in the lexicon, and can serve to distinguish between segmentally similar words. Post-lexical pitch accents are assigned to words in phrases according to their context in the sentence and conversation. In a word with a lexical pitch accent, the pitch accent is associated with the syllable marked as metrically strong in the lexicon. Post-lexical pitch accents do not change the identity of the word, but rather how the word fits into the conversation. The stress/no-stress distinction and the lexical/post-lexical distinction create a typology of languages with regards to their use of pitch accents. Languages that use lexical pitch accents are described as pitch accent languages, in contrast to tone/tonal languages like Mandarin Chinese and Yoruba. Pitch accent languages differ from tone languages in that pitch accents are only assigned to one syllable in a word, whereas tones can be assigned to multiple syllables in a word. Pitch accents consist of a high (H) or low (L) pitch target or a combination of H and L targets. H and L indicate relative highs and lows in the intonation contour, and their actual phonetic realization is conditioned by a number of factors, such as pitch range and preceding pitch accents in the phrase. In languages in which pitch accents are associated with stressed syllables, one target within each pitch accent may be designated with a *, indicating that this target is aligned with the stressed syllable. For example, in the L*+H pitch accent the L target is aligned with the stressed syllable, and it is followed by a trailing H target.
This model of pitch accent structure differs from that of the British School, which described pitch accents in terms of 'configurations' like rising or falling tones. It also differs from the American Structuralists' system, in which pitch accents were made up of some combination of low, mid, high, and overhigh tones. Evidence favoring the two-level system over other systems includes data from African tone languages and Swedish. One-syllable words in Efik (an African tone language) can have high, low, or rising tones, which would lead us to expect nine possible tone combinations for two-syllable words. However, we only find H-H, L-L, and L-H tone combinations in two-syllable words. This finding makes sense if we consider the rising tone to consist of an L tone followed by an H tone, making it possible to describe one- and two-syllable words using the same set of tones. Bruce also found that alignment of the peak of a Swedish pitch accent, rather than the alignment of a rise or fall, reliably distinguished between the two pitch accent types in Swedish. Systems with several target levels often over-predict the number of possible combinations of pitch targets.

Within autosegmental-metrical theory, pitch accents are combined with edge tones, which mark the beginnings and/or ends of prosodic phrases, to determine the intonational contour of a phrase. The need for pitch accents to be distinguished from edge tones can be seen in contours (1) and (2), in which the same intonational events - an H* pitch accent followed by an L- phrase accent and an H% boundary tone - are applied to phrases of different lengths. Note that in both cases, the pitch accent remains linked to the stressed syllable and the edge tone remains at the end of the phrase. Just as the same contour can apply to different phrases (e.g. (1) and (2)), different contours can apply to the same phrase, as in (2) and (3). In (3) the H* pitch accent is replaced with an L* pitch accent.
Nuclear and prenuclear pitch accents

Pitch accents can be divided into nuclear and prenuclear pitch accents. The nuclear pitch accent is defined as the head of a prosodic phrase. It is the most important accent in the phrase and perceived as the most prominent. In English it is the last pitch accent in a prosodic phrase. If there is only one pitch accent in a phrase, it is automatically the nuclear pitch accent. Nuclear pitch accents are phonetically distinct from prenuclear pitch accents, but these differences are predictable.

Pitch accents in English serve as a cue to prominence, along with duration, intensity, and spectral composition. Pitch accents are made up of a high (H) or low (L) pitch target or a combination of an H and an L target. The pitch accents of English used in the ToBI prosodic transcription system are: H*, L*, L*+H, L+H*, and H+!H*. Most theories of prosodic meaning in English claim that pitch accent placement is tied to the focus, or most important part, of the phrase. Some theories of prosodic marking of focus are only concerned with nuclear pitch accents.

- Larry Hyman, "Word-Prosodic Typology", Phonology (2006), 23: 225-257, Cambridge University Press
- The term free here refers to the position of the accent—its position was unpredictable by phonological rules, i.e. it could stand on any syllable of a word, regardless of its structure. This is opposed to fixed or bounded accent whose position is determined by factors such as the syllable quantity and/or position, e.g. in Latin where it's on the penultimate syllable if it's "heavy", antepenultimate otherwise.
- Fortson IV (2004:62) "From the available comparative evidence, it is standardly agreed that PIE was a pitch-accent language. There are numerous indications that the accented syllable was higher in pitch than the surrounding syllables.
Among the IE daughters, a pitch-accent system is found in Vedic Sanskrit, Ancient Greek, the Baltic languages and some South Slavic languages, although none of these preserves the original system intact."
- Proto-Germanic had fixed accent on the first syllable of a phonetic word, a state of affairs preserved in the oldest attested Germanic languages like Gothic, Old English and Old Norse. Free PIE accent was lost in Germanic rather late, after the operation of Verner's law.
- E.g. the accentual system of the spoken idiom of the Croatian capital Zagreb is stress-based and does not make use of distinctive vowel lengths.
- Lexical, Pragmatic, and Positional Effects on Prosody in Two Dialects of Croatian and Serbian, Rajka Smiljanic, Routledge, ISBN 0-415-97117-9
- A Handbook of Bosnian, Serbian and Croatian, Wayles Brown and Theresa Alt, SEELRC 2004
- Pierrehumbert, Janet; Beckman, Mary (1988), Japanese Tone Structure, MIT Press: Cambridge, MA
- The Prosodic Structure and Pitch Accent of Northern Kyungsang Korean, Jun et al., JEAL 2005 [ling.snu.ac.kr/jun/work/JEAL_final.pdf]
- Beckman, Mary (1986), Stress and Non-stress Accent, Foris Publications: Dordrecht
- Ladd, D. Robert (1996), Intonational Phonology, Cambridge University Press: Cambridge, UK
- Bolinger, Dwight (1951), Intonation: levels versus configurations, Word 7, p. 199-210
- Pike, Kenneth L.
(1945), The Intonation of American English, University of Michigan Press: Ann Arbor
- Bruce, Gösta (1977), Swedish word accents in sentence perspective, Developing the Swedish intonation model, Working Papers, Department of Linguistics and Phonetics, University of Lund
- Silverman, Kim; Pierrehumbert, Janet (1990), The timing of prenuclear high accents in English, Kingston and Beckman
- Hirschberg, Julia; Beckman, Mary (1994), ToBI Annotation Conventions, http://www.ling.ohio-state.edu/~tobi/ame_tobi/annotation_conventions.html
- This usage of the term 'pitch accent' was proposed by Bolinger (1958), taken up by Pierrehumbert (1980), and described in Ladd (1996).
- Bolinger, Dwight, "A theory of pitch accent in English", Word 14: 109-49, 1958
- Ladd, Robert D. (1996), Intonational Phonology, Cambridge University Press
- Pierrehumbert, Janet (1980). "The phonology and phonetics of English intonation" (PDF). PhD thesis, MIT, Published 1988 by IULC. Retrieved on 2008-01-14.
- Fortson IV, Benjamin W. (2004), Indo-European Language and Culture, Blackwell Publishing, ISBN 1-4051-0316-7

This page uses Creative Commons Licensed content from Wikipedia.
There are many scientific studies that indicate the reality and significance of climate change. One of the ways to reduce the drain on resources currently being experienced by the planet is to shift to systems that utilise renewable energy, such as solar power. By making this energy shift, the environment will have a chance to begin to heal. When used properly, solar power and other forms of renewable energy will be enough to eliminate oil, gas, and coal consumption before the year 2050. But that will only happen if changes are made now, instead of waiting.

Climate Change and the Australian Environment

Climate change is having massive and significant effects on the Australian environment. The bushfire season of late 2019 and early 2020 is evidence of this, but it is not the only problem being seen. The year 2019 was the hottest and driest year on record for all of Australia, and the trends show that these kinds of problems are likely to keep getting worse. Rather than allow that to happen any longer, there are changes that can be made to stop the decline in the environment. Reducing pollution is a big part of the solution, and that can be done by using renewable energy sources — most notably solar energy.

Millions of animals are dying in bushfires, and there have been massive numbers of homes and businesses destroyed, as well as human injuries and fatalities, too. It’s time to do something more, and protect the planet in ways that might not have been considered as seriously in the past. With a Green New Deal for Australia and other countries, the opportunity exists to move many of the world’s developed countries over to solar and other renewable energy sources before climate change goes so far that reversing it is simply not possible. Much needs to be done quickly, to cool Australia down and protect its climate.
The Economic Impact of Climate Change

Climate change and ecological disasters have had billions of dollars of impact on the economy of Australia. While it will cost money to move everything over to renewable energy, it will cost much more in the long run if the continent fails to do so. A Renewable Energy Agency has been established and the Solar Towns Programme has also been created, in an effort to move more people toward solar energy. By understanding why this is so important and educating the public on the value of renewable energy, billions of dollars can be invested in the right things and saved over time due to a reduced need for fossil fuels.

Millions of dollars have already been set aside for solar communities, allowing Australians who want to engage with others on topics like solar energy and climate change the opportunity to do so. By reaching out to those who already have solar power at their homes — along with those who are considering it — the opportunity to protect the environment and reverse climate change grows stronger. Right now, solar energy options are being installed at a rate that will allow Australia to meet its target of 50 percent renewable electricity by the year 2024. That will make a difference, but more can still be done to help.

Australia’s Renewable Energy Options

There are several popular renewable energy options for Australia, which include solar farms, wind farms, and rooftop solar panels that are used for households and communities. All three of these can be excellent choices, and all three can provide Australia with the help and hope it needs to improve its climate health and protect its people, animals, and natural resources. Wind farms can be put to excellent use over time, but it is the solar farms and rooftop solar panels that provide the continent with the most hope. Solar panels for individuals and businesses are ready now, and can be installed quickly in large numbers.
By putting solar energy to good use, and by getting more people involved in working to solve climate change, Australia has the opportunity to be a world leader in protecting the environment and saving the planet. The continent can reduce its costs, stop more damage from occurring for its people and animals, and focus on ways to help the entire planet breathe a little bit easier. Solar power and other renewable energy sources are the answer to climate change, and that answer must be implemented now.
Our Pit Stop activities are a KS3 mathematics resource originally developed with teachers and curriculum specialists as part of the cre8ate maths project. They are designed to motivate mathematics students to investigate the performance of the solar car kits through a series of practical data-gathering challenges. These materials can also be used as the basis of activities for KS2 and KS4 students.

The activities are all about collecting real data and then analysing it. Within the How fast…? and Hill climb activities there is scope to discuss the different ways in which a gradient can be displayed, i.e. ratio, percentage, decimal, angle. Time and distance data is collected to calculate the car’s speed on the various surfaces, and the results are used throughout the other activities. Distance-time graphs motivate interesting discussion of the cars’ racing performance over different race tracks in the Race your car activity. Bringing it all together, the Formula 8 race meeting involves estimation, collecting race times and determining the finishing positions.

We recommend using solar cars in natural daylight, as solar cells are optimised to work in this quality of light. Using a solar car under artificial lighting conditions will compromise performance. Before racing, pupils will need time to construct and fine-tune their model cars. See our resources on building the solar car for help with this.

Downloadable Pit Stop materials

- Pit Stop Teacher Notes: Read me first guide to curriculum focused activities and practical implications
- Pit Stop Activity Sheets

Both the Solar Detective and Pit Stop materials are built around an approach with the Solar Car kit at its heart. The relevant and contemporary context around sustainability issues supports an approach to learning that uses our Solar Car kits to stimulate students’ interest.
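The gradient conversions and speed calculation mentioned above can be checked with a short sketch (illustrative values and function names only; not part of the activity materials):

```python
import math

def gradient_forms(rise, run):
    """Express one slope in the four ways discussed in Hill climb:
    ratio, decimal, percentage and angle in degrees."""
    decimal = rise / run
    return {
        "ratio": f"{rise}:{run}",
        "decimal": decimal,
        "percentage": decimal * 100,
        "angle_deg": math.degrees(math.atan(decimal)),
    }

def average_speed(distance_m, time_s):
    """Speed from the distance and time data collected at the track."""
    return distance_m / time_s

g = gradient_forms(1, 10)           # a 1:10 ramp
print(g["percentage"])              # 10.0 (percent)
print(round(g["angle_deg"], 1))     # 5.7 (degrees)
print(average_speed(5.0, 4.0))      # 1.25 (metres per second)
```

Pupils often assume a 1:10 ramp means a 10° angle; the `atan` conversion shows why the angle is smaller.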
Our Kits act as fun, motivational and memorable activities that involve real-life applications of scientific and mathematical concepts, whilst providing opportunities to use the inventive, creative, problem-solving and solution-focused skills crucial to both the wider curriculum and employability. In addition, they nurture inventive thinking and make science, technology and mathematics enjoyable to learn and teach. Kits are available from our online shop.

In doing this activity students have the opportunity to develop a range of skills from the Skills Builder approach, including:

- Listening: The receiving, retaining and processing of information or ideas
- Speaking: The oral transmission of information or ideas
- Problem Solving: The ability to find a solution to a situation or challenge
- Creativity: The use of imagination and the generation of new ideas
- Staying Positive: The ability to use tactics and strategies to overcome setbacks and achieve goals
- Aiming High: The ability to set clear, tangible goals and devise a robust route to achieving them
- Leadership: Supporting, encouraging and developing others to achieve a shared goal
- Teamwork: Working cooperatively with others towards achieving a shared goal

This activity focuses in particular on Speaking, Problem Solving and Teamwork. You can find out more about Skills Builder from
Say Play Read Updated: Jan 18 When we learn to read by playing with sounds first, the skills develop naturally. Using mirrors and sound play, we allow students to discover for themselves how their speech sounds relate to written words. When you are led to discover things for yourself, you create a stronger memory than if someone simply tells you "make the letter P sound with your lips and a swift gust of air from your mouth." Most children enjoy watching themselves in a mirror. Why not use that engaging activity to increase their knowledge of how the sounds they make become words on a page? Playing with the sounds in both real and "alien" or nonsense words creates strong neural associations that help increase the speed and accuracy of reading and spelling.
Cybercrime is commonly defined as any unlawful action against the integrity of a specific computer system, or any unlawful action perpetrated using a computing device. This definition is based on the use of computing resources: it applies whether the computer is used by the criminal to perpetrate a conventional offence (scam, threat, etc.) or the computer is the target of the criminal (theft, fraudulent use or destruction of data, etc.).

This type of attack uses technologies associated with information and communication networks as a medium. Generally, the goal is to take advantage of the credulity of users to acquire confidential information from them and then use it unlawfully. There are all sorts of conventional offences and their number is constantly increasing. The classic examples are as follows:

These are ‘traditional’ crimes and offences transposed to digital information and communication networks. These attacks are essentially motivated by greed (the search for any type of gain, financial or material) or immoral, unhealthy and improper behaviours (such as paedophilia, prostitution rings, racism, revisionism, etc.).

This type of attack has changed significantly since its advent; it essentially exploits the many vulnerabilities of computer resources. The most common attacks are as follows:

A technological attack can be based on one or a combination of several of the following motives: it either targets the confidentiality, integrity or availability of a computer system (or a combination of all three). To deploy malware, the hacker typically focuses on one of the following alternatives:

Opportunistic attacks are attacks not directly targeting particular people or organisations; the goal is to cause as many casualties as possible, whoever they may be. Most people and organisations are vulnerable to this threat.
Here are some common steps for this type of attack: Malware is a tool that gives the attacker absolute control over the computer of his/her victims. It is, therefore, the cornerstone of many opportunistic attacks. Reaching a large number of victims requires good distribution. Whether for a scam or to infect computers, a wide audience must be reached. Sending emails or SPAM on social networks can be a very good method. A web presence is important not only for legitimate organisations, but also for cybercriminals: creation of phishing sites, advertisements, scams, pages containing an exploit that will infect the computers of Internet users…

Targeted attacks can be very difficult to counteract. It all depends on the energy and time deployed by the criminal group. In general, a well-organised, targeted attack is likely to succeed when the attacker focuses exclusively on the victim. These attacks can take place in different stages. Below, you will find some important steps involved in this type of attack.

Before attacking a particular target, the hacker generally assesses any information that might help him/her map the targeted organisation or individual (snapshot). A list of telephone numbers or emails posted on the Internet can be the key to attacking an organisation. Sometimes hackers test the target systems to see if they are active and determine if there are any vulnerabilities. This can trigger alarms and often does not give convincing results; it is therefore reserved for certain specific fields of application only.

Often, attacking computer systems directly is impossible because they are highly protected. In the case of social engineering, rather than using a technical flaw of the system, the perpetrator will exploit the credulity of a human being. The perpetrator will, for example, pretend to be someone else related to the user in order to gain access to information such as a password.
This scenario has become common practice; hackers often use psychological pressure on an individual or invoke urgency to quickly obtain the desired information. Often the perpetrator will attempt an attack via a booby-trapped email containing a ‘Trojan horse’ hidden inside a program which, once activated by the user, may allow the perpetrator to take remote control of the victim’s computer.
A polychord consists of two or more chords played together; the chords may originate from the same or from different tonalities — usually the latter. Generally, the chords used in a polychord structure should be perceivable independently, although this is not mandatory. Nevertheless, when the chord pitches are mixed, the overall chord structure will be perceived as one complex unit:

If you wish to represent polychord structures, they are usually separated by a vertical bar. For instance, if you have the C and Ab major chords played as a polychord, they would be written as C|Ab. In this particular case, polytonality or bitonality is implied, since we have chords from different tonalities, as in the example above with Em and Eb.

Normally, as with clusters, if you want the individual chord formations to be heard distinctly, they should be properly spaced, since depending on the way the involved chord notes are arranged, more or less tension can be created. When polychords are built from chords in the same tonality, they can in fact be considered chord extensions. As an example, if you have a C and a G major chord from the C major tonality, you can represent the combination either as C|G or as Cmaj9.

As for the resonance of polychords, it is advisable to follow the logic of how the overtones are produced — from wider to closer interval relationships. So, in order to achieve a better overall resonance, one chord would be in open position voicing at the bottom and the other chord in close position voicing at the top:

As mentioned, the perceived quality of the dissonance and consonance of polychords, as with any other chord formation, has to do with where dissonant or consonant intervals are placed. The use of consonant or dissonant intervals in the outer voices of a chord structure tends to have a deeper impact on the way we perceive the chord as a whole:

The ability to regulate these tensions should be exercised according to the desired effect.
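The C|G = Cmaj9 equivalence mentioned above can be verified by merging the two triads' notes (a sketch using note names only; the dictionary and function names are my own, and only the two chords from the text are included):

```python
# Major triads spelled as note names (illustrative subset).
TRIADS = {
    "C": ["C", "E", "G"],
    "G": ["G", "B", "D"],
}

def polychord_notes(lower, upper):
    """Merge the notes of two chords, lower chord first,
    keeping only the first occurrence of each shared note."""
    merged = []
    for note in TRIADS[lower] + TRIADS[upper]:
        if note not in merged:
            merged.append(note)
    return merged

print(polychord_notes("C", "G"))  # ['C', 'E', 'G', 'B', 'D'] -- the spelling of Cmaj9
```

The shared G appears only once, which is why the two triads collapse into a single extended chord when they come from the same tonality.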
As with clusters, some of the consonant or dissonant quality may be enhanced or dispelled according to how closely you place the two chords; by the way you voice the chord structure and its register; by the way the involved chord notes are presented; or by arpeggiating one or both chords:

Polychord structure and voicing are maintained, but presented in three different ways.

Any type of chord can be used to build a polychord, but three-note chords are the most common. However, the choice of chord quality and extensions is entirely up to the composer and the harmonic and melodic impetus. Until now, we have been looking at polychords with two triadic units, but more chords may be used in the construction of a polychord:

Again, the way you use polychord structures will depend on the effect and sensibility you are going after. In terms of tonality and harmonic implications, the more chords are involved, the more complex and ambiguous the musical passage.
Type-2 (adult-onset) diabetes and other diseases related to the obesity epidemic depend on how the body stores excess energy, according to evolutionary biologist Mary Jane West-Eberhard, emeritus scientist at the Smithsonian Tropical Research Institute. In the Proceedings of the National Academy of Sciences, she describes her theory about fat inside the abdominal cavity — visceral adipose tissue or VAT — the “VAT prioritization hypothesis.” More than 300 million people are affected by obesity-associated diabetes. Heart disease is a major killer. Both involve chronic inflammation. “Pathogenic obesity is an advantageous process gone awry,” said West-Eberhard. “Very early in life the body makes decisions about where to store fat. It makes sense for poorly nourished fetuses to invest in VAT rather than in fat under the skin because VAT evolved to protect us from infections, but this choice sets us up for disaster if we have access to too many calories later in life.” Researchers study obesity from different perspectives, but West-Eberhard took a broader look to ask how the body makes decisions about where to deposit fat and why. “Trying to understand diseases related to obesity without understanding the abdominal structures that become obese is like trying to understand circulatory diseases without knowing the functions of the heart,” West-Eberhard said. Visceral fat is nature’s super band-aid. Sometimes called “the abdominal policeman,” a VAT-rich structure called the omentum, a loosely hanging fold of the membrane lining the abdominal cavity, sticks to wounds, foreign objects such as shrapnel and infection sites like a bandage full of antibiotics. In fact, surgeons sometimes use pieces of omentum to control severe postoperative infections. VAT surrounds the small intestine, defending the body from ingested pathogens and toxins. 
“The fact that visceral fat tissue evolved to fight visceral infections provides a causal hypothesis for how high fructose sweeteners and saturated fats contribute to chronic diseases such as type 2 diabetes,” West-Eberhard said. “They influence which bacteria grow inside the intestines [called the microbiome], making the intestinal walls more permeable and releasing more toxins into the bloodstream, stimulating the visceral immune system and potentially leading to chronic inflammatory disease.” In the past, the role of visceral fat as part of the immune system may have been more widely important than it is today because starvation and infections were more common. West-Eberhard proposes that in fetuses subject to nutritional stress, more energy may be stored as fat around the abdominal organs rather than as fat under the skin (subcutaneous fat or SAT). She notes that childhood catch-up growth, a better predictor of obesity-associated disease than low birth weight, may be a sign of the mistake the body has made as it assigns energy to VAT producing the apple shape of abdominal obesity, rather than the pear shape of lower body fat distributed in the hips, buttocks or thighs or more evenly under the skin. In overweight individuals, a dangerous feedback loop may develop: increased VAT leads to increased chronic inflammation, which, in turn, leads to increased insulin resistance leading to further VAT storage and increased susceptibility to disease. Eventually, the ability to produce insulin is reduced and these individuals may need injected insulin to control type-2 (adult onset) diabetes. “I think the combination of malnutrition early in life coupled with modern diets of saturated and trans-fats and high-fructose foods available on a global scale is leading to a situation that is toxic for individuals in many different cultures.” West-Eberhard said. “People’s body shape — apple versus pear — is based on the way their bodies allocate fat. 
Even in ancient societies, poor nutrition leading to investment in VAT contributed to apple-shaped bodies, versus more ‘beautiful,’ voluptuous, pear-shaped bodies associated with SAT fat storage by better-nourished babies. Social upheaval (war, conquest and disease) would have favored flexibility in fat allocation because social rank and food availability would occasionally have changed.” In the future, she hopes to see more research revealing fetal cues that turn on VAT storage, the development of the visceral immune system, the role of the omentum, disease-resistance in obese individuals and the capabilities of people of different geographic and ethnic origins to allocate fat differently.
Summative assessments evaluate student learning against a benchmark.

Tests and Quizzes

Many of us were educated on a model of testing that involved one or two tests each semester. Typically, students took a midterm and a final, with an occasional third exam thrown in for good measure. Research shows that tests given more frequently, rather than once or twice a semester, can enhance student learning, as can giving students a list of the learning/course outcomes they will be assessed on to help them prepare for the exam. Allowing them to review previous homework, quizzes, or exams, and labeling which learning outcomes relate to the assessment, can help them identify challenges and focus their efforts on the relevant information to be successful. These resources will guide you through some best practices for creating tests and quizzes that demonstrate learning without testing your patience.

- Exams and Quizzes Best Practices
- Designing the Essentials: Outcomes, Assessment and the Syllabus
- Designing Test Questions
- Best Practices for Designing and Grading Exams
- 10 Tips to Refine Your Course Assessments
- Designing Effective Assessments
- A Short Guide to Writing Effective Test Questions
- How Do I Create Tests for My Students?
- Berkeley Center for Teaching and Learning on Final Exams
- Tips to Help Students Manage Schedules and Stress During Finals
- Creating Multiple Choice Tests
- 14 Rules for Writing Multiple Choice Questions
- Writing Multiple Choice Questions that Demand Critical Thinking
- Classroom Assessment Techniques: A Handbook for College Teachers (available in the UTSA library)
- Easily Create and Manage Online Assessment with Respondus®
- Tests That Grade Themselves
- Using Extra Credit Questions as a Motivational Tool
- Helping Students Memorize, Tips from Cognitive Science
- Learning that Lasts: Helping Students Remember and Use What You Teach
- Writing Effective Essay Questions

Presentation skills are highly sought after in the workplace. They also add another tool for assessing student learning. Whether you are assigning a 3-minute report or a 30-minute group presentation, it’s important to have the right tools for assignments, instruction and assessment. Click here to access our presentation resources.
So-called "breaking wave" cloud patterns in our atmosphere reportedly disturb Earth's magnetic field (or magnetosphere) surprisingly often - more often than scientists previously thought, according to new research. The phenomenon involves ultra-low-frequency Kelvin-Helmholtz waves, which are abundant throughout the Universe and create distinctive patterns - visible everywhere from Earth's clouds and ocean surfaces to the atmosphere of Jupiter.

"Our paper shows that the waves, which are created by what's known as the Kelvin-Helmholtz instability, happens much more frequently than previously thought," co-author Joachim Raeder of the University of New Hampshire (UNH) Space Science Center within the Institute for the Study of Earth, Oceans, and Space, said in a statement. "And this is significant because whenever the edge of Earth's magnetosphere, the magnetopause, gets rattled it will create waves that propagate everywhere in the magnetosphere, which in turn can energize or de-energize the particles in the radiation belts."

In fact, data shows that Kelvin-Helmholtz waves actually occur 20 percent of the time at the magnetopause and can change the energy levels of our planet's radiation belts. So why is this important? Well, first of all, Earth's magnetic field protects us from cosmic radiation. Not to mention these changing energy levels can potentially impact how the radiation belts either protect or threaten spacecraft and Earth-based technologies. But the UNH team stresses that their discovery is less about the effects of so-called "space weather" and more about a better understanding of the basic physics of how the magnetosphere works.

"It's another piece of the puzzle," Raeder said. "Previously, people thought Kelvin-Helmholtz waves at the magnetopause would be rare, but we found it happens all the time."
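For reference, the Kelvin-Helmholtz instability named in the quote above has a classical onset criterion for two inviscid fluid layers sliding past each other (a standard textbook result, not taken from the article; the symbols are mine):

```latex
% Lower layer: density \rho_1, speed U_1; upper layer: \rho_2, U_2,
% with \rho_1 > \rho_2. A surface disturbance of wavenumber k grows --
% the interface rolls up into the billow pattern described here --
% when the shear term beats the stabilizing effect of gravity:
k\,\rho_1 \rho_2\,(U_1 - U_2)^2 \;>\; g\,(\rho_1^2 - \rho_2^2)
```

Since the left side grows with k, any nonzero velocity shear is unstable at sufficiently short wavelengths, which fits the finding that these waves turn out to be far more common than expected.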
Kelvin-Helmholtz instability waves - named for the 19th-century scientists William Thomson, Lord Kelvin, and Hermann von Helmholtz - can be seen in everyday life, such as in cloud patterns, on the surface of oceans or lakes, or even in a backyard pool. The distinctive waves with capped tops and cloudless troughs are created by what's known as velocity shear, which occurs when a fluid or two different fluids - wind and water, for example - interact at different speeds to create differing pressures at the back and front ends of the wave.

Though these waves are ubiquitous in the Universe, their abundance was not known until scientists used data from NASA's Time History of Events and Macroscale Interactions during Substorms (THEMIS) mission, which launched in 2007 and provides unique, long-term observations. The results are described further in the journal Nature Communications.
What is Shingles?

Shingles is a common infection of the nerves that is caused by a virus. Shingles triggers a painful rash or small blisters on an area of skin. It can appear anywhere on the body, but it typically appears on only one side of the face or body. Burning or shooting pain and tingling or itching are early signs of the infection. After the rash is gone, the pain usually resolves. But it can continue for months, even years. This is called post-herpetic neuralgia.

What causes shingles?

Shingles is caused when the chickenpox virus is reactivated. After a person has had chickenpox, the virus lies dormant in certain nerves for many years. Shingles is more common in people with a weak immune system and in people over age 50; the risk goes up with each decade of life after that. Studies show that one out of every three people in the United States will develop shingles. Because of this, the CDC recommends that everyone 50 and older get the shingles vaccine. Ask your primary care physician, or visit one of the PIH Health pharmacies for your shingles vaccine.

What are the symptoms of shingles?

Symptoms may include:

- Skin sensitivity, tingling, itching, or pain in the area of the skin before the rash appears
- Rash, which typically appears 1 to 5 days after symptoms start. At first, the rash looks like small, red spots that turn into blisters.
- Blisters typically scab over in 7 to 10 days and clear up within 2 to 4 weeks.

Other early symptoms of shingles may include:

- Stomach upset
- Feeling ill
- Fever or chills

The symptoms of shingles may look like other health conditions. Always talk with your healthcare provider for a diagnosis. Visit one of the PIH Health pharmacies in Santa Fe Springs or Whittier for your shingles vaccine today!
Extraterrestrial life in our solar system just got a lot more likely: NASA has found convincing evidence that Ganymede and Enceladus, moons of Jupiter and Saturn respectively, might both harbor salty oceans beneath their frozen surfaces. Scientists estimate that the oceans are over 50 miles (80 km) thick, which greatly increases the chances of alien life. In order for life to exist (at least life as we know it), there needs to be water; if the water contains salt, it gets even better, because life on Earth evolved in salty water. “There are possibilities of there being life in the Jupiter-orbiting moon Ganymede”, said NASA scientists, who confirmed the discovery of an ocean beneath Ganymede. The problem is that the ocean sits under a 95-mile-thick sheet of ice, which makes it incredibly difficult to study. But it’s highly exciting to discover potential life in such an unexpected place. “After spending so many years going after Mars, which is so dry and so bereft of organics and so just plain dead, it’s wonderful to go to the outer solar system and find water, water everywhere,” Christopher P. McKay, a planetary scientist at NASA’s Ames Research Center in Mountain View, California, told the New York Times. Enceladus is the sixth-largest moon of Saturn. It was discovered in 1789 by William Herschel, but very little was known about it until the Voyager fly-bys in the 1980s. We recently learned even more about it thanks to the Cassini spacecraft. We know that it has a type of tectonics and geysers, and there were some indications that it has an ocean under all its ice. Because hydrothermal activity (such as the geysers) on Earth occurs when seawater infiltrates and reacts with a rocky crust, researchers concluded that this must also be what is happening on Enceladus. This theory was further supported by the discovery of a methane plume over the moon’s south pole and microscopic granules of silica, which are the building blocks of many rocks on Earth.
“It’s very exciting that we can use these tiny grains of rock, spewed into space by geysers, to tell us about conditions on — and beneath — the ocean floor of an icy moon,” Sean Hsu from the University of Colorado at Boulder, the lead author of the paper published in the journal Nature, said in a statement. Ganymede is just as interesting – it is the largest moon of Jupiter and the largest in the Solar System, and the only moon known to have a magnetosphere. It’s this magnetosphere that actually helped scientists, because the ocean interferes with the magnetic field, reducing the rocking of the auroras by 4 degrees. To have this kind of effect on the magnetic field, scientists estimate that the ocean is 60 miles thick. Because of its proximity to Jupiter, Ganymede’s magnetic field is affected by the planet’s, rocking it back and forth – this movement generates internal heat which melts the ice and creates a liquid ocean under the surface. “Because aurorae are controlled by the magnetic field, if you observe the aurorae in an appropriate way, you learn something about the magnetic field. If you know the magnetic field, then you know something about the moon’s interior,” Joachim Saur from the University of Cologne in Germany, the lead author of the paper, said in a statement.
The endoscope is a test instrument that integrates traditional optics, ergonomics, precision machinery, modern electronics, mathematics, and software. It has an image sensor, an optical lens, a light source for illumination, a mechanical device, and so on, and can enter the stomach through the oral cavity or enter the body through other natural channels. The endoscope can be used to see lesions that X-rays cannot display, so it is very useful for doctors. For example, an endoscopic doctor can observe ulcers or tumors in the stomach and develop an optimal treatment plan accordingly. The medical endoscope is a combination of multiple lenses, forming an optical system that can produce clear images inside the natural passages or tiny wounds of the human body. A general endoscope is composed of an objective lens portion, an imaging portion, and an eyepiece portion. Its characteristics are small size (the diameter of a general medical endoscope is about 2-6 mm) and a lens with a large field of view, imaging clearly across a field of view greater than 90°. These features make the endoscope widely used in minimally invasive surgery. With the continuous development of science and technology, the accuracy requirements for imaging are getting higher and higher. BRD Optical is positioned in the market to focus on the development of microscopic lenses to enhance the imaging of endoscopes. Our small-size lenses have been recognized by the market.
An alphabet is a writing system, a list of symbols for writing. The basic symbols in an alphabet are called letters. In an alphabet, each letter is a symbol for a sound or related sounds. To make the alphabet work better, more signs assist the reader: punctuation marks, spaces, standard reading direction, and so on. Alphabets It seems that the idea of an alphabet – a script based entirely upon sound – has been copied and adapted to suit many different languages. Although no alphabet fits its language perfectly, they are flexible enough to fit any language approximately. The alphabet was a unique invention (p. 12). The Roman alphabet, the Cyrillic, and a few others come from the ancient Greek alphabet, which dates back to about 1100 to 800 BC (p. 167). The Greek alphabet was probably developed from the Phoenician script, which appeared somewhat earlier and had some similar letter-shapes. The Phoenicians spoke a Semitic language, usually called Canaanite. The Semitic group of languages includes Arabic, Maltese, Hebrew and also Aramaic, the language spoken by Jesus. We do not know much about how the alphabetic idea arose, but the Phoenicians, a trading people, came up with letters which were adapted by the early Greeks to produce their alphabet. The one big difference is that the Phoenician script had no pure vowels. Arabic script has vowels which may, or may not, be shown by diacritics (small marks above or below the line). The oldest Qur'an manuscripts had no diacritics. Israeli children up to about the third grade use Hebrew texts with vowel 'dots' added (p. 89). No ancient script, alphabetic or not, had pure vowels before the Greeks. The Greek alphabet even has two vowels (Eta and Epsilon) for 'e' and two (Omega and Omicron) for 'o', to distinguish between the long and short sounds. It appears that careful thought went into both the Phoenician invention and the Greek adaptation, but no details survive of either process.
Semitic scripts apparently derive from Proto-Sinaitic, a script of which only 31 inscriptions (plus 17 doubtful) are known. Some researchers think that the original source of this script was the Egyptian hieratic script, which by the late Middle Kingdom (about 1900 BC) had added some alphabetic signs for representing the consonants of foreign names. Egyptian activity in Sinai was at its height at that time. A similar idea had been suggested many years previously. Short list of alphabets A list of alphabets and examples of the languages they are used for: - Proto-Sinaitic script - Phoenician alphabet, used in ancient Phoenicia - Greek alphabet, used for Greek - Roman alphabet (or Latin alphabet), most commonly used today - Arabic alphabet, used for Arabic, Urdu and Persian - Hebrew alphabet, used for Hebrew, Ladino (only in Israel) and Yiddish - Devanagari, used for Hindi - Cyrillic alphabet, which is based on the Greek alphabet, used for Russian and Bulgarian - Hangul, used for Korean Other writing systems Other writing systems do not use letters, but they do (at least in part) represent sounds. For example, many systems represent syllables. In the past such writing systems were used by many cultures, but today they are used almost only for languages spoken in Asia. A syllabary is a system of writing that is similar to an alphabet. A syllabary uses one symbol to indicate each syllable of a word, instead of one symbol for each letter of the word. For example, a syllabary would use one symbol to mean the syllable "ga", instead of the two letters of the alphabet "g" and "a". - Japanese uses a mix of Chinese writing (kanji) and two syllabaries called hiragana and katakana. Modern Japanese often also uses romaji, which is Japanese written in the Roman alphabet. - The Koreans used Chinese writing in the past, but they created their own alphabet called hangul.
Originally, around 1200 BC in the Shang dynasty, Chinese characters were mainly "pictographic", using pictures to show words or ideas. Now only 1% of Chinese characters are pictographic (p. 97); 97% of modern characters are SP characters. These are a pair of symbols, one for meaning (semantics) and the other for pronunciation (p. 99). In many cases the P and S parts are put together into one joint character. Chinese is not one spoken language, but many, yet the same writing system is used for all. This writing system has been reformed a number of times. Related pages References - The Romans largely copied their Latin alphabet from the Etruscans, who based their alphabet on the Greek one. Diringer, David 1968. The alphabet: a key to the history of mankind. 3rd ed, London: Hutchinson, vol. 1, p419. ISBN 009-067640-8 - Man, John 2000. Alpha Beta: how our alphabet shaped the western world. Headline, London. - Robinson, Andrew 1995. The story of writing. Thames & Hudson, London. - The modern practice in printed Arabic is not to use diacritics - enWP Arabic diacritics - Ong, Walter J. 1982. Orality and literacy: the technologising of the word. Methuen, London. - Short 'e' is ε epsilon, long 'e' is η eta. Short 'o' is ο omicron; long 'o' is ω omega. Languages other than Semitic have copied the Greek or Roman alphabets, making such changes as seem right for their particular language. - Diringer, David 1968. The alphabet: a key to the history of mankind. 2 vols, Hutchinson, London. - Sass, B. 1988. The genesis of the alphabet, and its development in the 2nd millennium. Wiesbaden. - Gardiner, Alan 1916. The Egyptian origin of the alphabet. J. Egyptian Archaeology III. - DeFrancis, John 1989. Visible speech: the diverse oneness of writing systems. Honolulu: University of Hawaii Press. ISBN 0-8248-1207-7 - Boodberg, Peter A. 1957. The Chinese script: an essay in nomenclature (the first hecaton). Bulletin of the Institute of History and Philology, Academia Sinica (Taipei) 39: 115.
Critical Race Theory (CRT) Code Words Abolitionist: This term is also aligned with Allyship and even Co-Conspirator. An individual who sees society as a network of racial power structures that must be dismantled. This individual recognizes the need to call out others for not recognizing and acknowledging privilege, power and supremacy. Names to look for include Ibram X. Kendi and Robin DiAngelo, as well as the founders of Black Lives Matter. Abolitionist Teaching: “urges educators to tear down schools as they know them and rebuild using the intersectional tactics of past and present abolitionists.” (Bettina Love, author of Abolitionist Teaching in Action) This is also frequently used as a technique/method to eradicate “whiteness” and end the “spirit murdering” of minority students. See Abolitionist Educators Workgroup and Abolitionist & Antiracist Teaching Action civics: encourages students to participate in protests and demonstrations more than to study history and America’s founding principles. Teachers may even pressure students into supporting a particular cause without providing them with multiple perspectives. At a time when kids need substantive civics education more than ever, this seeks to indoctrinate them with only the desire to act on emotion without the capacity to consider other points of view. See How Action Civics is Teaching Our Kids to Protest and Civic Education vs. Action Civics Anti-bias: training programs or curriculum development that focus on empowering learners not to see themselves as marginalized or to treat others differently. In its more radical form, it promotes the racializing of all relationships, dismantling “whiteness” and recognizing power and privilege by identity. See “Who’s got the power?”: A critical examination of the anti-bias curriculum Anti-blackness: The tendency to see different outcomes in society as rooted in a disdain or disgust for black people.
The term can be leveraged as a Marxist tool for dismantling societal structures by dividing people by identity. Anti-racism, anti-racist: Terms popularized by America’s leading anti-racist scholar, Ibram X. Kendi, who argues that there is no such thing as a non-racist idea and makes clear that racism occurs when there is any disparity between races, no matter how minor. In his words: “If discrimination is creating equity, then it is antiracist.” See Should Public Schools Ban Critical Race Theory?, Teachers that DO NOT teach anti-racism are abusing children, and Anti-racist Arguments Are Tearing People Apart BIPOC: Black, indigenous and people of color. According to the BIPOC Project: “We use the term BIPOC to highlight the unique relationship to whiteness that Indigenous and Black (African Americans) people have, which shapes the experiences of and relationship to white supremacy for all people of color within a U.S. context.” Blackness: related to the “black community” and “black experience”. Can certainly be a term for positive empowerment or one used to “racialize” every aspect of society. Centering: acknowledging that white voices dominate platforms across society. It can bring about more vibrant discussions by including more diverse voices, but in Critical Race Theory it can also lead to overt silencing by “decentering” certain voices. Climate Justice: Often used in conjunction with equity and racial justice. May be used in “action civics” to push climate change politics around the issue of systemic racism. Collective Guilt: Through separating society into group identities, those who advocate for collective guilt seek to heap blame onto one or more groups for the historic injustices experienced by other groups. This is certainly a Marxist tactic in its deliberate collectivization of people to diminish individuality, further polarization and exacerbate hostility toward society, its history and its institutions.
Colonizer, Decolonizing: A term of derision typically aimed at white males. It is used to delegitimize the Founding of America and asserts that the U.S. as a nation-state was built on and is still manifesting a colonial tradition of white supremacy which necessitates multifaceted decolonization. See Colonizer-Colonized Mindset: Processing Whiteness (as Ideology) Wherever it is Found Colorism: an assertion/belief that those with lighter skin have privilege over those with darker skin. Used in Critical Race Theory to create more identity groups at the expense of unity. Conscious & unconscious bias: People from all backgrounds exhibit both types of bias. These terms are typically used to introduce topics such as micro-aggressions and are easily manipulated to produce deliberate partitions among groups. May also be introduced as “checking your blindspots” or “implicit bias” training. Courageous Conversations: Discussions designed to isolate race as the determinant of all social interactions and compel participants to examine the presence and role of “whiteness”. Critical ethnic studies: Curriculum designed around intersectional thought and social justice activism. While diverse voices should be represented in all classrooms, Critical Race Theory will use voices that purposely promote hostility toward America. Critical pedagogy: a teaching philosophy that invites educators to encourage students to critique structures of power and oppression. Critical self-awareness: A term that may be used to assess positive personal growth but may also be used to assess one’s “whiteness” and membership in a privileged power structure. Critical self-reflection: refers to the process of questioning one’s own assumptions, presuppositions, and meaning perspectives. Many times this will be used to end a session where participants examined their own predispositions to stereotype and employ bias. Particularly used in what is often called a “struggle session”.
Cultural appropriation: the adoption of an element or elements of one culture or identity by members of another culture or identity. Cultural competence: the ability to understand, appreciate and interact with people from cultures or belief systems different from one’s own. Cultural proficiency: the ability to understand and affirm the cultures and identities of every individual. Cultural relevance: May be used to help build the confidence of diverse learners and lead to setting a high bar for expectations. However, it may also be another term used to advance the purposes of equity, diversity and inclusion, which may result in lowering expectations for members of specific groups. Cultural responsiveness, culturally responsive practices: having an awareness of one’s own cultural identity and views about difference, and the ability to learn and build on the varying cultural and community norms of students and their families. Decolonize: To “dismantle” the structures of American society that are seen as perpetuating “whiteness” and “white supremacy”. Deconstruct Knowledge: The acquisition of knowledge, according to CRT, has been based on “white supremacy”; the knowledge that one has attained must therefore be deconstructed through “lived experiences” and “speaking one’s truth”. Two overarching premises unite CRT scholarship: (1) to reveal the roots and perpetuation of white supremacy and (2) to engage in social justice. Along with the basic premises of CRT, seven tenets that most CRT scholars adhere to include: (a) interest convergence, (b) racism as everyday, (c) colorblindness as insufficient, (d) race as a social construction, (e) whiteness as property, (f) racialized narratives as significant and telling and (g) racialized realities as contextual. Discrimination: the unjust or prejudicial treatment of different categories of people or things, especially on the grounds of race, age, or sex.
Discrimination, however, is acceptable according to CRT advocates such as Ibram X. Kendi if it brings about equity, which would then be considered “anti-racist”. Dismantle, Dismantling racism: Everyone who respects our common humanity understands that racism is abhorrent. Those who seek to advance the Marxist underpinnings of CRT will manipulate perceptions of racism in order to subvert and ultimately dismantle the structures of society based on individualism and liberty. Disrupt: Seeing all problems as emanating from systemic racism that must be challenged through conversations on topics such as “white privilege”. Diversity, diversity focused, diversity training: often packaged with equity and implicit bias programs. While diversity does stimulate growth, foster respect and encourage personal reflection, when employed through CRT, it forces participants to see everyone and everything through the lens of race. Dominant discourses: The dominant discourses in our society powerfully influence what gets “storied” and how it gets storied. This term is used to assert that “stories”, and even reverence for the written word, are all used to reinforce power structures. Educational justice: aligned with social justice activism. May also result in a racism of lower expectations for students as curriculum and assessments are designed to be less rigorous for certain groups. Equitable: A term long associated with fairness that has now become a euphemism for the Marxist goal of equality of outcome. Equity, Inequity, Deep Equity: The United States, according to CRT advocates, is made up of people who are either oppressors or the oppressed. The assertion of equity is to mitigate the results of ongoing racist structures that bring about inequity. The Deep Equity framework, based on the work of Gary Howard, helps schools and districts establish the climate, protocols, common language, and common goal of implementing culturally responsive teaching practices.
Many times, teachers are instructed not to share these goals with parents. Equity Gap: A specific term used when talking about “equity” that refers to a deficit that can be solved using “Diversity, Equity, and Inclusion” (DEI) programs, training, redistribution of resources, hiring practices, etc. There are many different ways of wording this concept, so “equity gap” is not the only way activists will try to convey it. To determine whether a discussion of equity gaps is related to Critical Social Justice (CSJ) and/or CRT, one must look at the context of the discussion. Examine “systems”: All systems are tied in some way to the oppression of non-whites and therefore should be examined, dismantled and “reimagined”. Free radical therapy: Complete change is needed to eradicate problems. Free radical self/collective care: Healing occurs when POCI gain critical consciousness about their oppression and seek to resist the associated racial trauma. Hegemony: any group, nation, or structure that maintains dominance over others. The term is used to reinforce the notion that BIPOC have been, and still are, oppressed by the hegemony of “white supremacy”. Identity deconstruction: Has been used to facilitate lessons where students are asked to identify their racial identities and then rank their power and privilege. See Identity Politics in Cupertino California Elementary School Identity-safe: All good teachers strive to build the confidence and intellectual ability of each student. However, through the CRT prerogative, every interaction in the classroom is based on race and therefore teachers must make learning spaces identity-safe. Implicit/Explicit bias: Explicit biases and prejudices are intentional and controllable; implicit biases are less so. CRT may employ Implicit Association Tests to measure one’s bias, which have proven to be very unreliable.
Implicit/Explicit racism: Terms are used to falsely amplify the level of racism that exists despite the significant progress that has been made (according to PEW research, only 4% of whites in America would oppose a relative marrying a black person). Inclusion: Often combined with Diversity and Equity. Inclusion in itself is a positive action where everyone feels welcome. Under CRT however, it is posited that white dominated culture and America’s institutions were developed and have continued to exclude others. Inclusivity education: Inclusive education means different and diverse students learning side by side in the same classroom. This is ideal but the term can certainly be manipulated to infuse CRT curriculum which polarizes by emphasizing racial identities above all else. Injustice, Historical injustice: past moral wrong committed by previously living people that has a lasting impact on the well-being of currently living people. CRT seeks to cast all of present-day society as irreparable because of past injustices. This strategy enables them to delegitimize the Founding, the Constitution, the heroism of Civil Rights leaders and abolitionists and reject all of the progress that society has made. Institutional Bias: Practices, scripts, or procedures that work systematically to give advantage to certain groups or agendas over others. Institutionalized bias is built into the fabric of institutions. In Critical Social Justice, such things as “hiring the best person” or “merit-based evaluation” are thought to be institutionally biased on the grounds that the standard of what constitutes best, or what counts as “merit,” is rigged to give straight, white males an advantage. Institutional racism: also known as systemic racism. The belief that all institutions (education, government, military, police, banking, housing etc.) are rooted in the perpetuation of “white supremacy” and should therefore be “examined” and “dismantled”. 
Internalized racial superiority: According to anti-racist theorists, this is a form of internalized oppression that places one’s own race above others. In the realm of CRT, it involves “white supremacy”. Internalized racism: Societal messages that produce and perpetuate internal privilege and oppression. This is also manipulated to produce the desired social and emotional responses from students and engage them in activist causes. Internalized white supremacy: A term that is used to assert that all “white” or “white adjacent” people have reaped rewards because of their proximity to “whiteness”. Cultural norms such as meritocracy are seen as enabling this. Interrupting racism: Identifying differences in outcomes as the result of systemic racism and “doing the work” to “interrupt” and “dismantle” these systems. All teachers want their students to succeed regardless of their background; under CRT teachers would be encouraged to have different standards for students of diverse backgrounds which results in actually harming their learning experience. Intersectionality, intersection: Intersectionality is a framework for conceptualizing a person, group of people, or social problem as affected by a number of discriminations and disadvantages. It takes into account people’s overlapping identities and experiences in order to understand the complexity of prejudices they face. Intersectionality produces more tribalism among groups. See Irshad Manji Says ‘Don’t Label Me’ Intersectional identities: Intersectionality is the acknowledgement that everyone has their own unique experiences of discrimination and oppression and we must consider everything and anything that can marginalize people – gender, race, class, sexual orientation, physical ability, etc. These overlapping identities according to CRT morph into power structures where degrees of oppression are directly reflected by the identities one possesses. 
Intersectional studies: involves the study of the ways that race, gender, disability, sexuality, class, age, and other social categories are mutually shaped and interrelated through forces such as colonialism, neoliberalism, geopolitics, and cultural configurations to produce shifting relations of power and oppression. Interrupting racism: The Marxist underpinnings of CRT resolve to dismantle systems that they believe are the root of racism. According to the theory, racism is interrupted when systems that promote individualism and meritocracy are dismantled. Land acknowledgement: a term that can be wielded to demonize Western culture as well as the Founding and development of America. When used constructively, it can provide important recognition of indigenous cultures and histories, but CRT tends to echo the phrase “stolen land” instead. Liberatory Education/Liberatory Pedagogy: a pedagogy of liberation centered around the principles of social change and transformation through education, based on consciousness raising and engagement with oppressive forces. This philosophy was made popular by Brazilian author Paulo Freire, a well-known Marxist who wrote Pedagogy of the Oppressed. Marginalized identities: the opposite of privileged identities. This term looks to assert that members of certain groups experience levels of discrimination and bias because of structural inequalities in a systemically racist society. Marginalized/Minoritized/Under-represented communities: often used to show that members of certain groups are not proportionally represented in programs, professions, etc. May be used to justify ending meritocracy and instead promoting quotas. Microaggressions: The everyday verbal, nonverbal, and environmental slights, snubs, or insults, whether intentional or unintentional, that communicate hostile, derogatory, or negative messages to target persons based solely upon their marginalized group membership.
Critical Social Justice continuously pushes the boundaries of what might constitute a microaggression and thus a “hostile” act, often in tedious and inappropriate ways. Multiculturalism: is the coexistence of diverse cultures, where culture includes racial, religious, or cultural groups and is manifested in customary behaviours, cultural assumptions and values, patterns of thinking, and communicative styles. This is certainly a positive when viewed as groups coming together to build bonds of affection and mutual respect as Americans. This can be weaponized to place individuals into “affinity” groups where people are deliberately separated to discuss how their membership in a culture ascribes oppressor or oppressed status. Neo-segregation: the voluntary separation of people by race, ethnicity or sexual orientation. Inspired by CRT, an increasing number of colleges have opted to offer more clubs, programs and even graduations only for members of a particular group. No Place for Hate: Created by the Anti-Defamation League to: engage students and staff in dialogue and active learning on the topics of bias, bullying, inclusion and allyship that matter most. This program does follow the tenets of CRT. Normativity: a term that is used to project “whiteness” and “white culture” as the norm which seeks the conformity of other groups. Being a good “ally” means dismantling “colorblindness” and “white normativity”. Oppressed, Oppressor, Oppression: the status assigned to members of groups based on race, gender, sexual orientation etc. which reflect the CRT narrative based on Marxist power struggles. CRT will use identities in place of class to divide and ultimately control the people and diminish individualism and liberty. Parity: the state or condition of being equal. This is not about equality of opportunity but rather equality of outcome which is the central tenet of equity. 
Patriarchy: according to CRT, racism, patriarchy and even capitalism are part of the same oppressive system. Patriarchy is identified as a piece of the power structure that needs to be dismantled, but this neglects the fact that women now outnumber men in undergraduate, graduate and doctoral programs. Privilege, Power & Privilege: terms that CRT aligns with “normative whiteness”. By “decentering” whiteness, power and privilege will be more “equitable”. Protect vulnerable identities: While poverty and predispositions to health risks should be examined to eradicate unequal access to opportunity, CRT will focus on group identity to create intersectional “scorecards” that reflect the effects of systemic racism. It automatically places those within certain groups as being vulnerable despite evidence that shows otherwise. It can often “infantilize” members of groups and diminish individual agency. Race essentialism: the view that members of racial groups have an underlying reality or true nature that one cannot observe directly. Can lead to more, not less, prejudice. Racial healing: According to CRT, every human interaction is racialized and therefore creates harm that must be recognized and discussed to bring about healing. Racial/Racialized identity: The existence of assigned racial identity whether someone wants it or not. Under CRT, an individual’s most significant characteristic is their racial identity. Racial justice: Racial justice is the systematic fair treatment of people of all races, resulting in equitable opportunities and outcomes for all. Racial justice — or racial equity — goes beyond “anti-racism.” It is not just the absence of discrimination and inequities, but also the presence of deliberate systems and supports to achieve and sustain racial equity through proactive and preventative measures. Everyone wants each and every child to be successful and reach their highest potential.
However, under CRT, “racial justice” can take the form of lower standards for members of certain groups in order to achieve equity. This harms these students and is essentially bigotry of low expectations. Racial sensitivity, Racial sensitivity training: According to CRT, whites are inherently racist and therefore require instruction on how to change their attitudes and behavior. Racial/Racialized prejudice: Individuals have preconceived notions about groups of people that look or sound differently. CRT posits that all members of a particular race are monolithic and therefore have experienced either privilege or oppression. Reflective exercises: reflecting on our own values, beliefs, and culture and how they impact the way we see the world and one another. As a teacher working under the guise of CRT, it involves what is often called culturally responsive teaching. Reimagining: Statement by the Teachers College at Columbia: “to reimagine education for an anti-racist society, we need to relearn our profession and view education through a racial equity lens”. All facets of teaching and learning therefore need to be refocused through the CRT design. As many schools continue to struggle to meet standards, many new teachers are being trained to substitute academic rigor and critical thinking with critical race theory. Representation and inclusion: All voices have a right to be heard. However, under CRT representation and inclusion can mean silencing “white” or “white adjacent” voices and utilizing quotas to meet goals instead of merit. Restorative justice: is a theory of justice that focuses on mediation and agreement rather than punishment. For sure, having students take responsibility for their actions through mediation is impactful. However, under CRT, restorative justice may consider race as the driving factor for student discipline and fail to consistently enforce rules for all students. 
Restorative practices: A social science that studies how to build social capital and achieve social discipline through participatory learning and decision making.

Social Emotional Learning: The process of developing and using social and emotional skills. This is important for intrapersonal and interpersonal development, but when used in service of critical race theory it discourages critical thinking and evidence-based learning in favor of emotional, subjective truths.

Social identity: Views race as a socially constructed identity that serves to oppress non-white people. Only approved identity categories of discrimination have the insight and "lived experience" that uniquely allows them to render the only acceptable definitions of racism and bigotry for the rest of society. If you don't fit into one of these approved identity categories and feel you've experienced discrimination, you're simply out of luck. Under this conception, only members of particular oppressed groups get to dictate the nature of oppression, even its ability to be experienced by others.

Social justice, social injustice: Justice in terms of the distribution of wealth, opportunities, and privileges within a society; e.g., "individuality gives way to the struggle for social justice".

Social justice warrior: A person with progressive views who cannot accept the existence of differing views. These individuals tend to supplant rational thinking with outrage and advocate "tearing down" the system. They also seek to "cancel" others for what they define as "offensive" speech or behavior through threats and hostility, especially on social media.

Spirit murdering: Coined by Patricia Williams, who first conceptualized spirit-murdering as a product of racism that not only inflicts pain but is a form of racial violence that steals and kills the humanity and spirits of people of color.
According to Williams, it is the "disregard for others whose lives quantitatively depend on our regard." Spirit murdering occurs every single day in many of our schools, virtually unnoticed, unchecked, and all in the name of some arbitrary norm created by a white person.

Structural Bias: Refers to the institutional patterns and practices that confer advantages on some and disadvantages on others based on identity. It is not merely the institutions themselves but the way institutions are structured and relate to each other and to society.

Structural racism: A system in which public policies, institutional practices, cultural representations, and other norms work in various, often reinforcing ways to perpetuate racial group inequity. CRT maintains that America, from its Founding to the present day, purposely developed and maintained a system that privileges whites. The 1619 Project, which is not historically supported, is an example of CRT that seeks to undermine the progress America has made toward fulfilling its Founding ideals.

Structural inequality: Discrimination within social institutions based on ethnicity, race, gender, and socio-economic status. Unequal access to resources, political influence, etc. inhibits the ability of some groups to better their conditions. CRT tries to explain any and all disparities in outcomes through racism and structural inequality, when in reality many other factors are significant.

Systemic Bias: A social phenomenon based on the perceived and real differences among social groups that involves ideological domination, institutional control, and the promulgation of the oppressor's ideology, logic system, and culture.

Systemic racism: Also known as institutional racism; the existence of discrimination by design in every area of life. CRT asserts the prevalence of systemic racism in order to justify the "dismantling" of institutions.
Systems of power and oppression: Systems in which members of dominant social groups, privileged by birth or acquisition, knowingly or unknowingly exploit and reap unfair advantage over members of target groups. According to CRT, these systems are rooted in "white supremacy".

Unconscious bias: Also called implicit bias. Unconscious biases are social stereotypes about certain groups of people that individuals form outside their own conscious awareness. The irony is that some people will be told that they, by nature of their group membership, possess unconscious bias that cannot be denied. Author Robin DiAngelo makes such an assertion in her book "White Fragility".

Whiteness: Refers to the construction of the white race, white culture, and the system of privileges and advantages afforded to white people in the U.S. To be "white adjacent" means that individuals from other groups seek to advantage themselves by taking on aspects of white culture.

White fragility: According to Robin DiAngelo, author of "White Fragility", white people in North America live in a social environment that protects and insulates them from race-based stress. It is white resistance to acknowledging the racial bias implanted in them by a racist society. According to Professor John McWhorter, DiAngelo's book talks down to black people. In an article in The Atlantic, he writes: White Fragility is, in the end, a book about how to make certain educated white readers feel better about themselves. DiAngelo's outlook rests upon a depiction of Black people as endlessly delicate poster children within this self-gratifying fantasy about how white America needs to think—or, better, stop thinking. Her answer to white fragility, in other words, entails an elaborate and pitilessly dehumanizing condescension toward Black people.
The sad truth is that anyone falling under the sway of this blinkered, self-satisfied, punitive stunt of a primer has been taught, by a well-intentioned but tragically misguided pastor, how to be racist in a whole new way.

White privilege: Invisible systems (norms) that give dominance to white people; the complex interplay between race, power, and privilege in both organizations and private life. CRT labels all whites as privileged despite life circumstances that show otherwise.

White social capital: According to the OECD, "We can think of social capital as the links, shared values and understandings in society that enable individuals and groups to trust each other and so work together." White social capital is therefore the exclusive networks formed by whites in their communities, organizations, and local governments. Community empowerment is important for establishing social capital; however, CRT neglects to examine the role that progressive government policies played in weakening the stability of families, especially in minority communities. This has had a significant impact on the ability to build community resources.

White supremacy: The belief that white people constitute a superior race and should therefore dominate society. CRT asserts that all whites are "oppressors" and that cultural norms, such as focusing on getting the right answer to a math problem, reinforce white supremacy culture.

White traitors (schools instructing white parents to become): A pejorative reference to a person perceived as supporting attitudes or positions thought to be against the supposed interests or well-being of that person's own race. White traitors are those who reach one of the stages of white identity development that espouses the eradication of "whiteness". A NYC school recently told parents to become "white traitors".
See NYC School Tells Parents to Become 'White Traitors'.

White abolition: The last step in the stages of white identity development (right after white traitor); white abolition seeks to end white identity. According to an article entitled "Abolish the White Race" in Harvard Magazine: "Make no mistake about it: we intend to keep bashing the dead white males, and the live ones, and the females too, until the social construct known as 'the white race' is destroyed—not 'deconstructed' but destroyed."

Whiteness: The processes and practices, including basic rights, values, beliefs, perspectives, and experiences, purported to be commonly shared by all but actually only consistently afforded to white people. CRT activists see whiteness as a dynamic operating at all times and on myriad levels.

Woke: The act of being aware of and actively attentive to systems of power, especially as they concern issues of racial and social justice.
Feeding Habits of Swans (presentation transcript by Daijiro Hata)
http://www.flickr.com/photos/singingfish/259448032/

Characteristics of Swans
- Anseriformes, subfamily Anserinae: 8 species.
- Large body
- Herbivorous
- Social: form flocks
- Migratory waterfowl: most translocated swans return to their original places the following year.
- Use wetlands for foraging and nesting.
- North American swans: mute swan, tundra swan, trumpeter swan.

Food Types of Swans
Three types of food:
1) Agricultural plants: high in carbohydrates.
2) Wetland plants: lower in carbohydrates than agricultural plants; some have high water and high fiber content.
3) Animal matter: high in protein.

Problems of Swans
- Populations have increased while wetlands and habitats have declined.
- Swans form flocks and concentrate in the scarce remaining habitats.
- They are large birds with limited digestive efficiency (21-34%: Mitchell & Wass 1995), so they eat a lot.
- Wave action and overexploitation of plants: possible to destroy wetland ecosystems and to compete with other animals.

East Coast & Chesapeake Bay
Wildlife managers say the tundra swan causes: 1) significant damage to aquatic plants; 2) conflict with other shorebirds.

Migratory Bird Treaty Act (MBTA)
- The MBTA once did not distinguish between native and non-native birds, but Congress revised it to exclude non-native birds in 2004.
- The act implemented treaties with Great Britain (for Canada, ratified in 1919) and Mexico (1936) for the protection of migratory birds, and provided regulations to control taking, selling, transporting, and importing migratory birds. It was an important step in the development of international law. (Rees 1990)

Grazing
Submersed, emergent, and floating-leaved macrophytes are all subject to substantial grazing losses. Many large and small grazers may have an effect: manatees, muskrats, waterfowl, fish, crayfish, and insects (Lodge 1991).

Role of Swans in Wetlands
- Grazing slows down the succession of wetlands.
- The black-necked swan might play an important role as a regulator of aquatic plant biomass, causing a delay in ecological succession (Corti & Schlatter 2002).
- Swans bring and drop nutrients in wetlands: 40% of N and 75% of P in one wetland (Post et al. 1998).
- They cultivate wetlands and disperse plants and invertebrates.

Habits: Food Availability
- Swans are well attuned to costs and benefits: they prefer places with high food densities and low competition, and they visit high-density food patches at a higher frequency.
- There is a strong negative correlation between the number of swan-days and the number of goose- and wigeon-days (reduction in the food supply).
- When the food supply decreases, swans form smaller flocks and graze at several different sites (Klaassen et al. 2006).
- Swans shift their food habits flexibly, e.g. from aquatic plants to waste grains.

Shallow Water
- Mute swan: in depths <50 cm, extensive grazing on SAV.
- Bewick's swan: maximum foraging depth is 0.89 m, but it prefers shallower water, <0.45 m (Nolet et al. 2006).
- From winter to spring, Potamogeton tubers were highly preferred; in summer, Potamogeton foliage. Nestling trumpeter swans prefer Potamogeton spp.; Chara spp. was eaten in proportion to its availability (Squires 1995).

Adverse Results
- Black swan population density was closely correlated with plant biomass. Although the swan population became as high as 25/ha, direct consumption of plant growth by grazing was slight: the grazing rate was 0.007/day, compared with plant growth rates of 0.06-0.10/day and loss rates of 0.07-0.18/day in periods of decline. Lack of light was far more important than swan grazing for plant decline (New Zealand: Mitchell & Wass 1996).
- Numbers of mute swans and Bewick's swans showed significant correlations with food sources. Swan numbers and their duration of stay were closely associated with the presence of Chara. Grazing pressure was low during spring and summer, and Chara colonized the lake in spite of consumption (Netherlands: Noordhuis et al. 2002).
- Badzinski et al. 2006:
Herbivorous waterfowl can reduce the quantity of aquatic plants during the breeding or wintering season, but the tundra swan did not have any additional impact on the biomass of aquatic plants at staging areas in fall.

Other Adverse Results
- Swans are less active in winter (Squires & Anderson 1997).
- There was little competition between whistling swans and other waterfowl for food and habitat (Sherwood 1960).
- Feeding time did not change in response to a change in food biomass density (Nolet & Klaassen 2005).
- Black swans are apparently highly mobile and highly sensitive to the quality of their habitat; net daily population changes reached 40-50% on several days in summer (Mitchell & Wass 1996).
- When different herbivores with similar food requirements live within the same ecosystem, they may not compete but instead form a grazing succession, in which the feeding activity of one group improves conditions for the other species present (Vesey-Fitzgerald 1960, Jarman & Sinclair 1979, Maddock 1979).

Conclusion
- Swans like rich, comfortable feeding places.
- The results of swan grazing vary by species, place, and condition.
- Eutrophication or good nutrient vector: swans do not always reduce plants.
- Destroyer or succession regulator: swans do not always compete with other animals.

Questions?
Black-necked swan: http://www.feathersite.com/Poultry/Swans/BRKBkNeck.html
Black swan: http://www.colszoo.org/animalareas/islands/bswan.html
Phonetic Picture-Writing: a letter-based picture-writing

What is a Phonetic Picture-Writing?
A phonetic picture-writing is a picture-writing that is also a true phonetic writing, because its ideograms (picture symbols) are composed of special letters. An example:

A simple Phonetic Picture-Writing
Here we show a simple but quite efficient phonetic picture-writing. It has only these 12 letters (3 of them are contained in the ideogram for "face" above):

(Red letters describe the pronunciation in the international phonetic alphabet; h is shorthand for the sound "sh".)

One can memorize this alphabet in five ways:
- Just learn it.
- Learn the system behind this mini-alphabet (see below).
- Learn a few words (the ideogram and the pronunciation): if you know the 5 words for 'face', 'two', 'circle', 'square', and 'rhomb' shown below, you know the whole alphabet of this picture-writing.
- Print the letters (including their phonetic transcription), cut them out, and lay out words with them.
- Type in words on screen.

Examples of Words
The words above can be spoken easily, because all their syllables consist of consonant + vowel, e.g. 'me' or 'la'. (At the beginning of a word, the bare syllables 'e', 'a', 'o' may also occur.) But what can one do if an ideogram is an unspeakable series of letters, e.g. 'fp'? To solve the problem consistently, and to yield a very clear and pleasant-sounding pronunciation, there is a rule: when speaking an ideogram, you insert the vowel 'i' and the consonant 'j' (spoken like the y in 'yes') as often as needed, until the resulting word consists only of syllables of the form consonant + vowel. (The bare syllables 'e', 'a', 'o' at the beginning of a word are not changed; they can be pronounced easily.) These i / j are not written; there are no letters for them. Thus, the ideogram 'fp' is spoken as 'fipi', and the ideogram 'taa' as 'taja'.
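The insertion rule above is mechanical enough to sketch in code. The following Python function is my own illustration, not part of the original article (the names VOWELS, CONSONANTS, and pronounce are assumptions): it scans an ideogram left to right, emitting consonant + vowel syllables where possible, inserting the unwritten 'i' after a stranded consonant, and inserting the unwritten 'j' before a stranded vowel, with a bare e/a/o allowed only at the start of a word.

```python
VOWELS = set("eao")
CONSONANTS = set("lnmshftkp")

def pronounce(ideogram):
    """Make an ideogram speakable: every syllable must be
    consonant + vowel, except that a word may begin with a bare
    e/a/o. Unspeakable sequences are repaired by inserting the
    (unwritten) vowel 'i' and consonant 'j'."""
    syllables = []
    i, n = 0, len(ideogram)
    while i < n:
        ch = ideogram[i]
        if ch in CONSONANTS:
            # A consonant needs a following vowel to form a syllable.
            if i + 1 < n and ideogram[i + 1] in VOWELS:
                syllables.append(ch + ideogram[i + 1])
                i += 2
            else:
                syllables.append(ch + "i")  # insert the unwritten 'i'
                i += 1
        else:  # vowel
            if i == 0:
                syllables.append(ch)        # bare initial vowel is allowed
            else:
                syllables.append("j" + ch)  # insert the unwritten 'j'
            i += 1
    return "".join(syllables)
```

With this sketch, pronounce("fp") yields "fipi" and pronounce("taa") yields "taja", matching the two worked examples in the article.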
Examples:

More examples of words can be found in the printable dictionary and in the interactive dictionary.

For didactic reasons, we have rendered the phonetic picture-writing rather large. But its simply shaped letters can be reduced in size much more than Latin letters. They then appear more charming and less bulky, and also more realistic: one scarcely has the impression that details are missing. (Gray instead of black color also makes ideograms look more impressionistic, as does a bigger distance between the letters.) Some smaller ideograms:

Printed by laser or with type, the ideograms are clearer than on a screen. (With an inkjet printer, the lines become too broad and thus the distances between lines too small; the letters become indistinct at this size.)

Here are some pictures of scenes, explained in the article about grammar:

The original purpose of a phonetic picture-writing is to serve as an artificial language in which one can express everything, both optically and acoustically: ideograms can be combined into scenes, and these can be spoken as sentences. This example is pronounced ani amimipi ela and means "legs, (over them) cape, (over it) face": "There is a standing man, with a cape; his face is visible."

The simple phonetic picture-writing presented above contains the 12 most useful signs, and the sounds are attached to the signs very systematically. It is already very efficient. But to yield a full speech, more signs are necessary: 25 letters seem to be the minimum needed to portray all kinds of things and ideas.

During antiquity, different versions of phonetic picture-writing were used. Often the difference was only a different attachment of sounds to the signs, e.g. exchanging the humming consonants for the hissing ones and vice versa. In this way, mystical circles tried to set themselves apart from other such circles, and the "upper class" within larger mystical circles tried to set itself apart from the "ordinary people".
The System of Letters
In the picture at the side, we have arranged the 12 letters so that you can recognize at once that there are narrow, middle-broad, and broad signs. Below, the 12 letters are arranged again, now in a 4 x 3 matrix. You can see that similar sounds are represented by similar letters:
- Signs for vowels are flat; signs for consonants are tall.
- For every sign that broadens in the writing direction there is a similar, narrowing sign; by turning one upside down, you get the other.
- All signs for vowels (e, a, o) are horizontal lines. It works like a string: the longer it is, the deeper the sound.
- All signs for humming consonants (l, n, m) are vertical lines. What is a humming sound? If you touch your larynx, or put a little finger into an ear, and speak a humming sound, you feel vibrations. Vowels also hum. So it is generally true: humming sounds are written as straight lines parallel to a coordinate axis. Non-humming sounds are written with other lines:
- All signs for hissing sounds (s, h, f) broaden toward the top; they symbolize emitted, broadening air.
- All signs for stopping sounds (t, k, p) narrow toward the top. What is a stop (or plosive)? An interrupted sound: if you speak, for example, the word "apa" slowly, at some point during it there is silence; then air is emitted explosively.

Another memory aid: if you remove the curves from the Latin lower-case letters l, n, m (from m, also remove the central vertical line), you obtain the corresponding letters of the phonetic picture-writing presented in this article. You get the same result if you remove all non-vertical lines from the Latin upper-case letters L, N, M. Latin K and P also resemble the corresponding letters of our picture-writing if you remove the vertical lines and turn the result by 90 degrees.

Non-bold links indicate pages in German. (I'm sorry, I have not had time to translate them.) Nevertheless, the pictures on these pages will often give you a good idea of the content.
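The 4 x 3 matrix described above can also be captured as a small lookup table. The sketch below is my own illustration (the names SOUND_CLASSES and classify are not from the article); it encodes each sound class together with the shape rule the article gives for its signs.

```python
# The 4 x 3 letter matrix: each row is one class of sounds together
# with the shape rule its signs follow, as described in the article.
SOUND_CLASSES = [
    ("vowel",   "horizontal line",     ["e", "a", "o"]),
    ("humming", "vertical line",       ["l", "n", "m"]),
    ("hissing", "broadens toward top", ["s", "h", "f"]),
    ("stop",    "narrows toward top",  ["t", "k", "p"]),
]

def classify(letter):
    """Return (sound class, shape rule) for one of the 12 letters."""
    for kind, shape, letters in SOUND_CLASSES:
        if letter in letters:
            return kind, shape
    raise ValueError(f"not one of the 12 letters: {letter!r}")
```

For example, classify("m") reports that m is a humming consonant written as a vertical line, which is exactly the regularity the memory aids above exploit.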
Interactive programs:
- Type words
- Type sentences
- The phonetic-picture letters for printing, cutting out, and laying out words
- Dictionary (interactive): 1052 words
- Dictionary (printable): 220 words
- Dictionary (big, printable): 1590 words, below under downloads

Phonetic picture-writing:
- Frequent questions
- Advantages of phonetic picture-writing

Grammar:
- Grammar: the image (sentence)
- Grammar: writing 2 words one above the other
- Grammar: direction and perspective
- Grammar: 3-dimensional models
- Grammar: multiple perspective
- Grammar: inserted images
- Phonetic picture-writing and molecule grammar
- Formal grammar of phonetic picture-writing (for specialists)

Versions of phonetic picture-writing:
- Extensions of the 12-letter picture-writing, part 1
- Extensions of the 12-letter picture-writing, part 2
- A phonetic picture-writing with 16 letters
- A phonetic picture-writing with 20 letters
- A phonetic picture-writing with 20 letters (other version)
- Words (ideograms) thereby formed, on the subjects: astronomy, mathematics, buildings, visual arts, religion
- A syllable writing with 28 signs
- A syllable writing with 80 signs
- The antique standard phonetic picture-writing
- The first published versions of phonetic picture-writing

Typifying:
- Kinds of phonetic picture-writings
- Point writings
- Bar writings
- Overwriting of signs
- Forming signs for phonetic picture-writings
- The letter field of phonetic picture-writing
- Symbolic numbers for phonetic picture-writing

History:
- History of phonetic picture-writing
- Quotations of antique authors about phonetic picture-writing
- Quotations of antique authors about encoding pictures
- Pictures encoded in texts
- Pictures encoded in texts of the antique writer Pliny (Plinius)
- Pictures encoded in texts of the antique writer Pliny, part 2
- Encoded self-portrait of Pliny
- Latin writing as a helping phonetic picture-writing
- Latin words in phonetic picture-writing
- Greek writing as a helping phonetic picture-writing

General language theory (linguistics):
- Design principles for artificial languages
- A clear, nice sound system for artificial languages
- Forming words
- Design principles for a grammar
- Molecule grammar
- Alphabetic sorting
- Alphabetic sorting of numbers, numerically correct
- A finger-spelling system
- Language and psyche
- Writing and intuition
- Evaluation of some writings by intuitive correctness

Downloads:
- lautbildschrift-16-buchstaben.ttf (4 KB): phonetic picture-writing with 16 letters, vector font (for all Windows text programs)
- laut16bu.fon (2 KB): phonetic picture-writing with 16 letters, bitmap font (usable in NOTEPAD (new) and WORDPAD (old), not in WORD)
- lantik12.fon (4 KB): antique phonetic picture-writing with 12 letters, bitmap font (usable in NOTEPAD (new) and WORDPAD (old), not in WORD). Simply view antique text in NOTEPAD and select this font!
- lautbildschrift-dokumentvorlage.doc: document template and instructions for using phonetic picture-writing in texts (29 KB, WORD format)
- lautbildschrift-lexikonvorlage.doc: dictionary template for words of phonetic picture-writing (32 KB, WORD format)
- lautbildschrift-lexikon.doc: dictionary (98 pages, about 1590 words, 1703 KB, WORD format)

Author and inventor: Leonhard Heinzmann (email, homepage). This site can also be reached via www.lautbildschrift.de

FREE COPYRIGHT for this article! The phonetic picture-writing is free for everybody and not subject to any rights or patents.
Last update: 2013-8-9
Cardboard boxes appear everywhere in our world. They help us store our belongings and ship packages of all sizes. Many people have handled a corrugated box, but few know how one is made. Here is a quick look at how these containers are made using a simple process of heat, glue, and paper.

1. The corrugated box manufacturer starts the process by designing the size and shape of the box. Once the size is figured out, they can start thinking about how the box will function. Details such as how the box opens and closes, and how durable it must be during shipping, determine how it is constructed.

2. Recyclable materials are fed through a machine that forms the material into three layers: two flat layers and a corrugated centerpiece. Combined, these layers make what we refer to as cardboard. To shape the layers, the machine uses steam heat, then glues them together into a cardboard sheet.

3. Next, the machine cuts the cardboard to the desired length. The number of cuts made in the corrugated board varies with the size of the box. After the machine cuts the board, it is stacked and sent to a trimmer.

4. The trimmer cuts flaps and handles into the cardboard. The blades cut each piece with exact precision for a uniform set of corrugated boxes. A conveyor belt catches the trimmings and ships them to recycling; the trimmings can be recycled up to six times.

5. Once the flaps and handles are in place, the cardboard takes its box shape after the connecting edges are glued down. The finished product is loaded on pallets and prepared for printing.

6. The print added to the corrugated board varies with the initial design. Some boxes receive only basic text printing, while others have elaborate designs applied. Special applications such as high-gloss, matte, and moisture-resistant finishes complete the process for most boxes.
Need or Want? Where Do Things Come From?

1. Description: Using a categorizing activity and images in a PowerPoint, the instructor and students will explore concepts related to needs: needs vs. wants, different examples of how people meet needs in different cultures (e.g. there are different types of cuisine, but everyone eats), where the things we need come from, and what it means for some people to have more than others.

a. The instructor can begin the lesson by prompting students to name some things they "need" and some things they "want".

b. On a large pad of paper, blackboard, or whiteboard, the instructor can draw a vertical line to mark two sections, needs and wants. The instructor can then ask students to sort 2D props representing various items that could be considered needs or wants into those two categories, placing each item in the appropriate section after the students discuss it. Items could include pictures of, or representing, food, a house, a doctor, friends, family, music, and love. (Note: students may need help understanding that certain images are symbolic, e.g. a heart represents "love" or a hamburger represents "food"; see MeetingNeeds_fig 1 in materials.) Throughout the activity, the instructor should prompt the students to explain why they would categorize a certain item as either a need or a want.

c. The instructor and students can then look through a PowerPoint together (see materials) and use the images as jumping-off points for a discussion of how people meet needs in different cultures (e.g. different people enjoy different types of cuisine and get food from different places, but everyone eats), where the things we need come from, and what it means for some people to have more than others.

A Community to Meet All Needs

1.
Description: Students and the instructor will work together to make a large multimedia map showing an ideal community that meets all needs.

a. The instructor may start this lesson by explaining to the students that they will work together to design a community where everyone can meet all of their needs. The instructor can then ask the students to brainstorm the things this community would have, writing a list on the board as they do.

b. After the list has been completed, students will volunteer to make images representing the different things and places the community would need.

c. Students will then have time to create their items using supplies such as colored paper, markers, and glue.

d. Students will then work together to decide where to place their items on a large map with a basic grid structure. Students may also place their houses from lesson B on the map.

e. For homework, students may write a journal entry exploring the idea that some communities don't have as many needs-meeting resources as would be ideal. A possible prompt: "Not all communities have places where people can get all the things they need. Some communities might not have hospitals or clothing stores. Imagine if you lived in a community without a grocery store. How do you think you could get food?"

Things the Body Needs and Things the Mind Needs: How Do We Help Each Other Get What We Need?

1. Description: Using a categorizing activity and a group game, the instructor and students will explore the idea of physical needs vs. emotional needs, as well as the idea of working with a group to meet individual and group needs.

a. The instructor can begin the lesson by asking students what they think describes a need: what does it mean to need something?
The instructor can also present the idea that there are things the body needs in order to be healthy and avoid death (food, water, etc.), but that there are also things the mind needs to be happy and healthy. The instructor can ask students to brainstorm some things that the mind might need.

b. The instructor will then draw a large tri-Venn diagram on a pad of paper, blackboard, or whiteboard and ask students to sort 2D props representing various items into three categories: needs of the body, needs of the mind, or wants.

c. The instructor will then lead the students in a game. Students will break into groups of 4 or 5 (these numbers can be flexible), and each student will be given an envelope with paper pieces of a house (see MeetingNeeds_figure 2 in materials). In each group there must be the right number of pieces for each individual to complete a house, but no individual should start with all the right pieces. Students should be prompted to complete the activity without talking or writing to communicate. They also must not signal what they themselves need; rather, they must pay attention to what others need and work to make sure everyone in the group completes his or her house. When a group is finished, they can glue their houses together and decorate them with pens, pencils, markers, or crayons while they wait for other groups to finish. Students should keep these houses for later lessons.

d. After all the students have completed their houses, the instructor can lead a follow-up discussion. Some possible questions:
i. How did you communicate without talking or writing?
ii. How did you work together? How did you help each other?
iii. Can you think of another time when someone needed something and didn't have it? What can people do about that?
iv. Were there things each group member needed? Was there something the group as a whole needed?

e.
Students may be assigned to write in a journal for homework about what they learned about needs and wants.

Thank you so much to Daniel Rouse and Ann Perrone, who contributed significantly to the creation of this lesson plan!

Mapping Needs in an Urban Environment

1. Description: The instructor will direct students to complete a worksheet and explore an online map, helping them develop the idea that in urban environments we meet our needs by going to certain places within communities that are spatially organized in particular ways and that may have limited resources.

a. The instructor can start the class by asking the students to name some things people need.

b. Students can then be asked to complete a matching and navigation worksheet (see materials). The first half involves a matching exercise between needs and the places where one can get them; the second half requires further instruction.

c. The second half of the worksheet asks students to navigate a map in Google Earth (using aerial and street view). This map includes icons indicating several different types of places where one can get something they need (e.g. "shopping" and "emergency"), with several different examples of each type (labeled, e.g., "shopping 1" and "emergency 2").
i. Note: the instructor will have to create this map in Google Earth. For an example, see "Where to Get What You Need in Our Community", created by Ann Perrone (in materials).

d. Students will be asked to explore the map and write down at least three items, including what the item is labeled as (e.g. "shopping 1"), what the item actually is (e.g. PathMark), and what someone could find there that is a need. It may help students to see the list of item labels and what each item is, so that they can use deductive reasoning to figure out what items are if it isn't clear just by exploring the map.
Students may also use the information from the matching exercise in the first part of the worksheet to think about things they could find at the places labeled on the map. e. For homework students may complete a mapping worksheet (see materials). Other Possible Themes a. Students could do a variety of activities to explore themes of homelessness including reading books on homelessness, visiting homeless shelters, and discussing ideas such as “Why might someone not have a home?”, “What would it be like to be homeless?”, “What can be done about the problem of homelessness?” b. possible resources: i. Messinger, Alex. Unsheltered Lives: An Interdisciplinary Resources and Activity Guide for Teaching about Homelessness in Grades K-12. ed. Burlington, VT: Committee on Temporary Shelter, 2010. Print. Especially pg. 17 (Just Imagine), pg. 20 (Illustrating Homeless Lives), pg. 22 (Types of Homes), pg. 37 (Class Survey), pg. 56 (Action Activities). ii. DiSalvo, DyAnne. Uncle Willie and the Soup Kitchen. New York: Morrow Junior Books, 1991. Print. iii. McGovern, Ann, and Marni Backer. The Lady in the Box. New York: Turtle Books, 1997. Print. 2. historical development of communities a. Students could read books and have discussions about how communities organized around places that meet our needs came about. b. possible resources: i. Millard, Anne, and Steve Noon. A Street Through Time. New York: DK Pub., 1998. Print.
Asteroid smelter. Image: Bryan Versteeg, spacehabs.com - Roadmap Table of Contents - Part 1: General Milestones - Part 2: Utilization and Development of Cislunar Space - Part 3: To the Moon - Part 4: To Mars - Part 5: Asteroid Mining and Orbital Space Settlements (this page) - Part 6: Additional Expansion and Greater Sustainability of Human Civilization Remote or robotic characterization of near-Earth (and other) asteroid orbits, compositions and structures. OSIRIS-REx asteroid sample return mission launched September 2016. Image: NASA Telescopic observations will initially identify asteroids as Near Earth Objects (NEOs), Earth-threatening NEOs, main belt asteroids and other orbital groupings. Initial robotic missions to NEO asteroids of commercial interest will confirm the size and composition of different types of asteroids as being rocky, metallic or carbonaceous, and identify the actual abundances of minerals on each one. Metallic asteroids contain iron, nickel and platinum group metals, and carbonaceous ones contain carbon compounds and water. The probes will also estimate the structure of the asteroids, as being apparent “rubble piles” of loose fragments, or made of solid, non-fractured rock and metal. Some missions may bring back actual samples of asteroid material for analysis. All this information will assist governments in planning planetary defense against threatening NEOs and will assist mining companies to decide which asteroids to focus on. Earth-threatening NEOs that are composed of useful minerals could be put on a list of objects to be totally mined away so there is no remaining risk. Radio beacons may be placed on NEOs to make tracking them easier. - Lack of information on the composition and physical structure of individual asteroids. - Lack of telescopes dedicated to spectral analysis of asteroid composition. - Lack of telescopes dedicated to finding and tracking small Earth-threatening asteroids. 
- Lack of inexpensive robotic probes that can rendezvous with and analyze asteroids. The investigation of asteroids will be ongoing as activity expands into the main belt with thousands of objects, so finding a completion point would be difficult. Partial completion would be indicated when sufficient knowledge of asteroids exists to plan planetary defense or actual mining of surveyed asteroids begins. After robotic identification of suitable asteroids, robotic and human crews will follow to establish mining bases and habitats for transient occupation, eventually building permanent human settlements nearby. The eventual construction of rotating space settlements from minable asteroids. Image: Bryan Versteeg, spacehabs.com Asteroids have huge mineral wealth if it can be accessed, including iron, nickel, platinum group metals, other non-volatile materials, and also volatiles like water ice. There are different classes of asteroids with varying amounts of these materials that would be useful both on Earth and in space. That potential value may be the primary driver for asteroid exploration and mining. As is expected to be the case on the Moon and Mars, deposits of volatiles can be converted to rocket fuel and oxygen, thus enabling further space operations. The metals in asteroids can be refined and turned into construction materials for building large structures in space. Smelting and fabrication of parts from asteroids will require either the development of new techniques for doing this in microgravity, or the use of rotating structures to provide gravity so that existing methods can be used. The practicality of returning asteroidal materials to Earth or other locations would depend on (1) transport costs, (2) value of the materials as delivered and (3) the extent to which those materials can be separated and purified to reduce the total mass before transport. Asteroid resources may greatly improve life on Earth. 
Economic return of materials to Earth is expected to involve the use of fuel obtained from asteroid mining. In time, asteroids will see more permanent rotating habitats created nearby, probably made of asteroid derived materials, and using unprocessed asteroid materials for radiation shielding. These habitats will house either visiting crews or, if there is sufficient mining to be done, permanent occupants. With an appropriate asteroid, these mining stations may go through the same processes of growth as settlements on the Moon and Mars, and evolve into permanent settlements where people will raise their children and live out their lives, as on any far frontier. Proposals have also been made for hollowing out an asteroid and building a rotating space settlement inside it. The eventual construction and location of rotating space settlements in the orbits of minable asteroids would reduce the materials transport costs of asteroid resources and derived materials for the construction of such settlements. This would create a synergism which should accelerate the asteroid mining industry. - Economic barriers (transport costs) to moving asteroidal products to space settlement construction sites and to the Earth and vicinity. - Lack of knowledge of methods for mining, refining, and fabricating asteroidal products such as building materials in microgravity. - Lack of detailed planning and design for use of the fabricated building materials to build large space structures. This milestone will be considered achieved when asteroid mines and smelters are regularly sending ores or refined products to the Earth, Moon or Mars or are contributing substantial structural mass to rotating space settlements in any location. Orbital “cities in space” built from asteroid or lunar materials. 
Image: Alexander Preuss Orbital space settlements are large pressurized structures that constitute cities or villages with residential, commercial and/or governmental functions, built in space from asteroid or lunar materials, where families live. The settlements would rotate to provide artificial gravity. In 1974, Princeton physicist Gerard O’Neill proposed the construction of orbital space settlements. An orbital settlement (sometimes called an “O’Neill Settlement”) is a giant rotating space structure, large enough and rotating fast enough so that people standing on the inner surface would experience a centrifugal force equivalent to gravity on the surface of the Earth. Thus, children on orbital space settlements would be raised in Earth-normal gravity, which is important for normal bone and muscle development. Three proposed types of orbital settlements are Bernal Spheres (and a variation called Kalpana), Stanford Tori and O’Neill Cylinders. Since orbital space settlements must rotate, only a few basic shapes work well: sphere, torus, cylinder, disk, or some combination. Current materials are strong enough for habitats many kilometers in extent, big enough for a moderately large city. The inner surface of the hull is real estate, i.e., land on which crops could be grown and homes and businesses could be constructed. While the outer hull will experience one gravity, interior structures can be positioned for fractional gravity, and even zero gravity at the axis of rotation. People and their families can live there indefinitely, in communities ranging in size from villages to cities which have their own internal economies as well as external imports and exports. The Equatorial Low Earth Orbit (ELEO) settlements discussed in Milestone 16, which can act as precursors for later orbital settlements, use the Earth’s geomagnetic field to shield them from space radiation. All other settlements would need to use extensive radiation shielding. 
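The “rotating fast enough” condition described above can be quantified with the centripetal-acceleration relation a = ω²r: the habitat must spin so that ω²r equals 9.81 m/s² at the hull. The sketch below is illustrative only; the two radii are assumptions chosen to represent a small habitat and a large cylinder, not figures from any specific design.

```python
import math

# Spin rate needed to simulate 1 g at a given hull radius, from a = omega^2 * r.
G_EARTH = 9.81  # m/s^2

def rpm_for_1g(radius_m):
    omega = math.sqrt(G_EARTH / radius_m)   # angular speed in rad/s
    return omega * 60.0 / (2.0 * math.pi)   # convert rad/s to revolutions per minute

# Assumed example radii: a small habitat (250 m) vs. a kilometer-scale cylinder.
for r in (250, 3200):
    print(f"radius {r} m -> {rpm_for_1g(r):.2f} rpm")
```

The larger the radius, the slower the required spin, which is one reason proposals for large settlements favor structures kilometers across: gentle rotation rates are thought to be more comfortable for residents.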
Unlike Lunar or Martian surface settlements, radiation shielding is required on all sides, so roughly twice as much shielding mass is necessary. Shielding can consist of a substantial mass of asteroidal rubble, water, waste material, or some other mass. Orbital settlements could be built in, or moved to, a variety of orbits, including Earth, solar or other orbits, including special locations such as Lagrange points. Most of these orbits would be selected to have continuous solar energy available. The choice of orbit may be driven by access to materials, such as sites co-orbiting near an asteroid mine. For use in cislunar space, lunar material could be launched into space using electromagnetic launchers (mass drivers). Material mined from an asteroid could be utilized either in an orbit close to the asteroid or moved to some other desired location. There are thousands of candidate asteroids among the Near Earth Objects, some requiring less energy to reach than the Moon. Eventually such cities in space could be located throughout the solar system, orbiting around planets or moons, co-orbiting with asteroids, at Lagrange points, or in solar orbit. These settlements may be very different from each other, each reflecting the particular tastes and cultures of those who built, financed and settled it. Such diversity could provide a new flowering of human creativity. The result would also be to disperse humankind throughout the solar system, enabling survival even if some disaster were to befall the Earth. Orbital settlements may be built by private companies, governments, or consortiums. It will be financially possible to build them only when the cost of construction and support is less than the expected value of the settlement, financial or otherwise. The Potential Scale of Orbital Settlement Orbital settlements could be built in virtually unlimited numbers. 
NASA Publication SP-413 (Space Settlements: A Design Study) states: “If the asteroids are ultimately used as the material resource for the building of new colonies, and … assuming 13 km² of total area per person, it appears that space habitats might be constructed that would provide new lands with a total area some 3,000 times that of the Earth.” COMPONENTS (required capacities) Habitats in space beyond Low Earth Orbit designed as space settlements must be able to: - Provide redundant life support systems for the residents that will last for many decades. - Store sufficient reserves of food and water for the residents, recycle water and grow food. - Provide a high level of protection and redundancy against loss of air pressure accidents. - Protect residents against constant cosmic radiation (heavy, fast nuclei from outside the solar system) to a level consistent with the presence of children and pregnant women. - Protect against intermittent radiation from solar mass ejections and general solar radiation to a level consistent with the presence of families and children. - Provide artificial (centrifugal) gravity with a sufficient level of gravity to maintain health. - Provide for permanent residency by making the habitats suitable (large enough, etc.) for comfortable living. - Provide employment and recreation for the residents. - Lack of asteroidal or lunar-derived materials and parts in space for construction of orbital settlements. - Lack of detailed planning, design and methods for creation of orbital space settlements, including use of materials to build large, pressurized, rotating structures in space. - Lack of immediate economic incentive to work toward orbital settlement construction. - Inadequate understanding of human physical adaptation and the psychology of individuals and large groups of people living in space. - Lack of information on all phases of the cost to construct orbital space settlements. 
- Planetary chauvinism: the idea that people should only live on planetary surfaces. This milestone can be considered achieved when a rotating space settlement built primarily from non-terrestrial materials has a population of at least 1000 including families and children. Another milestone will be achieved when the total population in orbital space settlements exceeds the population on Earth. ORBITAL SPACE SETTLEMENT DESIGNS
NetLogo Models Library: ## WHAT IS IT? This project explores a simple ecosystem made up of rabbits, grass, and weeds. The rabbits wander around randomly, and the grass and weeds grow randomly. When a rabbit bumps into some grass or weeds, it eats the grass and gains energy. If the rabbit gains enough energy, it reproduces. If it doesn't gain enough energy, it dies. The grass and weeds can be adjusted to grow at different rates and give the rabbits differing amounts of energy. The model can be used to explore the competitive advantages of these variables. ## HOW TO USE IT Click the SETUP button to set up the rabbits (red), grass (green), and weeds (violet). Click the GO button to start the simulation. The NUMBER slider controls the initial number of rabbits. The BIRTH-THRESHOLD slider sets the energy level at which the rabbits reproduce. The GRASS-GROW-RATE slider controls the rate at which the grass grows. The WEEDS-GROW-RATE slider controls the rate at which the weeds grow. The model's default settings are such that at first the weeds are not present (weeds-grow-rate = 0, weeds-energy = 0). This is so that you can look at the interaction of just rabbits and grass. Once you have done this, you can start to add in the effect of weeds. ## THINGS TO NOTICE Watch the COUNT RABBITS monitor and the POPULATIONS plot to see how the rabbit population changes over time. At first, there is not enough grass for the rabbits, and many rabbits die. But that allows the grass to grow more freely, providing an abundance of food for the remaining rabbits. The rabbits gain energy and reproduce. The abundance of rabbits leads to a shortage of grass, and the cycle begins again. The rabbit population goes through a damped oscillation, eventually stabilizing in a narrow range. The total amount of grass also oscillates, out of phase with the rabbit population. These dual oscillations are characteristic of predator-prey systems. 
Such systems are usually described by a set of differential equations known as the Lotka-Volterra equations. NetLogo provides a new way of studying predator-prey systems and other ecosystems. ## THINGS TO TRY Leaving other parameters alone, change the grass-grow-rate and let the system stabilize again. Would you expect that there would now be more grass? More rabbits? Change only the birth-threshold of the rabbits. How does this affect the steady-state levels of rabbits and grass? With the current settings, the rabbit population goes through a damped oscillation. By changing the parameters, can you create an undamped oscillation? Or an unstable oscillation? In the current version, each rabbit has the same birth-threshold. What would happen if each rabbit had a different birth-threshold? What if the birth-threshold of each new rabbit was slightly different from the birth-threshold of its parent? How would the values for birth-threshold evolve over time? Now add weeds by making the sliders WEEDS-GROW-RATE the same as GRASS-GROW-RATE and WEEDS-ENERGY the same as GRASS-ENERGY. Notice that the amount of grass and weeds is about the same. Now make grass and weeds grow at different rates. What happens? What if the weeds grow at the same rate as grass, but they give less energy to the rabbits when eaten (WEEDS-ENERGY is less than GRASS-ENERGY)? Think of other ways that two plant species might differ and try them out to see what happens to their relative populations. For example, what if a weed could grow where there was already grass, but grass couldn't grow where there was a weed? What if the rabbits preferred the plant that gave them the most energy? Run the model for a bit, then suddenly change the birth threshold to zero. What happens? 
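The out-of-phase predator-prey cycles mentioned above are the behavior the Lotka-Volterra equations capture. As a rough illustration (this sketches the equations themselves, not the NetLogo model; every parameter and initial value below is an assumption chosen for demonstration), a simple Euler integration in Python shows the two populations cycling:

```python
# Euler integration of the Lotka-Volterra predator-prey equations.
# All parameter and initial values are illustrative assumptions.
alpha, beta = 0.6, 0.025    # prey growth rate, predation rate
delta, gamma = 0.01, 0.5    # predator gain per prey eaten, predator death rate
prey, pred = 80.0, 20.0     # e.g. grass "units" and rabbits
dt = 0.01

for _ in range(10_000):     # simulate 100 time units
    dprey = alpha * prey - beta * prey * pred
    dpred = delta * prey * pred - gamma * pred
    prey += dprey * dt
    pred += dpred * dt

# Both populations remain positive and oscillate out of phase,
# like the curves in the POPULATIONS plot.
print(round(prey, 1), round(pred, 1))
```

Unlike this mean-field version, the agent-based model adds spatial structure, randomness, and a finite supply of grass, which is part of why its oscillations damp out rather than repeating exactly.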
## NETLOGO FEATURES

Notice that every black patch has a random chance of growing grass or weeds each turn, using the rule:

    if random-float 1000 < weeds-grow-rate [ set pcolor violet ]
    if random-float 1000 < grass-grow-rate [ set pcolor green ]

## RELATED MODELS

Wolf Sheep Predation is another interacting ecosystem with different rules.

## HOW TO CITE

If you mention this model or the NetLogo software in a publication, we ask that you include the citations below. For the model itself:

* Wilensky, U. (2001). NetLogo Rabbits Grass Weeds model. http://ccl.northwestern.edu/netlogo/models/RabbitsGrassWeeds. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.

Please cite the NetLogo software as:

* Wilensky, U. (1999). NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.

## COPYRIGHT AND LICENSE

Copyright 2001 Uri Wilensky.

![CC BY-NC-SA 3.0](http://ccl.northwestern.edu/images/creativecommons/byncsa.png)

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA. Commercial licenses are also available. To inquire about commercial licenses, please contact Uri Wilensky at [email protected]. This model was created as part of the projects: PARTICIPATORY SIMULATIONS: NETWORK-BASED DESIGN FOR SYSTEMS LEARNING IN CLASSROOMS and/or INTEGRATED SIMULATION AND MODELING ENVIRONMENT. The project gratefully acknowledges the support of the National Science Foundation (REPP & ROLE programs) -- grant numbers REC #9814682 and REC-0126227.
Infection Prevention and Control Recommendations for Minimizing the Risk of Cross-Infection Among People with CF What is cystic fibrosis? Cystic fibrosis (CF) is a rare, genetic disease — only about 30,000 people in the United States have CF. Cystic fibrosis is not contagious. People with CF have a defective gene that causes the body to produce unusually thick, sticky mucus that can clog the lungs, pancreas and other organs. This can lead to breathing problems and susceptibility to developing lung infections from germs that would not pose a risk to healthy children or adults who do not have CF. However, these germs can be particularly dangerous for people with CF, especially when spread from another person with CF. Although CF is a rare disease, in some schools there may be more than one person with CF present. Why are germs particularly dangerous for people with CF? The thick, sticky mucus that can clog the lungs also allows germs to thrive and multiply. For people with CF, this buildup makes them more susceptible to developing lung infections. Despite significant progress treating CF, infections remain a serious problem and can lead to worsening lung disease and death. Medical studies show that people with CF are at particular risk of spreading certain germs among others with the disease. This is known as cross-infection. How can you lower the risk of cross-infection? When there is more than one person with CF in your school, it is essential that they be kept a minimum of 6 feet (2 meters) apart from each other. Germs can spread as far as 6 feet through droplets released in the air when people cough or sneeze. If there is more than one person with CF in the same school or classroom, the recommendations below can help minimize the spread of germs between people with CF. These recommendations are based on recent research and have been reviewed by medical experts. 
- Minimize the time that two people with CF can spend in one place. A minimum 6-foot distance should be maintained at all times. Place people with CF in separate classrooms whenever possible. - If they must be in the same classroom, but at different times, make sure the students are assigned separate desks and/or work stations as far away as possible (a minimum of 6 feet) from the assigned location of the other student with CF. - Assign separate bathrooms and drinking fountains for students and staff members with CF. - Schedule the students with CF to be in other common gathering areas, such as the gym, at different times. - Assign lunch tables, lockers, etc. for all students with CF to be as far away as possible from the assigned locations of other students with CF. - Assign different locations for people with CF to go for their medications, or have the school nurse visit each student in their separate classrooms to administer the medications. - If more than one person with CF becomes ill while in school, one student can go to the health office, another to the principal’s office and a third to the counselor’s office. - If a student with CF is ill or needs to go to another room or office to get medications, the staff in that office should be notified prior to sending the student to the office to ensure that another person with CF is not present. - Encourage everyone to wash or clean their hands. Germs can spread when people touch something with germs already on it, like a doorknob or desk, and then touch their eyes, nose or mouth. - Everyone should clean their hands after coughing, sneezing or blowing their nose and after using common equipment (e.g., a pencil sharpener, lab equipment). This is especially important during the cold and flu season. - Make alcohol-based hand gel and/or soap and water readily available for all students and staff to use in the classrooms. - Encourage everyone to cover their cough. 
Germs can remain in the air on tiny droplets – ready to be breathed in. They can also remain on surfaces long after a person has coughed or sneezed on or near them. - Make tissues readily available and encourage people to cough or sneeze into a tissue and throw it away immediately before washing or cleaning hands. If a tissue is not available, encourage everyone to cough or sneeze into their inner elbow. - Encourage everyone to get vaccinated. Vaccinations help the body protect itself from germs, like the flu virus, which are especially dangerous for people with CF. For a list of what vaccinations to get and when to get them, visit the Centers for Disease Control and Prevention’s website. - Get Germ Smart – Read and download posters, animated videos and a fact sheet on germ basics that can be used in the classroom. - Tip Sheets – Find easy-to-reference ways to guard against germs in health care settings and everyday life. - Webcasts – Hear from the experts on infection prevention and control in CF. - Staying Healthy – Learn more about maintaining health while living with CF.
Dirofilaria immitis, more commonly known as the heartworm, is small and threadlike. Distributed by mosquito bite, heartworms can enter an animal’s body and move towards their heart. Mosquitoes become infected with tiny immature forms of the heartworm, called microfilariae, when they bite an infected animal. Inside the mosquito, the microfilariae develop into larvae. Parasitic roundworms, these tiny heartworms can infect dogs, cats, ferrets, and other pets and, in rare cases, human beings. The adult heartworm lives and breeds in the heart and lungs of its host, and causes damage to the surrounding lung and heart tissue. This may eventually result in serious illness for the host—even fatal conditions like congestive heart failure. Once considered a problem only for pets in the southern United States and other warm climates, heartworms have now been seen in all 50 states. Heartworm disease is dangerous. Protecting your pets with preventative measures and knowing the signs for early detection can help with pet safety and peace of mind for any pet lover. Test Pets for Heartworm Disease An important part of any complete heartworm prevention program is appropriate testing. If you think your pet already has heartworm disease, please take your pet to a pet health specialist or licensed vet. This testing process is to check pets for adult heartworms, and lets owners know if their pet is ready to start preventive medication. The vet or vet tech will take the pet’s blood and complete a variety of tests to detect specific antigens or antibodies in the blood stream. Antigen tests are usually most effective for canines. Antigen tests detect specific antigens from adult female heartworms, and are used with much success to detect canine heartworm infection. Antibody tests are more effectively used in felines. 
Felines have less of a chance of contracting heartworm disease, but in many cases, the heartworm will not travel to the lungs or heart, instead infecting cats in other areas of their body—resulting in brain infection or heart disease. It’s important to complete appropriate testing before administering medication. Preventing Heartworm Infection One of the most common ways to prevent heartworm infection is by placing one’s pets on a heartworm prevention medication. There are several heartworm prevention medications used by pet owners and veterinarians—from pills or chewable tablets, to topical rubs or solutions, to injections from a pet health professional. These preventative drugs are highly effective in preventing heartworm infection and disease and are sold under a variety of brand names. Please check with a pet health professional or licensed veterinarian to help decide which option is appropriate for a specific pet or animal companion, based on their needs, size, and medical history. Some of these drugs also double as anti-flea medication. Others work to control and prevent infections by roundworms, hookworms, and whipworms, as well as heartworms. Other medications can be used in tandem to kill ticks and fleas. Remember, like with any prescribed medication, it is vital for pet owners to follow complete dosage guidelines on the box or bottle. The American Heartworm Society estimates that only 25% of all pets on medication receive their full doses. The best preventative measure is to limit pet exposure to mosquitoes. Signs of Heartworm Disease Heartworms are transmitted by mosquitoes, and can infect both dogs and cats, among other animals. However, the infection and detection of heartworm disease differs in each animal species. Once an infected mosquito bites a dog, the larval heartworm enters the dog’s skin through a bite wound. 
Over the next several months, the worms migrate through the dog’s bloodstream to the heart, where they grow into adult heartworms and breed. Their offspring live in the blood of an infected dog where they mature and breed. Over time, these multiple heartworms can lodge themselves in the heart or blood vessels, which leads to health issues or death. For dogs, the first outward signs of heartworm disease may not be apparent until a year after infection. Symptoms of heartworm infection begin with a soft cough. As the disease progresses, the infected pet will have increased trouble with breathing, which can make exercise difficult. Other symptoms include a decreased appetite and weight loss. Once an infected mosquito bites a cat, the larval heartworm enters the cat’s bloodstream and migrates to the lungs or other cavities. These heartworms are smaller than those found in dogs or other large mammals, and generally take one month to mature into full-grown adults. The symptoms for a heartworm infected cat can mimic symptoms of many other diseases. These include difficulty with their breathing, coughing, vomiting, fainting, seizures, blindness, loss of appetite, and weight loss. Some of these symptoms can persist even after heartworms have been treated. Just like dogs and cats, ferrets and other small pets can become infected with heartworms. The signs of heartworm disease in ferrets or other small mammals are similar to those in dogs, but they develop more rapidly due to the size of the animal’s heart. While dogs may not show symptoms until they have many worms infecting their hearts, lungs, and blood vessels, just one worm can cause serious respiratory distress in a ferret or other small mammal. Symptoms can include coughing, fatigue, and labored or rapid breathing. Heartworm Disease Treatment If a dog is diagnosed with heartworms, treatment may be possible. 
However, before a veterinarian or medical professional will treat heartworms, the dog must be evaluated for other health concerns and organ function—the health of the heart, liver, and kidneys may help determine treatment. Check with your medical professional which treatment is right for your pet. Some of the treatments include injected medicine which can be used to treat even late-stage infection. Once the treatment occurs, heartworms die and are absorbed by the animal’s body and removed naturally. After treatment, the dog must rest for several weeks or months to give the body sufficient time to heal itself properly. When the heartworms begin to die, they break up into pieces and are eliminated. Very active pets may cause the dead heartworms to travel to the lungs, which can cause respiratory failure or even death. For Cats and Other Pets Heartworms can be very difficult on the systems of small mammals, and significantly problematic to a cat’s health system. The general prognosis for feline heartworm disease is ‘guarded,’ which means it varies on an individual basis. Treatment usually consists of monthly preventative heartworm medications. Similar therapies that dogs receive may not be appropriate for cats. A significant number of cats have developed complications such as pulmonary embolisms a few days after major treatment, therefore many therapies for feline heartworm are not recommended, but surgery can sometimes be an option. Contact a medical or veterinarian professional before deciding on treatment for feline heartworm disease. Once heartworm tests are negative, the treatment is considered a success, and one can move on to continuing preventative measures to ensure pet health. After a pet has been treated successfully and tested negative for heartworms, it is vital to keep the pet on a heartworm prevention medication. Heartworm preventive medications work to kill the larvae and growing heartworms. Most are given monthly. 
Remember to follow the prescription for a pet just as a human patient would follow a prescription from their own medical team. Consistent use of these medications is required for continued health. Many veterinarians recommend heartworm preventive medications to be given year round, because several of these medicines also treat and control other intestinal parasites as well. Another tip is to reduce your pet’s exposure to mosquitoes. Make your yard less hospitable to mosquitoes by removing any areas with pooling or stagnant water. Remember: if it has held water for a week, it may be producing mosquitoes. Check for clogged rain gutters or leaky outdoor faucets, any low spots on the property, open watering cans or catch barrels. Mosquitoes nest in shaded, protected areas. Cut low-lying brush and remove loose vegetation. Keep shrubs and trees well-trimmed and grasses short to prevent the proliferation of mosquitoes, fleas, and ticks. Note: Pets cannot spread heartworms from one to another.
When people are searching for a new pair of headphones, they usually spend a lot of time reading characteristics and specifications, but the truth is they pay more attention to physical characteristics, battery life, Bluetooth version, etc. The section where frequency response, sensitivity, and impedance are listed is often skipped and neglected, but these are very important features that affect headphone performance and sound quality. If you don't know how your headphones work, you won't be able to use and maintain them properly, and this is often one of the reasons why people complain about their headphones' performance. We have run into this problem so many times that we have decided to write this article and explain headphone impedance in detail. If you understand what impedance is and why it is important for your device, you will use your headphones correctly and be more satisfied with their sound.

What is Impedance?

The term impedance was used for the first time at the end of the 19th century and, by definition, it is the measure of the opposition of a circuit to current flowing through an electrical device. Impedance is denoted by Z and is measured in Ohms. When the voltage and current are constant (direct current), impedance reduces to simple resistance, but when alternating current runs through the device, it also includes a frequency-dependent component called reactance. Impedance describes the relationship that exists between voltage and current in a circuit that consists of capacitors, inductors, and resistors.

Impedance vs. Resistance

Impedance and resistance are commonly confused terms, and one of the reasons may be the fact that they are both measured in Ohms. However, these two terms don't have the same meaning, and they most certainly don't refer to the same thing. Impedance is a complex quantity with two dimensions.
It consists of two phenomena, resistance and reactance, which means resistance alone can never equal impedance; resistance is contained within impedance, which has both magnitude and phase. Resistance, on the other hand, is a real scalar that has only magnitude. It measures the opposition to electron movement among the atoms of a substance; the more freely electrons can move, the lower the resistance. When alternating current runs through a device, its sinusoidal wave is generated at a certain frequency. This affects the electrical components of the device, making their effective opposition to current vary with the power source frequency. This is why we can say that impedance is the frequency-domain ratio of voltage to current. When direct current runs through a device, there is no practical difference between impedance and resistance, because at zero frequency the reactive components contribute nothing.

What is Headphone Impedance?

Based on what we have previously said about impedance in general, we can say that headphone impedance is the measure of the opposition of the headphones to the audio source signal. It is also expressed in Ohms, and it is one of the characteristics on the specification list that is most often neglected. However, impedance is extremely important for anybody who cares about sound quality. Headphones can be divided into two categories based on their impedance (low- and high-impedance headphones), and depending on the group your headphones belong to, you have to use them differently. By this, we mean the devices the headphones are connected to, which are the power and audio sources. If you use your headphones with an appropriate device, you will achieve maximum sound quality, including maximum loudness.

Why is It Important to Know Headphone Impedance?

As we have mentioned, headphones can be divided by impedance into two groups: low-impedance and high-impedance headphones.
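To make the resistance/reactance relationship concrete, here is a small sketch (our own illustration, with arbitrary component values, not figures from any headphone specification) that computes the impedance magnitude and phase of a series RLC circuit at a given frequency:

```python
import math

def series_rlc_impedance(r_ohms, l_henries, c_farads, f_hz):
    """Impedance magnitude (ohms) and phase (degrees) of a series RLC circuit."""
    omega = 2 * math.pi * f_hz
    x_l = omega * l_henries        # inductive reactance, grows with frequency
    x_c = 1 / (omega * c_farads)   # capacitive reactance, shrinks with frequency
    reactance = x_l - x_c
    magnitude = math.hypot(r_ohms, reactance)   # sqrt(R^2 + X^2)
    phase_deg = math.degrees(math.atan2(reactance, r_ohms))
    return magnitude, phase_deg

# Example: 100 ohms, 100 mH, 1 uF at 1 kHz
mag, phase = series_rlc_impedance(100, 0.1, 1e-6, 1000)
```

At the circuit's resonant frequency the two reactances cancel, the phase goes to zero, and the impedance magnitude collapses to the plain resistance, which is exactly the "DC-like" behaviour described above.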
Different people have different theories about headphone impedance. Some think that low impedance means anything below 25 Ohms, while others think the threshold can go higher and still count as low impedance. We belong to the latter group, and in our opinion, any impedance lower than 50 Ohms can be considered low. So, let's see how you should use headphones depending on their impedance. Low-impedance headphones (16, 18, 32, 40 Ohms, etc.) are the most common on the market. In fact, most portable headphones (on-ear and over-ear headphones, earbuds, in-ear headphones) belong to this group, which is a consequence of their purpose. They are made to be used "on the go" while connected to portable electrical devices that serve as audio and power sources (smartphones, tablets, players, etc.). These are battery-powered devices, which means they can't supply your headphones with a lot of power. This is why they are paired with low-impedance headphones that don't require a powerful source to drive them. High-impedance headphones (everything from 50 to 600 Ohms) require more power to drive and are commonly used in music studios or by DJs. They are mostly made for professional use, but if you have a good amp that matches your headphones, you can also use high-impedance headphones at home. Just don't try to drive them with your phone or take them outside the house, because the result will be a great disappointment. In case you really want headphones that will perform at least decently with both types of sources, you can get, for example, a 32-Ohm version. These can be driven by both battery-powered devices and amplifiers. They won't be at their best in either situation, but they will most certainly sound better than 16-Ohm headphones with an amplifier or 600-Ohm headphones with a smartphone.
Impedance mismatch won't usually destroy your headphones or the audio source, but it is definitely important to obey the basic rules of impedance matching for better performance. When you use a battery-powered device with low-impedance headphones (for example, a player with a 3-5-volt battery), the signal voltage at the output is low, but the pairing can still create a relatively high current, and the headphones will deliver great sound. On the other hand, if you connect low-impedance headphones to a powerful amplifier, you risk blowing them out because they have a low power-handling threshold. Also, if you plug high-impedance headphones into a battery-powered device, the pairing won't be able to create enough current. The voltage will also be lower, and all of this will lead to very poor sound quality and low headphone volume.

Which Headphones Are the Better Choice?

Headphone impedance is significantly affected by the voice coil in the drivers. Low-impedance headphones have a thicker voice coil that is cheaper and easier to make, while high-impedance headphones have an extremely thin voice coil that can be thinner than a human hair and is more difficult to make. Thicker coils have fewer layers of wire, which makes them easy to make. This is one of the reasons why they are more widely available and much cheaper. Thinner coils have more windings and a smaller wire diameter that allows less air between the layers, making the winding tighter and the electromagnetic field more powerful. Generally, this kind of voice coil decreases the risk of distorted sound, which makes high-impedance headphones sound better. To be honest, they do sound clearer and more natural, while their bass is more present and nicely defined. Also, the soundstage of high-impedance headphones is wider than that of low-impedance headphones.
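As a rough illustration of why matching matters, the Ohm's-law arithmetic can be sketched like this. The 1 V RMS source level is an assumed, hypothetical figure, and real headphones are not purely resistive loads, so treat this as a back-of-the-envelope sketch only:

```python
def drive(voltage_rms, impedance_ohms):
    """Ohm's law: current (A) and power (mW) delivered into a resistive load."""
    current = voltage_rms / impedance_ohms          # I = V / Z
    power_mw = voltage_rms ** 2 / impedance_ohms * 1000  # P = V^2 / Z
    return current, power_mw

# A hypothetical ~1 V RMS phone output into 16-ohm vs 250-ohm headphones:
for z in (16, 250):
    current, power_mw = drive(1.0, z)
    print(f"{z:>3} ohms: {current * 1000:.1f} mA, {power_mw:.1f} mW")
    # 16 ohms: 62.5 mA, 62.5 mW
    # 250 ohms: 4.0 mA, 4.0 mW
```

The same source voltage pushes roughly fifteen times more power into the low-impedance load, which is why a weak battery-powered source leaves high-impedance headphones quiet.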
However, no matter what their build and manufacturing methods tell us, we still claim that it is more important to match the impedance correctly than to simply buy a new and expensive pair of high-impedance headphones. If you do decide to go high-impedance, we advise you to check some of the models made by Beyerdynamic or Sennheiser. Beyerdynamic makes headphone models with different impedance values, so you might even be able to find a good compromise between high- and low-impedance headphones at a reasonable price. Hello, my name is James Longman. I'm a writer and editor at AudioReputation. I disassembled my first portable AM/FM radio when I was only 8. At the age of 11, I burned the circuit board on my old boombox cassette player. I'm not going to explain how, but it was reckless and stupid. Since then, I have become much more careful around radios, boomboxes, and other audio devices (at least, I like to think so), but I have never lost my passion for audio equipment. Throughout the 20 years of my professional career, I've been working for various audio equipment manufacturers and even started building speakers on my own in my little workshop. I love the work we do here at AudioReputation. Testing, comparing, and evaluating all kinds of audio devices (speakers, soundbars, headphones, home theater systems, etc.) is something I truly enjoy. I try to be unbiased and give you my honest opinion on every piece of equipment I test. Still, you should take my reviews with a pinch of salt and always be just a little bit skeptical. The fact that I liked some speaker or soundbar doesn't mean that you are going to love it. If you have the opportunity, you should test it/hear it before buying it.
High-resolution distribution of bumblebees (Bombus spp.) in a mountain area marked by agricultural decline

Since the 1980s, bumblebee species have declined in Europe, partly because of agricultural intensification. Yet little is known about the potential consequences of agricultural decline on bumblebees. In most mountainous areas, agricultural decline from rural exodus is acute and alters landscapes as much as intensive farming does. Our study aims at providing a quantitative assessment of agricultural decline through its impact on landscapes, and at characterising bumblebee assemblages associated with land-use types of mountain regions. The studied area (6.2 km2) belongs to the Eyne valley in the French Pyrenees, known to host the exceptional number of 33 bumblebee species of the 45 found in continental France. We compare aerial photographs from 1953 and 2000 to quantify agricultural decline. We cross a bumblebee database (2849 observations) with land-use types interpreted from the aerial photographs from 2000. Comparison of land-use maps from 1953 and 2000 reveals a strong progression of woodland and urbanised areas, and a decline of agricultural land (pastures and crops), except for hayfields. Spatial correlation between the low-altitude agro-pastoral structure and the occurrence of bumblebee species shows that bumblebee species richness is highest in agro-pastoral land-uses (pastures and hayfields) and in the ski area, and poorest in woodland and urbanised areas. Urbanisation and agricultural decline, through increased woodland areas, could lead to a loss of bumblebee diversity in the future. To preserve high bumblebee richness, it is crucial to design measures to maintain open-land habitats and the landscape's spatial heterogeneity through agro-pastoral practices.
Author: Rachel Green
Illustrator: Irene Makapugay
Grade Level: K-1

Summary: In Charlie's Book, Green clearly describes what it is like for Charlie to be the new boy in school. During show-and-tell, each student is asked to share their individual talents or interests. Sadly, when Charlie demonstrates his ability to read without using his eyes, his classmates do not believe him and treat him unkindly. Charlie's teacher turns the situation into a teachable moment by reminding the class of all their unique differences and how they are able to learn from one another.

Element 2 - Respect for Others: Once the other children hear that Charlie cannot see, they realize that this difference is what makes him special. Thankfully, the students also recognize that even though Charlie is unable to read all the books in their classroom, there are still many things he can do. Charlie's classmates were all looking forward to having him teach them how to read with their hands too. --"We only see truly when using our hearts, then we all join together and not feel apart" (Green, 2010, p. 23).

Activity: I really like the idea from the book about using show-and-tell to allow students to share who they are and what they can do as a way to teach about differences and diversity. I do believe that it is important to follow that activity up with guided questions to remind the children of their similarities as well, in order to help promote respect for others.
Massive volumes of water circulate throughout the Atlantic Ocean and serve as the central drivers of Earth's climate. Now researchers have discovered that the heart of this circulation is not where they suspected. "The general understanding has been [that it's] in the Labrador Sea, which sits between the Canadian coast and the west side of Greenland," said Susan Lozier, a physical oceanographer at Duke University in Durham, North Carolina, who led the new research. "What we found instead was that … the bulk [of it] is taking place from the east side of Greenland all the way over to the Scottish shelf." The discovery will help improve global climate models.

Ocean in Motion

Water courses through the Atlantic Ocean in two layers. A shallow layer pulls warm water from the tropics north. This layer, which includes the Gulf Stream, helps keep winters in Western Europe relatively mild. As the warm waters travel to the North Atlantic, they cool and then sink, forming the second layer that spreads south. This conveyor belt of currents, known as the Atlantic Meridional Overturning Circulation, or AMOC, influences the climate by transporting heat and moving carbon from the atmosphere to the deep ocean. Although its flow is variable, the Intergovernmental Panel on Climate Change predicts the AMOC will slow down in the 21st century. And as the climate warms, the waters at high latitudes might not sink, or overturn, as much, slowing the AMOC. "We're trying to understand in the years and decades ahead, how sensitive is the overturning to these changes we expect at high latitude," Lozier said. That's why, back in 2007, Lozier initiated the Overturning in the Subpolar North Atlantic Program, or OSNAP, so dubbed thanks to a phrase commonly used by her then-19-year-old son.
The $32 million, five-year program capitalizes on the expertise of scientists from seven countries in what Lozier calls "an amazing international collaboration." In August of 2014, the researchers deployed the OSNAP observation system, a string of instruments stretching from the Labrador Sea on the Canadian coast to the Scottish shelf, to assess the temperature, flow, and salinity of water moving to and from the North Atlantic. After more than 21 months of making measurements, the researchers recovered the array's first round of data in April of 2016. The OSNAP array revealed that overturning circulation between the southwestern tip of Greenland and the Scottish shelf is about seven times greater than that in the Labrador Sea, the team reports today in the journal Science. Irregular overturning in the area east of Greenland also accounts for 88 percent of the variation in the AMOC, the researchers found. "We want this data then to provide ground-truthing to the models because the models are really the only ones that can provide predictions," said Lozier. "Now they have this benchmark. And if they can match what we see with these OSNAP observations then they're going to be able to give us better predictions for the years and decades ahead."
Chapter 7 - Series-parallel Combination Circuits

The goal of series-parallel resistor circuit analysis is to be able to determine all voltage drops, currents, and power dissipations in a circuit. The general strategy to accomplish this goal is as follows:
- Step 1: Assess which resistors in a circuit are connected together in simple series or simple parallel.
- Step 2: Re-draw the circuit, replacing each of those series or parallel resistor combinations identified in step 1 with a single, equivalent-value resistor. If using a table to manage variables, make a new table column for each resistance equivalent.
- Step 3: Repeat steps 1 and 2 until the entire circuit is reduced to one equivalent resistor.
- Step 4: Calculate total current from total voltage and total resistance (I=E/R).
- Step 5: Taking total voltage and total current values, go back to the last step in the circuit reduction process and insert those values where applicable.
- Step 6: From known resistances and total voltage / total current values from step 5, use Ohm's Law to calculate unknown values (voltage or current) (E=IR or I=E/R).
- Step 7: Repeat steps 5 and 6 until all values for voltage and current are known in the original circuit configuration. Essentially, you will proceed step-by-step from the simplified version of the circuit back into its original, complex form, plugging in values of voltage and current where appropriate until all values of voltage and current are known.
- Step 8: Calculate power dissipations from known voltage, current, and/or resistance values.

This may sound like an intimidating process, but it's much more easily understood through example than through description. In the example circuit above, R1 and R2 are connected in a simple parallel arrangement, as are R3 and R4.
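The reduction procedure above can also be sketched in a few lines of code. Here is a small Python cross-check using the example circuit's component values (R1=100 Ω, R2=250 Ω, R3=350 Ω, R4=200 Ω, and a 24 V source):

```python
def parallel(*rs):
    """Equivalent resistance of resistors in parallel: 1 / (sum of 1/R)."""
    return 1 / sum(1 / r for r in rs)

# Resistor and source values from the example circuit
r1, r2, r3, r4, e_total = 100, 250, 350, 200, 24

# Steps 1-3: reduce parallel pairs, then the series combination
r12 = parallel(r1, r2)        # 71.429 ohms
r34 = parallel(r3, r4)        # 127.27 ohms
r_total = r12 + r34           # 198.70 ohms

# Step 4: total current
i_total = e_total / r_total   # 120.78 mA

# Steps 5-7: work backwards (same current through both series sections)
e12, e34 = i_total * r12, i_total * r34   # 8.627 V and 15.37 V
i1, i2 = e12 / r1, e12 / r2               # 86.27 mA and 34.51 mA
i3, i4 = e34 / r3, e34 / r4               # 43.92 mA and 76.86 mA
```

Note that `i1 + i2` and `i3 + i4` each sum back to the 120.78 mA total, which is the same consistency check performed at the end of the worked example.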
Having been identified, these sections need to be converted into equivalent single resistors, and the circuit re-drawn: The double slash (//) symbols represent “parallel” to show that the equivalent resistor values were calculated using the 1/(1/R) formula. The 71.429 Ω resistor at the top of the circuit is the equivalent of R1 and R2 in parallel with each other. The 127.27 Ω resistor at the bottom is the equivalent of R3 and R4 in parallel with each other. Our table can be expanded to include these resistor equivalents in their own columns: It should be apparent now that the circuit has been reduced to a simple series configuration with only two (equivalent) resistances. The final step in reduction is to add these two resistances to come up with a total circuit resistance. When we add those two equivalent resistances, we get a resistance of 198.70 Ω. Now, we can re-draw the circuit as a single equivalent resistance and add the total resistance figure to the rightmost column of our table. Note that the “Total” column has been relabeled (R1//R2—R3//R4) to indicate how it relates electrically to the other columns of figures. The “—” symbol is used here to represent “series,” just as the “//” symbol is used to represent “parallel.” Now, total circuit current can be determined by applying Ohm’s Law (I=E/R) to the “Total” column in the table: Back to our equivalent circuit drawing, our total current value of 120.78 milliamps is shown as the only current here: Now we start to work backwards in our progression of circuit re-drawings to the original configuration. The next step is to go to the circuit where R1//R2 and R3//R4 are in series: Since R1//R2 and R3//R4 are in series with each other, the current through those two sets of equivalent resistances must be the same. 
Furthermore, the current through them must be the same as the total current, so we can fill in our table with the appropriate current values, simply copying the current figure from the Total column to the R1//R2 and R3//R4 columns: Now, knowing the current through the equivalent resistors R1//R2 and R3//R4, we can apply Ohm’s Law (E=IR) to the two right vertical columns to find voltage drops across them: Because we know R1//R2 and R3//R4 are parallel resistor equivalents, and we know that voltage drops in parallel circuits are the same, we can transfer the respective voltage drops to the appropriate columns on the table for those individual resistors. In other words, we take another step backwards in our drawing sequence to the original configuration, and complete the table accordingly: Finally, the original section of the table (columns R1 through R4) is complete with enough values to finish. Applying Ohm’s Law to the remaining vertical columns (I=E/R), we can determine the currents through R1, R2, R3, and R4 individually: Having found all voltage and current values for this circuit, we can show those values in the schematic diagram as such: As a final check of our work, we can see if the calculated current values add up as they should to the total. Since R1 and R2 are in parallel, their combined currents should add up to the total of 120.78 mA. Likewise, since R3 and R4 are in parallel, their combined currents should also add up to the total of 120.78 mA. You can check for yourself to verify that these figures do add up as expected. A computer simulation can also be used to verify the accuracy of these figures. The following SPICE analysis will show all resistor voltages and currents (note the current-sensing vi1, vi2, . . . “dummy” voltage sources in series with each resistor in the netlist, necessary for the SPICE computer program to track current through each path). 
These voltage sources will be set to have values of zero volts each so they will not affect the circuit in any way.

```
series-parallel circuit
v1 1 0
vi1 1 2 dc 0
vi2 1 3 dc 0
r1 2 4 100
r2 3 4 250
vi3 4 5 dc 0
vi4 4 6 dc 0
r3 5 0 350
r4 6 0 200
.dc v1 24 24 1
.print dc v(2,4) v(3,4) v(5,0) v(6,0)
.print dc i(vi1) i(vi2) i(vi3) i(vi4)
.end
```

I've annotated SPICE's output figures to make them more readable, denoting which voltage and current figures belong to which resistors.

```
v1          v(2,4)      v(3,4)      v(5)        v(6)
2.400E+01   8.627E+00   8.627E+00   1.537E+01   1.537E+01
Battery     R1 voltage  R2 voltage  R3 voltage  R4 voltage
voltage

v1          i(vi1)      i(vi2)      i(vi3)      i(vi4)
2.400E+01   8.627E-02   3.451E-02   4.392E-02   7.686E-02
Battery     R1 current  R2 current  R3 current  R4 current
voltage
```

As you can see, all the figures agree with our calculated values.

- To analyze a series-parallel combination circuit, follow these steps:
- Reduce the original circuit to a single equivalent resistor, re-drawing the circuit in each step of reduction as simple series and simple parallel parts are reduced to single, equivalent resistors.
- Solve for total resistance.
- Solve for total current (I=E/R).
- Determine equivalent resistor voltage drops and branch currents one stage at a time, working backwards to the original circuit configuration again.

Published under the terms and conditions of the Design Science License
Selecting content on a web page with XPath

Overview

Teaching: 30 min
Exercises: 15 min

Questions
- How can I select a specific element on a web page?
- What is XPath and how can I use it?

Objectives
- Introduce XPath queries
- Explain the structure of an XML or HTML document
- Explain how to view the underlying HTML content of a web page in a browser
- Explain how to run XPath queries in a browser
- Introduce the XPath syntax
- Use the XPath syntax to select elements on this web page

Before we delve into web scraping proper, we will first spend some time introducing some of the techniques that are required to indicate exactly what should be extracted from the web pages we aim to scrape. XPath (which stands for XML Path Language) is an expression language used to specify parts of an XML document. XPath is rarely used on its own; rather, it is used within software and languages that are aimed at manipulating XML documents, such as XSLT, XQuery or the web scraping tools that will be introduced later in this lesson. XPath can also be used in documents with a structure that is similar to XML, like HTML. XML and HTML are markup languages. This means that they use a set of tags or rules to organise and provide information about the data they contain. This structure helps to automate processing, editing, formatting, displaying, printing, etc. of that information. XML documents store data in plain text format. This provides a software- and hardware-independent way of storing, transporting, and sharing data. The XML format is an open format, meant to be software agnostic. You can open an XML document in any text editor and the data it contains will be shown as it is meant to be represented. This allows for exchange between incompatible systems and easier conversion of data.

XML and HTML

Note that HTML and XML have a very similar structure, which is why XPath can be used almost interchangeably to navigate both HTML and XML documents.
In fact, HTML5 documents can be serialized as fully-formed XML documents. In a sense, HTML is like a particular dialect of XML. An XML document follows basic syntax rules:

- An XML document is structured using nodes, which include element nodes, attribute nodes and text nodes
- XML element nodes must have an opening and closing tag, e.g. the opening tag `<catfood>` and the closing tag `</catfood>`
- XML tags are case sensitive, e.g. `<catfood>` does not equal `<catFood>`
- XML elements must be properly nested:

```
<catfood>
  <manufacturer>Purina</manufacturer>
  <address>12 Cat Way, Boise, Idaho, 21341</address>
  <date>2019-10-01</date>
</catfood>
```

- Text nodes (data) are contained inside the opening and closing tags
- XML attribute nodes contain values that must be quoted, e.g. `<catfood type="basic">`

XPath is written using expressions, and when these expressions are evaluated on XML documents they return an object containing the node(s) that you aim to select. Contrary to a flat text document, XML data is structured, as it is organized in nodes and subnodes. Therefore, when using XPath, we are not querying raw text or data values like we would do using Regular Expressions, for example. Instead, XPath makes use of the fact that XML documents are structured and navigates through the node structure to select the data that we are looking for. XPath is typically used to select and compare nodes, not edit them. To manipulate or edit nodes, another language such as XQuery would be used instead.

XPath assumes structured data

We can think of using XPath as similar to searching a library catalogue using the advanced search function. In a catalogue, we can take advantage of the fact that bibliographic information has been properly structured in the database by specifying which metadata fields we want to query. For example, if we are looking for books about Shakespeare but not those written by him, we can use the advanced search function to look for that name in the "title" field only.
Contrary to Regular Expressions, this means that we don't have to know in advance what the data we are looking for looks like; we just need to know in which node(s) (or fields) it resides. Now let's start using XPath.

Navigating through the HTML node tree using XPath

A popular way to represent the structure of an XML or HTML document is the node tree. In an HTML document, everything is a node:
- The entire document is a document node
- Every HTML element is an element node
- The text inside HTML elements are text nodes

The nodes in such a tree have a hierarchical relationship to each other. We use the terms parent, child and sibling to describe these relationships:
- In a node tree, the top node is called the root (or root node)
- Every node has exactly one parent, except the root (which has no parent)
- A node can have zero, one or several children
- Siblings are nodes with the same parent
- The sequence of connections from node to node is called a path

Paths in XPath are defined using slashes (/) to separate the steps in a node connection sequence, much like URLs or Unix directories. In XPath, all expressions are evaluated based on a context node. The context node is the node from which a path starts. The default context is the root node, indicated by a single slash (/).
The most useful path expressions are listed below:

| Expression | Description |
|---|---|
| `nodename` | Select all nodes with the name "nodename" |
| `/` | A beginning single slash indicates a select from the root node; subsequent slashes indicate selecting a child node from the current node |
| `//` | Select direct and indirect child nodes in the document from the current node, which gives us the ability to "skip levels" |
| `.` | Select the current context node |
| `..` | Select the parent of the context node |
| `@` | Select attributes of the context node |
| `[@attribute = 'value']` | Select nodes with a particular attribute value |
| `text()` | Select the text content of a node |
| `\|` | Pipe; chains expressions and brings back results from either expression, think of a set union |

Navigating through a webpage with XPath using a browser console

We will use the HTML code that describes this very page you are reading as an example. By default, a web browser interprets the HTML code to determine what markup to apply to the various elements of a document, and the code is invisible. To make the underlying code visible, all browsers have a function to display the raw HTML content of a web page.

Display the source of this page

Using your favourite browser, display the HTML source code of this page. Tip: in most browsers, all you have to do is right-click anywhere on the page and select the "View Page Source" option ("Show Page Source" in Safari). Another tab should open with the raw HTML that makes up this page. See if you can locate its various elements, and this challenge box in particular.

Using the Safari browser

If you are using Safari, you must first turn on the "Develop" menu in order to view the page source, and to use the functions that we will use later in this section. To do so, navigate to Safari > Preferences and in the Advanced tab select the "Show Develop menu in menu bar" option.
Note: In recent versions of Safari you must first turn on the "Develop" menu (in Preferences) and then navigate to Develop > Show Page Source.

The HTML structure of the page you are currently reading looks something like this (most text and elements have been removed for clarity):

```
<!doctype html>
<html lang="en">
  <head>
    (...)
    <title>Selecting content on a web page with XPath</title>
  </head>
  <body>
    (...)
  </body>
</html>
```

We can see from the source code that the title of this page is in a title element that is itself inside the head element, which is itself inside an html element that contains the entire content of the page. Say we wanted to tell a web scraper to look for the title of this page; we would use this information to indicate the path the scraper would need to follow as it navigates through the HTML content of the page to reach the title element. XPath allows us to do that.

Display the console in your browser

- In Firefox, use the Tools > Web Developer > Web Console menu item.
- In Safari, use the Develop > Show Error Console menu item. If your Safari browser doesn't have a Develop menu, you must first enable this option in the Preferences, see above.

Here is what the console looks like in the Firefox browser. For now, don't worry too much about error messages if you see any in the console when you open it. The console should display a prompt with a > character (>> in Firefox) inviting you to type commands. XPath queries can be run in the console using the syntax $x("XPATH_QUERY"), for example:

$x("/html/head/title/text()")

This should return something similar to

<- Array [ #text "Selecting content on a web page with XPath" ]

The output can vary slightly based on the browser you are using. For example in Chrome, you have to "open" the returned object by clicking on it in order to view its contents. Let's look closer at the XPath query used in the example above: /html/head/title/text(). The first slash indicates the root of the document.
With that query, we told the browser to:

| Expression | Explanation |
|---|---|
| `/` | Start at the root of the document... |
| `html` | ... navigate to the `html` node ... |
| `head` | ... then to the `head` node inside it ... |
| `title` | ... then to the `title` node inside that ... |
| `text()` | ... and select the text node contained in that element |

Using this syntax, XPath thus allows us to determine the exact path to a node.

Select the "Introduction" title

Write an XPath query that selects the "Introduction" title above and try running it in the console. Tip: if a query returns multiple elements, the syntax element[1] can be used to select the first one. Note that XPath uses one-based indexing; therefore the first element has index 1, the second has index 2, etc.

The query should produce something similar to

<- Array [ <h1#introduction> ]

Before we look into other ways to reach a specific HTML node using XPath, let's start by looking closer at how nodes are arranged within a document and what their relationships with each other are. For example, to select all the blockquote nodes of this page, we can write

$x("/html/body/div/blockquote")

This produces an array of objects:

<- Array [ <blockquote.objectives>, <blockquote.callout>, <blockquote.callout>, <blockquote.challenge>, <blockquote.callout>, <blockquote.callout>, <blockquote.challenge>, <blockquote.challenge>, <blockquote.challenge>, <blockquote.keypoints> ]

This selects all the blockquote elements that are under html/body/div. If we want instead to select all blockquote elements in this document, we can use the // syntax instead:

$x("//blockquote")

This produces a longer array of objects:

<- Array [ <blockquote.objectives>, <blockquote.callout>, <blockquote.callout>, <blockquote.challenge>, <blockquote.callout>, <blockquote.callout>, <blockquote.challenge>, <blockquote.solution>, <blockquote.challenge>, <blockquote.solution>, 3 more… ]

Why is the second array longer? If you look closely into the array that is returned by the $x("//blockquote") query above, you should see that it contains objects like <blockquote.solution> that were not included in the results of the first query. Why is this so?
Tip: Look at the source code and see how the challenge and solution elements are organised.

We can use the class attribute of certain elements to filter down results. For example, looking at the list of blockquote elements returned by the previous query, and by looking at this page’s source, we can see that the blockquote elements on this page are of different classes (challenge, solution, callout, etc.). To refine the above query to get all the blockquote elements of the challenge class, we can type:

$x("//blockquote[@class = 'challenge']")

which returns:

Array [ <blockquote.challenge>, <blockquote.challenge>, <blockquote.challenge>, <blockquote.challenge> ]

Select the “Introduction” title by ID
In a previous challenge, we were able to select the “Introduction” title because we knew it was the first h1 element on the page. But what if we didn’t know how many such elements were on the page? In other words, is there a different attribute that allows us to uniquely identify that title element? Using the path expressions introduced above, rewrite your XPath query to select the “Introduction” title without relying on its position.
- Look at the source of the page or use the “Inspect element” function of your browser to see what other information would enable us to uniquely identify that element.
- The syntax for selecting an element by one of its attributes looks like div[@id = 'mytarget'].
Your query should produce something similar to:
<- Array [ <h1#introduction> ]

Select this challenge box
- In principle, id attributes in HTML are unique on a page. This means that if you know the id of the element you are looking for, you should be able to construct an XPath query that looks for this value without having to worry about where in the node tree the target element is located.
- The syntax for selecting an element by one of its attributes looks like div[@id = 'mytarget'].
- Remember that XPath queries are relative to a context node, and by default that node is the root node.
- Use the // syntax to select elements regardless of where they are in the tree.
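The attribute-filter syntax [@attr='value'] is also part of ElementTree's XPath subset, so the same filtering can be sketched offline (again over an assumed miniature document, not the real page):

```python
# Hedged sketch: filtering elements by attribute value with [@attr='value'].
import xml.etree.ElementTree as ET

doc = ET.fromstring("""<html><body><div>
  <h1 id="introduction">Introduction</h1>
  <blockquote class="challenge"><p>try it</p></blockquote>
  <blockquote class="callout"><p>tip</p></blockquote>
  <blockquote class="challenge"><p>try this too</p></blockquote>
</div></body></html>""")

# All blockquotes of class "challenge", wherever they are in the tree:
challenges = doc.findall(".//blockquote[@class='challenge']")
print(len(challenges))

# Selecting the Introduction title by its id rather than by its position:
intro = doc.find(".//h1[@id='introduction']")
print(intro.text)
```

Selecting by id rather than by position makes the query robust to elements being added or reordered elsewhere on the page.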
- The syntax to select the parent element relative to a context node is ..

Make sure you select this entire challenge box. If the result of your query displays only the title of this box, have a second look at the HTML structure of the document and try to figure out how to “expand” your selection to the entire challenge box.

Let’s have a look at the HTML code of this page, around this challenge box (using the “View Source” option in our browser). The code looks something like this:

<blockquote class="challenge">
    <h2 id="select-this-challenge-box">Select this challenge box</h2>
    (...)
</blockquote>

We know that the id attribute should be unique, so we can use this to select the h2 element inside the challenge box:

$x("//h2[@id = 'select-this-challenge-box']/..")[0]

This should return something like:

<- <blockquote class="challenge">

Let’s walk through that syntax:
- $x(" : this function tells the browser we want it to execute an XPath query.
- // : look anywhere in the document…
- h2 : … for an h2 element …
- [@id = 'select-this-challenge-box'] : … that has an id attribute set to 'select-this-challenge-box' …
- /.. : … and select the parent node of that h2 element.
- ") : this is the end of the XPath query.
- [0] : select the first element of the resulting array (since $x() returns an array of nodes and we are only interested in the first one).

By hovering on the object returned by your XPath query in the console, your browser should helpfully highlight that object in the document, enabling you to make sure you got the right one.

Advanced XPath syntax
FIXME: All the content below is from the original XPath lesson. Adapt content to use current example.

Operators are used to compare nodes. There are mathematical operators and boolean operators. Operators can give you boolean (true/false) values as a result.
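The parent step .. also exists in ElementTree's XPath subset (with the caveat that you cannot climb above the element you started the search from). A hedged sketch of the same "find the uniquely identified h2, then take its parent" trick, over an assumed miniature document:

```python
# Hedged sketch: selecting a node's parent with /.. to grab the whole
# challenge box once its uniquely identified h2 title has been found.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""<html><body><div>
  <blockquote class="challenge">
    <h2 id="select-this-challenge-box">Select this challenge box</h2>
    <p>(...)</p>
  </blockquote>
</div></body></html>""")

# Find the h2 by id, then step up to its parent element:
box = doc.find(".//h2[@id='select-this-challenge-box']/..")
print(box.tag, box.get("class"))
```

As in the browser, the query lands on the enclosing blockquote, not just the title inside it.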
Here are some useful ones:
- = : equivalent comparison, can be used for numeric or text values
- != : not-equivalent comparison
- >, >= : greater than, greater than or equal to
- <, <= : less than, less than or equal to

Path Expression : Expression Result
- html/body/div/h3/@id='exercises-2' : Does exercise 2 exist?
- html/body/div/h3/@id!='exercises-4' : Does exercise 4 not exist?
- //h1/@id='references' or @id='introduction' : Is there an h1 with id references or introduction?

Predicates are used to find a specific node or a node that contains a specific value. Predicates are always embedded in square brackets, and are meant to provide additional filtering information to bring back nodes. You can filter on a node by using operators or functions.
- [1] : select the first node
- [last()] : select the last node
- [last()-1] : select the last but one node (also known as the second-to-last node)
- [position()<3] : select the first two nodes; note that positions start at 1, not 0
- [@lang] : select nodes that have a 'lang' attribute
- [@lang='en'] : select all the nodes that have a 'lang' attribute with a value of 'en'
- [price>15.00] : select all nodes that have a price node with a value greater than 15.00

Path Expression : Expression Result
- //h1[2] : select the 2nd h1
- //h1[@id='references' or @id='introduction'] : select the h1 with id references or introduction

XPath wildcards can be used to select unknown nodes:
- * : matches any element node
- @* : matches any attribute node
- node() : matches any node of any kind
- //*[@class='solution'] : select all elements with class attribute 'solution'

XPath can do in-text searching using functions and also supports regular expressions with its matches() function. Note: in-text searching is case-sensitive!
- contains() : matches all author nodes whose text contains "Matt" (case-sensitive), e.g. //author[contains(., 'Matt')]
- starts-with() : matches all author nodes whose text starts with "G" (case-sensitive), e.g. //author[starts-with(., 'G')]
- ends-with() : matches all author nodes whose text ends with "w" (case-sensitive), e.g. //author[ends-with(., 'w')]
- matches() : regular expression matching (requires XPath 2.0)

Complete syntax: XPath Axes
XPath Axes provide a fuller syntax of how to use XPath.
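A few of these predicates can be tried in the same offline sketch; ElementTree's XPath subset understands numeric positions and last():

```python
# Hedged sketch: position predicates [1], [last()] and [last()-1]
# over an assumed miniature list.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""<ul>
  <li>first</li>
  <li>second</li>
  <li>third</li>
</ul>""")

print(doc.find("li[1]").text)         # XPath indexing starts at 1
print(doc.find("li[last()]").text)    # the last li
print(doc.find("li[last()-1]").text)  # the second-to-last li
```

Note the contrast with most programming languages: [1] selects the first node, not the second.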
Provides all of the different ways to specify the path by describing more fully the relationships between nodes and their connections. The XPath specification describes 13 different axes:
- self : the context node itself
- child : the children of the context node
- descendant : all descendants (children, grandchildren, and so on)
- parent : the parent (empty if at the root)
- ancestor : all ancestors from the parent to the root
- descendant-or-self : the union of descendant and self
- ancestor-or-self : the union of ancestor and self
- following-sibling : siblings to the right
- preceding-sibling : siblings to the left
- following : all following nodes in the document, excluding descendants
- preceding : all preceding nodes in the document, excluding ancestors
- attribute : the attributes of the context node

For example:
- //h1[@id='introduction']/following-sibling::h1 : select all h1 following siblings of the h1 introduction
- //h1/following-sibling::h1 : select all h1 following siblings
- //@id : select all id attribute nodes

Oftentimes, the elements we are looking for on a page have no ID attribute or other uniquely identifying features, so the next best thing is to aim for neighboring elements that we can identify more easily and then use node relationships to get from those easy-to-identify elements to the target elements. For example, the node tree image above has no uniquely identifying feature like an ID attribute. However, it is just below the section header “Navigating through the HTML node tree using XPath”. Looking at the source code of the page, we see that that header is an h2 element with an id attribute we can target.

FIXME: add more XPath functions such as concat() and normalize-space().
FIXME: mention XPath Checker for Firefox.
FIXME: Firefox sometimes cleans up the HTML of a page before displaying it, meaning that the DOM tree we can access through the console might not reflect the actual source code. In particular, <tbody> elements are typically not reliable. The Scrapy documentation has more on the topic.

XML and HTML are markup languages. They provide structure to documents.
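ElementTree's XPath subset has no axis syntax such as following-sibling, but the same "aim for a neighbor you can identify" idea can be sketched manually: locate the identifiable header, then walk its parent's children to reach the element right after it. The element names and id below are assumptions for illustration only:

```python
# Hedged sketch: emulating the following-sibling axis by indexing into
# the parent's children, since ElementTree stores no parent pointers
# and supports no axis syntax.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""<div>
  <h2 id="node-tree">Navigating through the HTML node tree using XPath</h2>
  <img src="node-tree.png"/>
  <p>Some following text.</p>
</div>""")

# Build a child -> parent map, since elements do not know their parents.
parent = {child: p for p in doc.iter() for child in p}

header = doc.find(".//h2[@id='node-tree']")
siblings = list(parent[header])                # all children of the header's parent
target = siblings[siblings.index(header) + 1]  # the node right after the header
print(target.tag)
```

In a full XPath engine, the equivalent would be a single following-sibling:: step; the manual version makes the underlying parent/child/sibling relationships explicit.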
XML and HTML documents are made out of nodes, which form a hierarchy. The hierarchy of nodes inside a document is called the node tree. Relationships between nodes are: parent, child, sibling. XPath queries are constructed as paths going up or down the node tree. XPath queries can be run in the browser using the $x() function.
In an increasingly globalized and mediatized world, in which mental illness is one of society’s most discussed cultural artifacts, Colleen Patrick-Goudreau’s words ring out: “If we don’t have time to be sick, then we have to make time to be healthy”. With the prevalence of mental health problems, it is clear why. Mental health issues are one of the leading causes of the overall disease burden globally, according to the World Health Organisation. One study reported that mental health is the primary source of disability worldwide, causing over 40 million years of disability in 20 to 29-year-olds. Compared to previous generations, mental illness is now said to surpass the effects of the Black Death. The root causes of the unprecedented rise in people directly affected by mental illness, and the cost of this, can be considered across at least three levels of analysis. If we don’t have time to be sick, then we have to make time to be healthy. — Colleen Patrick-Goudreau At the first level of analysis, the root cause of mental illness is an amalgamation of heredity, biology, environmental stressors, and psychological trauma. Notions of specific genes being responsible for illness have been supplanted by those of genetic complexity, where various genes operate in concert with non-genetic factors to affect mental illness. That is, health-relevant biology and mental health impact each other in a complex interplay, which is inherently social. Despite the importance of understanding the social underpinnings of biological risk factors for mental illness, there is a relative paucity of research investigating this topic. Research that does exist is nevertheless engrossing. For example, one study, of many, found that social isolation leads to increased risk of coronary heart disease.
Since low levels of social integration are related to higher levels of C-reactive protein, a marker of inflammation related to coronary heart disease, C-reactive protein is posited to be a biological link between social isolation and coronary heart disease. Moreover, social support affects physical perception. In a landmark study, researchers demonstrated that people accompanied by a supportive friend, or those who imagined a supportive friend, estimated a hill to be less steep compared to people who were alone. Mental health, like physical health, is more than the sum of functioning or malfunctioning parts. At the second level of analysis, the complex bio-social interplay scaffolding mental illness points to the fundamentally chemical underpinnings of human thinking and emotion. With recent advances in neuroscience like CLARITY, we are now able to make the brain optically transparent, without having to section or reconstruct it, in order to examine its neuronal networks, subcellular structures, and more. In short, we can examine mental illness from a biological perspective. The depth and complexity of the bio-social root of mental illness, however, paints a more nuanced picture than discussed thus far. With such pioneering work comes an increasingly popular assumption that the brain is the most important level at which to analyze human behavior. In this vein, mental illness perpetuates itself by virtue of the fact that people often consider it to be biologically determined. In turn, a ‘trait-like’ view of mental illness establishes a status quo of mental health stigma by reducing empathy. Such explanations overemphasize constant factors such as biology and underemphasize modulating factors such as the environment. At the third level of analysis, the obsession with seeing mental health in terms of mental illness reveals the fallible assumption that mental health is simply the absence of mental disorder.
However, the problematic landscape of mental health draws on a far wider set of working assumptions. That is, mental health, like physical health, is more than the sum of the functioning or malfunctioning parts. It is an overall well-being that must be considered in light of unique differences between physical health, cognition, and emotions, which can be lost in a solely global evaluation. So, why do we as a society ponder solving mental illness, which should have been targeted long ago, far more than we consider improving mental health? In part, because when we think of mental health, we think of raising the mean positive mental health of a population more than closing the implementation gap between prevention, promotion, and treatment. Cumulatively, social environments are the lubricating oil to biological predispositions that influence mental health, such that mental health and physical health should be considered holistically. In this vein, national mental health policies should not be solely concerned with mental disorders, to the detriment of mental health promotion. It is worth considering how mental health issues can be targeted using proactive behavioral programs. To achieve this, it is pivotal to involve all relevant government sectors, such as the education, labor, justice, and welfare sectors. In a diverse range of existing players, many nonprofits’, educational institutions’, and research groups’ efforts contribute to the solution landscape of mental health promotion. In Ireland, for example, schools have mental health promotional activities such as breathing exercises and anger management programs. Nonprofits around the world are increasingly seeing the value of community development programmes and capacity building (strengthening the skills of communities so that they can overcome the causes of their isolation). In addition, businesses are incorporating stress management into their office culture.
We think of raising the mean positive mental health of a population, more than closing the implementation gap between prevention, promotion and treatment. The pursuit to empower people to help themselves joins up these social ventures to teach us that promoting mental health is optimized when it is preventative, occurring before mental illness emerges, and when it is linked to practical skills within a community. Furthermore, these social ventures exemplify how different types of efforts (government, nonprofit, business, etc.) cater to different populations, from children to corporates. While these social ventures bring hope for the future and underscore the importance of sustainable change, there are still too few programs effectively targeting people who want to maximize already existent positive mental health, not just to resolve or cope with mental health issues. If we continue to take such pride in our successful problem finding and solving of mental illness that we ignore mental illness prevention and mental health promotion, we are at risk of increasing the very problem we are trying to solve. Heffner, K., Waring, M., Roberts, M., Eaton, C., & Gramling, R. (2011). Social isolation, C-reactive protein, and coronary heart disease mortality among community-dwelling adults. Social Science & Medicine, 72(9), 1482-1488. doi: 10.1016/j.socscimed.2011.03.016 Lozano, R., Naghavi, M., Foreman, K., Lim, S., Shibuya, K., & Aboyans, V. et al. (2012). Global and regional mortality from 235 causes of death for 20 age groups in 1990 and 2010: a systematic analysis for the Global Burden of Disease Study 2010. The Lancet, 380(9859), 2095-2128. doi: 10.1016/s0140-6736(12)61728-0 Schnall, S., Harber, K., Stefanucci, J., & Proffitt, D. (2008). Social support and the perception of geographical slant. Journal of Experimental Social Psychology, 44(5), 1246-1255.
doi: 10.1016/j.jesp.2008.04.011 This guest article originally appeared on the award-winning health and science blog and brain-themed community, BrainBlogger: Mental Health is Not Just the Absence of Mental Illness.
Popular diets across the world typically focus on the right balance of essential components like protein, fat, and carbohydrates. These items are called macronutrients, and we consume them in relatively large quantities. However, micronutrients often receive less attention. Micronutrients are chemicals, including vitamins and minerals, that our bodies require in very small quantities. Common mineral micronutrients include zinc, iron, manganese, magnesium, potassium, copper, and selenium. A recent study published in Crop Science examined the mineral micronutrient content of crops grown in the province of Saskatchewan, Canada. The study was conducted jointly by the University of Saskatchewan and North Dakota State University. The researchers examined four types of grain legumes (pulses)--field peas, lentils, chickpeas, and common bean. Although these legumes have up to twice the micronutrients as cereals, according to Tom Warkentin, professor of plant breeding at the University of Saskatchewan, they are not cultivated on the same scale as cereals in most countries. Therefore, grain legume crops are often overlooked as potentially valuable sources of micronutrients. Diets that do not provide adequate amounts of micronutrients lead to a variety of diseases that affect most parts of the human body. Warkentin says, "Iron deficiency is the most common, followed by zinc, carotenoids, and folate." The study found that genetic characteristics (genotype) as well as environmental conditions--such as soil properties and local climate--can affect the micronutrient content of grain legumes. The researchers measured micronutrient levels by a technique known as atomic absorption spectrometry. According to Warkentin, "In the case of selenium, we found that environmental conditions are more important than genotype." 
Warkentin notes, "A 100-gram (3 ½-ounce) serving of any one of the four grain legume crops studied provided a substantial portion of the recommended daily allowance (RDA) of iron, zinc, selenium, magnesium, manganese, copper, and nickel." The serving size was based on the dry weight of the grain legumes. He adds that lentils were the best source of iron, while chickpeas and common bean were higher in magnesium. Calcium was the only key micronutrient that these crops lacked. Interestingly, most of the crops studied were high in selenium, with chickpeas and lentils being the best sources. Selenium is an important but often overlooked micronutrient. Selenium deficiency can lead to diseases that weaken heart muscles and cause breakdown of cartilage. It can also give rise to hypothyroidism, since selenium is a required chemical in the production of thyroid hormone. Warkentin concludes, "Increased production and consumption of grain legume crops should be encouraged by agriculturalists and dietitians around the world." Since grain legume crops don't require nitrogen-based fertilizers, which are derived from fossil fuels, they are very sustainable. Warkentin also says, "Grain legume crops are highly nutritious. In addition to the micronutrients described in this research, they also contain 20-25% protein, 45-50% slowly digestible starch, soluble and insoluble fiber, and are low in fat." Access the article here:
There are more than 1,000 species of begonia worldwide. Most begonias aren't hardy in cold temperatures, making them garden annuals. If grown indoors under the right conditions, begonias can also live as perennials. Some begonias can grow up to 12 feet tall and feature small flowers in a wide variety of colors, including pink, white, red, orange, yellow, peach and bi-colors. Most begonias thrive in part shade and well-draining soil. If planted in containers, they tend to grow well on porches and in other sheltered areas. If grown indoors, choose a location with indirect sunlight. When planting begonias in the garden, first till the soil. Dig a shallow hole and place the begonia tuber inside it. The tuber should sit just slightly below the soil line. If planted deeper, the tuber is more apt to rot. Water well. When shoots begin popping up, place a layer of peat moss over the tubers. Water begonias only when the soil begins to dry. Keeping the soil too moist will rot the plant. Because begonia leaves are prone to diseases caused by moisture, water in the morning so leaves have plenty of time to dry off. It also helps to water at the base of the plant so the leaves do not get wet. Do not cultivate around begonias, as this can destroy their root system. To fertilize, use a balanced liquid fertilizer, such as 10-10-10. Autumn and Winter Care Immediately after the first frost in the autumn, dig up begonia tubers. Cut off all foliage with a pair of hand pruners. Allow the tubers to dry for several days on a layer of newspapers in a dry, dark location. Place the dry tubers in a paper bag and cover them with dry peat moss or sawdust. Store the bag at about 50 degrees Fahrenheit. After the last threat of frost in the spring, replant the tubers. Evergreen flowering begonias can stay in a pot year round. However, they must come indoors if they normally live outside. Bushy types of begonias, such as wax begonias, are also suitable for growing indoors during cool weather.
Pests and Diseases Begonias are prone to diseases such as powdery mildew, Botrytis blight and stem rot. Spider mites, mealy bugs, scales, slugs and snails are common pests of begonia plants.
The events of the English Revolution: The jury system was developed by King Henry II. He replaced feudal justice with a grand jury, courts, and jury trials. He wanted to strengthen the authority of royalty, but he furthered democracy instead. The Magna Carta (1215) limited royal power and stated that the king could not put a free person in jail without judgment by his peers, and could not levy taxes without asking the Great Council. King John was forced to sign this document by feudal lords who felt he was a despot and that he violated their rights. The Magna Carta came into effect for all English people and was the cornerstone of their democracy. Model Parliament (1295): King Edward I allowed middle-class representatives into the Great Council so he could place taxes upon them and still have the loyalty of the wealthy middle class. The Great Council became known as the Model Parliament because it was the model for England's future legislature. Since both the aristocrats and the commoners had representatives, the Parliament split into two houses: the hereditary House of Lords and the elected House of Commons. English Common Law: Judges decided to base their decisions on similar cases that had already been ruled on. These laws applied to all people equally. To protect people against tyranny, the law stated that life, liberty, and property couldn't be taken by an illegal or arbitrary action. Parliamentary Lawmaking (14th century): The Parliament threatened to withhold approval of tax laws, which compelled English monarchs to accept its legislation in all matters. Parliament issued the Petition of Right (1628), which protested Charles I's despotism and reaffirmed that monarchs can't levy taxes without Parliament's permission, imprison people without a specific charge or trial by jury, or quarter soldiers in private homes without the owner's consent.
A Squalodontid Success On a beach in Piña, Panama, the tide is rolling out. Faint outlines of skeletal remains rise above the sand. Smithsonian scientists Nicholas Pyenson and Aaron O'Dea, along with a team of students, descend upon the beach. Their mission: to excavate the remains of a whale from the extinct group Squalodontidae, commonly known as "shark-toothed dolphins." Before they remove the fossil, they must encase it in a plaster jacket. It's a process that can take up to two days. But they are racing the tide and have just four hours to remove the fossil. Will they make it? Researchers Prepare for a Long Day in the Field Before heading out to the fossil locality in Piña, Panama, on the Caribbean coast, the team of researchers have a full breakfast at a cantina by the side of the road: roasted chicken, plantains, and some coffee. Fish Skeleton Discovery The fossil squalodontid skull was located in the middle of the tidal environment in Panama, exposed to the eroding elements of surf, sun, sand and waves. When the researchers arrived at the site, the tide was just going out. They ambled out through the surf and spotted several other interesting fossils along the way: a partial whale skull, shark teeth, and a complete skeleton of a large fish, probably a close relative of a swordfish or a marlin. This fish skeleton was complete from nose to tailfin. Shown here is a segment of the tailfin against a 10 cm ruler for scale. Digging a Trench The first thing the researchers did when they arrived on site was outline the general excavation area and take careful measurements of exposed fossils. Next, they applied acrylic glue to any exposed bone to help stabilize it. Then a small surface-layer cap of plaster bandages was applied to the skull to protect it from any errant whacks while digging. Finally, the digging began, and the scientists worked to make a deep trench around the skull (shown here).
The trench allows the researchers to apply a plaster bandage cap around the block of rock containing the fossils in order to extract the skull from the rocks in which it is entombed. Securing the Fossil After several hours of non-stop digging, the researchers have exposed a deep trench around the skull and begin applying a bigger plaster bandage cap around the block of rock containing the fossils. They use medical plaster bandages, which consist of fabric already dipped in plaster -- this way they only have to add water to initiate the hardening process. The team applied around five layers of bandages to the entire surface, creating a cap around the block. Dislodging the Fossil After the plaster cap has dried around the block of rock containing the fossils, the team is ready to start swinging. Here a researcher takes a large pick and strikes the base of the pedestal that the fossil cap sits upon. Done correctly, a few good whacks will dislodge the jacket, flip it, and remove it. Excavating the Fossil A few whacks later, the plaster jacket breaks free from the rocks in which it was entombed, estimated to be about 6-7 million years old. The researchers from the Smithsonian's National Museum of Natural History and Smithsonian Tropical Research Institute will move the jacket to higher (and drier) ground to be labeled and hauled away. Racing the Tide The fossil squalodontid skull was located in the middle of the tidal environment in Panama, giving researchers the added challenge of racing the tide during the excavation. The team was successful in their efforts, conducting an excavation that would normally take two days in just four hours. Panama Expedition Success Dr. Nicholas Pyenson, Curator of Marine Mammal Fossils at the National Museum of Natural History, poses with the safely encased fossils in their plaster jackets. Eventually the squalodontid, or "shark-toothed whale," will make its way to the Smithsonian's National Museum of Natural History.
Geostationary Operational Environmental Satellites (GOES) circle the Earth in a geosynchronous orbit over the equator. This means they observe the Earth from the exact same place all the time, which allows the GOES satellites to continuously monitor a single position on the Earth's surface. From 35,800 kilometers (22,300 miles) above the Earth, GOES satellites provide half-hourly observations of the Earth and its environment. Earth coverage of the GOES-8 and GOES-10 satellites has been depicted below. GOES satellites are owned and operated by the National Oceanic and Atmospheric Administration (NOAA), while the National Aeronautics and Space Administration (NASA) manages the design, development, and launch of the spacecraft. Once a satellite is launched, NOAA resumes responsibility for it. There are other geostationary satellites, operated by other countries, which help cover the rest of the Earth. The first geostationary weather satellite (GOES-1) was launched on October 16, 1975, and quickly became a critical part of National Weather Service operations. For the past 30 years, environmental service agencies have stated the need for continuous, dependable, timely, and high-quality observations of the Earth and its environment. The new generation of GOES satellites does just that. These satellites have instruments on board that measure Earth-emitted and reflected radiation, from which atmospheric temperature, winds, moisture, and cloud cover can be derived. GOES-8 and GOES-9 were the first members of this new satellite generation to be launched, replacing the older GOES-6 and GOES-7 orbiters. Selected Text and Image Provided By: GOES Mission Overview