An ectoparasite is an organism that lives on the outer surface of another organism, its host, and does not contribute to the survival of the host. Depending upon the parasite(s) involved, the following may be observed:
- Intense itching with intermittent or persistent scratching.
- Loss of hair, ulcerated skin, abrasions, or scabs (seen most commonly on the neck and back of the shoulders when mites are involved).
- Light tan, brown, or reddish "dots" on the skin, or silvery colored nits attached to hair shafts.
- A fine, bran-like substance on the skin and fur. In sarcoptid or sarcoptid-like species, crusted red or yellowish lesions may be seen on the auricle or pinna of the ear and on the nose, along with small reddish bumps on the tail, genitals, and feet. Hair loss and skin sensitivity (pink to reddish, irritated-looking skin) may be present in conditions of mange.
- Actual fleas on the rat, or an indication of their presence: droppings of digested blood on the rat's skin that can look like particles of dirt.
- Ticks may be seen on the legs, ventral surface of the body, ears, or neck. They may appear red, brown, or black when engorged with blood. Also note: it is known that in dogs, a tick attached in the right place on a leg can cause that leg to be paralyzed, and removing the tick resolves the problem. Though not documented (ticks are less often seen on pet rats in general), the same may possibly occur in rats.
For additional information on recognizing various signs of pain or discomfort refer to: Signs of Pain In Rats.
Ectoparasites are those which live on the skin or attach to hair follicles. The following external parasites are those that most often affect rats.
- Lice (phylum: Arthropoda, class: Insecta) are of two orders: the Mallophaga, species that bite or chew, and the Anoplura (family Pediculidae), species that suck blood.
The Anoplura that infest domestic animals are what is most often seen in rats. Polyplax spinulosa (the spined rat louse) is a louse that causes hair loss and pruritus (itching). It can sometimes be detected by the silvery colored nits attached to the hair. Lice are species specific, meaning they do not cross from one species to another. They spend their entire life cycle, approximately 14 to 21 days from egg to nymph to adult, on the host. They obtain nutrition by sucking blood, which in turn can cause anemia in the rat. They are also able to transmit the parasite Hemobartonella muris, leading to a disease similar to tick fever.
- Mites (phylum: Arthropoda, class: Arachnida) are of the subclass Acari. Unlike lice, mites are not strictly host specific: with certain species of mites, if the preferred host is not available, they may cross to another species. The tropical rat mite Liponyssus bacoti (synonym: Ornithonyssus bacoti) is round in shape and appears dark when engorged with blood. These mites can survive on fomites (e.g., bedding, litter) and only stay on an animal while feeding. They are one of the mite species that will also bite other animals, including humans. Demodex spp. and Notoedres muris (a sarcoptid-like mite) are mites that cause mange, a type of skin condition. Demodex spp. can be found anywhere on the skin but are primarily found deep within the hair follicles and sebaceous glands. Mange caused by Demodex spp. can produce signs of skin sensitivity and hair loss. Notoedres muris (also termed the ear mange mite) burrows into the skin and can present as yellowish, crusty-appearing warts on the edges of the ears and nose, or as reddened bumps on other extremities. Neither is often seen in the domestic pet rat. Sarcoptes scabiei varieties, while not host specific per se, do show some host preference, and physiologic differences exist between varieties.
Rats can be infested with a variety of Sarcoptes mite; however, they do not give their owners their type of mange. Human infestation is with a different variety of scabies mite than what is found on animals. Should your pet rat be infested with a sarcoptic mite and have close contact with you, the mite can get under your skin and cause itching and skin irritation. However, the mite dies in a couple of days and does not reproduce. It may cause you to itch for several days, but you do not need to be treated with special medication to kill it. Until your rat is treated effectively and its environment cleaned, continued infestation will be a source of discomfort for your rat and an annoyance to you. For more information on scabies in humans see The CDC Fact Sheet. Radfordia ensifera is a fur mite that can cause dermatitis; it may occasionally be seen as white specks of dust on hair follicles. This is the type of mite most commonly seen in rats. It produces intense itching and leads to scabs, most frequently on the shoulders, neck, and face of the rat. The rat fur mite and mange mite do not infest humans or other animals. Under normal conditions mites are commensal in small numbers and do not tend to bother their host. It is when the rat is stressed, has decreased immunity due to other illness, and/or is unable to keep their numbers down by normal grooming that mites flourish. Inattention to proper husbandry, a rat that is ill, or ineffective treatment can lead to reinfestation and dermatitis. On average, the entire life cycle of the mite, beginning with the eggs (which hatch in about seven days) and continuing through the larval, nymphal, and adult stages, requires approximately 23 days to complete. It is therefore important to maintain care and follow through with the treatment(s) prescribed.
- Fleas (phylum: Arthropoda, class: Insecta), of which thousands of species are recognized worldwide, affect both humans and animals.
They belong to the order Siphonaptera. The species of flea that most commonly affects animals, and humans, is Ctenocephalides felis. It causes severe irritation and can be responsible for flea allergy dermatitis. Fleas go through developmental stages before becoming adults. It is the adult fleas, which appear as 1-5 mm, laterally flattened, wingless insects, that infest the animal's fur. Reinfestation can occur if care is not taken to treat the animal's surrounding environment as well. Eggs deposited on the host by the adult female flea can fall from the host into the surrounding environment, go through development, and emerge as young adults that either move back to the host or to a newly acquired host. Flea infestation can be determined by the actual presence of fleas or by flea excreta, droppings of digested blood that appear as black dots. These black dots, when dissolved on paper or placed in water, will appear red. This species of flea, Ctenocephalides felis, is also responsible for the transmission of murine typhus by Rickettsia typhi, a febrile disease in both man and small mammals, seen principally in southern coastal climates. Treatment for flea infestation should include the home, the rat's environment, and any other animals living in the home.
- Ticks (phylum: Arthropoda, class: Arachnida) are, along with mites, of the subclass Acari. They are divided into two families: the Ixodidae (e.g., Amblyomma spp., Ixodes spp., Dermacentor spp., and Rhipicephalus spp.), which are hard-bodied ticks, and the Argasidae (e.g., Ornithodoros and Otobius), which are soft-bodied ticks. They feed on the blood of mammals, birds, and reptiles. Although some species of tick have a preference for a certain species of host, most are less host specific. Hard-bodied ticks seek out a host by questing: crawling up grass stems or leaves, perching with front legs extended, and attaching as a host brushes against their front legs.
Hard-bodied ticks will feed from several days to weeks depending upon the species of tick, the type of host, and the life cycle stage they are in. Many hard-bodied ticks are called "three-host ticks" because each stage of development, from larva to nymph to adult, requires a different host to feed from. The complete life cycle can take up to a year. Both nymphs and adults have bodies divided into two sections: the head, containing the mouthparts, and the posterior of the body, containing the digestive tract, the reproductive organs, and the legs. Often less than 5 mm in size, they may vary in color from red to brown, or black when engorged with blood. The body of the adult tick visibly grows as it feeds and engorges on the blood of its host. The adult female lays only one batch of eggs, as many as 3,000, and then dies. The male tick feeds very little and tends to stay with larger hosts so it can mate with the adult female; the male dies once it has reproduced. Soft-bodied ticks have life stages that are difficult to discern. They go through multiple, repeated stages before becoming adults and, unlike hard-bodied ticks, each stage feeds multiple times. The life cycle of the soft-bodied tick is considerably longer than that of hard-bodied ticks, and the adult female can lay multiple batches of eggs during her adult life. Soft-bodied ticks behave like fleas in their feeding behavior: they can live in the nest of the host, feeding each time the host returns. While ticks are not commonly seen on pet rats housed indoors, there is the potential for infestation if rats are housed outdoors or are in contact with other pets that go outside. Where other household pets have been infested, it is recommended to also check pet rats, and treat them if ticks are detected. Severe infestations can cause blood loss resulting in anemia. Zoonotic diseases associated with tick infestation in rabbits (e.g.,
Tularemia, Lyme disease, and Rocky Mountain spotted fever) could potentially be a factor in rats with tick infestation (1). Transmission of all the above ectoparasites can be host to host or fomite to host. Fortunately, with proper husbandry and persistent treatment they do not have to pose a problem. For information on hypersensitivity and allergic contact dermatitis, see Dermatitis/Eczema.
Photos and Case Histories Involving Parasite Infestation
- Fig. 1: Signs of mite infestation.
- Fig. 2: Sarcoptes mange photos and case history.
- Fig. 3: Lice and case history.
- Fig. 4: Ectoparasite slides and descriptions courtesy of University of Missouri Research Animal Diagnostic Laboratory.
- Fig. 5: Demodex mites in 26-month-old female rat (Inca).
Skin scrapings for possible parasites can be done; however, parasites may still be present even though the scrapings are negative. For information regarding dosages and usage of the following medications refer to the section Anti-infectives in the Rat Medication Guide.
For tick removal, grasp the tick between head and body with forceps, tweezers, or a tick extractor, and pull straight out, being careful not to squeeze the body of the tick and release blood. If extraction is not easily obtained by pulling straight, a slight twisting motion can be tried (some brands of tick extractors are designed to do this). Immerse the tick in an acaricide solution or alcohol in a small, lidded container. Be sure to search for and remove all ticks! Following extraction, wipe the area where the tick was removed with saline or an alcohol wipe. It is recommended to give a single dose of ivermectin 0.4 mg/kg to be sure any remaining ticks will be killed (1).
Mites and lice: Selamectin (Revolution) applied once topically. In some instances a second treatment (following a 30-day interval) may be needed. Rarely, and only by veterinary assessment, may it become necessary to dose at a two-week interval.
Ivermectin (sold as horse worming paste, where the active ingredient ivermectin is 1.87%) given orally: brands include Equimec in Australia and Equimectrin, Equalvan, Rotectin 1, and Zimecterin in the U.S. Treatment with oral or topical dosing is noted to be less stressful to rats and mice. Rare incidences of adverse reactions have been reported when ivermectin has been given by injection in rats.
For treatment specific to stubborn demodectic, notoedric, and sarcoptid mite infestation, ivermectin, selamectin (Revolution), or topical treatment with Mitaban (amitraz) may be considered. It is recommended to discuss the proper use of Mitaban with your vet before attempting to use it; ivermectin is considered to have a wider margin of safety. In cases of mange, treatment may need to be carried out for as long as 6-12 weeks. Skin infection by normal skin flora often accompanies persistent, severe cases of mange, and it may become necessary to treat with an antibiotic such as cephalexin (Keflex).
Fleas and lice: Topical dosing with Advantage (orange-labeled package for cats/kittens 9 pounds and under).
Fleas, mites (other than demodectic mites), and lice: Topical dosing with selamectin (Revolution), a derivative of ivermectin, labeled for use in kittens. The topical application of selamectin, as directed, is less stressful for rats than injectable treatments.
Alternative treatment for mites, lice, and fleas: *Note: although a spray or shampoo containing 0.05% or 0.06% pyrethrin, sold for small animals such as rats, mice, or hamsters, or labeled safe for kittens or puppies 2 weeks of age, can be used on rats every 7 days for 4 weeks, its use should be avoided, or discussed with a veterinarian prior to use, due to the risk of increased toxicity in these small animals from ingestion by licking, as well as from absorption. Do not use concomitantly with other anthelmintics (e.g., ivermectin or selamectin).
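As a rough illustration of the arithmetic behind dosing with 1.87% horse worming paste, the conversion from a mg/kg dose to a mass of paste can be sketched as below. This is a sketch only: the 0.4 mg/kg figure is the one the text cites for tick treatment, the helper name is hypothetical, concentrations vary by product, and actual dosing must always come from a veterinarian.

```python
# Hypothetical helper: converts a mg/kg ivermectin dose into grams of
# 1.87% (w/w) horse worming paste. Illustrative only; not veterinary advice.

def paste_grams(rat_weight_g, dose_mg_per_kg=0.4, paste_percent=1.87):
    """Grams of paste containing the target ivermectin dose."""
    dose_mg = dose_mg_per_kg * (rat_weight_g / 1000.0)  # total ivermectin needed, mg
    mg_per_g_paste = paste_percent * 10.0               # 1.87% w/w = 18.7 mg per gram
    return dose_mg / mg_per_g_paste

# A 400 g rat at 0.4 mg/kg needs 0.16 mg of ivermectin, which works out to
# roughly 0.0086 g (about 8.6 mg) of 1.87% paste.
print(round(paste_grams(400), 4))
```

The tiny mass involved is why the text can only describe the oral dose informally (an amount the size of a grain of uncooked rice) and why paste is a less accurate dosing method than a measured parenteral product.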
In Addition To Treatments Above: Treat all rats at the same time, and clean all cages, including bedding and toys, thoroughly. Disinfecting with bleach can be very effective, but be sure to rinse the cage and articles well and allow them to dry before returning rats to their cage. Clip the toenails of the rear feet to prevent increased trauma to lesions from scratching. If irritation to the skin from scratching is observed, a lightly applied application of a vitamin E cream, Polysporin ointment, or aloe gel may help relieve it and prevent secondary infection from occurring. Rats groom frequently; therefore it is recommended to avoid applying these to areas the rat or cage mates can easily access. If skin irritation, inflammation, or weeping lesions continue, systemic antimicrobials may need to be started. See your veterinarian.
- When treating adult rats weighing between 300 and 500 grams orally with ivermectin (sold as horse worming paste, where the active ingredient ivermectin is 1.87%): give a small amount equivalent to a grain of uncooked rice. Dose once a week, for at least 3 weeks. For rats that are younger or under 300 g, split the dose by half and dose once a week for at least 3 weeks. Contribution by C. Himsel-Daly DVM: *There is no need to decant ivermectin paste if it is in the original tube; decanting any fluid would concentrate the drug and thus raise the concentration in a volumetric dose, potentiating toxicity. If, on opening the original tube of paste, a drip of fluid is found at the end of the tube before the actual paste emerges, all that is required is to express the tube, discard that bit of paste until it is uniform in consistency, and dose according to the veterinarian's recommendation. If the paste has been dispensed in another container, mix it thoroughly before dosing. Paste tends to be a less accurate dosing method than the parenteral product; fortunately, the paste has a wide margin of safety.*
- Continue treatment as prescribed.
- Keep toenails clipped on a regular basis, making sure not to cut the quick. Keep styptic on hand in case bleeding occurs.
- Repeat cage and article disinfecting at least once a week.
- Remove and discard articles made of wood.
- Free of parasite infestation.
- Free of inflammation and irritation of skin.
- Maintain the rat's overall general health.
- Use of prepackaged processed litter, and the freezing of litter where bags have been breached prior to purchase, may be of help. *Please note that any bag of litter/bedding with a row of holes in the top, or any bag breached during storage in pet stores and feed/tack warehouses, poses a potential contamination risk through contact with resident infested animals. Freezing the litter before using it in cages may be a helpful preventive measure.
- Freezing prepackaged or mixed foods and rat blocks prior to feeding is recommended if bags have been breached at the time of purchase.
- Provide a clean cage environment.
- Quarantine all new rats for a minimum of three weeks, and treat any infestation or infection present prior to introducing them to the existing colony.
- When holding or playing with rats other than your own, it is recommended that you wash and change clothes prior to handling your own rats.
References:
- Quesenberry, K., & Carpenter, J. (2012). Ferrets, Rabbits, and Rodents: Clinical Medicine and Surgery (3rd ed.). St. Louis: Saunders.
- Arlian, L., Runyan, R., & Estes, S. (1984). Cross infestivity of Sarcoptes scabiei. J Am Acad Dermatol, 10(6), 979-86.
- Vredevoe, L. (2003, May 16). Background information on the biology of ticks. UCD Entomology R. B. Kimsey Laboratory. Retrieved February 16, 2012, from http://entomology.ucdavis.edu/faculty/rbkimsey/tickbio.html
- Beck, W., & Fölster-Holst, R. (2009). Tropical rat mites (Ornithonyssus bacoti) - serious ectoparasites. J Dtsch Dermatol Ges, 7(8), 667-70.
Retrieved March 20, 2012, from http://www.dgvd.org/media/news/publikationen/2009/ddg_09094_eng.pdf
Posted on June 29, 2003, 10:20, Last updated on June 27, 2014, 16:27 | Integumentary / Skin
Researchers at Japan's National Institute of Advanced Industrial Science and Technology created a drone that transports pollen between flowers. It's tiny: just 4 centimeters wide and weighing only half an ounce. (You can see them in action in the above video.) And as New Scientist reports, it's effective when it comes to cross-pollinating: The bottom is covered in horsehair coated in a special sticky gel. When the drone flies onto a flower, pollen grains stick lightly to the gel, then rub off on the next flower visited. In experiments, the drone was able to cross-pollinate Japanese lilies (Lilium japonicum). Moreover, the soft, flexible animal hairs did not damage the stamens or pistils when the drone landed on the flowers. And at Harvard University, roboticists designed RoboBees, tiny robots inspired by the "biology of a bee and the insect's hive behavior."
Helper bees, not substitutes
I was quick to judge these robotic bees when I first heard about them, based only on this headline: "Tiny Flying Robots Are Being Built To Pollinate Crops Instead Of Real Bees." "Sure," I thought, "Why bother saving the bees when we can build robot bees, and build factories and use precious resources to make them. What could be bad about that?" Then I took the time to learn what Harvard is doing. Yes, one of the possible functions of the RoboBees (shown in the above video) is pollination of crops. Real bees are critical to pollination. Without the pollination they provide, it's estimated that 85 percent of the Earth's plant species would be in danger. Right now, the bees are in danger. Colony collapse disorder has been decimating the bee population for years, and as mentioned above, there's a lot of work being done to figure out why it's happening and how to combat it. But the RoboBee is designed to do much more than pollination.
Harvard lists all the useful applications this tiny robotic device can provide:
- autonomously pollinating a field of crops
- search and rescue (e.g., in the aftermath of a natural disaster)
- hazardous environment exploration
- military surveillance
- high-resolution weather and climate mapping
- traffic monitoring
There are already electronic devices that can do these types of tasks, but the RoboBees have the potential to do them more efficiently, according to the designers: "In mimicking the physical and behavioral robustness of insect groups by coordinating large numbers of small, agile robots, we will be able to accomplish such tasks faster, more reliably, and more efficiently."
Walmart gets in on the action
In 2017, the corporation filed six patent applications with the U.S. government to build drones. Similar to Harvard's RoboBees, some of Walmart's drones help with cross-pollination by using cameras and sensors to identify pollen on one plant and carry it to another. But bee-like drones aren't the only drones Walmart wants to build. Its other patents focus on alleviating crop damage by using drones to monitor, identify, track and eliminate pests. Now why would Walmart want to help the farming industry? CB Insights said the company "increasing its involvement in agriculture could help the company differentiate its food offerings and increase its focus on transparency and sustainability, as well as help mitigate inconsistent or unpredictable crop yields." There's no word yet on when Walmart would start building the drones if the patents are approved. Until then, we must continue to learn what's causing colony collapse disorder and bring the honeybee populations back to healthy numbers. But having bee drones as a backup is not a bad idea, and the tiny robots' other uses offer even more reasons to be in favor of the RoboBees. Editor's note: This story has been updated with new information since it was originally published in October 2015.
- Ultrasonic waves, used in medical and dental diagnosis and therapy, in cleaning and detecting flaws in metal, etc.
- Radiology: the use of ultrasonic waves in ultrasonography to form images of interior bodily organs, such as the uterus or heart.
The definition of ultrasound is a medical process that uses sound waves, through a medical instrument, to obtain images of internal organs. An example of ultrasound is the technology used to get images of an unborn baby.
- Ultrasonic sound.
- a. The use of ultrasonic waves for diagnostic or therapeutic purposes, specifically to image an internal body structure, monitor a developing fetus, or generate localized deep heat to the tissues. b. An image produced by ultrasound.
- Sound whose frequency is above the upper limit of the range of human hearing (approximately 20 kilohertz).
- An image produced by ultrasonography.
ultrasonic (ŭl′trə-sŏn′ĭk)
A Closer Look
Many people use simple ultrasound generators. Dog whistles, for example, produce tones that dogs can hear but that are too high to be heard by humans. Sound whose frequency is higher than the upper end of the normal range of human hearing (higher than about 20,000 hertz) is called ultrasound. (Sound at frequencies too low to be audible, about 20 hertz or lower, is called infrasound.) Medical ultrasound images, such as those of a fetus in the womb, are made by directing ultrasonic waves into the body, where they bounce off internal organs and other objects and are reflected back to a detector. Ultrasound imaging, also known as ultrasonography, is particularly useful in conditions such as pregnancy, when x-rays can be harmful. Because ultrasonic waves have very short wavelengths, they interact with very small objects and thus provide images with high resolution. For this reason ultrasound is also used in some microscopes.
Ultrasound can also be used to focus large amounts of energy into very small spaces by aiming multiple ultrasonic beams in such a way that the waves are in phase at one precise location, making it possible, for example, to break up kidney stones without a surgical incision and without disturbing surrounding tissue. Ultrasound's industrial uses include measuring the thickness of materials, testing for structural defects, welding, and aquatic sonar.
ultrasound - Medical Definition
- Ultrasonic sound.
- The use of ultrasonic waves for diagnostic or therapeutic purposes, specifically to image an internal body structure, monitor a developing fetus, or generate localized deep heat to the tissues.
- An image produced by ultrasound.
Example sentences:
- You'll have a second ultrasound to check the baby's development (and gender if you're interested), and some women have amniocentesis performed if there is any reason to believe that something may be amiss with the developing fetus.
- Between 10 and 14 weeks of pregnancy, physicians may use an ultrasound to look for thickness at the nuchal translucency, a pocket of fluid in back of the embryo's neck, which may indicate a cardiac defect in 55 percent of cases.
- There are no complications per se from the tests themselves, with the exception of unfavorable test results or supine (lying horizontally on the back) hypotension secondary to a pregnant woman lying on her back for an ultrasound.
- When OI occurs as a new dominant mutation and is found inadvertently on ultrasound, it may be difficult to confirm the diagnosis until after delivery, since other genetic conditions can cause bowing and/or fractures prenatally.
- The decision to have prenatal surgery is made on the basis of detailed ultrasound imaging of the fetus, including an echocardiogram that uses ultrasound to obtain images of the fetal heart, as well as other diagnostic tools.
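The high-resolution point above follows directly from the wave relation wavelength = speed / frequency: the higher the frequency, the shorter the wavelength, and the smaller the structures a wave can resolve. A minimal sketch, assuming the commonly quoted average speed of sound in soft tissue of about 1540 m/s (a figure not given in the text):

```python
# Wavelength of sound: lambda = speed / frequency.
# 1540 m/s is a commonly quoted average speed of sound in soft tissue
# (an assumption here; the text above does not state a value).

def wavelength_mm(frequency_hz, speed_m_per_s=1540.0):
    """Wavelength in millimeters for a given frequency."""
    return speed_m_per_s / frequency_hz * 1000.0

# At the 20 kHz threshold of human hearing the wavelength in tissue is 77 mm;
# at a typical 5 MHz imaging frequency it shrinks to about 0.3 mm, which is
# why ultrasonic frequencies can resolve much smaller structures.
print(wavelength_mm(20_000))     # 77.0 mm
print(wavelength_mm(5_000_000))  # ~0.308 mm
```

The same relation explains why diagnostic machines trade penetration for detail: higher frequencies give finer resolution but are absorbed more strongly by tissue.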
In the late 19th and early 20th cent. the Arctic was explored by Nils Nordenskjöld, Roald Amundsen, Donald MacMillan, Richard Byrd, and others. In 1909, Robert E. Peary reached the North Pole. The continent of Antarctica was explored in the first half of the 20th cent. by William Bruce, Jean Charcot, Douglas Mawson, Ernest Shackleton, and others. The South Pole was reached first by Amundsen (Dec. 14, 1911) and almost immediately thereafter (Jan. 18, 1912) by Robert Scott. The airplane provided a new method of antarctic exploration, with George Wilkins and Richard E. Byrd as the pioneers. Since World War II there have been many well-equipped expeditions, most notably those during the International Geophysical Year (1957–58), to the Antarctic.
In caves and rock shelters around the Levant, archaeologists keep finding gazelle scapulae (shoulder blades) marked with a series of regular notches. Scientists still aren't sure what kind of information the enigmatic marks once conveyed or how the bones themselves might have been used or displayed, but they may be able to tell us something about how early human cultures spread through Eurasia. Put another notch in your... gazelle scapula? Hayonim Cave in Western Galilee, Israel, overlooks the right bank of a large wadi a few miles from the Mediterranean shore. There, archaeologists found eight gazelle scapulae, mostly broken, along with hearths, tooth pendants, stone chips, and signs of ochre use within layers of sediment dating to the Upper Paleolithic. The bones are marked with rows of 0.5-2.5mm wide, 4-5mm long notches regularly spaced 0.5 to 7mm apart. They were put there by a stone blade; on the only unbroken scapula in the set, there are 32 notches, but some have as few as three. The notches aren’t on the same parts of the bone where you'd expect to find cut marks from butchering an animal. Butchering cuts also tend to be shallower and shorter, and the surface of a hacking or cutting mark looks very different under a microscope than a notch made by sawing into a pre-scraped surface. The notches always mark the thickest parts of the bone, on the posterior side—the side that faces the rear of the animal. Microscopic analysis shows that whoever made the notches also took the time to scrape the surface of the bone smooth beforehand. That's a lot of work, and most bone tools from the same time period aren't nearly so carefully prepared; bone awls and chisels were usually worked just enough to sharpen the business end. These markings clearly weren't the product of idle fiddling. Their meaning must have been intuitively obvious to the people who lived in the Levant 34,000 to 38,000 years ago. 
After all, it mattered enough that people put some effort into expressing it. José-Miguel Tejero of the Centre National de la Recherche Scientifique de France is the lead author of a new paper describing the findings. Tejero and his colleagues say the best explanation is that the marks are probably some form of symbolism. There’s plenty of evidence that people had been using visual symbols, from jewelry to cave paintings to decorative designs, for some time before the first settlers ventured beyond Africa. Being able to convey ideas like "There's water here," "I am an important person in this group," or "We killed a large mammoth" with visual cues had an adaptive advantage as people lived in ever-larger groups and interacted with other groups more often. Unique cultural signature At European sites in Italy, Belgium, France, and central Europe, notched marks like these have been found on antlers, tusks and teeth, ribs, limbs, and other bones from reindeer, red deer, mammoth, and several bovid species. But in the Levant, it's all about the gazelle scapulae, except for a single notched hyoid bone found in Manot Cave—and even that was from a gazelle. “We can assume that notched bones in Europe, in the African Middle Stone Age, as well as in the Levantine Aurignacian shared a similar purpose—that is, to convey individual or group information. Nevertheless, what makes the sample of the Levant in general and Hayonim in particular unique is the homogeneity of the raw material, taxa, and anatomical part selected,” said Tejero. (That's academic for "It's always gazelle scapulae with these guys.") The numbers are also unusual. Tejero and his colleagues found eight marked scapulae at Hayonim Cave and four more at Manot Cave—in Europe, notched bone artifacts usually show up in smaller numbers, one or two to a site. 
Archaeologists aren’t sure yet why that is, but the sheer homogeneity of these artifacts, and the fact that they're so common, may make them a good diagnostic marker of a culture called the Levantine Aurignacian, which seems to have flourished in a relatively small area along the eastern shore of the Mediterranean from about 34,000 to 38,000 years ago. Aurignacian sites in the Levant share some other commonalities, especially in the types of stone tools and the techniques used to make them. But archaeologists are still debating about how Aurignacian culture got started and how it spread. The Aurignacian culture existed in Europe, too, from 43,000 to 26,000 years ago, but it was much more widespread, with more variation between places. There are enough similarities with the Levantine Aurignacian to make it likely that the two cultures are related. Tejero and his colleagues say that if these scapulae are unique to the Levantine Aurignacian, then looking at where they're found and whether they change over time could help archaeologists understand how this culture spread into Europe and how it changed when it got there. What does it all mean? “One aspect of symbolic material culture, and possibly the most significant benefit of symbolical behavior in general, is its ability to link the individual or group with other individuals and groups through the inter- and intragroup transmission of information,” Tejero told Ars. “Such signals thus may include identification (class affinity, social group affiliation, rank, and so forth), authorship, and ownership.” Tejero and his colleagues say the marked scapulae may have been worn as pendants, with a string tied around the narrow neck of the scapula, the part that attaches to the humerus. The Himba people of Namibia wear pendants of a similar size to indicate marital status, and we've found evidence of jewelry at other sites from this period, so it’s plausible—but at this point, it’s still just speculation. 
The fact is that we're just not sure what the notches in the scapulae would have conveyed to people in the know. "The use of artifacts to transmit messages is advantageous when used to communicate information to people who are in 'the middle distance'—that is, those people who are not so close to the sender that the messages are already known and not so distant that the meaning of the message cannot be deciphered," Tejero told Ars. Thirty-four thousand years later, we're well beyond the middle distance. “Some authors suggested that this type of mark could be linked with a notation system marking lunar phases,” said Tejero. “However, the notches discussed here were most probably made in one session, on fresh bone, and likely by the same lithic tool.” That means the marks couldn’t have been made over a period of several months to mark lunar phases, and it rules out other kinds of tallies, too, like a count of days or successful hunts. It's unlikely that we'll ever crack the code, but using these marked bones as tracers might help archaeologists track the spread of the culture that briefly flourished here, which can tell us something interesting about how early human culture spread—even if we don't get it.
Globe willow (Salix matsudana 'Navajo') is not a weeping willow, but has a round, upright growth habit and a single trunk. Globe willow is native to China, but has adapted to the Great Lakes region and the desert Southwest of the United States. With a height and spread of 40 feet at maturity, globe willow is one of the first trees to green up in spring. Unfortunately, like most willows, it is notoriously short-lived and prone to a host of naturally occurring problems. Brittle Wood and Short Life Plant globe willows for their beauty, but don't get too attached to them. Like other members of the genus Salix, globe willow is naturally short-lived, reaching maturity at 30 years and declining after that. Its brittle, soft wood grows fast and is prone to storm damage and splitting at branch crotches. Careful pruning to reduce weight in the canopy can help extend its life. Keep in mind that pruning wounds themselves can invite disease and fungus. Disease and Fungus Be on the lookout for bark diseases in globe willow. One of the worst is slime flux disease. Symptoms include a smelly, frothy slime that oozes from branch bark. The ooze is caused by bacterial activity inside the branch, which forces sap out under pressure. There's no cure for slime flux. All you can do is make sure the tree gets enough water, and prune away the dead and diseased wood wherever you can. Clean pruning tools after use to avoid transmitting disease to other trees. Cytospora canker also affects globe willow, especially stressed trees. This fungus attacks twigs and branches, and can even move back into the trunk, killing the tree. The only remedy is to cut out all the diseased wood, keep the soil moist and aerated, and fertilize each spring with 10-10-10 fertilizer to supply nitrogen and reduce stress. Keep an eye out for bugs on leaves and trunks. Giant willow aphids can attack twice a year, in spring and fall. As they feed on bark and twigs, they exude a sticky substance called honeydew.
Sometimes mistaken for ticks because of their large size, giant willow aphids are easily killed by the insecticide imidacloprid. Pests that feed on other trees, such as spider mites, tent caterpillars, grasshoppers and hornworms, also feed on globe willow, but they aren't likely to kill one unless their numbers are huge. Weigh environmental concerns before spraying any of the many commercial insecticides that kill these pests: you will also likely kill beneficial insects in the process.
Interactive Java Tutorials Diffracted Light in Phase Contrast Microscopy In all forms of optical microscopy, the specimen scatters light through processes that include diffraction, refraction, reflection, and absorption. Transparent specimens imaged by phase contrast techniques diffract light that is retarded by one-quarter wavelength (90 degrees) with respect to undiffracted (surround) incident illumination, whereas opaque specimens, such as diffraction gratings, diffract light that is 180-degrees (one-half wavelength) out of phase with the surround illumination. This interactive tutorial explores diffraction of light by a periodic grating in a phase contrast microscope. The tutorial initializes with a conoscopic view of the objective rear focal plane in a phase contrast microscope appearing in the window entitled: Diffraction Pattern. In this tutorial, the specimen is a variable opaque diffraction grating that gives rise to higher-order diffraction patterns, which can be observed in the objective rear focal plane. The central segmented circular white ring in the Diffraction Pattern window represents undiffracted light passing through the condenser annulus. Diffracted light, which is separated into a colorful spectrum according to wavelength (red, green, and blue), appears as successively higher (first and second) order circular rings to the left and right of the central annulus. To operate the tutorial, translate the Line Spacing slider to the right in order to decrease the spacing of the diffraction grating and alter the position of higher order diffracted light wavefronts. Note that as the diffraction grating line spacing is decreased, the diffracted light rings move away from the central annulus and become more diffuse. Light wavefronts passing through a grating are diffracted according to the wavelength spectrum of the incident light beam and periodicity of the grating. 
Individual wavefronts diffracted by successive grating lines are emitted as concentric spherical wavelets that interfere both constructively and destructively because they are all derived from the same wavefront and are therefore in phase. Wavefronts passing through the grating slits that are parallel to the incident light wave are referred to as zero order (undiffracted or surround) or direct light. Diffracted higher-order wavefronts are inclined at an angle (θ) according to the equation: sin(θ) = Mλ/P, where λ is the wavelength of the wavefront, P is the grating slit spacing, and M is an integer termed the diffraction order (e.g., M = 0 for direct light, ±1 for first order diffracted light, etc.) of light waves deviated by the grating. The combination of diffraction and interference effects on the light wave passing through the periodic grating produces a diffraction spectrum (see the Diffraction Pattern window), which occurs in a symmetrical pattern on both sides of the zero order direct light wave. The periodic diffraction grating can now be used to examine Ernst Abbe's theory of image formation in the phase contrast microscope. When the line grating is placed on a microscope stage and illuminated with a parallel beam of light that is restricted in size by the condenser annulus, both zero and higher order diffracted light rays enter the front lens of the objective. Direct light that passes through the grating unaltered is imaged in the center of the optical axis on the objective rear focal plane as an image of the illuminated condenser annular diaphragm. First and higher order diffracted light rays enter the objective at an angle and are focused as spectral renditions of the condenser annulus on both sides of the central circular annular diaphragm pattern at the objective rear focal plane. A linear relationship exists between the position of the diffracted light beams and their corresponding points on the periodic grating. Kenneth R.
Spring - Scientific Consultant, Lusby, Maryland, 20657. Matthew J. Parry-Hill and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310. © 1995-2015 by Michael W. Davidson and The Florida State University.
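The grating relationship described in the tutorial — diffraction angle set by wavelength, slit spacing P, and order M — can be checked numerically, and shows why the diffracted rings move outward as the line spacing decreases. A minimal sketch (the wavelengths and slit spacing below are illustrative values, not taken from the tutorial):

```python
import math

def diffraction_angle(wavelength_nm, spacing_nm, order):
    """Diffraction angle in degrees from the grating equation
    sin(theta) = M * lambda / P (normal incidence)."""
    s = order * wavelength_nm / spacing_nm
    if abs(s) > 1:
        return None  # this order is not physically diffracted
    return math.degrees(math.asin(s))

# First-order angles for red, green, and blue light on a 2000 nm grating:
for name, wl in [("red", 650), ("green", 550), ("blue", 450)]:
    print(f"{name}: {diffraction_angle(wl, 2000, 1):.1f} degrees")
```

Decreasing the spacing P makes Mλ/P larger, so the angle grows and the rings move away from the central annulus, as observed in the tutorial window; red light (longer λ) is deviated more than blue, which is why each ring spreads into a spectrum.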
A large body of research has examined intervention-related factors in substance use and abuse, underlining that a variety of factors lead to the behavior and to the development of substance use disorder. Research has shown that the key risk periods for drug abuse occur during major transitions in children's lives (National Institute on Drug Abuse, 2003). Nowadays, adolescents face multiple risks involving substance use disorder. Ideally these risks would be addressed sooner rather than later, but in practice that is a difficult task. Preventing substance use disorders is one of the simplest and most cost-effective investments when done effectively, and it is most promising when directed at our nation's youth. Substance prevention programs aim to minimize the influence of risk factors and augment the influence of protective factors. Such programs are provided through sociocultural environments (family, peers, media, etc.). Research on comprehensive approaches to preventing substance abuse has noted that "the effectiveness of prevention is difficult to measure given the lag of time when a young person goes through a program and when he or she starts doing drugs" (Office of National Drug Control Policy, 1999). In recent years, the trend of adolescents reporting exposure to substance use disorder prevention via sociocultural influences has declined. "Adolescents aged 12 to 14 were less likely than those aged 15 to 17 to have received prevention messages through media sources and to have talked with a parent about the dangers of substance use but were more likely to have received messages through school sources and to have participated in a substance use disorder prevention program outside of school" (World Federation Against Drugs, 2013). Research-based programs, which have been tested in diverse communities and a variety of settings with an array of populations, have proven successful (National Institute on Drug Abuse, 2003).
Research-based programs entail core elements that define structure, content and delivery. Studies have shown how various sociocultural risk factors amplify a person's chance of substance abuse while protective factors tend to lessen that risk. Nonetheless, it is imperative to understand that risk and protective factors can affect children at different stages of their lives, with different risks arising at each stage. These risks can be modified through prevention: prevention programs focus on intervening early in a child's development to strengthen protective factors before problematic behavior develops. Substance use disorder prevention programs thus aim to decrease the influence of risk factors and amplify the influence of protective factors. Risk factors can increase a person's chances of developing a substance use disorder, while protective factors reduce that risk (risk factors vary for each person). The following chart illustrates how risk factors and protective factors may affect people in five domains where interventions may take place:

| Risk Factor | Domain | Protective Factor |
| --- | --- | --- |
| Early aggressive behavior | Individual | Self-control |
| Lack of parental supervision | Family | Parental monitoring |
| Substance abuse | Peer | Academic competence |
| Drug availability | School | Anti-drug use policies |
| Poverty | Community | Strong neighborhood attachment |

Prevention programs must develop protective factors and reduce, if not eliminate, risk factors.
The following principles reflect current research about risk factors and protective factors: (a) risk and protective factors concern people of all groups, though such factors may have a different effect depending on an individual's age, gender, ethnicity, culture, and environment (Beauvais, Chavez, Oetting, Deffenbacher, 1996); (b) effective prevention programs should address all forms of substance use; (c) prevention programs should address the substance use disorder problem within the community, target modifiable risk factors, and improve the identified protective factors (Hawkins & Arthur, 2002); (d) prevention programs should be modified to address risks specific to the population and audience, such as gender, age and ethnicity, to enhance program effectiveness (Robertson, Sloboda, Boyd, Beatty, Kozel, 1997). Such principles offer structure and rigor when reassessing current programs (National Institute on Drug Abuse, 1997). Various individual and systemic factors interact to influence the success of prevention and to promote substance use disorder prevention awareness within cultures. When the media is used to advocate prevention strategies, communities become better educated, public awareness rises, and community support develops. Norms are vital factors that affect substance use disorder, and changing norms regarding substance abuse is a goal-oriented option. However, measuring social and subjective norms is a difficult task given their instability and dependence on the social environment. "Scientific advances have contributed greatly to our understanding of drug use and addiction, but there will never be a 'magic bullet' capable of making these problems disappear. Drug use and addiction are complex social and public health issues, and they require multifaceted [approaches]," Alan Leshner, Director of the National Institute on Drug Abuse, said in a report to Congress.
Better understanding of nature's nanomachines may help in design of future drugs Many of the drugs and medicines that we rely on today are natural products taken from microbes like bacteria and fungi. Within these microbes, the drugs are made by tiny natural machines — mega-enzymes known as nonribosomal peptide synthetases (NRPSs). A research team led by McGill University has gained a better understanding of the structures of NRPSs and the processes by which they work. This improved understanding of NRPSs could potentially allow bacteria and fungi to be leveraged for the production of desired new compounds and lead to the creation of new potent antibiotics, immunosuppressants and other modern drugs. "NRPSs are really fantastic enzymes that take small molecules like amino acids or other similar sized building blocks and assemble them into natural, biologically active, potent compounds, many of which are drugs," said Martin Schmeing, Associate Professor in the Department of Biochemistry at McGill University, and corresponding author on the article that was recently published in Nature Chemical Biology. "An NRPS works like a factory assembly line that consists of a series of robotic workstations. Each station has multi-step workflows and moving parts that allow it to add one building block substrate to the growing drug, elongating and modifying it, and then passing it off to the next little workstation, all on the same huge enzyme." Ultra-intense light beam allows scientists to see proteins In their paper featured on the cover of the May 2020 issue of Nature Chemical Biology, the team reports visualizing an NRPS mechanical system by using the CMCF beamline at the Canadian Light Source (CLS). The CLS is a Canadian national lab that produces the ultra-intense beams of X-rays required to image proteins, as even mega-enzymes are too small to see with any light microscope.
“Scientists have long been excited about the potential of bioengineering NRPSs by identifying the order of building blocks and reorganizing the workstations in the enzyme to create new drugs, but the effort has rarely been successful,” said Schmeing. “This is the first time anyone has seen how these enzymes transform keto acids into a building block that can be put into a peptide drug. This helps us understand how the NRPSs can use so very many building blocks to make the many different compounds and therapeutics.” Materials provided by McGill University. Note: Content may be edited for style and length.
During the course of our weather unit we learned about the water cycle, how clouds form, and of course, rain. We thought it would be fun to make a rain cloud in a jar as part of our learning about rain. Note: You'll find more weather-related activities on my Weather Unit Study page. How does rain form? As part of the water cycle, water in oceans, lakes, and rivers turns into gaseous water vapor when heated by the sun in a process called evaporation. The evaporated water rises into the air. As it goes higher, it encounters cooler and cooler temperatures, which causes the water vapor to condense back into liquid water droplets. When enough of these liquid water droplets come together, they form a cloud. The liquid water droplets that make up a cloud are very, very small – about 1/100 mm. At this size, the water droplets are so small they practically float on air. They are far too small to drop to the ground as rain. However, water droplets inside a cloud are always moving and bumping into each other. Sometimes, water droplets collide and join together, forming bigger water droplets. If these droplets reach at least 1/10 mm in size, they are big enough to fall to the ground as rain. Making a rain cloud in a jar We decided to model the rain formation process by making a rain cloud in a jar. To do this, we gathered the following materials: - Shaving cream - Clear jar - Container of water dyed blue (you must dye the water so it will be seen when it falls through the rain cloud) - Pipette (we own these pipettes but I also love the one in this set) We started by filling our jar nearly to the top with the plain (non-dyed) water. We filled it until there was only about 2″ of space between the top of the water and the top of the jar. Then we squirted shaving cream on top of the water in order to make our "cloud." We allowed the shaving cream to fill up over the top of the jar for a fluffy, cloud-like look.
Then, we used our pipettes to drip blue water onto our rain cloud. At first, nothing much happened. You can liken this to a cloud being filled with water droplets, but the water droplets are not yet big and heavy enough to fall to the ground as rain. However, we kept adding blue water to our cloud, and eventually the cloud became saturated enough to start “raining.” We saw beautiful streaks of blue water falling from the cloud into the water below. And there we had a rainstorm in a jar. But this rain storm, thankfully, didn’t require an umbrella. 🙂 More weather resources More weather posts from Gift of Curiosity: - Books about the weather - Weather 3-part cards - DIY weather station - Water cycle demonstration - Two ways to make a cloud in a jar - Cloud classification activities - Cloud classification craft - DIY weather vane - Wind resistance experiment - Make a tornado in a bottle - How do hurricanes form? - Make a hurricane - Printable weather Bingo game - Printable weather I Spy game
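The droplet sizes mentioned in the post (about 1/100 mm for a cloud droplet, at least 1/10 mm for rain) imply how much merging it takes before a cloud "rains." Since volume scales with the cube of the diameter, a quick check:

```python
cloud_d = 0.01  # cloud droplet diameter, mm (~1/100 mm)
rain_d = 0.1    # minimal raindrop diameter, mm (~1/10 mm)

# Volume scales as diameter cubed, so this many cloud droplets
# must merge to build one minimal raindrop.
n = (rain_d / cloud_d) ** 3
print(f"about {n:.0f} cloud droplets per raindrop")  # about 1000
```

So roughly a thousand cloud droplets have to collide and join before one drop is heavy enough to fall, which is why the jar cloud only "rains" once it is well saturated.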
An exothermic reaction is a chemical reaction in which the reacting substances release energy as heat. An example of this is combustion. Exothermic reactions transfer energy to the surroundings. The reaction that does the complete opposite (it absorbs heat) is an endothermic reaction. The energy is usually transferred as heat energy, causing the reaction mixture and its surroundings to become hotter. The temperature increase can be detected using a thermometer. Some examples of exothermic reactions are: combustion (burning fuels), neutralization reactions between acids and alkalis, and many oxidation reactions, such as the rusting of iron.
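The temperature rise mentioned above can be turned into a number for the heat released using the standard relation q = m·c·ΔT. A minimal sketch (the mass, specific heat capacity, and temperatures below are made-up illustrative values, not measurements):

```python
def heat_transferred_joules(mass_g, specific_heat_j_per_g_k, t_initial_c, t_final_c):
    """Heat gained by the mixture, q = m * c * delta-T.
    A positive result means the mixture warmed up, i.e. the
    reaction released heat and was exothermic."""
    return mass_g * specific_heat_j_per_g_k * (t_final_c - t_initial_c)

# 100 g of aqueous mixture (c of water ~ 4.18 J/(g*K)) warming from 21 C to 29 C:
q = heat_transferred_joules(100, 4.18, 21, 29)
print(f"q = {q:.0f} J")  # q = 3344 J
```

A negative q (temperature falling) would indicate an endothermic reaction instead.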
Injuries to the spinal cord often cause paralysis and other permanent disabilities because severed nerve fibers do not regrow on their own. During the past few years a number of paralyzed patients have experienced remarkable improvement as a result of stem cell therapy. When their own stem cells or those extracted from cord blood were injected into the spinal column, they went straight to the damaged nerves and helped them regenerate. One limitation is that the treatment must be given as soon as possible after the injury. When the same injections were given to patients who had been paralyzed for a year or more, they were rarely successful. Within the first few months after a spinal injury, the nerves lose their ability to regenerate even with the introduction of new stem cells. Now, scientists of the German Center for Neurodegenerative Diseases (DZNE) have succeeded in releasing a molecular brake that prevents the regeneration of nerve connections. Treatment of mice with Pregabalin, a drug that acts upon the growth-inhibiting mechanism, caused damaged nerve connections to regenerate. Researchers led by neurobiologist Frank Bradke report on these findings in the journal Neuron. Human nerve cells are interconnected in a network that extends to all parts of the body. In this way control signals are transmitted from head to toe, while sensory inputs flow in the opposite direction. For this to happen, impulses are passed from neuron to neuron, not unlike a relay race. Damage to this wiring system can have drastic consequences, particularly if it affects the brain or the spinal cord. This is because the cells of the central nervous system are connected by long projections. When severed, these projections, which are called axons, are unable to regrow. Neural pathways that have been injured can only regenerate if new connections arise between the affected cells. In a sense, the neurons have to stretch out their arms, i.e. the axons have to grow.
In fact, this happens in the early stages of embryonic development. However, this ability disappears in the adult. Can it be reactivated? This was the question Professor Bradke and co-workers asked themselves. “We started from the hypothesis that neurons actively down-regulate their growth program once they have reached other cells, so that they don’t overshoot the mark. This means, there should be a braking mechanism that is triggered as soon as a neuron connects to others,” says Dr. Andrea Tedeschi, a member of the Bradke Lab and first author of the current publication. In mice and cell cultures, the scientists started an extensive search for genes that regulate the growth of neurons. “That was like looking for the proverbial needle in the haystack. There are hundreds of active genes in every nerve cell, depending on its stage of development. To analyze the large data set we heavily relied on bioinformatics. To this end, we cooperated closely with colleagues at the University of Bonn,” says Bradke. “Ultimately, we were able to identify a promising candidate. This gene, known as Cacna2d2, plays an important role in synapse formation and function, in other words in bridging the final gap between nerve cells.” During further experiments, the researchers modified the gene’s activity, e.g. by deactivating it. In this way, they were able to prove that Cacna2d2 does actually influence axonal growth and the regeneration of nerve fibers. Cacna2d2 encodes the blueprint of a protein that is part of a larger molecular complex. The protein anchors ion channels in the cell membrane that regulate the flow of calcium particles into the cell. Calcium levels affect cellular processes such as the release of neurotransmitters. These ion channels are therefore essential for the communication between neurons. In further investigations, the researchers used Pregabalin (PGB), a drug that had long been known to bind to the molecular anchors of calcium channels. 
Over a period of several weeks, they administered PGB to mice with spinal cord injuries. As it turned out, this treatment caused new nerve connections to grow. "Our study shows that synapse formation acts as a powerful switch that restrains axonal growth. A clinically-relevant drug can manipulate this effect," says Bradke. In fact, PGB is already being used to treat lesions of the spinal cord, although it is applied as a pain killer and relatively late after the injury has occurred. "PGB might have a regenerative effect in patients, if it is given soon enough. In the long term this could lead to a new treatment approach. However, we don't know yet." In previous studies, the DZNE researchers showed that certain cancer drugs can also cause damaged nerve connections to regrow. The main protagonists in this process are the microtubules, long protein complexes that stabilize the cell body. When the microtubules grow, axons do as well. Is there a connection between the different findings? "We don't know whether these mechanisms are independent or whether they are somehow related," says Bradke. "This is something we want to examine more closely in the future."
Art and design are seen as distinct disciplines, but there are natural similarities between them, and many of the key elements are the same. The elements and principles are often thought of as belonging to a set of clear categories, and these apply to works of any purpose. The elements in works of art and design are a reflection of the principles used to create them. Color is an essential element in design and art, expressing basic human emotions. Color has a number of properties affecting its appearance, including hue (the specific color), intensity (how strong or pure the color is) and value (how light or dark the color is). Significant use of color in art and design often involves contrasts between colors as well as individual color elements. Shape or Form Shapes in an artwork or design are defined areas, sometimes delineated or indicated by the other elements present. Shape is typically thought of as two-dimensional, and form as three-dimensional. Shapes in a specific work may have an impact in their own right or in terms of how they fit within the design as a whole. In design, a line can be a visible, physical mark connecting two or more points or an implied line, drawing the viewer's eye in a particular direction. A line may be defined by surrounding objects or a clearly signified outline to a shape. Line affects the arrangement and appearance of the objects in an artwork or design object. Space is typically thought of in two senses. It can be an indicator of physical dimensions or depth and also of empty areas inside or outside an object. Space can be a factor in both two- and three-dimensional designs, indicating how elements are situated in relation to one another. Texture is an indicator of the touch quality of an element in an artwork or design. Texture implies that an element has some quality such as roughness, smoothness, heat, cold, softness or hardness. Texture can have a significant impact on how a piece of work is perceived.
Balance refers to the way in which the elements in an artwork or design are weighed up against one another horizontally, vertically or on any other axis. Balance in art can be achieved using symmetry or asymmetry, both of which create significantly different effects on the viewer's eye. Proportion is a measure of the sizes and areas occupied by elements in a design or artwork, relative to one another. Proportion therefore indicates something about how the individual parts in an artwork relate to each other and will often involve scaling or distortion to create different visual effects. Emphasis is used in designs and compositions to imply a sense of relative importance for elements, some having more or less dominance than others. An effective use of emphasis in an artwork can affect the way in which the viewer's eye is drawn towards certain areas as well as how the objects within them are perceived.
Using Color to make an Impact in Your Classroom It's a well-known fact that color can enhance your mood, and while no one should go as far as ingesting yellow paint, you can use color in your classroom to positively impact your students. Most classrooms look like this: white walls, white tile, and boring seated desks. If students see that every day, do you think they'll be inspired to be creative and attentive? Colors can impact people in many ways. When you go to McDonald's, the walls and tables are red because the color increases energy and appetite. So, how do you want your students to be affected in your classroom? Tips for using color effectively When you open a new box of Crayola crayons, what's the most satisfying part? The fact that they're all color-coded! Seeing someone purposefully mix them all up kind of makes you feel bad, too. That's how color and organization affect our thoughts and feelings. So, use this color association to your advantage. Create A Color Palette for Your Classroom Choosing a color palette will help you set a mood for the classroom, and it can keep your decorations organized. No one wants to walk into a room that is decorated with 10 different colors. That would be too much for your eyes to handle, and in a classroom, it can lead to a chaotic atmosphere. Instead, choose a few soothing colors. A palette example could be using light blue and light purple as your base colors and adding an accent color of yellow for a pop of bright energy. These colors can work together to create a classroom atmosphere of well-being, calmness, creativity, and happiness. Pam Schiller, PhD, a curriculum specialist, author, and speaker, wrote the book Start Smart: Building Brain Power in the Early Years, Revised Edition, and created this color chart for classrooms:

| Color | Effect |
| --- | --- |
| Red | Creates alertness and excitement. May be disturbing to anxious individuals. |
| Blue | Creates a sense of well-being. Sky blue is tranquilizing. Can lower temperature. |
| Yellow | Creates a positive feeling. Optimum color for maintaining attention. |
| Brown | Promotes a sense of security and relaxation. |
| Off-White | Creates positive feelings. Helps maintain attention. |

This chart can be a resource for you to choose the most effective color palette for your classroom atmosphere goals. Using Color with Different Age Groups Color can be used to positively impact students from pre-K to high school; it's all about where it is placed, though. With younger students, you can use red placemats and bowls to help those picky eaters during lunch. And when nap time rolls around, you can dim the lights and use color-changing lightbulbs to give the classroom a light blue haze for relaxation. This will encourage them to calm down and rest. You can also use color placement throughout the classroom with older students. When creating work nooks, think about what the students will be doing there. If the space is meant for reading, then decorate with blues, greens, and browns. This will signal their brains to be attentive to their reading material. If you have a math station, you might want to consider white and orange colors to encourage the students to be alert and positive about their work. Color can be used on everything from tables to wall hangings. You can also use colored tablecloths, decorations, and lights to change the mood of your classroom for each lesson or workday. The impact that color has on your students can change the way your class is run, so try to implement some of these color tactics to have the best school year yet!
April 29, 2021 feature A tactile sensing foot to increase the stability of legged robots In order to effectively navigate real-world environments, legged robots should be able to move swiftly and freely while maintaining their balance. This is particularly true for humanoid robots, robots with two legs and a human-like body structure. Building robots that are stable on their legs while walking can be challenging. In fact, legged robots typically have unstable dynamics, due to their pendulum-like structure. Researchers at Hong Kong University of Science and Technology recently developed a computer vision-based robotic foot with tactile sensing capabilities. When integrated at the end of a robot's legs, the artificial foot can increase a robot's balance and stability during locomotion. "Our recent paper focuses on the application of vision-based tactile sensing on legged robots," Guanlan Zhang, one of the researchers who carried out the study, told TechXplore. "It is based on the idea that tactile/haptic sensing plays an important role in human interaction with the environment." The overall objective of the recent study by Zhang and Yipai Du, under the guidance of their advisor Professor Michael Y. Wang at HKUST Robotics Institute, was to develop robots that can sense surfaces while completing tasks within a given environment, just as humans would. More specifically, they wanted to allow robots to balance their legs by sensing the ground beneath them. To achieve this, they inserted a soft, artificial "skin" under their robotic foot and installed a camera inside it, just above the "skin." "We deliberately painted special patterns on the inside of the skin, and the camera we used can capture this pattern," Zhang explained. "As the foot touches the ground, the soft skin will deform due to external forces. 
The pattern will also deform, and through the deformation of the pattern, we are able to obtain contact information such as the degree of contact angle between the foot and the ground and tilting of the leg." The artificial foot developed by the researchers can collect far richer information about the surface a robot is walking on than conventional sensors. This information can then be used to improve a robot's stability in scenarios where balancing systems based on traditional sensors might fail or perform poorly. To convert images collected by the foot into contact-related data, the researchers used a new deep learning framework they developed. Subsequently, they carried out tests to evaluate the stability of a robotic leg with the foot integrated in it. They found that the foot could successfully estimate both the tilting angle of the surface beneath it and the foot's pose. In addition, Zhang and his colleagues carried out a series of experiments to test the overall feasibility and effectiveness of the tactile robotic system they created. Their system significantly outperformed conventional single-legged robotic systems, enabling greater balance and stability. "During our experiments, we found that the information contained in the contact phenomenon is more than we expected," Zhang said. "We thus abandoned some redundant knowledge obtained by the sensor. However, high-level information, such as events (slip, collision, etc.) may also be detected or predicted by the tactile sensing foot." In the future, the robotic foot created by this team of researchers could be used to develop legged robots that can maintain their stability when walking on different terrains and surfaces. In addition, it could enable more complex leg movements and locomotion styles in humanoid robots. "In the future, we would like to apply our sensor on a real legged robot and conduct experiments related to robot-environment interaction," Zhang said. 
"We want to focus on how the tactile information is related to some events in locomotion, such as slip. And how to make use of this information in robot control." © 2021 Science X Network
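The underlying idea, a camera watching painted markers deform as the soft skin presses against the ground, can be illustrated with a simple calculation. The sketch below is not the authors' deep learning framework; it is a hypothetical, minimal alternative that fits a plane to marker displacement magnitudes on a regular grid and reads off a tilt direction and a tilt-magnitude proxy. The grid layout and the linear displacement model are assumptions made purely for the example.

```python
from math import atan2, degrees, hypot

def estimate_tilt(markers):
    """Estimate tilt from marker displacements.

    markers: list of (x, y, d) tuples, where d is the observed displacement
    magnitude of the painted marker at image position (x, y). On a regular
    grid the x- and y-slopes of the best-fit plane d = a*x + b*y + c reduce
    to simple covariance ratios; the gradient (a, b) points toward the
    more-compressed side of the skin.
    """
    n = len(markers)
    mx = sum(m[0] for m in markers) / n
    my = sum(m[1] for m in markers) / n
    md = sum(m[2] for m in markers) / n
    a = (sum((m[0] - mx) * (m[2] - md) for m in markers)
         / sum((m[0] - mx) ** 2 for m in markers))
    b = (sum((m[1] - my) * (m[2] - md) for m in markers)
         / sum((m[1] - my) ** 2 for m in markers))
    # Direction of the tilt in the image plane, and a magnitude proxy
    # that a calibration step would map to an actual contact angle.
    return degrees(atan2(b, a)), hypot(a, b)

# Synthetic 5x5 marker grid: skin compressed more toward +x,
# as if the foot were tilted in that direction.
grid = [(i * 0.25, j * 0.25, 0.8 * i * 0.25 + 0.05)
        for i in range(5) for j in range(5)]
direction, slope = estimate_tilt(grid)
print(round(direction, 1), round(slope, 2))  # tilt toward +x: 0.0 0.8
```

A real system would, as the researchers describe, learn this mapping from camera images rather than assume a linear model, but the plane-fit version shows how a spatial pattern of deformation encodes both the direction and the degree of contact.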
A Jewish immigrant who is saving money to bring his wife and children to join him in America creates ornate horses for a carousel on Coney Island, one for each member of his family. Grade Range: 6-12 Resource Type(s): Lessons & Activities Duration: 15 minutes Date Posted: 9/24/2012 Students can guide themselves to a deeper understanding of the Statue of Liberty with this 2-page activity sheet. The sheet includes description and analysis questions to use alongside the digitized version of a piece of art featuring the Statue of Liberty, suggestions for other related online resources, and possible extension activities. The student activity sheet is written for middle or high school students who are fluent in English. The digitized document is a part of Preparing for the Oath: U.S. History and Civics for Citizenship, a learning portal for recent immigrants studying for naturalization. The online descriptive captions are written at a “low-intermediate” ESL level.
Our Climate Challenge resources for 11–14 year olds focus on the human impact of climate change: how communities around the world are being affected by climate change and how people are responding and adapting to these challenges. The activities link to a number of curricular areas including science, English and geography. Explore and develop existing ideas about climate change: - Carry out a science investigation to develop understanding of the greenhouse effect. - Identify human causes of climate change. - Compare carbon footprints. - Use a consequence web, reading mystery and role play to develop awareness of the impacts of climate change. - Play a climate change vulnerability game. - Investigate how communities are adapting to the effects of climate change. - Work with others to take action against climate change. How to cite this resource Citation styles vary so we recommend you check what is appropriate for your context. You may choose to cite Oxfam resources as follows: Author(s)/Editor(s). (Year of publication). Title and sub-title. Place of publication: name of publisher. DOI (where available). URL Our FAQs page has some examples of this approach.
A new study has shown that the microbes present in your gut could have a significant effect on cholesterol and overall cardiovascular health. It is bizarre to imagine, but there are trillions of microbes currently residing throughout our entire bodies. Though the majority of them are harmless, each one plays a key role in maintaining the functions of the body. According to a report from Time, a new study has shown that the bacteria living in our guts have more to do with regulating metabolism than previously thought. Microbes help digest food and keep invasive pathogens at bay, but it also turns out that they have a significant effect on blood cholesterol levels. The study, led by Jingyuan Fu from the University Medical Center Groningen in the Netherlands, was published in the journal Circulation Research. It revealed how the makeup of the different types of bacteria in the intestines can influence weight and cardiovascular health. The study examined blood and fecal samples from 893 people and genetically sequenced the bacterial colonies present in each. They discovered 34 different microbial sequences that appeared to influence body mass index, or BMI, and lipid levels in the blood. After the scientists controlled for factors like age, gender, and genetics, they found that gut bacteria were responsible for up to 6 percent of the variation in patients’ triglyceride and HDL levels. Triglycerides are connected to an increased risk of diabetes and cardiovascular disease, whereas HDL cholesterol is linked to a decreased risk of these conditions. The gut bacteria accounted for 26 percent of HDL level differences after the scientists looked at the results in light of the entire microbiome in the digestive tract. The numbers reveal a pattern, but there will need to be more studies conducted to paint a clearer picture of the way microbes affect cholesterol and overall cardiovascular health.
The historical importance this place holds is, as said earlier, traumatising but worth knowing. The Andaman Islands were used to hold Indian political activists as prisoners even before the Cellular Jail was constructed. After the revolt of 1857 (the Sepoy Mutiny) the Britishers used these islands as prisons for such freedom fighters, but the Cellular Jail itself was built between 1896 and 1906. After that revolt many activists were executed on the spot, while two hundred of them were sent to these islands for life imprisonment under the charge of the jailer David Barry and a military doctor, Major James Pattison Walker, who had earlier served as a warden at the prison in Agra. These islands were selected for the punishment of life imprisonment because they were located in isolation, far away from the main Indian subcontinent. Moreover, the journey across the kaala paani was considered threatening by the prisoners, as crossing the seas was believed to lead to the loss of caste, cutting them off from their social world. With time, as the independence struggle grew stronger and patriotic activists gained enough confidence to give momentum to their fight, more and more activists were deported to these islands. The penal settlement, under which the Andamans were used to hold political rebels captive, received a recommendation to insert a ‘penal stage’ into the transportation sentence. This was suggested by Charles James Lyall, home secretary in the British government, who was appointed to review the settlement, and A. S. Lethbridge, a surgeon in the British administration. The penal stage was included because these men felt that the purpose of sending the prisoners there was not being fulfilled, and that the transported prisoners should therefore be subjected to sessions of harsh treatment once they arrived. The result was the construction of the Cellular Jail.
The prisoners were chained and made to work on the construction of buildings, prisons and harbour facilities in order to help the British colonise the Andaman Islands. The British soldiers were harsh towards the prisoners and would beat them to get tough labour chores done. The architecture of the jail is as remarkable and unique as its history. The original building was puce coloured, a dark red-brown purple, as the bricks brought from Burma were puce coloured. As for the basic structure, the building had seven wings of three floors each. The seven wings radiated from a central tower in straight lines, like the spokes of a bicycle wheel. The central tower was used by the British guards to keep an eye on the inmates, and it held a bell to raise an alarm for everyone at once when required. The whole design was inspired by the Panopticon, a prison concept devised in the 18th century by the English philosopher Jeremy Bentham, in which the building is structured so that all the inmates can be guarded from a single point, here the central tower. There were no dormitories; a total of 696 cells were constructed across the seven triple-floored wings. Each cell measured 4.5 x 2.7 metres (14.8 feet x 8.9 feet) with a ventilator located at a height of 3 metres. The very nature of these cells, and the vast number of them, led to the prison being named the Cellular Jail. The goal was to subject every prisoner to solitary confinement, and hence the wings were designed in such a way that the front of each faced the back of the adjacent wing, ensuring zero communication between the prisoners.
As for prisoner life here, it is said that the British Raj brought Indian rebels and dissidents to this remote island to subject them to torture, medical tests and forced labour, and, for many of them, death. Of the 80,000 political prisoners held captive by the British there, only a few survived the torturous blows. The independence activists held here as prisoners included Fazl-e-Haq Khairabadi, Babarao Savarkar and Sachindra Nath Sanyal, among others. The solitary confinement rules were so strict that the Savarkar brothers, Babarao and Vinayak, did not even know that they were in two different cells of the same Cellular Jail for two years. The prisoners still tried to revolt: 238 prisoners attempted to escape in March 1868. They were caught by April, and 87 of them were hanged. In another act of resistance, some prisoners went on hunger strike in May 1933 to protest against the torturous treatment; three of them, namely Mahavir Singh (an associate of Bhagat Singh), Mohan Kishore Namadas and Mohit Moitra, were killed by force-feeding. In another surprising turn of events, the Andaman Islands were invaded by the Empire of Japan in March 1942. The Cellular Jail was then used to hold suspected British India supporters and members of the Indian Independence League, who were later tortured and killed. During this period the islands were placed under Subhas Chandra Bose, who hoisted the Indian flag on these islands for the first time. He announced the Azad Hind Government and declared the territory free from British rule. But the British took the islands back under their control on 7 October 1945.
The islands were surrendered to Brigadier J. A. Salomons a month after the surrender of Japan at the end of World War II. After India gained independence, two wings of the Cellular Jail were demolished, which was followed by a lot of protests from former prisoners and leaders, because the demolition was seen as a way of erasing evidence of a dark chapter in India’s struggle for freedom. In 1963 a hospital, named the Govind Ballabh Pant Hospital, was constructed in the premises of the Cellular Jail.

Significance of the term Kaala Paani

The significance of the term kaala paani, when used with the Cellular Jail, is that the prisoners were held captive on islands surrounded by black waters. ‘Kaale paani ki saza’ was the phrase used for the act of putting political activists in these cells. It was clear that even if they tried to escape they could not, because the island has black water all around; no matter how hard they tried, they could not run away from the torturous life there. The beauty of these islands was turned into a dark instrument for holding these independence rebels, who knew only that they were far away from their homes with the kaala paani in between. ‘Kaale paani ki saza’ is as horrifying as it sounds, because the prisoners faced the kind of atrocities that shake us to the core.
Life on Saturn's moon? All the ingredients are there: water, chemistry, organics 'Something is building upon itself and making itself more complex' Last fall, as NASA’s celebrated Cassini spacecraft spiraled toward its final, fatal descent into Saturn’s clouds, astrochemist Morgan Cable couldn’t help but shed a tear for the school-bus-size orbiter, which became a victim of its own success. Early in its mission, while flying past Saturn’s ice-covered moon Enceladus, Cassini discovered jets of ice and saltwater gushing from cracks in the south pole – a sign that the body contained a subsurface ocean that could harbor life. When the orbiter began to run low on fuel, it smashed itself into Saturn rather than risk a wayward plunge that would contaminate the potentially habitable world. Now, from beyond the grave, the spacecraft has offered yet another prize for scientists. New analysis of Cassini data suggests that those icy plumes shooting into space contain complex organic compounds – the essential building blocks of living beings. The fact that an aging orbiter not designed for life detection was able to sense these molecules – which are among the largest and most complex organics found in the solar system – makes the icy moon an even more tantalizing target in the search for extraterrestrial life, said Cable, a research scientist at the Jet Propulsion Laboratory who was not involved in the new research. “This is a powerful study with a powerful result,” she said. The findings published Wednesday in the journal Nature rely on data collected by two Cassini instruments – the Cosmic Dust Analyzer and the Ion and Neutral Mass Spectrometer – as the spacecraft flew through Saturn’s outermost ring and the plumes of Enceladus (pronounced “en-SELL-a-dus”). Previous research using these instruments has detected small organic molecules such as methane, which consists of four hydrogen atoms attached to a single carbon. 
The INMS has also detected molecular hydrogen – a chemical characteristic of hydrothermal activity that provides important fuel for microbes living around seafloor vents on Earth. The molecules reported in the new Nature paper are “orders of magnitude” larger than anything that’s been seen before, according to lead author Frank Postberg, a planetary scientist at the University of Heidelberg in Germany. There were stable carbon ring structures known as aromatics as well as chains of carbon atoms linked to hydrogen, oxygen, maybe even nitrogen. Some of the molecules sensed by the CDA were so large that the instrument couldn’t analyze them. This suggests that the organics Cassini found are only fragments of even bigger compounds, Postberg said. There may well be huge polymers – many-segmented molecules such as those that make up DNA and proteins – still waiting to be discovered. “We astrobiologists get excited about larger molecules and that sort of thing because it means that something is building upon itself and making itself more complex,” said Kate Craft, a planetary scientist at Johns Hopkins University Applied Physics Laboratory who was not involved in the research. The molecules Cassini has detected may be produced abiotically – without the involvement of life. But they are also the kinds of compounds that microbes on Earth like to eat, and they might even be byproducts of microbial metabolisms. “Put it this way, if they did all these tests and didn’t see these larger molecules, [Enceladus] wouldn’t seem to be habitable,” Craft said. “But these findings . . . are reason to say, ‘Hey, we need to go back there and take a lot more data.’ ” Scientists believe that the gravitational influence of Saturn squeezes and flexes the porous, rocky material at Enceladus’s core, generating heat. That heat allows for chemical interactions between the salty ocean and the seafloor. 
On Earth, such water-rock interactions provide fuel for chemotrophs – organisms that obtain energy by breaking down chemicals in their environments – and support vast ecosystems in the ocean’s deepest, darkest depths. Postberg and his colleagues propose that the organic molecules generated in Enceladus’s ocean’s depths eventually float to the surface, where they form a thin film just beneath the planet’s icy crust. Earth’s oceans are capped by a similar film, they note – a millimeter-thick blanket of tiny microbes and organic matter that serves as an important interface between sea and sky. Research shows that this layer helps drive weather; as bubbles burst from breaking waves, particles from the film are lifted into the air, where they provide a nucleus around which water can condense to form clouds and fog. A similar process on the surface of Enceladus’s ocean may form ice crystals with organics at their core, Postberg said. These grains are then sucked upward through cracks in the moon’s crust called “tiger stripes” and then flung into the vacuum of space. Enceladus’s plumes are extremely tenuous – more like a thin veil than a jet from a fire hose – and scientists have questioned whether a spacecraft flying through the spray would be able to collect enough organics to draw conclusions about their origin. This result, Postberg said, shows that Enceladus “is kind to us and delivers its organic inventory in high concentrations into space.” Cable is deputy project scientist for a concept called Enceladus Life Finder, which would use more advanced instruments than the ones on Cassini to sample the plume during a series of flybys. The mission has not been funded by NASA, and the space agency has no projects in development to return to Enceladus. She, Postberg and Craft expressed hope that this latest finding would generate interest in a new mission to the icy moon. 
“Enceladus is screaming at us that it has all the ingredients for life as we know it: water, chemistry, organics,” Cable said. “We have to go back.”
Information adds context and meaning to data, so that people can understand it. Data must have some kind of headings or structure around it to become information. Computer input devices often collect data automatically: sensors can automatically measure a temperature, or bar codes can be scanned at a till. This data becomes information once it is put into a framework or structure that provides context. In both these cases the data will be read into a database for processing. A database is a collection of data that can be used as information. You learnt in the previous chapter that data is compiled in the form of tables containing records and fields. The tables form the basis of all types of processing in a database. A database in which ALL the data is stored in a single table is known as a flat-file database (e.g., an Excel worksheet). Another type of database stores different types of data in different files, with an application called a Database Management System (DBMS) linking the files together. This type is called a relational database (e.g., a Microsoft Access database). In this chapter you are going to learn about a relational GUI database which is part of the Microsoft Office suite: Microsoft Access.

Starting MS Access

To start MS Access, follow these steps:
1. Click the Start button.
2. From the Start - All Programs menu, click Microsoft Access.
3. The MS Access window will open.

MS Access Window Components

The MS Access window has various components that are used for a variety of tasks. These components are discussed below:
1. Title Bar - This is the topmost bar, which displays the title.
2. Menu Bar - This bar, below the title bar, displays the menus of available commands.
3. Database Window - When a database is opened, all its components are displayed in a separate sub-window of the MS Access window called the database window.
4.
Object Buttons - When a database is open, these buttons let you navigate through the various database objects such as Tables, Queries, Forms, Reports, etc.
5. Access Toolbar - This is the bar below the menu bar; it offers tools for performing various standard functions.
6. Status Bar - This bar is located at the lower left corner of the MS Access window and reports the progress of database processing.
7. Mode Indicators - These are located at the lower right corner of the MS Access window and indicate the various modes under which database processing is taking place.

Creating an MS Access Database

When MS Access starts, it offers you the option to either create a database or open an existing database:
i. Create a blank database (and then create tables, forms, queries, etc.)
ii. Create a database using a wizard

Creating a Database Using the Wizard
i. Select the option for database creation through the wizard. First, select the option "Access database wizards, pages, and projects" from the opening dialog that appears when you start MS Access, and click the OK button.
ii. Select the desired database wizard. Once you click the OK button, MS Access opens a new dialog box; on the Databases tab, select the desired wizard. For inventory management, we selected the Inventory Control wizard.
iii. Specify the database name. Once you click OK, a file dialog for the new database opens and asks you to specify the name for the database being created.
iv. The database wizard starts. You are now taken to the newly created database window, where you will create tables, forms, etc. Once you are through with reading or providing information, click the Next button to move to the next step.
v. Specify fields for tables. First of all, the wizard asks you to specify the tables and the fields in them. You can select a field by checking its box, or remove a field by unchecking its box. After specifying the fields for all tables, click Next.
vi. Choose a screen display style.
After tables and fields, the MS Access database wizard asks you to choose a style for screens from the given choices; click Next.
vii. Choose a report print style. After the screen display style, choose a style for printed reports. Select the desired print style from the given options and click Next.
viii. Specify the database title. Finally, specify the desired database title. You can also decide to include or exclude a picture.
ix. Finally, start the database. You have now reached the last step of creating a database through the wizard; tick the box if you want to start the newly created database right away, then click Finish.
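The flat-file versus relational distinction described earlier can also be sketched in code. Access itself is driven through its GUI, so the example below uses Python's built-in sqlite3 module purely as an illustration; the suppliers and items tables are hypothetical, chosen to echo the inventory-management example above.

```python
import sqlite3

# A relational database keeps different kinds of data in separate,
# linked tables (as Access does), instead of one flat table.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE suppliers (
        supplier_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE items (
        item_id     INTEGER PRIMARY KEY,
        description TEXT NOT NULL,
        quantity    INTEGER,
        supplier_id INTEGER REFERENCES suppliers(supplier_id)
    );
""")
con.execute("INSERT INTO suppliers VALUES (1, 'Acme Ltd')")
con.execute("INSERT INTO items VALUES (10, 'Widget', 25, 1)")

# The shared supplier_id field lets one query combine both tables.
# A flat-file database would have to repeat the supplier name on
# every item row instead.
row = con.execute("""
    SELECT items.description, items.quantity, suppliers.name
    FROM items JOIN suppliers USING (supplier_id)
""").fetchone()
print(row)  # ('Widget', 25, 'Acme Ltd')
```

In Access the same structure would be built through the table designer and the Relationships window rather than typed as SQL, but the idea, records in one table referring to records in another through a shared field, is identical.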
Ballooning for Cosmic Rays

Astronomers have long thought that supernovas are the source of cosmic rays, but there's a troubling discrepancy between theory and measurements. An ongoing balloon flight over Antarctica could shed new light on the mystery. January 12, 2001 -- Hold out your hand for 10 seconds. A dozen electrons and muons just zipped unfelt through your palm. The ghostly particles are what scientists call "secondary cosmic rays" -- subatomic debris from collisions between molecules high in Earth's atmosphere and high-energy cosmic rays from outer space. Cosmic rays are atomic nuclei and electrons that streak through the Galaxy at nearly the speed of light. The Milky Way is permeated with them. Fortunately, our planet's magnetosphere and atmosphere protect us from most cosmic rays. Even so, the most powerful ones, which can carry a billion times more energy than particles created inside atomic accelerators on Earth, produce large showers of secondary particles in the atmosphere that can reach our planet's surface. Above: Supernova explosions, like the one that created the expanding Crab Nebula (pictured), may be the source of galactic cosmic rays. Where do cosmic rays come from? Scientists have been trying to answer that question since 1912, when Victor Hess discovered the mysterious particles during a high altitude balloon flight over Europe. Galactic cosmic rays shower our planet from all directions. There's no definite source astronomers can pinpoint, although there is a popular candidate. "It takes an awful lot of power to maintain the galactic population of cosmic rays," says Adams. "Cosmic rays that lose their energy or leak out of the Galaxy have to be replenished. Supernovae can do the job, but only if one goes off every 50 years or so." The actual supernova rate is unknown.
Observers estimate that one supernova explodes somewhere in the Galaxy every 10 to 100 years -- just enough to satisfy the energy needs of cosmic rays. But there could be a problem with the supernova theory, says Adams. "A supernova blast blows a bubble in the interstellar medium that grows until the shock wave runs out of energy," he explained. "They can accelerate particles up to some point, about 10^14 electron volts (eV) per nucleon, but not beyond that. Below an energy of 10^14 eV, all of the different cosmic ray species -- protons, helium nuclei, etc. -- should have the same kind of energy spectrum: a power law with index around -2.7." Left: This log-log plot shows the flux of cosmic rays bombarding Earth as a function of their energy per particle. Researchers believe cosmic rays with energies less than ~3x10^15 eV come from supernova explosions. The origin of cosmic rays much more energetic than that (above the "knee" in the diagram) remains a mystery. A "power law" spectrum is one that looks like a straight line on a piece of log-log graph paper. In the energy range ~10^10 eV to 10^14 eV, the supernova theory of cosmic ray acceleration predicts that the power law spectrum of protons should have the same slope as the power law spectra of heavier nuclei (about -2.7). The problem is that when scientists compare the energy spectra of protons and helium nuclei, the two don't resemble one another as much as they should. Both are power laws, as expected, but "existing data indicate a possible spectral index difference between protons and helium of about 0.1," says Eun-Suk Seo, a cosmic ray researcher at the University of Maryland. "The [slope of the] proton spectrum is close to -2.7, but the energy spectra of helium and heavier nuclei seem to be flatter. The difference is small and it might not be statistically significant." If there is a genuine discrepancy, she added, it could signal trouble for supernova models of cosmic ray acceleration.
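To get a feel for what a spectral index difference of 0.1 means over four decades of energy, here is a small illustrative calculation. The helium index of -2.6 is an assumption chosen purely to show a 0.1 offset from the proton index; it is not a measured value from the article.

```python
def power_law_flux(E, index, norm=1.0):
    """Differential flux N(E) = norm * E**index (index is negative)."""
    return norm * E ** index

# Proton spectrum with index -2.7 vs. a hypothetical, slightly flatter
# helium spectrum with index -2.6 (a 0.1 spectral index difference).
E_low, E_high = 1e10, 1e14  # roughly ATIC's energy range, in eV

# How much the helium-to-proton flux ratio grows from E_low to E_high:
ratio_change = ((power_law_flux(E_high, -2.6) / power_law_flux(E_low, -2.6))
                / (power_law_flux(E_high, -2.7) / power_law_flux(E_low, -2.7)))
print(round(ratio_change, 2))  # (1e4)**0.1 = 10**0.4, about 2.51
```

In other words, even a 0.1 index difference changes the relative abundance of helium to protons by a factor of about 2.5 across the range ATIC measures, which is why a precise spectrometer covering the whole range with one instrument matters.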
To find out if the supernova theory is indeed in peril, a team of scientists led by John Wefel (Louisiana State University) and Eun-Suk Seo, and aided by personnel from the National Science Balloon Facility, launched a helium-filled balloon from McMurdo, Antarctica on Dec. 28, 2000. The payload, which is now 120,000 feet above Earth's surface, includes a NASA-funded cosmic ray spectrometer known by its builders as the Advanced Thin Ionization Calorimeter or "ATIC" for short. "ATIC is sensitive to cosmic rays with energies between ~10^10 eV and 10^14 eV," says Wefel. By covering such a wide range of energies with a single modern spectrometer, the team hopes to measure the proton and helium cosmic ray spectra with better precision than ever before. Right: The ATIC payload hangs from a launch vehicle while the helium balloon is being filled in the background by personnel from the National Science Balloon Facility. The ATIC experiment lifted off on its circumpolar trip to measure Galactic cosmic rays on Dec. 28, 2000. "The higher energy cosmic rays are rare," he continued. "For example, each day ATIC collects no more than ~10 cosmic rays with energies exceeding 10^13 eV. That's why we have to fly the balloon for such a long time, to gather enough particles for a statistically significant result." By the time ATIC lands on January 12th or 13th, the spectrometer will have been in the stratosphere counting cosmic rays for nearly two full weeks. The long flight time, more than any other reason, is why the researchers chose to fly the balloon over Antarctica. "We would be happy to fly this payload over North America," says Adams. "The problem is that we need the spectrometer to be aloft for a long time. Antarctica has two advantages: It's international territory, so we don't need to apply for lots of overflight permissions, and the Antarctic Vortex (a circulating weather system around the south pole) keeps the balloon confined to airspace over the continent."
"If there is a difference between the proton and the helium spectra -- and that's not certain, by the way -- it won't necessarily kill the supernova model," continued Wefel. "But a discrepancy would cause problems." Theorists may have to consider the progress of supernova shock fronts in greater detail. "Every supernova explosion is an individual work of art," says Adams. "We use mathematical models that assume the explosions are spherical, but they are not. Within the blast wave itself you can see irregularities. There are bright knots, for example, where shock waves run into interstellar clouds. In crowded groups of massive stars ('OB associations') where supernovae can occur in quick succession, blast waves collide with other blast waves." It can get a little messy! Modeling such details might affect any necessary reconciliation between the theory and the data. Above: The ATIC balloon payload. Click on the image to find out how the Advanced Thin Ionization Calorimeter works. And what if the supernova model can't be rescued? "There are other possibilities," says Wefel, "but not a lot of good ones. We'll really have to look hard to find something other than supernovae that can meet the cosmic ray energy requirement." The analysis team led by Eun-Suk Seo is eager to sift through ATIC's data files after the balloon lands. The new particle counts, which the experimenters hope will be the most accurate to date in ATIC's energy range, could shed new light on the decades-old mystery of cosmic rays. Visit the ATIC home page for status reports about the ongoing balloon flight. Participants in the ATIC project include Louisiana State University, the University of Maryland, NASA, the Naval Research Laboratory, Southern University (Baton Rouge), the National Science Foundation, and collaborators from Germany, Korea and Russia. ATIC video -a collection of movies about the ATIC project Cosmic Ray Classroom Activities -from the ATIC team at LSU Cosmic Rays, what are they? 
- from NASA Goddard's "Imagine the Universe"
NASA's Scientific Ballooning Program - supported the launch, flight and recovery of ATIC.
Supernovae and Supernova Remnants - a tutorial from Harvard's Chandra Science Center "X-ray Astronomy Field Guide"

The Science and Technology Directorate at NASA's Marshall Space Flight Center sponsors the Science@NASA web sites. The mission of Science@NASA is to help the public understand how exciting NASA research is and to help NASA scientists fulfill their outreach responsibilities. For lesson plans and educational activities related to breaking science news, please visit Thursday's Classroom.

Production: Dr. Tony Phillips
Curator: Bryan Walls
Media Relations: Steve Roy
Responsible NASA official: Ron Koczor
Process of Communication
Communication is a two-way process wherein a message in the form of ideas, thoughts, feelings, or opinions is transmitted between two or more persons with the intent of creating a shared understanding. Simply put, the act of conveying intended information and understanding from one person to another is called communication. The term communication is derived from the Latin word “communis,” which means to share. Effective communication occurs when the message conveyed by the sender is understood by the receiver in exactly the way it was intended. Communication is a dynamic process that begins with the conceptualizing of ideas by the sender, who then transmits the message through a channel to the receiver, who in turn gives feedback in the form of some message or signal within the given time frame. Thus, there are seven major elements of the communication process:
- Sender: The sender or communicator is the person who initiates the conversation and has conceptualized the idea that he intends to convey to others.
- Encoding: The sender begins with the encoding process, wherein he uses certain words or non-verbal methods such as symbols, signs, body gestures, etc. to translate the information into a message. The sender’s knowledge, skills, perception, background, competencies, etc. have a great impact on the success of the message.
- Message: Once the encoding is finished, the sender has the message that he intends to convey. The message can be written, oral, symbolic, or non-verbal, such as body gestures, silence, sighs, sounds, etc., or any other signal that triggers the response of a receiver.
- Communication Channel: The sender chooses the medium through which he wants to convey his message to the recipient. It must be selected carefully in order to make the message effective and correctly interpreted by the recipient.
The choice of medium depends on the interpersonal relationship between the sender and the receiver, and also on the urgency of the message being sent. Oral, virtual, written, sound, gesture, etc. are some of the commonly used communication mediums.
- Receiver: The receiver is the person for whom the message is intended or targeted. He tries to comprehend it in the best possible manner so that the communication objective is attained. The degree to which the receiver decodes the message depends on his knowledge of the subject matter, experience, trust, and relationship with the sender.
- Decoding: Here, the receiver interprets the sender’s message and tries to understand it in the best possible manner. Effective communication occurs only if the receiver understands the message in exactly the way it was intended by the sender.
- Feedback: Feedback is the final step of the process; it ensures that the receiver has received the message and interpreted it correctly, as intended by the sender. It increases the effectiveness of the communication, as it permits the sender to know the efficacy of his message. The response of the receiver can be verbal or non-verbal.
Note: Noise represents the barriers in communication. Because of noise, there is a chance that the message sent by the sender is not received by the recipient as intended.
Distant hybridization: crosses of species, subspecies, breeds, and strains for the purpose of obtaining marketable hybrids of the first generation.
Importance of distant hybridization:
• Distant hybridization combines valuable features of the parental forms in hybrids, without a noticeable increase in viability and with an intermediate growth rate.
• The hybrids comprise both heterosis hybrids and hybrid forms with a favorable combination of parental features.
• Abortion of embryos at one stage of development or another is a characteristic feature of distant hybridization; e.g., the crossing of two species or varieties often fails when using distant parents that do not share many chromosomes.
Embryo rescue is the process by which plant breeders rescue inherently weak, immature, or hybrid embryos to prevent their degeneration. It is common in lilies, where hybridizers create new interspecific hybrids between the various lily groups (such as Asiatic, Oriental, and Trumpet).
Using pictures to teach English is a great way to incorporate visual tools in English class. It can be a daunting task for teachers, especially in the planning stage, but its effects are far-reaching, as it encourages both visual and non-visual learners to actively participate in the lesson. Use these few tips to effectively plan lessons, homework, and warm-up exercises around pictures.
Activity #1 – Caption It!
Students will learn the basic principles of caption writing and write a few captions for assigned photographs. This activity allows the students to be creative in using words. It is best integrated into topics such as Sports or Society. Using a collection of old file photos (or any photos sourced from the internet, newspapers or magazines), have students write captions for the pictures. You may want to have them work in groups of two or three to collaborate on this. Then have them read the captions aloud to the class as they show the class the picture and do group critiques of the captions.
Grammar: Present Simple, Present Continuous
Activity #2 – Explain It!
The next activity for teaching English through pictures will prompt the students to talk about the image. Unlike the previous activity, which focuses only on the central theme in the photo, this activity prompts the students to give more detail about the whole scene. Teachers should use pictures that have a detailed background and foreground so that the students have plenty of opportunities to use nouns, adjectives, and verbs where applicable. After showing the picture, the teacher will ask the students about it. Students can either talk about the picture or explain it in writing. Scaffold by going through the following vocabulary on the board:
- In the background/foreground we can see…
- On the left/right…
- At the top/bottom…
- And other useful prepositions depending on the photo
For starters, you may use the set of pictures below for this activity:
Activity #3 – Dictate It!
As one of the activities on this list that focus on listening and comprehension skills, this activity requires greater participation from the teacher.
- Once the teacher has selected an image, preferably a map, he/she describes it to the whole class.
- Students have to listen to the details and draw an image according to this information.
I would stick to a simple image with few objects and limited colors. But you can decide on the number of objects involved depending on your students’ drawing or comprehension skills. Instead of the teacher giving the dictation, you can make it pair work, where one student describes the picture to his/her partner while the other draws! Prepare 2 sets of photos so they can switch roles (optional)! You may check out the Robots ESL lesson below where I use the Picture Dictation activity as a warmer!
Activity #4 – Sequence It!
In this storytelling activity, students must put a series of pictures in order. They color the pictures and write descriptive words using adjectives, adverbs, and expressions of time and sequence. When they finish, they go in front of the class to tell their story. By doing picture sequencing before the speaking activity, students are able to organize information and ideas efficiently, thereby enhancing necessary skills such as reasoning and inferring.
Activity #5 – Instagram It!
Using the concept of Instagram is one awesome way to engage students in the lesson, topic or theme. With a little creativity, teachers can bring the visual power of photos into the classroom using these Instagram templates for students! There are many ways teachers could use “Instagram” in the classroom. The following are some simple suggestions that could turn a sleepy lesson into an exciting activity!
- A back-to-school getting-to-know-you activity. Let students introduce themselves with a short bio and some tidbits about themselves, their hobbies and interests.
- Let students explore their identities and the world around them.
Encourage students to use creativity and share thoughts, opinions and social commentary via images. Check out this link for more suggested activities using “Instagram.”
Agent noun definition: An agent is a grammatical term for a type of noun. An agent noun is a person who performs an action.
What is the Agent?
The agent in English grammar is always a noun. That is because the agent (also called the actor) is the “doer” of an action, which usually makes it the subject. Agents generally have the endings “-er” or “-or.” These suffixes, when added to a root word, mean someone who does something.
- Root word: teach
- Agent: teacher
- Root word: direct
- Agent: director
Agents and Recipient Nouns
Agents and recipient nouns are invariably linked because they either perform (agent) or receive (recipient) actions. As we mentioned above, the agent noun is the person who completes the action, or the person who does something.
- Agent: employer
- Meaning: someone who employs
- Agent: prosecutor
- Meaning: someone who prosecutes
The recipient of the agent noun’s action(s) is called the recipient noun. The recipient noun is the person who receives the action of the agent.
- Recipient noun: employee
- Meaning: someone who is employed (by the employer)
- Recipient noun: honoree
- Meaning: someone who is honored
Not all recipient nouns have “parent” agent nouns. That is, there is no such thing as an “honorer” even though the word “honoree” exists.
The Agent in the Active Voice
Most writing occurs in the active voice.
Examples of Active Voice:
- Shane plays music.
- Tony wrote a play.
In each of these sentences, the subject comes before the verb and object. The subject is “doing” the verb in the sentence. This is called the active voice. If you imagine there is an arrow connecting the subject to the verb, active voice sentences will always have an arrow going to the right. In the active voice, the agent is the subject of the sentence. The agent (actor) is the person doing the action of the sentence.
Examples of Agent in the Active Voice:
- The singer finished her recital.
- “singer” is the agent
- The singer completes the action (finished) in the active voice.
- The baker woke early.
- “baker” is the agent.
- The baker completes the action (woke) in the active voice.
The Agent in the Passive Voice
The passive voice occurs when the action is done by what seems like it should be the object.
Examples of Passive Voice:
- The play was written by Shakespeare.
It seems like Shakespeare should be the subject. However, the play is actually the subject of this sentence. A key indicator of passive voice is a “to be” verb plus a past participle.
- The ball was hit by Jeremy.
In this example, the ball is the subject, and the “doer” of the action, Jeremy, has become the object. In the passive voice, the agent is found after the verb (and often after a preposition).
Examples of Agent in the Passive Voice:
- The recital was finished by the singer.
- The cake was made by the baker.
In these sentences, the agent is at the end of each sentence (singer and baker).
Summary: What is the Grammatical Agent?
Define grammatical agent: the definition of grammatical agent is the doer of an action. In the active voice, the agent precedes the action; in the passive voice, the agent follows the action. In summary, an agent is a noun. More specifically, an agent is a person who performs an action. An agent is different from a recipient noun in that the agent performs an action and the recipient noun receives the action.
The evolution of music closely follows the evolution of humans. Primitive humans produced primitive music. As hunter-gatherers, people spent the vast majority of their time and effort on surviving. Hunting. Gathering. And defending themselves against predators. With the development of communities, and later civilizations, more time was found for cultural pursuits. Art, philosophy, and music developed as people changed over time. The history of music can be separated into three categories.
- Primitive Music
- Western Art Music
- Modern Music
Each category reflects the development of culture in a historic period.
Primitive Music: Basic Rhythms
Music is thought to be as old as mankind. Simple music, such as tapping a rock on a piece of wood, led to the creation of pleasing rhythms and chanting songs in the local language. Music in its most basic form was created this way. Primitive music, also called prehistoric music, arose prior to 1500 B.C. Music existed before reading and writing and was used by ancient cultures to preserve information, to be passed to future generations. With the development of reading and writing, music began to be codified so that the notes could be replicated by singers and musicians despite never hearing the original work. This development led to the expansion of music and is the basis for music heard today.
Western Art Music: Music Becomes More Complex
With the establishment of music as an art form came the evolution of instruments and their incorporation into complete musical works. Music created during this period is called Western Art Music and can be broken into five subcategories, each adding to its predecessor as music became more and more complex.
- Medieval Music
- Renaissance Music
- Baroque Music
- Classical Music
- Romantic Music
This period is the era of famous composers such as Mozart, Bach, and Beethoven. Music was commissioned by the wealthy and noble. Musicians were recognized as artists and entertainers.
This influx of interest led to a patronage system which allowed creative and talented musicians to subsist wholly on their talent, allowing for even more sophisticated music to be created.
Modern Music: A Celebration of Diversity and Technology
Much like humans, today’s music bears little resemblance to its prehistoric ancestors. The styles of music popular now are too numerous to list and are in a constant state of evolution. Technology has played a large role in this development. Recording devices made music accessible to the masses. From wax records to audio files, music has become portable and accessible. Instruments can now be mass produced, allowing anyone to become a musician. These advances have eliminated barriers to producing music and to having that music heard by an increasingly large audience. Rock and roll, jazz, the blues, and rap are just a few of the genres being explored and refined.
Music Continues to Evolve
Music has mirrored the evolution of musicians. From primitive beats to sophisticated electronic melodies, music continues to reflect the progress of the people from which it comes.
In the simplest terms, emotional memory can be defined as the memory of important emotional moments in the life of any individual. These emotional memories can be either good or bad. Emotional memories also tend to leave strong traces within the brain of an individual. The concept of emotional memory was developed after the theory of a single memory system inside the brain was refuted. Episodic memory, on the other hand, can be defined as the storage of memories that are related to autobiographical events: for example, the time, associated emotions, context, place, or any other knowledge associated with an autobiographical event. These episodic memories can be either explicitly stated or conjured. There are many experts from different fields of study who try to learn more about emotional memory and episodic memory. Academic experts are also often interested in other related topics, like episodic memory loss and emotional memory loss. Emotional memory and episodic memory also have a rather complex relationship. Because of this, it can take students a lot of time to work on assignments in these subjects. This is why it is recommended that students hire professional online assignment help from the best assignment writer. By taking online assignment help from the best assignment writer, the student would be able to score the best possible marks or grades.
Understanding Emotional Memory
Before understanding the relationship that exists between emotional memory and episodic memory, it is important for a student to understand what exactly emotional memories, episodic memories, and episodic memory loss are. Every individual goes through emotional experiences in his or her life. These emotional experiences can be either positive or negative. And these positive or negative emotional experiences remain in the brain of that individual as what are termed emotional memories.
The concepts of emotional memories and episodic memories are completely different from the single-memory-unit theory of the brain. The single-memory-unit theory simply stated that there was only one region of the brain responsible for storing memories, and that all sorts of memories were stored in that one region. According to various studies and research, there are instead different types of memories compartmentalized in different regions of the brain. According to the popular memory theory, memories are divided into two categories: implicit memories and explicit memories. The implicit memory system is the one which primarily stores information unconsciously, and the explicit memory system is the one that stores memories consciously. Emotional memories might be stored in both of these systems. However, many experts believe that emotional memories mostly fall under the division of implicit memories. This is why learning this topic can be a very difficult task. If a student is facing any difficulty in working on assignments, then he or she should hire online assignment help from the best assignment writer.
Understanding Episodic Memory
Every individual has many personal experiences every single day. These autobiographical events move into the episodic memory system. Many different kinds of information go into the episodic memory system, including places, times, context, the emotions associated with an event, and other information. It is important for a student to remember that episodic memories may be either conjured or explicitly stated. However, in the majority of cases, episodic memories are a collection of past experiences that occurred at a particular time or place. For example, if an individual remembers getting a birthday present on his or her 20th birthday, then that is an example of episodic memory.
One should also remember that sometimes, due to some sort of trauma, episodic memory loss can occur. This type of memory loss leaves the individual incapable of recalling past events successfully. These topics are rather complex, and because of this, it is important for a student to devote the majority of his or her time to the study of this subject. This means that students may be unable to complete assignments on time. In those cases, it would be best for the student to hire professional assignment help from the best assignment writer. This assignment help from the best assignment experts would ensure that the student does not lose marks, and that the student is also able to score the highest possible marks.
The Relationship between Emotional Memory and Episodic Memory
According to Endel Tulving, a prominent psychologist, episodic memories serve as the record of an individual’s experiences. These memories of experiences hold spatio-temporal relations and temporally dated information. Endel Tulving further describes episodic memories as having the potential to allow the individual to imagine oneself traveling back in time. Emotional memories are intricately related to episodic memories. Consider an individual who receives a cue from the present environment. This cue results in that individual retrieving information related to a past memory. Within those few seconds, that individual does not just become aware of the past experience but is also able to remember the context of that memory, when that past experience occurred, and the emotions which he or she felt during that particular event. Hence, episodic memories and emotional memories are not stored in the same place, but they are highly related to one another. This relation is so strong that the retrieval of one simply triggers the retrieval of the other too. It is important for a student to remember that these are simply theories.
The brain is a very complex structure. The relationship between emotional memories and episodic memories is not easy to prove for certain. Studies and research continue to be conducted to learn more about the relationship between these two kinds of memory. Students are required to put in extra effort to learn about the topics of these fields. This means that a student may not get enough time to work on assignments. Professional assignment help from the best assignment writer would ensure that the student does not suffer any negative consequences of not working on the assignment, or of not submitting the academic assignment to the professor or teacher on time. Episodic memories are the memories of autobiographical events or of past experiences which an individual might have gone through. Episodic memories may be either explicitly stated or conjured. On the other hand, emotional memories are the memories of any emotionally taxing experience which an individual might have had. These emotionally taxing experiences can be either positive or negative in nature. Both of these memories are intricately related to one another, and the triggering of episodic memories often results in the triggering of emotional memories too.
Relate the volume of the conical tent, whose volume is given to us, to the formula; put all the values into the formula and work out the unknown term, the height, by solving the inverse problem. As the cone is made of a sector-shaped metallic sheet, the length of the arc of the given metallic sheet will lie along the circumference of the base of the cone so formed. Apply the formula and do the calculation carefully and you will get the solution. In order to find the cloth needed to make the conical tent, we need to find its curved surface area, as the base of the tent is open. As the plastic funnel was cut along its slant height, in order to find the radius of the sector so formed we need to find the slant height of the cone. Remember that the volume of a cone with the same height and base radius is 1/3 of the volume of the corresponding cylinder. First of all, find the radius of the given cone by relating the given values to the unknown, then find the curved surface area. First of all, find the radius of the given cone and then put the known values into the formula to work out the curved surface area of the given conical tent, in order to calculate the cost of the tent so formed. Just find the slant height first and then go on to find the curved surface area. In order to find the ratio of the cylinder to the cone, we should compare the formula for the cylinder with that for the cone.
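The hints above all rest on three standard formulas for a cone of base radius r and height h: volume V = (1/3)πr²h, slant height l = √(r² + h²), and curved surface area πrl. A minimal sketch of how they chain together; the tent's radius and volume below are hypothetical example values, not taken from any particular exercise:

```python
import math

def cone_height_from_volume(volume, radius):
    """Invert V = (1/3) * pi * r^2 * h to find the height h."""
    return 3 * volume / (math.pi * radius ** 2)

def cone_slant_height(radius, height):
    """Slant height l = sqrt(r^2 + h^2), from the Pythagorean theorem."""
    return math.hypot(radius, height)

def cone_curved_surface_area(radius, slant_height):
    """Curved surface area = pi * r * l (base excluded, as for an open tent)."""
    return math.pi * radius * slant_height

# Hypothetical example: a conical tent of volume 154 m^3 with base radius 7 m.
h = cone_height_from_volume(154, 7)      # height, ~3.0 m
l = cone_slant_height(7, h)              # slant height, ~7.6 m
cloth = cone_curved_surface_area(7, l)   # cloth needed, ~167.5 m^2
```

The same relations give the cylinder comparison in the last hint: since a cylinder of the same base and height has volume πr²h, the cylinder-to-cone volume ratio is 3 : 1.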
To understand the impact that a project-based learning (PBL) environment has on classroom instruction and student outcomes, MIDA Learning Technologies conducted a study of 2nd and 5th graders at a large suburban school district in Illinois. The research showed that after utilizing PBL through Defined Learning in science classes for one year (2015-2016), teachers saw improvements in students’ engagement and motivation. In addition, students who used Defined Learning outperformed their peers in critical thinking and problem solving skills.
- Assessments revealed that students who used PBL by Defined Learning outperformed their peers by over +5 points.
- 2nd-grade PBL and Defined Learning users achieved +49% higher scores than those in a traditional direct-instruction classroom.
- The 5th-grade group that used Defined Learning outperformed the control group by +39%.
Stories of Success from Educators Using Defined Learning
It was fun watching the kids’ interactions – their questions and discussions evolved into more in-depth ones and they became better problem solvers. – A participating 2nd-grade teacher
Get the report
Download the full research report by MIDA Learning Technologies to learn more about the impact of project-based learning on student achievement.
Part One looked at the spread of radionavigation after the Second World War and particularly the deployment of Loran-C, which was both the US Air Force’s precision long-range navigation system and an important source of navigational cues for US ballistic missile-launching submarines. The US Navy had begun its own experiments to find a longer-ranged radionavigation system as soon as the original Medium Frequency Loran-A system was operational. Coast Guard experiments with Low Frequency Loran were followed in the early 1950s by an experimental Navy system called Radux and then in 1957 by a third hyperbolic Very Low Frequency system. The advantage of a VLF system over Loran-C was that, unlike Loran, it could offer true global coverage. Preliminary operations of what became known as Omega began in 1968. The VLF signal’s wavelength was so long that the system couldn’t measure pulse timing like Loran did, and so it used measurements of the phase difference of a continuous wave instead. The results were far less precise than Loran-C, with absolute accuracy of 1,800 to 3,600 meters, but only eight stations would be required for coverage of the entire world. The Omega Controversy Loran-C had fit comfortably into a world filled with civilian navigation aids. The original Loran (called Loran-A once Loran-C came into service) was available to anyone after the end of the Second World War. In Britain, Decca Navigator (developed during the war by Decca Records, of all people) was commercially available and popular. So too was Consol, an adaptation of the German wartime Sonne. All three used different mechanisms; any was sufficient for most civilian navigation needs. Availability was defined mostly by politics: Loran dominated the US and was available across Europe and the Pacific in allied territories, while Decca was popular in Europe and the British Commonwealth. 
Omega, on the other hand, attracted a lot of unfavorable attention when the US started negotiating the construction of overseas stations. Opposition to construction in New Zealand forced that station to be moved to Australia, where it also attracted numerous protests. The Norwegian station brought similar controversy. In each case, protestors claimed that because Omega could be used as an external fix to reset the INS of a ballistic missile submarine, the stations would be targets in any US-Soviet war and, because of their vulnerability, the system was an inducement towards launching a first strike (You can read some contemporary summaries in issues of New Scientist). Particularly telling from this point of view was the fact that, unlike Loran-C, the Omega signal could be received by a submarine while still deep underwater. One never says never when it comes to military secrets, but there aren’t any signs that the US considered Omega in its operational years as important for its ballistic missile submarines. When development of Omega began there was no Loran-C network, nor any other systems offering the likelihood of precision navigation in distant waters. In the interim, not only had Loran-C gone into service but the discovery that the doppler shift of orbiting satellites could be used to fix locations had also led to the Transit navigational satellite system. The first Transit satellite was launched in 1959, and the system became operational in 1964. Why then did the construction of Omega systems provoke so much backlash?* For one thing, the 1970s were a less accepting era when it came to military activities than the early 60s, when the Loran-C network was built. For another, the transmitter at an Omega station was huge. A Loran-C station required a 620-foot tall (189 meter) transmission tower, roughly the height of a 43-storey office building. That’s not small.
But, because of the extremely long wavelength, an Omega station’s transmission tower had to be at least 1200 feet (365 meters) tall, just a little less than the height of the Empire State Building. In Norway and Hawaii, the transmitter was actually hung between two sides of a fjord or a mountain valley. Omega may have got bit by the technological sublime. As fans of eighteenth-century aesthetics will recall, the original meaning of “sublime” was something so awe inspiring that it creates a feeling of horror as much as pleasure. With a physical footprint so impressive, Omega could hardly have avoided troubled attention. Impressive as it was, though, Omega also had the bad luck of being leapfrogged by satellite technology. VLF radionavigation development was well underway before Sputnik was launched, but Transit was fully operational before the operational Omega system got under construction. By the time Lumsdaine and Kjoller were taking an axe to its spiritual successor, GPS, in 1992, Omega’s days were numbered. Omega operations were discontinued in 1997.
*There is a book on the subject written by Norwegian peace researchers and published in 1988, but it’s in off-site storage at the university library so I have no idea when I’ll have a chance to get a look.
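For a rough sense of the scales involved in phase-comparison navigation, here is a back-of-the-envelope sketch. It assumes Omega's 10.2 kHz primary frequency and uses only the basic wave relations λ = c/f and Δd = (Δφ/2π)·λ; it is illustrative arithmetic, not a model of any actual Omega receiver:

```python
import math

C = 299_792_458.0        # speed of light, m/s
OMEGA_FREQ = 10_200.0    # Omega's primary transmission frequency, Hz

wavelength = C / OMEGA_FREQ   # ~29.4 km per cycle
lane_width = wavelength / 2   # a hyperbolic "lane" is half a wavelength, ~14.7 km

def path_difference(phase_diff_rad):
    """Difference in distance to two stations implied by a measured phase
    difference. Ambiguous modulo one lane -- the receiver must keep count
    of lane crossings to know which lane it is in."""
    return (phase_diff_rad / (2 * math.pi)) * wavelength

# A 1% error in reading the phase corresponds to ~294 m of path difference,
# consistent with Omega's kilometer-scale (rather than meter-scale) accuracy.
err = path_difference(0.01 * 2 * math.pi)
```

The enormous wavelength is also why the antennas had to be so tall: even a 365-meter tower spans barely over one percent of a single 29 km wave.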
What do auroras look like? As we mentioned, auroras take on different appearances. They can look like an orange or red glow on the horizon -- like a sunrise or sunset. Sometimes they may be mistaken for fires in the distance, like the American Indians thought. They can look like curtains or ribbons and move and undulate during the night. Auroras can be green, red or blue. Often they will be a combination of colors, with each color visible at a different altitude in the atmosphere.
- Blue and violet: less than 120 kilometers (72 miles)
- Green: 120 to 180 km (72 to 108 miles)
- Red: more than 180 km (108 miles)
After a particularly active solar maximum in the sun's cycle, the red color may appear at altitudes between 90 and 100 km (54 to 60 miles). Oxygen ions radiate red and yellow light. Nitrogen ions radiate red, blue and violet light. We see green in regions of the atmosphere where both oxygen and nitrogen are present. We see different colors at different altitudes because the relative concentration of oxygen to nitrogen in the atmosphere changes with altitude. Auroras can vary in brightness. People who regularly observe auroras and report on them generally use a rating scale from zero (faint) to four (very bright). They'll note the aurora's time, date, latitude and colors and make quick sketches of the aurora against the sky. Such reports help astronomers, astrophysicists and Earth scientists monitor auroral activities. Auroras can help us understand the Earth's magnetic field and how it changes over time. Because the Earth's magnetic field is three-dimensional, the aurora appears as an oval ring around the pole. This has been observed from satellites, the International Space Station and the space shuttle. It isn't a perfect circle because the Earth's magnetic field is distorted by the solar winds. The auroral ring can vary in diameter. Auroras can be seen as far south as the southern United States, but not frequently.
In general, they stay near the polar regions. They also occur in pairs -- when we see an aurora borealis, there is a corresponding aurora australis in the southern hemisphere.
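The altitude-to-color bands listed earlier translate directly into a small lookup. This sketch is purely illustrative; it hard-codes the thresholds given above:

```python
def aurora_color(altitude_km):
    """Dominant aurora color by emission altitude, per the bands above."""
    if altitude_km < 120:
        return "blue/violet"
    elif altitude_km <= 180:
        return "green"
    else:
        return "red"

# e.g. an emission at 150 km falls in the green band, where both
# oxygen and nitrogen are present in the atmosphere
```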
Information for Patients
Anthrax is a disease caused by Bacillus anthracis, a germ that lives in soil. Many people know about it from the 2001 bioterror attacks. In the attacks, someone purposely spread anthrax through the U.S. mail. This killed five people and made 22 sick. Anthrax is rare. It affects animals such as cattle, sheep, and goats more often than people. People can get anthrax from contact with infected animals, wool, meat, or hides. It can cause three forms of disease in people. They are:
- Cutaneous, which affects the skin. People with cuts or open sores can get it if they touch the bacteria.
- Inhalation, which affects the lungs. You can get this if you breathe in spores of the bacteria.
- Gastrointestinal, which affects the digestive system. You can get it by eating infected meat.
Antibiotics often cure anthrax if it is diagnosed early. But many people don't know they have anthrax until it is too late to treat. A vaccine to prevent anthrax is available for people in the military and others at high risk.
The Old World vultures are a group of birds belonging to the family Accipitridae. The birds live in the "Old World" continents of Europe, Asia, and Africa. The vultures are not closely related to the New World vultures but are only superficially similar to these birds due to convergent evolution. The Old World vultures lack a keen sense of smell, unlike their New World counterparts. However, both groups of vultures are scavengers and play a vital role in the ecosystem. 16. Palm-nut vulture The palm-nut vulture (Gypohierax angolensis) is a large bird-of-prey that feeds mainly on the fruits of the oil palm. Molluscs, crabs, locusts, and fish also constitute the prey base of these birds. The palm-nut vulture is also referred to as the vulturine fish eagle, and is found in forests and savannah throughout sub-Saharan Africa. It lives near water bodies and oil palm groves. The birds can also be seen living near human habitation. 15. Egyptian vulture The Egyptian vulture, or pharaoh's chicken (Neophron percnopterus), is found in northern Africa, southwestern Europe, and the Indian subcontinent. Although these birds mainly feed on carrion, they occasionally prey on small birds, reptiles, and mammals. The tropical populations of these birds are relatively sedentary, while the populations in the temperate regions migrate in the winter to warmer areas in the south. The species is classified as Endangered, as hunting, collisions with and electrocution by power lines, and accidental poisoning have threatened the survival of these birds. 14. Bearded vulture The bearded vulture (Gypaetus barbatus), closely related to the Egyptian vulture, has a lozenge-shaped tail that is unusual among birds-of-prey. The bird lives in the high mountain areas of Africa, the Caucasus, Europe, and the Indian subcontinent. The bearded vulture is unique in that it is the only known species in the animal world whose diet is made up of 70 to 90% bones. The vulture is a Near Threatened species. 13. 
White-headed vulture The white-headed vulture (Trigonoceps occipitalis) is an African endemic species of Old World vulture. The bird is medium-sized and features a pink beak, a white crest, and pale naked areas on its head. The tail feathers are black and the upper parts are dark brown in color. The range of the vulture spreads across sub-Saharan Africa. Like most other vulture species, habitat degradation and poisoning threaten the survival of this species, which is declared to be Critically Endangered. Hunters poison animal carcasses to kill off the vultures so that these birds do not draw the attention of forest guards towards illegal kills by the hunters. 12. Lappet-faced vulture Also known as the Nubian vulture, the lappet-faced vulture (Torgos tracheliotos) is an Old World vulture that lives throughout the African continent. However, the vulture is rare in the continent's central and western parts. The species is subdivided into two subspecies. The vulture can survive in a wide variety of habitats including deserts, open mountain slopes, dry savannah, and more. The birds also approach human settlements in search of carrion and waste. The birds are currently classified as Endangered by the IUCN. 11. Red-headed vulture The red-headed vulture (Sarcogyps calvus), also known as the Pondicherry vulture or the Indian black vulture, is found primarily in the Indian subcontinent. Small populations are also found in some parts of Southeast Asia. The vulture is medium-sized, with a length ranging between 76 and 86 cm. The neck is bare and deep red to orange in color. The body is black, and a pale gray band features at the base of the flight feathers. The widespread use of the NSAID diclofenac has threatened the survival of these important natural scavengers. 10. Hooded vulture The hooded vulture (Necrosyrtes monachus) is an Old World vulture that is native to sub-Saharan Africa. The small vulture has dark brown plumage. 
Poisoning, habitat loss and hunting have led to the birds being recognized as Critically Endangered. The hooded vulture breeds in trees and feeds on carrion. 9. Cape vulture The Cape griffon/vulture (Gyps coprotheres), also referred to as Kolbe's vulture, is a vulture species that is native to parts of southern Africa. The bird's range includes parts of South Africa, Namibia, Lesotho, and Botswana. Within its range, the bird lives on tall cliffs in or near mountains, which allow the bird to easily detect large carcasses. The Cape vulture is classified as Endangered due to its rapidly declining population. Factors like poisoning, harvesting for traditional needs, electrocution, destruction of foraging habitat, etc., threaten the survival of this species. 8. White-backed vulture The white-backed vulture (Gyps africanus) is a critically endangered species of vulture that lives in the savannah of east and west Africa. The vulture is medium-sized, weighing about 4.2 to 7.2 kg, and has a length ranging between 78 and 98 cm. The vulture is a scavenger that feeds on carcasses that it detects by soaring above the savannah. The species is classified as Critically Endangered, and several factors like habitat loss, deforestation, poisoning, pollution, etc., have been responsible for this dire state of the vultures. 7. Himalayan vulture The Himalayan vulture (Gyps himalayensis), one of the largest vulture species, is found in the Himalayan region of the Indian subcontinent and the adjacent Tibetan plateau. The bird is often regarded as the heaviest and largest bird of the Himalayan ecoregion. The bird is classified as Near Threatened. Habitat destruction is one of the important factors threatening these birds. 6. Slender-billed vulture The slender-billed vulture (Gyps tenuirostris) is found across a range stretching from the sub-Himalayan region of the Indian subcontinent to Southeast Asia. 
The birds nest in trees, unlike the Indian vulture, its close relative, which breeds on cliffs. Sadly, the slender-billed vulture is classified as Critically Endangered by the IUCN, since it experienced a sharp population decline of 97%, mainly due to diclofenac poisoning. Currently, the retail sale of diclofenac is banned in India, and captive breeding programs are conducted to ensure the future survival of the species. 5. Indian vulture The Indian vulture (Gyps indicus) is native to parts of the Indian subcontinent. The birds have died in huge numbers in the past few decades due to diclofenac-induced renal failure. Feeding on the carcasses of cattle treated with diclofenac has led to this condition of the vultures. The Indian vulture is thus labeled as a Critically Endangered species. The bird is about 80 to 103 cm long and weighs around 5.5 to 6.3 kg. Conservation efforts have been undertaken to ensure the species is saved from extinction. Diclofenac has been banned in India for this reason. 4. Rüppell's vulture Rüppell's vulture (Gyps rueppelli) lives in central Africa's Sahel region. The population of this vulture has steadily decreased in the past few decades, and this has led to the vulture being declared Critically Endangered by the IUCN. Loss of habitat and deliberate poisoning by ivory poachers are the two primary factors that have hastened the population decline of Rüppell's vulture. The vulture is regarded as the highest-flying bird and is known to fly as high as 37,100 feet above sea level. The vulture is named after a 19th-century German explorer and zoologist, Eduard Rüppell. 3. White-rumped vulture The white-rumped vulture (Gyps bengalensis) is native to South Asia and Southeast Asia. Due to a dramatic decline in the population of this bird, it is currently classified as Critically Endangered. The birds have died en masse due to renal failure from secondary diclofenac poisoning after consuming carcasses of diclofenac-treated cattle. 
The white-rumped vulture population, which numbered several million in the 1980s, had been reduced to fewer than 10,000 mature individuals as of 2016. 2. Griffon vulture The griffon vulture (Gyps fulvus) is an Old World vulture species which is widely distributed across Eurasia. The birds have a white head, wide wings, and short tail feathers. The buff-colored wing coverts and body are in sharp contrast to the dark flight feathers. The griffon vulture is an efficient scavenger that feeds on the carcasses of dead animals. Birds of this species can survive up to 41.4 years in captivity. 1. Cinereous vulture The cinereous vulture, also known as the monk vulture, black vulture, or Eurasian black vulture (Aegypius monachus), is distributed throughout Eurasia. It is one of the largest of the Old World vulture species and can weigh as much as 14 kg. The birds feed on carrion of almost any type. The conservation status of the cinereous vulture is classified as Near Threatened due to habitat loss and hunting.
Structural Biochemistry/Electron Affinity Electron affinity is the energy change that accompanies the addition of an electron to a gaseous atom. When a neutral chlorine atom in the gaseous form picks up an electron to form a Cl- ion, it releases an energy of 349 kJ/mol, or 3.6 eV/atom. It is said to have an electron affinity of -349 kJ/mol, and this large number indicates that it forms a stable negative ion. Small numbers indicate that a less stable negative ion is formed. Groups VIA and VIIA in the periodic table have the largest electron affinities. Alkaline earth elements (Group IIA) and noble gases (Group VIIIA) do not form stable negative ions. The sign of the electron affinity is associated with the change in potential energy that accompanies the addition of the electron. If the addition of the electron makes the atom more stable, the potential energy decreases and the energy change is negative; such an atom is said to have a high electron affinity. If the addition of the electron makes the atom less stable, the potential energy increases, and the energy change is positive. Electron affinity is essentially the opposite of the ionization energy: instead of removing an electron from the element, we add an electron to the element to create an anion. Also, the noble gases, alkali metals, and alkaline earth metals have low electron affinities. Across the Period Typically, electron affinity increases from left to right across a period, but it is definitely not a regular or steady increase. Factors such as charge and atomic size both affect electron affinities, hence the trend across the period is not as regular as the other periodic table trends. Across the Group Electron affinity tends to decrease down a group, but as mentioned previously, it is not a regular trend, and there are several exceptions to this rule of thumb down the group. - Silberberg, Martin S. Principles of General Chemistry. Boston: McGraw-Hill Higher Education, 2007. 
Print.
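As a quick sanity check on the chlorine figure quoted above, the kJ/mol value can be converted to eV per atom using Avogadro's number and the joule-electronvolt factor. This is a minimal sketch with rounded constants; the function name is illustrative.

```python
AVOGADRO = 6.022e23        # atoms per mole
JOULES_PER_EV = 1.602e-19  # joules per electronvolt

def kj_per_mol_to_ev_per_atom(kj_per_mol):
    """Convert a molar energy (kJ/mol) to energy per atom in eV."""
    joules_per_atom = kj_per_mol * 1e3 / AVOGADRO
    return joules_per_atom / JOULES_PER_EV

# Chlorine's electron affinity: 349 kJ/mol is roughly 3.6 eV/atom
print(round(kj_per_mol_to_ev_per_atom(349), 1))  # → 3.6
```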
Spectacular view of our home Planet Earth and the Sun as seen from the Space Shuttle Solar System Facts - The solar system is around 4.6 billion years old. At the center of the solar system is the sun, a yellow dwarf star which produces vast amounts of energy. - There are eight major planets and over 100 moons in the solar system. - Mercury, Venus, Earth and Mars are the small rocky planets. Jupiter, Saturn, Uranus and Neptune are the gas giants. - All the planets orbit the sun in an elliptical, oval-shaped path. - Many of the planets in the solar system are visible to the naked eye. Other objects in the solar system include dwarf planets, asteroids and comets. - The solar system is in a galaxy known as "The Milky Way". - It is estimated that at least a third of the 200 billion stars in the Milky Way are orbited by one or more planets. - The Voyager 1 spacecraft is the furthest man-made object in the solar system; it is around 12.5 billion miles (20 billion km) from the sun and is still sending data back to Earth. Birth of the Solar System Material accreting around the sun in the early solar system Our solar system began as part of a massive nebula cloud of molecular hydrogen and dust around 4.6 billion years ago. In a region of this dark cloud, conditions allowed gravity to begin condensing the hydrogen until a substantial mass began to grow ever larger and hotter; eventually this mass collapsed in on itself, forming the early stage of a star called a protostar. The gravitational pull of the embryonic star caused a large disc of gas and dust to begin forming around it. Over millions of years pressure began to build up inside the protostar as it became hotter and denser until eventually nuclear fusion began in its core, giving birth to the sun as we know it. During this time the disc of gas and dust that had formed around the star had also begun condensing into ever-growing orbiting bodies. 
In time they would go on to form the planets, moons and other objects in our solar system. Planets of the Solar System The nearest planet to the sun is Mercury; aptly named after the swift-footed messenger god, it orbits the sun in only 88 days, quicker than any other planet in the solar system. Next comes Venus, often referred to as Earth's sister planet, but underneath her thick clouds lies a hellish oven-baked landscape with temperatures hot enough to melt lead. Third planet from the sun lies Earth, a striking blue sphere covered with oceans of liquid water and the only planet known to us which harbors life. The last of the rocky inner planets is Mars; consisting of a thin atmosphere, this red body was once covered in oceans just like Earth but is now a desert where dust storms can engulf the entire planet. Almost 800 million kilometers from the sun we find the first of the gas giants, Jupiter, enormous in its size; it would take over 1,000 Earths to fill its volume. Saturn is the second of the gas giants and is unlike any other planet in the solar system with its spectacularly colorful rings made from dust and ice particles. Then comes the first of the planets known as ice giants, Uranus; this large turquoise ball of gas lies tilted on its side with freezing atmospheric temperatures of -371 Fahrenheit. At 4.5 billion kilometers from the sun we find the last of the planets, Neptune, a blue ice giant where winds in the atmosphere reach over 2,000 kilometers per hour. Moons of the Solar System As mankind explores space we have found that some of the moons in our solar system are equally and at times even more fascinating than the planets themselves. Jupiter's wonderfully colored moon Io is the most volcanically active object in the solar system; Saturn's moon Titan has a thick nitrogen-based atmosphere which rains methane, producing liquid lakes on its surface. 
Another of Saturn's moons, Enceladus, has volcanoes which erupt with water ice and possibly has a liquid ocean underneath its icy surface. Then there is the ice world of Europa, another of Jupiter's fascinating moons; underneath its surface lies a vast ocean of liquid water which is 100 kilometers deep and possibly teeming with life. Other Bodies in the Solar System At the center of the solar system is our sun, a massive ball of hydrogen and helium over a hundred times larger in diameter than Earth, producing immense heat and enormous explosions which jettison solar winds millions of miles into space. Between the orbits of Mars and Jupiter lies the Asteroid Belt; millions of rocks reside in this area, some as large as 60 miles (100 km) in diameter. Beyond Neptune we reach the Kuiper Belt, where we find most of the dwarf planets, including Pluto; at one time considered the ninth planet in our solar system, this tiny body orbits the Sun at an average distance of almost 6 billion kilometers. Comets originate in the Kuiper Belt or the even more far-flung Oort Cloud, a massive spherical cloud of icy bodies that surrounds the solar system. The Planets in Scale of Size Life in the Solar System A future probe searching for life in the underground ocean of Europa Humankind has often pondered whether life exists elsewhere in the solar system, and if so, what kind of life? Up until the mid 20th century it was thought every planet could harbor some kind of life forms, possibly even more advanced than ourselves. In the 1930s an Orson Welles radio production of the novel "War of the Worlds" by H.G. Wells caused widespread panic as listeners mistakenly thought it was real. This is an indication of how much the belief in advanced alien life forms from neighboring planets was held in the common psyche. Of course now we know this is just science fiction and that we're not under imminent attack from an alien civilization on Mars or Neptune. 
So is there life anywhere in the solar system apart from Earth, and who are the suspects? Well, you may be surprised to learn that there are several candidates in our varied solar system. Let's begin with Mars, the planet that captures our imagination the most. Even though we now know there are no canals on Mars or advanced civilizations, scientists involved in the Viking Mars landings still expected to find signs of life in the Martian surface. Viking 1 landed on Mars in June 1976 and analyzed samples of the Martian soil, but found nothing. Subsequent missions have also found no signs of life, but there's still hope, and it lies underground. The reason for this theory is that, surprisingly, methane is present in the Martian atmosphere, and one way methane can be produced is biologically. In the warmer summer months on Mars the presence of methane increases dramatically, giving more credence to the idea that there are organisms living deep under the surface, possibly around hot vents. There are two fascinating moons around Saturn which could possibly support life. Both are very different from one another; they are Enceladus and Titan. Enceladus is a very small icy world with an area only slightly larger than Texas, but under its surface there is believed to be a salt water ocean. How can there be warm water in such a cold area of the solar system? Well, the tremendous gravitational force of Saturn pushes and pulls the small moon, heating up its interior and melting the ice underneath its surface. If so, this could provide an environment for micro-organisms or some other forms of life to exist. Enceladus's near neighbor, well, 600,000 miles away, is another of Saturn's moons, Titan. This moon is exceptional in the solar system as it is the only one with a significant atmosphere. What's even more surprising is that its atmosphere is composed mainly of nitrogen, just like our own planet's. 
Even more amazing is that it is the only object in our solar system apart from Earth to have large areas of liquid on its surface, not of water but liquid methane. Methane on Titan acts like water on Earth, there are methane clouds which produce methane rain and methane lakes. Titan is often compared to primordial Earth, but unfortunately it's in deep freeze with surface temperatures around -179C (-290F). It is still possible that methane based microbial life could exist there breathing hydrogen instead of oxygen. Lastly we come to an icy moon orbiting around Jupiter called Europa. It is believed this moon, just slightly smaller than our own moon, presents the best possibilities for life in the entire solar system. The force of Jupiter's gravity produces tremendous tidal heating inside Europa, warming its interior and producing a salt water ocean which is 62 miles (100 km) deep. It is possible that there is twice the amount of liquid water on Europa than there is on Earth! It's been speculated that Europa's ocean could be teeming with life, not just bacteria but complex organisms could be swimming in the warm water. Around the year 2025 NASA hopes to land a probe on Europa's surface which will then melt through the ice and investigate the ocean for signs of life, maybe then we will find out that we are not alone in the solar system. Our Galaxy and the Universe The Andromeda Galaxy In the context of the size of the universe, the Earth and indeed the entire solar system is insignificant. Our sun is just one of 200 billion stars in our galaxy, and our own galaxy, the Milky Way, is just one of over 100 billion galaxies in the known universe. The solar system is located in a quiet area of our galaxy, far from its busy center where there also exists a supermassive black hole which has 4 million times the mass of our sun. 
The Milky Way is in fact a giant among galaxies, spanning 100,000 light years in diameter; there are more than 30 other galaxies in our "neighborhood", of which only Andromeda (pictured right) is larger. The best explanation we have as to how the universe was created is the Big Bang theory. Around 14 billion years ago a superdense, superhot mass billions of times smaller than a proton began expanding, in time creating the stars, planets and galaxies we have in the universe today; indeed, it is still expanding. It is not known how the initial mass came to be.
Helping learners to get their message across By Tatsiana Khudayerka Quite often, when learners face linguistic problems, they make no attempt to solve them because they lack knowledge of the available strategies. The article focuses on various communication strategies and on ways of helping students use them better. Speaking is often considered by learners simply as practice of the grammar and lexis they have learnt. However, the spontaneity of speaking affects the speaker's ability to plan and organise their message and requires a decent level of communicative competence. Dörnyei and Thurrell point out that strategic competence is an important part of communication in both L1 and L2; however, it is vital for foreign language speakers, because its lack 'may account for situations when students with a firm knowledge of grammar and a wide range of vocabulary are unable to carry out their communicative intent' (1991 p.17). Communicative competence consists of three component competencies: grammatical (knowledge of the language code), sociolinguistic (sociocultural rules and rules of discourse), and strategic, which is defined as 'verbal and non-verbal communication strategies that may be called into action to compensate for breakdowns in communication due to performance variables or to insufficient competence' (Canale and Swain 1980 p.30). Strategic competence is achieved by using communication strategies and is activated when a speaker is unable to express what he wants to say because he lacks the necessary linguistic resources. Communication strategies include two main types of strategies: achievement, or compensatory, strategies and reduction, or avoidance, strategies (Faerch and Kasper 1983; Bygate 2015; Dornyei and Scott 1997). Both these types aim to compensate for a problem of expression. 
We use achievement strategies when we try to solve a communicative problem by attempting to find different ways of conveying the message and thus compensating for the language gap (Bygate 2015 p.42). There are three main categories of communication strategies (2015 pp.43–44): Guessing strategy, which includes: - Foreignizing, i.e. using an L1 word by adjusting it to L2 phonologically and/or morphologically (e.g., adding an L2 suffix to it). For example, Russian 'shashlik' for English 'barbecue'. - Word coinage refers to the strategy whereby a learner replaces an L2 word with a created L2 item, based on his knowledge of rules (e.g., footballist for footballer). - Code-switching, or language switch, refers to a situation when a learner uses an L1/L3 word or expression with L1/L3 pronunciation in L2 speech, e.g. French Voila! (Dornyei & Scott 1997 p.189). - Literal translation means translating a lexical item/structure literally from L1 to L2. For example, a Belarusian speaker may say 'for whom how' instead of 'to each his own'. Paraphrase involves using the knowledge of the language to find an alternative way to express the idea. It includes: - Circumlocution, which means describing or exemplifying the object or action, e.g. 'It becomes a gas' instead of 'evaporate'. - Approximation, or using a lexical item which expresses the target lexical item as closely as possible, i.e. shares semantic features with the target word. For example, 'shoes' instead of 'loafers'. - Using general words, i.e. extending a general, empty lexical item to contexts where specific words are lacking (e.g., the overuse of thing, stuff, make, etc.) (Dornyei & Scott 1997 p.188). The above-mentioned strategies can be described as non-cooperative, since the learner tries to solve the problem by employing their own resources. When a strategy involves the learner's appeal for help to their interlocutor, it is called co-operative (Dornyei & Thurrell 1991 p.18). 
Faerch and Kasper note that ‘although problems in interaction are necessarily shared problems and can be solved by joint efforts […] it is up to a speaker to decide whether to attempt a solution himself or to signal his problems to his interlocutor and attempt to get the problem solved on a cooperative basis’ (1983 p.67). If a learner faces a communicative problem and feels that they need help, they make use of the cooperative communication strategy of ‘appeal’. Appeal for help can be direct or indirect. - Direct appeal means asking the interlocutor an explicit question concerning a gap in L2 knowledge, e.g. It’s a kind of race, in which er.. they run and jump over those things. What’s the name? A speaker can also provide a syntactic frame to elicit the correct word (Bygate 2015 p.46). S1: His face is covered with hair. He’s wearing a moustache and a... S2: A beard. Willems specifies the components of co-operative strategy. A speaker may request: - repetition if he hasn’t heard or understood the interlocutor, e.g. Pardon? What? - clarification in order to get the explanation of an unknown lexical item, by asking a clarification question, e.g. What do you mean?; echoing a word with a question intonation; using statements, e.g. I don’t understand or imperatives, e.g. Repeat please. - confirmation to see if he has understood the interlocutor correctly, e.g. Do you mean..? - Indirect appeal refers to trying to elicit help from the interlocutor indirectly by using verbal or non-verbal means, e.g. I don’t know the name (rising intonation, pause, eye contact, etc.) (Dornyei&Scott 1997 p. 191). 
Further Communication Strategies Apart from the strategies mentioned above, Faerch and Kasper (1980), Bialystok (1990), and Tarone and Yule (1989) emphasize the importance of the following communication strategies: Nonverbal strategies are employed when a learner replaces a lexical item or an action with mime, gesture, facial expression or sound imitation, or accompanies a verbal strategy with a visual illustration (Dornyei & Scott 1997 p.190). Time-gaining strategies involve knowledge of fillers and hesitation devices. These are vital because they let the speaker gain time to think and fill pauses when a communication difficulty occurs (Dornyei & Thurrell 1991 p.19). The most frequent native English fillers and 'prefabricated' responses are well, er, ah, uhm, of course, so, actually, etc. Reduction strategies are activated when a speaker has poor linguistic or strategic competence and therefore may make a deliberate decision not to speak or to alter the original message: - Topic avoidance, or message reduction, refers to reducing the message by avoiding problematic language, e.g. conditionals, or topics, or leaving out some relevant information through lack of vocabulary. - Message abandonment refers to the situation when a learner begins to talk about something but leaves the message unfinished if he comes across a linguistic obstacle, e.g. It's an insect er.. a big one, with wings…er.. It's like a big grasshopper… well, never mind. - Meaning replacement, or substituting the original message with a new one, because a speaker is not capable of executing it (Faerch and Kasper, 1983 p.44). In the Classroom My teaching experience shows that lower-level students encounter several major problems, i.e. insufficient knowledge of fillers and hesitation devices, lack of paraphrasing strategies and reluctance to use co-operative strategies. What can we do in the classroom to help learners employ achievement strategies? 
In order to raise learners' awareness of fillers and hesitation devices, a teacher chooses a recorded authentic conversation. Students are given a task sheet with common fillers. They listen to the dialogue and tick the fillers they hear. Afterwards they discuss which fillers are used and why it is important to use them in speech. Alternatively, a teacher records two conversations, with and without fillers. Students listen and discuss which one sounds more authentic and why. A teacher then might give them a script and get the students to make it sound more natural by adding the necessary fillers. Making a Game A 'Taboo' game works well to encourage the use of circumlocution and approximation as alternative ways of describing an object/idea. A learner should explain the word to their partner(s), using some prefabricated chunks, e.g. It's a thing..., It's a kind of…, You do it when…, etc. After the game, the teacher elicits the strategies they used to win, e.g. giving a definition of an item, explaining what it looks like, paraphrasing, etc. Alternatively, the students replay the game and come up with as many definitions of the word as they can. Both tasks provide a variety of structures to accomplish the task and compensate for a lack of knowledge. Working with Dialogue Practice To develop learners' strategy of asking for help from a communication partner, a teacher selects a dialogue that they know, picks out the most important content words and writes them in the form of a skeleton next to the name of the character, e.g. John: failed exam. The teacher then boards the dialogue and pre-teaches/elicits the questions that the students can ask if they have forgotten/don't know a word, and boards them too, e.g. What do you call it? What's the word for? The students look at the skeleton and practice reconstructing the dialogue. 
As a real task they act out the dialogue again, and Speaker A decides which words s/he doesn't remember and therefore has to elicit from Speaker B by appealing for help. After the first round the students change roles and repeat the task. The activity helps students notice that both partners contribute to achieving a communicative goal. I strongly believe that strategy training exercises have real, measurable benefits for L2 learners. They train them to be flexible and able to cope with the unpredictable nature of spoken interaction. Author's Bio: Tatsiana has been an English teacher at IH Minsk since 2007. Her academic background includes an MA in Education from Belarusian State Pedagogical University, and the Cambridge Delta (2016). She has taught a variety of levels and courses. Her professional interests include motivating students and helping them become autonomous learners, teaching Business English and CPD. Bialystok, E. (1990). Communication Strategies. Oxford: Blackwell. Bygate, M. (2015). Speaking. Oxford: OUP. Canale, M. and Swain, M. (1980). Theoretical Bases of Communicative Approaches to Second Language Teaching and Testing. Applied Linguistics, 1/1: 1–47. Dornyei, Z. and Scott, M. (1997). Communication Strategies in a Second Language: Definitions and Taxonomies. Language Learning, 47/1: 173–210. Dornyei, Z. and Thurrell, S. (1991). Strategic Competence and How to Teach It. ELT Journal, 45/1: 16–23. Faerch, C. and Kasper, G. (1983). Strategies in Interlanguage Communication. London: Longman. Tarone, E. and Yule, G. (1989). Focus on the Language Learner. Oxford: OUP.
What Is Tourette Syndrome? Tourette syndrome is a condition that affects a person's central nervous system and causes tics. Tics are unwanted twitches, movements, or sounds that people make. To have Tourette syndrome, a person must have at least two tics that affect body movement and one that is a sound. If you are having trouble imagining what tics are like, they're kind of like hiccups. You don't plan them and you don't want them. You can try tricks to make the hiccups stop, like drinking water upside down, but you can't just decide to stop hiccuping. Hiccups that last too long can even start to hurt and feel uncomfortable. Tics can be like that, too. Sometimes, tics can also be a little like "scratching an itch." You don't really want to scratch the itch, but you just can't help it. In these situations, the person has some control over the tic. The person feels an urge to make a movement or a sound before actually doing it. The person can even hold back the tic for a while. But eventually the person will have to let the tic out. Anyone who has a tic will need to see a doctor, and possibly a neurologist, which is a doctor who knows a lot about the nervous system. It's important to know what's causing the tic. All kids who have Tourette syndrome have tics, but a person can have tics without having Tourette syndrome. Some health conditions and medicines, for instance, can cause tics. And many kids have tics that disappear on their own in a few months or a year. Who Gets Tourette Syndrome? Tourette syndrome can affect people of all races and ethnic groups. It's more common in boys than in girls, and it almost always starts before age 18 — usually between ages 5 and 7. Even though kids with Tourette syndrome can get better as they get older, many will always have it. The good news is that it won't make them less intelligent or need treatment at a hospital or doctor's office. 
Sometimes a person with Tourette syndrome might have other conditions, like attention deficit hyperactivity disorder (ADHD), obsessive-compulsive disorder (OCD), or trouble learning. There are also lots of people who have other tic disorders who don't have Tourette syndrome. Why Do People Get Tourette Syndrome? Tourette syndrome is probably, in part, a genetic condition, which means that a person inherits it from his or her parents. Tourette syndrome is not contagious. You cannot catch it from someone who has it. Doctors and scientists don't know the exact cause, but some research points to a problem with how nerves communicate in the brain. Neurotransmitters — chemicals in the brain that carry nerve signals from cell to cell — may play a role. What Are Tics? People with Tourette syndrome have motor tics and vocal tics. Motor tics are movements of the muscles, like blinking, head shaking, jerking of the arms, and shrugging. Vocal tics are sounds that a person with Tourette syndrome might make with his or her voice. Throat clearing, grunting, and humming are all common vocal tics. A person with Tourette syndrome will sometimes have more than one type of tic happening at once. Tics can happen throughout the day, although they often occur less, or go away completely, when a person is concentrating (like working on a computer) or relaxing (like listening to music). The type of tic often changes over time. The frequency of the tic — how often it happens — usually also changes. Tics are often worse when a person is under stress (like when studying for a big test) or excited or very energized about something (like at a birthday party or a sports activity). Tics can even happen when a person first falls asleep, but usually slow down and then disappear completely during the deeper stages of sleep. How Is Tourette Syndrome Treated? There's no cure for Tourette, but often no treatment is needed. 
The person is able to deal with the tics and still do normal stuff, like go to school and play with friends. If tics are making it hard to do normal stuff, a doctor may suggest medicine. Visiting a psychologist or therapist can be helpful, too. Tourette isn't a psychological problem, but a psychologist can teach coping and relaxation skills that can help. Being stressed or upset can make the tics worse, and kids with Tourette syndrome might feel upset because of the tics and the problems that go with them. Counselors and Tourette syndrome organizations can help kids learn how to explain tics to others. How Should I Act Around Someone Who Has It? Kids who have Tourette syndrome want to be treated like everybody else. They can do regular stuff, just like other kids. In fact, Tim Howard grew up to be a soccer star. Howard is the starting goalkeeper for both Everton (in the English Premier League) and the United States national team. Reviewed by: Elana Pearl Ben-Joseph, MD Date reviewed: July 2014
Vitamin D is a hormone that helps your body absorb calcium and is essential for the health of bones and muscles. Most of the vitamin D you need is produced in the skin when it is exposed to sunlight. Small amounts are obtained from the diet. Who’s at risk You may have low vitamin D if you don't expose your skin to enough sunlight, or your body is unable to produce, absorb or obtain the vitamin D you need. You may be at increased risk of low vitamin D if you: - are confined indoors because of age, illness or disability - have dark skin - wear clothing that covers most of your body (for example, for religious or cultural reasons) - avoid the sun because of concerns about skin cancer - are obese - have a condition that affects vitamin D absorption from your diet (for example, coeliac or Crohn's disease) - take medicines (for example, for epilepsy) that interfere with vitamin D - are a baby whose mother has low vitamin D. Low vitamin D doesn't usually cause symptoms, although muscle aches, tiredness and weakness can occur. Some say it can leave sufferers feeling depressed or down. Children with moderate-to-severe vitamin D deficiency are at risk of developing rickets (soft bones), but in Australia this is rare. In older Australians, low vitamin D levels can lead to osteoporosis. Muscle weakness can increase the risk of falls that may cause bone fractures. Studies have suggested possible links between vitamin D and a wide range of medical conditions, including diabetes and certain cancers. However, more research is needed before we know how much vitamin D deficiency may or may not influence these conditions. Prevention and treatment Most people can maintain healthy vitamin D levels through sensible sun exposure. This varies according to the time of day and across the seasons, your skin type and your location in Australia. Natural food sources of vitamin D include oily fish, liver and eggs. In Australia, margarine and some milk products are fortified with vitamin D.
However, if you are low in vitamin D, you won't correct it through diet alone. Some people simply can’t get enough sun to maintain healthy vitamin D levels and will need a supplement. Getting enough vitamin D is just one way to look after your bones and muscles - adequate calcium in your diet and regular exercise are also important. If you're low in calcium, vitamin D has to work harder to maintain healthy bones. Dairy products, canned fish, tofu and some green vegetables are good calcium sources. If you don't already have bone problems, weight-bearing exercises (such as jogging, tennis) and lifting weights can help prevent osteoporosis. Muscle-strengthening and balance exercises, such as tai chi, can help prevent falls and possible fractures. When to contact a professional If you're worried you might be at risk of low vitamin D, talk to your doctor or pharmacist. Blood tests aren’t usually recommended for low-risk individuals. Tips for safe sun exposure and vitamin D - During summer, most Sydney and Melbourne residents can get enough vitamin D by going for a short outdoor walk, mid-morning or mid-afternoon, with their arms exposed. - During winter, two to three hours a week of sun is usually required. - Dark-skinned people may need three to six times this exposure. - Common sense is the key: to reduce the risk of skin cancer, avoid the sun, or use sun protection when the UV index is 3 or above. - See the Osteoporosis Australia site for a map of recommended sun exposure for vitamin D, based on where you are in Australia, as well as the time of year. This information provided by NPS MedicineWise, an independent, non-profit and evidence-based organisation funded by the federal government. For information about prescription, over-the-counter and complementary medicines, see nps.org.au.
If you have ever visited an aquarium or museum gift shop, you have probably seen an ABC book, such as The Ocean Alphabet Book. After researching and learning about a topic in your classroom such as biology, ancient civilizations, or even the unique history and geography of your state, have students create an ABC book to share their knowledge with others. This lesson serves as a performance task you can use to evaluate student comprehension of information they have learned during a unit of study. As you complete the unit and introduce the performance task to your students, share a few examples of alphabet-style informational texts. You can use the alphabet books Jerry Pallotta has written on topics like the ocean, flowers, and icky bugs, or work with your librarian or media specialist to find other examples that will interest your students and match your expectations for their work. If you are creating an informational ABC book as an entire class, assign each student a single letter to complete. Depending on your topic, you may be able to differentiate through intentional letter assignments. If your class has fewer than 26 students, challenge your advanced learners to take on an additional letter. If you want students to complete an entire alphabet book on their own or in small groups, provide students with an organizer they can use to brainstorm ideas for each letter of the alphabet: A is for __________, B is for __________, and so on. Before students begin creating their page, outline your expectations for the content they need to include. Obviously, each page needs to include the letter, but do you expect that every page will have the same style heading, such as “A is for ________,” at the top? This consistency will be helpful if each student is creating a single page that you will combine with other student pages to create a class book.
Your expectations can be more generalized to give students more creative freedom as well as require them to think about how they can best share their content. For example, you could simply list the elements each page must include. If you have specific expectations for the quantity and quality of the work, share those with students. Do you expect a single sentence, or are you expecting an entire paragraph that includes research or evidence from text? Either of these options is valid; just make sure students are clear on your expectations. If you have time, and your learners are ready, ask students to help develop evaluation criteria for the content of the pages. When you ask students what content each page should include, it gives them more agency over the process, requires them to think and reflect, and makes them an equal partner in the evaluation process. Alphabet-style books also rely heavily on imagery, so you may also want to spend some time exploring how visual literacy impacts messages. Have students use a digital publishing tool to design and share their page. If you are using Wixie, you may want to create and assign a template. If you have developed criteria for the content of each page, you can include this information in the Options panel by adding instructions to the template. If you are sharing the file digitally, be sure to have students record narration for the page to make the book more accessible to young learners. If students worked individually or in small teams, share their work in printed form or share digitally by exporting their work in Wixie as a PDF or ePub. If you worked on a class ABC book, print out each student’s page and bind the pages together to create a printed version you can keep in your classroom library. If students created individual pages using Wixie, combine them using Wixie’s Import Pages feature. If you want a more professional-looking published product, export student work as JPG files and upload to a photo-sharing site.
Then, use the site’s features to publish the book. If you can’t get funding to create a book for each student, link to the online version so everyone can view the project. You may also want to share a link to the “photo” book so that families can purchase copies individually. Have your class read and share their book with younger students at your school or students at a local school studying the same topic. If students included voice recordings on their page, share the URL to the online version of the class book so viewers can both read and listen to the story. You can also export the class book as an eBook, so students can read and enjoy it on their iPads at home. An alphabet-style informational text is a great “writing across the curriculum” performance task that allows you to evaluate students’ content knowledge without a worksheet or quiz. You want students to use this opportunity to engage with content in a way that is as challenging as possible without being overwhelming, so begin formative assessments before students begin writing for their final performance task. You can gauge their interest and comprehension by the ease with which they are able to find and assign words for each letter of the alphabet. Review student word choices before they begin writing to help individual students and teams clarify thinking and avoid misconceptions. You may want to create a checklist to clearly define the content the pages should include. The final pages for each letter of the alphabet provide students an opportunity to demonstrate knowledge and understanding. Create a rubric or checklist to help guide student work during research, writing, and publishing. Lois Ehlert. Eating the Alphabet. ISBN-10: 015201036X Lynne Cheney. America: A Patriotic Primer. ISBN-10: 148147961X Raj Haldar and Chris Carpenter. P Is for Pterodactyl: The Worst Alphabet Book Ever.
ISBN-10: 1492674311 CCSS.ELA-LITERACY.W.3.2, 4.2, 5.2 Write informative/explanatory texts to examine a topic and convey ideas and information clearly. CCSS.ELA-LITERACY.W.3.4, 4.4, 5.4 With guidance and support from adults, produce writing in which the development and organization are appropriate to task and purpose. CCSS.ELA-LITERACY.W.6.2, 7.2, 8.2 Write informative/explanatory texts to examine a topic and convey ideas, concepts, and information through the selection, organization, and analysis of relevant content. CCSS.ELA-LITERACY.W.6.4, 7.4, 8.4 Produce clear and coherent writing in which the development, organization, and style are appropriate to task, purpose, and audience. 3. Knowledge Constructor Students critically curate a variety of resources using digital tools to construct knowledge, produce creative artifacts and make meaningful learning experiences for themselves and others. Students: a. plan and employ effective research strategies to locate information and other resources for their intellectual or creative pursuits. b. evaluate the accuracy, perspective, credibility and relevance of information, media, data or other resources. c. curate information from digital resources using a variety of tools and methods to create collections of artifacts that demonstrate meaningful connections or conclusions. 6. Creative Communicator Students communicate clearly and express themselves creatively for a variety of purposes using the platforms, tools, styles, formats and digital media appropriate to their goals. Students: a. choose the appropriate platforms and tools for meeting the desired objectives of their creation or communication. b. create original works or responsibly repurpose or remix digital resources into new creations. c. communicate complex ideas clearly and effectively by creating or using a variety of digital objects such as visualizations, models or simulations. d. 
publish or present content that customizes the message and medium for their intended audiences.
Many kids with autism struggle to identify their emotions and the emotions of others. This can sometimes lead to problem behaviors. This packet helps address this skill deficit and includes: *2 visual picture choice boards for students to identify how they are feeling (one has 9 picture options and the other has 16 picture options). *1 visual picture choice board for student to identify where their body is hurting. *12 flashcards, each with a written scenario (and pictures to aid in comprehension) as well as the corresponding emotion flash card to go with it. These help students develop empathy for others by identifying how others are feeling in a given scenario.
Qubits teleported at kilobits per second Teleport entanglement For the first time, researchers have teleported 10,000 bits of quantum information per second inside a solid-state circuit. Although the accomplishment differs from teleporting mass - such as that seen on science fiction shows like Star Trek - the remarkable feat demonstrates what could be possible with a quantum computer. In their experiment, the team spaced three micron-sized electronic circuits on a seven-by-seven-millimetre computer chip. Two of the circuits worked as a sending mechanism, while the other served as the receiver. The scientists cooled the chip to near absolute zero and ran a current through the circuits. At that frigid temperature and small scale, the electrons in the circuit - known as quantum bits or qubits - started to behave according to the rules of quantum mechanics. The qubits became entangled. This means they became linked, sharing identical quantum states, even if physically separated from one another. Specifically, the qubits in the sender circuit became entangled with those in the receiving circuit. The ETH team encoded some information into the qubits in the sending circuits and then measured the state of the qubits in the receiver circuit. Whatever state the qubits had been in at the sender was reflected instantly in the receiving circuit - the researchers had teleported the information. This is different from the way information is sent in ordinary computers, where electrons carry information along wires or through the air via radio waves. In this case, no bit of data physically travelled along a route - instead the information disappeared from one location and reappeared at another. Other experimenters have teleported quantum bits, too, and have done so across a larger distance. But those teams only got the teleportation to work once in a while, perhaps a few per cent of the time.
The ETH team was also able to teleport up to 10,000 quantum bits every second, and to make it work reliably every time. That's fast enough and accurate enough to build a useful computer. "Basically we can push a button and have this teleportation work every time," says Andreas Wallraff, professor at the Department of Physics and head of the study.
Help students recognize the importance of following directions accurately, and give them activities that guide them in developing the skills they need to do so. The information is presented in three main sections: following physical and verbal directions, following written directions, and following directions when working with a partner or as part of a group. Reviewed By: Mrs. Z. (RI) Don't we all wish our young students would follow our directions better or read the directions on the paper? This resource is perfect in the first few weeks of school for teaching them how to follow directions well, and setting the expectation that they do.
The origin of spoken language has stumped linguists since as far back as the Twenty-sixth Dynasty in Egypt and the first recorded language experiment, conducted by the Pharaoh Psammetichus I. While it is widely understood that our ability to communicate through speech sets us apart from other animals, language experts, historians and scientists can only hypothesize how, where and when it all began. Some new findings may provide real insight into this conundrum. A recent study conducted by Quentin D. Atkinson, a biologist at the University of Auckland in New Zealand, suggests two very important findings: language originated only once, and the specific place of origin may be southwestern Africa. While most studies focus on words in order to trace the birth of modern language, Atkinson zeroed in on the phonemes (the basic distinctive units of sound by which words are represented) of over 500 languages around the world. By applying mathematical methods to linguistics, Atkinson discovered that the further humans traveled from Africa, the fewer phonemes survived. To put this into perspective: many African click languages, found in all three Khoisan language families, have more than 100 phonemes, while the languages of Oceania (the spoken languages of the Pacific Islands, Papua New Guinea and New Zealand, the latter being the furthest migration route out of Africa) have only 13. Modern English has approximately 45 phonemes. Atkinson's findings challenge a long-held belief among linguists that the origin of spoken language only dates back some 10,000 years. Atkinson suggests that if African populations began their dispersal from Africa to Asia and Europe 60,000 years ago, spoken language had to exist around that time and may even have been the catalyst for their dispersion and subsequent migration.
First use of radiocarbon dating Because the cosmic ray bombardment is fairly constant, there’s a near-constant ratio of carbon-14 to carbon-12 in Earth’s atmosphere. For example, Christian time counts the birth of Christ as the beginning, AD 1 (Anno Domini); everything that occurred before Christ is counted backwards from AD as BC (Before Christ). The Greeks consider the first Olympic Games as the beginning, or 776 BC. Radiocarbon dating is a technique used by scientists to learn the ages of biological specimens – for example, wooden archaeological artifacts or ancient human remains – from the distant past. Bomb-pulse dating is a term for radiocarbon dating based on timestamps left by above-ground nuclear explosions, and it is especially useful for putting an absolute age on organisms that lived through those events. In The Cosmic Story of Carbon-14, Ethan Siegel writes: The only major fluctuation [in carbon-14] we know of occurred when we began detonating nuclear weapons in the open air, back in the mid-20th Century. A child mummy is found high in the Andes and the archaeologist says the child lived more than 2,000 years ago. How do scientists know how old an object or human remains are? A detailed description of radiocarbon dating is available at the Wikipedia radiocarbon dating web page. Bottom line: Radiocarbon dating is a technique used by scientists to learn the ages of biological specimens – for example, wooden archaeological artifacts or ancient human remains – from the distant past. Carbon-14 is an unstable isotope of carbon that will eventually decay at a known rate into nitrogen-14.
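The known decay rate mentioned above is what makes the age calculation possible. The short Python sketch below illustrates the idea (it is not from the article; the function name is mine, and it assumes the commonly used carbon-14 half-life of about 5,730 years):

```python
import math

HALF_LIFE_C14 = 5730.0  # years (commonly cited half-life of carbon-14)

def radiocarbon_age(fraction_remaining: float) -> float:
    """Estimate age in years from the fraction of carbon-14 remaining,
    relative to a living sample, using exponential decay N = N0 * e^(-lambda*t)."""
    decay_constant = math.log(2) / HALF_LIFE_C14
    return -math.log(fraction_remaining) / decay_constant

# A sample with half its original carbon-14 is one half-life old:
print(round(radiocarbon_age(0.5)))   # 5730
# A sample retaining 78% of its carbon-14 is roughly 2,000 years old,
# like the child mummy in the article:
print(round(radiocarbon_age(0.78)))  # 2054
```

Real laboratories calibrate results against tree-ring and other records rather than applying the raw exponential, but the principle is the same.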
Genetic Clotting Disorders Some children are born with a disorder, also known as a genetic condition, that puts them at greater risk for a blood clot, a blockage in a child's veins or arteries. A genetic condition is something that is passed down from a child's parent(s). These conditions include: Factor V(5) Leiden Factor V(5) Leiden is the most common genetic condition that can lead to blood clots. Almost all people with factor V Leiden have one affected gene and one normal gene. A gene is a characteristic that is passed down from a child’s parent(s). It is rare for a child to have both genes affected. Factor V is a protein that helps form blood clots, plugs that help stop bleeding. With factor V Leiden a child’s body cannot turn off factor V. As a result, too much blood clotting can happen. For children with one affected gene, the chance of getting a blood clot increases 10 times. This means that for children with factor V Leiden, one out of 5,000-10,000 will get a blood clot. Children who are healthy and do not have factor V Leiden will get a blood clot in one out of 50,000-100,000 children. In the rare case that both genes are affected, the chance of getting a blood clot is 100-fold higher than someone without factor V Leiden. This means that one out of 500-1,000 children with two affected genes will get a blood clot. Most children with factor V Leiden will never get a blood clot in their childhood or young adult life. A child’s risk of getting a clot increases when they have a serious medical condition, a central line, an undiagnosed autoimmune disease or are on birth control (also known as oral contraceptives and hormones). For example: In otherwise healthy teenagers, only one out of 50,000 will get a blood clot. A teenage girl who is on birth control, however, has about a three-fold risk of getting a blood clot.
If she also has factor V Leiden, then the risk for getting a blood clot is 30-40 times higher than an otherwise healthy teenager. In other words, if a 16-year-old girl who has factor V Leiden is put on the birth control pill, her risk of getting a blood clot is 35 out of 50,000, or about one out of 1,500. Who it Affects: Factor V Leiden affects different groups of people: - Families with ancestry in the Middle East (Arab countries, Turkey and Armenia) have the highest rates of around 10 percent. - Families with ancestry in Southern Europe (Greece and Italy) have rates of around five percent. - Families with ancestry in Central and Northern Europe have rates around two to five percent. - The one gene mutation for factor V Leiden occurs in about 2-10 percent of Caucasians. - People who have no white ancestors (African, Asian, Native American or Pacific Islanders) do not have factor V Leiden in their genes. - In the United States, many people have a mixed ancestry which they may or may not be aware of. As a result, many people whose family has been in the United States for many generations may have factor V Leiden regardless of their race or ethnic group. The Prothrombin Mutation The prothrombin mutation is the second most common genetic clotting disorder. Almost all children with the prothrombin mutation have one gene (a characteristic passed down from a child’s parent or parents) that is affected. It is uncommon for a child to have both genes affected by the mutation. Prothrombin is an important part of normal blood clotting, forming a plug to stop bleeding. Children with the prothrombin mutation have too much prothrombin in their blood which causes the body to produce more blood clots. Children with one gene affected have a two- to three-fold increased risk of getting a blood clot. This means that two to three out of 1,000 children with the condition will get a blood clot. 
Children who are healthy and do not have the prothrombin mutation will get a blood clot in one out of 50,000-100,000 children. Children who have both genes affected have a greater chance of getting a blood clot than a child who only has one affected gene. Most children with the prothrombin mutation will never get a blood clot in their childhood or young adult life. A person’s risk of getting a clot increases when she or he has a serious medical condition, a central line, an undiagnosed autoimmune disease and/or is on birth control (also known as oral contraceptives or hormones). Who it Affects: The prothrombin mutation affecting one gene occurs in about two to three percent of children with white ancestry. Children with a mixed ancestry can be affected as well. About one out of 10,000 children will have the prothrombin mutation affecting both genes. Protein C and Protein S Deficiency Protein C and protein S work together in the body to prevent blood clots. If a child does not have enough of either protein, they are at risk for getting a blood clot. A child’s risk for getting a blood clot increases by 10-20 fold if they inherit (passed down from a child’s parents) protein C or protein S deficiency. Most children with either condition will not get a blood clot during their childhood and teen years. Protein C Deficiency - Affects all races - Inherited protein C deficiency occurs in one out of 1,000 people - Treatment: patients are treated with Ceprotin, a blood-derived product that replaces the missing protein C Protein S Deficiency - Affects all races - Inherited protein S deficiency occurs in one out of 5,000 people - Treatment: patients are treated with plasma or blood thinners Severe Protein C and Protein S Deficiency Severe protein C or protein S deficiency is a rare (less than one out of a million people), but severe, condition. Babies with severe protein C or protein S deficiency are born with blood clots that often cause brain damage and blindness.
Doctors will be alerted to the condition after birth because babies will have large purple patches on their skin. These patches are blood clots within the skin. Non-Genetic Protein C and Protein S Deficiency Children with liver disease or vitamin K deficiency can also become deficient in protein C or protein S. It is unlikely for the average child or adolescent to have undiagnosed liver disease or vitamin K deficiency. Antithrombin Deficiency Antithrombin is a natural protein in the human body whose job is to prevent blood clots. Not having enough antithrombin in the body can lead to blood clots. Antithrombin deficiency has an even higher rate of causing blood clots than protein C or protein S deficiency and often causes blood clots in childhood or the teen years. Who it Affects: Antithrombin deficiency occurs in about one out of 20,000-50,000 people. The chance of a person getting a blood clot depends on how much antithrombin is in their body. Once a child is diagnosed with antithrombin deficiency due to having a blood clot, they can be treated with: - Blood thinners -- which are usually enough to treat and prevent blood clots - Antithrombin-containing factors –- either blood-derived or synthetic (man-made) Elevated Homocysteine Homocysteine is an amino acid (a building block for protein) and is found naturally in the human body. Certain genetic conditions, a disorder passed down from a child’s parents, can lead to high homocysteine in the blood. High homocysteine can cause blood clots or a blockage in arteries and veins. There are rare conditions of high homocysteine levels (more than 100 with normal being less than 12), but these conditions are associated with many other medical problems and are diagnosed shortly after birth or in the first year of life. More commonly, high homocysteine can be caused by a poor diet lacking B vitamins (B6, B12 and folic acid) or by diseases of the blood or intestine that cause the body to not absorb these vitamins.
Children with poor kidney function may also see abnormally high homocysteine levels. Treatment: - Lower the homocysteine levels using vitamins B6, B12 and/or folic acid - For children with blood clots from high homocysteine levels, see a doctor to discuss the use of blood thinners to prevent blood clots Lipoprotein(a) is a naturally occurring lipoprotein. It is similar to the lipoproteins that carry cholesterol. The function of lipoprotein(a) in the body is not known. In some families, lipoprotein(a) can be high just like cholesterol. High lipoprotein(a) can lead to blood clots. Unfortunately, there is no good way to lower lipoprotein(a). Unlike cholesterol, lipoprotein(a) is not affected by diet or exercise. High doses of the vitamin niacin can lower lipoprotein(a) in some cases, but niacin has side effects and does not always work. Children who have had a blood clot because of high lipoprotein(a) need to be treated with blood thinners and may need to stay on them for a long time. Elevated Factor VIII (8) High factor VIII (8) levels can lead to blood clots and there are families that genetically have high factor VIII. Factor VIII is a protein that helps stop bleeding in a child’s body. Factor VIII can also be elevated due to infection, inflammation and autoimmune diseases. It is not clear at what level of factor VIII someone is at risk for a blood clot. Some doctors who are testing for thrombophilia will not test for a high factor VIII level. Antiphospholipid Antibody Syndrome (APLA or APLAS) Unlike most other thrombophilias, antiphospholipid antibody syndrome (APLAS) is an acquired disorder, meaning one does not have the condition from birth. APLAS is an autoimmune disease, causing the immune system to work too much and attack healthy parts of a child’s body. Like other autoimmune diseases, APLAS runs in families, but there is not a specific gene for APLAS like factor V Leiden. The cause of this condition and how often it affects children is not known, but it is not rare.
It is not completely clear how APLAS leads to blood clots. Hematologists think that APLAS causes holes to be made on the surface of cells that line blood vessels and normally prevent blood clots from forming in the vessel. The condition is more common in adolescent girls and young women than in boys or men. It can cause many problems in pregnancy for women who have the condition. It is important to test for APLAS in otherwise healthy children who get a blood clot. Special blood tests are needed to test for APLAS and the results can be tricky to understand. As a result, an expert in blood clotting conditions needs to explain the blood test results. Patients with APLAS who get blood clots need to be treated with blood thinners and should be managed by a doctor who specializes in blood clotting disorders. Contact the Hemostasis and Thrombosis Center to speak with one of our experts.
Jones Act, formally Philippine Autonomy Act of 1916, statute announcing the intention of the United States government to “withdraw their sovereignty over the Philippine Islands as soon as a stable government can be established therein.” The U.S. had acquired the Philippines in 1898 as a result of the Spanish–American War; and from 1901 legislative power in the islands had been exercised through a Philippine Commission effectively dominated by Americans. One of the most significant sections of the Jones Act replaced the Commission with an elective Senate and, with minimum property qualifications, extended the franchise to all literate Filipino males. The law also incorporated a bill of rights. American sovereignty was retained by provisions of the act reserving to the governor general power to veto any measure passed by the new Philippine legislature. The liberal governor general Francis B. Harrison rarely used this power and moved rapidly to appoint Filipinos in place of Americans in the civil service. By the end of Harrison’s term in 1921, Filipinos had taken charge of the internal affairs of the islands. The Jones Act remained in force as a de facto constitution for the Philippines until it was superseded by the Tydings–McDuffie Act of 1934. Its promise of eventual absolute independence set the course for future American policy in the islands.
Greenhouse gases are measured as atmospheric concentrations, expressed in parts per million (ppm), while emissions are measured by mass. Emissions can be stated in terms of carbon or of carbon dioxide: 1 metric ton of carbon equals 3.664 metric tons of CO2, which is the ratio of the molecular weight of CO2 to the atomic weight of carbon (44/12). Greenhouse gases are often expressed as CO2 equivalents (CO2e), based on their respective global warming potential (GWP). GWP is based on a number of factors, including a gas's ability to absorb heat compared to that of carbon dioxide, as well as its decay rate (the amount removed from the atmosphere over a given number of years). Because scientists are not exactly certain how each gas concentration decays over time (some gases decay faster than others and some even grow in certain atmospheric conditions), the associated GWPs are estimates and may vary according to the purpose and parameters of a study. The carbon dioxide equivalent for a gas is derived by multiplying the mass of the gas by the associated GWP. For example, the IPCC's Third Assessment Report gives the GWP for carbon dioxide as 1, methane as 23 and nitrous oxide as 296. This means that emissions of 1 million metric tons of methane and nitrous oxide, respectively, are equivalent to emissions of 23 and 296 million metric tons of carbon dioxide. Even if an entity's annual emissions stay the same, atmospheric conditions may still change the resulting greenhouse gas concentrations.

MtCO2e = million metric tons carbon dioxide equivalent
GtCO2e = billion metric tons carbon dioxide equivalent

Carbon Footprint Defined
When industries discuss reducing their carbon footprint, they mean their product's total amount of carbon dioxide equivalent added to the environment throughout the production and lifetime of that product. This includes obtaining the raw materials, manufacturing the product, delivering it to the marketplace, the energy required for its consumption by consumers, and its disposal. Direct emissions are those an organization releases from the facilities it owns.
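The CO2-equivalent arithmetic above can be sketched in a few lines of Python. The GWP figures (23 for methane, 296 for nitrous oxide) are the 100-year values quoted in the text; other IPCC reports give slightly different numbers, so treat this as illustrative rather than authoritative.

```python
# Convert greenhouse gas emissions to CO2 equivalents (CO2e).
# GWP values are the 100-year figures quoted in the text.
GWP = {
    "co2": 1,
    "ch4": 23,   # methane
    "n2o": 296,  # nitrous oxide
}

def co2_equivalent(mass_tons: float, gas: str) -> float:
    """CO2e = mass of the gas multiplied by its global warming potential."""
    return mass_tons * GWP[gas.lower()]

# 1 million metric tons of methane -> 23 million metric tons CO2e
print(co2_equivalent(1_000_000, "ch4"))   # 23000000.0

# Carbon-to-CO2 conversion: 1 ton C x (44/12) is about 3.664 tons CO2
print(round(44.009 / 12.011, 3))          # 3.664
```

The dictionary lookup makes it easy to swap in GWP values from a different assessment report without touching the conversion logic.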
Indirect emissions come from sources the organization does not own but whose emissions are caused by the organization's actions. Examples include the emissions generated by a power company to provide electricity at the organization's facility, or those produced in obtaining and delivering materials used in production at a facility.
Absent pulmonary valve
Also called: Absent pulmonary valve syndrome; Congenital absence of the pulmonary valve; Pulmonary valve agenesis

Absent pulmonary valve is a rare defect in which the pulmonary valve, through which oxygen-poor blood flows from the heart to the lungs (where it picks up oxygen), is either missing or poorly formed. This condition is present at birth (congenital). Absent pulmonary valve occurs when the pulmonary valve does not form or develop properly while the baby is in the mother's womb. When present, it often occurs as part of a heart condition called tetralogy of Fallot. When the pulmonary valve is missing or does not work well, not enough blood can flow efficiently to the lungs to get oxygen. There is also usually a hole between the left and right ventricles of the heart (ventricular septal defect). This defect also leads to low-oxygen blood being pumped out to the body. The skin will have a blue appearance (cyanosis), because the body's blood contains a low amount of oxygen. Absent pulmonary valve also results in very enlarged (dilated) branch pulmonary arteries (the arteries that carry blood to the lungs). They can become so enlarged that they press on the tubes that bring air to the lungs (bronchi) and cause breathing problems. Other heart defects that can occur with absent pulmonary valve include:
- Abnormal tricuspid valve
- Atrial septal defect
- Double outlet right ventricle
- Patent ductus arteriosus
- Endocardial cushion defect
- Marfan syndrome
- Tricuspid atresia
Heart problems that occur with absent pulmonary valve may be due to defects in genes or chromosomes. Symptoms can vary depending on which other defects the infant has, but may include:
- Blue coloring of the skin (cyanosis)
- Failure to thrive
- Poor appetite
- Rapid breathing
- Respiratory failure

Exams and Tests
Absent pulmonary valve may be diagnosed before the baby is born with a test that uses sound waves to create an image of the heart (echocardiogram).
During an examination, the doctor may hear a murmur in the infant's chest. Tests for absent pulmonary valve include:
- A test to measure the electrical activity of the heart (electrocardiogram)
- Chest CT scan
- Chest x-ray
- Magnetic resonance imaging (MRI) of the heart

Treatment
Infants who have breathing symptoms typically need immediate surgery. Infants without severe symptoms typically have surgery within the first 3 to 6 months of life. Depending on the infant's other heart defects, surgery may involve:
- Closing the hole in the wall between the left and right ventricles of the heart (ventricular septal defect)
- Closing the blood vessel that connects the aorta to the pulmonary artery (ductus arteriosus)
- Enlarging the flow from the right ventricle to the lungs
Types of surgery for absent pulmonary valve include:
- Moving the pulmonary artery to the front of the aorta and away from the airways
- Rebuilding the artery wall in the lungs to reduce pressure on the airways (reduction pulmonary arterioplasty)
- Rebuilding the windpipe and breathing tubes to the lungs
- Replacing the abnormal pulmonary valve with one taken from human or animal tissue
Infants with severe breathing symptoms may need to receive oxygen or be put on a breathing machine (ventilator). Without surgery, most infants who have severe lung complications will die. In many cases, surgery can treat the condition and relieve symptoms. Possible complications include:
- Brain infection (abscess)
- Lung collapse (atelectasis)
- Right-sided heart failure

When to Contact a Medical Professional
Call your health care provider if your infant has symptoms of absent pulmonary valve. If you have a family history of heart defects, talk to your doctor before or during pregnancy. Although there is no way to prevent this condition, families may be evaluated to determine their risk of congenital defects.

Bernstein D. Cyanotic congenital heart lesions: Lesions associated with decreased pulmonary blood flow.
In: Kliegman RM, Behrman RE, Jenson HB, Stanton BF, eds. Nelson Textbook of Pediatrics. 19th ed. Philadelphia, PA: Saunders Elsevier; 2011:chap 424.
Brown JW, Ruzmetov M, Vijay P, Rodefeld MD, Turrentine MW. Surgical treatment of absent pulmonary valve syndrome associated with bronchial obstruction. Ann Thorac Surg. 2006;82:2221-2226. PMID: 17126138 www.ncbi.nlm.nih.gov/pubmed/17126138.
Nölke L, Azakie A, Anagnostopoulos PV, Alphonso N, Karl TR. The Lecompte maneuver for relief of airway compression in absent pulmonary valve syndrome. Ann Thorac Surg. 2006;81:1802-1807. PMID: 16631676 www.ncbi.nlm.nih.gov/pubmed/16631676.
Park MK. Pediatric Cardiology for Practitioners. 5th ed. Philadelphia, PA: Mosby; 2008.

Review Date: 2/17/2014
Reviewed By: Kurt R. Schumacher, MD, Pediatric Cardiology, University of Michigan Congenital Heart Center, Ann Arbor, MI. Review provided by VeriMed Healthcare Network. Also reviewed by David Zieve, MD, MHA, Isla Ogilvie, PhD, and the A.D.A.M. Editorial team.
A new blood test has been developed that could help doctors to diagnose Parkinson’s disease without being as invasive as previous methods. It uses a newly discovered protein, which when found in the blood denotes the presence of the disease. In the past, patients have had to undergo a spinal fluid test to discern whether the symptoms they are experiencing are in fact the result of Parkinson’s. The new discovery, which was made during research by scientists at Sweden’s Lund University, could be a big step forward in understanding the disease. Some 127,000 people, mainly middle-aged or elderly, are believed to have Parkinson’s disease. Symptoms include slow movement, tremors and rigidity of the muscles, making everyday tasks hard to complete. While there is currently no cure for the disease, early diagnosis is key to controlling the symptoms and helping patients live as normal a life as possible. Progressive nerve cell damage is thought to begin before symptoms appear and a diagnosis can be sought. There are other conditions that display similar symptoms to those produced by Parkinson’s disease, making tests of fluid the best way to discern the difference. Blood is much simpler to extract than spinal fluid, however, and should be less uncomfortable for the patient. Dr Oskar Hansson, leader of the study, said: “We have found that concentrations of a nerve protein in the blood can discriminate between these diseases as accurately as concentrations of that same protein in spinal fluid.” Throughout the research, the scientists tested the blood of 504 people, including healthy participants and those who had been diagnosed with the disease for up to six years. Results showed that the blood tests were as accurate as the spinal fluid tests in discerning between Parkinson’s and other neurological diseases.
Dr Hansson added: “Our findings are exciting because when Parkinson's or an atypical parkinsonism disorder is suspected, one simple blood test will help a physician to give their patient a more accurate diagnosis.” Claire Bale, head of research communications at Parkinson's UK, welcomed the development, stating that it could cut down on the stress and delays that come with getting a definitive diagnosis. It is thought that around one in ten people diagnosed with Parkinson’s actually have a different condition. This means they do not receive the right treatment from the outset and clinical trials into new drugs to tackle Parkinson’s are therefore flawed. More research with bigger study groups needs to be done into the blood test, but it is a step forward in the route to diagnosis for potential Parkinson’s patients.
An annulus is a flat shape like a ring. Because it is a circle with a circular hole, you can calculate the area by subtracting the area of the "hole" from the big circle's area:

Area = πR² − πr² = π(R² − r²)

Example: a steel pipe has an outside diameter (OD) of 100 mm and an inside diameter (ID) of 80 mm, what is the area of the cross section?

Convert diameter to radius for both outside and inside circles:
- R = 100 mm / 2 = 50 mm
- r = 80 mm / 2 = 40 mm

Now calculate area:

Area = π(R² − r²) = π(50² − 40²) = π(2500 − 1600) = 900π ≈ 2827 mm²
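The same subtraction can be done in a few lines of Python; the helper name `annulus_area` is my own, not part of the lesson:

```python
import math

def annulus_area(outer_d: float, inner_d: float) -> float:
    """Area of a ring: pi * (R^2 - r^2), with radii taken from the diameters."""
    R = outer_d / 2   # outer radius
    r = inner_d / 2   # inner radius
    return math.pi * (R**2 - r**2)

# Steel pipe: OD = 100 mm, ID = 80 mm
area = annulus_area(100, 80)
print(round(area, 1))  # 2827.4 (square mm, equal to 900*pi)
```

Taking diameters as inputs matches how pipe sizes are usually quoted; the conversion to radii happens inside the function.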
Types of Chemical Bonds
Bond energy is the energy required to break a bond; the same amount of energy is released when the bond forms.
Ionic bonding forms when two oppositely charged ions come together.
An ionic compound forms when a metal bonds with a nonmetal.
Bond length is the distance between two bonded atoms at which the energy is at a minimum.
Covalent bonding is when atoms "share" electrons so that both outer shells are completed. Covalent bonds form between two or more nonmetals.
Polar covalent bonds form between two different kinds of atoms, which share the electrons unequally.

Electronegativity
Electronegativity is the ability of an atom in a molecule to attract shared electrons to itself. It can be estimated from the bond energies of the two different atoms.

Bond Polarity and Dipole Moments
A molecule is dipolar when it has opposite partial charges at either end, due to its polar bonds.
Whether a molecule is polar depends on its geometry: a linear molecule can have polar bonds and still not be a polar molecule, because the bond dipoles cancel.

Ions: Electron Configurations and Sizes
Ionic bonding occurs in the solid state; gas particles are too far apart to react as quickly as solids or liquids that come into contact.
Isoelectronic ions are ions containing the same number of electrons.

Energy Effects in Binary Ionic Compounds
Lattice energy is the change in energy that takes place when separated gaseous ions are packed together to form an ionic solid.
Split the overall process into steps, then add up the energies of the steps to get the total energy released.

Partial Ionic Character of Covalent Bonds
There is no completely ionic bond. Any compound that conducts an electric current when melted is considered ionic.
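The link between electronegativity difference and bond type can be illustrated with a small sketch. The 0.4 and 1.7 cut-offs and the Pauling electronegativity values are common textbook figures that I am adding here; they are rules of thumb, not part of the notes above.

```python
# Classify a bond from the electronegativity difference of its two atoms.
# The 0.4 / 1.7 cut-offs are common textbook rules of thumb (assumption),
# not hard physical boundaries.
PAULING_EN = {"H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44, "F": 3.98,
              "Na": 0.93, "Cl": 3.16}

def bond_type(a: str, b: str) -> str:
    diff = abs(PAULING_EN[a] - PAULING_EN[b])
    if diff < 0.4:
        return "nonpolar covalent"
    elif diff < 1.7:
        return "polar covalent"
    return "ionic"

print(bond_type("C", "H"))    # nonpolar covalent (diff = 0.35)
print(bond_type("H", "O"))    # polar covalent    (diff = 1.24)
print(bond_type("Na", "Cl"))  # ionic             (diff = 2.23)
```

This matches the notes: a metal-nonmetal pair like Na and Cl lands in the ionic range, while two nonmetals share electrons covalently, more or less equally depending on the electronegativity gap.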
The common cold and seasonal flu have similar symptoms -- both are contagious respiratory illnesses -- but there are a few key differences. A cold is generally milder than the flu, and the flu usually comes with a fever, aches and chills. Colds also come on slower, while the flu usually hits hard and fast. In young children, a fever may accompany their cold because their bodies aren't yet accustomed to fighting off infection without raising their body temperature. The Centers for Disease Control and Prevention (CDC) lists common cold symptoms as: - Runny nose - Dry throat - Mild body aches or headaches - Watery eyes Flu symptoms include: - Fever (often high) - Extreme tiredness - Dry cough - Sore throat - Runny or stuffy nose - Muscle aches - Stomach symptoms, such as nausea, vomiting, and diarrhea, also can occur but are more common in children than adults When to Call the Doctor: If your child is 3 months or younger, call the pediatrician at the first sign of illness. For a cold in a child older than 3 months, the American Academy of Pediatrics (AAP) recommends calling the doctor if: - Nasal mucus persists for longer than ten to fourteen days. William Sears, M.D. notes that if "discharge begins seeping from her eyes as well, it's time to see the doctor. Babies with the eye-nose combo may have both a sinus and an ear infection and likely need antibiotics to treat them." - A cough persists for more than one week - Your child has pain in his ear. This can be tough to discern since signs of ear pain aren't very clear, says Meg Fisher, M.D., the department chair of pediatrics and medical director of the Children's Hospital at Monmouth Medical Center in Long Branch, New Jersey. General irritability and fever can be indicators; tugging on the ear, while widely considered a sign of ear pain, is more likely done out of habit than ear discomfort, says Dr. Fisher. 
- For diarrhea and vomiting, be sure to offer rehydrating drinks like Pedialyte or Gatorade, and small portions of bland foods like rice, noodles, or toast. Call the pediatrician immediately if there is blood or bile in the vomit, or blood in the diarrhea. - Your child has a high fever, or a recurring fever. A high fever means: - For babies 3 months or younger, 100.4ºF or higher - For babies 3 to 6 months, 101.1ºF or higher - For children older than 6 months, 103ºF or higher - Your child is excessively sleepy, lethargic or cranky If your child exhibits flu symptoms, call the doctor as early as possible; antiviral medication may help if it is given within the first 48 hours of flu signs.
Spring migration from its wintering grounds in Colombia and Venezuela started back in early April, and by now the Cerulean Warbler has flown across the Gulf of Mexico, passed through Mississippi, Alabama, Louisiana, and Georgia, and is continuing north and northeast. During breeding season, this warbler builds its nest and forages high in the canopy of older and mature deciduous forests (up to 3,500 feet). The species prefers large tracts of forest consisting of a variety of hardwood tree species and relatively little undergrowth. Until the middle of the 20th century ‘Dendroica cerulea’ was common throughout much of eastern North America, and was most abundant in the central Appalachian Mountains. But today the Cerulean is America’s fastest declining migratory songbird. Faced with habitat loss in both its wintering and breeding grounds, populations of the species have been steadily declining for decades, showing as much as a 70% drop since the 1960s, and the trend continues downward. This beautiful blue denizen of mature deciduous forests has suffered following widespread deforestation for agricultural and energy development. Within the Cerulean Warbler’s historical breeding range, over 50% of forests have been cleared, and 40 to 50% of South American shade coffee plantations — a highly preferred wintering ground — have been converted to monocultures of sun coffee, devoid of the large trees that the species needs to survive. Prime Cerulean breeding habitat in North America happens to correspond with prime coal-producing regions of Appalachia where mountaintop removal is practiced. Researchers with the USGS Biological Resources Division completed a study in 2002 indicating that Ceruleans have an unexpected preference for ridgetops. They found that “92% of [breeding] territories occurred only in fragments with ridgetop habitat remaining.” This is precisely the habitat destroyed by mountaintop removal mining.
The USGS study also found that Cerulean breeding density is lower in forest habitats that are fragmented or closer to mine edges. The bird is now increasingly found in marginal secondary forest habitat that has regenerated following the abandonment of farms, growth of trees following timber harvests, and other reforestation efforts. Mountaintop mines are reclaimed primarily with grasses. The compacted nature of the soil slows or even prohibits the natural succession of forest in these areas, making fragmentation effects long-lasting. Because cowbirds feed in grasslands, these reclaimed areas may be hurting Cerulean populations in another way: like many of its warbler cousins, the Cerulean may receive the unwelcome attention of parasitic cowbirds. These cowbirds attempt to foist their young onto unsuspecting adoptive parents by pushing Cerulean eggs out of the nest, then laying their own replacements. While the cowbird adults shirk their parenting duties, Ceruleans will energetically raise the changeling youngsters because they do not recognize cowbird eggs or young. The Cerulean Warbler is on the Audubon Watch List, and is also recognized as a species of conservation concern through Audubon’s Important Bird Areas program. Attempts to categorize the bird as ‘threatened’ under the United States Endangered Species Act had not succeeded as of November 2008. It is, however, listed as a species of special concern in Canada, where it is protected. Additionally, the Cerulean Warbler is considered “Vulnerable” by the International Union for Conservation of Nature.
Digital Citizenship refers to the appropriate behaviors for positive engagement with digital tools and in digital spaces. We can view digital citizenship as an extension of citizenship in the physical world, where we have rights, duties, and obligations depending on our national affiliations. In schools, much of the hidden curriculum is concerned with student behavior as well as interaction between individuals and groups of people. Teachers help students develop team building skills, cooperation, kindness, sharing and other such attributes within the course of classroom and extracurricular activities. These skills are even more important online, where it’s easier to be mean and misunderstandings occur more often without the nuances of speech and body language. Mike Ribble, in his book Digital Citizenship in Schools, identifies 9 elements of digital citizenship: Digital Etiquette, Digital Communication, Digital Law, Digital Literacy, Digital Access, Digital Rights and Responsibilities, Digital Health and Wellness, Digital Commerce, and Digital Security.

Teaching Digital Citizenship
Experiences in the physical and virtual worlds work in tandem to create ways of thinking and being. Children may have some experiences online before they have them in the physical world. They may also experiment with and explore their identity online. Adults can help children unify their online and offline worlds, and help facilitate constructive and positive experiences through intentional conversations and guidance in both spaces. As children experience new situations and problems, and take steps to resolve them, they build resilience. Students develop digital citizenship skills by engaging in online spaces, with appropriate support and guidance. Digital citizenship lessons are best taught within the context of technology use. Just as we can’t teach a child to ride a bike through pen and paper exercises, we can’t teach digital citizenship skills in that way.
What we teach about digital citizenship and how we teach it should depend on the age of the child. In every class and subject, it is up to the teacher to highlight any relevant digital citizenship skill that students are using during the course of a lesson. The following essential questions for use with students, derived from Mike Ribble’s work, may help you develop lessons and activities for your classroom:
- What are my rights and responsibilities in a digital society? (Digital Rights and Responsibilities)
- How does my use of technology affect other people? (Digital Etiquette)
- Am I using technology responsibly and appropriately? (Digital Law)
- Do I communicate appropriately with others when using digital tools? (Digital Communication)
- What technology can I use to improve my learning? How does technology help me learn? (Digital Literacy)
- Does everyone have access to the appropriate technology tools when he/she needs them for learning, work, and for local and global collaboration? (Digital Access)
- How can I protect myself and my equipment from being harmed by my online activities? (Digital Security)
- What are the physical and psychological dangers of digital technology use? (Digital Health and Wellness)
Start the year with clear agreements with students about their use of technology at school and in the classroom. If your school has a technology use policy, discuss it with students and help them understand its contents and how it applies to their classes. Develop classroom rules that clarify and build upon existing school rules about technology use. Make sure that classroom rules address software installations, changes to computer configuration, and uses of technology devices.
During orientation at the beginning of the school year, students in one grade 4 classroom made class agreements on taking photos and videos in the classroom and on downloads and purchases on classroom iPads, learned about password strength, and made a list of trusted adults besides their parents/guardians to go to for help in the physical world if they have a problem in the virtual world. Throughout the year, reinforce the agreements, concepts and skills from the start of the year. As you plan your lessons and units, select the essential question relevant to the content area and to the use of technology by students. Use this essential question to include relevant tasks and conversations in your lessons. Also model digital citizenship skills in your own teaching. Finally, include descriptions of the digital citizenship skills that students are learning in your regular communication home.

Teaching Digital Citizenship in Elementary School
Throughout elementary school, teachers should share reliable, relevant websites with children. One way to do that is through a bulletin board of QR codes that students can quickly use to access websites. Other tools for sharing include social bookmarking tools like Diigo, Google Classroom or other learning management systems, and tools like Chirp. It’s important to emphasize which tools and websites students may use, the process for selecting a new website or tool, and how to identify unsafe situations online. In lower elementary school, most of the tools used by students at school will be found and shared by the teacher. The major focus of digital citizenship for students should be on finding and using safe, appropriate sites, and on what to do if they find themselves in a new or scary place. Common Sense Media has a lesson using the analogy of traffic lights for K-2 students where green light sites are those that are appropriate for the child.
If you’re an elementary school teacher, you may want to make a poster or bulletin board of green light sites for the classroom. You can involve students in evaluating the sites, and in posting them. You may connect this idea to a QR code bulletin board, for students to quickly access green light sites. In upper elementary school, students will begin to find more of their own websites to use. They start to make accounts independently and need to learn about strong passwords and protecting their accounts. It’s important for teachers to help children develop independence in selecting appropriate resources for use in their learning. My favorite lesson for helping children in Grades 3 – 5 recognize the opportunity and responsibility of digital citizenship is Rings of Responsibility from Common Sense Media. This lesson can be done each year, customized to the grade level of the children. It’s also a good idea to send related material home, with ideas for connections at home. Even though students in Grades 3 – 5 do not meet the age requirement for many online sites and tools, many of them have these accounts, with or without their parents’ permission. Discussions of cyberbullying, online civility, and privacy are especially important as children engage more in virtual spaces. As a teacher, you can facilitate conversations with students about their choices and habits when using digital tools. It’s important in these conversations to be a listener and facilitator, and to guide students’ choices without being bossy.

Teaching Digital Citizenship in Middle and High School
Students in middle and high school generally have much more independence in using their digital devices. They engage in social media and in social networks. It’s important to teach about cyberbullying, time management, and mental and physical health, as these topics connect to the digital lives of teens. Common Sense Media has a variety of kits and lessons to help you.
Since many students have their own devices at home, issues of Digital Commerce and Digital Security are relevant to them. They should learn about these topics as part of core courses such as technology, maths, and other relevant subject areas. Alternatively, some schools organize a digital citizenship bootcamp for students during the first days of school.

Favorite Resources for Teaching Digital Citizenship
I have used many different websites for teaching digital citizenship, but in the past few years, I’ve focused on the following 3 resources: I’ve recently learned about one more tool, which sounds exciting: the Digital Intelligence Quotient (DQ). The DQ includes 8 digital skills: digital citizen identity, screen time management, cyberbullying management, cyber security management, digital empathy, digital footprints, critical thinking and privacy management. DQ World is an online game with free access for kids ages 9 to 12 to develop their digital citizenship skills. You can create a school/classroom account to use the site in your classroom. If you try it out, please leave me a comment.

Another resource – the Tech Time Digital Citizenship wiki based on Mike Ribble’s book
Cutting Skills Worksheets
Links verified 03-30-16
- Practice your Cutting - Use these worksheets to practice your skills in cutting. Cutting sheet 1; Cutting sheet 2; Cutting sheet 3.
- Printable Scissor Skills Practice Worksheets - Print these worksheets to give toddlers, preschoolers and kindergarten students practice with scissors.
- Tracing/Cutting Templates - Use these printable pages to practice tracing or cutting with scissors.
- Scissor Skills Worksheets for Kids - These worksheets are designed to help kids develop their scissor skills.
- Scissors Practice - Kids love cutting things, and the patterns printed on these sheets will give them valuable practice.
- Scissors Practice Worksheets - Help your child develop strong cutting skills by starting with easy-to-complete activities (like completing a single cut) and progressing to more advanced activities (like cutting many times on long curved lines).
- Cutting Skills Printables - Here you can find a range of worksheets which will help your children practice their cutting skills.
Exploring the beliefs of English people during the fifth to ninth centuries

Proto-Indo-European origins of words
Almost all the words in most modern European languages – including those deemed to be Celtic rather than Germanic or Romance – have a shared origin in the mists of linguistic time. The same is true of the Indian languages based on Sanskrit. Basque is a key exception, as this seems to be a unique language with no surviving relatives. Finnish and Hungarian are also from a very different language group, called Finno-Ugric, once spoken over much of what is now eastern Europe but now restricted to the two furthest extremes of that range as a result of subsequent displacement of societies. As the name Proto-Indo-European suggests, the major languages of India and Europe have a shared ancestry. The time and place when Proto-Indo-European was spoken have yet to be historically established. In practice there never would have been one language spoken over a vast area – then, as now, there would have been significant local variations. Until recently historical linguists and archaeologists shared the belief that in the Bronze Age Kurgan warrior-horsemen spread their culture and distinctive Indo-European language westwards. However this has been disproved by recent linguistic studies, instigated by John Colarusso's observation that –ssos place-name endings in the eastern Mediterranean and Balkan region were associated with settlements that existed before 5,000 BCE. If the names are that old too (as yet an unproven assumption) then a whole new time-frame opens up. This is consistent with other evidence indicating that Indo-European had spread to most parts of its range during the Mesolithic at the latest.
Even if Proto-Indo-European was spoken in Britain during the Mesolithic, do not assume that it evolved smoothly into later Celtic 'dialects' – it is just as possible that the so-called 'Beaker People' of the Bronze Age (the Amesbury Archer, excavated in 2002, is the best- and earliest-known of such people in Britain) brought with them a Germanic language rather than a Celtic one, or that the Neolithic language was closer to Germanic than Celtic anyway. Think how much English has changed since the time of Chaucer and then multiply that by the time durations of prehistory – for example, early Neolithic people were separated from their late Neolithic descendants by 1,600 years, which is two-and-a-half times longer than the time since Chaucer was writing. And Chaucer's dialect, while closest to modern English, was only one of many all-but mutually unintelligible regional dialects at the time. Modern British culture has far more influences driving linguistic change than the comparatively small 'insular' population of Neolithic Britain, so rates of change are faster now – but the substantial time-scales and lack of standardising influences mean that languages were always evolving. While the origins of Indo-European languages are still being argued about, what is more certain is how groups of modern words in the various Indo-European languages can be tracked back to a much smaller number of 'root' words. The expertise involved is formidable but, on the basis of the collective erudition of many decades of historical linguists, there is reasonable agreement about which words are cognate ('of common descent') and which ones simply sound rather similar but are ultimately unrelated. If you really want to get to grips with Proto-Indo-European studies then see Wikipedia's Proto-Indo-European page.

copyright © Bob Trubshaw 2013
Understanding Bipolar I Disorder

Bipolar I disorder is a serious mental illness marked by extreme changes in mood, thinking, and behavior—all of which can affect personal relationships with friends and family. There are no lab tests that can detect bipolar I disorder. A doctor must take a thorough history of your child and your family. He or she will ask about your child's moods, behavior, sleep habits, and other factors. It is important to be honest about all your child's symptoms, such as feeling unusually happy or energetic, or talking fast. And because bipolar I disorder can be mistaken for other conditions, the doctor may also use tests to see if your child's symptoms could be caused by another illness. There have been recent changes in the definition of symptoms for bipolar I disorder. Please see your child's healthcare provider for more information about these changes. While the specific cause of bipolar I disorder is not yet known, it’s thought to be caused by an imbalance of chemicals found in the brain. Some research suggests that bipolar I disorder runs in families.

Challenges For Your Child

Bipolar I disorder can take a toll on children’s ability to function in different areas, such as at home, at school, and in social situations. They may have trouble connecting with family members or peers. They may have mood swings at inappropriate times, causing others to react in a certain way. They may also have difficulty concentrating and remembering things. In general, they probably feel misunderstood. It's important to try and relate to your child and understand the challenges he or she may be facing. That way, you'll learn how to handle certain situations better, while helping your child to cope better with things and people.
Substance abuse and suicide are serious risks, which is why accurate diagnosis is so important for children and teens who may have bipolar I disorder. With proper treatment, bipolar I disorder symptoms can be managed.

Important Safety Information About Allergic Reactions: Patients should not use ABILIFY (aripiprazole) if they are allergic to aripiprazole or any of the ingredients in ABILIFY. Allergic reactions have ranged from rash, hives and itching to anaphylaxis, which may include difficulty breathing, tightness in the chest, and swelling of the mouth, face, lips, or tongue.

Personalize a Doctor Discussion Guide: Create a list of questions based on your child's specific needs and concerns to take to the next appointment with the doctor.

Symptoms of Bipolar I Disorder: Learn more about the symptoms of bipolar I disorder (manic and mixed episodes) in children and teens.
Through extensive archaeological research, much evidence has been uncovered indicating that man has been present in northern Niger for over 600,000 years. By at least 4000 BC, a mixed population of Libyan, Berber, and Negroid peoples had evolved an agricultural and cattle-herding economy in the Sahara. Written history begins only with Arab chronicles of the 10th century AD. By the 14th century, the Hausa had founded several city-states along the southern border of what is today the Republic of the Niger. About 1515, an army of the Songhai Empire of Gao (now in Mali), led by Askia Muhammad I, subjugated the Hausa states and captured the Berber city of Agadez, whose sultanate had existed for many generations. The city had been important largely because of its position on the caravan trade routes from Tripoli and Egypt into the Lake Chad area. The fall of the Songhai Empire to Moroccan invaders in 1591 led to expansion of the Bornu Empire, which was centered in northeast Nigeria, into the eastern and central sections of the region. The Hausa states and the Tuareg also remained important. It was probably during the 17th century that the Djerma settled in the southwest. Between 1804 and 1810, a devout Fulani Muslim named 'Uthman dan Fodio waged a holy war against the Hausa states, which he subjugated along with a part of the Bornu Empire, west of Lake Chad. About that time, European explorers began to enter the area, starting with a Scot, Mungo Park, in 1805–06. Bornu, Hausa, and Fulani entities vied for power during the 19th century, a period during which political control was fragmented. The first French military expeditions into the Niger area, at the close of the 19th century, were stiffly resisted. Despite this opposition, French forces pushed steadily eastward and by 1900 had succeeded in encircling Lake Chad with military outposts. In 1901, the military district of Niger was created as part of a larger unit known as Haut-Sénégal et Niger.
Rebellions plagued the French forces on a minor scale until World War I, when a major uprising took place. Some 1,000 Tuareg warriors attacked Zinder in a move promoted by pro-German elements intent on creating unrest in French and British African holdings. British troops were dispatched from Nigeria to assist the French in putting down the disturbance. Although this combined operation broke the Tuareg resistance, not until 1922 was peace fully restored. In that year, the French made Niger a colony. Niger's colonial history is similar to that of other former French West African territories. It had a governor but was administered from Paris through the governor-general in Dakar, Senegal. From 1932 to 1947, Niger was administered jointly with Upper Volta (now Burkina Faso) for budgetary reasons. World War II barely touched Niger, since the country was too isolated and undeveloped to offer anything of use to the Free French forces. In 1946, the French constitution conferred French citizenship on the inhabitants of all the French territories and provided for a gradual decentralization of power and limited participation in indigenous political life. On 28 September 1958, voters in Niger approved the constitution of the Fifth French Republic, and on 19 December 1958, Niger's Territorial Assembly voted to become an autonomous state, the Republic of the Niger, within the French Community. A ministerial government was formed by Hamani Diori, a deputy to the French National Assembly and secretary-general to the Niger branch of the African Democratic Rally (Rassemblement Démocratique Africain—RDA). On 11 July 1960, agreements on national sovereignty were signed by Niger and France, and on 3 August 1960, the Republic of the Niger proclaimed its independence. Diori, who had been able to consolidate his political dominance with the help of the French colonial administration, became Niger's first president. 
His principal opponent was Djibo Bakary, whose party, known as the Sawaba, had been banned in 1959 for advocating a "no" vote in the 1958 French constitutional referendum. The Sawaba was allegedly responsible for a number of unsuccessful attempts to assassinate Diori after 1959. Diori was able to stay in power throughout the 1960s and early 1970s. His amicable relations with the French enabled him to obtain considerable technical, military, and financial aid from the French government. In 1968, following a dispute between the ruling Niger Progressive Party (Parti Progressiste Nigérien—PPN) and the civil service over alleged corruption of civil service personnel, the PPN was given a larger role in the national administration. Over the years, Diori developed a reputation as an African statesman and was able to settle several disputes between other African nations. However, unrest developed at home as Niger, together with its Sahel neighbors, suffered widespread devastation from the drought of the early 1970s. On 15 April 1974, the Diori government was overthrown by a military coup led by Lt. Col. Seyni Kountché, the former chief of staff who subsequently assumed the presidency. Madame Diori was killed in the rebellion, as were approximately 100 others, and the former president was detained (1974–80) under house arrest. Soon after the coup, French troops stationed in Niger left at Kountché's request. The economy grew markedly in the late 1970s, chiefly because of a uranium boom that ended in 1980. The Kountché regime, which was generally pro-Western, broke diplomatic relations with Libya in January 1981 in alarm and anger over Libya's military intervention in Chad. Relations with Libya slowly improved, and diplomatic ties resumed in 1982. Nevertheless, Niger continued to fear Libyan efforts at subversion, particularly among the Tuareg of northern Niger. In October 1983, an attempted coup in Niamey was suppressed by forces loyal to President Kountché. 
Kountché died of a brain tumor in November 1987, and (then) Col. 'Ali Seybou, the army chief of staff, was appointed president. In 1989, Seybou created what he intended to be a national single party, Le Mouvement National pour la Société de Développement/The National Movement for a Developmental Society (MNSD). However, the winds of democratic change ushered in multiparty competition. At the forefront for political reform was the labor confederation, which organized a widely observed two-day-long general strike. Following the example of Benin, a National Conference was held from July to October 1991 to prepare a new constitution. The conferees appointed an interim government, led by Amadou Cheiffou, to work alongside the Seybou government to organize multiparty elections. Widespread fighting in the north and military mutinies in February 1992 and July 1993 postponed the elections, but a new constitution was adopted in a December 1992 referendum. Niger's first multiparty elections took place on 27 February 1993. Mamadou Tandja, who succeeded Seybou as MNSD leader, came in first with 34%. However, with Mohamadou Issoufou's support, Mahamane Ousmane defeated him in the March runoff with 54% of the vote. In the legislative elections, the MNSD won the largest number of the seats (29), but a coalition of nine opposition parties, the Alliance of Forces of Change (AFC), dominated the National Assembly with 50 of the 83 seats. Prime Minister Issoufou led the AFC. The new government found itself threatened by an insurgency in the north. In March, it reached a three-month truce with the major Tuareg group, the Liberation Front of Air and Azaouak (FLAA), and was able to extend it for three more months. By September, however, the Tuaregs had split into three factions and only one, the Front for the Liberation of Tamoust (FLT), agreed to renew the truce for three more months.
Some Tuaregs, chiefly under the Armée Revolutionnaire de la Libération du Nord Niger (ARLNN), continued the rebellion, prompting more government reprisals. The Tuareg raids created tension with Libya, suspected of inciting the insurgencies, and divided Nigeriens over issues of favoritism. The government appeared biased in favor of members of the Djerma-Songhai (or Zarma-Songhai), one of Niger's five major ethnolinguistic groups. In April 1995, a tentative peace was reached via the joint mediation of Algeria, Burkina Faso, and France. However, ethnic disturbances continued in the south of the country. In late 1994, disagreements between the president and the leadership of the National Assembly resulted in a political stalemate lasting throughout 1995. In the legislative elections of 12 January 1995, the AFC succumbed to factionalism, allowing the MNSD to win a slight majority (29 seats). The MNSD formed a ruling coalition with its allies in the Democratic and Social Convention (24 seats). However, the two sides fought over the appointment of a prime minister, and then-prime minister Hama Amadou and President Ousmane fought over IMF austerity measures. In January 1996, Colonel Ibrahim Baré Maïnassara (known as Baré) toppled Ousmane and dissolved the Assembly. The military regime suspended political parties and civil liberties, and placed the president, prime minister, and president of the Assembly under house arrest. Despite Baré's pledge to restore democracy, donors cut assistance to Niger. In May 1996, voters approved a new constitution that strengthened the powers of the executive. However, only 40% of the electorate voted. Baré lifted the ban on political parties, and in the July elections, despite evidence of massive fraud, declared himself the winner with 52% of the vote. Ousmane received 19%, Tandja Mamadou 16%, Mahamadou Issoufou 8%, and Moumouni Amadou Djermakoye 5%. Baré's UNIRD took 52 of 83 Assembly seats in the November 1996 legislative elections.
On 9 April 1999, while boarding his helicopter, President Baré died in a hail of bullets. Political gridlock gripped the country, eroded public confidence in government, and allowed the military to intervene. The day prior to the assassination, opposition leaders had called on Baré to step down. Major Daouda Mallam Wanké said the presidential guard had opened fire in self-defense, and his junta later described the murder as an unfortunate accident. Few people believed it was, and the coup was roundly condemned by the international community. Baré's widow, Clemen Aicha Baré, filed claims against Wanké, as the prime perpetrator, and against the former prime minister Ibrahim Assane Mayaki for his alleged role in the assassination. In October 1999, Wanké made good on his promise to return the country to civilian rule. Despite allegations of vote rigging, seven candidates contested the presidential elections. In the first round, Mamadou Tandja (MNSD) took 32.3% of the vote to Mahamadou Issoufou's (PNDS) 22.8%. In the 24 November runoff, Tandja was elected with 59.9% to Issoufou's 40.1%. Observers declared the second round free and fair. In the 24 November Assembly elections, five of seven parties won seats. The MNSD took 38 of 83 seats, the CDS 17, the PNDS 16, the RDP 8, and the ANDP 4. The new National Assembly passed an amnesty for perpetrators of the January 1996 and April 1999 coups to avoid "the spirit of revenge or any form of resentment." Eight members of Maïnassara Baré's party dissented. Tandja said his top priority would be to work for political, social, and institutional stability, essential for national recovery. In May 2002, Niger and Benin submitted a boundary dispute between them to the International Court of Justice in the Hague. At issue are sectors of the Niger and Mékrou Rivers and islands in them, in particular Lété Island. 
On 30 July 2002, soldiers from three garrisons in the southeastern Diffa region mutinied, protesting low and overdue salaries and improper working conditions. The mutiny threatened Niamey, but troops loyal to the government put down the 10-day rebellion on 9 August. In December, at least 80 of the mutineers who were arrested in August escaped from prison.
Human parainfluenza viruses (HPIVs) commonly cause respiratory illnesses in infants and young children. But anyone can get HPIV illness. Symptoms may include fever, runny nose, and cough. Patients usually recover on their own. However, HPIVs can also cause more severe illness, such as croup or pneumonia. HPIVs Are Not the Same as Influenza (Flu) Viruses - There are many different types of viruses that cause respiratory infections. Two of those viruses are HPIVs and influenza (flu). - People get HPIV infections more often in the spring, summer, and fall. Flu is more common in the winter. - Flu vaccine will not protect you against HPIV infections. How HPIVs Spread HPIVs spread from an infected person to other people through— - the air by coughing and sneezing, - close personal contact, such as touching or shaking hands, and - touching objects or surfaces with the viruses on them then touching your mouth, nose, or eyes. - Page last reviewed: October 6, 2017 - Page last updated: October 6, 2017
Co-generation is the concept of producing two forms of energy from one fuel. One of the forms of energy must always be heat and the other may be electricity or mechanical energy. In a conventional power plant, fuel is burnt in a boiler to generate high-pressure steam. This steam is used to drive a steam turbine, which in turn drives an alternator to produce electric power. The exhaust steam is generally condensed to water which goes back to the boiler. As the low-pressure steam has a large quantum of heat which is lost in the process of condensing, the efficiency of conventional power plants is only around 35%. In a cogeneration plant, very high efficiency levels, in the range of 75%-90%, can be reached. This is so because the low-pressure exhaust steam coming out of the turbine is not condensed, but used for heating purposes in factories or houses. Since co-generation can meet both power and heat needs, it has other advantages as well in the form of significant cost savings for the plant and reduction in emissions of pollutants due to reduced fuel consumption. 1. High concentrations of harmful gases result from_____. A) ozone depletion B) global warming C) the consumption of fossil fuels D) serious water and air pollution 2. The sun and wind are called renewable energy because they are_____. A) natural B) inexhaustible C) newly-found D) clean 3. Biomass, though a renewable energy, mainly causes_____. A) indoor pollution B) outdoor pollution C) industrial pollution D) agricultural pollution 4. In the 1970s, some countries began to be concerned about solar energy because of_____. A) economic recession B) sharp rise in oil prices C) reduced oil production D) increased research funds 5. In the hills of the Himalayas, "chakki" are used for_____. A) purifying water B) keeping animals C) producing power D) exchanging goods 6. What is recommended to be used by the remote rural areas with little access to conventional energy sources?
A) Small hydropower plants. B) Solar energy heaters. C) Wind power mills. D) Hot spring thermal energy. 7. It is mentioned that, between the surface and the depth of the ocean, there are great differences in____. A) dissolved substance B) natural resource variety C) marine life species D) water temperature 8. After coal, oil and natural gases, the fourth most important fuel is____. 9. Iceland was the first country that____. 10. In the conventional power plants, a large quantum of heat is lost in the process of condensing____.
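The efficiency gap described in the passage (around 35% for a conventional plant versus 75%-90% with co-generation) can be sketched with a small calculation. The 100 MW fuel input and the 45% heat-recovery figure below are assumed illustrative values, not numbers from the passage:

```python
def useful_energy(fuel_mw, electrical_eff, heat_recovery_eff=0.0):
    """Return (electricity, recovered heat, overall efficiency)."""
    electricity = fuel_mw * electrical_eff
    heat = fuel_mw * heat_recovery_eff
    return electricity, heat, (electricity + heat) / fuel_mw

# Conventional plant: exhaust steam is condensed and its heat is lost,
# so overall efficiency equals the electrical efficiency (~35%).
_, _, conventional = useful_energy(100, 0.35)

# Co-generation: same electrical output, but the low-pressure exhaust
# steam is piped out for factory/house heating instead of being
# condensed (assumed: 45% of the fuel energy recovered as useful heat).
elec, heat, cogen = useful_energy(100, 0.35, heat_recovery_eff=0.45)

print(f"conventional: {conventional:.0%}  co-generation: {cogen:.0%}")
```

With these assumed numbers, the same fuel burn yields an overall efficiency inside the 75%-90% band the passage quotes, which is exactly why not condensing the exhaust steam matters.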
Walruses and whales are both marine mammals. So are dolphins, seals, and manatees. They all have streamlined bodies, legs reduced to flippers, blubber under the skin and other adaptations for survival in the water. Although mammals evolved on land, these species have returned to the sea. Did they evolve from a single ancestor who returned to the ocean, or were there different return events and parallel evolution? We can't go back in time to observe what happened, but DNA sequences contain evidence about the relationships of living creatures. From these relationships, we can learn about the evolutionary history of marine mammals. In this lab, students use sequence information in GenBank (the public repository of all known DNA and protein sequences from many species) and bioinformatics software to test hypotheses about the relationship between aquatic mammals (seals, whales, dolphins, walruses, manatees, and sea otters) and their potential ancestral relationship to land mammals. In the process, students learn how to build cladograms from molecular data and how to analyze them to make phylogenetic conclusions. This lab requires a computer lab, optimally with the teacher's computer projected so that students can initially follow along. In addition, two software products must be installed on each computer ahead of time. A sample lab report for this exercise is available to teachers in the Teachers Vault.
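The cladogram-building step can be illustrated without any installed software. Below is a minimal sketch of UPGMA clustering, one standard distance-based way to build a tree; the pairwise distances are invented for illustration (real values would come from aligned GenBank sequences), and the taxon set is a deliberately small subset of the lab's species:

```python
# Hypothetical pairwise sequence distances (smaller = more similar).
dist = {
    frozenset({"whale", "hippo"}): 0.10,
    frozenset({"whale", "seal"}):  0.40,
    frozenset({"whale", "dog"}):   0.42,
    frozenset({"hippo", "seal"}):  0.39,
    frozenset({"hippo", "dog"}):   0.41,
    frozenset({"seal", "dog"}):    0.15,
}

clusters = [("whale",), ("hippo",), ("seal",), ("dog",)]

def cluster_distance(a, b):
    """Average pairwise distance between members of two clusters (UPGMA)."""
    pairs = [(x, y) for x in a for y in b]
    return sum(dist[frozenset({x, y})] for x, y in pairs) / len(pairs)

while len(clusters) > 1:
    # Find the closest pair of clusters and merge them.
    i, j = min(
        ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
        key=lambda ij: cluster_distance(clusters[ij[0]], clusters[ij[1]]),
    )
    merged = clusters[i] + clusters[j]
    print("join:", clusters[i], "+", clusters[j])
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
```

The join order is the cladogram: with these toy distances, whale groups with hippo and seal with dog before the two clusters unite, which is the kind of pattern students use to argue for separate returns to the sea rather than a single aquatic ancestor.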
The two species of murres (known as guillemots in Europe), the thick-billed murre, Uria lomvia, and common murre, Uria aalge, both have circumpolar distributions, breeding in Arctic, sub-Arctic, and temperate seas from California and northern Spain to northern Greenland, high Arctic Canada, Svalbard, and Novaya Zemlya. The thick-billed murre occurs mostly in Arctic waters, while the common murre, although overlapping extensively with the thick-billed murre, is more characteristic of sub-Arctic and temperate waters. They are among the most abundant seabirds in the Northern Hemisphere with both species exceeding 10 million adults. From collection: Arctic Biodiversity Trends 2010 Hugo Ahlenius, GRID-Arendal & CAFF
“Describe how a narrator’s or speaker’s point of view influences how events are described.” Using a historical fiction novel to teach this standard is perfect–in Finding My Place: One Girl’s Strength at Vicksburg, there are a few events that are told from more than one character’s point of view. One great example would be the fire downtown. Another would be the army hospital. Here’s how you can use these events to work on this standard with students: 1. Pick an event that you’ve read in the story with students or they’ve read on their own, such as the fire downtown. 2. Ask students: how did Dr. Franklin describe the fire downtown? 3. Ask students: What is Mrs. Franklin’s description or opinion of the fire downtown? 4. Make sure students are giving details from the text to support this (Dr. Franklin tells what it was like to fight the fire; Mrs. Franklin at first says it serves the people right for having high prices.) 5. Ask students: What is James’s version of the fire? Again ask for novel support. 6. Discuss with students WHY each of these characters has a slightly different version of the fire. You can even bring in Rev. and Mrs. Lohrs as well as Anna. Each of these characters has an opinion/interaction with the fire. Why aren’t they all describing it the same way? Why don’t they all feel the same way about it? 7. Ask students to tell about an event the entire class attended. You can have them write in their journals first for about 10 to 15 minutes OR you can do think-pair-share–where they are thinking about the event, sharing it orally with a partner, and then the partner shares with the class. Did everyone mention the same details? Why or why not? 8. Now go back to the book and think about the army hospital. Ask students to write down Anna’s description of the army hospital. Next write down Molly’s. Finally do Michael’s or Frank’s. Do they all sound the same? Why are the descriptions slightly different? 
(They should be different or the students are not thinking about the individual characters.) This will help students see bias in writing as well as unreliable narrators. To buy a copy of Finding My Place, see this page: http://margodill.com/blog/buy-finding-my-place/ (Links to Amazon and Barnes and Noble and Left Bank Books)
What is asbestos? Asbestos is a mineral fiber which comes from a group of naturally occurring minerals with silicate composition and crystalline structure. Three of the most common types of asbestos are chrysotile, amosite, and crocidolite. Asbestos has been mined and widely used throughout history to add strength, heat insulation, and fire resistance to thousands of products. It is also a hazardous air pollutant and known airborne carcinogen when fibers are released into the air and inhaled into the lungs. ...but I thought asbestos had been banned? False. Products containing asbestos are still manufactured and sold in the United States today, and there are hundreds of thousands of buildings still standing that have asbestos-containing materials within them. It is impossible to visually tell if a material contains asbestos; building materials must be tested in a laboratory as the fibers are microscopic. The only building materials that do not require laboratory testing to verify if asbestos is present are glass, metal, and wood. Where can I find asbestos? Examples of products that might contain asbestos are: - Sprayed-on fire proofing and insulation - Insulation for pipes and boilers (Thermal system insulation) - Wall and ceiling insulation - Ceiling tiles - Floor tiles - Old fume hoods and lab benches - Putties, caulks, and cements (i.e., window glaze) - Roofing shingles - Siding shingles on old residential buildings (i.e., transite siding) - Wall and ceiling texture in older buildings and homes - Joint compound in older buildings and homes - Brake linings and clutch pads When is asbestos dangerous? The most common way for asbestos to enter the body is through inhalation. In fact, asbestos containing material is not generally considered to be harmful unless there has been a fiber release (through deterioration, damage, or disturbance), where microscopic fibers can be inhaled. 
Most of the fibers will become trapped in the mucous membranes of the nose and throat, but some may pass deep into the lungs, or, if swallowed, into the digestive tract. Once they are trapped in the body, the fibers can cause severe health problems. Asbestos is most hazardous when it is friable. The term "friable" means that the asbestos is easily crumbled by hand, which releases fibers into the air. For example, sprayed-on asbestos-containing insulation is highly friable whereas asbestos-containing vinyl floor tile is not. Asbestos-containing acoustical ceiling tiles, vinyl floor tiles, roof shingles, fire doors, and transite siding will not release asbestos fibers unless they are disturbed or damaged in some way. If an asbestos-containing ceiling tile is drilled into or broken, for example, it may release fibers into the air. If it is left alone and not disturbed, it will not. Damage and deterioration will increase the friability of asbestos-containing materials. Water damage, continual vibration, aging, and physical impact such as drilling, grinding, buffing, cutting, sawing, or striking can break the materials down, making fiber release more likely. As a result of the microscopic nature of the asbestos fibers, the body can not break them down or remove them once they are trapped in lung or body tissues. They remain in place where they can cause disease. There are three primary diseases associated with asbestos exposure:
- Asbestosis
- Lung Cancer
- Mesothelioma
Asbestosis is a serious, chronic, non-cancerous respiratory disease. Inhaled asbestos fibers aggravate lung tissues, which results in scarring. Symptoms of asbestosis include shortness of breath and a dry crackling sound in the lungs when inhaling. In its advanced stages, the disease may cause cardiac failure. There is no effective treatment for asbestosis at this time. As a result, the disease is usually severely disabling and/or fatal. The risk of asbestosis is minimal for those who do not work with asbestos.
Those who renovate or demolish buildings that contain asbestos may be at significant risk, depending on the nature of the exposure and the precautions taken. Lung cancer causes the most deaths as a result of asbestos exposure. The incidence of lung cancer in people who are directly involved in the mining, milling, manufacturing, and use of asbestos and its products is much higher than in the general population. Common symptoms of lung cancer include coughing, shortness of breath, persistent chest pains, hoarseness, and anemia. People who have been exposed to asbestos as well as other carcinogens, such as cigarette smoke, have a significantly greater risk of developing lung cancer than people who have only been exposed to asbestos. One study found that asbestos workers who smoke are about 90 times more likely to develop lung cancer than people who neither smoke nor have been exposed to asbestos. Mesothelioma is a rare form of cancer that most often occurs in the thin membrane lining of the lungs, chest, abdomen, and (rarely) heart. About 200 cases are diagnosed each year in the United States. Virtually all cases of mesothelioma are linked with asbestos exposure. Those who work in asbestos mines, mills, factories, and shipyards that use asbestos, as well as individuals who manufacture and install asbestos insulation, have an increased risk of mesothelioma. People who live near asbestos mining areas, factories, or near shipyards where use of asbestos has produced large quantities of airborne asbestos fibers also are at an increased risk. Three things seem to determine your likelihood of developing one of these asbestos related diseases: - The amount and duration of exposure. The more often you are exposed to asbestos and the more fibers that enter your body, the more likely you are to develop asbestos related health problems. While there is no "safe level" of asbestos exposure, people who are exposed more frequently over a long period of time are more at risk.
- Whether or not you smoke. If you smoke and have been exposed to asbestos, you are far more likely to develop lung cancer than someone who does not smoke and who has not been exposed to asbestos. If you work with asbestos or have been exposed to it, the first thing you should do to reduce your chances of developing cancer is to stop smoking. - Age. Cases of mesothelioma have occurred in the children of asbestos exposed workers as a result of dust/fibers brought home on clothing. The younger individuals are when they inhale asbestos, the more likely they are to develop mesothelioma over the course of their life. This is why enormous efforts are made to prevent school-aged children from being exposed. For more information on these and other health effects of asbestos exposure see the National Institute for Occupational Safety and Health, the National Cancer Institute, or the Mesothelioma Treatment Community.
Light is everything to a plant. We’ve known for a long time that light is their form of sustenance and warmth, and even that it helps them track through the seasons. It makes sense that they would coordinate their annual cycles in reference to sun — but can a plant also have an awareness of the time of day? It’s becoming increasingly clear that not only do plants know and care what time it is, but that we can exploit that tendency to make them better for us — even after they’ve been ripped from the soil. One recent study from Rice University made the rather startling discovery that post-harvest plant life like a head of cabbage can still react to the time of day. Plants don’t die right after harvest, and by using artificial light to simulate the sun’s movements we can trick them into continuing a normal daily cycle — or even an abnormal one. Antioxidants are one healthful metabolic product of plant biology, and research has shown that they increase production of these potentially cancer-fighting molecules at certain times of the day. For the plant, it’s a gambit to make themselves less tasty to hungry insects. For human beings, it’s a chance to make a healthy meal even healthier. A possible consequence of this work is that plants could be made more nutritious simply by changing how we store them. Though keeping them in a perpetual antioxidant production state will probably have some sort of side-effect, perhaps as tame as shortening shelf life, research should allow a measured approach that could improve the nutrition of existing species — no genetic modification required. Watch the awesome video below, to see the utility of this scheduling for the plant itself. Another piece of research, this time from the John Innes Centre, shows that plants maintain an awareness of time even after the sun goes down. How do we know this? 
Because when a plant hunkers down to survive through the sun-drought of the night, it uses up its stores of food and energy at a very regular pace. Suddenly subject them to the night, and they adjust their usage of starch deposits accordingly. A smaller store of starch leads them to a slower rate of metabolism, ensuring that they make it through to dawn without starving. Their use is so reliable and mathematically measured that the researchers realized they must be doing some basic sort of arithmetic division. By tracking their use of starch reserves in response to odd night-time schedules, they were able to show that the plants can estimate the length of a night, the size of their food reserves, and enact the appropriate rationing system with astonishing accuracy. No matter what, the plant will aim to use up 95% of its starch by morning. They always make full, efficient use of the resources available to them without ever getting so greedy that they starve themselves out later in the night. It seems we could stand to learn a thing or two from cabbage.
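The division described above can be written out directly. This is a toy model, not the researchers' actual method: it just shows that consuming "remaining budget divided by hours left" each hour burns exactly the 95% target by dawn, whatever the night length (the starch amounts and night lengths are made-up numbers):

```python
def starch_at_dawn(store, night_hours, target_fraction=0.95):
    """Simulate hour-by-hour starch use at rate = remaining budget / hours left."""
    budget = store * target_fraction  # the plant 'plans' to burn this much
    for hour in range(night_hours):
        # The division step: re-estimate the safe rate every hour.
        rate = budget / (night_hours - hour)
        budget -= rate
        store -= rate
    return store

# Short night or long night, ~5% of the initial store remains at dawn.
print(starch_at_dawn(100, 8), starch_at_dawn(100, 12))
```

Because the rate is recomputed from what remains, the same rule also copes with a surprise: shrink the store mid-night and consumption slows so the plant still reaches dawn without starving.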
1. stdout (Standard Out)

By now, we've become familiar with many commands and their output, and that brings us to our next subject: I/O (input/output) streams. Let's run the following command, and then we'll discuss how it works:

$ echo Hello World > peanuts.txt

What just happened? Check the directory where you ran that command and, lo and behold, you should see a file called peanuts.txt; look inside that file and you should see the text Hello World. A lot happened in that one command, so let's break it down, starting with the first part:

$ echo Hello World

We know this prints Hello World to the screen, but how? Processes use I/O streams to receive input and return output. By default the echo command takes its input (standard input, or stdin) from the keyboard and returns its output (standard output, or stdout) to the screen. That's why when you type echo Hello World in your shell, you get Hello World on the screen. However, I/O redirection allows us to change this default behavior, giving us greater flexibility with files.

Now for the next part of the command: the > is a redirection operator that allows us to change where standard output goes. It lets us send the output of echo Hello World to a file instead of the screen. If the file does not already exist, the shell will create it. However, if it does exist, the shell will overwrite it (depending on your shell, you can set an option such as noclobber to prevent this). And that's basically how stdout redirection works!

But let's say I don't want to overwrite my peanuts.txt. Luckily, there is a redirection operator for that as well: >>.

$ echo Hello World >> peanuts.txt

This appends Hello World to the end of the peanuts.txt file; if the file doesn't already exist, it is created, just as with the > redirector.

Try a couple of commands:

$ ls -l /var/log > myoutput.txt
$ echo Hello World > rm
$ > somefile.txt

What redirector do you use to append output to a file?
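The overwrite versus append behavior can be tried end to end in one short session (assuming a bash-like shell; noclobber is the bash/zsh overwrite guard mentioned above):

```shell
# > creates or overwrites; >> appends.
echo "Hello World" > peanuts.txt    # create (or overwrite) the file
echo "Hello again" >> peanuts.txt   # append a second line
cat peanuts.txt                     # shows both lines

set -o noclobber                    # bash/zsh: refuse to overwrite with >
# echo "oops" > peanuts.txt         # would now fail: cannot overwrite existing file
echo "forced" >| peanuts.txt        # >| deliberately bypasses noclobber
set +o noclobber                    # restore the default behavior
```

The `>|` operator exists precisely so that noclobber protects you from accidents without ever locking you out of an intentional overwrite.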
Principles of Andragogy/Adult Learning
- Adults must want to learn. They learn effectively only when they are free to direct their own learning and have a strong, excited inner motivation to develop a new skill or acquire a particular type of knowledge; this sustains learning.
- Adults will learn only what they feel they need to learn. Adults are practical in their approach to learning; they want to know, “How is this going to help me right now? Is it relevant (content, connection, and application), and does it meet my targeted goals?” Helping their children is a strong motivator for learners who are parents; getting a high school diploma or a good job is another strong motivator for adults.
- Adults learn by doing. Both adults and adolescents learn by doing, but adults learn best through active practice and participation, which helps them integrate component skills into a coherent whole. Active participation is especially important among adults.
- Adult learning focuses on problem solving. Adolescents tend to learn skills sequentially. Adults tend to start with a problem and then work to find a solution. Meaningful engagement, such as posing and answering realistic questions and problems, is necessary for deeper learning and leads to more elaborate, longer-lasting, and stronger representations of the knowledge (Craik & Lockhart, 1972). Begin by identifying what the learner can do and what the learner wants to do, then address the gaps and develop practical activities to teach specific skills.
- Experience affects adult learning. Adults have more experience than adolescents.
This can be both an asset and a liability: if prior knowledge is inaccurate, incomplete, or naive, it can interfere with or distort the integration of incoming information (Clement, 1982; National Research Council, 2000). Use the learners’ experience (negative or positive) to build a positive future by making sure that negative experiences are not repeated in your program.
- Adults learn best in an informal situation. Adolescents have to follow a curriculum. Adults, by contrast, learn best when they take responsibility for their learning because they see the value and need of the content and the particular goals it will achieve. Being an active participant in an inviting, collaborative, networking environment makes learning efficient.
- Adults want guidance and consideration as equal partners in the process. Adults want information that will help them improve their situation. They do not want to be told what to do, and they evaluate what helps and what doesn’t. They want to choose options based on their individual needs and the meaningful impact a learning engagement could provide. Socialization is more important among adults. Involve adults in the learning process. Let them discuss issues and decide on possible solutions. Make the environment relaxed, informal, and inviting.

Moving from directed learning to adult learning to self-directed learning

Based on the diagram “The Difference Between Pedagogy, Andragogy, And Heutagogy” by Terry Heick, this working document aims to provide a framework for learning in interpreting classes. Sign language interpreting skills education benefits far more from a worked-problem and problem-based learning (PBL) approach than from a traditional pedagogical, instructor-centered approach.

| | Pedagogy (learning for children) | Andragogy (learning for adults; Knowles, M. (1990). The Adult Learner: A Neglected Species, 4th ed. Houston: Gulf Publishing.) | Heutagogy (self-directed learning; Hase, S. and Kenyon, C. (2000). From Andragogy to Heutagogy. Ultibase, RMIT.) |
|---|---|---|---|
| Dependence | The learner is a dependent personality. The instructor determines what, how, and when anything is learned. | Adults are independent. They strive for autonomy and self-direction in learning. | 1) Learners are interdependent. 2) They identify the potential to learn from novel experiences as a matter of course. 3) They are able to manage their own learning. |
| Resources for learning | The learner has few resources; the teacher devises transmission techniques to store knowledge in the learner’s head. | Adults use their own and others’ experience. | The teacher provides some resources, but the learner decides the path by negotiating the learning. |
| Reasons for learning | Learn in order to advance to the next stage. | Adults learn when they experience a need to know or to perform more effectively. | 1) Learning is not necessarily planned or linear. 2) Learning is not necessarily based on need but on the identification of the potential to learn in novel situations. |
| Focus of learning | Learning is subject-centered, focused on a prescribed curriculum and planned sequences according to the logic of the subject matter. | Adult learning is task- or problem-centered. | 1) Learners can go beyond problem solving by enabling proactivity. 2) Learners learn by doing. 3) They use their own and others’ experiences and internal processes such as reflection, environmental scanning, experience, interaction with others, proactivity, and problem-solving behaviors. |
| Motivation | Motivation comes from external sources, usually parents, teachers, grades, and a sense of competition. | Motivation stems from internal sources: increased self-esteem, confidence, and the recognition that comes from successful performance. | 1) Self-efficacy. 2) Knowing how to learn. 3) Development of creativity and synthesis. 4) Ability to apply learning/principles in novel, familiar, and teamed situations. |
| Role of the instructor | Designs the learning process, imposes material, “knows best.” | Enabler and facilitator of a climate of collaboration, respect, and openness. | Encourages the development of learner capability and capacity. Capable people: a) know how to learn, b) are creative, c) have a high degree of self-efficacy, d) apply competencies in novel and familiar situations, e) work well with others. |
African Clawed Frog Photo credit: © Brian Gratwicke, https://flic.kr/p/dFNYGb Report this Species! If you believe you have found this species anywhere in Pennsylvania, please report your findings to iMapInvasives by submitting an observation record. Species at a Glance The African clawed frog is from a unique family of frogs called Pipidae that lack a tongue and visible ears. It was widely used as an experimental amphibian in laboratories and also became a popular pet species, leading to releases and escapes from captivity that have allowed this highly adaptable species to form invasive populations around the world. This plump, medium-sized aquatic frog has a flattened body and a wedge-shaped head that is smaller than the body. Males are 5-6 cm (2.5 in) long and females are larger, reaching 10-12 cm (4 in). Instead of moveable eyelids, this species has a horny transparent covering that protects the small, upward turned eyes. The front limbs are small with unwebbed fingers and the hind legs are large and webbed. The three inside toes on the back feet have sharp black claws. The skin is smooth and slippery, except where the lateral line gives the appearance of “stitching”. It is multicolored on the back with blotches of olive-gray or brown and the underside is creamy white or yellowish. This frog has the ability to change its appearance to match its background; therefore, it can become dark, light, or mottled. A distinguishing characteristic is that males lack a vocal sac. Females also have a cloacal extension at the end of the abdomen. No frog in North America resembles this species. A water-dependent species, the African clawed frog occurs in a wide range of habitats, including heavily modified and degraded ecosystems. It prefers stagnant pools and quiet streams and tends to avoid large rivers and waterbodies with predatory fish. It can tolerate wide fluctuations in pH; however, metal ions are toxic to it. 
It leaves the water only when forced to migrate, crawling over long distances to other ponds. During dry conditions, this frog can burrow into the mud and lie dormant for up to a year. The African clawed frog was introduced as a laboratory test species to be used in pregnancy tests after it was discovered that females would begin laying eggs when injected with a pregnant woman’s urine. It was in such high demand that large numbers were bred in captivity and a significant pet trade developed in the 1960s. It was intentionally released from laboratories around the world when new technologies for pregnancy diagnosis were developed in the late 1950s. Other modes of introduction include intentional releases of unwanted pets and escapes from aquariums, especially because of its long lifespan of up to 15 years. Native to southern and western regions of Africa, the African clawed frog was shipped around the world in the 1940s and 1950s. In the Mid-Atlantic region, the frog is established in Virginia ponds, and has also been collected in North Carolina. Note: Distribution data for this species may have changed since the publication of the Mid-Atlantic Field Guide to Aquatic Invasive Species (2016), the source of information for this description. The African clawed frog has a voracious appetite and will eat anything it catches, including native invertebrates, frogs, fish, and birds, as well as its own tadpoles. It can out-compete native frogs and other aquatic species and can act as a vector for parasites and diseases such as chytrid fungus that can be transmitted to native frogs. It can also secrete a skin toxin that may be harmful to predators, including native fish. Information for this species profile comes from the Mid-Atlantic Field Guide to Aquatic Invasive Species (2016).
What is Image Stitching?

Image stitching combines multiple images, captured at certain intervals with a certain overlap in their fields of view, to create a large panoramic view. It is generally useful in 3D reconstruction, surveillance, mapping, and virtual reality. The input images are as follows:

We can combine the above images using the following steps:

1. Feature extraction: we first need to find the most distinctive features in the left image with respect to the center image, and then in the right image with respect to the center image. This can be done in various ways; I have used the widely adopted and fairly reliable SIFT features.

2. Feature matching: this step can be done with a simple function available in the cv2 library, the brute-force matcher. It takes the descriptor of one feature in the first image and matches it against features in the second image based on distance (L2 norm). Select only the top matches (minimum distance), so we don’t end up considering erroneous matches.

3. Finding the homography (perspective transform): any two images of the same planar surface in space are related by a homography (assuming a pinhole camera model). We can use this to transform (translate and rotate) one image into another image’s perspective, which lets us easily mesh the two images together.

The concept of homography comes from the pinhole camera model. When we project a 3D point onto a 2D image plane, the points are expressed in homogeneous coordinates, and the projection is the product of two transformations: the camera extrinsic and camera intrinsic matrices. The extrinsic matrix encodes the camera's position and orientation in 3D space, while the intrinsic matrix converts the image plane to pixels and shifts the origin so that pixel coordinates start at (0,0) in the top-left corner of the image. These two matrices can be merged into a single matrix known as the camera calibration matrix, which is used to derive the homography.
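In symbols, the projection described above can be written as follows (a standard statement of the pinhole model, standing in for the post's missing equation image; the symbol names f_x, c_x, etc. are the usual conventions, not the post's own):

```latex
% Pinhole projection: image point (u, v) from world point (X, Y, Z).
% s is an arbitrary scale factor of the homogeneous coordinates.
s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
  = \underbrace{\begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}}_{K \;\text{(intrinsic)}}
    \underbrace{\begin{pmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{pmatrix}}_{[R \mid t] \;\text{(extrinsic)}}
    \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}
```

Multiplying K and [R | t] gives the single 3x4 calibration matrix; note that its third column is the one multiplied by Z, which is what makes the planar (Z = 0) simplification work.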
When the scene points all lie on a plane, we can choose coordinates so that Z = 0. The third column of the calibration matrix is then multiplied by zero and drops out, leaving just the first, second, and fourth columns as a 3x3 matrix, which is referred to as the homography matrix. To compute the homography, we need to solve a set of linear equations. Each matching pair of points contributes two equations, and the homography has eight unknowns (its nine entries are defined only up to scale), so a minimum of 8 equations, i.e. 4 point pairs, is required. Stacking the equations from 4 pairs gives a linear system A h = 0, where the matrix A is built from the matches and the vector h holds the entries of the homography. Note: we can solve this system by choosing any 4 point pairs; however, how do we decide which points are the most accurate for the complete set of matches? To find the most suitable set, we can use methods like RANSAC to estimate the homography that gives the minimum error in transformation. RANSAC stands for Random Sample Consensus and works by repeatedly picking 4 pairs of points at random and calculating the homography from them. Each time a homography is calculated, we count the number of inliers within a specified threshold and keep the homography with the most inliers. In this problem, I performed 1000 iterations and obtained the following homography:

4. Using the homography, we can warp the images over each other using the cv2.warpPerspective(src_img, homography, shape_of_final_image) function.

5. Lastly, merge all the obtained warps together and we can obtain the final stitched image.

June 4, 2023
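As a concrete sketch of the estimation step, the direct linear transform (DLT) below builds the A matrix from point correspondences and solves A h = 0 with an SVD. This is a plain-NumPy illustration of the math described above; the function name compute_homography is my own, and in practice cv2.findHomography with the RANSAC flag does this (plus outlier rejection) for you.

```python
import numpy as np

def compute_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    via the direct linear transform (DLT).

    src, dst: (N, 2) arrays of matched (x, y) points, N >= 4.
    Each correspondence contributes two rows to the system A h = 0;
    h is the right singular vector of A with the smallest singular value.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # the homography is defined only up to scale
```

With noisy matches you would wrap this in a RANSAC loop: sample 4 pairs, fit, count inliers under a reprojection threshold, and keep the best-supported fit.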
Leukocytes, more commonly known as white blood cells, are cells involved in the immune system. Their name comes from the Greek words leukos, meaning white, and kutos, meaning cell. Given their essential role in the body’s defense, these cells are closely monitored during a blood test. Leukocytosis is a disorder in which the number of leukocytes in the blood increases. Although this phenomenon is most commonly associated with illness, it can also be produced by a variety of other circumstances, including stress.

Types of leukocytosis

There are five forms of leukocytosis:
- Eosinophilia: similar to monocytosis, this kind of leukocytosis is rare and is caused by a large number of eosinophils, which account for 1-4 percent of the body’s white blood cells.
- Neutrophilia: a rise in neutrophils, which account for 40-60% of the body’s white blood cells, causes this prevalent kind of leukocytosis.
- Basophilia: the most uncommon kind of leukocytosis, basophilia is characterized by an increase in the number of basophils, which account for only 0.1 to 1% of the body’s white blood cells.
- Lymphocytosis: this condition arises when you have a high number of lymphocytes, which account for 20-40% of your white blood cells.
- Monocytosis: this kind of leukocytosis is distinguished by a high concentration of monocytes, which account for 2-8 percent of your white blood cells.

The causes of leukocytosis can be classified by the precise kind of white blood cell that is raised.
The following are some of the most prevalent causes of neutrophilia:
- mental or physical stress

Here are some of the possible causes of lymphocytosis: certain kinds of leukemia, viral infections, allergic responses, or whooping cough (pertussis).

Some of the most common causes of eosinophilia are: allergies and allergic responses, such as hay fever and asthma; parasitic infections; certain skin illnesses; and lymphoma, a lymphatic cancer.

Monocytosis can be caused by a variety of factors, including: Epstein-Barr virus infections (including mononucleosis), tuberculosis, fungal infections, and autoimmune illnesses such as lupus and ulcerative colitis.

Basophilia is caused by factors such as allergies, leukemia, and bone marrow cancer.

For most healthy people who are not pregnant, the typical white blood cell count is between 4,500 and 10,500 per microliter of blood. A white blood cell count exceeding this threshold may indicate leukocytosis. A white blood cell count of 50,000 to 100,000 per microliter of blood may indicate a significant infection, organ rejection, or a solid tumor. An extremely high white blood cell count of more than 100,000 is frequently associated with leukemia or other kinds of blood and bone marrow malignancy.

Two types of testing are typically performed to detect the cause of a high white blood cell count:
- Complete blood count (CBC) with differential: when your white blood cell count is higher than usual, this is the most typical test. A machine is used in this test to determine the proportion of each kind of white blood cell in a blood sample.
- Blood smear: when you have neutrophilia or lymphocytosis, your doctor may order this test, which examines the morphology and maturity of all blood cells. This test will either confirm or rule out the kind of leukocytosis. If immature white blood cells are seen, a bone marrow biopsy may be performed.
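Purely as an illustration of the count thresholds quoted above (a hypothetical helper for bucketing a lab value, not a diagnostic tool):

```python
def classify_wbc(count_per_microliter):
    """Bucket a white blood cell count against the ranges in the text.
    Illustrative only; real interpretation depends on the differential
    and clinical context."""
    if count_per_microliter < 4500:
        return "below the typical range"
    if count_per_microliter <= 10500:
        return "within the typical range (4,500-10,500)"
    if count_per_microliter < 50000:
        return "elevated: possible leukocytosis"
    if count_per_microliter <= 100000:
        return "may indicate significant infection, organ rejection, or a solid tumor"
    return "extremely high: frequently associated with leukemia or other marrow malignancy"

print(classify_wbc(8000))
print(classify_wbc(75000))
```

A CBC with differential would then identify which of the five cell types is driving an elevated result.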
Read also: Neutrophils: definition, absolute count, high, low and normal range

When the body’s white blood cell count is extremely high, the blood can become extremely thick, obstructing blood circulation. This can result in a condition known as hyperviscosity syndrome. Although this condition can develop in patients with leukemia, it is quite rare. It can lead to a number of major issues, including:
- vision problems
- difficulty breathing

Other signs of leukocytosis might occur. These may be connected to the effects of the specific kind of white blood cell that is elevated, as well as to any underlying health problem that is producing the leukocytosis. The most prevalent symptoms include fever, soreness, easy bruising, trouble breathing, coughing, hives and itching, weight loss, and night sweats. Keep in mind that if your leukocytosis is caused by stress or a reaction to medication, you may not have any symptoms.

Read also: What Is The Average Lifespan of a Red Blood Cell?

The treatment for leukocytosis varies depending on the cause. Among the most prevalent therapeutic options are:
- antihistamines to treat allergic reactions
- inhalers for asthma
- antibiotics to treat bacterial infections
- cancer therapies for leukemia, such as chemotherapy, radiation therapy, or stem cell transplantation
- drugs to alleviate stress or anxiety
- anti-inflammatory drugs for inflammatory illnesses
- changing drugs to avoid the negative effects of particular medications
The Rock Cycle

When rocks form, they do not stay the same forever, nor do they stay in one place forever. They move about. The rock cycle is the entire journey that rocks make as they change, a journey that takes millions of years. Let us start the cycle with molten magma deep beneath the earth’s crust. Molten magma may cool and crystallize beneath the earth’s crust, forming intrusive igneous rocks. With time, pressure may cause uplift, and these rocks end up on the surface. Molten magma may also flow to the surface through volcanic action, forming extrusive igneous rocks as it hardens and crystallizes. On the surface, rocks undergo weathering, erosion, and transport. Sediments are therefore carried to low-lying places and into rivers and other water bodies. The piling up of sediments causes compaction and cementation, and sedimentary rocks form. After a long period of pressure and heat from the overlying weight, igneous and sedimentary rocks buried deep inside the crust change into metamorphic rocks. Some of the metamorphic rocks begin to melt as they get closer to the molten magma region, while others undergo uplift back to the surface, in places where volcanic activity is not common. Those that melt are eventually released back to the surface through volcanic activity, especially in places with high tectonic activity.
Do you ever wonder how a software program or application works, how it acts, how it behaves, and how it helps the end user in a real-world situation? If yes, you are at the right place! We're going to look at an important concept known as OOPs, which helps developers incorporate real-world scenarios into their software programs. We can now order groceries, food, and electronics such as cell phones, televisions, refrigerators, and air conditioners online, as well as book cabs and much more, with just one tap. All of this is possible largely as a result of the OOPs technique in programming languages.

What are OOPs?

Object-Oriented Programming, or OOPs, refers to a style of programming that makes use of objects (whether physical or logical). Inheritance, abstraction, polymorphism, and other real-world concepts are all part of object-oriented programming. The primary goal of OOP is to bind together data and the functions that operate on it so that no other part of the code can access that data directly.

Polymorphism refers to performing a single action in multiple ways. It arises when we have many classes that are related to each other through inheritance. There are two types of polymorphism:
- Compile-time polymorphism – this occurs when an object's functionality is bound at compile time. The method signatures are checked at compile time, and the language knows which method to call. It is also known as static or early binding.
- Runtime polymorphism – dynamic method dispatch, or runtime polymorphism, is a process in which calls to overridden methods are resolved at runtime rather than at compile time. In this process, a superclass reference variable is used to call an overridden method. It is also known as dynamic or late binding.

Inheritance is a concept in which one class's properties can be inherited by another. It aids in the reuse of code and in establishing relationships between classes.
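As a quick sketch of inheritance and runtime polymorphism together (hypothetical Animal/Dog/Cat classes; the article does not prescribe a language, so Python is used here):

```python
class Animal:                 # parent (super/base) class
    def speak(self):
        return "..."

class Dog(Animal):            # child class inherits from Animal
    def speak(self):          # overrides the parent's method
        return "Woof"

class Cat(Animal):            # another child with its own override
    def speak(self):
        return "Meow"

# Runtime polymorphism: the same call site dispatches to the override
# that matches each object's actual class, decided at runtime.
for pet in (Dog(), Cat()):
    print(pet.speak())
```

The single `pet.speak()` call site produces "Woof" or "Meow" depending on the object's actual class; that dispatch decision is made at runtime, not at compile time.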
Inheritance involves two classes:
- Parent class (super or base class)
- Child class (subclass or derived class)

A class that inherits properties is referred to as a child class, whereas the class whose properties are inherited is referred to as a parent class.

The wrapping up of data into a single unit is referred to as encapsulation. It is a process of making a class's fields read-only or write-only. It is similar to a protective shield that prevents the data from being accessed by code outside the shield. Encapsulated code is also more reusable, and unit testing is easier.

Abstraction is a technique for displaying only the information that is required while hiding the rest. The main purpose of abstraction, we can say, is to hide unnecessary detail. Abstraction is the process of selecting data from a large set in order to display only the information required, thereby reducing programming complexity and effort. The beauty of abstraction is that you don't have to know how a call works internally. As a result, abstraction aids in the reduction of complexity. There are two ways to achieve abstraction:
- Abstract Class
- Interface

Abstract Class: a class that contains one or more abstract methods is called an abstract class. It is defined with the keyword "abstract". When you declare a class as abstract, it can't be directly instantiated, which means it can't be used to create an object on its own. An abstract class cannot be accessed directly, but it can be inherited by another class. If you inherit an abstract class, you must provide implementations for the abstract methods it defines.

Interface: an interface in a programming language is a blueprint of a class. It has static constants and abstract methods, and it is a mechanism to achieve abstraction. An interface can contain only abstract methods, not method bodies. It is used to achieve abstraction and multiple inheritance.

What is an Object?
An object is any entity that has state and behavior. For instance, a chair, pen, table, keyboard, or bicycle. It could be physical or logical in nature. An object can be defined as an instance of a class. An object has an address and occupies memory space. Objects can communicate even if they are unaware of each other's data or code; the only things that matter are the type of message accepted and the type of response the objects provide.

What is Class?

The term "class" refers to a group of similar objects. It's a logical entity. A class can also be thought of as a blueprint from which individual objects can be created. A class itself doesn't occupy memory space.

What is Method?

A method is a block of code, or a collection of statements grouped together, to perform a specific task or operation. Reusability of code is achieved through the use of methods: a method is created once and then reused multiple times. Methods also allow for easy code modification and readability. A method executes only when we call or invoke it.

Apart from these concepts, there are some other terms used in object-oriented design:
- Coupling - The relationship between two classes is referred to as coupling. It denotes how much one object or class knows about another. If one class changes its properties or behavior, the classes that depend on it are affected, and the extent of those changes is determined by the degree of interdependence between the two classes.
- Cohesion - Cohesion is a metric that measures how closely a class's methods and attributes are related to one another and how focused they are on completing a single, well-defined task for the system. "Cohesion" refers to a class having a single, well-defined responsibility.
- Association - The term "association" refers to a relationship that exists between two distinct classes and is established through the use of their objects.
It establishes a connection between two or more objects. One-to-one, one-to-many, many-to-one, and many-to-many associations are all possible.
- Aggregation - Aggregation is a weak association; we can say it's a one-way relationship with a unidirectional association. It is a relationship between an object and the objects it contains, and it describes a part-whole relationship in which the part can exist without the whole. Aggregation represents the has-a relationship.
- Composition - Composition is a strong association. It describes a part-whole relationship in which the part cannot exist without the whole: when the whole is deleted, all of its parts are also removed. As a result, composition represents the part-of relationship.

Object-Oriented Programming (OOP) is a programming paradigm that has become widely used in software development due to its ability to make code modular, maintainable, and reusable. OOP is based on four pillars - encapsulation, inheritance, polymorphism, and abstraction - which help developers create software that is easy to understand and modify. Popular programming languages such as Java, Python, and C++ use OOP concepts, and some advantages of OOP include code reuse, modularity, and flexibility. OOP is a powerful tool for developers who want to write maintainable, scalable, and efficient code.

Perfect eLearning is a tech-enabled education platform that provides IT courses with 100% internship and placement support. Perfect eLearning provides both online classes and offline classes (offline only in Faridabad). It provides a wide range of courses in areas such as Artificial Intelligence, Cloud Computing, Data Science, Digital Marketing, Full Stack Web Development, Block Chain, Data Analytics, and Mobile Application Development.
Perfect eLearning, with its cutting-edge technology and expert instructors from Adobe, Microsoft, PWC, Google, Amazon, Flipkart, Nestle and Info edge is the perfect place to start your IT education. Perfect eLearning provides the training and support you need to succeed in today's fast-paced and constantly evolving tech industry, whether you're just starting out or looking to expand your skill set. There's something here for everyone. Perfect eLearning provides the best online courses as well as complete internship and placement assistance. Keep Learning, Keep Growing. If you are confused and need Guidance over choosing the right programming language or right career in the tech industry, you can schedule a free counselling session with Perfect eLearning experts.
Introduction to Yellowjackets

Yellowjackets are wasps known for their distinctive yellow and black coloration. They are commonly found in gardens and outdoor spaces, where they seek food and build nests. Though beneficial in controlling other pests, yellowjackets can become a nuisance and pose a threat to people with allergies. Understanding their behavior and life cycle is the first step in effective management.

Yellowjackets can easily be confused with other wasps or bees. They have a sleek, bright body marked with alternating yellow and black bands. Unlike bees, yellowjackets lack a hairy appearance and are usually smaller. Their distinguishing characteristics include a slender waist and elongated wings, along with aggressive behavior when threatened. Knowing these traits helps in identifying them correctly and applying suitable control measures.

Yellowjackets build nests in sheltered locations, often underground or in cavities within walls or trees. These nests can house thousands of wasps, making proper identification and control vital.

Yellowjacket Control Techniques

Various methods can be employed to manage yellowjackets in the garden without causing undue harm to the environment or other beneficial insects.

Monitoring and Inspection

Regular monitoring and inspection of the garden help in early detection of yellowjacket activity. Looking for nests in common hiding spots can prevent larger infestations.

Physical and Mechanical Control

Several non-chemical methods can effectively control yellowjackets. Commercially available or homemade traps using attractive baits can be used to capture yellowjackets; placement and maintenance of traps should be done with care to target only the intended pests. If a nest is discovered, removal can be a viable option. Professional pest control services are usually recommended for this task, as improper handling can provoke aggressive behavior from the wasps.
In certain situations, chemical control may be necessary to manage a significant infestation. Specific insecticides designed to target wasps can be used. Following the manufacturer's guidelines and taking precautions to minimize impact on non-target organisms is vital.

Non-Target Effects and Considerations

When managing yellowjackets, it's essential to consider their role in the ecosystem and potential non-target effects.

Pollination and Pest Control

Yellowjackets are predators of many garden pests and contribute to pollination. Therefore, control measures should be carefully chosen to minimize disruption to these beneficial activities.

Due to their potential to sting, especially if provoked, safety considerations must be at the forefront of any yellowjacket management plan. Protective clothing, careful planning, and potentially seeking professional assistance are key factors.

Preventing Future Yellowjacket Infestations

Preventing yellowjackets from becoming a problem in the garden involves a multi-pronged approach.

Proper Sanitation and Maintenance

Maintaining a clean garden, sealing garbage cans, and removing potential food sources can deter yellowjackets from establishing a presence.

Monitoring and Early Intervention

Regular inspections and prompt action can prevent small problems from becoming larger infestations. Incorporating plants that do not overly attract yellowjackets and creating a less hospitable environment for nesting can be part of a long-term prevention strategy.

Yellowjackets, though beneficial in many ways, can become problematic when they are present in large numbers or pose a threat to human health. A well-informed approach that combines understanding, careful monitoring, physical control methods, targeted chemical control when needed, and prevention can successfully manage these insects.
Recognizing their positive role in the garden and taking steps to minimize harm to other beneficial organisms is essential in maintaining a balanced and healthy garden environment.
Storytelling in elementary schools improves children's language skills by providing students with a valuable opportunity to practice auditory comprehension, a vital component of early childhood education. The ability to understand spoken language involves much more than simply hearing words and figuring out what the speaker intends them to mean. Nonverbal cues of vocal pitch, tempo, and tonality are essential in effective communication. In face-to-face interactions, the additional nonverbal elements of body language, gestures, and facial expressions form up to 80% of expressive language. But how, in our multitasking, screen-dominant learning environments, can teachers capture and hold the attention of their distraction-prone students? Why not try using the Japanese paper folding art of origami to help focus students' attention during language arts activities? When an unexpected curiosity like origami is added to a storytelling presentation, the educational benefits for elementary school students are increased. Origami models and other interesting objects add visual stimulation and grab attention, so that young learners are focused and motivated to pay closer attention. Another advantage of adding origami to stories is that origami is created one step at a time. As a story progresses scene by scene, an origami model can be constructed, fold by fold. When the story ends, the origami model is complete. This specialized storytelling technique is called Storigami: Storytelling + Origami = Storigami. Watching and listening to stories illustrated by the progressive folds of origami models not only enables students to imagine the visual details of the scenes and characters described by the words, but also gives students experience with analyzing the symbolic representations of the paper shapes and folds that are paired with story characters or actions.
The ability to understand how the shapes relate to the story and then to imagine possible outcomes is a key element of successful problem solving, one of the most important goals of elementary education. How can teachers and other educators learn to use Storigami to build problem solving and language arts skills in their elementary school classrooms? Fortunately, a Midwestern educational publisher, Storytime Ink International, has published several collections of origami stories, such as Nature Fold-Along Stories: Quick and Easy Origami Tales About Plants and Animals. This book and other fold-along storybooks describe how to use the technique, step by step. The Storigami books are available in most public libraries and from several online sources, including http://Amazon.com/ and http://Storytimeink.com/
The adoption of the South African Constitution on 8 May 1996 was one of the turning points in the history of the struggle for democracy in this country. The Constitution is considered by many to be one of the most advanced in the world, with a Bill of Rights second to none. South Africa's Constitution was drafted by an all-inclusive Constitutional Assembly, which had representatives from all the major political parties and liberation organisations. The Constitutional Assembly sat between May 1994 and October 1996, drafting and completing the new constitution. The new Constitution was the embodiment of the vision of generations of anti-apartheid freedom fighters and democrats who had fought for the principle that South Africa belonged to all, for non-racialism and for human rights. The guiding principles of the new constitution were first articulated in the ANC's African Claims document of 1943, the Non-European Unity Movement's 10-point programme of 1943, and the 1955 Congress Alliance Freedom Charter. The Constitution is the supreme law of the land, against which all other laws are judged. It made provision for the establishment of a Constitutional Court, which is the final arbiter of the interpretation of the Constitution. The Constitution also makes provision for the way the country is governed: the establishment of parliament, the election of the president, and the creation and government of provinces and local authorities.
In certain instances, “open sesame” might be something you exclaim to magically open the door to a cave full of treasure, but for the sesame plant, open sesame is a way of life. In sesame’s case, seeds are the treasure, which are kept inside a four-chambered capsule. In order for the next generation of plants to have a chance at life, the seeds must be set free. Sesame’s story is similar to the stories of numerous other plant species whose seeds are born in dehiscent fruits. But in this instance, the process of opening those fruits is fairly unique. Sesamum indicum is a domesticated plant with a 5,000-plus-year history of cultivation. It shares a genus with about 20 other species – most of which occur in sub-Saharan Africa – and belongs to the family Pedaliaceae – the sesame family. Sesame was first domesticated in India and is now grown in many other parts of the world. It is an annual plant that is drought- and heat-tolerant and can be grown in poor soils and locations where many other crops might struggle. However, the best yields are achieved on farms with fertile soils and adequate moisture. Depending on the variety and growing conditions, sesame can reach up to 5 feet tall and can be unbranched or highly branched. Its broad lance-shaped leaves are generally arranged directly across from each other on the stem. The flowers are tubular, similar in appearance to foxglove, and are typically self-pollinated and short-lived. They come in shades of white, pink, blue, and purple and continue to open throughout the growing season as the plant grows taller, even as fruits formed earlier mature. The fruits are deeply grooved capsules with at least four separate chambers called locules. Rows of tiny, flat, teardrop-shaped seeds are produced in each chamber. The seeds are prized for their high oil content and are also used in numerous other ways, both processed and fresh. One of my favorite uses for sesame seeds is tahini, which is one of the main ingredients in hummus.
The fruits of sesame are dehiscent, which means they naturally split open upon reaching maturity. Compare this to indehiscent fruits like acorns, which must either rot or be chewed open by an animal in order to free the seeds. Dehiscence is also called shattering, and in many domesticated crop plants, shattering is something that humans have selected against. If fruits dehisce before they can be harvested, seeds fall to the ground and are lost. Selecting varieties that hold on to their seed long enough to be harvested was imperative for crops like beans, peas, and grains. In domesticated sesame, the shattering trait persists and yield losses are often high. Most of the world’s sesame crop is harvested by hand. The plants are cut, tied into bundles, and left to dry. Once dry, they are held upside down and beaten in order to collect the seeds from their dehisced capsules. When harvested this way, naturally shattering capsules may be preferred. But in places like the United States and Australia, where mechanical harvesting is desired, it has been necessary to develop new, indehiscent varieties that can be harvested using a combine without losing all the seed in the process. Developing varieties with shatter-resistant seed pods has been challenging. In early trials, seed pods were too tough and passed through threshers without opening. Additional threshing damaged the seeds and caused the harvest to go rancid. Mechanically harvested varieties of sesame exist today, and improvements in these non-shattering varieties continue to be made. In order to develop these new varieties, breeders have had to gain an understanding of the mechanisms behind dehiscence and the genes involved in this process. This research has helped us appreciate the unique way that the capsules of the sesame plant dehisce. As in the seed-bearing parts of many other plant species, the capsules of sesame exhibit hygroscopic movements. That is, their movements are driven by changes in humidity.
The simplest form of hygroscopic movement is bending, which can be seen in the opening and closing of pine cone scales. A more complex movement can be seen in the seed pods of many species in the pea family, which both bend and twist as they split open. In both of these examples, water is evaporating from the plant part in question. As it dries it bends and/or twists, thereby releasing its contents. The cylindrical nature and cellular composition of sesame fruits leads to an even more complex form of hygroscopic movement. Initially, the capsule splits at the top, creating an opening to each of the four locules. The walls of each locule bend outward, then split and twist as the seed falls from the capsule. In a study published in Frontiers in Plant Science (2016), researchers found that differences in the capsule’s inner endocarp layer and outer mesocarp layer are what help lead to this interesting movement. The endocarp layer is composed of both transverse (i.e., circumferential) and longitudinal fiber cells, while the mesocarp is made up of soft parenchyma cells. The thicknesses of these two layers gradually change along the length of the capsule. As the mesocarp dries, the capsule initially splits open and starts bending outwards, but as it does, resistance from the fiber cells in the endocarp layer causes further bending and twisting (see Figure 1 in the report for an illustration). As the researchers write, “the non-uniform relative thickness of the layers promotes a graded bi-axial bending, leading to the complex capsule opening movement.” All this considered, a rock rolling away from the entrance of a cave after giving the command, “Open sesame!” almost seems simpler than the “open sesame” experienced by the fruit of the sesame plant. See Also: Seed Shattering Lost – The Story of Foxtail Millet
You will need: the ability to integrate.

Suppose a body, under the action of a force F, changes its velocity from V1 to V2 over a short time Δt. Its acceleration is a = (V2 - V1)/Δt, where a, V1 and V2 are vector quantities. Substitute this expression into the formula for Newton's second law: F = ma = m(V2 - V1)/Δt = (mV2 - mV1)/Δt, not forgetting that the force F is also a vector quantity. Write the result in a slightly different form: FΔt = mΔV = Δp. The vector quantity FΔt, the product of the force and the time over which it acts, is called the impulse of the force and is measured in newton-seconds (N•s). The product of mass and velocity, p = mV, is the momentum of the body. This vector quantity is measured in kilogram-meters per second (kg•m/s). Thus Newton's second law can be formulated differently: the impulse of the force acting on a body is equal to the change in its momentum, FΔt = Δp. If the force acts for a very short time, for example during an impact, find the average force: F_avg = Δp/Δt = m(V2 - V1)/Δt. Example: a ball with a mass of 0.26 kg flies at a speed of 10 m/s. After a player hits it, the ball's speed increases to 20 m/s. The impact lasts 0.005 s. The average force of the player's hand on the ball in this case is F_avg = 0.26•(20 - 10)/0.005 = 520 N. If the force acting on the body is not constant but changes with time according to a law F(t), integrate F(t) over the interval from 0 to T to find the change in momentum: Δp = ∫F(t)dt. The formula F_avg = Δp/Δt then gives the average force. Example: a force changes with time according to the linear law F = 30t + 2. Find the average force over 5 s. First compute the impulse, Δp = ∫(30t + 2)dt = 15t² + 2t, then the average force: F_avg = (15t² + 2t)/t = 15t + 2 = 15•5 + 2 = 77 N. Force is a vector quantity.
If the computed value of F_avg turns out negative, it means the force vector is directed opposite to the direction of the coordinate axis. When solving problems, don't forget to convert all quantities used in the formulas to SI units: mass in kilograms, speed in meters per second, and force in newtons.
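The two worked examples above can be checked numerically. This is a minimal sketch: `average_force` applies F_avg = m(V2 - V1)/Δt directly, and `average_force_varying` approximates the impulse ∫F(t)dt with a midpoint Riemann sum (exact here, since F is linear in t); the function names are my own, not from the article.

```python
def average_force(mass, v1, v2, dt):
    """Average force (N): impulse-momentum theorem, F_avg = m*(v2 - v1)/dt."""
    return mass * (v2 - v1) / dt

def average_force_varying(force, t_total, steps=100_000):
    """Average of a time-varying force over [0, t_total]: impulse / time."""
    dt = t_total / steps
    # Midpoint rule for the impulse integral ∫ F(t) dt.
    impulse = sum(force((i + 0.5) * dt) * dt for i in range(steps))
    return impulse / t_total

# Ball example: 0.26 kg, 10 m/s -> 20 m/s, impact time 0.005 s.
print(average_force(0.26, 10, 20, 0.005))                 # ~520 N

# Linear force F(t) = 30t + 2 over 5 s: impulse = 385 N·s, average = 77 N.
print(average_force_varying(lambda t: 30 * t + 2, 5.0))   # ~77 N
```

Both results match the hand calculations in the text (520 N and 77 N, up to floating-point rounding).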
Glossary of Holocaust Terms to Know
By Jennifer Rosenberg, History Expert (B.A., History, University of California at Davis). Updated April 30, 2019.

The Holocaust is a tragic and important part of world history, and it is important to understand what it entailed, how it came to be, and who the major actors were. When studying the Holocaust, one comes across numerous terms in many different languages, as the Holocaust affected people from all sorts of backgrounds, be it German, Jewish, Roma and so on. This glossary lists slogans, code names, names of important people, dates, slang words and more, in alphabetical order, to help you understand these terms.

"A" Words

Aktion is a term used for any non-military campaign to further Nazi ideals of race, but most often referred to the assembly and deportation of Jews to concentration or death camps. Aktion Reinhard was the code name for the annihilation of European Jewry. It was named after Reinhard Heydrich. Aktion T-4 was the code name for the Nazis' Euthanasia Program. The name was taken from the Reich Chancellery building's address, Tiergarten Strasse 4. Aliya means "immigration" in Hebrew. It refers to the Jewish immigration into Palestine and, later, Israel through official channels. Aliya Bet means "illegal immigration" in Hebrew.
This was the Jewish immigration into Palestine and Israel without official immigration certificates or British approval. During the Third Reich, Zionist movements set up organizations to plan and implement these flights from Europe, such as Exodus 1947. Anschluss means "union" in German. In the context of World War II, the word refers to the German annexation of Austria on March 13, 1938. Anti-Semitism is prejudice against Jews. Appell means "roll call" in German. Within the camps, inmates were forced to stand at attention at least twice a day while they were counted. This was always carried out no matter the weather, often lasted for hours, and was frequently accompanied by beatings and punishments. Appellplatz translates to "place for roll call" in German. It was the location within the camps where the Appell was carried out. Arbeit Macht Frei is a phrase in German that means "work makes one free." A sign with this phrase on it was placed by Rudolf Höss over the gates of Auschwitz. Asocial was one of the several categories of people targeted by the Nazi regime. People in this category included homosexuals, prostitutes, Gypsies (Roma) and thieves. Auschwitz was the largest and most infamous of the Nazi concentration camps. Located near Oswiecim, Poland, Auschwitz was divided into three main camps, at which an estimated 1.1 million people were murdered.

"B" Words

Babi Yar was the massacre in which the Germans killed the Jews of Kiev on September 29 and 30, 1941, in retaliation for the bombing of German administration buildings in occupied Kiev between September 24 and 28, 1941. During these tragic days, Kiev's Jews, Gypsies (Roma) and Soviet prisoners of war were taken to the Babi Yar ravine and shot. An estimated 100,000 people were killed at this location. Blut und Boden is a German phrase that translates to "blood and soil."
This was a phrase used by Hitler to mean that all people of German blood have the right and duty to live on German soil. Bormann, Martin (June 17, 1900 - ?) was Adolf Hitler's personal secretary. Since he controlled access to Hitler, he was considered one of the most powerful men in the Third Reich. He liked to work behind the scenes and stay out of the public spotlight, earning him the nicknames "the Brown Eminence" and "the man in the shadows." Hitler viewed him as an absolute devotee, but Bormann had high ambitions and kept his rivals from having access to Hitler. He was in the bunker during Hitler's last days and left it on May 1, 1945; his fate became one of the unsolved mysteries of the century. Hermann Göring was his sworn enemy. Bunker is a slang word for Jews' hiding places within the ghettos.

"C" Words

Comite de Defense des Juifs is French for "Jewish Defense Committee." It was an underground movement in Belgium established in 1942.

"D" Words

Death March refers to the long, forced marches of concentration camp prisoners from one camp to another closer to Germany as the Red Army approached from the east in the last few months of World War II. Dolchstoss means "a stab in the back" in German. A popular myth at the time claimed that the German military had not been defeated in World War I but had been "stabbed in the back" by Jews, socialists, and liberals who forced it to surrender.

"E" Words

Endlösung means "Final Solution" in German. This was the name of the Nazi program to kill every Jew in Europe. Ermächtigungsgesetz means "The Enabling Law" in German. The Enabling Law was passed March 24, 1933, and allowed Hitler and his government to create new laws that did not have to agree with the German constitution. In essence, this law gave Hitler dictatorial powers. Eugenics is the social Darwinist principle of strengthening the qualities of a race by controlling inherited characteristics.
The term was coined by Francis Galton in 1883. Eugenics experiments were done during the Nazi regime on people who were deemed "life unworthy of life." The Euthanasia Program was a Nazi program, begun in 1939, to secretly but systematically kill mentally and physically disabled people, including Germans, who were housed in institutions. The code name for this program was Aktion T-4. It is estimated that over 200,000 people were killed in the Nazi Euthanasia Program.

"G" Words

Genocide is the deliberate and systematic killing of an entire people. Gentile is a term referring to someone who is not Jewish. Gleichschaltung means "coordination" in German and refers to the act of reorganizing all social, political and cultural organizations to be controlled and run according to Nazi ideology and policy.

"H" Words

Ha'avara was the transfer agreement between Jewish leaders from Palestine and the Nazis. Häftlingspersonalbogen refers to prisoner registration forms at the camps. Hess, Rudolf (April 26, 1894 - August 17, 1987) was deputy to the Führer and successor-designate after Hermann Göring. He played an important role in using geopolitics to gain land and was involved in the Anschluss of Austria and the administration of the Sudetenland. A devoted worshipper of Hitler, Hess flew to Scotland on May 10, 1941 (without the Führer's approval) in an attempt to negotiate a peace agreement with Britain. Both Britain and Germany denounced him as crazy, and he was sentenced to life imprisonment. The sole prisoner at Spandau after 1966, he was found in his cell in 1987, hanged with an electric cord, at age 93. Himmler, Heinrich (October 7, 1900 - May 21, 1945) was head of the SS, the Gestapo, and the German police. Under his direction, the SS grew into a massive so-called "racially pure" Nazi elite. He was in charge of the concentration camps and believed that the liquidation of the unhealthy and bad genes from society would help better and purify the Aryan race.
In April 1945, he tried to negotiate a peace with the Allies, bypassing Hitler. For this, Hitler expelled him from the Nazi Party and from all offices he held. On May 21, 1945, he attempted to escape but was stopped and held by the British. After his identity was discovered, he swallowed a hidden cyanide pill before an examining doctor could stop him. He died 12 minutes later.

"J" Words

Jude means "Jew" in German, and this word often appeared on the yellow stars that Jews were forced to wear. Judenfrei means "free of Jews" in German. It was a popular phrase under the Nazi regime. Judengelb means "Jewish yellow" in German. It was a term for the yellow Star of David badge that Jews were ordered to wear. Judenrat (plural Judenräte) means "Jewish council" in German. This term referred to a group of Jews who enacted the German laws in the ghettos. Juden raus! means "Jews out!" in German. A dreaded phrase, it was shouted by the Nazis throughout the ghettos when they were trying to force Jews from their hiding places. Die Juden sind unser Unglück! translates to "The Jews are our misfortune!" in German. This phrase was often found in the Nazi propaganda newspaper Der Stuermer. Judenrein means "cleansed of Jews" in German.

"K" Words

Kapo was a position of leadership for a prisoner in one of the Nazi concentration camps, which entailed collaborating with the Nazis to help run the camp. Kommandos were labor squads made up of camp prisoners. Kristallnacht, or the "Night of Broken Glass," occurred on November 9 and 10, 1938, when the Nazis initiated a pogrom against Jews in retaliation for the assassination of Ernst vom Rath.

"L" Words

Lagersystem was the system of camps that supported the death camps. Lebensraum means "living space" in German. The Nazis believed that there should be areas attributed to only one "race" and that the Aryans needed more "living space."
This became one of the Nazis' chief objectives and shaped their foreign policy; the Nazis believed they could gain more space by conquering and colonizing the East. Lebensunwertes Leben means "life unworthy of life" in German. The term derived from the work "The Permission to Destroy Life Unworthy of Life" ("Die Freigabe der Vernichtung lebensunwerten Lebens") by Karl Binding and Alfred Hoche, published in 1920. The work referred to the mentally and physically handicapped and regarded the killing of these segments of society as a "healing treatment." This term and this work became a basis for the state's claimed right to kill unwanted segments of the population. The Lodz Ghetto was established in Lodz, Poland on February 8, 1940. The 230,000 Jews of Lodz were ordered into the ghetto, and on May 1, 1940, the ghetto was sealed. Mordechai Chaim Rumkowski, who had been appointed the Elder of the Jews, attempted to save the ghetto by making it a cheap and valuable industrial center for the Nazis. Deportations began in January 1942, and the ghetto was liquidated by August 1944.

"M" Words

Machtergreifung means "seizure of power" in German. The term was used when referring to the Nazis' seizure of power in 1933. Mein Kampf is the two-volume book written by Adolf Hitler. The first volume was written during his time in Landsberg Prison and published in July 1925. The book became a staple of Nazi culture during the Third Reich. Mengele, Josef (March 16, 1911 - February 7, 1979?) was a Nazi doctor at Auschwitz who was notorious for his medical experiments on twins and dwarves. Muselmann was a slang term used in the Nazi concentration camps for a prisoner who had lost the will to live and was thus just one step from being dead.

"O" Words

Operation Barbarossa was the code name for the surprise German attack on the Soviet Union on June 22, 1941, which broke the Soviet-Nazi Non-Aggression Pact and plunged the Soviet Union into World War II.
Operation Harvest Festival was the code name for the liquidation and mass killings of the remaining Jews in the Lublin area on November 3, 1943. An estimated 42,000 people were shot while loud music was played to drown out the shootings. It was the last Aktion of Aktion Reinhard. Ordnungsdienst means "order service" in German and refers to the ghetto police, which was made up of Jewish ghetto residents. "To organize" was camp slang for prisoners acquiring materials illicitly from the Nazis. Ostara was a series of anti-Semitic pamphlets published by Lanz von Liebenfels between 1907 and 1910. Hitler bought these regularly, and in 1909 he sought out Lanz and asked for back copies. Oswiecim, Poland was the town where the Nazi death camp Auschwitz was built.

"P" Words

Porajmos means "the Devouring" in Romani. It was the term used by the Roma (Gypsies) for the Holocaust; the Roma were among its victims.

"S" Words

Sonderbehandlung, or SB for short, means "special treatment" in German. It was a code word used for the methodical killing of Jews.

"T" Words

Thanatology is the science of producing death. This was the description given during the Nuremberg trials to the medical experiments performed during the Holocaust.

"V" Words

Vernichtungslager means "extermination camp" or "death camp" in German.

"W" Words

The White Paper was issued by Great Britain on May 17, 1939, to limit immigration to Palestine to 15,000 persons a year; after five years, no Jewish immigration was to be permitted without Arab consent.

"Z" Words

Zentralstelle für Jüdische Auswanderung means "Central Office for Jewish Emigration" in German. It was set up in Vienna on August 26, 1938 under Adolf Eichmann. Zyklon B was the poison gas used to kill millions of people in the gas chambers.
In simulation and strategy video games, a technology tree (also called a tech tree or research tree) is a graphical representation of the possible orders in which a player may acquire upgrades during gameplay. Each node represents a technology that can later be used to improve on the player's current capabilities. Because upgrades can have multiple prerequisites while the structure remains acyclic, such a tree is often more accurately thought of as a technological graph. One major difference between a technological tree and a technological graph is the level of detail: a technological graph lays out every node and dependency in an intricate grid, whereas a technological tree shows only the highest-priority nodes. The technology tree has many branches. In a computing-themed game, the main ones might be hardware, software, programs, networking, databases, and applications. Each of these branches has several sub-branches that exist as separate entities and are not directly related: some programs and hardware require access to networks, others require databases, and others may require any of many other types of applications. If a branch of the tree does not have a corresponding application, it may be referred to as an interface branch. The main purpose of a technology tree is to represent both the current state of the technology and its possible future states. This allows a game designer to plan the future of the tree so that it remains compatible with the current state of the game. For instance, if a programmer changes a program, he needs to change that program's interface so it stays compatible with the new version.
By creating a game plan that records the current state of the game, and then using a technology tree to map all of the necessary changes, the designer can sketch the entire future tree without having to make large changes to the current system. It is important to realize that the current state of the tree is always being updated as new information is discovered; the interface, for example, may have been modified while information about the other branches of the tree is out of date. One final distinction between a technological graph and a tree is that a technological graph may be categorized by the direction in which the nodes are arranged. If a few nodes all lie along one direction, that sequence of nodes is called a path; if a direction contains several nodes that are far apart from each other, the node is said to branch into a series. The direction of a branch is the first direction a node follows after it leaves the root, and it is a simple way to organize branches. In a tree, the first direction a branch takes may be difficult to see, but the other directions are easier to see because the nodes that are farther away can be viewed easily.
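The prerequisite structure described above can be made concrete with a small sketch. Here a tech "tree" is just a mapping from each technology to the set of technologies that must be researched first; the technology names and dependencies are invented for illustration, and the depth-first topological sort is one standard way to produce a valid research order (Python's standard `graphlib.TopologicalSorter` would do the same job).

```python
# A tech "tree" as a prerequisite graph: tech -> set of prerequisite techs.
# "currency" has two parents, which is why this is a DAG rather than a tree.
TECH_TREE = {
    "pottery": set(),
    "writing": {"pottery"},
    "bronze_working": set(),
    "iron_working": {"bronze_working"},
    "currency": {"bronze_working", "writing"},
}

def research_order(tree):
    """Topological sort: an order in which every tech follows its prerequisites."""
    order, done = [], set()

    def visit(tech, path=()):
        if tech in done:
            return
        if tech in path:
            raise ValueError(f"cycle involving {tech}")  # a tech tree must be acyclic
        for prereq in tree[tech]:
            visit(prereq, path + (tech,))
        done.add(tech)
        order.append(tech)

    for tech in tree:
        visit(tech)
    return order

order = research_order(TECH_TREE)
# Every technology appears after all of its prerequisites:
assert all(order.index(p) < order.index(t) for t, ps in TECH_TREE.items() for p in ps)
```

A designer can extend `TECH_TREE` freely; the cycle check guards against accidentally making two technologies prerequisites of each other.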
Unit 1: Matter & Measurement-
Classifications of Matter-
* Element- contains ONLY ONE type of atom
* Compound- TWO or MORE elements joined by a chemical bond
* Homogeneous- uniformly mixed substances
* Heterogeneous- not uniformly mixed substances
Separation Techniques-
1. Filtration- particle size
2. Distillation- boiling point
3. Evaporation- solid remains
4. Chromatography- solubility
5. Centrifugation- density
6. Magnetism- magnet
Unit 2: Atomic Structure-
1. Isotopes- atoms of the same element with different mass numbers
Unit 3: Moles & Nuclear Chemistry-
Half-life Problems-
1. Carbon-14 is used for dating. It has a half-life of 5730 years. If it is the year 2010 and we know a sample has decayed from 1.00 g to 0.125 g, how old is this sample? What year did the sample originate in?
2. How much of a 10 g radioactive sample will remain after 2 minutes if its half-life is only 8 seconds?
Nuclear Chemistry-
* Nuclear reaction- a reaction that involves a change in the atomic nucleus
* Types of radiation/decay:
1. Alpha particles (+2) Ex.) 4/2 He
2. Beta particles (-1) Ex.) 0/-1 e
3. Gamma rays (no charge) Ex.) 0/0 γ
4. Positron emission Ex.) 1/1 p -> 1/0 n + 0/+1 β
5. Electron capture Ex.) 1/1 p + 0/-1 e -> 1/0 n
* Fission- a nucleus splits
* Fusion- nuclei combine to form a heavier nucleus
Unit 4: Electrons & Electron Configurations-
Electromagnetic Radiation (in order of increasing energy)-
1. Radio waves 2. Microwaves 3. Infrared light 4. Visible light 5. Ultraviolet light 6. X-rays
Visible light colors: 1. Red 2. Orange 3. Yellow 4. Green 5. Blue 6. Indigo 7. Violet
Practice:
1.) What's the frequency of a wave with a wavelength of 50 m?
2.) What's the frequency if the wavelength equals 5.0*10^-8 m?
Electron Configuration-
* The elements in Group 18 are called the NOBLE GASES because they are STABLE and tend to be UNREACTIVE (because they have a completely filled energy level and 8 valence electrons)
* Octet Rule- atoms tend toward 8 valence electrons, and will give/take electrons to get 8
Quantum Numbers-
* n- energy level
* l- sublevel (s=0, p=1, d=2, f=3)
* ml- ranges from −l through 0 to +l
* ms- +1/2 or −1/2
Principles-
* Pauli Exclusion- only two electrons per orbital, with opposite spins
* Aufbau- "to build up"; electrons…
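The quantum-number rules above can be enumerated directly; this short sketch (with a hypothetical `orbitals` helper) counts the allowed (n, l, ml, ms) states in a shell and confirms the 2n² capacity rule.

```python
def orbitals(n):
    """List every allowed (n, l, ml, ms) combination for shell n."""
    states = []
    for l in range(n):               # l runs 0 .. n-1
        for ml in range(-l, l + 1):  # ml runs -l .. +l
            for ms in (-0.5, 0.5):   # two spins per orbital (Pauli)
                states.append((n, l, ml, ms))
    return states

# Shell capacity is 2n^2: n=1 holds 2 electrons, n=2 holds 8, n=3 holds 18.
```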
Motor planning is a complex idea with many different components. In short, it is the body’s ability to remember the small steps that, when combined, allow us to carry out a specific activity such as riding a bike or tying your shoes. Sometimes children have difficulty processing the information needed to learn new motor actions. These children may appear clumsy or need extra time to complete what may seem like a simple task. So perhaps you have already shown your child the steps it takes to brush their teeth, but after the 20th time they still seem to need reminding to put toothpaste on the toothbrush! No need to get frustrated; there are some motor planning activities and exercises you can work on with your child that will help develop their ability to motor plan. Yoga is a great way for kids to have fun moving their bodies in unique ways! I love Yoga Pretzels! These cards let kids work through different poses, with information presented to cater to different learning styles. When you first start this activity, it is okay to give your child verbal feedback on how they are doing, show them visually with your body, and help with your hands to get them into the right position. Over time, start to take away one of those ways you are helping, to see if they are beginning to motor plan these poses more on their own. Have races around the living room walking like a bear, crab, frog, snake, or giraffe. Or go on a treasure hunt from your own living room, but make it a rule that you must walk like a certain animal. Or, to challenge them, have them come up with a way to walk like a specific animal! Getting back to your roots as a young one! You as the parent can start off by being Simon and holding different poses, such as standing with one foot in front of the other and one hand on top of your head. The sillier the better!
Do not stress too much about your child following the rules of the game; focus instead on challenging their bodies to replicate what you are showing them. You will need a ball of any size and a rope or tape for this game. The goal of the activity is for the child to move the ball along the rope from one side to the other. First, they can start with their hands to roll the ball along the line. Then make it more challenging by having them move it with their foot. You can increase the challenge even more by allowing them to move it only with their elbow or knee, or by laying the rope out in a wave or zig-zag pattern. With this idea, you do not always have to resort to climbing on furniture. You can create a sequence of stations your child has to go through, such as kicking a ball to knock down a block tower, followed by frog-jumping towards 3 different targets, and then finally pushing a weighted laundry basket around 2 obstacles. You can challenge them by changing up one part of the circuit each time, such as having them jump sideways to the targets instead of forwards. Another challenge could be having them come up with their own tasks to do at the stations! We hope you found these motor planning activities helpful. Click here for more therapist-approved activities to try at home! Dana Thomsen has been a pediatric physical therapist for 8 years, with experience working with a wide range of diagnoses. Her favorite part of working in the pediatric field is being able to get paid to play with such adorable children! She enjoys spending her time cuddling with her lovable dog and reading a good book.
By Susan Oliverio, MSEd, Certified School Psychologist Special education is a broad term used by the law to describe specially designed instruction that meets the unique needs of a child who has a disability. Under Pennsylvania law, children with a specific learning disability may be eligible to receive special education services through the public school system and at no cost to the family. What is a Learning Disability? A child may have a specific learning disability in one or more of the following areas: oral expression, listening comprehension, written expression, basic reading skills (dyslexia), reading fluency skills, reading comprehension, math calculation (dyscalculia) and math problem solving. A learning disability is not determined by academic achievement alone. For example, a child who has never been exposed to appropriate reading instruction or reading materials would likely read far below age and grade level. However, that child's inability to read would not be explained by a reading disability. The most common way in which a learning disability is determined is through the use of the discrepancy model. Students with a learning disability will show an unexpected gap between their potential (IQ) and academic achievement. You will frequently hear parents say, for example, “She is very bright and creative. She learns quickly and easily, but just can't quite master reading.” Determining whether the gap is unexpected or unexplained requires assessment from a psychologist who will administer an IQ test and an achievement test. The results of these assessments are compared to determine if there is a significant gap between the scores. The Special Education Process The special education process begins with determining whether a child is eligible to receive specially designed instruction in the school setting.
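The score comparison in the discrepancy model described above can be sketched numerically. This is only an illustration: the `has_discrepancy` helper and the 15-point cut-off are hypothetical, since real eligibility criteria vary by state and district.

```python
def has_discrepancy(iq_score, achievement_score, threshold=15):
    """Flag an unexpected gap between potential (IQ) and achievement.

    Both inputs are standard scores (mean 100, SD 15). The default
    15-point (one standard deviation) threshold is a hypothetical
    cut-off for illustration only.
    """
    return (iq_score - achievement_score) >= threshold

# A bright child (IQ 120) with a reading standard score of 95 shows
# a 25-point gap, consistent with the parent quote in the text above.
```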
A group of qualified professionals in the school will review evaluation materials, which can include: medical reports, psychological evaluations, review of educational records, parent and teacher reports and interviews, and individual (one-on-one) assessment with the child. If a child has been determined to be in need of specially designed instruction, the school will create and implement an individual education plan (IEP) for the child. The IEP document will include information about your child's current developmental and academic levels and will include educational goals that the child will work towards. The document will also include Specially Designed Instruction (SDI) strategies that teachers will use to assist the child in reaching those goals.
Ever uncovered a fantastic record of your ancestor overseas — only to be baffled by the unfamiliar words of a bygone era? Genealogist Kirsty Gray demystifies antiquated terms. When you decided to start climbing your ancestral tree, it is unlikely that you considered the importance of obtaining a degree in the terminology and handwriting of ‘yester-century’. Your quest to find out more than just the names of your forebears may well be blighted with illegible script and a lack of records (or lack of access to them) — but what do you do when you can read the words and yet you have no idea what they mean? An obvious starting point is the enormous and comprehensive Oxford English Dictionary, available in print, on CD-ROM and online, or the Australian equivalent, the Macquarie Dictionary. Other dictionaries and various local and family history companions also exist to help the modern-day genealogist to comprehend the now-redundant vocabulary used in previous centuries. There are a few terms that are slightly more commonplace in historical records, an understanding of which would significantly assist researchers when accessing records, particularly in the 19th century and earlier. This round-up of top 10 terms serves as a starting point for demystifying some old terminology you may come across in your research.
1. Hundred
The regular use of the word is obviously numerical, but historically speaking the term ‘hundred’ was also used to describe a subdivision of a county or shire which had its own court. In the late Anglo-Saxon period, hundreds probably consisted of 100 hides, a hide being a unit of taxation. The names of the hundreds were taken from their original meeting places, usually at remote locations, often at a river crossing or major highway. Some hundreds had over a dozen parishes whereas others included just a handful.
2. Acres, roods and perches
The ‘quantities of statute measure’ in the tithe schedule (or apportionment) and, later, the valuation lists were recorded in acres [A], roods or rods [R] and perches [P]. One acre was equal to four roods and one rood was equivalent to 40 perches. The rent charge payable to the parish was calculated based on these measures. There are several online converters that can be used to work out the area of land using the acres, roods and perches system, including Converterin.
3. Relict
‘Relict’ (from the Latin relictus, ‘left behind’) describes the survivor (usually a widow) of a marriage who has not married again. The Nottinghamshire Guardian (pictured above) records two widows as such in the Births, Marriages and Deaths column on 25 May 1858. Learn seven more antiquated words and phrases in the March-April issue of Inside History magazine, available to purchase on our online store now!
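The conversion factors above (4 roods to the acre, 40 perches to the rood) can be applied directly; `to_perches` is a hypothetical helper name, not one of the online converters mentioned.

```python
PERCHES_PER_ROOD = 40
ROODS_PER_ACRE = 4

def to_perches(acres, roods, perches):
    """Convert an A-R-P land measure into total perches."""
    return (acres * ROODS_PER_ACRE + roods) * PERCHES_PER_ROOD + perches

# One acre is 4 roods = 160 perches, so a holding recorded as
# 2A 1R 10P works out to 2*160 + 40 + 10 = 370 perches.
```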
A fast battleship was a battleship which emphasized speed without — in concept — undue compromise of either armour or armament. Early WWI-era dreadnought battleships were typically built with low design speeds, so the term "fast battleship" is applied to a design which was considerably faster. The extra speed of a fast battleship was normally required to allow the vessel to carry out additional roles besides taking part in the line of battle, such as escorting aircraft carriers. A fast battleship was distinguished from a battlecruiser in that it would have been expected to be able to engage hostile battleships in sustained combat on at least equal terms. The requirement to deliver increased speed without compromising fighting ability or protection was the principal challenge of fast battleship design. While increasing the length-to-beam ratio was the most direct method of attaining a higher speed, this meant a bigger ship that was considerably more costly and/or could exceed the naval treaty tonnage limits (where these applied—such as the Washington Naval Treaty shaping naval fleet composition before World War II). Technological advancements such as propulsion improvements and light, high-strength armour plating were required in order to make fast battleships feasible. Unlike "battlecruiser", which became official Royal Navy usage in 1911, the term "fast battleship" was essentially an informal one. The warships of the Queen Elizabeth class were collectively termed the Fast Division when operating with the Grand Fleet. Otherwise, fast battleships were not distinguished from conventional battleships in official documentation; nor were they recognized as a distinctive category in contemporary ship lists or treaties. There is no separate code for fast battleships in the US Navy's hull classification system, all battleships, fast or slow, being rated as "BB".
Origins
Between the origins of the armoured battleship with the French Gloire and the Royal Navy’s Warrior at the start of the 1860s, and the genesis of the Royal Navy’s Queen Elizabeth class in 1911, a number of battleship classes appeared which set new standards of speed. The Warrior herself, at over 14 knots (26 km/h) under steam, was the fastest warship of her day as well as the most powerful. Due to the increasing weight of guns and armour, this speed was not exceeded until Monarch (1868) achieved 15 knots (28 km/h) under steam. The Italian Italia of 1880 was a radical design, with a speed of 18 knots (33 km/h), heavy guns and no belt armour; this speed was not matched until the 1890s, when higher speeds came to be associated with second-class designs such as the Renown of 1895 (18 knots) and the Swiftsure and Triumph of 1903 (20 knots). In these late pre-dreadnought designs, the high speed may have been intended to compensate for their lesser staying power, allowing them to evade a more powerful opponent when necessary. From about 1900, interest in the possibility of a major increase in the speed of Royal Navy battleships was provoked by Sir John (“Jackie”) Fisher, at that time Commander-in-Chief of the Mediterranean Fleet. Possibly due to Fisher’s pressure, the Senior Officers’ War Course of January 1902 was asked to investigate whether a ship with lighter armour and quick-firing medium guns (6-inch to 10-inch (150 mm – 254 mm) calibre), with a 4-knot (7 km/h) advantage in speed, would obtain any tactical advantage over a conventional battleship. It was concluded that “gun power was more important than speed, provided both sides were determined to fight”; although the faster fleet would be able to choose the range at which it fought, it would be outmatched at any range.
It was argued that, provided that the fighting was at long range, an attempt by the faster fleet to obtain a concentration of fire by “crossing the T” could be frustrated by a turn-away, leading to the slower fleet “turning inside the circle of the faster fleet at a radius proportional to the difference in speed” (Figure 1). War games conducted by the General Board of the US Navy in 1903 and 1904 came to very similar conclusions. Fisher appears to have been unimpressed by these demonstrations, and continued to press for radical increases in the speed of battleships. His ideas ultimately came to at least partial fruition in the Dreadnought of 1906; like Warrior before her, Dreadnought was the fastest as well as the most powerful battleship in the world. The Early Dreadnoughts
Dreadnought was the first major warship powered by turbines. She also included a number of other features indicating an increased emphasis on speed:
- An improved hull form was developed, with increased length-to-beam ratio.
- The thickness of the main belt was reduced to 11 inches, compared to 12 inches for preceding classes.
- The belt terminated at the upper deck, the usual ‘upper belt’ being deleted.
- The forecastle was raised, allowing higher sustained speed in heavy seas.
In the decade following the construction of the Dreadnought, the Royal Navy’s lead in capital ship speed was eroded, as rival navies responded with their own turbine-powered “dreadnoughts”. Meanwhile, in Britain, Fisher continued to press for still higher speeds, but the alarming cost of the new battleships and battlecruisers provoked increasing resistance, both within the Admiralty and from the new Liberal Government that took office in 1906. As a result, a number of potentially significant fast battleship designs failed to achieve fruition. A notable abortive design was the 22,500-ton “X4” design of December 1905.
This would have been a true fast battleship by the standards of the time, carrying the same armament and protection as Dreadnought at a speed of 25 knots (46 km/h). In the event, the British lead in dreadnought and battlecruiser construction was deemed to be so great that a further escalation in the size and cost of capital ships could not be justified. The X4 design is often described as a “fusion” of the Dreadnought concept with that of the battlecruiser, and it has been suggested that she “would have rendered the Invincibles obsolete”. Fisher was again rebuffed in 1909 over the first of the 13.5in-gunned “super-dreadnoughts”, the Orion class; of the two alternative designs considered, one of 21 knots (39 km/h) and the other of 23 knots (43 km/h), the Board of Admiralty selected the slower and cheaper design. Fisher had his dissent recorded in the board minutes, complaining that “we should not be outclassed in any type of ship”. The Queen Elizabeth class
In the event, Fisher’s aspirations for faster battleships were not fulfilled until after his retirement in 1910. Following the success of the 13.5-inch (343 mm) gun, the Admiralty decided to develop a 15-inch gun to equip the battleships of the 1912 construction programme. The initial intention was that the new battleships would have the same configuration as the preceding Iron Duke class, with five twin turrets and the then-standard speed of 21 knots (39 km/h). However, it was realized that, by dispensing with the amidships turret, it would be possible to free up weight and volume for a much enlarged power plant, and still fire a heavier broadside than the Iron Duke.
Although War College studies had earlier rejected the concept of a fast, light battlefleet (see Origins and Figure 1, above), they were now supportive of the concept of a Fast Division of 25 knots (46 km/h) or more, operating in conjunction with a conventional heavy battleline, which could use its advantage in speed to envelop the head of the enemy line (Figure 2). Compared to Fisher’s idea of speeding up the entire battlefleet, the advantages of this concept were that there would be no need to compromise the fighting power of the main fleet, and that it would be possible to retain the use of the existing (and still brand-new) 21-knot ships. Up to this time, it had been assumed that the role of a Fast Division could be fulfilled by the battlecruisers, of which there were at that time ten completed or on order. However, it was realized that there were now two problems with this assumption. The first was the likelihood that the battlecruisers would be fully committed in countering the growing and very capable German battlecruiser force. The second was that, as the then First Lord of the Admiralty, Winston Churchill, put it, “our beautiful ‘Cats’ had thin skins compared to the enemy’s strongest battleships. It is a rough game to pit ... seven or nine inches of armour against twelve or thirteen”. The new battleships would, in fact, be the most heavily armoured dreadnoughts in the fleet. The original 1912 programme envisaged three battleships and a battlecruiser. However, given the speed of the new ships, it was decided that a new battlecruiser would not be needed. In the event, five ships were built, the extra unit, HMS Malaya, being funded by the Federated Malay States. The battleship design for the following year’s programme, which became the Revenge class, also had 15-inch (381 mm) guns, but reverted to the 21-knot (39 km/h) speed of the main battlefleet.
Again, no battlecruiser was included, a decision which suggests that the fast battleships were perceived at that time as superseding the battlecruiser concept. Combat Experience at the Battle of Jutland
When the fast battleship concept was put to the test at the Battle of Jutland, the Queen Elizabeths had been temporarily attached to Vice-Admiral Beatty’s Battlecruiser Squadron at Rosyth (this was to release the Invincible-class battlecruisers of the Third Battlecruiser Squadron for gunnery practice at Scapa Flow). The Queen Elizabeths proved an outstanding success, firing with great rapidity, accuracy and effect, while surviving large numbers of hits from German 28.4 cm (11-inch) and 30.5 cm (12-inch) shells, and successfully evading the main German battlefleet during the so-called run to the North. In the fighting, Warspite was severely damaged, suffered a steering failure and was obliged to withdraw, while Malaya suffered a serious cordite fire which nearly caused her loss. However, both ships returned safely to port. This was in notable contrast to the performance of the battlecruisers, of which three (out of nine present) were destroyed by magazine explosions after a relatively small number of hits. When the main body of the Grand Fleet came into action, the Queen Elizabeths were unable to reach their intended station ahead of the battleline, and instead joined the rear of the line, seeing little further action. Meanwhile, the six surviving battlecruisers assumed the “Fast Division” role, operating ahead of the battleline with some success, exploiting their advantage of speed to damage the head of the German line with virtual impunity. Jutland was a crippling blow to the reputation of the existing battlecruisers. However, it also reinforced the views of the commander-in-chief, Sir John Jellicoe, that the Queen Elizabeths were too slow to operate with the Battlecruiser Fleet on a permanent basis.
Based on combat reports, Jellicoe credited the German König-class battleships with 23 knots (43 km/h), which would mean that the Queen Elizabeths, which were good for just 24 knots (44 km/h), would be in serious danger if they were surprised by a battlefleet headed by these ships. The Admiral class
Even before Jutland, Jellicoe and Beatty had expressed concern at the lack of new construction for the Battlecruiser Fleet, and the inadequacy of the ships already provided. Early in 1916, they had rejected proposals for a new fast battleship design, similar to the Queen Elizabeth but with reduced draught, pointing out that, with the five new Revenge-class ships nearing completion, the fleet already had a sufficient margin of superiority in battleships, whereas the absence of battlecruisers from the 1912 and 1913 programmes had left Beatty’s force with no reply to the new 30.5 cm (12-inch)–gunned German battlecruisers. Jellicoe had believed that the Germans intended to build still more powerful ships, with speeds of up to 29 knots (54 km/h), and hence had called for 30-knot (56 km/h) ships to fight them. Although two new battlecruisers (Renown and Repulse) had been ordered in 1914, and were being constructed remarkably quickly, Jellicoe had argued that, although their speed was adequate, their armour protection (dramatically reduced at Fisher’s insistence) was insufficient. The 1915 design had therefore been recast as a 36,000-ton battlecruiser with eight 15-inch (381 mm) guns and a speed of 32 knots (59 km/h). The main belt was only 8 inches thick, sloped outwards to give the same protection as a vertical 9-inch belt. A class of four ships had been authorised, the first being laid down on 31 May – the day that Jutland was fought. The losses at Jutland led to a reappraisal of the design.
As noted above, the British were now convinced that their fast battleships were battleworthy but too slow, and their battlecruisers — even the largest — unfit for sustained battle. As a result, the new ships were radically redesigned in order to achieve the survivability of the Queen Elizabeths while still meeting the requirement for 32-knot (59 km/h) battlecruisers, although this reworking was flawed. The resulting ships would be the Admiral-class battlecruisers; at 42,000 tons, by far the largest warships in the world. In 1917 construction was slowed down to release resources for the construction of anti-submarine vessels; when it became clear that the threatened new German battlecruisers would not be completed, the last three were suspended and ultimately cancelled, leaving only the lead ship to be completed as the famous HMS Hood. Although the Royal Navy always designated Hood as a battlecruiser, some modern writers such as Anthony Preston have characterised her as a fast battleship, as she theoretically had the protection of the Queen Elizabeths while being significantly faster. On the other hand, the British were well aware of the protection flaws remaining despite her revised design, so she was intended for the duties of a battlecruiser and served in the battlecruiser squadrons throughout her career. Moreover, the scale of her protection, though adequate for the Jutland era, was at best marginal against the new generation of 16-inch (406 mm) gunned capital ships that emerged soon after her completion in 1920, typified by the US Colorado class and the Japanese Nagato class. Other designs, 1912–1923
During the First World War, the Royal Navy was unique in operating both a Fast Division of purpose-built battleships and a separate force of battlecruisers.
However, the 1912–1923 period saw a series of advances in marine engineering which would eventually lead to a dramatic increase in the speeds specified for new battleship designs, a process terminated only by the advent of the Washington Naval Treaty. These advances included:
- small-tube boilers, allowing more efficient transfer of heat from boiler to propulsive steam;
- increases in steam pressure;
- reduction gearing, which allowed propellers to rotate at a slower, and more efficient, speed than the turbines that powered them.
By the early 1920s, the wealth of the USA and the ambition of Japan (the two Great Powers least ravaged by World War I) were forcing the pace of capital ship design. The Nagato class set a new standard for fast battleships, with 16-inch (406 mm) guns and a speed of 26.5 knots (49.1 km/h). The Japanese appear to have shared Fisher’s aspiration for a progressive increase in the speed of the whole battlefleet, influenced partly by their success at outmanoeuvring the Russian fleet at Tsushima, and partly by the need to retain the tactical initiative against potentially larger hostile fleets. The immediate influence of the Nagatos was limited by the fact that the Japanese kept their actual speed a closely guarded secret, admitting to only 23 knots (43 km/h). As a result, the US Navy, which had hitherto adhered steadily to a 21-knot (39 km/h) battlefleet, settled for a modest increase to 23 knots (43 km/h) in the abortive South Dakota class of 1920. The Japanese planned to follow up the Nagatos with the Kii class (ten 16-inch (406 mm) guns, 29.75 knots, 39,900 tons), described as "fast capital ships" and, according to Conway’s, representing a fusion of the battlecruiser and battleship types. Meanwhile, the Royal Navy, alarmed at the rapid erosion of its pre-eminence in capital ships, was developing even more radical designs: the 18-inch (457 mm) gunned N3 class and the 32-knot (59 km/h), 16-inch (406 mm) gunned G3 class, both of some 48,000 tons.
Officially described as battlecruisers, the G3s were far better protected than any previous British capital ship, and have generally been regarded, like the Kiis, as true fast battleships. The G3s were given priority over the N3s, showing that they were considered fit for the line of battle, and orders were actually placed. However, both the British and the Japanese governments baulked at the monstrous cost of their respective programmes, and ultimately were forced to accede to US proposals for an arms limitation conference; this convened at Washington DC in 1921, and resulted in the 1922 Washington Naval Treaty. This treaty saw the demise of the giant fast battleship designs, although the British used a scaled-down version of the G3 design to build two new battleships permitted under the treaty; the resulting Nelson-class vessels were completed with the modest speed of 23 knots (43 km/h). The Washington Treaty Era
The signatories of the Washington Treaty were the USA, UK, Japan, France and Italy; at that time the only nations in the world with significant battlefleets. As a result, the terms of the Washington Treaty, and the subsequent treaties of London 1930 and London 1936, had a decisive effect on the future of capital ship design. The treaties extended the definition of capital ship to cover all warships exceeding 10,000 tons standard displacement or carrying guns exceeding 8-inch calibre; imposed limits on the total tonnage of capital ships allowed to each signatory; and fixed an upper limit of 35,000 tons standard displacement for all future construction. These restrictions effectively signalled the end of the battlecruiser as a distinct category of warship, since any future big-gun cruiser would count against the capital ship tonnage allowance.
The treaty limits also greatly complicated the problem of fast battleship design, since the 35,000-ton limit closed off the most direct route to higher speed: increasing the length-to-beam ratio would have meant a bigger ship. Evidence of continued interest in high-speed capital ships is given by the fact that, although the signatories of the treaties were allowed to build 16-inch (406 mm) gunned ships as their existing tonnage became due for replacement, most of them passed up the opportunity to do so, preferring instead lighter-armed but faster ships. A British Admiralty paper of 1935 concluded that a balanced design with 16-inch (406 mm) guns would not be possible within the 35,000-ton limit, since it would be either insufficiently armoured or too slow; it is clear that by this date the 23-knot (43 km/h) speed of the Nelsons was considered insufficient. The recommended design (never built) was one with nine 15-inch (381 mm) guns and a speed of “not less than 29 knots (54 km/h)”. Four capital ships of the treaty era were built to displacements appreciably less than the 35,000-ton limit: the French Dunkerque and Strasbourg, and the German Scharnhorst and Gneisenau. The Dunkerque class was built in response to the German Panzerschiff (or “pocket battleship”) Deutschland class. The Panzerschiffe were, in effect, a revival of the late 19th-century concept of the commerce-raiding armoured cruiser: long-ranged, heavily armed, and fast enough to evade a conventional capital ship. Likewise, the Dunkerque can be regarded as a revival of the armoured cruiser’s nemesis, the battlecruiser. With 29-knot (54 km/h) speed and 330 mm (13-inch) guns, she could operate independently of the fleet, relying on her speed to avoid confrontation with a more powerful adversary, and could easily overtake and overwhelm a Panzerschiff, just as Sturdee’s battlecruisers had done to von Spee’s cruisers at the Falkland Islands in 1914.
On the other hand, as a member of the line of battle, alongside the elderly and slow dreadnoughts that made up the rest of the French battlefleet, the design would make no sense, since her speed would lose its value and neither her armament nor her protection would be at all effective against a modern 16-inch (406 mm) gunned battleship such as Nelson. The Scharnhorst and Gneisenau were Germany’s response to the Dunkerques. They were an attempt to redress the inadequacies of the Panzerschiff design in speed, survivability and powerplant (the diesel engines of the Panzerschiffe were unreliable and produced severe vibration at high speed), and used much material assembled for the Panzerschiffe programme (most significantly, the six triple 11-inch (279 mm) gun mountings originally intended for Panzerschiffe D to F). Although much larger than the Dunkerques, the Gneisenaus were also not intended for the line of battle; apart from their insufficient armament, set-piece battles against the vastly more numerous Allied battlefleets had no place in Germany’s strategic requirements. Instead, the two German ships relied throughout their career on their superlative speed (over 32 knots) to evade the attentions of Allied capital ships. The treaties also allowed the reconstruction of surviving battleships from the First World War, including up to 3,000 tons additional protection against torpedoes, high-altitude bombing and long-range gunnery. In the late 1930s, the Italian and Japanese navies opted for extremely radical reconstructions: in addition to replacing the powerplant in their existing ships, they lengthened the ships by adding extra sections amidships or aft. This had a double benefit; the extra space allowed the size of the powerplant to be increased, while the extra length improved the speed/length ratio and so reduced the resistance of the hull. 
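The benefit of lengthening described above is often expressed through the conventional speed–length ratio, V/√L (speed in knots over the square root of waterline length in feet): for a given speed, a longer hull gives a lower ratio and hence less wave-making resistance. The figures below are purely illustrative and are not the dimensions of any class named in this article.

```python
import math

def speed_length_ratio(speed_knots, waterline_length_ft):
    """Conventional speed-length ratio V / sqrt(L); higher values mean
    disproportionately more wave-making resistance."""
    return speed_knots / math.sqrt(waterline_length_ft)

# At the same 23 knots, lengthening an (illustrative) 600 ft hull to
# 660 ft lowers the ratio, so less power is needed per knot.
before = speed_length_ratio(23, 600)
after = speed_length_ratio(23, 660)
```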
As a result, both navies realised significant increases in speed; for example, the Japanese Ise class was increased from 23 knots (43 km/h) to 25 knots (46 km/h), and the Italian Conte di Cavour class from 21 knots (39 km/h) to 27 knots (50 km/h). France, the UK and the US took a less radical approach, rebuilding their ships within their original hulls; boilers were converted to oil-firing or replaced, as were the engines in some cases, but increases in the output of the powerplant were generally cancelled out by increases in the weight of armour, anti-aircraft armament and other equipment. The exception to these trends was Japan, which refused to sign the Second London Treaty. It rather uncharacteristically settled for a moderate speed of 27 knots (50 km/h) for the sake of a heroic level of protection and firepower in the 18.1-inch (460 mm) gunned, 64,000-ton Yamato class. After much debate, the US settled on two 35,000-ton classes, also with a speed of 27 knots (50 km/h): the North Carolina and South Dakota classes. Due to treaty restrictions, firepower and protection were emphasized first, although both managed respectable speed increases over their WWI contemporaries, enough to operate as carrier escorts. The US signed the Second London Treaty but was quick to invoke an “escalator clause” to raise the main battleship calibre from 14-inch (356 mm) to 16-inch (406 mm) when Italy and Japan refused to adopt it. This made the North Carolina a somewhat unbalanced ship, being designed to resist shells from the 14-inch (356 mm) guns that she was originally intended to carry, but being up-gunned during construction. The South Dakota rectified this with protection proof against 16-inch (406 mm) guns.
In order to counter the increase in armor weight and stay within tonnage limits, the South Dakota class had to adopt a shorter hull to reduce the length of the required protected area, compensating by installing more powerful machinery than the North Carolina, and this made the ships somewhat cramped. The US also used the treaty's “escalator clause” to order the 45,000 ton, 33-knot (61 km/h) Iowa class after Japan's withdrawal from the treaty. Being free of treaty limitations, the Iowa class had new 16-inch (406 mm) guns with a greater maximum range, and it had even more powerful engines and a lengthened hull for a significantly higher speed than the North Carolinas and South Dakotas. World War II Designs In 1938 the USA, Britain and France agreed to invoke the above-mentioned escalator clause of the Second London Treaty, allowing them to build up to 45,000 tons standard. By this time, all three allied nations were already committed to new 35,000-ton designs: the US North Carolinas (two ships) and South Dakotas (four), the British King George V class (five ships) and the French Richelieus (two completed out of four planned, the last of the class, Gascogne, to a greatly modified design). The UK and US laid down follow-on classes, designed to the new 45,000 ton standard, in 1939 and 1940 respectively. The US succeeded in completing four of the intended six Iowas, but the British Lion class would prove abortive; two of the planned four units were laid down, in the summer of 1939, but neither was completed. They would have embarked nine 16-inch (406 mm) guns and, at 29 to 30 knots (60 km/h), would have been significantly faster than the King George V class. The UK did complete one final battleship to an “emergency” design, the Vanguard, built around the 15-inch (381 mm) gun mountings removed from the cruisers Courageous and Glorious after their conversion to aircraft carriers. Completed in 1946, she was similar in speed to the Lions.
The last two US capital ship designs were the first since 1922 to be entirely free of treaty constraints, and were sharply contrasted. The huge Montana-class battleships represent a return to “normal American practice” in battleship design, with massive protection, heavy firepower, and moderate speed (27 knots). At 60,500 tons standard, they approached the size of the Yamatos, which they resembled in concept. Five of these ships were ordered, but they were ill-suited to the needs of fast carrier task force operations, and none were laid down. Summary of "fast battleship" classes The following classes of warship have been considered to be fast battleships, in accordance with the definition used in this article and/or with contemporary usage. The list includes all new construction of the 1930s and 1940s, along with some reconstructions; this reflects the fact that, while not all of these ships were notably fast by contemporary standards of new construction, they were all much faster than the considerable number of capital ships built in the pre-Treaty era and still in service at that time. All speeds are design speeds, sourced from Conway’s; these speeds were often exceeded on trial, though rarely in service.
- Royal Navy
- Queen Elizabeth class (25 knots): the prototype fast battleship class
- Hood, the sole member of the Admiral class, was characterised by the Royal Navy as a battlecruiser throughout her lifetime; nonetheless, some modern authorities characterise her as a fast battleship, as she appeared on paper to be an improvement over the Queen Elizabeth class. (28 knots)
- King George V class (28 knots)
- Vanguard (30 knots)
- United States Navy
- Imperial Japanese Navy (Dai-Nippon Teikoku Kaigun)
- Kongō class – as reconstructed (30.5 knots). Originally classified as battlecruisers, these ships were reclassified as battleships after their first reconstruction in 1929-1931.
Even after a second reconstruction in the late 1930s, they remained relatively weak in armament and protection by Second World War standards.
- Nagato class – as completed (26.5 knots). Unusually for a Japanese design, the speed was reduced to 25 knots (46 km/h) when the class was reconstructed in 1934-36.
- Yamato class (27 knots)
- German Navy (Kriegsmarine)
- Scharnhorst class (also known as the Gneisenau class) (32 knots). These ships were officially designated kleine Schlachtschiffe ("small battleships"). The contemporary Royal Navy termed them "battlecruisers", on the basis of their exceptionally high speed and weak armament.
- Bismarck class (30.8 knots)
- French Navy (Marine Nationale)
- Dunkerque class (29.5 knots). As with the Scharnhorst and Gneisenau, the contemporary Royal Navy termed these ships "battlecruisers". Some modern French-language sources also characterise these ships as battlecruisers (croiseurs de bataille) rather than battleships (cuirassés or bâtiments de ligne).
- Richelieu class (30 knots)
- Italian Navy (Regia Marina)
- Conte di Cavour class – as reconstructed, 1933-1937 (27 knots)
- Andrea Doria class – as reconstructed, 1937-1940 (26 knots)
- Vittorio Veneto class (30 knots).
References
- Brown, DK Warrior to Dreadnought: Warship Development 1860-1905. Caxton Editions 2003. ISBN 1-84067-529-2
- Brown, DK The Grand Fleet: Warship Design and Development 1906-1922. Caxton Editions 2003. ISBN 1-84067-531-4
- Campbell, NJM Jutland: An Analysis of the Fighting. Conway Maritime Press, 1986. ISBN 0-85177-379-6
- Churchill, Winston S The World Crisis, 1911-1918. Free Press 2005. ISBN 0-7432-8343-0
- Conway’s All the World’s Fighting Ships, 1906-1921. Conway Maritime Press, 1985. ISBN 0-85177-245-5
- Conway’s All the World’s Fighting Ships, 1922-1946. Conway Maritime Press, 1980. ISBN 0-85177-146-7
- Jellicoe, John Rushworth (author), Chesnau, Roger (ed.)
The Grand Fleet 1914-1916, Ad Hoc Publications (Stowmarket, UK) 2006; ISBN 0-946958-50-5.
- Friedman, Norman Battleship Design and Development 1905-1945, Conway Maritime Press 1978; ISBN 0-85177-135-1.
- Preston, Anthony The World’s Worst Warships, Conway Maritime Press 2002; ISBN 0-85177-754-6.
- Roberts, John Battlecruisers. Caxton Editions 2003. ISBN 1-84067-530-6
Notes
- Admiralty Weekly Order no. 351, 24 November 1911; quoted in Roberts, p.24
- Roberts, p.11
- Ibid, p.16
- Ibid, p.17
- Brown, “Warrior to Dreadnought”, p.188
- Roberts, p.26
- Roberts, p.32
- Three Invincible class, three Indefatigable, two Lion class, HMS Queen Mary and Tiger
- Churchill, “The World Crisis”, Part 1, Chapter 5.
- The Invincibles were the oldest of the British battlecruisers
- Campbell, p.132
- Jellicoe, The Grand Fleet, ch 13. The relevant passage is available on-line at the War Times Journal website
- Roberts, p.56
- Roberts, p.58
- Preston, p.96
- Friedman, p.92
- Conway's, 1906-21 volume, p.231
- Conway's, 1906-21 volume, p.41
- ADM1/9387: Capital Ships: Protection (1935), available on-line via the HMS Hood Association Website
- Conway's, 1922-46 volume, p.225
- Friedman, p.67
- Friedman, pp.47-48
- Conway's, 1922-46 volume, pp.171, 284
- Conway's, 1922-46 volume, passim
- Friedman, p.307
- Conway’s, 1922-46 volume, p.99
- Conway's, 1922-46 volume, p.100
- Conway’s, 1922-1946 volume, p.89
- 1906-1921 volume for Queen Elizabeth and Nagato; 1922-1946 volume for other classes, including reconstructions
- These include Anthony Preston in The World’s Worst Warships, and DK Brown in The Grand Fleet
- Conway’s, 1922-1946 volume, p.173
- Conway’s, 1922-1946 volume, p.172
- Asmussen, John. "Bismarck: Gallery". www.bismarck-class.dk. http://www.bismarck-class.dk/bismarck/gallery/gallbismseatrials.html. Retrieved April 23, 2011.
- For examples of the characterisation of the Dunkerque class on French-language websites, see: Histoire de la seconde guerre mondial (for croiseur de bataille); le.fantasque.free.fr (for bâtiment de ligne); and merselkebir.org (French) (for cuirassé). |This page uses Creative Commons Licensed content from Wikipedia (view authors).|
Cyanobacteria, also known as Cyanophyta or blue-green algae, is a phylum of bacteria that obtain their energy through photosynthesis. The name "cyanobacteria" comes from the color of the bacteria (Greek: κυανός (kyanós) = blue). They are a significant component of the marine nitrogen cycle and an important primary producer in many areas of the ocean, but are also found on land. Stromatolites, putative fossilized cyanobacteria, have been found from 3.8 billion years ago. The ability of cyanobacteria to perform oxygenic photosynthesis is thought to have converted the early reducing atmosphere into an oxidizing one, which dramatically changed the life forms on Earth and provoked an explosion of biodiversity. Chloroplasts in plants and algae have evolved from cyanobacteria. Cyanobacteria are found in almost every conceivable habitat, from oceans to fresh water to bare rock to soil. Most are found in fresh water, while others are marine, occur in damp soil, or even on temporarily moistened rocks in deserts. A few are endosymbionts in lichens, plants, various protists, or sponges and provide energy for the host. Some live in the fur of sloths, providing a form of camouflage. Cyanobacteria include unicellular and colonial species. Colonies may form filaments, sheets or even hollow balls.
Some filamentous colonies show the ability to differentiate into several different cell types: vegetative cells, the normal, photosynthetic cells that are formed under favorable growing conditions; akinetes, the climate-resistant spores that may form when environmental conditions become harsh; and thick-walled heterocysts, which contain the enzyme nitrogenase, vital for nitrogen fixation. Heterocysts may also form under the appropriate environmental conditions (anoxic) wherever nitrogen is necessary. Heterocyst-forming species are specialized for nitrogen fixation and are able to fix nitrogen gas, which cannot be used by plants, into ammonia (NH3), nitrites (NO2−) or nitrates (NO3−), which can be absorbed by plants and converted to protein and nucleic acids. The rice paddies of Asia, which produce about 75% of the world's rice, could not do so were it not for healthy populations of nitrogen-fixing cyanobacteria in the rice paddies. Many cyanobacteria also form motile filaments, called hormogonia, that travel away from the main biomass to bud and form new colonies elsewhere. The cells in a hormogonium are often thinner than in the vegetative state, and the cells on either end of the motile chain may be tapered. In order to break away from the parent colony, a hormogonium often must tear apart a weaker cell in a filament, called a necridium. Each individual cell of a cyanobacterium typically has a thick, gelatinous cell wall. They differ from other gram-negative bacteria in that the quorum sensing molecules autoinducer-2 and acyl-homoserine lactones are absent. They lack flagella, but hormogonia and some unicellular species may move about by gliding along surfaces. In water columns, some cyanobacteria float by forming gas vesicles, as in archaea. Cyanobacteria have an elaborate and highly organized system of internal membranes which function in photosynthesis.
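The nitrogen fixation carried out in heterocysts can be summarised by the conventional overall reaction catalysed by nitrogenase. The stoichiometry below is the standard textbook idealisation (the actual ATP cost varies in vivo):

```latex
\mathrm{N_2 + 8\,H^+ + 8\,e^- + 16\,ATP \;\longrightarrow\; 2\,NH_3 + H_2 + 16\,ADP + 16\,P_i}
```

The ammonia produced can then be oxidised by other soil and water microbes to the nitrites and nitrates mentioned above.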
Photosynthesis in cyanobacteria generally uses water as an electron donor and produces oxygen as a by-product, though some may also use hydrogen sulfide, as occurs among other photosynthetic bacteria. Carbon dioxide is reduced to form carbohydrates via the Calvin cycle. In most forms the photosynthetic machinery is embedded into folds of the cell membrane, called thylakoids. The large amounts of oxygen in the atmosphere are considered to have been first created by the activities of ancient cyanobacteria. Due to their ability to fix nitrogen in aerobic conditions they are often found as symbionts with a number of other groups of organisms, such as fungi (lichens), corals, pteridophytes (Azolla) and angiosperms (Gunnera). Cyanobacteria are the only group of organisms that are able to reduce nitrogen and carbon in aerobic conditions, a fact that may be responsible for their evolutionary and ecological success. The water-oxidizing photosynthesis is accomplished by coupling the activity of photosystem (PS) II and I (Z-scheme). In anaerobic conditions, they are also able to use only PS I — cyclic photophosphorylation — with electron donors other than water (hydrogen sulfide, thiosulphate, or even molecular hydrogen), just like purple photosynthetic bacteria. Furthermore, they share an archaeal property, the ability to reduce elemental sulfur by anaerobic respiration in the dark. Their photosynthetic electron transport shares the same compartment as the components of respiratory electron transport. In fact, their plasma membrane contains only components of the respiratory chain, while the thylakoid membrane hosts both respiratory and photosynthetic electron transport. Attached to the thylakoid membrane, phycobilisomes act as light-harvesting antennae for the photosystems. The phycobilisome components (phycobiliproteins) are responsible for the blue-green pigmentation of most cyanobacteria.
The variations on this theme are mainly due to carotenoids and phycoerythrins, which give the cells a red-brownish coloration. In some cyanobacteria, the color of light influences the composition of phycobilisomes. In green light, the cells accumulate more phycoerythrin, whereas in red light they produce more phycocyanin. Thus the bacteria appear green in red light and red in green light. This process is known as complementary chromatic adaptation and is a way for the cells to maximize the use of available light for photosynthesis. A few genera, however, lack phycobilisomes and have chlorophyll b instead (Prochloron, Prochlorococcus, Prochlorothrix). These were originally grouped together as the prochlorophytes or chloroxybacteria, but appear to have developed in several different lines of cyanobacteria. For this reason they are now considered part of the cyanobacterial group. Relationship to chloroplasts Chloroplasts found in eukaryotes (algae and plants) likely evolved from an endosymbiotic relation with cyanobacteria. This endosymbiotic theory is supported by various structural and genetic similarities. Primary chloroplasts are found among the green plants, where they contain chlorophyll b, and among the red algae and glaucophytes, where they contain phycobilins. It now appears that these chloroplasts probably had a single origin, in an ancestor of the clade called Primoplantae. Other algae likely took their chloroplasts from these forms by secondary endosymbiosis or ingestion. It was once thought that the mitochondria in eukaryotes also developed from an endosymbiotic relationship with cyanobacteria; however, it is now suspected that this evolutionary event occurred when aerobic bacteria were engulfed by anaerobic host cells. Mitochondria are believed to have originated not from cyanobacteria but from an ancestor of Rickettsia.
Cyanobacteria and Earth History The biochemical capacity to use water as the source for electrons in photosynthesis evolved once, in a common ancestor of extant cyanobacteria. The geological record indicates that this transforming event took place early in our planet's history, at least 2450-2320 million years ago (Ma), and possibly much earlier. Geobiological interpretation of Archean (>2500 Ma) sedimentary rocks remains a challenge; available evidence indicates that life existed 3500 Ma, but the question of when oxygenic photosynthesis evolved continues to engender debate and research. A clear paleontological window on cyanobacterial evolution opened about 2000 Ma, revealing an already diverse biota of blue-greens. Cyanobacteria remained principal primary producers throughout the Proterozoic Eon (2500-543 Ma), in part because the redox structure of the oceans favored photoautotrophs capable of nitrogen fixation. Green algae joined blue-greens as major primary producers on continental shelves near the end of the Proterozoic, but only with the Mesozoic (251-65 Ma) radiations of dinoflagellates, coccolithophorids, and diatoms did primary production in marine shelf waters take modern form. Cyanobacteria remain critical to marine ecosystems as primary producers in oceanic gyres, as agents of biological nitrogen fixation, and, in modified form, as the plastids of marine algae. Cyanobacterial Evolution from Comparative Genomics Recent high-throughput sequencing has provided DNA sequences at an unprecedented rate, posing considerable analytical challenges, but also offering insight into the genetic mechanisms of adaptation. Here we present a comparative genomics-based approach towards understanding the evolution of these mechanisms in cyanobacteria. Historically, systematic methods of defining morphological traits in cyanobacteria have posed a major barrier in reconstructing their true evolutionary history.
The advent of protein, then DNA, sequencing - most notably the use of 16S rRNA as a molecular marker - helped circumvent this barrier and now forms the basis of our understanding of the history of life on Earth. However, these tools have proved insufficient for resolving relationships between closely related cyanobacterial species. The 24 cyanobacteria whose genomes have been compared occupy a wide variety of environmental niches and play major roles in global carbon and nitrogen cycles. By integrating phylogenetic data inferred for hundreds to nearly 1000 protein coding genes common to all or most cyanobacteria, we are able to reconstruct an evolutionary history of the entire phylum, establishing a framework for resolving how their metabolic and phenotypic diversity came about. The cyanobacteria were traditionally classified by morphology into five sections, referred to by the numerals I-V. The first three - Chroococcales, Pleurocapsales, and Oscillatoriales - are not supported by phylogenetic studies. However, the latter two - Nostocales and Stigonematales - are monophyletic, and make up the heterocystous cyanobacteria. The members of Chroococcales are unicellular and usually aggregated in colonies. The classic taxonomic criterion has been the cell morphology and the plane of cell division. In Pleurocapsales, the cells have the ability to form internal spores (baeocytes). The rest of the sections include filamentous species. In Oscillatoriales, the cells are uniseriately arranged and do not form specialized cells (akinetes and heterocysts). In Nostocales and Stigonematales, the cells have the ability to develop heterocysts in certain conditions. Stigonematales, unlike Nostocales, include species with truly branched trichomes. Most taxa included in the phylum or division Cyanobacteria have not yet been validly published under the Bacteriological Code.
Biotechnology and applications Certain cyanobacteria produce cyanotoxins like anatoxin-a, anatoxin-a(s), aplysiatoxin, cylindrospermopsin, domoic acid, microcystin LR, nodularin R (from Nodularia), or saxitoxin. Sometimes a mass reproduction of cyanobacteria results in algal blooms. The unicellular cyanobacterium Synechocystis sp. PCC6803 was the third prokaryote and first photosynthetic organism whose genome was completely sequenced. It continues to be an important model organism. The smallest genomes have been found in Prochlorococcus spp. (1.7 Mb) and the largest in Nostoc punctiforme (9 Mb). Those of Calothrix spp. are estimated at 12-15 Mb, as large as yeast. At least one secondary metabolite, cyanovirin, has been shown to possess anti-HIV activity. See hypolith for an example of cyanobacteria living in extreme conditions. Some cyanobacteria are sold as food, notably Aphanizomenon flos-aquae (E3live) and Arthrospira platensis (Spirulina). It has been suggested that they could be a much more substantial part of human food supplies, as a kind of superfood. Along with algae, some hydrogen-producing cyanobacteria are being considered as an alternative energy source, notably at Oregon State University (in research supported by the U.S. Department of Energy), Princeton University, the Colorado School of Mines, and Uppsala University, Sweden. Some species of cyanobacteria produce neurotoxins, hepatotoxins, cytotoxins, and endotoxins, making them dangerous to animals and humans. Several cases of human poisoning have been documented, but a lack of knowledge prevents an accurate assessment of the risks. |This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Cyanobacteria". A list of authors is available in Wikipedia.|
Climate change, driven by greenhouse gas emissions, poses an imminent threat, and no nation can sit back without taking action. At the UNCED, the UNFCCC treaty was signed, and India was a party to it. India's plan to deal with climate change is titled the “National Action Plan on Climate Change”. National Action Plan on Climate Change The National Action Plan on Climate Change was prepared by the Prime Minister's Council on Climate Change in 2008. This plan balances India's responsibility for taking measures to control climate change without compromising on the development front. The National Action Plan focuses attention on eight National Missions. These are:
- Solar Energy – Ministry of New and Renewable Energy
- Enhanced Energy Efficiency – Ministry of Power
- Sustainable Habitat – Ministry of Urban Development
- Conserving Water – Ministry of Water Resources
- Sustaining the Himalayan Ecosystem – Ministry of Science and Technology
- A “Green India” – Ministry of Environment and Forests
- Sustainable Agriculture – Ministry of Agriculture
- Strategic Knowledge Platform for Climate Change – Ministry of Science and Technology
Proposed: A Ninth Mission – National Bio-energy Mission – Ministry of New and Renewable Energy
A knight was a warrior of the Middle Ages who fought to defend others. Only men could be knights, and all knights were born nobles. These nobles were third on the social pyramid and received land from the lord or king in return for protection. All knights owed loyalty to the people above them on the social pyramid. Just being a warrior and defending people did not make one a knight. Knights had to act and think in certain ways and had to have certain attitudes. What made a knight a knight was that he always had to seek justice and what was right, gaining honor through his actions. To defend and do good deeds as a respected noble was what knights needed to desire most. To develop these characteristics and fighting skills, knights had to go through the steps of knighthood. First, a 6-7 year old boy from a noble family was taken to a castle where he started out as a page. A page rarely learned to read and write. His main job was to practice fetching, carrying, running errands, and helping the lady whenever he was demanded to. It was his duty to be with the lady and the lord at all times. As a page grew older, he received more work and responsibilities. He had to start taking care of horses, learn instruments, hunt, learn how to use arm weaponry (sword, ax, lance, etc.), practice running, leaping, wrestling, cleaning armor, and vaulting onto a horse saddle in full armor. Squires would sometimes train pages, and chaplains gave them religious lessons. With all this daily training, a page was expected to be quick, graceful, and flexible. Around the age of 14, the page would become a squire. The squire became the knight's personal assistant. He had to take care of the knight by dressing and undressing him, combing his hair, preparing his bed, and keeping his armor and weapons clean. During battles, the squire would supply the knight with all the weapons and tools he needed.
He was taught how to use real weapons and ride a war horse with his weapon hand free. At meal times, he had to know how to cut bread, pour wine, and serve food properly. The lady taught him to sing and how to act properly in the King's court. All squires needed to respect and help ladies above all. They were to defend ladies at all times and swore to protect women. The ladies would then teach the squires to be gentle, kind, and affectionate. All this practice mattered because, if the squire was to become a knight, he had to develop the correct attitude, manners, and personality towards others. When a squire proved himself worthy and honest, he was knighted, or “dubbed”. In preparation, the soon-to-be knight prayed all night without eating or sleeping and took a warm bath in the morning. Then he would put on his armor and a white tunic over it. The knight would proudly kneel before the lord, and the lord would tap his shoulders with the blade of the sword and say, “I dub thee, Sir Knight”. The knight then received his weapons as gifts: a sword, a lance, and golden spurs. After the congratulatory ceremony, the knight was free to roam on quests of adventure. A knight usually made his living by entering tournament after tournament and battle after battle, since tournaments were such a popular sport and career for a young man. Chivalry in general was a type of philosophy. Because following Chivalry was almost like a religious order to all knights, almost every knight swore to live by his chivalric virtues. These rules of Chivalry formed the basic characteristics and goals that all knights were required to have. A knight was to practice and develop most of the attitudes and habits that Chivalry taught during the steps of knighthood. Chivalry simply meant the rules and ways of becoming a good, open-minded role model.
Knights would have to defend the weak, respect all others, live to serve everyone above them, obey laws, be faithful, loyal, and fair, seek justice and what was right, and have good manners. These were the actual codes and rules of Chivalry that knights had to follow:
The Ten Commandments of the Code of Chivalry
From Chivalry by Leon Gautier
1. Thou shall believe all that the Church teaches, and shall observe all its directions.
2. Thou shall defend the Church.
3. Thou shall respect all weaknesses, and shall constitute thyself the defender of them.
4. Thou shall love the country in the which thou was born.
5. Thou shall not recoil before thy enemy.
6. Thou shall make war against the Infidel without cessation, and without mercy.
7. Thou shall perform scrupulously thy feudal duties, if they be not contrary to the laws of God.
8. Thou shall never lie, and shall remain faithful to thy pledged word.
9. Thou shall be generous, and give largess to everyone.
10. Thou shall be everywhere and always the champion of the Right and the Good against Injustice and Evil.
The Code of Chivalry
From the Rifts: England Supplement
1. Live to serve King and Country.
2. Live to defend Crown and Country and all it holds dear.
3. Live one's life so that it is worthy of respect and honor.
4. Live for freedom, justice and all that is good.
5. Never attack an unarmed foe.
6. Never use a weapon on an opponent not equal to the attack.
7. Never attack from behind.
8. Avoid lying to your fellow man.
9. Avoid cheating.
10. Avoid torture.
11. Obey the law of king, country, and chivalry.
12. Administer justice.
13. Protect the innocent.
14. Exhibit self control.
15. Show respect to authority.
16. Respect women.
17. Exhibit Courage in word and deed.
18. Defend the weak and innocent.
19. Destroy evil in all of its monstrous forms.
20. Crush the monsters that steal our land and rob our people.
21. Fight with honor.
22. Avenge the wronged.
23. Never abandon a friend, ally, or noble cause.
24. Fight for the ideals of king, country, and chivalry.
25. Die with valor.
26. Always keep one's word of honor.
27. Always maintain one's principles.
28. Never betray a confidence or comrade.
29. Avoid deception.
30. Respect life and freedom.
31. Die with honor.
32. Exhibit manners.
33. Be polite and attentive.
34. Be respectful of host, women, and honor.
35. Loyalty to country, King, honor, freedom, and the code of chivalry.
36. Loyalty to one's friends and those who lay their trust in thee.
In the 14th Century, a knight's protective armor was chain mail. It was made of about 250,000 ring-shaped steel links sewn onto a leather jacket called a jerkin. After the invention of the longbow, full plate armor was introduced. It was so complicated that it took two men to dress a knight in this armor, and only rich knights could afford it. Some knights died wearing these armors because they were so heavy and difficult to move in. The pieces of the armor were these: the helmet was the head covering. The gorget was a collar of metal to protect the throat of the knight. The metal covering the shoulders was called the shoulder piece. A cuirass was the breast plate, and the arm protector was called the brassard. The small knee and elbow pieces acted like joints of the armor so that the knight could bend his legs and arms. The short, skirt-like, overlapping plates around the hips were called the tasset. Gauntlets were the gloves and sabatons were the foot coverings. The cuisse was the metal thigh covering and the greave was the metal from the ankle to the knee. A knight's favorite weapon was his sword. Large, two-handed swords swung with both hands were called Great Swords. Another sword, made in the 1460's and called the shining sword, was made for richer knights. Many kinds of axes were used as well. Battle Axes, made in northern Europe, were popular with Vikings and were swung with both hands.
Pole Axes, made to strike the opponent's head, were popular in battle and foot combat. Since short axes were easier to use on horseback, knights mostly used small, single-handed axes and only sometimes used two-handed axes. One of the smallest weapons a knight would carry was a dagger. Daggers served as the backup weapon when a knight's sword was knocked out of his hands. Knights also used long, wooden, painted spears called lances, and maces, which were wooden clubs with spikes. Archers used bows and arrows as their main weapons. There were various kinds of bows and arrows, such as crossbows and longbows. In the 12th century, it was very difficult to distinguish knights from one another because of their heavily worn armor that covered almost every inch of their bodies. So, in order to recognize a knight, different symbols of animals representing the knight were painted on the knight's shield and on his surcoat, a sleeveless garment worn over the armor. This was called the Coat of Arms, a system of personal symbols that represented a knight. The Coat of Arms became a popular military status symbol, or a mark of noble status, by 1400 A.D. It was almost like a personal name that knights were known by. The earliest coats of arms were fairly simple: usually bars, wavy lines, or an animal. When the symbols grew more detailed and complicated by the 15th century, heraldry was formed. Heraldry is the study and knowledge of symbols that represented a family, country, or person. Heralds went to school at a young age and attended Heraldry College, where they learned and memorized all the symbols and meanings of the Coat of Arms. It was their job to help identify knights in wars. When new knights entered tournaments, it was the herald's job to explain what the symbols on the knight's Coat of Arms meant. The Coat of Arms had different parts to it. There were different types of main shield shapes.
Colors were chosen to go on the shield, and each color represented different traits of the knight. For example: gold (generous), red (brave and strong), green (joy), blue (loyalty), purple (royalty), and black (steadfastness). There were also symbols drawn onto the shield that told others about the knight or what he liked to do. Some medieval symbols are: a sword (honor and justice), an arrowhead (speed), a hammer (hard work), a deer (peace and harmony), an eagle (bravery), a flower or tree (life), lightning bolts (power), and dragons (protection). A knight also had to choose a crest, a picture that sat on top of the shield and related to his name. It usually was an animal that said something about his character or represented his name in some way. At the bottom of the coat of arms was a motto, three basic words describing the knight's most significant characteristics. It might also be a philosophy the knight lived by.
1914 – Manchester, England ‘Moseley’s law – the principle outlining the link between the X-ray frequency of an element and its atomic number’ Working with ERNEST RUTHERFORD’s team in Manchester trying to better understand radiation, particularly of radium, Moseley became interested in X-rays and learning new techniques to measure their frequencies. A technique had been devised using crystals to diffract the emitted radiation, which had a wavelength specific to the element being experimented upon. In 1913, Moseley recorded the frequencies of the X-ray spectra of over thirty metallic elements and deduced that the frequencies of the radiation emitted were related to the squares of certain incremental whole numbers. These integers were indicative of the atomic number of the element, and its position in the periodic table. This number was the same as the positive charge of the nucleus of the atom (and by implication also the number of electrons with corresponding negative charge). By uniting the charge in the nucleus with an atomic number, a vital link had been found between the physical atomic make-up of an element and its chemical properties, as indicated by where it sits in the periodic table. This meant that the properties of an element could now be considered in terms of atomic number rather than atomic weight, as had previously been the case – certain inconsistencies in the MENDELEEV version of the periodic table could be ironed out. In addition, the atomic numbers and weights of several missing elements could be predicted and other properties deduced from their expected position in the table.
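Moseley’s relation can be illustrated numerically. The sketch below assumes the familiar K-alpha form of the law, in which the square root of the frequency is proportional to (Z − 1), with the constant derived from the Rydberg constant; the function names are ours, chosen for illustration:

```python
# Moseley's law for K-alpha lines: f = (3/4) * c * R * (Z - 1)^2,
# where R is the Rydberg constant and the screening constant is ~1.
C = 2.998e8          # speed of light, m/s
RYDBERG = 1.0974e7   # Rydberg constant, 1/m

def k_alpha_frequency(z):
    """Estimated K-alpha X-ray frequency (Hz) for atomic number z."""
    return 0.75 * C * RYDBERG * (z - 1) ** 2

def k_alpha_wavelength(z):
    """Estimated K-alpha X-ray wavelength (m) for atomic number z."""
    return C / k_alpha_frequency(z)

# Copper (Z = 29): the measured K-alpha line is about 0.154 nm.
f_cu = k_alpha_frequency(29)
lam_cu = k_alpha_wavelength(29)
print(f"Cu K-alpha: f = {f_cu:.3e} Hz, wavelength = {lam_cu * 1e9:.3f} nm")
```

The estimate for copper lands within about one percent of the measured Kα wavelength, which is exactly the kind of agreement that let Moseley read atomic numbers off his spectra.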
In our collection of science experiments about the sense of touch, we had one hot water and cold water experiment. Today we add more hot and cold experiments to help kids better understand temperature. With the weather getting colder, it is a perfect time to do some cold science experiments with kids. Since everything is relative, talking about cold means you are also talking about hot. 11 Cold Science Experiments that Are Magical for Kids Cold Science Experiments for Kids to Do in the Kitchen What is temperature’s impact on the movement of molecules? Try this simple science activity with kids. The visual effect is so cool. Another way to observe cold temperature’s impact is to watch a balloon deflate. Blow up a balloon indoors at room temperature, then leave it outside in the cold and watch it deflate. Bring it back inside to warm up and watch it re-inflate. The cold temperature makes the air volume shrink and the density increase. How can you change ice to water without raising the temperature? This magic trick will show you. Why do you think polar bears can swim in freezing cold water? Don’t they feel cold? This fun science activity will show kids why. How do gloves keep our hands warm? Do they generate heat? This simple science experiment helps kids understand that our hands stay warm because of the gloves, but the gloves don’t generate heat. I love this homemade thermometer. You can use it to show kids how cold it is outside, then bring it inside and watch what happens when it is warmer. Cold Science Experiments that Can Only Be Done Outdoors in Winter When it is really cold outside, you can create some snow with boiling water. Did you notice the temperature in the video? It was -21°F with a wind chill of -51°F. When you do this, make sure the wind is blowing away from you. Although it is cold, if the wind is blowing towards you, the hot water may hurt you before turning to snow. This article explains why this happens.
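The balloon experiment above can even be checked with a little arithmetic. Assuming the pressure stays roughly constant, Charles’s law says the air’s volume scales with its absolute temperature; the sketch below (the function name and the temperatures are our illustrative choices) estimates how much a balloon shrinks going from a warm room into cold air:

```python
def volume_after_cooling(v_initial, t_initial_c, t_final_c):
    """Charles's law: V2 = V1 * T2 / T1, with temperatures in kelvin."""
    t1 = t_initial_c + 273.15
    t2 = t_final_c + 273.15
    return v_initial * t2 / t1

# A 10-litre balloon taken from a 20 C room into -10 C outdoor air:
v2 = volume_after_cooling(10.0, 20.0, -10.0)
print(f"Volume outside: {v2:.2f} L")  # shrinks by roughly 10%
```

A shrink of about a litre is easily visible on a party balloon, which is why this demonstration works so well with kids.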
If it is snowing and cold, you may enjoy some homemade snow candy. Do make sure the snow is clean. Another option is to make instant slurpee. I am sure kids will love to have some no matter how cold it is. Looking for more science experiments for kids to explore the sense of touch? Check out 8 Science Activities for Kids to Learn the Sense of Touch. For more science experiments for cold winter, check out 10 Amazing Science Experiments for Kids with Ice and 6 Snow Science Activities for Kids
To measure the distance from a point to a plane, p, you measure along the line through the point perpendicular to the plane, p. That's true because a leg of a right triangle is always shorter than the hypotenuse. Further, if the plane is written as Ax + By + Cz = D, as yours is, then the vector <A, B, C> is perpendicular to the plane. So to measure the distance from the point to p, you want a line that passes through the point and is in the direction of the vector <A, B, C>. Now, a line through the point (x0, y0, z0) in the direction of the vector <A, B, C> is given by the parametric equations x = x0 + At, y = y0 + Bt, z = z0 + Ct. Here, you are given the equation of the plane, so you know A, B, and C. You are given the point, so you know x0, y0, and z0. Use that information to write the equation of the line through (3, 0, -1) perpendicular to 4x + 2y - z = 6. The point at which that line intersects the plane (replace x, y, and z in the equation of the plane by their expressions in terms of t for the line and solve for t) is the point on the plane closest to (3, 0, -1).
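Carried out numerically for the example above (the function and variable names are ours), the same substitution gives both the closest point and the distance:

```python
import math

def closest_point_and_distance(point, plane):
    """Foot of the perpendicular from `point` to the plane Ax + By + Cz = D,
    and the distance to it, via the parametric line through the point
    in the direction of the normal <A, B, C>."""
    x0, y0, z0 = point
    a, b, c, d = plane
    # Substitute x = x0 + a*t, y = y0 + b*t, z = z0 + c*t into the
    # plane equation and solve for t.
    t = (d - (a * x0 + b * y0 + c * z0)) / (a * a + b * b + c * c)
    foot = (x0 + a * t, y0 + b * t, z0 + c * t)
    dist = abs(t) * math.sqrt(a * a + b * b + c * c)
    return foot, dist

# Point (3, 0, -1) and plane 4x + 2y - z = 6:
foot, dist = closest_point_and_distance((3, 0, -1), (4, 2, -1, 6))
print(foot, dist)  # foot = (5/3, -2/3, -2/3), distance = 7 / sqrt(21)
```

Here t = (6 − 13)/21 = −1/3, so the closest point is (5/3, −2/3, −2/3) and the distance is 7/√21 ≈ 1.53.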
Then we can solve those equations simultaneously to find the end rotations. These end rotations may then be substituted back into the slope-deflection equations to find the real moments at the ends of all of the members. From these moments, we can find the shears and reactions and the moment diagrams for the entire structure. The entire process for an indeterminate beam is summarized as follows: Find all of the unrestrained DOFs in the beam structure. Define an equilibrium condition for each DOF (for rotations, the sum of all moments at each rotating node must equal zero). Construct each slope-deflection equation. Use the resulting equilibrium equations to solve for the values of the unknown DOF rotations by solving a system of equations. Use the now-known DOF rotations to find the real end moments for each element of the beam by substituting the rotations back into the slope-deflection equations. Use the end moments and external loadings to find the shears and reactions. Draw the resultant shear and bending moment diagrams. Node B cannot move horizontally since it is restrained by members AB and BC, which are both fixed horizontally. Node B is also restrained from moving vertically due to the roller support at that location; however, node B can rotate. Overall, this structure has only one DOF, which means it is a good structure to analyse using the slope-deflection method. For the sole rotational DOF at node B, the equilibrium condition will be M_BA + M_BC = 0. These two member end moments are the only moments that act at point B; for equilibrium to be maintained, the sum of the moments around node B must be zero. The next step is to construct the slope-deflection equations for each member to find an expression for the moment at each end in terms of the end rotations, chord rotation, and fixed end moments. There are two slope-deflection equations for each member, one for the moment at each end.
We first need to find the chord rotations and fixed end moments for both members, since they are a required input for the slope-deflection equations. A relative displacement of the member ends results in a rotation of the chord of the member. Recall that the element chord is just a straight line that joins the two ends of the member without regard to the actual deflected shape of the beam, as shown in the figure.
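As a concrete sketch of the whole procedure for the one-DOF beam above: with both far ends fixed (θ_A = θ_C = 0) and no chord rotation, each slope-deflection equation reduces to M = (2EI/L)(2θ_near + θ_far) + FEM, and the single equilibrium equation M_BA + M_BC = 0 can be solved directly for θ_B. The stiffness, lengths, and loading below are assumed purely for illustration, not taken from the figure:

```python
# Illustrative data: EI in kN*m^2, lengths in m, UDL w on span BC only.
EI = 50_000.0
L_AB, L_BC = 4.0, 6.0
w = 12.0  # kN/m

# Fixed-end moments (counterclockwise positive).
FEM_AB = FEM_BA = 0.0               # no load on span AB
FEM_BC = -w * L_BC ** 2 / 12.0      # -wL^2/12 at the near end of BC
FEM_CB = +w * L_BC ** 2 / 12.0

# With theta_A = theta_C = 0 and no chord rotation:
#   M_BA = (4EI/L_AB) * theta_B + FEM_BA
#   M_BC = (4EI/L_BC) * theta_B + FEM_BC
# Equilibrium at B: M_BA + M_BC = 0  ->  solve for theta_B.
theta_B = -(FEM_BA + FEM_BC) / (4 * EI / L_AB + 4 * EI / L_BC)

# Substitute back to recover all four end moments.
M_AB = (2 * EI / L_AB) * theta_B + FEM_AB
M_BA = (4 * EI / L_AB) * theta_B + FEM_BA
M_BC = (4 * EI / L_BC) * theta_B + FEM_BC
M_CB = (2 * EI / L_BC) * theta_B + FEM_CB
print(f"theta_B = {theta_B:.6f} rad, M_BA = {M_BA:.2f}, M_BC = {M_BC:.2f}")
```

The end moments at B come out equal and opposite, confirming the equilibrium condition, and the four moments can then be used with the external loads to draw the shear and bending moment diagrams.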
A circuit breaker is an automatically operated electrical switch designed to protect an electric circuit from the damage caused by excess current from an overload or short circuit. Its basic function is to interrupt current flow after a fault is detected: when an overload occurs, or when a short circuit arises because a hot wire touches a neutral wire, ground wire, or another hot wire, the breaker trips and breaks the current, preventing the wires from overheating and removing the potential for electrical fires. Unlike a fuse, which operates once and then must be replaced, a circuit breaker can be reset (either manually or automatically) to resume normal operation. Circuit breakers are made in varying sizes, from small devices that protect low-current circuits or individual household appliances, up to large switchgear designed to protect high-voltage circuits feeding an entire city. The generic function of a circuit breaker or fuse, as an automatic means of removing power from a faulty system, is often abbreviated as OCPD (Over Current Protection Device). All circuit breaker systems have common features in their operation, but details vary substantially depending on the voltage class, current rating, and type of the circuit breaker. The circuit breaker must first detect a fault condition. In small mains and low-voltage circuit breakers, this is usually done within the device itself; typically, the heating or magnetic effects of electric current are employed. Circuit breakers for large currents or high voltages are usually arranged with protective relay pilot devices to sense a fault condition and to operate the opening mechanism.
These typically require a separate power source, such as a battery, although some high-voltage circuit breakers are self-contained with current transformers, protective relays, and an internal control power source. Once a fault is detected, the circuit breaker contacts must open to interrupt the circuit; this is commonly done using mechanically stored energy contained within the breaker, such as a spring or compressed air, to separate the contacts. Circuit breakers may also use the higher current caused by the fault itself to separate the contacts, for example through thermal expansion or a magnetic field. Small circuit breakers typically have a manual control lever to switch off the load or reset a tripped breaker, while larger units use solenoids to trip the mechanism, and electric motors to restore energy to the springs. The circuit breaker contacts must carry the load current without excessive heating, and must also withstand the heat of the arc produced when interrupting (opening) the circuit. Contacts are made of copper or copper alloys, silver alloys, and other highly conductive materials. The service life of the contacts is limited by the erosion of contact material due to arcing while interrupting the current. Miniature and molded-case circuit breakers are usually discarded when the contacts have worn, but power circuit breakers and high-voltage circuit breakers have replaceable contacts. When a high current or voltage is interrupted, an arc is generated. The length of the arc is generally proportional to the voltage, while its intensity (or heat) is proportional to the current. This arc must be contained, cooled, and extinguished in a controlled way, so that the gap between the contacts can again withstand the voltage in the circuit. Different circuit breakers use vacuum, air, insulating gas, or oil as the medium in which the arc forms.
Different techniques are used to extinguish the arc, including: - Lengthening or deflecting the arc - Intensive cooling (in jet chambers) - Division into partial arcs - Zero-point quenching (contacts open at the zero-current crossing of the AC waveform, effectively breaking no load current at the time of opening; the zero-crossing occurs at twice the line frequency, i.e., 100 times per second for 50 Hz and 120 times per second for 60 Hz AC) - Connecting capacitors in parallel with contacts in DC circuits. Finally, once the fault condition has been cleared, the contacts must again be closed to restore power to the interrupted circuit; the breaker is designed to be reset manually or automatically to resume normal operation. Standard circuit breakers are either single- or double-pole. • Single-pole breakers are the most commonly used type. • They are designed to protect one energized wire. • They can supply up to 120V to a circuit. • They can handle 15 to 30 amps. • Single-pole breakers are available in three types: full size (1 inch wide), half size (1/2 inch wide), and twin/tandem (1 inch wide with two switches, controlling two circuits). • Double-pole breakers occupy two slots on a breaker panel and protect two energized wires. • Double-pole breakers consist of two single-pole breakers with one handle and a shared trip mechanism. • Double-pole breakers can supply 120V/240V or 240V to a circuit. • Double-pole breakers range in capacity from 15 to 200 amps. • They are used for large appliances such as dryers and water heaters. What Are The Main Parts Of A Circuit Breaker? The main parts of a circuit breaker are the frame, the operating mechanism, the interrupting structure, the trip unit, and the terminal connections.
1. Frame — The frame houses the components and also provides insulation to contain the arc. 2. Operating Mechanism — Opens and closes the contacts of the circuit breaker. 3. Interrupting Structure — Includes all the current-carrying parts except the trip unit and arcing probe. 4. Trip Unit — Senses abnormal or faulty current flow and causes the operating mechanism to open the contacts. Trip units are usually of the thermal-magnetic type. 5. Terminal Connections — Establish a suitable connection from the breaker to the conductor. How Does A Circuit Breaker Work? A circuit breaker has fixed and moving contacts. Under normal conditions, when the circuit is closed, these contacts touch each other and allow current to flow, engaging with each other under the pressure of a spring. During normal operation, the breaker's arms can be opened or closed for switching or maintenance of the system; a certain amount of pressure is required to trigger and open the circuit breaker. In the event of a fault current, the trip coil of the circuit breaker is activated and the operating mechanism pulls the moving contacts apart, opening the circuit. Advantages of a circuit breaker: • Circuit breakers are a great replacement for fuses, which must be replaced after operating. • Circuit breakers are sensitive and respond more quickly than fuses. • Circuit breakers are highly reliable and more functional. • A circuit breaker can be installed once, is easy to reset, and lasts for a long time.
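The thermal-magnetic trip behaviour described above can be sketched in a few lines. The thresholds and the inverse-time curve below are illustrative assumptions, not values from any real breaker or standard: the "thermal" element trips slowly on moderate overloads, while the "magnetic" element trips essentially instantly on short-circuit-level current:

```python
def trip_time(current, rated=15.0, magnetic_pickup=10.0, k=40.0):
    """Illustrative time-to-trip in seconds, or None if the breaker holds.
    `rated` is the handle rating in amps; `magnetic_pickup` is the
    multiple of rated current at which the magnetic element acts."""
    if current <= rated:
        return None                  # normal load: the breaker never trips
    if current >= magnetic_pickup * rated:
        return 0.01                  # magnetic element: near-instant trip
    # Thermal element: inverse-time curve, slower near the rating.
    overload = current / rated
    return k / (overload - 1.0) ** 2

print(trip_time(10.0))    # None: within rating
print(trip_time(30.0))    # thermal trip after about 40 s
print(trip_time(300.0))   # magnetic trip, essentially instant
```

The key qualitative point the sketch captures is the two-regime curve: small overloads are tolerated for seconds to minutes (so motor inrush does not nuisance-trip), while fault-level currents are cleared immediately.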
When a tumour grows, new blood vessels are formed that supply the tumour with nutrients and oxygen. However, these vessels are often malfunctioning and fluids and other molecules leak out of the vessels. This results in edema in the tissues, which in turn makes it more difficult for drugs to reach into the tumour during cancer therapy. The malfunctioning vessels can also contribute to the spread of metastases from the tumour. The leakage from the blood vessels is controlled by specific protein complexes that connect the cells in the blood vessel walls. By regulating these protein complexes, the cells are joined more or less tightly, which affects the leakage from the vessels. Recent findings from Uppsala University show how a specific alteration of the protein complex in the vessel walls can reduce leakage, without affecting any other vessel functions. The growth factor VEGFA functions as a signalling molecule, regulating the protein complexes in the blood vessel walls. One way of treating cancer is by inhibiting VEGFA, which decreases leakage and edema and improves the effects of chemo- and radiation therapy. However, VEGFA affects blood vessels in several ways and sustained anti-VEGFA therapy deteriorates vessel function and can cause increased metastasis. 'The specific mutation that we have studied allowed us to examine one of the signalling pathways in which VEGFA is involved. An important finding was that mice with the mutated protein complex also showed a reduced spread of metastases. We therefore believe that a targeted inhibition of this specific signalling pathway, which controls how the cells in the vessel walls are connected, might work better as a cancer therapy than the more general VEGFA inhibition that is used today,' says Lena Claesson-Welsh. Li et al. VEGFR2 pY949 signalling regulates adherens junction integrity and metastatic spread. Nature Communications. 2016;7:11017 doi:10.1038/ncomms11017 [Article]
Music Supports Children’s Development A music center is an important part of any child care program. Listening to music, singing, playing musical instruments, and moving to music are all activities that support children’s development across several different domains. - Language development: Children practice words and phrases in repeating patterns by listening to and singing songs. They also become aware of the rhythms of language and the patterns of poetry. These skills help them become better at understanding and producing language
We have already talked about the different types of electric motors. You can find articles on the Linquip website dealing with topics such as DC and AC motors, with their different types and functions. Today, in this article, we are going to explain one of the most important revolutionary components used in electric motors, one that brought new features: the commutator. In the following sections, we show you how commutator motors work and what features they bring to our lives. Read on. An Introduction to Electric Motors As you may know, electric motors are devices that convert electrical energy to mechanical energy, usually by employing electromagnetic phenomena. The interaction of conductors carrying current in a direction at right angles to a magnetic field is what produces mechanical torque in electric motors. The different kinds of electric motors are distinguished by the ways in which the conductors and the field are arranged, and also by the control that can be exercised over mechanical output torque, speed, and position. Most of the types were explained before on the website, and you can reach them with a simple search. As we mentioned, we intend to talk about commutator motors in this article, but first we need to see what a commutator is and how it works. What Is the Commutation Process? So far, we know how electric motors work in general and on what main principle. In this section, in order to better understand the performance and features of commutator motors, we will get acquainted with their main component, the commutator. As you may know, the operating principle of DC motors is based on the mutual interaction between the magnetic field of a rotating armature and the magnetic field of a fixed stator.
When the armature’s north pole is attracted to the stator’s south pole, or vice versa, a force is produced on the armature, causing it to rotate. Commutation is the process of switching the field in the armature windings to produce constant torque in one direction, ensuring that the torque acting on the armature always acts in the same direction; the device connected to the armature that enables this current switch is the commutator. How Does a Commutator Work? A commutator is a split rotary ring, typically made of copper, with each segment of the ring attached to one end of an armature coil. It is used in some types of electric motors and electrical generators, and its job is to periodically reverse the current direction between the rotor and the external circuit. An armature with multiple coils needs a commutator with a similar number of segments, one supporting each end of each coil. Spring-loaded brushes are placed on each side of the commutator and make contact with it as it turns, supplying the commutator segments and the corresponding armature coils with voltage. Commutators are mostly used in direct-current machines such as dynamos (DC generators) and many DC motors, as well as universal motors. By reversing the current direction in the rotating windings each half turn, a steady rotating force, called torque, is produced. In a generator, the commutator picks off the current generated in the windings, reversing the direction of the current with each half turn and serving as a mechanical rectifier that converts the alternating current from the windings to unidirectional direct current in the external load circuit. The first DC commutator machine, the dynamo, was built by Hippolyte Pixii in 1832. Note that what we discussed above relates to traditional brushed DC motors, whose commutation is carried out by mechanical means.
Brushless DC motors also need a commutation process, but for brushless designs the commutation is carried out electronically, via an encoder or Hall-effect sensors that monitor the position of the rotor to determine when and how to energize the coils in the armature. What Is a Commutator Motor? The basic form of almost all direct-current motors is the same. A stationary magnetic field is produced across the rotor by poles set on the stator. Coils carrying direct current encircle these poles, or the poles may contain permanent magnets. The rotor, also called the armature, consists of an iron core with a coil accommodated in slots. DC motors in which the ends of the rotor coil are connected to the bars of a commutator switch mounted on the rotor shaft are called commutator motors. Suppose the armature terminals are connected to a direct-current supply in such a way that a current enters at the positive terminal. The interaction between this current and the magnetic flux produces a counterclockwise torque, accelerating the rotor. When the rotor has turned about 120°, the commutator acts, reversing the connection from the supply to the armature. The new direction of the current in the armature coil is such as to continue to produce counterclockwise torque while the coil is under the pole. A voltage proportional to the speed is generated in the armature coil. Although this coil voltage is alternating, the commutation process produces a unidirectional voltage at the motor terminals. The electrical input power is the product of this terminal voltage and the input current, and the mechanical output power is the product of the rotor torque and speed. Features and Disadvantages of Commutator Motors In the previous sections, we got acquainted with the commutation process and how a commutator works. In this section, we list some of the features and disadvantages of commutator motors.
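The power relations in the last paragraph can be checked with a small steady-state model of a brushed DC motor. The motor constant, resistance, and load below are assumed purely for illustration: the back-EMF is k·ω (the speed-proportional voltage mentioned above), the torque is k·I, and the electrical input V·I splits into resistive loss I²R plus mechanical output T·ω:

```python
# Illustrative brushed DC motor at steady state (all values assumed).
V = 12.0       # supply voltage, volts
R = 1.0        # armature resistance, ohms
k = 0.05       # motor constant, V*s/rad (equivalently N*m/A)
T_load = 0.2   # constant load torque, N*m

# Steady state: motor torque balances the load -> k * I = T_load.
I = T_load / k                  # armature current, A
# Supply voltage balances resistive drop plus back-EMF: V = I*R + k*w.
omega = (V - I * R) / k         # steady-state speed, rad/s

P_in = V * I                    # electrical input power
P_loss = I ** 2 * R             # resistive (copper) loss
P_mech = T_load * omega         # mechanical output power
print(f"I = {I} A, speed = {omega} rad/s")
print(f"{P_in} W in = {P_loss} W loss + {P_mech} W out")
```

The model also reproduces the load behaviour listed below: increasing T_load raises the current (and torque) while lowering the steady-state speed.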
The following are some of the features that commutator motors offer: - They can be used with 100 VAC for home appliances - They can rotate faster than induction motors - Under increased load, the rotating speed goes down and torque increases - Startup torque is large - They provide high output - They are lightweight motors Because of these characteristics, commutator motors are used for home appliances such as electric vacuum cleaners and for tools such as electric drills that require high-output, lightweight motors. Some other devices, such as mixers and coffee mills, which require rotation faster than induction motors provide, use commutator motors as the driving force. On the other hand, commutator motors have some disadvantages, some of which are listed below: - They generate loud noises - Due to the limited service life of their brushes, these motors are not suited for constant or continuous operation. In this article, we tried to give you essential and comprehensive information about the commutation process, the commutator, and, most importantly, commutator motors. We talked about the design and construction of commutators and their vital role in most DC motors. For better understanding, we also covered some basics of the working principle of electric motors. Finally, we listed some of the features and disadvantages of this type of motor and mentioned where it is commonly used. If you have any experience using different types of commutator motors, we would be very glad to hear your opinions in the comments. And if you have any questions about this topic, or still have ambiguities about this device in mind, you can sign up on our website and our experts at Linquip will answer your questions. Hope you enjoyed reading this article.
Triangle Inequality Theorem The triangle inequality theorem states that any side of a triangle is always shorter than the sum of the other two sides. Imagine moving the point C of a triangle ABC down towards the line AB. As C gets closer, the side AB remains shorter than the sum of AC and BC. The sum gets close to AB, but never quite reaches it until C is actually on the line AB and the figure is no longer a triangle. The shortest distance between two points is a straight line; the distance from A to B will always be longer if you have to 'detour' via C. To illustrate this topic, we have picked one side, but this property of triangles is always true no matter which side you initially pick. A triangle cannot be constructed from three line segments if any of them is longer than the sum of the other two. For more on this, see the converse of the triangle inequality theorem.
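The theorem gives a direct test for whether three lengths can form a triangle (the function name is ours):

```python
def can_form_triangle(a, b, c):
    """True if sides a, b, c satisfy the strict triangle inequality:
    every side must be shorter than the sum of the other two."""
    return a < b + c and b < a + c and c < a + b

print(can_form_triangle(3, 4, 5))   # True: a valid triangle
print(can_form_triangle(1, 2, 3))   # False: degenerate, C lies on line AB
print(can_form_triangle(1, 2, 5))   # False: 5 is longer than 1 + 2
```

Note the strict inequality: when one side exactly equals the sum of the other two, the three points are collinear and the "triangle" collapses to a line segment.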
My students and I are already fairly familiar with Google Forms (to see how we use Google Forms for peer-assessments, click here). My students are able to create different question types, respond to surveys and analyse survey results. We knew the basic functions and it was time to ‘level up’. Being ‘techy’ isn’t about knowing how to do everything. It’s about learning what’s possible and taking time to figure it out. I knew that it was possible to direct people to different parts of a survey based on their answers, but I had never done this before. With a bit of clicking around, I realised how simple it was. My students used this method to create an electronic branching database for sorting 2-D and 3-D shapes. I first asked them to start sorting without the technology, and they began a traditional branching database in a small group using the board, sorting the shapes based on their properties. This was a great way to reinforce the vocabulary (faces, edges, parallel sides, etc.). It also sparked questioning and discussions within the group, such as ‘what’s the difference between a rhombus and a parallelogram?’. It also highlighted some misconceptions (that a sphere has no faces, for example), and these were addressed and explained by their peers within the group. They didn’t finish the branching database on the board; this was just to ensure that they understood the task. It also gave them a starting point for creating and editing different sections of their Google Forms. They realised that each question (from the board) would be a new section. They were able to transfer these questions across and then continue the questioning until all of their shapes were identified. At the end of each branch was a shape, added as an image to the final section. The ‘add image’ option displays several options for accessing images, including a Google search option. My students carried out searches for the shapes and added them to the end of each branch.
Click here to see a Google Forms branching database, created by a Year 4 student (it’s not quite perfect yet). What else could you sort and classify using a branching database? How else could the sections feature of Google Forms be used with students? Feel free to contribute your ideas in the comments below.
Meningitis is a serious infection which, if not diagnosed and treated promptly, can easily lead to serious injury, such as permanent brain damage, or even death. Doctors and other medical professionals have a duty to act professionally and competently in order to treat their patients properly and prevent them suffering unnecessary injury. This is especially important when dealing with serious illnesses such as meningitis. Unfortunately, it can be difficult to diagnose meningitis in babies and young children as symptoms vary wildly between cases and often resemble other illnesses (such as flu or a cold). This is compounded by the fact that young children find it harder to communicate that they feel unwell. Therefore, when medical professionals are not operating to the high standards required of them, cases of meningitis can slip through the cracks. What is meningitis? Meningitis is the inflammation of membranes which surround and protect the brain. It is most commonly caused by viruses or bacteria but in all cases is life-threatening, often resulting in serious injury or death if not treated as soon as possible. Meningitis can affect anyone, whatever their age; however, it is particularly common in infants, young children, and teenagers, with under 5’s being most at risk. There are vaccinations available for many types of meningitis nowadays, however, no vaccination can provide total protection. What are the symptoms of meningitis? Meningitis can be diagnosed by conducting a physical examination. 
The general symptoms of meningitis include: - High temperature over 38C (uncommon in babies under 3 months old) - Drowsiness or confusion - Cold hands and feet - Stiff neck - Muscle or joint pain - Light sensitivity - Becoming unresponsive or difficult to wake Signs of meningitis in babies and toddlers can also include: - Bulging soft spot on head - Irritability and a moaning or whining cry - Refusing to feed - Stiff body or unusual floppiness One of the most distinctive symptoms of meningitis, particularly in children, is a rash which doesn’t fade when pressure is applied. It will generally start as small red “pinpricks” before developing into blotchy red or purple patches. However, not all children will develop a rash, so parents should never wait for one to appear before seeking medical assistance. How else is meningitis diagnosed in children? Once the signs of meningitis have been spotted, emergency medical assistance should immediately be sought out for treatment (even if you’re not 100% sure whether it’s meningitis). Meningitis can be diagnosed in a number of ways, including: - Blood tests to check for viruses or bacteria - CT scan to check the brain for swelling - Lumbar puncture – to check a sample of spinal fluid for viruses or bacteria Can meningitis cause brain damage? Meningitis can leave children with many long-term and lasting effects, including headaches, deafness, vision problems, organ failure, skin damage, and loss of limb. Brain injuries are also a possibility and, whilst many children are able to make a full recovery, the impact of meningitis on developing brains can sometimes result in permanent brain damage. The effects of brain damage can include: - Psychological changes – the affected child might experience mood swings, temper tantrums, aggression, and difficulty concentrating. These factors can also be influenced by anxiety caused by being seriously ill.
- Cognitive changes – including problems with memory and difficulties with planning and organising
- Loss of skills or coordination – toddlers may regress to crawling despite having previously learned to walk, and affected children may also have difficulties with balance. These effects are often temporary.
- Cerebral palsy – babies who experience meningitis soon after birth (neonatal meningitis) may acquire cerebral palsy, which can result in developmental delays and permanent problems with movement and coordination.
- Epilepsy – a condition which causes unprovoked, recurrent seizures.
- Learning difficulties – research suggests that some children who experience meningitis are left with a low to borderline IQ.

Because the brain continues to develop until people are over 20 years old, brain injury in children often doesn’t become apparent until years later.

Do you need advice about a child brain injury caused by meningitis?

Meningitis can progress incredibly fast, so it’s essential to diagnose and treat the infection as quickly as possible. Unfortunately, in some situations doctors and other medical professionals don’t spot the signs quickly enough, or mistake the symptoms for another, milder illness, potentially leading to serious injuries including brain damage. If your child has sustained a brain injury as a result of a delayed, mistaken, or inappropriate diagnosis or treatment, you may be able to claim medical negligence compensation, even if the brain damage did not become apparent until years after the illness. Depending on the severity of the injuries, the amount of compensation you receive could be substantial, covering things like pain, suffering, and the costs of long-term care, home help, and educational support. Our specialist child brain injury solicitors understand how devastating meningitis can be for a family, with the after-effects lasting for months, years, or even for life.
We aim to demystify the medical negligence claim process by providing you with clear, insightful advice and using our negotiation expertise to give you the best possible chance of receiving all the compensation your family deserves.
Opera gained popularity during the Baroque period as composers began experimenting with complex musical ornamentation for the voice. Themes explored in operas written during this period include poetry, religious stories, and mythology. Many of the first operas composed early in the Baroque period survive only in fragments or have been lost altogether. However, Claudio Monteverdi’s L’incoronazione di Poppea, which received its premiere in 1643, continues to be performed. One composer active during the latter part of the era was George Frideric Handel, perhaps most widely recognized today for his oratorios, such as Messiah. Examples of his operatic works include Serse and Giulio Cesare.

The late Baroque opera emphasized virtuosity in vocal singing. The da capo aria soon superseded the strophic variation and was established as a vocal form. At least equally important was the bipartite aria, which consisted of only A and B sections or their variations. In contrast with the late Baroque opera and its rigid alternation of recitative and aria, the middle Baroque opera retained great formal flexibility. As opera progressed from its primitive forms, the words started to lose their importance and the music came to dominate over the words again.

Classical opera’s expansion and evolution owes a great deal to the Baroque era of the early eighteenth century, but where Baroque opera was mainly designed and created for aristocratic or royal audiences, Classical opera branched out as a form of musical entertainment for the general public, using the opera house as a center of experimentation. The middle class would eventually become the mainstream participant in opera entertainment, as a response to aristocratic forms of opera. With the Classical era came both the decay and subsequent reformation of the Italian opera seria, or serious opera. Its once dramatic and emotional presentation had evolved into a showy and artificial art form.
Although many musicians of the time recognized the decline of the opera seria, change took place slowly. To try to restore the opera seria to its former greatness, composers made certain changes in their writing styles. While not everyone agreed upon or employed these changes, many of them can be found in some of the operas of the late 18th century. During the same time, the comic opera began developing. This type of opera was in sharp contrast to the opera seria; it catered more to the people who wanted to “revolt” against the more serious and dramatic opera. The foundations of opera itself are often traced to the late Renaissance era, to a group of intellectuals gathered in Florence under the patronage of Count Giovanni Bardi.