This level consists of the following modules: stories, expressions of daily life, useful questions and answers, dialogues and poetry.
To develop the ability to express oneself in writing, each story is accompanied by:
• an adapted translation which respects the English style
• a word-by-word translation, which makes it easy to acquire the meaning of words, and intuitively grasp the construction of sentences
• very simple writing exercises, which will allow you to practice writing the letters correctly and remember the spelling of the words
• a vocabulary table with the essential words of the story
• translation and memorization exercises
• a comprehension question to introduce the different interrogative pronouns
Expressions of everyday life
In order to express ourselves on a daily basis, we have designed expressions that respond to everyday situations based on the principle of action – reaction and question – answer.
Written in modern standard Arabic, each expression is accompanied by two successive translations:
• an adapted translation, which respects the English style
• a word-by-word translation, which makes it easy to acquire the meaning of words
You will find dialogues related to scenes of everyday life, which aim to enrich vocabulary, develop Arabic language structures and oral expression.
We learn to read and write when we have a great interest in what we read and what we write. That is why we have chosen poems that speak of our environment: parents, children, feelings, school, games, nature and animals.
The recitation of poems promotes diction and memorization through rhymes.
Simulation Based Learning for Better Learner Engagement
How do we make learning more contextual and relevant? How do we increase the stickiness of the content?
Teachers have traditionally used alternative formats of teaching to engage students and help them learn the practical aspects of a subject. Playing games, taking field trips and working in labs are some of the ways in which students feel engaged, and the growth of online learning has expanded these options. However, with the surge of information online and offline, one of the main impediments to effective learning today is keeping learners engaged and ensuring that the learning is targeted and effective. This makes it important to make learning more contextual and relevant, and to improve the stickiness of the content.
In the digital world, technology allows us to create real-world “simulations” using a variety of digital media and elements like video, audio, and gesture-based digital learning objects on computers, tablets and mobile devices. Integrating simulations within the curriculum can help overcome the typical barriers of textbook learning. Simulation-based learning adds concreteness and context to learning.
Some of the key benefits of simulations are:
1) Preparation for the Actual World – They help mimic real-world situations in a digital space. From medical simulations for nurses and doctors to flight simulations for pilots, simulations help students model various situations they are likely to face in the real world.
2) Contextual Learning – Simulations add concreteness to learning and make it relevant to the context. This helps learners to grasp better and ensures retention of the knowledge that has been acquired.
3) Visual Learning – We are inherently wired to process visual information better than textual or verbal information. Take this one example – many of us have sat through lectures on Organic Chemistry and tried learning it through textbooks, trying to make sense of complex carbon and hydrogen bonds. Understanding those molecules and reactions can get confusing and tricky. Now imagine seeing those molecules as an interactive 3D model instead.
Imagine manipulating the 3D model of such a molecule and its bonds to understand how exactly these interact. This visualization helps students grasp complex concepts better. Another example could be visualizing the impact of marine pollution on aquatic animals. By changing the variables in the simulation, students can change the pollution levels in the sea and see the subsequent changes.
4) Engagement and Deep Learning – In learning by doing, students engage deeply in an activity. This helps with deep learning and higher retention rates. Some of these media-rich interactive exercises also add an element of freshness to the concepts taught and help increase student engagement levels.
5) Technology – Today, it is possible to analyze student usage data by integrating web-based simulations with analytic tools. One can get highly relevant data and usage patterns which could be used to a) personalize the learning process for every student and b) make improvements in the materials for an improved user experience overall.
Overall, integrating simulations within the curriculum can help improve the learning process by making it contextual, relevant and engaging.
Having said that, there are some challenges to mass adoption of these simulations. Some of the key ones are:
a) Cost – The cost of developing these NextGen simulations can run into tens of thousands of dollars. The cost barrier is a major impediment especially in the K12 sector where funding priorities can put these on a backburner.
b) Infrastructure – We take internet connectivity for granted today but it is still a challenge in some areas of the world. Simulations can be heavily web-dependent objects and lower speeds can impair the user experience. The plethora of devices, browsers and operating systems used today can also pose challenges in maintaining a uniform experience. For example, a student accesses these simulations on a web browser at school. She goes home and accesses these same simulations on her phone on an app, but the presentation and controls are very different, which can be frustrating. Providing this flexibility without jeopardizing the experience is a challenge.
Some of these challenges are being overcome with better technology. The popularity of simulations as learning tools is on the rise.
We at Magic EdTech work with educational publishers at all levels to help them develop these engaging simulations. Some of the areas where our clients appreciate us are:
a) Cost Efficiency – By using an approach driven by templates and engines, we bring down costs and add efficiencies of scale to the development process. This leads to faster development timelines and reduced expense.
b) Device Agnostic Development – We understand the patterns of today’s technology consumption. Our development and design teams take these into consideration and design for devices of all shapes and forms.
c) Accessibility – Accessibility is the cornerstone of everything we do here at Magic EdTech. All our simulations keep user access at the center of their design and development. Our simulations are WCAG 2.1 compliant and provide the learning opportunity to everyone.
There is nothing that can replace the richness of real-world experience; simulations get as close to it as they can to help make learning effective and engaging. Feel free to drop us a line if you’d like to learn more.
You're standing on the sailboat's bow on a windy day, facing the helm and yelling, "There is a rock ahead." You're doing this while you're turning forward to look again. What the helmsman hears is "There is a….."
Sound waves propagate through the air, and as you turned your mouth (the speaker) away from the direction of the helmsman, the wind blew those sound waves away. Ultrasonic wind sensors can measure this effect.
The sensors are mounted close enough together that the sound waves aren't blown away, and in windless conditions, the speed of sound is consistent and known. Each of the three sensors takes a turn "yelling" at a high frequency while the other two listen.
The time it takes for each listener to hear the yelling is very accurately measured. Apply a lot of calculations, and you can derive both apparent wind direction and speed. As an example, if the wind is from aft of a transceiver, it will push the sound waves a little faster and they arrive earlier. Inversely, if the wind is blowing toward the transceiver, it slows down the sound waves, and it takes them longer to reach the other listeners. The difference in the time it takes to reach each listener is used to triangulate the direction the wind is traveling. This is a very simple explanation of a complex process.
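To make that concrete, here is a minimal sketch of the triangulation arithmetic, assuming a simplified two-dimensional geometry: the transducer positions, the still-air speed of sound and the simulated 5 m/s wind are all invented values for illustration, and a real instrument would also compensate for temperature. A tailwind along a path raises the effective sound speed, a headwind lowers it, and with three paths a least-squares solve recovers the wind vector.

```python
import numpy as np

C = 343.0  # speed of sound in still air, m/s (approximate, ~20 C)

# Three transducers arranged in a small triangle a few centimetres apart
# (invented positions for illustration)
pos = {
    "A": np.array([0.000, 0.0000]),
    "B": np.array([0.050, 0.0000]),
    "C": np.array([0.025, 0.0433]),
}

def travel_time(tx, rx, wind):
    """Time for a pulse from tx to rx given a wind vector.
    The wind component along the path adds to (or subtracts from) the
    speed of sound, so a tailwind shortens the travel time."""
    path = pos[rx] - pos[tx]
    d = np.linalg.norm(path)
    return d / (C + wind @ (path / d))

def estimate_wind(times):
    """Least-squares estimate of the wind vector from measured pair times."""
    rows, rhs = [], []
    for (tx, rx), t in times.items():
        path = pos[rx] - pos[tx]
        d = np.linalg.norm(path)
        # d / t = C + wind . unit  ->  wind . unit = d / t - C
        rows.append(path / d)
        rhs.append(d / t - C)
    wind, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return wind

# Simulate a 5 m/s wind blowing along +x and recover it from the timings
true_wind = np.array([5.0, 0.0])
measured = {p: travel_time(*p, true_wind) for p in [("A", "B"), ("B", "C"), ("C", "A")]}
est = estimate_wind(measured)
print(f"{np.linalg.norm(est):.2f} m/s at {np.degrees(np.arctan2(est[1], est[0])):.1f} deg")
```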
Albertosaurus is a genus of tyrannosaurid theropod dinosaur that lived in western North America during the Late Cretaceous Period, more than 70 million years ago. The type species, A. sarcophagus, was apparently restricted in range to the modern-day Canadian province of Alberta, after which the genus is named. Scientists disagree on the content of the genus, with some recognizing Gorgosaurus libratus as a second species.

As a tyrannosaurid, Albertosaurus was a bipedal predator with tiny, two-fingered hands and a massive head with dozens of large, sharp teeth. It may have been at the top of the food chain in its local ecosystem. Although relatively large for a theropod, Albertosaurus was much smaller than its more famous relative Tyrannosaurus, probably weighing less than 2 metric tons.

Since the first discovery in 1884, fossils of more than thirty individuals have been recovered, providing scientists with a more detailed knowledge of Albertosaurus anatomy than is available for most other tyrannosaurids. The discovery of 22 individuals at one site provides evidence of pack behaviour and allows studies of ontogeny and population biology which are impossible with lesser-known dinosaurs.

Albertosaurus was smaller than some other tyrannosaurids, such as Tarbosaurus and Tyrannosaurus. Typical adults of Albertosaurus measured up to 9 metres (30 ft) long, while rare individuals of great age could grow to over 10 metres (33 ft) in length. Several independent mass estimates, obtained by different methods, suggest that an adult Albertosaurus weighed between 1.3 tonnes (1.4 short tons) and 1.7 tonnes (1.9 tons).

All tyrannosaurids, including Albertosaurus, shared a similar body appearance. Typically for a theropod, Albertosaurus was bipedal and balanced the heavy head and torso with a long tail. However, tyrannosaurid forelimbs were extremely small for their body size and retained only two digits. The hind limbs were long and ended in a four-toed foot. The first digit, called the hallux, was short and only the other three contacted the ground, with the third (middle) digit longer than the rest. Albertosaurus may have been able to reach speeds of 17–25 miles per hour.

The skull of Albertosaurus, perched on a long, S-shaped neck, was approximately 1 metre (3.3 ft) long in the largest adults. Wide openings in the skull (fenestrae) reduced the weight of the head while also providing space for muscle attachment and sensory organs. Its long jaws contained more than 60 banana-shaped teeth; larger tyrannosaurids possessed fewer teeth. Unlike most theropods, Albertosaurus and other tyrannosaurids were heterodont, with teeth of different forms depending on their position in the mouth. The premaxillary teeth at the tip of the upper jaw were much smaller than the rest, more closely packed, and D-shaped in cross section. Above the eyes were short bony crests that may have been brightly coloured in life and used in courtship to attract a mate.

William L. Abler observed in 2001 that Albertosaurus tooth serrations are so thin as to functionally be a crack in the tooth. However, at the base of this crack is a round void called an ampulla, which would have functioned to distribute force over a larger surface area, hindering the ability of the "crack" formed by the serration to propagate through the tooth. An examination of other ancient predators, a phytosaur and Dimetrodon, found similarly crack-like serrations, but no adaptations for preventing crack propagation. Tyrannosaurid teeth were used as holdfasts for pulling meat off a body, rather than knife-like cutting functions.
Tooth wear patterns hint that complex head shaking behaviours may have been involved in tyrannosaur feeding. When a tyrannosaur would have pulled back on a piece of meat, the force would tend to push the tip of the tooth toward the front of the mouth, and the anchored root would experience tension on the posterior side and compression from the front. This would typically incline the tooth to crack formation on the posterior side of the tooth, but the ampullae at the base of the already crack-like serrations would tend to diffuse potential crack-forming forces. This form resembles techniques used by guitar makers to "impart alternating regions of flexibility and rigidity to a stick of wood." The use of a drill to create an "ampulla" of sorts and prevent the propagation of cracks through an important material is also used to protect airplane surfaces. Abler demonstrated that a plexiglass bar with kerfs and drilled holes was more than 25% stronger than one with only regularly placed incisions.

Albertosaurus is a member of the theropod family Tyrannosauridae, in the subfamily Albertosaurinae. Its closest relative is the slightly older Gorgosaurus libratus (sometimes called Albertosaurus libratus; see below). These two species are the only described albertosaurines; other undescribed species may exist. Thomas Holtz found Appalachiosaurus to be an albertosaurine in 2004, but his more recent unpublished work locates it just outside Tyrannosauridae, in agreement with other authors. The other major subfamily of tyrannosaurids is the Tyrannosaurinae, including Daspletosaurus, Tarbosaurus and Tyrannosaurus. Compared with these robust tyrannosaurines, albertosaurines had slender builds, with proportionately smaller skulls and longer bones of the lower leg (tibia) and feet (metatarsals and phalanges).

Albertosaurus was named by Henry Fairfield Osborn in a one-page note at the end of his 1905 description of Tyrannosaurus rex. The name honours Alberta, the present-day Canadian province in which the first remains were found. The generic name also incorporates the Greek term σαυρος/sauros ("lizard"), the most common suffix in dinosaur names. The type species is A. sarcophagus, which means "flesh-eater" and has the same etymology as the funeral container with which it shares its name: a combination of the Ancient Greek words σαρξ/sarx ("flesh") and Φαγειν/phagein ("to eat"). More than thirty specimens of all ages are known to science.

The type specimen is a partial skull, collected in 1884 from an outcrop of the Horseshoe Canyon Formation alongside the Red Deer River, in present-day Alberta. This specimen and a smaller skull associated with some skeletal material were recovered by expeditions of the Geological Survey of Canada, led by the famous geologist Joseph B. Tyrrell. The two skulls were assigned to the preexisting species Laelaps incrassatus by Edward Drinker Cope in 1892, although the name Laelaps was preoccupied by a genus of mite and had been changed to Dryptosaurus in 1877 by Othniel Charles Marsh. Cope refused to recognize the new name created by his archrival Marsh, so it fell to Lawrence Lambe to change Laelaps incrassatus to Dryptosaurus incrassatus when he described the remains in detail in 1904. Shortly later, Osborn pointed out that D. incrassatus was based on generic tyrannosaurid teeth, so the two Horseshoe Canyon skulls could not be confidently referred to that species. The Horseshoe Canyon skulls also differed markedly from the remains of D. aquilunguis, type species of Dryptosaurus, so Osborn created the new name Albertosaurus sarcophagus for them in 1905. He did not describe the remains in any great detail, citing Lambe's complete description the year before. Both specimens (CMN 5600 and 5601) are stored in the Canadian Museum of Nature in Ottawa.

In 1910, American paleontologist Barnum Brown uncovered the remains of a large group of Albertosaurus at another quarry alongside the Red Deer River. Because of the large number of bones and the limited time available, Brown's party did not collect every specimen, but made sure to collect remains from all of the individuals they could identify in the bonebed. Among the bones deposited in the American Museum of Natural History collections in New York City are seven sets of right metatarsals, along with two isolated toe bones that did not match any of the metatarsals in size. This indicated the presence of at least nine individuals in the quarry. The Royal Tyrrell Museum of Palaeontology rediscovered the bonebed in 1997 and resumed fieldwork at the site, which is now located inside Dry Island Buffalo Jump Provincial Park. Further excavation from 1997 to 2005 turned up the remains of 13 more individuals of various ages, including a diminutive two-year-old and a very old individual estimated at over 10 metres (33 ft) in length. None of these individuals are known from complete skeletons, and most are represented by remains in both museums.

In 1913, paleontologist Charles H. Sternberg recovered another tyrannosaurid skeleton from the slightly older Dinosaur Park Formation in Alberta. Lawrence Lambe named this dinosaur Gorgosaurus libratus in 1914. Other specimens were later found in Alberta and the US state of Montana. Finding few differences to separate the two genera, Dale Russell declared the name Gorgosaurus a junior synonym of Albertosaurus, which had been named first, and G. libratus was renamed Albertosaurus libratus in 1970. This addition extended the temporal range of the genus Albertosaurus backwards by several million years and its geographic range southwards by hundreds of kilometres.

In 2003, Philip J. Currie compared several tyrannosaurid skulls and came to the conclusion that the two species are more distinct than previously thought. The decision to use one or two genera is rather arbitrary, as the two species are sister taxa, more closely related to each other than to any other species. Recognizing this, Currie nevertheless recommended that Albertosaurus and Gorgosaurus be retained as separate genera, as they are no more similar than Daspletosaurus and Tyrannosaurus, which are almost always separated. In addition, several albertosaurine specimens have been recovered from Alaska and New Mexico, and Currie suggested that the Albertosaurus–Gorgosaurus situation may be clarified once these are described fully. Most authors have followed Currie's recommendation, but some have not.

William Parks described a new species, Albertosaurus arctunguis, based on a partial skeleton excavated near the Red Deer River in 1928, but this species has been considered identical to A. sarcophagus since 1970. Parks' specimen (ROM 807) is housed in the Royal Ontario Museum in Toronto. Six more skulls and skeletons have since been discovered in Alberta and are housed in various Canadian museums. Fossils have also been reported from the American states of Montana, New Mexico, and Wyoming, but these probably do not represent A. sarcophagus and may not even belong to the genus Albertosaurus. Albertosaurus megagracilis was based on a small tyrannosaurid skeleton from the Hell Creek Formation of Montana. It was renamed Dinotyrannus in 1995, but is now thought to represent a juvenile Tyrannosaurus rex.

Most age categories of Albertosaurus are represented in the fossil record. Using bone histology, the age of an individual animal at the time of death can often be determined, allowing growth rates to be estimated and compared with other species. The youngest known Albertosaurus is a two-year-old discovered in the Dry Island bonebed, which would have weighed about 50 kilograms (110 lb) and measured slightly more than 2 metres (7 ft) in length. The 10 metre (33 ft) specimen from the same quarry is the oldest and largest known, at 28 years of age. When specimens of intermediate age and size are plotted on a graph, an S-shaped growth curve results, with the most rapid growth occurring in a four-year period ending around the sixteenth year of life, a pattern also seen in other tyrannosaurids. The growth rate during this phase was 122 kilograms (268 lb) per year, based on an adult weighing 1.3 tonnes (1.4 short tons). Other studies have suggested higher adult weights; this would affect the magnitude of the growth rate but not the overall pattern. Tyrannosaurids similar in size to Albertosaurus had similar growth rates, although the much larger Tyrannosaurus rex grew almost five times faster (601 kilograms [1325 lb] per year) at its peak.

The end of the rapid growth phase suggests the onset of sexual maturity in Albertosaurus, although growth continued at a slower rate throughout the animals' lives. Sexual maturation while still actively growing appears to be a shared trait among small and large dinosaurs as well as in large mammals such as humans and elephants. This pattern of relatively early sexual maturation differs strikingly from the pattern in birds, which delay their sexual maturity until after they have finished growing.

Most known Albertosaurus individuals were aged 14 years or more at the time of death. Juvenile animals are rarely found as fossils for several reasons, mainly preservation bias, where the smaller bones of younger animals were less likely to be preserved by fossilization than the larger bones of adults, and collection bias, where smaller fossils are less likely to be noticed by collectors in the field. Young Albertosaurus are relatively large for juvenile animals, but their remains are still rare in the fossil record compared with adults. It has been suggested that this phenomenon is a consequence of life history, rather than bias, and that fossils of juvenile Albertosaurus are rare because they simply did not die as often as adults did.

A hypothesis of Albertosaurus life history postulates that hatchlings died in large numbers, but have not been preserved in the fossil record due to their small size and fragile construction. After just two years, juveniles were larger than any other predator in the region aside from adult Albertosaurus, and more fleet of foot than most of their prey animals. This resulted in a dramatic decrease in their mortality rate and a corresponding rarity of fossil remains. Mortality rates doubled at age twelve, perhaps the result of the physiological demands of the rapid growth phase, and then doubled again with the onset of sexual maturity between the ages of fourteen and sixteen.
This elevated mortality rate continued throughout adulthood, perhaps due to high physiological demands, stress and injuries received during intraspecific competition for mates and resources, and eventually, the ever-increasing effects of senescence. The higher mortality rate in adults may explain their more common preservation. Very large animals were rare because few individuals survived long enough to attain such sizes. High infant mortality rates, followed by reduced mortality among juveniles and a sudden increase in mortality after sexual maturity, with very few animals reaching maximum size, is a pattern observed in many modern large mammals, including elephants, African buffalo, and rhinoceros. The same pattern is also seen in other tyrannosaurids. The comparison with modern animals and other tyrannosaurids lends support to this life history hypothesis, but bias in the fossil record may still play a large role, especially since more than two-thirds of all Albertosaurus specimens are known from one locality.

In 2009, researchers hypothesized that smooth-edged holes found in the fossil jaws of tyrannosaurid dinosaurs such as Albertosaurus were caused by a parasite similar to Trichomonas gallinae, which infects birds. They suggested that tyrannosaurids transmitted the infection by biting each other, and that the infection impaired their ability to eat food.

The Dry Island bonebed discovered by Barnum Brown and his crew contains the remains of 22 Albertosaurus, the most individuals found in one locality of any Cretaceous theropod, and the second-most of any large theropod dinosaur behind the Allosaurus assemblage at the Cleveland Lloyd Dinosaur Quarry in Utah. The group seems to be composed of one very old adult; eight adults between 17 and 23 years old; seven sub-adults undergoing their rapid growth phases at between 12 and 16 years old; and six juveniles between the ages of 2 and 11 years, who had not yet reached the growth phase.

There is plentiful evidence for gregarious behaviour among herbivorous dinosaurs, including ceratopsians and hadrosaurs. However, only rarely are so many dinosaurian predators found at the same site. Small theropods like Deinonychus, Coelophysis and Megapnosaurus (Syntarsus) rhodesiensis have been found in aggregations, as have larger predators like Allosaurus and Mapusaurus. There is some evidence of gregarious behaviour in other tyrannosaurids as well. Fragmentary remains of smaller individuals were found alongside "Sue," the Tyrannosaurus mounted in the Field Museum of Natural History in Chicago, and a bonebed in the Two Medicine Formation of Montana contains at least three specimens of Daspletosaurus, preserved alongside several hadrosaurs. These findings may corroborate the evidence for social behaviour in Albertosaurus, although some or all of the above localities may represent temporary or unnatural aggregations. Others have speculated that instead of social groups, at least some of these finds represent Komodo dragon-like mobbing of carcasses, where aggressive competition leads to some of the predators being killed and cannibalized.

The near-absence of herbivore remains and the similar state of preservation common to the many individuals at the Albertosaurus bonebed quarry led Currie to conclude that the locality was not a predator trap like the La Brea Tar Pits in California, and that all of the preserved animals died at the same time. Currie claims this as evidence of pack behaviour.
Other scientists are skeptical, observing that the animals may have been driven together by drought, flood or other causes. Currie also offers speculation on the pack-hunting habits of Albertosaurus. The leg proportions of the smaller individuals were comparable to those of ornithomimids, which were probably among the fastest dinosaurs. Younger Albertosaurus were probably equally fleet-footed, or at least faster than their prey. Currie hypothesized that the younger members of the pack may have been responsible for driving their prey towards the adults, who were larger and more powerful, but also slower. Juveniles may also have had different lifestyles than adults, filling predator niches between the enormous adults and the smaller contemporaneous theropods, the largest of which were two orders of magnitude smaller than adult Albertosaurus in mass. A similar situation is observed in modern Komodo dragons, with hatchlings beginning life as small insectivores before growing to become the dominant predators on their islands. However, as the preservation of behaviour in the fossil record is exceedingly rare, these ideas cannot readily be tested.

In 2001, Bruce Rothschild and others published a study examining evidence for stress fractures and tendon avulsions in theropod dinosaurs and the implications for their behavior. They found that only one of the 319 Albertosaurus foot bones checked for stress fractures actually had them, and none of the four hand bones did. The scientists found that stress fractures were "significantly" less common in Albertosaurus than in the carnosaur Allosaurus. ROM 807, the holotype of A. arctunguis (now referred to A. sarcophagus), had a 2.5 by 3.5 cm deep hole in the iliac blade, although the describer of the species did not recognize this as pathological. The specimen also contains some exostosis on the fourth left metatarsal. Two of the five Albertosaurus sarcophagus specimens with humeri in 1970 were reported by Dale Russell as having pathological damage to them.

All identifiable fossils of Albertosaurus sarcophagus are known from the Horseshoe Canyon Formation in Alberta. This geologic formation dates to the early Maastrichtian stage of the Late Cretaceous Period, 73 to 70 Ma (million years ago). Immediately below this formation is the Bearpaw Shale, a marine formation representing a section of the Western Interior Seaway. The seaway was receding as the climate cooled and sea levels subsided towards the end of the Cretaceous, exposing land that had previously been underwater. It was not a smooth process, however, and the seaway would periodically rise to cover parts of the region throughout Horseshoe Canyon times before finally receding altogether in the years after. Due to the changing sea levels, many different environments are represented in the Horseshoe Canyon Formation, including offshore and near-shore marine habitats and coastal habitats like lagoons, estuaries and tidal flats. Numerous coal seams represent ancient peat swamps. Like most of the other vertebrate fossils from the formation, Albertosaurus remains are found in deposits laid down in the deltas and floodplains of large rivers during the latter half of Horseshoe Canyon times.

The fauna of the Horseshoe Canyon Formation is well-known, as vertebrate fossils, including those of dinosaurs, are quite common. Sharks, rays, sturgeons, bowfins, gars and the gar-like Aspidorhynchus made up the fish fauna. Mammals included multituberculates and the marsupial Didelphodon.
The saltwater plesiosaur Leurospondylus has been found in marine sediments in the Horseshoe Canyon, while freshwater environments were populated by turtles, Champsosaurus, and crocodilians like Leidyosuchus and Stangerochampsa. Dinosaurs dominate the fauna, especially hadrosaurs, which make up half of all dinosaurs known, including the genera Edmontosaurus, Saurolophus and Hypacrosaurus. Ceratopsians and ornithomimids were also very common, together making up another third of the known fauna. Along with much rarer ankylosaurians and pachycephalosaurs, all of these animals would have been prey for a diverse array of carnivorous theropods, including troodontids, dromaeosaurids, and caenagnathids. Adult Albertosaurus were the apex predators in this environment, with intermediate niches possibly filled by juvenile albertosaurs.
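The S-shaped growth pattern described in the growth section above is often summarised with a logistic curve. The sketch below is purely illustrative: the asymptotic adult mass (about 1,300 kg) and peak growth rate (about 122 kg per year) come from the text, while the inflection age and the use of a plain logistic (with no hatchling offset, so very young ages are underestimated) are assumptions.

```python
import math

ADULT_MASS_KG = 1300.0        # asymptotic adult mass, from the text
PEAK_RATE_KG_PER_YR = 122.0   # fastest growth rate, from the text
INFLECTION_AGE_YR = 14.0      # assumed age of fastest growth (mid-teens)

# For a logistic curve the maximum growth rate equals k * M / 4,
# so the rate constant k follows from the two figures above.
K = 4.0 * PEAK_RATE_KG_PER_YR / ADULT_MASS_KG  # ~0.38 per year

def mass_at_age(age_yr: float) -> float:
    """Logistic (S-shaped) growth: slow start, rapid mid-teen phase, plateau."""
    return ADULT_MASS_KG / (1.0 + math.exp(-K * (age_yr - INFLECTION_AGE_YR)))

for age in (2, 10, 14, 16, 20, 28):
    print(f"age {age:2d}: ~{mass_at_age(age):6.0f} kg")
```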
Folic acid is one of the B-complex vitamins. Folate is a generic term for both the naturally occurring folate found in foods and folic acid. Folic acid is the name of the synthetically created folate that is added to foods and supplements.
Folate is very important for the development and growth of the body's tissues and cells. Taking an adequate amount of folic acid before and during pregnancy helps prevent certain birth defects, including spina bifida.
Folate is a water-soluble B vitamin, so the body cannot store it for long. We therefore need to eat adequate amounts of folate-rich foods regularly.
Folic acid (folate) deficiency may cause the following health problems:
- Gray hair
- Mouth ulcers
- Peptic ulcer
- Poor growth
- Swollen tongue (glossitis)
It can also cause anemia, because folate deficiency reduces the production of new red blood cells (RBCs).
Folate works along with vitamin B12 and vitamin C to help the body break down, use, and create new proteins. The vitamin helps form red blood cells and produce DNA, the building block of the human body, which carries genetic information.
Recommended Daily Intake of Folate
Adolescents and Adults
- Males age 14 and older: 400 mcg/day
- Females age 14 and older: 400 mcg/day
- Pregnant teens 14-18 years: 600 mcg/day
- Pregnant females 19 and older: 600 mcg/day
- Breastfeeding females 14-18 years: 500 mcg/day
- Breastfeeding females 19 and older: 500 mcg/day
Children
- 1 – 3 years: 150 mcg/day
- 4 – 8 years: 200 mcg/day
- 9 – 13 years: 300 mcg/day
Natural food sources of folate
- Dark green leafy vegetables
- Dried beans and peas (legumes)
- Citrus fruits and juices
With autumn looming on the horizon, the leaves on some trees have already begun the transition towards the vibrant hues of autumn. Whilst this change may outwardly seem like a simple one, the many vivid colours are a result of a range of chemical compounds, a selection of which are detailed here.
Before discussing the different compounds that lead to the colours of autumn leaves, it’s worth discussing how the colours of these compounds originate in the first place. To do this we need to examine the chemical bonds they contain – these can be either single bonds, which consist of one shared pair of electrons between adjacent atoms, or double bonds, which consist of two shared pairs of electrons between adjacent atoms. The colour-causing molecules in autumn leaves contain systems of alternating double and single bonds – this is referred to as conjugation. A large amount of conjugation in a molecule can allow it to absorb wavelengths of light in the visible spectrum, which leads to the appearance of colour.
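As a rough, textbook-style illustration of why more conjugation shifts absorption into the visible range (not a calculation from this article), the π electrons of a conjugated chain can be treated as free electrons in a one-dimensional box. The electron count and box length used below for beta-carotene are approximate assumptions.

```python
import math

H = 6.626e-34    # Planck constant, J*s
M_E = 9.109e-31  # electron mass, kg
C = 2.998e8      # speed of light, m/s

def absorption_wavelength(n_pi_electrons: int, box_length_m: float) -> float:
    """HOMO->LUMO transition for N pi electrons in a 1-D box of length L.
    Levels E_n = n^2 h^2 / (8 m L^2) fill in pairs, so the gap lies between
    n = N/2 and n = N/2 + 1; a longer box (more conjugation) gives a smaller
    gap and a longer absorption wavelength."""
    n_homo = n_pi_electrons // 2
    delta_e = (H**2 / (8 * M_E * box_length_m**2)) * ((n_homo + 1)**2 - n_homo**2)
    return H * C / delta_e

# Beta-carotene: roughly 22 pi electrons over a chain about 1.8 nm long
print(f"~{absorption_wavelength(22, 1.8e-9) * 1e9:.0f} nm")  # blue light absorbed -> orange pigment
```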
Chlorophyll is the chemical compound responsible for the usual, green colouration of most leaves. This chemical is contained within chloroplasts in the leaf cells, and is an essential component of the photosynthesis process via which plants use energy from the sun to convert carbon dioxide and water into sugars. For the production of chlorophyll, leaves require warm temperatures and sunlight – as summer begins to fade, so too does the amount of light, and thus chlorophyll production slows, and the existing chlorophyll decomposes. As a result of this, other compounds present in the leaves can come to the fore, and affect the perceived colouration.
Carotenoids & Flavonoids
Carotenoids and flavonoids are both large families of chemical compounds. These compounds are present in the leaves along with chlorophyll, but the high levels of chlorophyll present in the summer months usually mask their colours. As the chlorophyll degrades and disappears in autumn, their colours become more noticeable – both families of compounds contribute yellows, whilst carotenoids also contribute oranges and reds. These compounds do also degrade along with chlorophyll as autumn progresses, but do so at a much slower rate, and so their colours become visible. Notable carotenoids include beta-carotene, the cause of the orange colour of carrots; lutein, which contributes to the yellow colour of egg yolks; and lycopene, which is responsible for the red colour of tomatoes.
Anthocyanins are also members of the flavonoid class of compounds. Unlike carotenoids, anthocyanins aren’t commonly present in leaves year-round. As the days darken, their synthesis is initiated by an increased concentration of sugars in the leaves, combined with sunlight. Their precise role in the leaf is still unclear – there has been some suggestion, however, that they may perform some kind of light-protective role, allowing the tree to protect its leaves from light damage and extend the amount of time before they are shed. In terms of their contribution to the colour of autumn leaves, they provide vivid red, purple, and magenta shades. Their colour is also affected by the acidity of the tree's sap, producing a range of hues.
Determining the effects of felling method and season of year on the regeneration of short rotation coppice
There is increasing interest in plantations established with the objective of producing biomass for energy and fuel. These types of plantations are called Short Rotation Woody Crops (SRWC). Popular SRWC species are Eucalypt (Eucalyptus spp.), Cottonwood (Populus deltoides) and Willow (Salix spp.). These species have in common strong growth rates, the ability to coppice, and rotations of 2–10 years. SRWC have generated interest among many forest products companies (seeking diversification or energy self-sufficiency) and private landowners, and although they might help supply the expected growth in the bioenergy and biofuels market, there are still several concerns about how and when to harvest SRWC to maximize their ability to coppice. SRWC have elevated establishment and maintenance costs compared to other types of plantations, but due to their coppicing ability, the same plantation may be harvested up to 5 times without the need to establish a new one. Study plots were installed at six locations in Florida, Mississippi and Arkansas, and were cut with a chainsaw and a shear head during summer and winter, to determine the effects of felling method and season on coppice regeneration. Thus, plots were divided into areas of four different treatments: shear-winter, saw-winter, shear-summer, saw-summer. Harvesting eucalypt and cottonwood trees during winter resulted in better survival rates than harvesting during summer; however, there was no effect of felling method on coppice regeneration. Finally, no statistically significant difference was found in the coppice regeneration of black willow when harvested during winter or summer with a chainsaw or a shear head.
The New Elizabethan Era is the term used to refer to the period of the reign of Queen Elizabeth II, from 1952 to 1983.
At the Queen's coronation, much of the British public were optimistic that her coming to the throne would usher in a new age of triumph for the United Kingdom and the Commonwealth. Edmund Hillary had just climbed Mt Everest, many television sets were bought specifically to watch the Coronation, there was virtually full employment and the National Health Service had just been founded. However, many others saw this as political rhetoric and the notion rapidly fell out of fashion.
With hindsight, however, parallels have been made between the two eras, and the adoption of the term "Caroline Era" has brought it back into vogue.
Except at the beginning, the people living through the New Elizabethan Era tended to refer to their times and the recent past in terms of decades. This practice was standard from the 1920s onwards and has only fallen into disuse since the King's coronation. It should not be thought that the use of the terms implies support for the monarchy. An important difference between the two customs is that of length of time. Whereas the Edwardian Era refers practically to the first decade of the twentieth century, the use of the word "era" has a connotation of a longer period while at the same time suggesting discontinuity with the past. The use of these terms is, of course, confined to the Commonwealth, and in the US, the previously common nomenclature of decades is still used. To the British ear, this sounds both quaint and "short-term". In Canada and Australia, both terminologies are common.
It may be that this shift in language has led to a general perception of a longer time scale than the decade-based description of historical periods.
The New Elizabethan Era begins with the death of King George VI and ends with the Queen's abdication. Since her reign is "bookended" by the reigns of two kings, some people see the era as being characterised by the rise of feminism. For example, the first female prime minister, unsuccessful though her premiership was, came to office during this time. Another feature of the period is the domination of culture by the baby boomers and the growth of youth culture.
If you’ve ever wondered how to make reading an active pursuit, reader’s theater is a perfect option. A long-time staple in classrooms and libraries, reader’s theater allows participants to step inside a story, interpret it, and provide an audience with their own vision for the text.
What’s so great about Reader’s Theater?
Reader’s theater brings a text to life. Readers are engaged and thinking about the story, making decisions about what parts are integral to the plot and what parts can be omitted.
No props, costumes or sets are necessary, only a willingness to take on the persona of characters in a chosen story. In this way, participants develop a better understanding of point of view, and they learn to express themes and ideas through tone of voice and body movements.
Participants learn valuable public speaking skills while reinforcing poise and self confidence.
Participants experience literature in a communal setting.
How does reader’s theater work?
To start, choose shorter picture books which are familiar to your readers. Books with humorous plot lines are always a fun choice.
Read through the work together, being sure to use appropriate emotion in order to demonstrate expressive reading techniques.
Talk about what parts of the text should be included and why.
Write a script as a group, then mark the script with highlighters and annotations to indicate emphasis, emotion, movement, etc.
Don’t be surprised if your read-throughs require you to tweak the script.
Present your play for a group of friends or family. It can be as formal or informal as you like.
Arithmetic is a branch of mathematics that deals with number systems and operations on numbers, such as multiplication, division, addition, and subtraction. Modern Western educational systems focus heavily on arithmetic. Thus, while the ability to learn arithmetic is universal to human populations, arithmetic itself is not universal to all human groups.
The earliest evidence of arithmetic in human evolutionary history dates to about 35,000 years ago. Baboon fibulae covered with scratches presumed to be tallies, including the “Lebombo bone” (35,000 years old) and the “Ishango bone” (25,000 years old), have been found at ancient human sites in Africa. The marks are presumed to be for counting. Others have suggested that they are evidence of more complex mathematics, and still others believe they have no mathematical significance.
There has been a significant amount of experimentation on mathematical skills of other animals. The ability to approximate numerical magnitude is widespread and extends even to birds and amphibians. Chimpanzees have been demonstrated to have the ability to count and sum small numbers. Other primates also have some simple numerical skill; one study of rhesus macaques demonstrated the mastery of abstract numerical rules (in this case, ordering numbers), and another showed they are capable of simple addition (although not as accurately as human college students). However, other studies have indicated that pigeons have abilities on par with this.
Neuroimaging studies of humans and other primates have indicated that a region called the intraparietal sulcus is highly involved in mathematical tasks. Lesions of the nearby left angular gyrus are associated with deficits in mental arithmetic abilities, although the neural basis of these skills is not fully understood. Notably, numerical processing in human children and rhesus monkeys causes increased activation of the prefrontal cortex when compared with human adults. It has been suggested, therefore, that this area is the early association cortex for numerical magnitude in both humans and other primates.
Interestingly, calculation deficits associated with lesions of the intraparietal sulcus are commonly found with “finger agnosia” (the inability to distinguish between individual fingers), suggesting that learned counting on fingers by children may be imprinted into cortical pathways. Along similar lines, people raised in different cultures or taught to solve problems with different methods exhibit differences in brain activation during mathematical problem-solving. Again, this suggests that culture and learning through human childhood may have a significant impact on neural patterning and mathematical skill.
Arithmetic in the Chimpanzee Basso in the Frankfurt Zoological Park Together With Remarks to Animal Psychology and an Open Letter to Herr Krall From Karl Marbe: Introduction to Translation. The American Journal of Psychology 124: 463–488 (2011).
Pigeons on par with primates in numerical competence. Science 334(6063): 1664 (2011).
Effects of development and enculturation on number representation in the brain. Nature Reviews Neuroscience 9(4): 278–291 (2008).
Basic math in monkeys and college students. PLoS Biology 5(12): e328 (2007).
Primate numerical competence: contributions toward understanding nonhuman cognition. Cognitive Science 24: 423–443 (2000).
Ordering of the numerosities 1 to 9 by monkeys. Science 282(5389): 746–749 (1998).
Numerical competence in a chimpanzee (Pan troglodytes). Journal of Comparative Psychology 103(1): 23–31 (1989).
Quantifying the Epidemic
Between June 1981 and Dec. 31, 1994, 441,528 cases of AIDS in the United States, including 270,870 AIDS-related deaths, were reported to the CDC (CDC, 1995a). AIDS is now the leading cause of death among adults aged 25 to 44 in the United States (CDC, 1995b) (Figure 1).
Fig. 1. Death rates from leading causes of death in persons aged 25-44 years, United States, 1982-1993
Reference: Centers for Disease Control and Prevention
Worldwide, 1,025,073 cases of AIDS were reported to the World Health Organization (WHO) through December 1994, an increase of 20 percent since December 1993 (WHO, 1995a) (Figure 2). Allowing for under-diagnosis, incomplete reporting and reporting delay, and based on the available data on HIV infections around the world, the WHO estimates that over 4.5 million cumulative AIDS cases had occurred worldwide by late 1994 and that 19.5 million people worldwide had been infected with HIV since the beginning of the epidemic (WHO, 1995a). By the year 2000, the WHO estimates that 30 to 40 million people will have been infected with HIV and that 10 million people will have developed AIDS (WHO, 1994). The Global AIDS Policy Coalition has developed a considerably higher estimate--perhaps up to 110 million HIV infections and 25 million AIDS cases by the turn of the century (Mann et al., 1992a).
Fig. 2. Cumulative AIDS cases worldwide. AIDS cases reported to the World Health Organization through December 1994.
Reference: WHO, 1995a
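As a back-of-the-envelope check of the 20 percent rise quoted above (an illustrative calculation, not a figure from the source), the cumulative total implied for December 1993 can be recovered by dividing out the growth factor.

```python
cases_dec_1994 = 1_025_073   # cumulative AIDS cases reported to WHO through Dec 1994
growth_factor = 1.20         # a 20% increase over the December 1993 cumulative total

implied_dec_1993 = cases_dec_1994 / growth_factor
print(round(implied_dec_1993))  # roughly 854,000 cases reported through December 1993
```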
The sophisticated intelligence shown by whales and other cetaceans has long intrigued and enchanted humans. Over millions of years, these animals have evolved a highly complex network of neurological and sensory systems that are certainly different than those of humans, but show enough similarities to make high-level interspecies communication and understanding possible. Cetaceans’ intelligence is just one factor that has spurred environmental and animal activists to fierce advocacy on these animals’ behalf.
Here’s an overview of what we know about whale and dolphin intelligence, as uncovered by scientists.
Bigger brains make smarter and more social animals
The brain size of whales and dolphins is impressive. In fact, the brain-body ratio of dolphins is exceeded only by humans. As is typical among animals with larger brains, cetaceans tend to be sociable, to take attentive care of their young, to engage in more complex behaviors, and to live longer.
One evolutionary theory states that bigger-brained animals are more intelligent because living in larger groups with complex social systems requires a greater capacity to observe and to process information and subtle cues quickly. This “cultural brain hypothesis” says that encephalization—growth in brain capacity or size—is linked to expanding networks of mutually beneficial behavior, such as hunting in groups and the development of different dialects within a species.
Strong bonds between parents and children
Whales’ and dolphins’ patterns of reproduction and development also parallel those of humans. Cetacean mothers give birth only a few times, and their young experience delayed sexual maturation and independence. Depending on the species, whale and dolphin offspring often spend all or much of their lives in close proximity to their parents, leaving them only to mate when they reach maturity.
A sonar superpower
Toothed whales and dolphins have an additional brain power unknown to humans: echolocation.
An entire section of these cetaceans’ brains is devoted to echolocation, which provides them with the ability to “see” via sonar. Water is an excellent conductor of sound waves; sound travels better in water than light does. So it’s natural that whales and dolphins would use their “superpower” of echolocation to find their way through the water.
Through echolocation, they are able to process unusually complex sets of information about their environments to help them navigate, hunt, and communicate. For example, through the clicking sounds other members of their pods make during echolocation, dolphins can even figure out which objects others are observing at any given moment.
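As a rough sketch of the underlying physics (an illustration, not a claim about how the studies mentioned here were done), the range to an object follows from an echo's round-trip delay; the speed of sound in seawater used below, about 1,500 m/s, is an assumed typical value.

```python
SPEED_OF_SOUND_SEAWATER = 1500.0  # m/s; varies with temperature, depth and salinity

def echo_range(round_trip_seconds: float) -> float:
    """Distance to a target from the delay between a click and its echo.
    The sound covers the distance out and back, hence the factor of 1/2."""
    return SPEED_OF_SOUND_SEAWATER * round_trip_seconds / 2

print(echo_range(0.2))  # an echo returning after 0.2 s puts the target ~150 m away
```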
Brain structures that support complexity
The brains of whales and dolphins are filled with spindle neurons, a type of specialized brain cell connected with higher-level thinking abilities such as memory, communication, logic, recognition of objects, and problem-solving abilities.
Researchers have also discovered that whale and dolphin brains contain many more folds and grooves than human brains. This means that their brains have a larger surface area, which possibly correlates with a greater number of information-processing units that can theoretically engage in more complicated thought processes and problem-solving.
Additionally, cetaceans’ limbic system—the part of the brain that handles emotional processing—shows greater intricacy than that of humans. Experts in dolphin neurology point out that this greater development of the limbic system is associated with high levels of social and community functioning, to the point that a dolphin on its own remains incomplete without connections to a complex society.
Playfulness that shows intelligence
The 20th century historian and scholar Johan Huizinga used the Latin term Homo ludens (which can be translated as “man at play”) to describe modern humans. The term acknowledges that our great capacity to have fun and be playful with one another—to engage in play as distinct from work or other activities related to survival—signals a higher level of intelligence.
In light of this insight, whales and dolphins can be said to be extremely intelligent. These species cavort, leap out of the water, do back flips, and roughhouse and tumble with one another for no discernable reason other than the sheer joy of it. Dolphins often create their own games of tag, start high-speed racing games, or throw a fish or an object back and forth.
Researchers have filmed whales and dolphins teaming up for play. In some cases, dolphins will swim up onto whales’ noses, then lift themselves high up out of the water and slide down the whales’ heads to the other side. Even cetaceans in captivity frequently play with toys, their human caretakers, and one another.
A social whole that is greater than the sum of its parts
Scientists have discovered that orcas may use distinct names to refer to other individual whales, and that sperm whales communicate with one another in a variety of dialects. Dolphins have been seen to assist fishermen bringing in their catches, and the highly individualized whistling sounds they make to identify one another even seem to be involved in frequent episodes of gossiping about one another.
Additionally, the parts of the orca’s brain that are involved in self-awareness and social perception are proportionally larger than the analogous brain parts in humans. This information could suggest that these animals’ sense of self is possibly better developed than that of humans. This may have something to do with their experience as highly social animals that have developed both an individual sense of themselves, and a sense of emotional community as members of an interdependent pod.
Dolphins’ and whales’ remarkable abilities demonstrate how much we humans have in common with other species on our shared planet. All animals are worthy of our respect and protection, but in whales, dolphins, and other particularly intelligent species, we humans can find a special kinship. |
How does it work?
When untouched, the vapour and the liquid inside the tube are in an equilibrium that allows the bird to rest in an upright position. To change that, we dip the bird's fabric-covered head in water. As the water evaporates from around the head, it takes energy with it, and the head cools down.
The vapour inside the head cools and contracts. Since the glass around the vapour won't contract, a partial vacuum is created inside the head of the bird. The liquid is the only thing that can give, and it does. Liquid from the lower half of the bird is sucked up into the head of the bird.
The head of the bird is now too heavy for it to stay upright, and the bird dips forward. As it dips, a corridor is opened up between the head and the body of the bird. The vacuum can more easily suck the vapour than the liquid and vapour travels to the head until the pressure is equalized. The liquid drains back to the body of the bird, making the lower bulb heavier. The bird is tipped backwards and returns to its full upright position.
That is, until more water evaporates from the fuzzy head and the cycle starts over. By allowing the beak of the bird to dip in water, there is a continuous supply of water soaking into the head of the bird.
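For a rough feel for the numbers (an estimate using assumed properties of dichloromethane, a common working fluid in these birds, not figures from the text), the Clausius-Clapeyron relation gives how steeply the saturated vapour pressure falls as the head cools, and that pressure difference is what lifts the liquid up the neck.

```python
R = 8.314        # gas constant, J/(mol*K)
G = 9.81         # gravitational acceleration, m/s^2

P_VAP = 47_000.0 # Pa, vapour pressure of dichloromethane near 20 C (approximate)
H_VAP = 28_000.0 # J/mol, molar heat of vaporisation (approximate)
T = 293.0        # K, ambient temperature
RHO = 1330.0     # kg/m^3, liquid density (approximate)

# Clausius-Clapeyron: dP/dT = P * H_vap / (R * T^2)
dP_per_K = P_VAP * H_VAP / (R * T**2)

delta_T = 1.0                        # assumed evaporative cooling of the head, K
delta_P = dP_per_K * delta_T         # pressure drop inside the head, Pa
column_height = delta_P / (RHO * G)  # liquid column that pressure difference supports, m

print(f"~{delta_P:.0f} Pa drop, lifting ~{column_height * 100:.0f} cm of liquid")
```

Even a single degree of evaporative cooling is enough, on these assumed numbers, to lift the working fluid a dozen centimetres or so, which is why such a small temperature difference can tip the bird.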
WHEN THE RESULTS SAY LOW BLOOD CALCIUM
Calcium is used as a messenger to activate enzymes and regulate all sorts of body functions. Calcium is such a crucial component of our biochemistry that virtually any complete blood panel, whether human or veterinary, will include a measurement of calcium. Our bodies go to tremendous lengths to regulate our blood calcium levels within a very narrow range. We need a storage source to draw upon when we need more circulating calcium, as well as a system to unload the excess.
HOW CALCIUM IS ORGANIZED IN OUR BODIES:
Calcium exists in several states in our bodies depending on whether it is being used or stored. “Ionized Calcium” is circulating free in the bloodstream and is active or ready to be used in one of the numerous body functions requiring calcium. The amount of ionized calcium in the blood is tightly regulated. Too much is dangerous. Too low is dangerous. About 50% of blood calcium is present as ionized calcium.
“Bound Calcium” is also circulating in the bloodstream, but it is not floating around freely. Instead, it is carried by molecules of albumin (a blood protein whose job is to transport substances that don't freely dissolve in blood) or complexed with other ions. About 40% of blood calcium is bound (i.e. carried by albumin or complexed with another ion). Ionized calcium and bound calcium added together are called "total calcium." This value is reported on most blood chemistry panels. Total calcium refers to the total calcium in the bloodstream, not the total calcium in the body.
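To make the arithmetic concrete (a simple illustration using the approximate percentages above, not a clinical correction formula), a reported total calcium can be split into its rough fractions:

```python
def calcium_fractions(total_mg_dl: float) -> dict:
    """Split a total calcium value using the approximate proportions quoted above:
    ~50% ionized (free, active), ~40% bound (albumin-carried or complexed),
    with the remainder not broken out in the text."""
    return {
        "ionized_mg_dl": 0.50 * total_mg_dl,
        "bound_mg_dl": 0.40 * total_mg_dl,
        "remainder_mg_dl": 0.10 * total_mg_dl,
    }

print(calcium_fractions(10.0))  # e.g. a total of 10 mg/dl -> ~5 ionized, ~4 bound
```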
Calcium is also stored in the minerals of bone. We do not usually think of bone as more than just scaffolding but living bone is a surprisingly active tissue. One of its functions is to store calcium and when calcium is needed, it can be mobilized from the bone. Normally there is plenty of calcium and such mobilization does not significantly weaken the bone structure but if excess calcium is mobilized, bone can be depleted and softened.
ADJUSTING CALCIUM LEVELS
When the body needs to raise blood ionized calcium levels, the sources it may draw from are the bones (where calcium is stored as mineral), and the intestine (where the calcium we eat enters our bodies). We can regulate how much dietary calcium is allowed to enter from the GI tract. We can cause our bones to relinquish stored calcium quickly or slowly as our needs dictate.
What keeps calcium from rising higher and higher? Calcitriol (activated vitamin D) shuts off PTH (parathyroid hormone) production in the parathyroid glands, while PTH is necessary for the activation of vitamin D. Essentially, these two hormones shut each other off.
The sequence of events might be this: blood ionized calcium begins to drop. The parathyroid glands sense this and release PTH. Ionized calcium begins to rise. When PTH levels are high enough, vitamin D is activated. Ionized calcium begins to rise more. When enough vitamin D has been activated, the parathyroid glands shut off PTH production. When PTH levels are low enough, vitamin D activation ceases and calcium levels drop again.
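A toy, discrete-time sketch of that push-pull loop is below. Every number, threshold and update rule is invented purely to illustrate the behaviour described above (low calcium raises PTH, enough PTH activates vitamin D, both raise calcium, and each hormone eventually shuts the other off); none of them are physiological constants.

```python
calcium = 8.5    # mg/dl, starting a little low
pth = 0.0        # parathyroid hormone, arbitrary units
vitamin_d = 0.0  # activated vitamin D (calcitriol), arbitrary units

for step in range(10):
    # Parathyroid glands secrete PTH while calcium is low and calcitriol hasn't shut them off
    if calcium < 10.0 and vitamin_d < 1.0:
        pth += 1.0
    else:
        pth = max(0.0, pth - 1.0)

    # Sufficient PTH activates vitamin D; without PTH the active form decays away
    if pth >= 2.0:
        vitamin_d += 1.0
    else:
        vitamin_d = max(0.0, vitamin_d - 1.0)

    # Both hormones mobilise calcium (from bone and gut); calcium otherwise drifts downward
    calcium += 0.3 * pth + 0.3 * vitamin_d - 0.5

    print(f"step {step}: Ca={calcium:4.1f}  PTH={pth:3.1f}  vitD={vitamin_d:3.1f}")
```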
There are a number of conditions that can interfere with the above system. Vitamin D deficiency, kidney disease, too many puppies nursing for a small mother dog. We will review the classic causes in a moment but first, the symptoms.
Without calcium, muscle contraction becomes abnormal and the nervous system becomes more excitable. Seizures (called “hypocalcemic tetany”) can result. This type of seizure occurs when the calcium level drops below 6 mg/dl; in dogs (but not cats) it seems to be associated with exercise during the hypocalcemic state. Other symptoms include: nervousness, disorientation, drunken walk, fever, weak pulses, excessive panting, muscle tension, twitches and tremors. Cats tend to show more listlessness than dogs and also tend to raise their third eyelids. Painful muscle cramping occurs, which can lead a pet to become aggressive. If calcium levels drop to 4 mg/dl or below, death generally results.
CHRONIC RENAL FAILURE
Antifreeze (ethylene glycol) poisoning leads to an almost untreatable acute kidney failure. Part of the syndrome can include low blood calcium.
PARATHYROID HORMONE (PTH) DEFICIENCY - NOT COMMON BUT STILL A CLASSIC DISEASE
The average age of onset for PTH deficiency is about 5 years for dogs, and the most frequently identified breeds are the toy poodle, the miniature schnauzer, the Labrador retriever, the German shepherd dog, the dachshund, and the entire terrier group.
HOW DOES ONE GET A PARATHYROID DEFICIENCY?
DIAGNOSIS OF HYPOCALCEMIA
As discussed, there are many causes of low blood calcium besides hypoparathyroidism: low albumin levels, kidney failure, pancreatitis, antifreeze poisoning, exposure to a phosphate enema, low magnesium, nutritional deficiency (especially the infamous "all meat diet"), nursing a litter, and the list continues.
A basic blood panel and urinalysis are ordered for the medical work-up of most conditions. If calcium is low and phosphorus is high, then the patient is either in kidney failure or has hypoparathyroidism. These two conditions are readily distinguished by the other blood test results.
If for some reason it is not clear which condition the patient has, a PTH blood level will settle the question. If the PTH level is low, then the patient truly has primary hypoparathyroidism and will require lifelong treatment and monitoring (vs. a more temporary calcium problem). PTH levels must be interpreted in the context of the low calcium, so they must be drawn before therapy is started.
Low magnesium levels in the body cause a secondary hypoparathyroidism so it is important to run a magnesium level at some point in the work-up to rule this condition out.
If the patient is having an acute crisis from the seizures and twitches and/or the calcium level is dangerously low, hospitalization will be needed and calcium will be required intravenously.
After the crisis has been overcome, or if the patient is stable to start with, oral calcium and vitamin D supplementation, the basis of long-term therapy, can be started. These two oral medications take up to 4 days to show an effect, so many patients must receive calcium in the hospital intravenously or under the skin during this period. Receiving injections under the skin is vastly less expensive than hospitalization, but the occasional patient develops very inflamed calcium deposits under the skin.
There are three forms of Vitamin D which can be used for long-term management of this condition: Vitamin D2 (ergocalciferol), DHT (Dihydrotachysterol), and Vitamin D3 (Calcitriol).
Vitamin D2 is an over-the-counter vitamin D supplement readily available where nutritional supplements are sold. It is not recommended to treat hypoparathyroidism and here is why: when it is first delivered into the body, it is stored in fat (not used as active vitamin D in the blood). This means that before it can have any effect at all, the body's fat stores must be filled to capacity with Vitamin D2. Only after the body's fat stores are filled will it circulate. This means many weeks of injectable calcium before switching to oral medication. Further, if problems occur and calcium levels get too high, it means many weeks before the fat stores deplete adequately to bring the calcium level down. Treatment of hypoparathyroidism requires the ability to effect faster changes in blood calcium levels than Vitamin D2 can manage.
DHT (Dihydrotachysterol) has a much faster onset of action (1-7 days) but if there is a problem it can take 4-21 days to get the calcium level lowered. Occasionally animals seem to be resistant to the pill form of this medication so liquid seems to be best.
Calcitriol is the first choice medication for managing hypoparathyroidism. It is generally given twice a day and has its maximum effect in 1-4 days. If calcium levels get too high, they will drop in 1-14 days after discontinuing this medication. Calcitriol is made in capsules for human use so a compounding pharmacy is generally needed to make a dosing size that is appropriate for pets.
Too much blood calcium causes kidney failure and too little causes seizures. Blood calcium is normally tightly regulated, and the goal of treatment is to keep it within the normal range (8-9 mg/dl). The stable patient with hypoparathyroidism should come in quarterly for a calcium level to make sure no problems are occurring and no dose adjustments are needed. If the calcium level is at an undesirable level, dosing changes are made gradually to correct it.
Signs at home that calcium is getting too high include vomiting, diarrhea, excess water consumption, and listlessness. If the calcium level becomes too high, the patient may require hospitalization and fluid therapy, or simply discontinuation of the medication, depending on how far out of the desired range the calcium goes.
Alkanes (Paraffins), Methane and Nomenclature of organic compounds
Alkanes are open-chain aliphatic hydrocarbons in which the carbon atoms are joined by single bonds of the sigma type, so they are saturated. Methane is the first member and is considered the simplest organic compound. The word paraffin comes from Latin and means unreactive.
They are saturated open-chain aliphatic hydrocarbons (paraffins) with single covalent bonds between carbon atoms, and their reactions proceed by replacement (substitution). All alkanes are characterized by the general formula (CnH2n+2), where (n) is the number of carbon atoms, so the total number of atoms in a molecule is 3n + 2.
Alkanes are relatively chemically inactive because their carbon atoms are joined by single sigma bonds, which are strong and difficult to break. Each compound exceeds the previous one by a (–CH2–) methylene group, and fractional distillation is the method used to separate alkanes from one another in crude petroleum.
Alkane names end with the suffix (-ane), which indicates that the compound belongs to the alkane series; the prefix of the name indicates the number of carbon atoms in the molecule, for example meth- = 1, eth- = 2, prop- = 3, but- = 4, pent- = 5 and so on. Alkanes form a homologous series.
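Because every alkane follows the CnH2n+2 pattern and takes its name from a carbon-count prefix plus "-ane", the bookkeeping can be written in a few lines of code. The sketch below is only illustrative and covers the first ten straight-chain members.

```python
# Illustrative sketch: name and formula of the straight-chain alkane with n carbon atoms.
PREFIXES = {1: "meth", 2: "eth", 3: "prop", 4: "but", 5: "pent",
            6: "hex", 7: "hept", 8: "oct", 9: "non", 10: "dec"}

def alkane(n):
    name = PREFIXES[n] + "ane"
    formula = f"CH{2 * n + 2}" if n == 1 else f"C{n}H{2 * n + 2}"  # general formula CnH2n+2
    total_atoms = 3 * n + 2                                        # n carbons + (2n + 2) hydrogens
    return name, formula, total_atoms

for n in range(1, 6):
    print(alkane(n))
# ('methane', 'CH4', 5), ('ethane', 'C2H6', 8), ('propane', 'C3H8', 11), ...
```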
A homologous series is a set of compounds that share the same general molecular formula and chemical properties, show gradually changing physical properties, and in which each compound exceeds the previous one by a (–CH2–) group; alkanes, alkenes and alkynes are examples.
A radical is an organic atomic group that cannot exist alone; it must be connected to another element or group. The aryl radical (Ar–) is produced by removing one hydrogen atom from an aromatic compound; when a hydrogen atom is removed from benzene, the resulting radical is called the phenyl radical (C6H5–).
The alkyl radical (R–) is an organic group that is not found alone. It is derived from the corresponding alkane by removing one hydrogen atom. Alkyl radicals are given the symbol "R" and have the general formula (CnH2n+1); their names are derived from the corresponding alkane by replacing the suffix (-ane) with (-yl), as in methyl (CH3), ethyl (C2H5), propyl (C3H7) and butyl (C4H9).
Nomenclature of organic compounds
IUPAC stands for the International Union of Pure and Applied Chemistry. The IUPAC system is a method for naming organic compounds based on the number of carbon atoms in the longest continuous carbon chain.
What is the difference between the common name and the IUPAC name of organic compounds?
- The common name: the nomenclature of the organic compounds depends on their sources in nature.
- The IUPAC name: the nomenclature of the organic compound follows the IUPAC system, which enables everyone to identify the precise structure of the compound.
Nomenclature of Alkanes (IUPAC system)
Historically, alkanes were called paraffins (the common name). The name of a hydrocarbon is determined according to the longest continuous carbon chain, which may be linear or branched. If the longest hydrocarbon chain is free from any branches or side chains, the carbon atoms may be numbered from either side (left or right).
If the longest hydrocarbon chain is attached to an alkyl group or to any atom other than hydrogen, the numbering of carbon atoms in the hydrocarbon chain begins from the side nearer to the branch. The name begins with the number of the carbon atom from which the branch arises, then the name of the branch, and ends with the name of the alkane.
If the same side group is repeated in the hydrocarbon chain, the prefix di–, tri– or tetra– is used to indicate the number of repetitions. If the branch is a halogen atom such as chlorine or bromine, or a group such as nitro (–NO2), its name ends with the letter o, so we say chloro, bromo or nitro. If the side groups are different (alkyl groups and halogens), the groups are arranged according to their alphabetical names, and the locant numbers are chosen so that their sum is the lowest possible value.
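As an illustration of how these rules combine, the hedged sketch below assembles a name from a chosen parent alkane and a list of (locant, substituent) pairs. It assumes the locants have already been chosen to give the lowest sum and handles only the simple cases described above (for instance, it does not ignore the di-/tri- prefixes when alphabetizing).

```python
from collections import defaultdict

MULTIPLIERS = {2: "di", 3: "tri", 4: "tetra"}

def iupac_name(parent, substituents):
    """parent: e.g. 'butane'; substituents: list of (locant, name), e.g. [(2, 'methyl'), (3, 'methyl')]."""
    groups = defaultdict(list)
    for locant, name in substituents:
        groups[name].append(locant)
    parts = []
    for name in sorted(groups):                         # different side groups listed alphabetically
        locants = ",".join(str(x) for x in sorted(groups[name]))
        prefix = MULTIPLIERS.get(len(groups[name]), "")  # di-, tri-, tetra- for repeated groups
        parts.append(f"{locants}-{prefix}{name}")
    return "-".join(parts) + parent

print(iupac_name("butane", [(2, "methyl"), (3, "methyl")]))   # 2,3-dimethylbutane
print(iupac_name("propane", [(1, "chloro"), (2, "methyl")]))  # 1-chloro-2-methylpropane
```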
Unsaturated compounds (double or triple bond)
Start numbering from the side nearest to the double or triple bond, even if an alkyl radical is attached to the unsaturated hydrocarbon; the multiple bond takes priority in the numbering.
Methane, considered the simplest organic compound, forms about 50–90% of natural gas found under the earth's crust or accompanying crude petroleum. It is also found in coal mines, which may suffer explosions as a result of its ignition.
Methane is sometimes called marsh gas because it bubbles up from the bottom of swamps as a result of the decay of organic matter. It is also found in atmospheric air (about 0.00022%).
It is believed that methane was a main component of the earth's early atmosphere, which consisted of methane (CH4), ammonia (NH3), H2 and water vapour (reducing agents). Over time these gases were exposed to UV radiation and reacted, changing into O2 and N2, which converted the atmosphere from a reducing atmosphere into an oxidizing one that supports combustion.
Preparation of methane gas in the lab
Methane can be prepared in the lab by dry distillation of anhydrous sodium acetate with soda lime (NaOH and CaO). Soda lime is a mixture of NaOH (caustic soda) and quicklime (CaO); the quicklime does not take part in the reaction but helps reduce the melting point of the reaction mixture.
CH3COONa(s) + NaOH(s) → CH4 (g) + Na2CO3(s)
Dry distillation is the process used to prepare methane in the lab. The apparatus must be free of air because a mixture of methane and air burns explosively. Soda lime is preferred over caustic soda alone for preparing methane by dry distillation of sodium acetate because the CaO decreases the melting point of the sodium acetate and absorbs water vapour.
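As a quick arithmetic check on the equation above, here is a hedged back-of-the-envelope estimate of how much methane a given mass of sodium acetate could yield. It assumes the reaction goes to completion and that methane behaves as an ideal gas, which in practice it only approximately does.

```python
# Illustrative stoichiometry for CH3COONa + NaOH -> CH4 + Na2CO3 (1:1 mole ratio).
M_SODIUM_ACETATE = 82.03   # g/mol, anhydrous CH3COONa
M_METHANE = 16.04          # g/mol

def methane_yield(grams_acetate):
    moles = grams_acetate / M_SODIUM_ACETATE   # moles of sodium acetate (= moles of CH4)
    mass_ch4 = moles * M_METHANE               # grams of methane produced
    volume_stp = moles * 22.4                  # litres at STP, ideal-gas assumption
    return mass_ch4, volume_stp

mass, volume = methane_yield(10.0)
print(f"10 g of sodium acetate -> about {mass:.2f} g ({volume:.2f} L at STP) of methane")
# roughly 1.96 g, or about 2.7 L at STP
```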
Properties of Methane
- It does not remove the colour of bromine solution, as it cannot react by addition owing to the absence of a double (π) bond.
- It is tasteless, colourless and odourless. |
Tantalising experiments that seem to have made human blood cells start producing insulin have raised the prospect of a new treatment for diabetes. Although the treatment has only been tried in mice so far, it might mean people can be cured with implants of their own cells.
But even the researcher whose team carried out the work says he will remain sceptical until other groups have repeated it. “If it’s true, it would be very nice, but the data is very preliminary,” cautions Bernat Soria, chairman of the European Stem Cell Network.
Juvenile-onset diabetes is caused by the immune system destroying the insulin-producing beta cells in the pancreas. It can now be treated by transplanting beta cells taken from cadavers, using a technique called the Edmonton protocol. But many recipients suffer severe side effects because of the drugs they have to take to prevent their immune systems rejecting the foreign cells. Also, the supply of beta cells is limited – only 500 people have been treated so far.
Several teams around the world have now managed to derive insulin-producing cells from human embryonic stem cells (ESCs). While this might one day end the shortage of beta cells for transplantation, it is not a perfect solution.
One problem is that there is no easy way to derive ESCs from individual patients, so obtaining matching cells might not be possible and immunosuppressant drugs would likely still be needed. And even if the beta cells were a perfect match, without drugs they might still be destroyed by the same autoimmune reaction that killed patients’ original beta cells.
Soria’s team at the Institute of Bioengineering in Alicante, Spain, was the first to obtain insulin-producing cells from mouse ESCs and is also working with human ESCs. Recently, together with Fred Fandrich of the University of Kiel in Germany, the team tried a different approach: exposing human white blood cells to the same growth factors it had applied to mouse ESCs. It worked. “We convinced white blood cells to produce insulin,” Soria says.
When the transformed cells were injected into diabetic mice, their blood sugar levels returned to normal, Soria told a recent conference on stem cells in Edinburgh, UK. After a week the effect disappeared, because the animals’ immune systems destroyed the human cells. The full results will appear soon in Gastroenterology.
The next step is to find out if insulin-producing cells can be derived from the blood of people with diabetes, and if they will be stable after re-implantation. One great advantage of the approach, if it works, is that white blood cells are very easy to obtain.
It is not yet clear whether the insulin-producing cells have actually become beta cells or another cell type that has been persuaded to make insulin, says Chris Burns of King’s College London, who studies beta cells. This should not matter as long as the cells produce normal insulin in response to rising blood sugar levels, he says.
“If this is the case, then this would be a significant advance.” It could even be an advantage if the cells are not true beta cells, as it means they might escape the autoimmune reaction that causes juvenile diabetes.
Doppler radar uses the Doppler effect to measure the speed (radial velocity) of targets. Radial velocity is how fast the target is coming towards, or going away from, the radar. The Doppler effect will shift the received frequency up or down, based on the radial velocity of a target in the beam. This gives a direct and highly accurate measurement of target velocity, but only the radial velocity. Doppler radar by itself doesn't measure the azimuthal velocity (how fast it is going in other directions).
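The relationship the radar exploits is simple to state: for a target moving with radial velocity v_r, the received echo is shifted by roughly f_d = 2 v_r f_0 / c (the factor of two arises because the wave is Doppler-shifted on the way out and again on the way back). A small illustrative calculation follows; the 5 GHz carrier and 20 m/s target speed are made-up example values, not parameters of any particular radar.

```python
C = 3.0e8  # speed of light, m/s

def doppler_shift(radial_velocity_ms, carrier_hz):
    """Approximate two-way Doppler shift in Hz for a target at the given radial velocity."""
    return 2.0 * radial_velocity_ms * carrier_hz / C

# Example: a 5 GHz carrier and rain moving toward the radar at 20 m/s.
print(doppler_shift(20.0, 5.0e9))   # about 667 Hz
```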
Doppler radars may be Coherent Pulsed, Continuous Wave, or Frequency Modulated. |
Commensalism is a symbiotic relationship between two organisms where one organism benefits and the other is not affected. Other types of symbiotic relationships are mutualism, where both benefit from each other, and parasitism, where one benefits and the other is harmed. While all three are common in the rain forests throughout the globe, commensalism is the least common. However, there are many animals that display this type of relationship in the rain forests.
TL;DR (Too Long; Didn't Read)
Many animals show commensalism in the forest. These include frogs, vultures, sloths, ant birds and a variety of insects including dung beetles, flies, termites and flower mites.
Frogs Shelter Under Plants
Many frogs, like the poison dart frog and the Gaudy Leaf Frog, in rain forests throughout the world show commensalism with bromeliads (rain-forest plants that grow close to the ground on or near trees) and other plants in the rain forests. The frogs benefit by using the leaves of the bromeliad as shelter from sun and rain. The bromeliad is unaffected by the frogs.
Furry and Feathered Animals Plant Trees
Many animals in the rain forest have a relationship showing commensalism with trees and plants throughout the forests. While animals who eat plant seeds are benefiting themselves, commensalism is happening when seeds travel on animals' fur or feathers without the animals realizing it. Often, a seed or a seed pod will fall onto an animal, like a sloth, who then walks through the forest. The seed will then fall off and plant itself, growing a new tree. The plants are benefiting and the animals are unharmed in this example of commensalism.
Scavengers Clean Up
When an animal dies, it will no longer be affected or harmed by what happens to its body. In that respect, any plant that benefits from the minerals of a decaying animal is showing commensalism with that animal. Vultures and other scavenger animals who benefit from eating dead animals in the rain forest have a relationship of commensalism with those animals as well, since they benefit without affecting the dead animals.
Dung Provides Shelter
When an animal defecates, other animals like dung beetles and flies benefit by receiving nutrients and shelter from the dung. Plants also benefit from the animals' dung, as it replenishes the soil and helps provide nutrients for new plants.
Termites Use Dead Trees
Termites in rain forests eat fruits and vegetables that have fallen from the trees. They also use many of the dead, fallen branches from the trees to build shelters, which doesn't affect the trees but benefits the termites. Termites also show commensalism using the dung to help build their shelters.
Sloths Play Host
Sloths are on the unaffected side of commensalism, while many species of moths, mites and beetles are on the benefiting side. These bugs actually live on and inside sloths' fur and benefit by getting shelter. They also benefit by eating the algae that grows on the fur. Although the sloth may benefit from this, sloths will also clean themselves when necessary and aren't really affected by the bugs at all.
Ants Help Birds Find Food
Ant birds have a commensalism relationship with army ants. As the ants travel through the ground floor of the forests, flies, beetles and other flying insects hurry out of the ants' way and the ant birds are there to catch them. The birds know the ants will kick up other insects and the ants are unaffected by the birds' presence.
Flower Mites Hitchhike on Hummingbirds
Flower mites eat pollen, but instead of traveling the long distance from flower to flower alone in the rain forest, they hitchhike on other pollen-eaters: hummingbirds. The flower mites ride in the nasal airways of the hummingbirds from flower to flower. This doesn't affect the hummingbirds at all and the flower mites benefit. |
Nuclear Batteries Seminar Report
Published on July 25, 2016
Micro electro mechanical systems (MEMS) comprise a rapidly expanding research field with potential applications varying from sensors in air bags, wrist-worn GPS receivers, and matchbox-sized digital cameras to more recent optical applications. Depending on the application, these devices often require an on-board power source for remote operation, especially in cases requiring operation for an extended period of time. In the quest to boost micro-scale power generation, several groups have turned their efforts to well-known energy sources, namely hydrogen and hydrocarbon fuels such as propane, methane, gasoline and diesel.
Some groups are developing micro fuel cells that, like their macro-scale counterparts, consume hydrogen to produce electricity. Others are developing on-chip combustion engines, which actually burn a fuel like gasoline to drive a minuscule electric generator. But all these approaches have difficulties regarding low energy densities, elimination of by-products, down-scaling and recharging. All these difficulties can be overcome to a large extent by the use of nuclear micro-batteries.
Radioisotope thermoelectric generators (RTGs) exploited the extraordinary potential of radioactive materials for generating electricity. RTGs are particularly used for generating electricity in space missions, relying on a process known as the Seebeck effect. The problem is that RTGs don't scale down well, so scientists had to find other ways of converting nuclear energy into electric energy. They have succeeded by developing nuclear batteries.
Nuclear batteries use the incredible amount of energy released naturally by tiny bits of radioactive material, without any fission or fusion taking place inside the battery. These devices use thin radioactive films that pack in energy at densities thousands of times greater than those of lithium-ion batteries. Because of this high energy density, nuclear batteries are extremely small. Considering the small size and shape of the battery, the scientists who developed it fancifully call it the "DAINTIEST DYNAMO". The word 'dainty' means pretty.
Types of nuclear batteries
Scientists have developed two types of micro nuclear batteries. One is junction type battery and the other is self-reciprocating cantilever. The operations of both are explained below one by one.
1. JUNCTION TYPE BATTERY
This kind of nuclear battery directly converts the high-energy particles emitted by a radioactive source into an electric current. The device consists of a small quantity of Ni-63 placed near an ordinary silicon p-n junction - a diode, basically.
As the Ni-63 decays it emits beta particles, which are high-energy electrons that spontaneously fly out of the radioisotope's unstable nucleus. The emitted beta particles ionize the diode's atoms, creating electron-hole pairs that are separated in the vicinity of the p-n interface. These separated electrons and holes stream away from the junction, producing a current.
It has been found that beta particles with energies below 250 keV do not cause substantial damage in Si. The maximum and average energies (66.9 keV and 17.4 keV respectively) of the beta particles emitted by Ni-63 are well below the threshold energy at which damage is observed in silicon. The long half-life (100 years) makes Ni-63 very attractive for remote, long-life applications such as powering spacecraft instrumentation. In addition, the beta particles emitted by Ni-63 travel a maximum of 21 micrometers in silicon before being absorbed; if the particles were more energetic they would travel longer distances and could escape the device. All of these properties make Ni-63 ideally suited for use in nuclear batteries.
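To get a feel for the numbers, here is a hedged back-of-the-envelope estimate of the current such a source could produce. It uses the 17.4 keV average beta energy quoted above, assumes every particle deposits its full energy in the silicon with no recombination or escape losses, and takes roughly 3.6 eV per electron-hole pair in silicon as a typical textbook value; the 1 millicurie source strength is simply an assumed example, not a figure from the report.

```python
E_AVG_EV = 17.4e3           # average Ni-63 beta energy (from the text), in eV
EV_PER_PAIR = 3.6           # approx. energy to create one electron-hole pair in Si (typical value)
ELECTRON_CHARGE = 1.602e-19 # coulombs

def max_current(activity_bq):
    """Upper-bound short-circuit current, ignoring recombination and escape losses."""
    pairs_per_decay = E_AVG_EV / EV_PER_PAIR      # roughly 4800 pairs per beta particle
    return activity_bq * pairs_per_decay * ELECTRON_CHARGE

one_millicurie = 3.7e7      # decays per second
print(f"{max_current(one_millicurie) * 1e9:.1f} nA")   # roughly 29 nA
```

Even as an optimistic upper bound, the current is tiny, which is why maximizing junction area (as described below) matters so much.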
Since it is not easy to microfabricate solid radioactive materials, a liquid source is used instead for the micromachined p-n junction battery. A diagram of a micromachined p-n junction is shown below.
As shown in the figure, a number of bulk-etched channels have been micromachined into this p-n junction. Compared with planar p-n junctions, the three-dimensional structure of the device allows a substantial increase in junction area, and the micromachined channels can be used to store the liquid source. The p-n junction in question has 13 micromachined channels and a total junction area of 15.894 sq. mm (about 55.82% more than a planar p-n junction). This is very important since the current generated by the powered p-n junction is proportional to the junction area.
In order to measure the performance of the 3-dimensional p-n junction in the presence of a radioactive source, a pipette is used to place 8 µl of liquid Ni-63 inside the channels micromachined on top of the p-n junction. It is then covered with a black box to shield it from the light. The electric circuit used for these experiments is shown below.
2. SELF-RECIPROCATING CANTILEVER

This concept involves a more direct use of the charged particles produced by the decay of the radioactive source: the creation of a resonator by inducing movement due to attraction or repulsion resulting from the collection of charged particles. As the charge is collected, the deflection of a cantilever beam increases until it contacts a grounded element, thus discharging the beam and causing it to return to its original position. This process repeats as long as the source is active, and it has been tested experimentally. The following figure shows the experimental setup.
The self-reciprocating cantilever consists of a radioactive source of very small thickness and an area of 4 square mm. Above this thin film there is a cantilever beam made of a rectangular piece of silicon, whose free end is able to move up and down. A copper sheet is attached to the cantilever beam, and a piezoelectric plate is mounted on top of the cantilever. For this reason, the self-reciprocating cantilever type of nuclear battery is also called a radioactive piezoelectric generator.
First, beta particles, which are high-energy electrons, fly spontaneously from the radioactive source. These electrons collect on the copper sheet, which becomes negatively charged, while the source, having lost electrons, is left positively charged. Thus an electrostatic force of attraction is established between the silicon cantilever and the radioactive source, and this force bends the cantilever down.
The piece of piezoelectric material bonded to the top of the silicon cantilever bends along with it. The mechanical stress of the bend unbalances the charge distribution inside the piezoelectric crystal structure, producing a voltage in electrodes attached to the top and bottom of the crystal.
After a brief period – whose length depends on the shape and material of the cantilever and the initial size of the gap – the cantilever comes close enough to the source to discharge the accumulated electrons by direct contact. The discharge can also take place through tunneling or gas breakdown. At that moment, electrons flow back to the source and the electrostatic attractive force vanishes. The cantilever then springs back and oscillates like a diving board after a diver jumps, and the recurring mechanical deformation of the piezoelectric plate produces a series of electric pulses.
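A crude way to see why the period depends on the gap and the source is to note that the cantilever cannot snap down until enough charge has accumulated, so the time between discharges is roughly the required charge divided by the collected current. The sketch below is purely illustrative; the pull-in charge and collected current are assumed example values, not measurements from this experiment.

```python
def reciprocation_period(pull_in_charge_c, collected_current_a):
    """Approximate time between discharges: charge needed to snap the beam down / net collected current."""
    return pull_in_charge_c / collected_current_a

# Assumed example values: ~2 nC needed to close the gap, ~1 nA collected from the source.
period_s = reciprocation_period(2e-9, 1e-9)
print(f"period ~ {period_s:.0f} s")   # about 2 s between pulses
```

A larger gap or a stiffer cantilever raises the charge needed, and a weaker source lowers the current, so both stretch the interval between pulses.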
This is a written and spoken project. (5-page essay; 10-minute informative
speech; 20% of grade.)
Objective: “Great oratory has three components: style, substance, and impact.”
According to Ted Sorensen, the late speechwriter for JFK, “Speeches are great
when they reflect great decisions.” The purpose of this assignment is to
effectively research, organize, and analyze an important speech in history, and
then share your knowledge in a formal lecture.
To make the project easier, I have put together this step-by-step outline of what
you have to do:
1. Select a famous historic speech to analyze that is academically
challenging. All topics must be approved by me. Refer to course notes for
2. Write an outline of who, what, where, when, and why this speech is
important, and how and why it was effective. Please refer to LINCOLN AT
GETTYSBURG by Garry Wills for inspiration. Innumerable famous
speeches can also be viewed online.
3. Begin your research by answering the question of why this speech is/was
important in the context of its era. You will need a minimum of three
sources for full credit (Wikipedia may NOT be used as a source.) Librarian
Jean Hine is an invaluable resource; seek her advice on credible sources.
4. Write a 5-page, double-spaced informational academic essay with an
introduction, body, and conclusion. Analyze the topic of the speech and
the qualities that made this speaker an effective communicator
(commanding, personable, eloquent, expressive, charismatic, authoritative
and so on). Study his/her nonverbal communication and the place and setting in which the
speech was delivered. Include the vivid details of the place, time,
audience: where and when did this speech occur? Pre/post TV, Internet,
radio? Public, private address? How did the media respond? Describe the
5. Include a thorough description of the context in which this speech was
delivered. Back your point of view with facts. Provide a list of at least three
sources that exist in print form (though you may consult them online); this
is page six. Use proper grammar and punctuation. (Refer to The Elements
of Style). Proofread. Draft one DUE: March 28.
6. For this 10-minute speech, you are the professor or TEDTalk speaker, an
expert in command of your material. You may use a Power Point
Presentation or any visual or audio aid to enhance your analysis and
engage and educate your audience. Use facts, a narrative arc, vivid
details, an element of surprise, and historic context. Speech day: April 18
7. You must incorporate an excerpt (or clip) of your chosen speech into the
body of your lecture be it visual, audio, or draw on your own skills as
orator and recite the excerpt out loud. You will lose points if your excerpt
takes up more than 2% of your 10-minute lecture, but you are free to go
over the allotted ten minutes.
8. You will prepare an outline of your speech (your talking points). You may
deliver a 20 or 30 minute speech with prior approval. Practice, practice, practice.
9. Deliver your speech to the class on the assigned date.
Other important criteria:
Your speech must be 10 minutes to receive full credit. You can refer to your
notes, but reading your speech verbatim (or reading paragraphs from a
screen/PPT) will result in a low grade. Please refer to your sources during
your speech when appropriate (this will give you credibility and authority).
Evaluating the Historic Speech Project:
You will submit:
• Draft one, due Monday, March 28
• 5-page essay, that is double-spaced and proofread for grammatical errors; a
bibliography of source notes (this is an additional page—page 6) and a title
page (page 7). Use these guidelines as a checklist and go through and mark
off each requirement so you don’t skip steps by mistake. You will lose points
for not following these guidelines. FINAL ESSAY DRAFT Due April 13
•Speech notes/outline/talking points delivered on Speech day: April 18.
5-page essay 30 points
10-minute (or longer) speech 20 points
LA: Ask and answer such questions as who, what, where, when, why, and how to demonstrate understanding of key details in a text.
LA: With guidance and support from adults and peers, focus on a topic and strengthen writing as needed by revising and editing.
MATH: Use addition and subtraction within 20 to solve word problems involving situations of adding to, taking from, putting together, taking apart, and comparing, with unknowns in all positions, e.g., by using objects, drawings, and equations with a symbol for the unknown number to represent the problem.
MATH: Understand that the two digits of a two-digit number represent amounts of tens and ones.
MATH: Add within 100, including adding a two-digit number and a one-digit number, and adding a two-digit number and a multiple of 10, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method and explain the reasoning used. Understand that in adding two-digit numbers, one adds tens and tens, ones and ones; and sometimes it is necessary to compose a ten.
SCI: Provide evidence that humans’ uses of natural resources can affect the world around them, and share solutions that reduce human impact.
SS: Identify examples of rights and responsibilities of citizens.
VA: Know the differences among visual characteristics and purposes of art in order to convey ideas.
VA: Select and use subject matter, symbols, and ideas to communicate meaning. |
Prevalence and Demographics: Worldwide
Thalassemia occurs across the globe, but is most prevalent among the following populations:
- Cypriots, Sardinians (Mediterranean region)
- Southeast Asians (Vietnamese, Laotians, Thais, Singaporeans, Filipinos, Cambodians, Malaysians, Burmese, and Indonesians)
- Middle Easterners (Iranians, Pakistanis, and Saudi Arabians)
- Transcaucasians (Georgians, Armenians, and Azerbaijanis)
There are two main types of thalassemia trait: alpha thalassemia trait and beta thalassemia trait. Individuals who have beta thalassemia trait have one normal beta globin gene and one that is altered such that it makes little or no beta globin. There are subtypes of alpha thalassemia trait. Individuals with 'silent alpha thalassemia trait' are missing one alpha globin gene. When two alpha globin genes are missing, an individual is said to have 'alpha thalassemia trait'. This can occur in two different ways. The 'cis' type of alpha thalassemia trait occurs when the two genes are missing from the same chromosome. This type is most common in those of Southeast Asian, Chinese, or Mediterranean ancestry. The 'trans' type of alpha thalassemia trait occurs when the two genes are missing from different chromosomes. This type is most common in African Americans.
Thalassemia trait is generally not thought to cause health problems, although women with the trait may be more likely to develop anemia of pregnancy than women without the trait. Obstetricians sometimes treat this with folate supplementation. Most types of thalassemia traits cause the red blood cells to be smaller in size than usual, a condition called microcytosis. Sometimes this is inaccurately referred to as 'low blood'. Since iron deficiency is the most common cause of microcytosis, doctors sometimes mistakenly prescribe iron supplementation to individuals with thalassemia trait. Therefore, before prescribing iron supplements, doctors should rule out thalassemia trait and/or perform lab tests to evaluate iron levels. A person with thalassemia trait can also be iron deficient, but if he or she is not, iron supplements may result in excess body iron. Excessive iron can deposit in many areas of the body, causing organ damage in the long-term. |
What are 1st graders expected to learn this year? Understand the basic objectives for first grade before developing or choosing an effective first grade homeschool curriculum package and see our top favorites.
You probably decided to homeschool your children because you wanted more control over what they learn. This is understandable, but you will still have to understand the basic objectives to be covered in each grade. Once you know what is required learning for first graders, you can come up with your own lesson plans and even enhance the knowledge you want your children to learn.
Make sure to look up the guidelines and rules specific to your state, since every state has their own guidelines on learning objectives. Some are far more strict than others, but you will always have the option of adding to the list. Following is a quick guide to what first graders typically learn no matter where they live.
There are several objectives and milestones to expect in first grade. To move forward, children must make progress in reading and language arts and mathematics. They will also further their understanding of basic scientific and social studies concepts.
The objectives in your homeschooling approach need to focus on the following areas:
Reading/Language Arts Objectives:
- Identify vowels and consonants
- Identify words with similar sounds for beginning reading skills
- Begin sounding out words and assigning letters to those sounds
- Use pictures to understand simple stories
- Trace fingers over words while someone else reads
- Begin writing basic sentences and paragraphs (misspelled words are to be expected)
In general, first graders are energetic and grow fast. They should actively pursue reading. Their fine motor skills are still developing but your child should be attempting to follow words and lines on a page with their finger.
Practice reading and language skills frequently. A first grader starts to understand the meaning of words and figure out how to decipher what words mean as they read. By the end of first grade, your child should be able to read on their own. Have your child read aloud to facilitate this process.
- Count at least to 100
- Skip count to 100 by 2s, 5s and 10s
- Addition and subtraction with single digit numbers
- Money value for dollars and coins
- Understand basic measurement units (inches, kilometers)
- Telling time on a clock
- Identify basic geometric shapes (triangle, square, etc.)
Math is important and not all children or adults are great at it. Fortunately, first grade math is pretty basic. It does, however, build a foundation that will matter every grade going forward. Addition, subtraction, counting and geometry can be taught by adding physical and online games to your curriculum. Worksheets and flashcards are great, too.
- Understand basic law of gravity (dropping, pushing, lifting things)
- Understand seasons and basics of weather
- Identify parts of the human body
- Explore elements of the natural world
- Explore animal groups and life of different animals
Science is a fascinating subject for children first learning to understand the world around them. They’re at a point where they can distinguish living things from those that aren’t. Classifying and comparing animal types are basic activities. Use weather maps to help children identify patterns, seasons, and daily weather. Discuss storms and also ways to stay safe in thunderstorms, for example. Such lessons can be critical and lifesaving at any point.
Social Studies Objectives:
- Understand the dynamics of family
- Introduce concepts of community and society at large
- Understand basic differences between the past and present
In first grade, children start to learn the difference between themselves and the rest of the world. Begin locally. Talk about the different people in the neighborhood and their roles and cultures. In order to grasp the rest of the world, your child first needs to see the scope of the community around them. Even research the history of your city or town to help them learn how the past associates with the present.
Basic Tips for Teaching First Grade:
- Not every child will come out of first grade reading on their own. Many will be reading simple children’s books by the end of the grade, while others are reading more advanced chapter books and still others are not reading independently at all. What is important at this grade level is for children to start taking words apart, hearing the letter sounds, and identifying words with common sounds. Those not reading independently after first grade will take that next step in early second grade.
- Make use of real money, blocks, dry macaroni, and other things that children can group and count. Children at this age learn better with a hands-on visual approach to math.
- Science and social studies are great areas to enhance your child’s learning objectives and create fun field trips to fit their personality and interests. For instance, a child who shows great interest in animals may want to write a small report about an animal and visit animals at the local zoo.
- Many of the learning objectives for first grade are intended to introduce concepts that will be understood better in second grade. Basic grasping of the concepts is often enough at this grade level.
- Flash cards help children absorb basic addition facts. Make your own or buy them.
Helping Your Child with Reading and Writing
There are several ways to boost your child’s reading and writing skills at this point. Don’t be afraid to intervene if there’s a hurdle, but engage them in a discussion. Here are some of the best ways:
- Ask if there’s context in a reading error. Can they identify smaller words within bigger ones or if what they’re stuck on is similar to something they know? If a child can fix their own mistakes, it helps them learn at a faster pace and feel more rewarded for it.
- Let your child write about their interests. Simple stories, diary entries or writing dates on a calendar are legitimate forms of practice.
- Discuss favorite parts of books or have your child predict what may happen later in the plot. Along with reading aloud, this helps improve comprehension.
- Play word games, which are fun yet are great for building vocabulary, problem solving skills and relating letters to sounds. Even board games can help in these areas and in boosting your first grader’s ability to categorize items.
Teaching a First Grader Math
You can integrate activities for math just as easily as with reading. It is the opportunities you give them that help a child advance these skills. Count down the days till an event on a calendar, or let your child count cans or vegetables and add those on to what you have in the house. There are limitless opportunities for counting and take advantage of them whenever you find something. Also:
- Practice using number picking games, counting change, or finding food items such as grapes or berries. The latter is good because they can choose colors. Ripe fruits are easily identifiable. Your child can learn by counting which items out of a group are good to eat.
- Find anything numerical, such as movie times or weather information, or compare any set of objects based on size or color.
- Use common games such as dice, cards, dominoes or checkers, or something that involves counting spaces on a board.
Other Focal Points
Exposure to different subjects is a critical part of first grade. Social and emotional development must be encouraged as well. You can guide your child along by addressing their feelings. Encourage them to express how they feel and correct their mistakes, while respecting who they are.
Intellectual development is at a crucial stage now too. Bolster your child’s intellectual skills by encouraging them to talk about their experiences and being creative. Conduct listening exercises as these will help lengthen their attention span. By organizing objects, it helps to boost your child’s memory. It’s therefore important to incorporate these factors in your first grade homeschooling curriculum and in your child’s downtime.
What to Look for in a 1st Grade Homeschool Curriculum Package
There are many different curricula for first graders. Even when homeschooling, there may be state requirements to consider. If going it on your own, there are a few things to look for. First, be sure it covers basic subjects such as reading, math, science, history, etc. Some may cover writing, art, or even religion.
A 1st grade homeschool curriculum package that is diverse stylistically can accommodate different learning and teaching styles, rather than restrict your efforts. One that caters to different teaching methods is as important as one that can be adapted to how your child learns. Also pick one that has flexible and relatively simple lessons. If they’re too complex, you’ll be spending too much time organizing and planning, or perhaps skipping lessons to conserve time and energy.
Here are a few examples to get started.
Complete First Grade Homeschool Curriculum Packages:
Homeschool Complete – Created by a homeschooling mom of three with 27 years’ teaching experience in both public and private schools at all levels, you can find quality year-long curriculum, reading curriculum, and unit studies.
Alpha Omega Life Pacs – This program has a lot of fun learning activities and great projects for 1st graders that really help them master the skills that they need to learn this year.
Sonlight Curriculum – This is a great year to try the Sonlight curriculum. It is a lot of reading for both parents and kids but is a favorite among many who use it.
Top Phonics and Reading Curriculum Options:
Hooked on Phonics – This is one of the best programs ever created to teach a child to read. Even if you decide to use it with another program, it is a good investment.
Horizons Phonics and Reading – This curriculum is an excellent way to introduce children to a love of reading. Their spelling program is also a great complement to this option.
1st Grade Science Curriculum Favorites:
Apologia Science – Want your child to LOVE Science? Apologia is great for all elementary ages.
LifePac Science – With a number of fun hands-on activities, your child will love this introduction to many scientific concepts.
Top Math Choices:
RightStart Mathematics – We’ve used this in our household and love it. Read the review to learn more!
Horizons Math – This curriculum may be more advanced than others for 1st grade, but kids love it and its spiral learning method is excellent for gifted students or ones who rise to a challenge.
LifePac Math – Utilizing the mastery learning method, this curriculum is ideal for kids who do their best when they are required to master a skill before moving on.
MathUSee – This program uses manipulatives to help kids understand and “see” how math concepts work.
Great Penmanship Resource:
Horizons Penmanship – One of the best handwriting curriculum choices for 1st graders, this program will really help to teach them to write neatly and is fun, too! |
Dwarf bananas, such as the dwarf cavendish, require the same care as their taller cousins. The only difference between the two varieties is their height. Dwarf bananas reach a height of anywhere between 4 to 7 feet, while taller varieties can grow 12 to 18 feet tall. Being a tropical fruit, bananas require warm conditions to grow. Plant outside in zones 9 and 10. Cooler climates will need to grow dwarf bananas inside of containers to protect them from frosts and freezes. Dwarf bananas are relatively hardy plants that even a novice gardener should have success growing.
Grow dwarf bananas outside in a warm location, such as on the south side of the house, next to a building or a cement driveway. Grow in an area that receives either full sun or at least four hours of sunlight throughout the day.
Plant and grow the dwarf banana in soil that is rich with organic material. Amend the planting site with compost, manure or peat. Tolerant to a wide range of soils, bananas will perform best when grown in rich soil conditions.
Grow dwarf bananas planted in containers in a rich potting mix that has peat moss added to it. Use a container that is approximately 5 to 7 gallons in size, to give the roots room to grow. Be sure the container has drain holes.
Water the dwarf banana regularly, keeping the soil moist but not flooded. Bananas require moderate amounts of water to perform well. Keep container grown plants moist, but not soggy. Do not allow the planting area or container to completely dry out.
Fertilize outdoor dwarf banana plants once per month with an 8-10-8 fertilizer. Apply at a rate of 2 pounds per plant. Spread the fertilizer in a circle extending approximately 4 feet from the banana's trunk. Do not allow the fertilizer to touch the trunk. Fertilize container-grown plants on the same schedule, but with half the amount. Bananas are heavy feeders.
Protect the dwarf banana from freezing temperatures by bringing container grown plants indoors. Cover outdoor trees with blankets or wrap the trunk on taller trees.
Prune off all suckers except one. Allow the main stem to develop and grow bananas so that the plant puts all of its energy there. Cut down the main stem once the bananas are harvested, allowing the remaining sucker to develop and mature. Banana plants die off after they have produced fruit.
Orcas are beautifully adapted to life in the marine environment, but unlike fish, they are not able to meet their water needs by drinking seawater. Sources of fresh water are limited to coastal river inputs and subsurface springs, and although it is unknown at this point to what degree orcas or other whales can utilize those sources, they are certainly not able to rely upon them. So, how do orcas survive without fresh water?
Whales and dolphins have undergone some adaptations to cope with the marine environment- their kidneys are able to remove some of the excess salt that is inevitably swallowed, and their rubbery skin presents a good barrier which helps keep salt out. But there are really only two known ways that the fish eating orcas can get water; one is through the fish and squid they consume, and the other is by using their fat stores.
All vertebrate animals maintain their bodies at about the same salinity, which is a quarter to a third of the salinity of the ocean – so when the orcas can find enough fish, presumably they don’t need any other source of water. When fish become scarce though, the whales use their own blubber for energy, and one byproduct of breaking down their fat is water, at least enough to get by.
Like any other system that uses energy, there is a net loss when the whales don’t have enough to eat – in other words, there is loss both in storing the fat and then using those reserves later to provide sustenance and water. For the orcas, any shortage of fish is also a shortage of water, leaving them in a doubly precarious situation if the shortages extend for any great period of time.
Research on baleen whales indicates that those animals may need 30% more krill (which is saltier than fish) than previously thought when the problem of salt balance is taken into account. Even though orcas are completely different animals from the baleen whales, it does stand to reason that their need to obtain water from fish may drive both the type and the quantity of salmon needed to provide them with both water and nourishment.
“Water, water everywhere, nor any drop to drink.” (From The Rime of the Ancient Mariner by S. T. Coleridge)
Federal government of the United States
- Founding document: United States Constitution
- Jurisdiction: United States of America
- Leader: President of the United States
- Headquarters: The White House
The government of the United States of America is the federal government of the republic of fifty states that constitute the United States, as well as one capital district, and several other territories. The federal government is composed of three distinct branches: legislative, executive, and judicial, whose powers are vested by the U.S. Constitution in the Congress, the President, and the federal courts, including the Supreme Court, respectively. The powers and duties of these branches are further defined by acts of Congress, including the creation of executive departments and courts inferior to the Supreme Court.
The full name of the republic is "United States of America". No other name appears in the Constitution, and this is the name that appears on money, in treaties, and in legal cases to which it is a party (e.g. Charles T. Schenck v. United States). The terms "Government of the United States of America" or "United States Government" are often used in official documents to represent the federal government as distinct from the states collectively. In casual conversation or writing, the term "Federal Government" is often used, and the term "National Government" is sometimes used. The terms "Federal" and "National" in government agency or program names generally indicate affiliation with the federal government (e.g. Federal Bureau of Investigation, National Oceanic and Atmospheric Administration, etc.). Because the seat of government is in Washington, D.C., "Washington" is commonly used as a metonym for the federal government.
The outline of the government of the United States is laid out in the Constitution. The government was formed in 1789, making the United States one of the world's first, if not the first, modern national constitutional republics.
The United States government is based on the principles of federalism and republicanism, in which power is shared between the federal government and state governments. The interpretation and execution of these principles, including what powers the federal government should have and how those powers can be exercised, have been debated ever since the adoption of the Constitution. Some make the case for expansive federal powers while others argue for a more limited role for the central government in relation to individuals, the states or other recognized entities.
Since the American Civil War, the powers of the federal government have generally expanded greatly, although there have been periods since that time of legislative branch dominance (e.g., the decades immediately following the Civil War) or when states' rights proponents have succeeded in limiting federal power through legislative action, executive prerogative or by constitutional interpretation by the courts.
One of the theoretical pillars of the United States Constitution is the idea of "checks and balances" among the powers and responsibilities of the three branches of American government: the executive, the legislative and the judiciary. For example, while the legislative (Congress) has the power to create law, the executive (President) can veto any legislation—an act which, in turn, can be overridden by Congress. The President nominates judges to the nation's highest judiciary authority (Supreme Court), but those nominees must be approved by Congress. The Supreme Court, in its turn, has the power to invalidate as "unconstitutional" any law passed by the Congress. These and other examples are examined in more detail in the text below.
Powers of Congress
The Constitution grants numerous powers to Congress. Enumerated in Article I, Section 8, these include the powers to levy and collect taxes; to coin money and regulate its value; provide for punishment for counterfeiting; establish post offices and roads, issue patents, create federal courts inferior to the Supreme Court, combat piracies and felonies, declare war, raise and support armies, provide and maintain a navy, make rules for the regulation of land and naval forces, provide for, arm and discipline the militia, exercise exclusive legislation in the District of Columbia, and to make laws necessary to properly execute powers. Over the two centuries since the United States was formed, many disputes have arisen over the limits on the powers of the federal government. These disputes have often been the subject of lawsuits that have ultimately been decided by the United States Supreme Court.
Makeup of Congress
House of Representatives
The House currently consists of 435 voting members, each of whom represents a congressional district. The number of representatives each state has in the House is based on each state's population as determined in the most recent United States Census. All 435 representatives serve a two-year term. Each state receives a minimum of one representative in the House. In order to be elected as a representative, an individual must be at least 25 years of age, must have been a U.S. citizen for at least seven years, and must live in the state that he or she represents. There is no limit on the number of terms a representative may serve. In addition to the 435 voting members, there are six non-voting members, consisting of five delegates and one resident commissioner. There is one delegate each from the District of Columbia, Guam, the Virgin Islands, American Samoa and the Commonwealth of the Northern Mariana Islands, and the resident commissioner from Puerto Rico.
In contrast, the Senate is made up of two senators from each state, regardless of population. There are currently 100 senators (two from each of the 50 states), who each serve six-year terms. Approximately one third of the Senate stands for election every two years.
The House and Senate each have particular exclusive powers. For example, the Senate must approve (give "advice and consent" to) many important Presidential appointments, including cabinet officers, federal judges (including nominees to the Supreme Court), department secretaries (heads of federal executive branch departments), U.S. military and naval officers, and ambassadors to foreign countries. All legislative bills for raising revenue must originate in the House of Representatives. The approval of both chambers is required to pass any legislation, which then may only become law by being signed by the President (or, if the President vetoes the bill, both houses of Congress then re-pass the bill, but by a two-thirds majority of each chamber, in which case the bill becomes law without the President's signature). The powers of Congress are limited to those enumerated in the Constitution; all other powers are reserved to the states and the people. The Constitution also includes the "Necessary and Proper Clause", which grants Congress the power to "make all laws which shall be necessary and proper for carrying into execution the foregoing powers". Members of the House and Senate are elected by first-past-the-post voting in every state except Louisiana and Georgia, which have runoffs.
Impeachment of federal officers
Congress has the power to remove the President, federal judges, and other federal officers from office. The House of Representatives and Senate have separate roles in this process. The House must first vote to "impeach" the official. Then, a trial is held in the Senate to decide whether the official should be removed from office. Although two presidents have been impeached by the House of Representatives (Andrew Johnson and Bill Clinton), neither of them was removed following trial in the Senate.
Article I, Section 2, paragraph 2 of the U.S. Constitution gives each chamber the power to "determine the rules of its proceedings". From this provision were created congressional committees, which do the work of drafting legislation and conducting congressional investigations into national matters. The 108th Congress (2003–2005) had 19 standing committees in the House and 17 in the Senate, plus four joint permanent committees with members from both houses overseeing the Library of Congress, printing, taxation and the economy. In addition, each house may name special, or select, committees to study specific problems. Today, much of the congressional workload is borne by subcommittees, of which there are some 150.
Congressional oversight
Congressional oversight is intended to prevent waste and fraud, protect civil liberties and individual rights, ensure executive compliance with the law, gather information for making laws and educating the public, and evaluate executive performance.
It applies to cabinet departments, executive agencies, regulatory commissions and the presidency.
Congress's oversight function takes many forms:
- Committee inquiries and hearings
- Formal consultations with and reports from the President
- Senate advice and consent for presidential nominations and for treaties
- House impeachment proceedings and subsequent Senate trials
- House and Senate proceedings under the 25th Amendment in the event that the President becomes disabled or the office of the Vice President falls vacant.
- Informal meetings between legislators and executive officials
- Congressional membership: each state is allocated a number of seats in the House of Representatives based on its population (the District of Columbia has only ostensible representation), and two Senators regardless of its population. As of January 2010, the District of Columbia elects a non-voting representative to the House of Representatives, as do American Samoa, the U.S. Virgin Islands, Guam, Puerto Rico and the Northern Mariana Islands.
The executive power in the federal government is vested in the President of the United States, although power is often delegated to the Cabinet members and other officials. The President and Vice President are elected as running mates by the Electoral College, in which each state, as well as the District of Columbia, is allocated a number of electors based on its representation (or ostensible representation, in the case of D.C.) in both houses of Congress. The President is limited to a maximum of two four-year terms. If the President has already served two years or more of a term to which some other person was elected, he may serve only one additional four-year term.
The executive branch consists of the President and those to whom the President's powers are delegated. The President is both the head of state and government, as well as the military commander-in-chief and chief diplomat. The President, according to the Constitution, must "take care that the laws be faithfully executed", and "preserve, protect and defend the Constitution". The President presides over the executive branch of the federal government, an organization numbering about 5 million people, including 1 million active-duty military personnel and 600,000 postal service employees.
The President may sign legislation passed by Congress into law or may veto it, preventing it from becoming law unless two-thirds of both houses of Congress vote to override the veto. The President may unilaterally sign treaties with foreign nations. However, ratification of international treaties requires a two-thirds majority vote in the Senate. The President may be impeached by a majority in the House and removed from office by a two-thirds majority in the Senate for "treason, bribery, or other high crimes and misdemeanors". The President may not dissolve Congress or call special elections but does have the power to pardon, or release, criminals convicted of offenses against the federal government (except in cases of impeachment), enact executive orders, and (with the consent of the Senate) appoint Supreme Court justices and federal judges.
The Vice President is the second-highest official in rank of the federal government. The Vice President's duties and powers are established in the legislative branch of the federal government under Article I, Section 3, Clauses 4 and 5, which make the Vice President the President of the Senate. By virtue of this ongoing role, he or she is the head of the Senate and may vote in that chamber, but only when necessary to break a tie. Pursuant to the Twelfth Amendment, the Vice President presides over the joint session of Congress when it convenes to count the vote of the Electoral College. As first in the U.S. presidential line of succession, the Vice President assumes the powers and duties of the executive branch upon becoming President on the death, resignation, or removal of the President, which has happened nine times in U.S. history. Finally, in the case of a Twenty-fifth Amendment succession event, the Vice President would become Acting President, assuming all of the powers and duties of the President without being designated President. Depending on the circumstances, then, the Constitution places the Vice President routinely in the legislative branch, or, upon succession, in the executive branch as President, or possibly in both as Acting President pursuant to the Twenty-fifth Amendment. The overlapping nature of the duties and powers attributed to the office, its title, and other matters have generated a spirited scholarly dispute over whether an exclusive branch designation can be attached to the office of Vice President.
Cabinet, executive departments and agencies
The day-to-day enforcement and administration of federal laws is in the hands of the various federal executive departments, created by Congress to deal with specific areas of national and international affairs. The heads of the 15 departments, chosen by the President and approved with the "advice and consent" of the U.S. Senate, form a council of advisers generally known as the President's "Cabinet". In addition to departments, a number of staff organizations are grouped into the Executive Office of the President. These include the White House staff, the National Security Council, the Office of Management and Budget, the Council of Economic Advisers, the Council on Environmental Quality, the Office of the U.S. Trade Representative, the Office of National Drug Control Policy and the Office of Science and Technology Policy. The employees in these United States government agencies are called federal civil servants.
There are also independent agencies such as the United States Postal Service, the National Aeronautics and Space Administration (NASA), the Central Intelligence Agency (CIA), the Environmental Protection Agency, and the United States Agency for International Development. In addition, there are government-owned corporations such as the Federal Deposit Insurance Corporation and the National Railroad Passenger Corporation.
The Judiciary explains and applies the laws. It does this by hearing and deciding various legal cases.
Overview of the federal judiciary
Article III, Section 1 of the Constitution establishes the Supreme Court of the United States and authorizes the United States Congress to establish inferior (i.e., lower) courts as the need arises. Section 1 also establishes lifetime tenure for all federal judges and states that their compensation may not be diminished during their time in office. Article II, Section 2 establishes that all federal judges are to be appointed by the President and confirmed by the United States Senate.
The Judiciary Act of 1789 subdivided the nation jurisdictionally into judicial districts and created federal courts for each district. The three-tiered structure established by this act remains the basic structure of the national judiciary, which today comprises the Supreme Court, 13 courts of appeals, 94 district courts, and two courts of special jurisdiction. Congress retains the power to re-organize or even abolish federal courts lower than the Supreme Court.
The U.S. Supreme Court adjudicates "cases and controversies"—matters pertaining to the federal government, disputes between states, and interpretation of the United States Constitution, and, in general, can declare legislation or executive action made at any level of the government as unconstitutional, nullifying the law and creating precedent for future law and decisions. The United States Constitution does not specifically mention the power of judicial review (the power to declare a law unconstitutional). The power of judicial review was asserted by Chief Justice Marshall in the landmark Supreme Court Case Marbury v. Madison (1803). There have been instances in the past where such declarations have been ignored by the other two branches. Below the U.S. Supreme Court are the United States Courts of Appeals, and below them in turn are the United States District Courts, which are the general trial courts for federal law, and for certain controversies between litigants who are not deemed citizens of the same state ("diversity jurisdiction").
There are three levels of federal courts with general jurisdiction, meaning that these courts handle criminal cases and civil lawsuits between individuals. Other courts, such as the bankruptcy courts and the Tax Court, are specialized courts handling only certain kinds of cases ("subject matter jurisdiction"). The bankruptcy courts are "under" the supervision of the district courts and, as such, are not considered part of the "Article III" judiciary; their judges do not have lifetime tenure, nor are they constitutionally exempt from diminution of their remuneration. The Tax Court, likewise, is not an Article III court but an "Article I" court.
The district courts are the trial courts wherein cases that are considered under the Judicial Code (Title 28, United States Code) consistent with the jurisdictional precepts of "federal question jurisdiction" and "diversity jurisdiction" and "pendent jurisdiction" can be filed and decided. The district courts can also hear cases under "removal jurisdiction", wherein a case brought in State court meets the requirements for diversity jurisdiction, and one party litigant chooses to "remove" the case from state court to federal court.
The United States Courts of Appeals are appellate courts that hear appeals of cases decided by the district courts, and some direct appeals from administrative agencies, and some interlocutory appeals. The U.S. Supreme Court hears appeals from the decisions of the courts of appeals or state supreme courts, and in addition has original jurisdiction over a few cases.
The judicial power extends to cases arising under the Constitution, an Act of Congress; a U.S. treaty; cases affecting ambassadors, ministers and consuls of foreign countries in the U.S.; cases and controversies to which the federal government is a party; controversies between states (or their citizens) and foreign nations (or their citizens or subjects); and bankruptcy cases (collectively "federal-question jurisdiction"). The Eleventh Amendment removed from federal jurisdiction cases in which citizens of one state were the plaintiffs and the government of another state was the defendant. It did not disturb federal jurisdiction in cases in which a state government is a plaintiff and a citizen of another state the defendant.
The power of the federal courts extends both to civil actions for damages and other redress, and to criminal cases arising under federal law. The interplay of the Supremacy Clause and Article III has resulted in a complex set of relationships between state and federal courts. Federal courts can sometimes hear cases arising under state law pursuant to diversity jurisdiction, state courts can decide certain matters involving federal law, and a handful of federal claims are primarily reserved by federal statute to the state courts (for example, those arising from the Telephone Consumer Protection Act of 1991). Both court systems thus can be said to have exclusive jurisdiction in some areas and concurrent jurisdiction in others.
The U.S. Constitution safeguards judicial independence by providing that federal judges shall hold office "during good behavior"; in practice, this usually means they serve until they die, retire, or resign. A judge who commits an offense while in office may be impeached in the same way as the President or other officials of the federal government. U.S. judges are appointed by the President, subject to confirmation by the Senate. Another Constitutional provision prohibits Congress from reducing the pay of any Article III judge (Congress is able to set a lower salary for all future judges that take office after the reduction, but may not decrease the rate of pay for judges already in office).
Relationships between state and federal courts
Separate from, but not entirely independent of, this federal court system are the court systems of each state, each dealing with, in addition to federal law when not deemed preempted, a state's own laws, and having its own court rules and procedures. Although state governments and the federal government are legally dual sovereigns, the Supreme Court of the United States is in many cases the appellate court from the State Supreme Courts (e.g., absent the Court countenancing the applicability of the doctrine of adequate and independent State grounds). The Supreme Courts of each state are by this doctrine the final authority on the interpretation of the applicable state's laws and Constitution. Many state constitution provisions are equal in breadth to those of the U.S. Constitution, but are considered "parallel" (thus, where, for example, the right to privacy pursuant to a state constitution is broader than the federal right to privacy, and the asserted ground is explicitly held to be "independent", the question can be finally decided in a State Supreme Court—the U.S. Supreme Court will decline to take jurisdiction).
A State Supreme Court, other than of its own accord, is bound only by the U.S. Supreme Court's interpretation of federal law, but is not bound by interpretation of federal law by the federal court of appeals for the federal circuit in which the state is included, or even the federal district courts located in the state, a result of the dual sovereigns concept. Conversely, a federal district court hearing a matter involving only a question of state law (usually through diversity jurisdiction) must apply the substantive law of the state in which the court sits, a result of the application of the Erie Doctrine; however, at the same time, the case is heard under the Federal Rules of Civil Procedure, the Federal Rules of Criminal Procedure and the Federal Rules of Evidence instead of state procedural rules (that is, the application of the Erie Doctrine only extends to a requirement that a federal court asserting diversity jurisdiction apply substantive state law, but not procedural state law, which may be different). Together, the laws of the federal and state governments form U.S. law.
Elections and voting
Suffrage, commonly known as the ability to vote, has changed significantly over time. In the early years of the United States, voting was considered a matter for state governments, and was commonly restricted to white men who owned land. Direct elections were mostly held only for the U.S. House of Representatives and state legislatures, although what specific bodies were elected by the electorate varied from state to state. Under this original system, both senators representing each state in the U.S. Senate were chosen by a majority vote of the state legislature. Since the ratification of the Seventeenth Amendment in 1913, members of both houses of Congress have been directly elected. Today, U.S. citizens have almost universal suffrage under equal protection of the laws from the age of 18, regardless of race, gender, or wealth. The only significant exception to this is the disenfranchisement of convicted felons, and in some states former felons as well.
Under the U.S. Constitution, the national representation of U.S. territories and the federal district of Washington, D.C. in Congress is limited: while residents of the District of Columbia are subject to federal laws and federal taxes, their only congressional representative is a non-voting delegate; however, they have been allowed to participate in presidential elections since March 29, 1961. Residents of U.S. territories have varying rights; for example, only some residents of Puerto Rico pay federal income taxes (though all residents must pay all other federal taxes, including import/export taxes, federal commodity taxes and federal payroll taxes, including Social Security and Medicare). All federal laws that are "not locally inapplicable" are automatically the law of the land in Puerto Rico but their current representation in the U.S. Congress is in the form of a Resident Commissioner, a nonvoting delegate.
State, tribal, and local governments
The state governments tend to have the greatest influence over most Americans' daily lives. The Tenth Amendment prohibits the federal government from exercising any power not delegated to it by the States in the Constitution; as a result, states handle the majority of issues most relevant to individuals within their jurisdiction. Because state governments are not authorized to print currency, they generally have to raise revenue through either taxes or bonds. As a result, state governments tend to impose severe budget cuts or raise taxes any time the economy is faltering.
Each state has its own written constitution, government and code of laws. The Constitution stipulates only that each state must have "a Republican Form of Government". Therefore, there are often great differences in law and procedure between individual states, concerning issues such as property, crime, health and education, amongst others. The highest elected official of each state is the Governor. Each state also has an elected state legislature (bicameralism is a feature of every state except Nebraska), whose members represent the voters of the state. Each state maintains its own state court system. In some states, supreme and lower court justices are elected by the people; in others, they are appointed, as they are in the federal system.
As a result of the Supreme Court case Worcester v. Georgia, American Indian tribes are considered "domestic dependent nations" that operate as sovereign governments subject to federal authority but, in some cases, outside of the jurisdiction of state governments. Hundreds of laws, executive orders and court cases have modified the governmental status of tribes vis-à-vis individual states, but the two have continued to be recognized as separate bodies. Tribal governments vary in robustness, from a simple council used to manage all aspects of tribal affairs, to large and complex bureaucracies with several branches of government. Tribes are currently encouraged to form their own governments, with power resting in elected tribal councils, elected tribal chairpersons, or religiously appointed leaders (as is the case with pueblos). Tribal citizenship and voting rights are typically restricted to individuals of native descent, but tribes are free to set whatever citizenship requirements they wish.
The institutions that are responsible for local government within states are typically town, city, or county boards, water management districts, fire management districts, library districts and other similar governmental units which make laws that affect their particular area. These laws concern issues such as traffic, the sale of alcohol and the keeping of animals. The highest elected official of a town or city is usually the mayor. In New England, towns operate in a direct democratic fashion, and in some states, such as Rhode Island, Connecticut, and some parts of Massachusetts, counties have little or no power, existing only as geographic distinctions. In other areas, county governments have more power, such as to collect taxes and maintain law enforcement agencies.
Trisomy 18 (Edwards Syndrome)
Trisomy 18 (T18) (also known as Trisomy E or Edwards syndrome) is a genetic disorder caused by the presence of all or part of an extra 18th chromosome. It is named after John H. Edwards, who first described the syndrome in 1960. It is the second most common autosomal trisomy, after Down Syndrome, that carries to term.
Trisomy 18 is caused by the presence of three—as opposed to two—copies of chromosome 18 in a fetus or infant's cells. The incidence of the syndrome is estimated as one in 3,000 live births. The incidence increases as the mother's age increases. The syndrome has a very low rate of survival, resulting from heart abnormalities, kidney malformations, and other internal organ disorders.
The following data are from the National Down Syndrome Cytogenetic Register Annual Reports 2008/09. In England and Wales, there were 495 diagnoses of Edwards’ syndrome (trisomy 18), of which 92% were made prenatally. There were 339 terminations, 49 stillbirths/miscarriages/fetal deaths, 72 unknown outcomes, and 35 live births. Because approximately 3% of cases of Edwards’ syndrome with unknown outcomes are likely to result in a live birth, the total number of live births is estimated to be 37 (2008/09 data are provisional). Only 50% of liveborn infants live to 2 months, and only 5–10% survive their first year of life. Major causes of death include apnea and heart abnormalities. It is impossible to predict the exact prognosis of a child with Edwards syndrome during pregnancy or the neonatal period. The median lifespan is 5–15 days. One percent of children born with this syndrome live to age 10, typically in less severe cases of the mosaic Edwards syndrome.
Edwards syndrome occurs in approximately 1 in 3,000 conceptions and approximately 1 in 6,000 live births; 50% of those diagnosed with the condition prenatally will not survive the prenatal period. Although women in their 20s and early 30s may conceive babies with Edwards syndrome, the risk of conceiving a child with Edwards syndrome increases with a woman's age. The average maternal age for conceiving a child with this disorder is 32½.
Edwards syndrome is a chromosomal abnormality characterized by the presence of an extra copy of genetic material on the 18th chromosome, either in whole (trisomy 18) or in part (such as due to translocations). The additional chromosome usually occurs before conception. The effects of the extra copy vary greatly, depending on the extent of the extra copy, genetic history, and chance. Edwards syndrome occurs in all human populations but is more prevalent in female offspring.
A healthy egg or sperm cell contains 23 individual chromosomes, one contributing to each of the 23 pairs of chromosomes needed to form a normal cell with a typical human karyotype of 46 chromosomes. Numerical errors can arise at either of the two meiotic divisions and cause the failure of a chromosome to segregate into the daughter cells (nondisjunction). This results in an extra chromosome, making the haploid number 24 rather than 23. Fertilization of eggs or insemination by sperm that contain an extra chromosome results in trisomy, or three copies of a chromosome rather than two.
Trisomy 18 (47,XX,+18) is caused by a meiotic nondisjunction event. With nondisjunction, a gamete (i.e., a sperm or egg cell) is produced with an extra copy of chromosome 18; the gamete thus has 24 chromosomes. When combined with a normal gamete from the other parent, the embryo has 47 chromosomes, with three copies of chromosome 18.
A small percentage of cases occur when only some of the body's cells have an extra copy of chromosome 18, resulting in a mixed population of cells with a differing number of chromosomes. Such cases are sometimes called mosaic Edwards syndrome. Very rarely, a piece of chromosome 18 becomes attached to another chromosome (translocated) before or after conception. Affected individuals have two copies of chromosome 18 plus extra material from chromosome 18 attached to another chromosome. With a translocation, a person has a partial trisomy for chromosome 18, and the abnormalities are often less severe than for the typical Edwards syndrome.
Infants born with Edwards syndrome may have some or all of the following characteristics: kidney malformations, structural heart defects at birth (i.e., ventricular septal defect, atrial septal defect, patent ductus arteriosus), intestines protruding outside the body (omphalocele), esophageal atresia, mental retardation, developmental delays, growth deficiency, feeding difficulties, breathing difficulties, and arthrogryposis (a muscle disorder that causes multiple joint contractures at birth).
Some physical malformations associated with Edwards syndrome include a small head (microcephaly) accompanied by a prominent back portion of the head (occiput); low-set, malformed ears; an abnormally small jaw (micrognathia); cleft lip/cleft palate; upturned nose; narrow eyelid folds (palpebral fissures); widely spaced eyes (ocular hypertelorism); drooping of the upper eyelids (ptosis); a short breast bone; clenched hands; underdeveloped thumbs and/or nails; an absent radius; webbing of the second and third toes; clubfoot or rocker-bottom feet; and, in males, undescended testicles.
In utero, the most common characteristic is cardiac anomalies, followed by central nervous system anomalies such as head shape abnormalities. The most common intracranial anomaly is the presence of choroid plexus cysts, pockets of fluid on the brain that are not problematic in themselves but may be a marker for trisomy 18. Sometimes excess amniotic fluid (polyhydramnios) is exhibited.
Alzheimer’s disease is a form of dementia that, according to the National Institute on Aging (NIA), is “an irreversible, progressive brain disorder that slowly destroys memory and thinking skills and, eventually, the ability to carry out the simplest tasks.” The disease is named after Dr. Alois Alzheimer, who, in 1906, noticed changes in the brain tissue of a woman who had died of an unusual mental illness. The woman’s symptoms included memory loss, language problems, and unpredictable behavior. Upon her death, Dr. Alzheimer examined her brain and found amyloid plaques (abnormal clumps) and neurofibrillary tangles (twisted fibers of tau protein).
The plaques and tangles are some of the main features of Alzheimer’s disease. The loss of connections between nerve cells, or neurons, in the brain is another feature, along with many other complex brain changes. The initial damage from the plaques, tangles, and loss of connections between nerve cells appears to take place in the part of the brain that is essential for forming memories. Additional parts of the brain are affected as the neurons die and, by the final stages of the disease, the damage is widespread and brain tissue has shrunk.
Are you looking for information and resources for a loved one with Alzheimer’s disease? Our information specialists have put together a quick go-to list:
- The NARIC Collection has numerous articles from the NIDILRR community and elsewhere that speak on different aspects of research on Alzheimer’s disease.
- NIA’s Alzheimer’s and related Dementias Education and Referral (ADEAR) Center offers information and publications about Alzheimer’s disease and related dementias for families, caregivers, and health professionals. NIA also provides information in Spanish about Alzheimer’s disease.
- The Alzheimer’s Association provides education and resources for families and professionals, information on local programs and services, and more, along with a 24/7 helpline: 800/272-3900.
- The Alzheimer’s Foundation of America provides information and resources on various topics including caregiving and healthy aging. They also provide training and education for professionals and a toll-free helpline: 866/232-8484.
Please note: These resources are provided for information purposes only, and not for diagnosis or recommendations of treatment. Please consult with your healthcare provider if you have questions or concerns about your health status.
Use the Excel COUNTA function to count non-blank cells. In simple words, the COUNTA function counts any cell that has any type of value in it.
COUNTA(value1, [value2], …)
In the example below, I have used the COUNTA function to count cells in the range A1:A11.
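To reproduce that example yourself (assuming, as described here, that the values sit in cells A1 through A11), the formula entered in any empty cell would simply be =COUNTA(A1:A11); it returns the number of non-empty cells in that range, whatever kind of value each one holds.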
There are 11 cells in the range in total, and the function returns 10. One blank cell in the range is ignored by the function; the rest of the cells contain numbers, text, a logical value, and a symbol.
To learn more about the Excel COUNTA function, you can check Microsoft’s Help section. And, if you have a unique idea for using it, I would love to hear from you.
Innate immunity is the defense mechanism that attacks an infection at its onset. It does not adapt to specific pathogens to provide long-lasting protection as the adaptive immune system does. Most infectious agents that penetrate the body’s outer epithelial surfaces are quickly eliminated by the innate immune response, preventing the appearance of disease symptoms. The word innate implies genetically determined mechanisms. Innate immunity functions through a two-part mechanism. First, the pathogen is recognized by soluble proteins and cell-surface receptors, and serum proteins of the complement system are activated to covalently bind the pathogen. Next, effector cells (phagocytic white blood cells) are recruited to engulf the pathogen via endocytosis and destroy it in the phagosome.
The Difference between Explanation & Procedure Texts
Explanation Text. General features:
a. Communicative purpose of the text:
b. Text structure:
· A general statement (general explanation);
· A sequenced explanation of why or how something occurs (process explanation);
c. Language features; it uses:
· general and abstract nouns, e.g. wood chopping, earthquakes;
· action verbs;
· the simple present tense;
· the passive voice;
· conjunctions of time and cause;
· noun phrases, e.g. the large cloud;
· abstract nouns, e.g. the temperature;
· adverbial phrases;
· complex sentences;
· technical language;
An example of an explanation text:
How Earthquakes Happen
Earthquakes are among the most destructive natural disasters. Unfortunately, they often strike in various regions. Recently a terrible earthquake shook West Sumatra and brought great damage. Why did it occur? Do you know how an earthquake happens?
Earthquakes are usually caused when rock underground suddenly breaks along a fault. This sudden release of energy causes seismic waves that make the ground shake. When two blocks of rock or two plates are rubbing against each other, they stick a little. They don't just slide smoothly. The rocks are still pushing against each other, but not moving. After a while, the rocks break because of all the pressure that has built up. When the rocks break, the earthquake occurs.
During the earthquake and afterward, the plates or blocks of rock start moving, and they continue to move until they get stuck again. The spot underground where the rock breaks is called the focus of the earthquake. The place right above the focus is called the epicenter of the earthquake.
Procedure. A procedure (or procedural) text is a text that contains procedures, instructions, processes, methods, or steps for making, doing, or operating something.
Features of a procedure text:
1. Its generic structure consists of:
Goal/Aim: the purpose and intent of the text. Example: How to make a sandwich…
Material/Tool: the materials or tools needed to make or do something. Example: The materials are as follows: 1. Two slices of bread, 2. A fried egg, strawberry jam, chocolate sprinkles, …
Steps/Procedures: the steps or procedures for doing or making something. Example: First, take two slices of bread and …
2. It uses the simple present tense.
3. It often uses imperative sentences (orders). Examples: Turn on the lamp; Put the rice into the rice cooker; Don't forget to press the 'on' button; …
4. It uses sequence words. Examples: first, second, then, next, last, finally…
An example of a procedure text:
Boiling an Egg in a Simple and Easy Way
Eggs are a rich source of protein and vitamins and are generally healthy to eat, unless you have a high cholesterol level.
You can eat eggs raw, boiled or cooked in a pan as scrambled eggs or an omelet. Boiling eggs is one of the easiest ways to prepare them. Follow the steps!
First of all, place the raw egg in a saucepan!
Second, run cold water into the saucepan until the water is 1 inch above the egg.
After that, place the saucepan on a stove and cook over medium heat until the water begins to boil.
Next, don't forget to reduce the heat to low.
Then, simmer for 2 to 3 minutes for soft-boiled eggs or 10 to 15 minutes for hard-boiled eggs.
Finally, remove the egg with a spoon or ladle and let it cool slowly, or run cold water over it to cool it more quickly.
This image belongs to a collection of engravings of urban plans entitled Civitates Orbis Terrarum, published by Georg Braun, a German geographer and cartographer, and Franz Hogenberg, a German painter and engraver. This work was conducted between 1572 and 1617, just before the extensive devastation caused by the Thirty Year’s War. This great city atlas eventually contained 546 prospects, bird-eye views and map views of cities from all over the world. Among them we can admire this representation of the imperial capital of Austria, Vienna.
Vienna lies on the banks of the Danube River, in the eastern part of Austria. The river connects Central Europe to the Black Sea and has been used as a natural border many times throughout history. As for its specific geographical location, Vienna sits in the valley of the Vienna Woods, at the foothills of the Alps. The layout of the city seems irregular, though we can see zones at the bottom left that look orthogonal. The city consists of a medieval core built over a former Roman military camp, whose remains are still visible in the medieval urban fabric of streets and alleys. In the 12th century, urban development expanded beyond the Roman defences, which were demolished. The walls of the medieval city surrounded a much bigger area and were reconstructed after the Ottoman conflicts of the 16th and 17th centuries. The urban and architectural qualities of the historic centre of Vienna bear exceptional testimony to a continuing exchange of values throughout the second millennium.
The buildings inside the city show the great changes that Vienna experienced from the Middle Ages to the Renaissance, as the Baroque period had not yet begun. This bird's-eye view is considered one of the principal historical sources for the appearance of the city's Gothic architecture. While the monastic complexes and churches were generally constructed of stone, the residential neighbourhoods were of wood and suffered frequent fires. There are also houses outside the wall, which is typical of cities of the early modern age, whose medieval cores had become insufficient. Cities incorporated squares, gardens, sewerage, paving and the like into their streets. The largest and most significant streets were in the town centre, around important landmarks such as St Stephen's Cathedral.
Thus, the interior of this city contains a number of medieval historic buildings, including the Schottenstift (the oldest monastery in Austria), the churches of Maria am Gestade (one of the most emblematic Gothic structures), Michaelerkirche, Minoritenkirche and Minoritenkloster. Saint Stephen Cathedral is dated between the fourteenth and fifteenth centuries, and clearly dominates the picture. It is crowned by a needle-shaped tower (Steffl), built in Gothic style, which is one of the most important religious symbols of Vienna.
Regarding social activities, in the town centre there is a square where people very likely devoted themselves to trade, among other activities. Just outside the walls there are different groups of people: those at the top of the image seem to practise animal husbandry or trade. Moving down to the left, we can see men carrying goods. We then come upon the "Fort Boarium", or pork market, of Vienna; and below it, some passenger carriages pulled by horses. In the foreground we can see a large number of people carrying things, some of whom look like soldiers. On the river we can see many boats, some fishermen and other merchants, because the salt trade was an essential activity at that time; other people cross the river on foot over its bridges. Wooden boats lie along the bank on the right. At the top right, we see a line of people waiting to enter the city, presumably newcomers to Vienna. The surrounding landscape is formed by green plains, so we can deduce that agriculture was practised. In the background of the image we can see a few mountains; in the foreground there is the river. Its nearness provides the city with water, fish and a trade route. We should not forget that access to raw materials is essential for a city to develop properly.
The imperial Habsburg family reigned in Austria and other Central European countries from 1278 to 1918. The struggles between Catholics and Protestants (the Habsburgs defended Catholicism) and the Turkish threat overshadowed the political life of Austria in the 16th and 17th centuries. The Peace of Westphalia (1648) and the Peace of Carlowitz (1699) put an end, respectively, to both problems and consolidated the position of Austria as a European power. The Habsburgs made the city their capital from 1556, and its importance grew with the expansion along the valley of the Danube. It became a centre of the European Baroque thanks to the construction of major architectural works and to its musical creations (from the 16th century, Vienna has been universally recognized as the musical capital of Europe).
Olga Arriero Gallego
Little Jake is a 48-volt generator. Generators have two windings—the field and the armature. Direct current (DC) is produced by spinning the armature within a stationary DC field. The induced current travels from the copper commutator, at the end of the rotating armature, through conductive graphite brushes, and then down to the slip-ring assembly. A second set of brushes contacting the slip-ring assembly allows the machine to yaw (pivot) freely with the wind, without twisting the wires that run down the tower and into the junction box at ground level.
We inspected all four brushes that ride on the generator commutator for excessive or irregular wear patterns, looked for evidence of overheating and pitting on the commutator, and examined the inside of the cover for signs of spattering solder—which would have shown us that the armature windings had been overheated. Everything appeared in good working order; the brushes were evenly worn, had plenty of remaining life, and were set with a good amount of tension against the commutator. (Each of the four brushes slides into a spring-loaded retaining clip with five notches for increasing tension. All four brushes should ride around the commutator with equal amounts of tension.)
We also inspected the conductors between the slip-ring assembly and the generator terminals for the field and armature windings. We used a multimeter for continuity checks and the megger for evidence of insulation breakdown, just like we did when checking wires from the junction box to the top of the tower. We also isolated and performed resistance checks on the lightning arrestor by using a multimeter to test each lead to ground and from lead to lead. They all showed no continuity, and infinite resistance, which is appropriate.
We measured the resistance of the field, and it was an acceptable 16 Ω. Field windings are essentially giant coils of thin wire, so resistance should be low—the same as if the coils were unwound and just very long conductors. We checked for shorts within the generator by using the megger to measure resistance from the commutator to the metal input shaft (23 MΩ), and from the field to the metal case (3.5 MΩ). We didn’t know exactly what kind of numbers we should see, but we knew that anything in the MΩ range was acceptable, since it shows high resistance with respect to ground.
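The pass/fail logic we applied to those meter readings is easy to capture in a few lines. The following is only an illustrative sketch in Python; the 50-ohm winding ceiling and 1-megohm insulation floor are assumptions of ours for the example, not manufacturer specifications.

```python
# Rough sanity checks for generator meter readings (illustrative only).
def winding_ok(resistance_ohms, max_ohms=50):
    """Field/armature windings are long coils of wire, so expect a low reading."""
    return 0 < resistance_ohms <= max_ohms

def insulation_ok(resistance_ohms, min_megohms=1.0):
    """Megger readings to ground should be high; 1 megohm is an assumed floor."""
    return resistance_ohms >= min_megohms * 1e6

print(winding_ok(16))        # True: the 16-ohm field winding
print(insulation_ok(23e6))   # True: commutator to input shaft, 23 megohms
print(insulation_ok(3.5e6))  # True: field to case, 3.5 megohms
```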
None of these tests yielded suspicious results, and we were getting frustrated. We had tested every component and wire within the system, from the BOS to the top of the tower, and we still hadn’t found anything wrong—not even a tiny clue as to why Little Jake wasn’t delivering power.
We had no written documentation on the machine, so we called Mick. He validated what we had measured so far, and gave us advice about what to do next.
“Flashing the field” means re-teaching the machine which field poles are positive and which are negative. Without a residual magnetic field in the field windings, no current can be induced when the turbine begins spinning. To verify that our field windings had a polarized magnetic field, we lifted the brushes off the commutator and connected MREA’s Sun Chaser (a portable solar power trailer) to the wires in the junction box going up the tower. After a full 15 seconds of delivering 48 volts of battery power up to the field windings, we were certain that the field poles contained magnetism with the correct polarity, but the Jake still yielded no output. (Machines with field windings using residual magnetism need to be “flashed” upon installation and after a high-transient event like a lightning strike.)
Motility Disorders of the Esophagus
The function of the esophagus (food tube) is to transport food from the mouth to the stomach. Synchronized (peristaltic) contractions follow each swallow to accomplish this task. Between swallows, the esophagus usually does not contract.
The lower esophageal sphincter (or LES) is a muscle that separates the esophagus from the stomach. It acts like a valve that normally stays tightly closed to prevent contents in the stomach from backing up into the esophagus. When we swallow, the LES opens up (the muscle relaxes) so that the food we swallow can enter the stomach.
Difficulty swallowing liquids or solids, heartburn, regurgitation, and atypical (or non-cardiac) chest pain may be symptoms of an esophageal motility disorder.
Examples of motility disorders of the esophagus that are described below include gastroesophageal reflux disease (GERD), dysphagia, achalasia, and functional chest pain.
Gastroesophageal reflux disease (GERD)
The most common symptom that occurs in the esophagus is heartburn. It happens when stomach contents repeatedly wash up into the esophagus (gastroesophageal reflux) and irritate its lining. This occurs when the lower esophageal sphincter (LES) does not work properly.
This can be due to a weak sphincter muscle, too-frequent spontaneous relaxations of the sphincter, or hiatal hernia. Hiatal hernia means that the stomach pushes up into the chest above the sheet of muscle that separates the abdomen from the chest (this muscle sheet is called the diaphragm). A hiatal hernia weakens the sphincter.
Dysphagia means ineffective swallowing. Sometimes this occurs when the muscles of the tongue and neck that push the food into the esophagus are not working properly because of a stroke or a disease affecting the nerves or muscles.
However, food can also stick because the lower esophageal sphincter does not relax to let the food into the stomach (a disorder called achalasia – see below), or because the esophagus contracts in an uncoordinated way (a disorder called esophageal spasm).
Dysphagia can cause food to back up in the esophagus. Symptoms may include:
- A sensation of something getting stuck
- A sensation of pain
Achalasia
This condition is diagnosed when there is a complete lack of peristalsis within the body of the esophagus. In addition, the lower esophageal sphincter does not relax to allow food to enter the stomach.
Most people with achalasia have symptoms for years prior to seeing a physician that may include:
- Difficulty swallowing both liquids and solids
- Weight loss
- Atypical chest discomfort
Functional chest pain
Sometimes individuals have pain in their chest that is not like heartburn (no burning quality) and that may be confused with pain from the heart. Particularly if you are over 50 years of age, your doctor will always want to first find out if there is anything wrong with your heart, but in many cases the heart turns out to be healthy.
In many people with this kind of pain and no heart disease, the pain comes from spastic contractions of the esophagus, or increased sensitivity of the nerves in the esophagus, or a combination of muscle spasm and increased sensitivity.
The more exoplanets we find and study, the closer we get to figuring out if life exists somewhere else in the universe.
But finding exoplanets is a tricky business. That's why NASA engineers are developing a giant, flower-shaped starshade to block out starlight and make it easier for astronomers to spot them.
Normally, astronomers find exoplanets by waiting for them to pass directly in front of their host stars; the temporary dip in brightness can tip us off to their existence.
This is a difficult method to use, though. Exoplanets are extremely far away, so they appear very small and faint to us. And with all the background starlight, it's a wonder we can spot them at all.
It's very likely that we've missed some exoplanets and even gotten some false positives for planets that aren't really there. The starshade will correct for that by blocking out excess light and giving telescopes a sharper view.
How it works: imagine a man standing in front of a spotlight and holding up a sheet of paper between himself and the light. The man is a planet, the spotlight is a star, and the paper is the starshade: with the glare blocked, the "planet" becomes much easier to see.
It works the same way as holding up your hand to block out sunlight as you watch a bird or a plane fly through the sky.
The starshade is designed as a flower-shaped screen about the size of a baseball diamond. It will fly tens of thousands of miles out in front of the telescope to block out excess light.
"Because stars are so far away, the angular distance between the planet and star is quite small, requiring a very large starshade (20 to 50 meters in diameter) flying very far from the telescope (up to 50,000 km)," Jeremy Kasdin, a scientist working on the starshade, explained to Universe Today.
Once engineers figure out a good way to package the starshade and deploy it in space, it could be launched alongside upcoming space telescopes like the Wide-Field Infrared Survey Telescope, the Transiting Exoplanet Survey Satellite or the James Webb Space Telescope.
Unit 8: When Chemicals Meet Water—The Properties of Solutions
Solutions are all around us, from the air we breathe to the blood in our veins to the steel frames of many buildings. While solutions don't have to be liquids, aqueous (water-based) solutions are fundamental to life and common in inorganic chemistry: the majority of biochemical reactions happen in aqueous solutions. The formation of a solution depends upon the interactions between the solute (the substance that gets dissolved) and the solvent (the substance that does the dissolving); in turn, the interactions of solute and solvent are heavily influenced by their concentrations, temperature, and pressure. Solution chemistry is behind the extraction of materials for a variety of applications, for example, making a great cup of coffee.
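As a small illustration of the concentration idea that runs through the unit, here is a minimal sketch of how molar concentration is computed; the table-salt numbers are a hypothetical example of our own, not taken from the unit materials.

```python
def molarity(solute_mass_g, molar_mass_g_per_mol, solution_volume_l):
    # Molar concentration (mol/L) = moles of solute / litres of solution.
    moles = solute_mass_g / molar_mass_g_per_mol
    return moles / solution_volume_l

# Hypothetical example: 58.44 g of NaCl (molar mass about 58.44 g/mol)
# dissolved in enough water to make 0.500 L of solution.
print(molarity(58.44, 58.44, 0.500))  # 2.0 mol/L
```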
- What Is a Solution?
- Solutions and Solubility
- Solution Concentrations
- Analyzing Solutions—Titrations
- Solutions and the Gases above Them—Raoult's Law
- Henry's Law
- Colligative Properties—Vapor Pressure and Osmosis
- Colligative Properties—Freezing and Boiling
- Separation and Purification
- Further Reading
In a discovery that could shed light on the development of the human brain, University of Oregon researchers determined that infants as young as six months old can recognize simple arithmetic errors.
The researchers used puppets to portray simple addition problems. For example, in order to illustrate the incorrect equation 1 + 1 = 1, researchers showed infants one puppet, then added a second. A board was then raised to block the infant’s view of both puppets, and one was removed. When the board was lowered, only a single puppet remained.
To gauge the infants’ ability to detect the error, researchers recorded the number of seconds the babies spent looking at the puppet.
According to the study, babies ranging from six to nine months old looked at incorrect solutions 1.1 seconds longer than correct ones. This extended viewing correlated with EEG measurements showing higher activity in a frontal area of the brain that is known to be involved in error detection in adults. The team’s findings are published in the August 7th online edition of the Proceedings of the National Academy of Sciences.
“This brain system, in adults, allows us to monitor our own performance and even correct it,” said psychologist Michael Posner, the study’s lead author. “We know that infants can’t necessarily correct it, but they do apparently have at least a start of that brain system.”
Posner’s study bolsters the results of a 1992 Yale study that measured length of gaze but did not include EEG measurements, an omission that led many scientists to doubt its conclusions.
Since the babies in Posner’s experiment wore a net of brain-monitoring electrodes, the researchers could pinpoint enhanced brain activity following the presentation of incorrect solutions. Scientists previously thought that the brain system this data highlighted developed later, around two and a half years of age.
The findings could help scientists understand how early control systems are laid down in the brain and may eventually help them to analyze the genetic and experiential factors that influence early brain development, Posner said.
Patricia Kuhl, a professor of speech and hearing sciences at the University of Washington, believes studies like Posner’s can reveal mechanisms in the infant brain that could help doctors monitor its development.
“The more we know about these infant systems,” Kuhl said, “the more likely it is that we might be able to detect impairments early, and intervene while the brain is still being sculpted.”
Posner added that his findings could also clue scientists into the development of the brain’s representation of numbers.
“Some people have always thought that our number system is just something that’s been created by humans and learned by us, then taught to our children,” Posner said.
“But there seems to be a proto-number system—the beginning of a number system—that is available in other animals,” he continued, “and apparently is present in infants as well.”
Originally published August 18, 2006 |
A new study from the Wildlife Conservation Society (WCS) and the Nevada Department of Wildlife (NDOW) has pieced together the last 150 years of history for one of the state's most interesting denizens: the black bear.
The study, which looked at everything from historic newspaper articles to more recent scientific studies, indicates that black bears in Nevada were once distributed throughout the state but subsequently vanished in the early 1900s. Today, the bear population is increasing and rapidly reoccupying its former range due in part to the conservation and management efforts of NDOW and WCS.
Compelled in part by dramatic increases in human/bear conflicts and a 17-fold increase in bear mortalities due to collisions with vehicles reported between the early 1990s and mid-2000s, WCS and NDOW began a 15-year study of black bears in Nevada that included a review of the animal's little-known history in the state.
Over the course of the study, black bears were captured both in the wild and at the urban interface in response to conflict complaints. The captured animals used in the study (adult males and females only) were evaluated for multiple physiological indicators including condition, sex, reproductive status, weight, and age, prior to being released. From the information gathered, the population size in the study area was estimated to be 262 bears (171 males, 91 females). Confirmed sightings and points of capture from 1988 to present were mapped and presented in the report to illustrate current population demographics, and will be used to inform bear management in Nevada.
"It's critical to understand the population dynamics in a given area in order to make informed decisions regarding management," said WCS Conservation Scientist Jon Beckmann. "This includes decisions on everything from setting harvest limits to habitat management to conservation planning in areas where people will accept occupation by bears. We used this long-term study to determine if reported incidences were due to an increasing or expanding bear population, or people moving to where bears are located. The answer is both."
The study area extended from the Carson Range of the Sierra Nevada eastward to the Virginia Range and Pine Nut Mountains, and from Reno south to Topaz Lake--an area collectively referred to as the Carson front. Because many captures were in response to conflicts, the urban interfaces of cities and towns of the Lake Tahoe Basin were included.
Nevada's Black Bear History Unraveled
In looking to integrate information on the historical demographics of black bears into their study, the authors found that little published scientific research or data was available and that the species' history in Nevada went largely ignored until 1987, when complaints arising from sightings and collisions with vehicles began.
Historical records compiled by retired NDOW biologist Robert McQuivey, including old newspaper articles, pioneer journals dating as far back as 1849, and long-unavailable NDOW records, were reviewed and confirmed that black bears were present throughout the state until about 1931. At that point, the authors concluded, "the paucity of historical references after 1931 suggest extirpation of black bears from Nevada's interior mountain ranges by this time."
"The historical records paint a very different picture of Nevada's black bear than what we see today. This new perspective is a good indication of what bear management in this state could involve should the population continue to expand," said the study's lead-author Carl Lackey of NDOW.
The authors believe that while over-hunting and conflicts with domestic livestock contributed to the bear's local extinction in the Great Basin, landscape changes due to clear-cutting of forests throughout western and central Nevada during the mining booms of the late 1800s played an important role as well. But as fossil fuels replaced timber as a heat and energy source, forestry and grazing practices evolved, and reforestation and habitat regeneration occurred in parts of their former range, the bears rebounded.
Using the information gathered in their review of historic documents, the scientists mapped the distribution of black bears within the interior of Nevada during the 1800s and early 1900s. They recommend that historical range maps for the species in North America be revised to include the information produced as part of the study.
The study, Bear Historical Ranges: Expansion of an Extirpated Bear Population, appears in the current online edition of the Journal of Wildlife Management. Co-authors include Carl W. Lackey of the Nevada Department of Wildlife, Jon P. Beckmann of the Wildlife Conservation Society, and James Sedinger of the University of Nevada, Reno. |
Plant reproductive system
- General features of asexual systems
- General features of sexual systems
- Bryophyte reproductive systems
- Tracheophyte reproductive systems
- Variations in reproductive cycles
- Physiology of plant reproduction
plant reproductive system, any of the systems, sexual or asexual, by which plants reproduce. In plants, as in animals, the end result of reproduction is the continuation of a given species, and the ability to reproduce is, therefore, rather conservative, or given to only moderate change, during evolution. Changes have occurred, however, and the pattern is demonstrable through a survey of plant groups.
Reproduction in plants is either asexual or sexual. Asexual reproduction in plants involves a variety of widely disparate methods for producing new plants identical in every respect to the parent. Sexual reproduction, on the other hand, depends on a complex series of basic cellular events, involving chromosomes and their genes, that take place within an elaborate sexual apparatus evolved precisely for the development of new plants in some respects different from the two parents that played a role in their production. (For an account of the common details of asexual and sexual reproduction and the evolutionary significance of the two methods, see reproduction.)
In order to describe the modification of reproductive systems, plant groups must be identified. One convenient classification of organisms sets plants apart from other forms such as bacteria, algae, fungi, and protozoans. Under such an arrangement, the plants, as separated, comprise two great divisions (or phyla)—the Bryophyta (mosses and liverworts) and the Tracheophyta (vascular plants). The vascular plants include four subdivisions: the three entirely seedless groups are the Psilopsida, Lycopsida, and Sphenopsida; the fourth group, the Pteropsida, consists of the ferns (seedless) and the seed plants (gymnosperms and angiosperms).
A comparative treatment of the two patterns of reproductive systems will introduce the terms required for an understanding of the survey of those systems as they appear in selected plant groups.
General features of asexual systems
Asexual reproduction involves no union of cells or nuclei of cells and, therefore, no mingling of genetic traits, since the nucleus contains the genetic material (chromosomes) of the cell. Only those systems of asexual reproduction that are not really modifications of sexual reproduction are considered below. They fall into two basic types: systems that utilize almost any fragment or part of a plant body and systems that depend upon specialized structures that have evolved as reproductive agents.
In many plant groups, fragmentation of the plant body, followed by regeneration and development of the fragments into whole new organisms, serves as a reproductive system. Fragments of the plant bodies of liverworts and mosses regenerate to form new plants. In nature and in laboratory and greenhouse cultures, liverworts fragment as a result of growth; the growing fragments separate by decay at the region of attachment to the parent. During prolonged drought, the mature portions of liverworts often die, but their tips resume growth and produce a series of new plants from the original parent plant.
It is common horticultural practice to propagate desirable varieties of garden plants by means of plant fragments, or cuttings. These may be severed leaves or portions of roots or stems, which are stimulated to develop roots and produce leafy shoots. Naturally fallen branches of willows (Salix) and poplars (Populus) root under suitable conditions in nature and eventually develop into trees. Other horticultural practices that exemplify asexual reproduction include budding (the removal of buds of one plant and their implantation on another) and grafting (the implantation of small branches of one individual on another).
Reproduction by special asexual structures
Throughout the plant kingdom, specially differentiated or modified cells, groups of cells, or organs have, during the course of evolution, come to function as organs of asexual reproduction. These structures are asexual in that the individual reproductive agent develops into a new individual without the union of sex cells (gametes). A number of examples of special asexual agents of reproduction from several plant groups are described in this section.
Airborne spores characterize most nonflowering land plants, such as mosses, liverworts, and ferns. Although the spores arise as products of meiosis, a cellular event in which the number of chromosomes in the nucleus is halved, such spores are asexual in the sense that they may grow directly into new individuals, without prior sexual union.
Among liverworts, mosses, lycopods, ferns, and seed plants, few- to many-celled specially organized buds, or gemmae, also serve as agents of asexual reproduction.
The vegetative, or somatic, organs of plants may, in their entirety, be modified to serve as organs of reproduction. In this category belong such flowering-plant structures as stolons, rhizomes, tubers, corms, and bulbs, as well as the tubers of liverworts, ferns, and horsetails, the dormant buds of certain moss stages, and the leaves of many succulents. Stolons are elongated runners, or horizontal stems, such as those of the strawberry, which root and form new plantlets when they make proper contact with a moist soil surface. Rhizomes, as seen in iris, are fleshy, elongated, horizontal stems that grow within or upon the soil. The branching of rhizomes results in multiplication of the plant. The enlarged fleshy tips of subterranean rhizomes or stolons are known as tubers, examples of which are potatoes. Tubers are fleshy storage stems, the buds (“eyes”) of which, under proper conditions, can develop into new individuals. Erect, vertical, fleshy, subterranean stems, which are known as corms, are exemplified by crocuses and gladioli. These organs tide the plants over periods of dormancy and may develop secondary cormlets, which give rise to new plantlets. Unlike the corm, only a small portion of the bulb, as in lilies and the onion, represents stem tissue. The latter is surrounded by the fleshy food-storage bases of earlier-formed leaves. After a period of dormancy, bulbs develop into new individuals. Large bulbs produce secondary bulbs through development of buds, resulting in an increase in number of individuals.
General features of sexual systems
In most plant groups, both sexual and asexual methods of reproduction occur. Some species, however, seem secondarily to have lost the capacity for sexual reproduction. Such cases are described below (see Variations in reproductive cycles).
|
Geography and Climate
Yakushima is one of the southwest islands of Japan, situated approximately 60 km from Cape Sata-misaki, the southernmost tip of Kyushu. Most of the land is made of uplifted granite, and the island is about 504 km² in area and 132 km in perimeter. Tall mountains dominate the central part of the roughly circular island, including Mt. Miyanoura (1,936 m), the highest peak in the Kyushu region, as well as Mt. Kuromi, Mt. Nagata, Mt. Kurio, and three other peaks that exceed 1,800 meters. Most of the mountains surrounding the center exceed 1,000 meters, earning the island the name "Alps on the Ocean." The population is roughly 13,500, and because the mountains are steep, people live on the flat ground around the coast. Administratively, the islands of Yakushima and Kuchinoerabujima belong to Yakushima-cho, Kumage-gun, in Kagoshima Prefecture.
Constitution, Geological Feature
The rocks lying at the foundation of Yakushima are sedimentary rocks made of sand and mud. They accumulated in an ocean trench approximately 40 million years ago, along with material from landslides triggered by tectonic plate movements. Most of the mountain areas that people climb are made of granite, formed about 15 million years ago when intruded magma cooled and hardened. Because the granite is much lighter than the surrounding rocks, it has pushed the strata upward at the considerably fast rate of several millimeters per year. The outer layer that encompasses the granite consists of metamorphic rocks (hornfels), sedimentary rocks baked by the heat of the magma. These metamorphic rocks are hard and highly resistant to erosion, which has produced many cliffs and waterfalls. A stratum of red volcanic ash can be observed, especially in the northwestern part of Yakushima; it was deposited by a large eruption of the Kikai Caldera about 7,300 years ago. Yakushima continues to rise at the rate of 13 cm in 1,000 years.
The climate and weather in Yakushima are greatly influenced by the Kuroshio Current. Moisture-rich air carried in over the warm current meets the cool temperatures of the high peaks, producing rain clouds that drop a large quantity of rain each year. This is one of the reasons why Yakushima is also known as the "Island of Water." The abundant water of Yakushima sustains all living things on the island, and it has produced magnificent scenery and untouched cedar forests. Rainfall patterns differ by season and by area, so the weather in Yakushima can be quite unpredictable and wild: although the island is tiny, it can be raining heavily in the south while the other side is completely sunny.
The annual mean temperature on the low-altitude plains where most islanders live is approximately 20 ℃. Because of Yakushima's mountains, the village areas are subtropical and warm, whereas the mountaintops lie in a cold zone. It is as if the range of climate zones that stretches the length of the Japanese mainland had been stacked vertically on this tiny island. |
Back: Table of Contents | Planting The Site
CARING FOR THE SITE
Following installation of the native plants, continued maintenance and care for the site is often required to ensure a successful project. The amount and duration of care - mostly watering, fertilization, and insect pest control - will depend on the particular environmental conditions and location of the restoration site.
During the planning phase of the restoration project, it should be determined if it will be necessary to water the plant material for a given time period following the initial installation. The basis for this decision should include considerations of weather, site topography, plant water requirements, logistics, and cost-effectiveness. Also, in general, larger plants require more water to survive than do seeds and smaller plants. If a restoration project is conducted on a very large scale or is in a remote location, then it may not be possible, both logistically and economically, to water the site. When this is the case, one should take advantage of seasonal rainfall patterns and plant seeds and/or plants either right before or during the rainy season. Hydroseeding (or hydraulic seeding), a technique in which seed, water, and nutrients are sprayed over the ground in the form of a slurry, may also be an option on very large sites. Other options to pursue if irrigation is cost-prohibitive include site preparation to remove all competing vegetation (which brings with it other complications such as increased erosion) or mulching to conserve water.
To determine if water is needed at the restoration site, a visual inspection of the plants will usually suffice. Most plants wilt noticeably when water is limited. Leaves can become dull and fade in color, turn yellow, and, in extreme instances, die. Some species of native plants will wilt earlier than others, so these can be used as an early-warning sign of drying conditions. If water does need to be added to the site, only apply an amount equivalent to the average annual rainfall in that area. Anything above that amount would be extraneous to the needs of the native species and an unnecessary cost.
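As a rough sanity check on that ceiling, here is a minimal sketch of the arithmetic; the rainfall figure and planting area are assumptions for illustration, not values from this guide.

```python
# An upper bound on supplemental water: 1 mm of rain over 1 m^2 equals 1 liter.

def water_budget_liters(avg_annual_rainfall_mm: float, area_m2: float) -> float:
    """Maximum supplemental water for the year, in liters."""
    return avg_annual_rainfall_mm * area_m2

# Example: a 2,000 m^2 planting in an area averaging 300 mm of rain per year
print(water_budget_liters(300, 2000))  # -> 600000.0 liters per year, at most
```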
Watering needs are different in dry areas with low humidity. Container plants will usually die if natural water regimes are relied upon during the first year. Potting mixes are prone to drying out quickly in arid climates, and once artificial soil mixes are dry, they resist moistening, resulting in plant death. Plants in arid climates will require irrigation on a regular basis until established, usually until the end of the first growing season. Irrigation should be sufficient to moisten the soil below the bottom of the planting hole.
Methods for water application include basin (flood), furrow, sprinkler, and low-volume, high-frequency (e.g., drip, minisprinkler, or soaker) systems (Harris et al. 1999). The basin and furrow methods offer a low-tech solution to irrigation and can be installed during the site preparation and/or planting phases of the restoration project. With both methods, water is provided to the plants only when the basin or furrows are filled. Sprinkler systems, when properly designed and maintained, can provide uniform water distribution on both flat and hilly terrain. Sprinklers are best used early in the day, when there is little wind and foliage will be able to dry throughout the day. The drying factor is especially important for plants susceptible to water-related diseases. Drip and minisprinkler irrigation apply water slowly and in such a way that only a portion of the soil within the dripline becomes wet. Drip emitters apply water more slowly and to a smaller area than minisprinklers and are better suited to smaller, slow-growing or widely-spaced plants. They also, however, have a greater tendency to clog than do the higher-pressure minisprinklers. With any type of sprinkler or drip irrigation system, equipment breakdown may cause stress on the plants.
The benefits of vegetation in preventing erosion are well documented: roots stabilize and anchor the soil, and live plants and litter increase the soil's absorptive capacity. However, before the newly installed native plants become established, erosion of exposed soil can be a problem. One easy and economical way to prevent erosion during plant establishment is to use weed-free mulch, especially on slope plantings. Weed-free mulch protects the seeds and seedlings against rain and wind and also reduces moisture loss during dry periods. A variety of mulch types can be used, including hay or straw, jute netting, wood fiber, or fiber netting. Other considerations regarding erosion prevention are:
- buffering or filter strips
- diverting surface water runoff away from disturbed soils
- keeping heavy equipment off exposed soil during heavy rain
- planting native grass along drainage channels to slow the rate of runoff
- planting temporary vegetation cover (e.g., annual grasses) on sites that remain exposed during the rainy season
Invasive Species Controls
Following installation of new native plants, controlling the recruitment and spread of invasive plant species is one of the most important elements to ensure the success of a restoration project. Once established, invasive species can outcompete native species, form dense stands, and eventually dominate an entire plant community. Restoration projects that involve earth-moving or alterations to hydrology are particularly vulnerable to the influx and spread of invasive species (WADOE 1993).
Specific methods for invasive species control and eradication are detailed in the "Invasive Weeds" section. It is critical and cost-effective to prevent establishment and spread of new weed invasions during and after the initial site work has been completed. Methods of doing so include the following:
- early detection and eradication of new weed invasions
- containing neighboring weed infestations
- minimizing soil disturbances
- planting native species of the local ecotype
- managing for healthy native plants
- test plots
Early detection and eradication of new weed invasions. If a new infestation is detected at an early stage and the plants are removed before seeds are produced, efforts and resources will be saved. Even if some plants are detected after seed production, but before a large population increase, less work is required than in a full-blown invasion. One method commonly used to prevent weed invasion is to regularly survey the restoration site, removing individual weed plants before they become better established and begin seed production. The weed infestation area should be identified on a map of the site, marked in the field, and continually monitored during subsequent surveys.
Containing neighboring weed infestations. Since restoration sites do not exist in a vacuum and often are situated within a larger disturbed landscape, there is a good chance that weed populations will be found in areas adjacent to or nearby the site. One approach to controlling the spread of invasives is to spray the borders of the infested area with an herbicide. Containment programs are typically designed only to limit the spread of a weed population, and thus can require a long-term commitment to herbicide application.
Minimizing soil disturbance. Most weed species have developed characteristics, such as rapid growth rates and high seed production, that enable them to move into a bare ground site quickly and aggressively. They often are able to outcompete native species in occupying disturbed soil. Because this is the case, it is important to minimize soil disturbance in a restoration project wherever possible.
Planting native species. Eliminating a weed can leave environmental resources available for the reinvasion of the same or different weed species. Revegetation with native plants can prevent reinvasion of undesirable species and can also contain the spread of remnant weed populations.
Managing for healthy native plants. In areas where native species have been planted, it is important to manage the landscape properly so that the native plants remain healthy and strong and weed encroachment is limited.
Test plots. If the project time frame allows, it may be cost effective and worthwhile to carry out recommended treatments on test plots of a smaller scale to see if desired results can be obtained. Monitoring test plots is an excellent planning tool for large-scale restoration attempts.
Following the initial planting, fertilizing native plants is only necessary in extreme cases when the condition of the soil is still in need of repair. This would be in places such as contaminated sites or abandoned mine sites where the topsoil has been completely removed or destroyed. In those instances where the soil is not yet conducive to supporting native plant populations, the revegetation aspect of the restoration plan should be postponed until soil conditions can be improved. This is described in detail in the previous section on Reduced Soil Function. Once the desired soil environment (e.g., pH, nutrient levels, diversity of microorganisms) has been created or restored, then the native plants, being adapted to those particular soil conditions, should not require additional fertilization.
Applying nutrients to a restoration site without first knowing if the soils are deficient can cause adverse effects such as salt buildup in the soil, inhibition of mycorrhizae formation, growth of invasive species, and water pollution. It has also been reported that the addition of even mild fertilizers can cause root dieback and shoot burning in many native species, particularly those that are drought tolerant. If it is decided that fertilization is an option, first take soil samples to determine what nutrients are limited. It is important to keep in mind that the pH of a soil, among many other factors, can greatly affect nutrient levels. Iron and manganese may be less available in alkaline soils, and phosphorous may be limited in acid sandy or granitic soils. Potassium levels may also be low in acid sandy soils. Always remember to keep in mind the specific needs of the native plant community being restored. These plants may be adapted to particular low nutrient conditions. In these cases adding nutrients can reduce the ability of the native species to outcompete weedy species.
It is a good idea to regularly inspect the plants at the restoration site for signs of insect pest damage. Before doing so, however, find out which pests have the greatest potential for infesting the site. Knowledge of the host plants will provide much of this information, since the large majority of pests are host-specific. Keep in mind, though, that some pests are host-specific only at certain times of the year. For example, the woolly apple aphid infests American elms in the winter and then moves to apple trees in the spring and summer; the woolly elm aphid infests serviceberries in the summer and then spends the rest of the year on elms (Harris et al. 1999).
To inspect plants for pest problems, go out to the restoration site on a regular basis and systematically check plant foliage for pests and damage symptoms. A routine should be developed that is efficient for each particular restoration site. As was mentioned before, learn about the problems common to the species on the restoration site and be able to recognize signs of damage caused by pests. Also, it is important to be able to clearly distinguish the pests from beneficial organisms. The use of appropriate tools, such as a hand lens and reference materials, can aid in pest recognition.
If a pest population increases to some level that can no longer be tolerated, then it may be necessary to implement some control practices. Before spraying or introducing a predator population, it is strongly encouraged that the advice of the local extension agency be sought.
Continuous Protection of Restoration Site
Following installation of the native plants in a restoration project, it is necessary to consider what will happen to the site once the project team walks away from it. For example, if the restoration site exists in a rural area or even some urban areas, it is highly probable that there are wildlife populations nearby waiting to forage on all of the newly-installed plant material. As has been documented time and time again, grazing or browsing by domestic or wild animal populations can severely inhibit establishment of native plant populations. Other considerations for protecting the restoration site include erosion control and adapting the management plan to suit changing environmental conditions.
Protection from Grazing or Browsing
Although some matured prairie plantings benefit from occasional or light grazing (effects similar to those produced by prescribed burning), most sites should be protected from grazing or browsing. The most effective method of controlling grazing or browsing of native plant material by wildlife is to prevent access to it. For larger animals, such as deer and cows, fencing the site, plant communities or individual plants can restrict access. The fences should be tall enough to prevent deer from jumping over them and sturdy enough to withstand the weight of the animals leaning or pushing against them. Building fences of chicken wire can also prevent waterfowl grazing, but the exclosures must be small enough so that they are unable to fly in and out of them. It may also be necessary to construct a cover of fencing or other material over the site to keep out smaller birds.
Plants can also be individually protected by installing some sort of physical barrier immediately around their base. For tree seedlings, tree shelters are often used. These are tubes of translucent plastic that fit around the bottom portion of the plant. Tubes of rigid netting are also used. To protect mature trees, chicken wire or hardware cloth can be wrapped around the base of the tree. For protection from rodents which like to eat the bark at the base of young trees, aluminum foil can be wrapped around the base of each tree to a height of around 9 inches.
All protective fences and barriers should be removed later, once the plants have established.
Monitoring is the means of determining how well the native plant project meets its goals and objectives. It also serves a critical function by alerting managers to possible maintenance needs, helping to ensure the continued success of the project.
Development of a monitoring program requires much planning and consideration of one's specific goals and objectives. In fact, all monitoring data should be evaluated relative to the goals and measurable criteria established at the onset of the restoration project. Take, for example, a goal of increasing available wildlife habitat, with the measurable criterion being the establishment of greater than 50 percent cover of native plant species that provide food for wildlife. The monitoring efforts would be directed at measuring the change over time in the percent cover of those native plant species known to be a food source for wildlife. The success of the restoration project would then be evaluated based on whether the monitored cover of those native species was above or below 50 percent.
Monitoring has the dubious honor of being the most forgotten or left-out element in restoration projects. Many restoration projects are resource intensive in the early stages, which makes it easy to commit all of the project budget to planning the project, purchasing the plant material, procuring equipment, site preparation, and putting the plants or seeds into the ground. All too often, not enough thought is given to what might happen to the restoration site after the plants or seeds are installed, and project failure can be the unfortunate result. Monitoring provides a long-term look at the ecological changes occurring after the initial restoration project and enables proactive management to prevent failure of the project. Some examples of factors that can interfere with the success of a restoration project include invasion of noxious weeds or invasive plants, intense browsing or grazing by wildlife, failure of introduced plantings due to drought conditions, acts of nature that severely damage restored areas, and damage resulting from human trespass.
Techniques for vegetation monitoring can include a cursory visual inspection of installed plants or a more detailed study of plant species or groups of similar plant species using randomly or systematically-placed quadrats or other sampling units. A combination of both techniques may be appropriate for most planting sites: monthly site checks to quickly ensure that plants are healthy and are not being harmed by something, and more detailed assessments once or twice a year to examine vegetation health, growth, and establishment in order to monitor project development and success.
The choice of specific monitoring methods for the more detailed assessment will depend on the type and density of vegetation that is being restored. If the planting involved mainly woody vegetation where it is easy to relocate individual plants, assessments may involve counting numbers of surviving versus dead individuals and measurements of growth such as height, stem width, and numbers of new branches. If the project involved planting of herbaceous plants or mainly seeding, it may be better to establish monitoring plots throughout the site. Ideally all the monitoring plots combined should cover at least five percent of the total project area. They should be placed so that they can provide a fairly accurate picture of the success of the overall site. This may mean stratifying the site (dividing it up into different sections based on site differences) and then randomly placing a proportional number of sampling plots within each section. Within the plots some of the measures of vegetation that could be used include diversity, density, percent cover, frequency, and biomass. Each of these is explained briefly below. They should be evaluated in comparison with reference areas and take into consideration the natural dynamics of an ecosystem over time.
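A minimal sketch of that plot-allocation arithmetic, assuming hypothetical section areas and a 25 m² plot size; the five percent coverage target is the only figure taken from the text above.

```python
import math

# Allocate monitoring plots proportionally across site sections (strata) so that
# all plots combined cover at least 5% of the total project area.

def allocate_plots(section_areas_m2: dict, plot_area_m2: float,
                   coverage_fraction: float = 0.05) -> dict:
    """Number of plots per section, proportional to section area."""
    total_area = sum(section_areas_m2.values())
    total_plots = math.ceil(total_area * coverage_fraction / plot_area_m2)
    # Per-section rounding is approximate; adjust by hand if the sum drifts.
    return {name: round(total_plots * area / total_area)
            for name, area in section_areas_m2.items()}

# Example: a site stratified into three sections, sampled with 25 m^2 plots
sections = {"wet meadow": 4000, "upland slope": 3000, "riparian edge": 1000}
print(allocate_plots(sections, plot_area_m2=25))
# -> {'wet meadow': 8, 'upland slope': 6, 'riparian edge': 2}
```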
If time and funding prevent intensive monitoring of a restoration site, at least take photographs of before and after restoration at the same spot. These photos can also be retaken in future years to help chart the progress of a site.
Diversity measures both the absolute number of species in an assemblage, community or sample, as well as their relative abundance. Low diversity refers to few species and/or unequal abundances, while a measure of high diversity corresponds to many species with equal abundances.
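One common way to put a single number on both richness and evenness is the Shannon index; this is a minimal sketch with made-up species counts, offered as an illustration rather than a prescribed method.

```python
import math

def shannon_diversity(counts):
    """Shannon index H' from per-species counts; higher means more diverse."""
    total = sum(counts)
    proportions = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in proportions)

# Four species with equal abundances score higher than the same four species
# with one dominant and three rare.
print(shannon_diversity([25, 25, 25, 25]))  # ~1.39
print(shannon_diversity([85, 5, 5, 5]))     # ~0.59
```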
Density is the number of individuals in a given unit of area. While density is a commonly-used metric due to its being easily obtained and understood, it does have some limitations. The major limitation is that the critical unit of measurement is the individual plant, which may be difficult to identify in some instances. For example, rhizomatous perennials grow by vegetative spread, making it difficult to determine whether one or several stems belong to a single individual. Therefore, it is important to first determine the individual unit of interest, making sure it is a unit easily identifiable in the field.
Percent cover typically refers to the vertical projection of vegetation or litter onto the ground surface when viewed from above. This measure is considered as an approximation of the area over which a plant exerts its influence on other parts of the ecosystem, or its dominance relative to other plants or species. Variations of this concept include vegetation cover, the total cover of vegetation on an area; crown cover, the spatial extent of tree or shrub canopies; ground cover, the cover of plants such as shrubs, grasses, and herbs as well as cover of litter, bare ground, and rock; and basal cover, the cover of the basal portion of plants. Percent cover is one of the most commonly measured vegetation attributes and provides a quantitative measure for species that can not be easily or accurately measured by density or biomass. When the cover of individual species or species guilds are measured separately, the total cover within a sample may exceed 100 percent due to overlap of the plant crowns or foliage.
Frequency is the proportion of samples in which a species or guild occurs. It is a useful means of detecting differences in vegetation structure between two or more plant communities and is sensitive to change over time (The Nature Conservancy 1997). If a species has a frequency of 20 percent, then it should occur, on average, in 20 out of every 100 quadrats examined (one in five). The measure is obtained simply by recording whether a species is present in a series of quadrats. A major benefit of collecting frequency data is that a lot of data can be gathered within a relatively short time period. However, the information gathered from this method is limited in that it does not indicate the relative dominance or abundance of a species in the community.
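A minimal sketch of turning raw quadrat records into frequency and mean percent cover; the species names and cover values are hypothetical.

```python
# Each record maps species -> percent cover observed in that quadrat
# (species absent from a quadrat are simply omitted).
quadrats = [
    {"Festuca idahoensis": 30, "Achillea millefolium": 5},
    {"Festuca idahoensis": 10},
    {"Achillea millefolium": 15},
    {"Festuca idahoensis": 40},
]

species = {name for q in quadrats for name in q}
n = len(quadrats)
for sp in sorted(species):
    present = sum(1 for q in quadrats if q.get(sp, 0) > 0)
    mean_cover = sum(q.get(sp, 0) for q in quadrats) / n
    print(f"{sp}: frequency {present / n:.0%}, mean cover {mean_cover:.1f}%")
# Achillea millefolium: frequency 50%, mean cover 5.0%
# Festuca idahoensis: frequency 75%, mean cover 20.0%
```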
Biomass is measured infrequently in vegetation monitoring, mostly because it involves some degree of destructive sampling. It can, however, provide a good measure of seasonal and annual changes in growth.
Maintenance Using Prescribed Burning
Future management of restored prairie and other fire-dependent plant communities can utilize prescribed burning as a tool. Planning ahead for the use of fire requires a firebreak; often this can be provided by planting a green break of short, cool-season grasses around the perimeter of the project site. Green breaks have proven to be valuable in reducing the time and expense of maintaining these sites.
Adaptive management is a systematic approach for improving management by learning from past mistakes. Management objectives and actions are continuously adjusted as new information is gathered through monitoring and more is known about which management techniques work and which do not. This approach to management is especially applicable in a restoration context, where environmental conditions are changing rapidly and there still is much uncertainty about how to design and implement a successful project. For example, a restoration site existing in a highly urbanized area was planted with native species, and, due to the absence of wildlife in the area, no plans for protection of the plant material from wildlife were made. Then in the second year of the project, some deer moved onto the site and proceeded to graze on the tender shoots that had been planted the year before. If the original project management plan was not modified to install fencing or some other physical barrier around the site, then chances are good that the deer would cause considerable damage to the new plant material. Adaptive management of a restoration site can lead to more effective decision making and increase the likelihood of project success.
Next: Final Thoughts |
What Is Lupus
Individuals researching lupus can find a plethora of websites giving detailed information about the condition. Lupus is, at its core, a disease of the immune system: the immune system attacks the body's own structures, mistaking them for foreign and harmful material. Excess antibodies are produced that attach to structures in the body, causing pain, inflammation, and damage.
Researchers, scientists, and medical professionals have not been able to determine the exact causes of this condition. However, it is believed that a multitude of factors are involved, including genetic factors, environmental factors, and hormonal problems. Other aspects that may contribute include stress, certain kinds of medications, diet, some bacteria and viruses, and exposure to light, particularly ultraviolet light.
Lupus also afflicts certain groups of people more than others. Around 90% of individuals suffering from lupus are women. Likewise, people belonging to certain communities are more likely to suffer from this condition: Latinos and African Americans are more likely to experience it than Caucasians.
Lupus can be better understood through its different types and the symptoms associated with each. Systemic lupus is the type in which different systems or organs within the body may be involved and affected. In discoid lupus, a scaly, red rash may be seen on areas exposed to the sun, including the arms, face, scalp, legs, and trunk. Finally, in drug-induced lupus, reactions to some medications result in the development of the illness.
People suffering from lupus may experience the associated symptoms to varying degrees: in some patients the symptoms are quite severe, while other individuals have intermittent flare-ups. Common symptoms include weakness, fatigue, and lethargy, along with joint and muscle pain and swelling; fever and skin rashes may also occur.
People suffering from lupus may also experience mouth and nose ulcers. In some cases the sac that surrounds the heart becomes inflamed, and in some cases kidney problems also occur. Lupus cannot be diagnosed easily; diagnosis is based on physical evaluation of the symptoms along with laboratory tests.
|
26 August 2013
Source: The Independent
Author: Lewis Smith
It might sound like a fisherman’s tale, but trawlers have to work 25 times harder to catch the same quantity of fish today as they did 150 years ago, scientists have calculated. Fish populations in UK coastal waters are a fraction of what they used to be, and by analysing historical records researchers have calculated that for the effort put in, a modern trawler catches only a small fraction of what its sail-powered predecessors could expect to catch.
The introduction of bottom trawling, whereby metal or wooden bars fixed to nets are dragged over the seabed, was identified as the key factor in the dramatic decline in fish numbers in UK waters. Scientists used the records of two 19th century Royal Commissions to provide the first quantitative estimates of the impact of bottom trawling – which is more efficient but less selective, and also damages the seabed habitats relied on by many marine species – and reveal that it was a “turning point” for UK fish stocks.
They calculated that, for the same fishing effort, modern trawlers catch only one twenty-fifth as many fish as their counterparts did in 1860. |
Is your elementary school child learning about the water cycle during science? For first and second graders, this can be a confusing topic and hard to remember. The classroom teacher will review the material, but your child may really benefit from using a teaching aid at home. Turtle Diary created interactive science games to reinforce what your child is learning in school. They offer water cycle games to enrich your child's learning. The games are so much fun that your child will want to play them even on weekends.
With each game, your child is given a brief animated lesson. This lesson covers evaporation and condensation, the different bodies of water, and the different forms of water. One of the first exercises is to fill in a diagram depicting the water cycle. As your child chooses the answers, a narrator expands on that answer. Due to the reiteration of the subject matter, your child will pick up the material quickly.
"The Water Cycle" game and the other games can be played multiple times so your child can absorb the information. Repetition is the key to learning in children. Other games offered to expand on the subject include, "Basketball Game", "Collect the Water Drops", and "Drive the Water Craft".
As your child plays, critical thinking and memory skills will improve. Each game has challenging levels that your child can work up to by mastering the level before it. Your child should never be bored with so many variations.
By using water cycle games for kids, it is easy to understand the subject matter and the bright colors and animation keep your child focused. Your child may ask you to view the lessons with them. This can be a great teaching moment and quality time for both of you. |
As well as being lots of fun, Scouting is a values-based programme with a code of conduct. The Scout Promise and Law help instill the values of good behaviour, respect for others, and honesty. Members of a scout group (regardless of age) learn skills that will last a lifetime, including basic outdoor skills, first aid, citizenship skills, leadership skills, and how to get along with others; for example, just ask anyone who has been a Hawke Kea how to build a campfire. Over the last century, Scouting has instilled in young women and men the values and knowledge that they will need to become leaders in their communities and country; the list of Scouts whose names are known internationally is extensive.
The mission of Scouting is to contribute to the education of young people, through a value system based on the Scout Promise and Law, to help build a better world where people are self-fulfilled as individuals and play a constructive role in society. This is achieved by:
- involving them throughout their formative years in a non-formal educational process
- using a specific method that makes each individual the principal agent of his or her development as a self-reliant, supportive, responsible and committed person
- assisting them to establish a value system based upon spiritual, social and personal principles as expressed in the Promise and Law.
Non-formal education is organized educational activity outside the established formal education system, intended to serve an identifiable learning clientele and identifiable learning objectives. Scouting is clearly distinguished from a purely recreational movement, though recreation plays a large part in its activities.
The Scout method is defined as “a system of progressive self-education through:
- A promise and law.
- Learning by doing.
- Membership of small groups (for example the patrol), involving, under adult guidance, progressive discovery and acceptance of responsibility and training towards self-government directed towards the development of character, and the acquisition of competence, self-reliance, dependability and capacities both to cooperate and to lead.
- Progressive and stimulating programme of varied activities based on the interests of the participants, including games, useful skills, and service to the community, taking place largely in an outdoor setting in contact with nature.”
The Scouting method is best seen when young people, in partnership with adults, are:
- enjoying what they are doing;
- learning by doing;
- participating in varied and progressive activities;
- making choices for themselves;
- taking responsibility for their own actions;
- working in groups;
- taking increasing responsibility for others;
- taking part in activities outdoors;
- sharing in prayer and worship; and
- making and living out their Promise |
A magnet is an object that produces a magnetic field. This magnetic field is responsible for attracting ferromagnetic materials such as iron and for attracting or repelling other magnets.
Magnets are of two types:
Natural Magnets and Artificial Magnets
- Natural Magnets are those magnets which are naturally found in mines. Due to their odd shape and weak attracting power, natural magnets are rarely used.
- Artificial Magnets are those magnets which are artificially prepared. These magnets exist in various shapes and sizes like a bar magnet, horse-shoe magnet or a magnetic needle.
The places in a magnet where its attracting power is maximum are called poles and the place where the attracting power is minimum is called neutral region. The distance between the poles along the axis of a magnet is called its effective or magnetic length. The line joining the two poles of magnet is called magnetic axis and the vertical plane passing through the axis of a freely suspended or pivoted magnet is called magnetic meridian.
Pole Strength: the ability of a magnetic pole to attract magnetic materials towards it is known as its pole strength.
Pole Strength = Magnetic force / Magnetic Induction
The greater the number of unit poles in a magnetic pole, the greater its strength. The unit of pole strength is the ampere-meter. A pole of a magnet attracts an opposite pole and repels a like pole. However, the sure test of polarity is repulsion, not attraction, as attraction can take place between opposite poles or between a pole and a piece of unmagnetised magnetic material due to the ‘induction effect’.
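A quick worked example of the pole-strength relation above; the force and induction values are assumed for illustration, not taken from the text.

```python
# Pole strength m = F / B: force on the pole divided by the magnetic induction there.
force_newtons = 6e-3        # force experienced by the pole, in newtons (assumed)
induction_tesla = 2e-3      # magnetic induction at the pole, in tesla (assumed)

pole_strength = force_newtons / induction_tesla
print(pole_strength, "ampere-meters")  # -> 3.0, consistent with the ampere-meter unit
```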
At the poles of a magnet the magnetic field is stronger because the lines of force there are crowded together; away from the poles the field is weaker. The magnetic field intensity is therefore proportional to the density of lines of force, that is, the number passing through a unit area.
Magnetic field: The space around a magnet in which a net force acts on a magnetic test pole is known as magnetic field or the space around a magnet in which a torque acts on a magnetic needle is known as a magnetic field.
Magnetic Flux: The number of magnetic lines of forces passing through unit normal area is defined as magnetic induction whereas the number of lines of force passing through any area is known as magnetic flux.
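A minimal numeric illustration of the distinction between induction and flux; the field value, area, and angle are assumptions for the example.

```python
import math

# Induction B is flux per unit normal area; the flux through a surface of area A
# whose normal makes an angle theta with B is  phi = B * A * cos(theta).
B_tesla = 0.5               # magnetic induction (assumed)
area_m2 = 0.02              # surface area (assumed)
theta = math.radians(60)    # angle between B and the surface normal (assumed)

flux_weber = B_tesla * area_m2 * math.cos(theta)
print(flux_weber, "Wb")     # -> 0.005 Wb (approximately)
```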
Properties of Magnets:
- If a magnet is dipped into iron filings, the filings cling to it, most at the ends and least in the middle.
- The regions at the ends of the magnet, where the attraction of the iron filings is maximum and hence the magnetism is maximum are called poles.
- A bar magnet freely suspended through its centre of gravity always stays in the north-south direction.
- The end of the magnet pointing to geographic north is called ‘North Pole’ and the end pointing south is called ‘South Pole’.
- Like poles repel each other and unlike poles attract each other.
- A magnet induces magnetism in magnetic materials such as in a piece of iron and steel.
- An isolated magnetic pole does not exist; poles always come in pairs.
- A magnet can lose its magnetic properties through beating, mechanical jerks, heating, and the lapse of time.
- The pole strength of a magnet’s two poles is the same.
For more details you can visit our website at http://www.helpwithassignment.com/physics-assignment-help and http://www.helpwiththesis.com |
Federalism is a system in which there is a constitution that divides powers in a country between a national government and lower levels of governments. This is the sort of system that the United States has.
In the United States, the federal government and the state governments each have their own set of rights and powers. The federal government cannot simply tell the state governments what to do in all cases. For example, the federal government cannot tell a state to lower its income tax rates. At the same time, however, the state governments cannot tell the federal government what to do in all cases. For example, the state governments do not have the right to tell the federal government how to regulate commerce between the states.
Federalism is typically put into place in countries that are large and/or countries in which the population differs in important ways from region to region. The US Constitution imposed a federal system because the country was very big and because the states were very different from one another on issues such as slavery.
|
Cell Cookie Lesson Plan
Learners review the structure and functions of plant and animal cells. They use various types of materials such as sugar cookies and cake frosting for this lesson.
Cell Size and Shape; Diffusion and Osmosis Processes
Use salmon eggs as a cell model for demonstrating the movement of water over concentration gradients. Junior scientists examine the same process microscopically with an onion cell. They use a thistle tube and a semipermeable membrane to...
9th - 12th Science
Molecules and Fuel Cell Technology
A fuel cell is where the jailer keeps gas guzzlers. Scholars review chemical reactions, chemical bonds, and chemical structure in order to apply these concepts. Participants construct fuel cell kits, using electrolysis to run the car and...
6th - 12th Science CCSS: Adaptable |
The Restoration period and the Jacobite war
Most significant of the events of the Restoration was the second Act of Settlement (1662), which enabled Protestants loyal to the crown to recover their estates. The Act of Explanation (1665) obliged the Cromwellian settlers to surrender one-third of their grants and thus provided a reserve of land from which Roman Catholics were partially compensated for losses under the Commonwealth. This satisfied neither group. Catholics were prevented from residing in towns, and local power, in both borough and county, became appropriated to the Protestant interest. But Protestantism itself became permanently split; as in England, the Presbyterians refused to conform to Episcopalian order and practice and, in association with the Presbyterians of Scotland, organized as a separate church.
Under James II, antagonism to the king’s Roman Catholicism triggered a reversal of the tendencies of the preceding reign. After his flight from England to France in 1688, James crossed to Ireland, where in Parliament the Acts of Settlement and Explanation were repealed and provision was made for the restoration of expropriated Catholics. When William III landed in Ireland to oppose James, the country divided denominationally, but the real issue was land, not religion. After his defeat at the Battle of the Boyne in 1690, James fled to France, but his Catholic supporters continued in arms until defeated at Aughrim and obliged to surrender in 1691 at Limerick. However, James’s supporters secured either the right to go overseas or, if they accepted William’s regime, immunity from discriminatory laws. But civil articles to secure toleration for the Catholics were not ratified, and later Irish leaders were thus enabled to denounce the “broken treaty” of Limerick. Immediately after Limerick, the Protestant position was secured by acts of the English Parliament declaring illegal the acts of King James’s Parliament in Ireland and restricting to Protestants membership of future Irish Parliaments. The sale of the lands forfeited by James and some of his supporters further reduced the Catholic landownership in the country; by 1703 it was less than 15 percent. On this foundation was established the Protestant Ascendancy.
The 18th century
The Protestant Ascendancy was a supremacy of that proportion of the population, about one-tenth, that belonged to the established Protestant Episcopalian church. They celebrated their position as a ruling class by annual recollections of their victories over their hated popish enemies, especially at the Battle of the Boyne, which has been commemorated on July 12 with parades by the Orange Order from the 1790s until today.
Not only the Catholic majority but also the Presbyterians and other Nonconformists, whose combined numbers exceeded those of the established church, were excluded from full political rights, notably by the Test Act of 1704, which made tenure of office dependent on willingness to receive communion according to the Protestant Episcopalian (Church of Ireland) rite. Because of their banishment from public life, the history of the Roman Catholic Irish in the 18th century is concerned almost exclusively with the activities of exiled soldiers and priests, many of whom distinguished themselves in the service of continental monarchs. Details of the lives of the unrecorded Roman Catholic majority in rural Ireland can be glimpsed only from ephemeral literature in English and from Gaelic poetry.
The Protestant Ascendancy of 18th-century Ireland began in subordination to that of England but ended in asserting its independence. In the 1690s commercial jealousy impelled the Irish Parliament to destroy the Irish woolen export trade, and in 1720 the Declaratory Act affirmed the right of the British Parliament to legislate for Ireland and transferred to the British House of Lords the powers of a supreme court in Irish law cases. By the end of the first quarter of the 18th century, resentment at this subordination had grown sufficiently to enable the celebrated writer Jonathan Swift to whip up a storm of protest in a series of pamphlets over the affair of “Wood’s halfpence.” William Wood, an English manufacturer, had been authorized to mint coins for Ireland; the outcry against this alleged exploitation by the arbitrary creation of a monopoly became so violent that it could be terminated only by withdrawing the concession from Wood.
Nevertheless, it was another 30 years before a similar protest occurred. In 1751 a group was organized to defeat government resolutions in the Irish Parliament appropriating a financial surplus as the English administrators rather than the Irish legislators saw fit. Although in 1768 the Irish Parliament was made more sensitive to public opinion by a provision for fresh elections every eight years instead of merely at the beginning of a new reign, it remained sufficiently controlled by the government to pass sympathetic resolutions on the revolt of the American colonies.
The American Revolution greatly influenced Irish politics, not least because it removed government troops from Ireland. Protestant Irish volunteer corps, spontaneously formed to defend the country against possible French attack, exerted pressure for reform. A patriotic opposition led by Henry Flood and Henry Grattan began an agitation that led in 1782 to the repeal of the Declaratory Act of 1720 and to an amendment of Poynings’s Law that gave the right of legislative initiative to the Irish Parliament (which under the law was subject to the control of the English king and council). Many of the disadvantages suffered by Roman Catholics in Ireland were abolished, and in 1793 the British government, seeking to win Catholic loyalty on the outbreak of war against revolutionary France, gave them the franchise and admission to most civil offices. The government further attempted to conciliate Catholic opinion in 1795 by founding the seminary of Maynooth to provide education for the Catholic clergy. But the Protestant Ascendancy resisted efforts to make the Irish Parliament more representative.
The outbreak of the French Revolution had effected a temporary alliance between an intellectual elite among the Presbyterians and leading middle-class Catholics; these groups, under the inspiration of Wolfe Tone, founded in 1791 a radical political club, the Society of United Irishmen, with branches in Belfast and Dublin. After the outbreak of war with revolutionary France, the United Irishmen were suppressed. Reinforced by agrarian malcontents, they regrouped as a secret oath-bound society intent on insurrection. Wolfe Tone sought military support from France, but a series of French naval expeditions to Ireland between 1796 and 1798 were aborted. The United Irishmen were preparing for rebellion, which broke out in May 1798 but was widespread only in Ulster and in Wexford in the southeast, where, despite the nonsectarian ideals of its leaders, it assumed a nakedly sectarian form resulting in the slaughter of many Protestants. Although the rebellion failed and was savagely suppressed, the threat to British security posed by the alliance between their French enemies and the Irish rebels prompted the British government to tighten its grip on Ireland. The prime minister, William Pitt the Younger, accordingly planned and carried through an amalgamation of the British and Irish parliaments, merging the two kingdoms into the United Kingdom of Great Britain and Ireland. Despite substantial opposition in the Irish Parliament to its dissolution, the measure passed into law, taking effect on Jan. 1, 1801. To Grattan and his supporters the union of Ireland and Great Britain seemed the end of the Irish nation; the last protest of the United Irishmen was made in Robert Emmet’s futile uprising in Dublin in 1803. |
What is Chemotherapy?
Chemotherapy is a general term for treatments that use chemical agents (drugs) to kill cancer cells. Many different kinds of drugs are used, either alone or in combination, to treat different cancers. The specific drug or combination used is chosen to best combat the type and extent of cancer present.
Chemotherapy drugs are tested against various forms of cancer in an effort to find out which drugs work against that particular type of cancer. Multiple drugs, each individually effective against a certain cancer, are often combined to try and maximize the effect against the cancer. Drugs are combined so that there are few overlapping side effects, to make the treatment more tolerable. These combinations are then tested in clinical trials to see how effective they are. If a combination works better than the current "standard" treatment, it will become the new standard therapy.
Chemotherapy drugs are given for several reasons:
- To treat cancers that respond well to chemotherapy
- To decrease the size of tumors for easier and safer removal by surgery
- To enhance the cancer-killing effectiveness of other treatments, such as radiation therapy
- In higher dosages, to overcome the resistance of cancer cells
- To control the cancer and enhance the patient's quality of life
Types of Chemotherapy Drugs
Drugs that generally kill cancer cells are referred to as cytotoxic agents.
Common types of cytotoxic chemotherapy drugs include:
- Alkylating agents modify/damage cancer cell DNA and block the replication of DNA, therefore interfering with the growth of cancer cells.
- Antimetabolites block the enzyme pathways needed by cancer cells to live and grow.
- Antitumor antibiotics block certain enzyme and cancer cell changes, thus affecting DNA.
- Mitotic inhibitors slow cancer cell division or hinder certain enzymes necessary in the cell reproduction process.
- Nitrosoureas impede enzymes that repair DNA.
Other Chemotherapy Drugs
Other drugs used in cancer therapy include:
- Hormonal agents target the hormonal processes that may stimulate cancer cell growth and/or survival.
- Biological agents affect natural processes that may stimulate cancer cell growth and survival.
- Immunotherapy is intended to boost the recognition of cancer cells by the body's immune system, thereby helping the body to kill cancer cells.
- Cellular therapy involves the use of immunologic cells that selectively destroy cancer cells.
- Signal transduction inhibitors are given to disrupt abnormal processes present within cancer cells and are necessary for the growth or survival of cancer cells.
- Radiopharmaceuticals are substances that have been marked with radioactive markers to selectively deliver radiation therapy to cancer cells.
- Anticancer antibodies are specially engineered antibodies given with the goal of selectively targeting cancer cells for removal by the immune system.
- Anticancer vaccines contain agents intended to help the immune system more readily recognize cancer cells as foreign (and thus attack them).
- Anticancer viral therapies involve giving viruses to patients with the hope that the virus will selectively kill cancer cells. (This treatment is currently highly experimental.)
- Gene therapies introduce DNA/genetic material into cancer cells in hopes of either restoring them to normal or killing them. (This treatment is currently highly experimental.)
How are chemotherapy drugs given?
Chemotherapy is given in different ways depending on the cancer type and the drugs used. Methods of giving chemotherapy drugs include:
- Intravenous (IV): injected into a vein
- Intramuscular (IM): injected into a muscle
- Intraperitoneal (IP): injected into the abdominal cavity
- Intracavitary (IC): injected into a body cavity
- Subcutaneous (sub.q.): injected just under the skin
- Oral (PO): taken as a pill or a liquid to be swallowed
How Chemotherapy Works
Chemotherapy kills rapidly dividing cells. Cancer cells often multiply more rapidly than normal cells. Cancer cells are also less able to recover from the toxic effects of chemotherapy than normal cells are. Normal cells that divide rapidly, such as hair or blood cells, are also killed by chemotherapy. This results in common side effects such as hair falling out and blood counts dropping.
Risks of Chemotherapy
- Hair loss
- Nausea and vomiting
- Nerve pain
- Muscle pain
- Mouth sores
- Temporary loss of menstrual periods
- Decrease in red blood cells and/or white blood cells
- Early menopause/loss of fertility
- Weight gain
- Heart disorders
- Sexual dysfunction
- Urinary incontinence
- Cognitive complaints (e.g., memory lapses, slower processing of information)
Other Treatments
Molecularly Targeted Therapy
Introduction to Basic Unix System Administration
Access to the Unix system, to files and directories
Every file and every directory has 3 types of access (read access, write access and execute access) for 3 categories of users: user, group and other. The first category is the owner of the file. The second contains access rights for a group of users. The third set of access rights is for any other user (not being the owner and not belonging to the group having access rights to the file or directory).
With the -l option (long list) of ls, you can find out the access rights for any given file or directory:
tille:~> ls -l verlanglijst
-rw-rw-r--   1 tille    tille    200 Apr 13 10:23 verlanglijst
The file verlanglijst is owned by user tille, who has a separate group (the fact of each user having his own group is common on some newer Unix systems). It is readable and writeable for the user tille and other users that may be in group tille, and every other user can read the file.
The types of access have a value:
read access: value 4
write access: value 2
execute access: value 1
The chmod command (change mode) uses these values by making the sum of rights given to each group, thus obtaining 3 numbers between 0 and 7. In the above example the file verlanglijst would have a value of 664.
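For instance, the file verlanglijst from the listing above works out as follows (a small worked example using the same file; the exact listing will of course depend on your system):

user (tille) : read + write = 4 + 2 = 6
group (tille): read + write = 4 + 2 = 6
other        : read         = 4

tille:~> chmod 664 verlanglijst
tille:~> ls -l verlanglijst
-rw-rw-r--   1 tille    tille    200 Apr 13 10:23 verlanglijst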
full access to everybody:
chmod 777 filename
share a file with users in your group:
chmod 775 filename
to share a directory with other users in your group without giving them opportunity to rename, remove or add files:
chmod 755 dirname
to protect files from other users:
chmod 700 file
to prevent yourself from accidentally removing, renaming or deleting files in a directory:
chmod 500 dirname
to make a private file that only you can edit:
chmod 600 file
to protect a file from accidental editing:
chmod 400 file
to let users of your group edit a file while keeping it unaccessable for any other user:
chmod 660 file
This is a simple explanation of chmod. In the manual, you will see that there are actually 4 octal digits specifying security on a file (the extract from the chmod manual is not reproduced here).
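As a rough sketch of that fourth, leading digit (check man chmod on your own system for the authoritative description), it encodes the special permission bits: setuid (4), setgid (2) and sticky (1). For example:

chmod 4755 program      # setuid: the program runs with its owner's privileges
chmod 2775 shareddir    # setgid: files created in shareddir inherit the directory's group
chmod 1777 /tmp         # sticky bit: users may only remove their own files in /tmp

Here program and shareddir are just placeholder names, not files mentioned elsewhere in this text.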
Some Unix systems provide extra permission facilities which go beyond the standard Unix file permissions. Examples are filesystem-specific attributes (e.g. on Linux ext2 filesystems, files can have extra restrictions such as append-only, compressed, immutable or undeletable) and Access Control Lists (e.g. on Solaris). Type man chattr or consult your vendor's system-specific documentation.
Changing user or group ownership of a file is done with the GNU chown command (change owner). Although both types of ownership are changed with the same command, they are independent of each other. E.g. you need not be a member of the group that owns the file in order to be able to change it. Your own group will be considered as "other", and if permissions allow, you can change the file.
User and group ownership can be changed in one command:
chown newuser:newgroup file
See man chown for more.
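Because the two kinds of ownership are independent, you can also change just one of them. A few illustrative examples (newuser, newgroup and dirname are placeholders, not names taken from the text above):

chown newuser file                  # change only the owner
chown :newgroup file                # change only the group
chown -R newuser:newgroup dirname   # apply the change to a whole directory tree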
When you know the password of another user's account, you can present yourself to the system with that user's permissions using the su command (switch user). E.g. the intranet website of your company is managed by a special user called "www". In order to change the site, use
su - www
You will be prompted to enter the password for user "www". After the authentication process, you are working on the system using the permissions of user "www". Check with the id -a command:
[tille@rincewind tille]$ su - www
Password:
[www@rincewind www]$ id -a
uid=501(www) gid=501(www) groups=501(www)
So every file is owned by somebody. And so is every process. If you want to handle a file or a process, you have to be the owner. It is clear that some actions need to be undertaken to circumvent this situation. Who will clean up the mess? Who will modify the system files and services? On a Unix system, this force is called the "superuser" or "root".
The root account should always be protected with a password, and the root user is not obliged in any way to communicate this to the other users. This prevents people from reading each other's mail, from harassing other people and generally prevents a great deal of accidents.
The root user (system administrator) should only use root status when necessary, and only with full concentration. Root status gives full control over the system, so you should be careful when "being" root. Should you need to become root, always log in as a normal user and then use the su - (switch user) command, which will give you root status when no options are given. When connecting to a system over the network, use ssh (see above: connecting to a system) if you want to log in directly with the root account.
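As a minimal illustration, assuming you do know the root password (prompts and numeric IDs will differ from system to system):

tille:~> su -
Password:
root:~# id -a
uid=0(root) gid=0(root) groups=0(root)
root:~# exit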
In this document, we'll assume that you don't know the password for the root account. Almost any command discussed in this document can be executed without superuser status. |
1. Talk with the student openly about stuttering but don't make a big deal about it. Acknowledge the problem with her and inquire about what classroom activities are more difficult for her to speak in. Ask the student for some suggestions for things that would help to manage her speech in the class and discuss them with her.
2. Allow the student who stutters plenty of time to answer questions in class. Reduce time pressure in the classroom as much as possible.
3. Don't supply words or finish a sentence when the student is having trouble. No one likes words put in his or her mouth. And, of course, if you guess the wrong word, the difficulties multiply.
4. Don't ask the student to substitute an easy word for a hard one as this will only increase the fear of certain words and phrases.
5. Refrain from making comments such as "Slow down," "Relax," "Think before you try to speak," etc. Such advice can feel demeaning rather than constructive, and it does not help.
6. Use a random method to call on students in the class rather than going systematically up and down rows. Making a person who stutters wait his turn in this way greatly increases apprehension or tension.
7. Be flexible (when necessary) in the way a person who stutters is required to participate in classroom activities. For example, if the student has a very difficult time talking in front of the whole class, could smaller groups be set up for reading and presentations? This does not mean the person who stutters should be excused from or avoid participating in class, only that she might participate differently when necessary.
8. Allow plenty of opportunities for the student to speak in class on days when speech is easier. Most people who stutter have "good" and "bad" days; capitalize on the good ones.
9. Praise the student for participating verbally in classroom activity. But praise what they say, not how they say it.
10. Maintain normal eye contact with the student and project a relaxed body language.
11. After a dysfluent utterance, repeat back the content of what the student said. This will ensure understanding and reduce the student's negative memory of the dysfluency.
12. Discourage students from making fun of someone who stutters; discuss stuttering in the class, if the student who stutters feels comfortable with this.
13. Discuss with the student, the speech therapist, and the parents how best to approach the management of this particular student's stuttering and use of speech techniques in the classroom. |
Lucifer (“light-bringer”) was a Latin name for the planet Venus as the morning star in the ancient Roman era, and is often used for mythological and religious figures associated with the planet. Due to the unique movements and discontinuous appearances of Venus in the sky, mythology surrounding these figures often involved a fall from the heavens to earth or the underworld. Interpretations of a similar term in the Hebrew Bible, translated in the King James Version as “Lucifer“, led to a Christian tradition of applying the name Lucifer and its associated stories of a fall from heaven to Satan. Most modern scholarship regards these interpretations as questionable, and translate the term in the relevant Bible passage as “morning star” or “shining one” rather than as a proper name, “Lucifer“.
As a name for the devil, the more common meaning in English, “Lucifer” is the rendering of the Hebrew word הֵילֵל in Isaiah (Isaiah 14:12) given in the King James Version of the Bible. The translators of this version took the word from the Latin Vulgate, which translated הֵילֵל by the Latin word lucifer (uncapitalized), meaning “the morning star, the planet Venus”, or, as an adjective, “light-bringing”.
As a name for the morning star, “Lucifer” is a proper name and is capitalized in English. In Greco-Roman civilization the morning star was often personified and considered a god and in some versions considered a son of Aurora (the Dawn).
Fall from heaven
Main article: Venus in culture
The motif of a heavenly being striving for the highest seat of heaven only to be cast down to the underworld has its origins in the motions of the planet Venus, known as the morning star.
The Sumerian goddess Inanna (Babylonian Ishtar) is associated with the planet Venus. Inanna’s actions in several of her myths, including Inanna and Shukaletuda and Inanna’s Descent into the Underworld appear to parallel the motion of Venus as it progresses through its synodic cycle. For example, in Inanna’s Descent to the Underworld, Inanna is able to descend into the netherworld, where she is killed, and then resurrected three days later to return to the heavens. The three-day disappearance of Inanna refers to the three-day planetary disappearance of Venus between its appearance as a morning and evening star.
A similar theme is present in the Babylonian myth of Etana. The Jewish Encyclopedia comments:
“The brilliancy of the morning star, which eclipses all other stars, but is not seen during the night, may easily have given rise to a myth such as was told of Ethana and Zu: he was led by his pride to strive for the highest seat among the star-gods on the northern mountain of the gods … but was hurled down by the supreme ruler of the Babylonian Olympus.”
The fall from heaven motif also has a parallel in Canaanite mythology. In ancient Canaanite religion, the morning star is personified as the god Attar, who attempted to occupy the throne of Ba’al and, finding he was unable to do so, descended and ruled the underworld. The original myth may have been about a lesser god Helel trying to dethrone the Canaanite high god El who lived on a mountain to the north. Hermann Gunkel’s reconstruction of the myth told of a mighty warrior called Hêlal, whose ambition was to ascend higher than all the other stellar divinities, but who had to descend to the depths; it thus portrayed as a battle the process by which the bright morning star fails to reach the highest point in the sky before being faded out by the rising sun. However, the Eerdmans Commentary on the Bible argues that no evidence has been found of any Canaanite myth or imagery of a god being forcibly thrown from heaven, as in the Book of Isaiah (see below). It argues that the closest parallels with Isaiah’s description of the king of Babylon as a fallen morning star cast down from heaven are to be found not in Canaanite myths but in traditional ideas of the Jewish people, echoed in the Biblical account of the fall of Adam and Eve, cast out of God’s presence for wishing to be as God, and the picture in Psalm 82 of the “gods” and “sons of the Most High” destined to die and fall. This Jewish tradition has echoes also in Jewish pseudepigrapha such as 2 Enoch and the Life of Adam and Eve. The Life of Adam and Eve, in turn, shaped the idea of Iblis in the Quran.
The Greek myth of Phaethon, a personification of the planet Jupiter, follows a similar pattern.
In classical mythology
In classical mythology, Lucifer (“light-bringer” in Latin) was the name of the planet Venus, though it was often personified as a male figure bearing a torch. The Greek name for this planet was variously Phosphoros (also meaning “light-bringer”) or Heosphoros (meaning “dawn-bringer”). Lucifer was said to be “the fabled son of Aurora and Cephalus, and father of Ceyx”. He was often presented in poetry as heralding the dawn.
The second century Roman mythographer Pseudo-Hyginus said of the planet:
“The fourth star is that of Venus, Luciferus by name. Some say it is Juno’s. In many tales it is recorded that it is called Hesperus, too. It seems to be the largest of all stars. Some have said it represents the son of Aurora and Cephalus, who surpassed many in beauty, so that he even vied with Venus, and, as Eratosthenes says, for this reason it is called the star of Venus. It is visible both at dawn and sunset, and so properly has been called both Luciferus and Hesperus.”
Ovid, in his first century epic Metamorphoses, describes Lucifer as ordering the heavens:
“Aurora, watchful in the reddening dawn, threw wide her crimson doors and rose-filled halls; the Stellae took flight, in marshaled order set by Lucifer who left his station last.”
In the classical Roman period, Lucifer was not typically regarded as a deity and had few, if any, myths, though the planet was associated with various deities and often poetically personified. Cicero pointed out that "You say that Sol the Sun and Luna the Moon are deities, and the Greeks identify the former with Apollo and the latter with Diana. But if Luna (the Moon) is a goddess, then Lucifer (the Morning-Star) also and the rest of the Wandering Stars (Stellae Errantes) will have to be counted gods; and if so, then the Fixed Stars (Stellae Inerrantes) as well."
In the Book of Isaiah, chapter 14, the King of Babylon is condemned in a prophetic vision by the prophet Isaiah and is addressed as הֵילֵל בֶּן-שָׁחַר (Hêlêl ben Šāḥar, Hebrew for "shining one, son of the morning"). The title "Helel ben Shahar" may refer to the planet Venus as the morning star, but the text in Isaiah 14 gives no indication that Helel is the name of a star or planet. The Hebrew word transliterated as Hêlêl or Heylel (pronounced hay-LALE) occurs only once in the Hebrew Bible. The Septuagint renders הֵילֵל in Greek as Ἑωσφόρος (heōsphoros), "bringer of dawn", the Ancient Greek name for the morning star. According to the King James Bible-based Strong's Concordance, the original Hebrew word means "shining one, light-bearer", and the translation given in the King James text is the Latin name for the planet Venus, "Lucifer".
However, the translation of הֵילֵל as “Lucifer” has been abandoned in modern English translations of Isaiah 14:12. Present-day translations render הֵילֵל as “morning star” (New International Version, New Century Version, New American Standard Bible, Good News Translation, Holman Christian Standard Bible, Contemporary English Version, Common English Bible, Complete Jewish Bible), “daystar” (New Jerusalem Bible, The Message), “Day Star” (New Revised Standard Version, English Standard Version), “shining one” (New Life Version, New World Translation, JPS Tanakh), or “shining star” (New Living Translation).
In a modern translation from the original Hebrew, the passage in which the phrase “Lucifer” or “morning star” occurs begins with the statement: “On the day the Lord gives you relief from your suffering and turmoil and from the harsh labour forced on you, you will take up this taunt against the king of Babylon: How the oppressor has come to an end! How his fury has ended!” After describing the death of the king, the taunt continues:
- “How you have fallen from heaven, morning star, son of the dawn! You have been cast down to the earth, you who once laid low the nations! You said in your heart, ‘I will ascend to the heavens; I will raise my throne above the stars of God; I will sit enthroned on the mount of assembly, on the utmost heights of Mount Zaphon. I will ascend above the tops of the clouds; I will make myself like the Most High.’ But you are brought down to the realm of the dead, to the depths of the pit. Those who see you stare at you, they ponder your fate: ‘Is this the man who shook the earth and made kingdoms tremble, the man who made the world a wilderness, who overthrew its cities and would not let his captives go home?'”
J. Carl Laney has pointed out that in the final verses here quoted, the king of Babylon is described not as a god or an angel but as a man; and that man may have been not Nebuchadnezzar II, but rather his son, Belshazzar. Nebuchadnezzar was gripped by a spiritual fervor to build a temple to the moon god Sin (possibly analogous with Hubal, the primary god of pre-Islamic Mecca), and his son ruled as regent. The Abrahamic scriptural texts could be interpreted as a weak usurping of true kingly power, and a taunt at the failed regency of Belshazzar.
For the unnamed "king of Babylon" a wide range of identifications have been proposed. They include a Babylonian ruler of the prophet Isaiah's own time, the later Nebuchadnezzar II (under whom the Babylonian captivity of the Jews began) or Nabonidus, and the Assyrian kings Tiglath-Pileser, Sargon II and Sennacherib. Verse 20 says that this king of Babylon will not be "joined with them [all the kings of the nations] in burial, because thou hast destroyed thy land, thou hast slain thy people; the seed of evil-doers shall not be named for ever", but rather be cast out of the grave, while "All the kings of the nations, all of them, sleep in glory, every one in his own house", pointing to Nebuchadnezzar II as a possible interpretation. Herbert Wolf held that the "king of Babylon" was not a specific ruler but a generic representation of the whole line of rulers.
Isaiah 14:12 became a source for the popular conception of the fallen angel motif seen later in 1 Enoch 86–90 and 2 Enoch 29:3–4. Rabbinical Judaism has rejected any belief in rebel or fallen angels. In the 11th century, the Pirqe de-Rabbi Eliezer illustrates the origin of the “fallen angel myth” by giving two accounts, one relates to the angel in the Garden of Eden who seduces Eve, and the other relates to the angels, the benei elohim who cohabit with the daughters of man (Genesis 6:1–4). An association of Isaiah 14:12–18 with a personification of evil, called the devil developed outside of mainstream Rabbinic Judaism in pseudepigrapha and Christian writings, particularly with the apocalypses.
As Satan or the devil
Main article: Devil in Christianity
Some Christian writers have applied the name “Lucifer” as used in the Book of Isaiah, and the motif of a heavenly being cast down to the earth, to Satan. Sigve K Tonstad argues that the New Testament War in Heaven theme of Revelation 12:7–9, in which the dragon “who is called the devil and Satan … was thrown down to the earth”, was derived from the passage about the Babylonian king in Isaiah 14. Origen (184/185 – 253/254) interpreted such Old Testament passages as being about manifestations of the Devil; but writing in Greek, not Latin, he did not identify the devil with the name “Lucifer”. Tertullian (c. 160 – c. 225), who wrote in Latin, also understood Isaiah 14:14 (“I will ascend above the tops of the clouds; I will make myself like the Most High”) as spoken by the Devil, but “Lucifer” is not among the numerous names and phrases he used to describe the devil. Even at the time of the Latin writer Augustine of Hippo (354–430), “Lucifer” had not yet become a common name for the Devil.
Some time later, the metaphor of the morning star that Isaiah 14:12 applied to a king of Babylon gave rise to the general use of the Latin word for “morning star”, capitalized, as the original name of the devil before his fall from grace, linking Isaiah 14:12 with Luke 10:18 (“I saw Satan fall like lightning from heaven”) and interpreting the passage in Isaiah as an allegory of Satan’s fall from heaven.
As a result, “Lucifer has become a byword for Satan or the Devil in the church and in popular literature”, as in Dante Alighieri’s Inferno, Joost van den Vondel’s Lucifer, and John Milton’s Paradise Lost. However, unlike the English word, the Latin word was not used exclusively in this way and was applied to others also, including Jesus.
Adherents of the King James Only movement and others who hold that Isaiah 14:12 does indeed refer to the devil have decried the modern translations. Jealousy of humans, who were created in the divine image and given authority over the world, is the motive that one modern writer, who denies that there is any such person as Lucifer, says Tertullian attributed to the devil; and while that writer cited Tertullian and Augustine as giving envy as the motive for the fall, an 18th-century French Capuchin preacher himself described the rebel angel as jealous of Adam's exaltation, which he saw as a diminution of his own status.
However, the understanding of the morning star in Isaiah 14:12 as a metaphor referring to a king of Babylon continued also to exist among Christians. Theodoret of Cyrus (c. 393 – c. 457) wrote that Isaiah calls the king “morning star”, not as being the star, but as having had the illusion of being it. The same understanding is shown in Christian translations of the passage, which in English generally use “morning star” rather than treating the word as a proper name, “Lucifer”. So too in other languages, such as French, German, Portuguese, and Spanish. Even the Vulgate text in Latin is printed with lower-case lucifer (morning star), not upper-case Lucifer (proper name).
Calvin said: “The exposition of this passage, which some have given, as if it referred to Satan, has arisen from ignorance: for the context plainly shows these statements must be understood in reference to the king of the Babylonians.” Luther also considered it a gross error to refer this verse to the devil.
In the Bogomil and Cathar text Gospel of the secret supper, Lucifer is a glorified angel and the older brother of Jesus, but fell from heaven to establish his own kingdom and became the Demiurge. Therefore, he created the material world and trapped souls from heaven inside matter. Jesus descended to earth to free the captured souls. In contrast to mainstream Christianity, the cross was denounced as a symbol of Lucifer and his instrument in an attempt to kill Jesus.
Lucifer is regarded within The Church of Jesus Christ of Latter-day Saints as the pre-mortal name of the devil. Mormon theology teaches that in a heavenly council, Lucifer rebelled against the plan of God the Father and was subsequently cast out. The Church’s scripture reads:
“And this we saw also, and bear record, that an angel of God who was in authority in the presence of God, who rebelled against the Only Begotten Son whom the Father loved and who was in the bosom of the Father, was thrust down from the presence of God and the Son, and was called Perdition, for the heavens wept over him—he was Lucifer, a son of the morning. And we beheld, and lo, he is fallen! is fallen, even a son of the morning! And while we were yet in the Spirit, the Lord commanded us that we should write the vision; for we beheld Satan, that old serpent, even the devil, who rebelled against God, and sought to take the kingdom of our God and his Christ—Wherefore, he maketh war with the saints of God, and encompasseth them round about.”
After becoming Satan by his fall, Lucifer “goeth up and down, to and fro in the earth, seeking to destroy the souls of men”. Members of the Church of Jesus Christ of Latter-Day Saints consider Isaiah 14:12 to be referring to both the king of the Babylonians and the devil.
Other instances of lucifer in the Old Testament pseudepigrapha are related to the “star” Venus, in the Sibylline Oracles battle of the constellations (line 517) “Lucifer fought mounted on the back of Leo”, or the entirely rewritten Christian version of the Greek Apocalypse of Ezra 4:32 which has a reference to Lucifer as Antichrist.
Isaiah 14:12 is not the only place where the Vulgate uses the word lucifer. It uses the same word four more times, in contexts where it clearly has no reference to a fallen angel: 2 Peter 1:19 (meaning "morning star"), Job 11:17 ("the light of the morning"), Job 38:32 ("the signs of the zodiac") and Psalms 110:3 ("the dawn"). Lucifer is not the only expression that the Vulgate uses to speak of the morning star: three times it uses stella matutina: Sirach 50:6 (referring to the actual morning star), and Revelation 2:28 (of uncertain reference) and 22:16 (referring to Jesus).
Indications that in Christian tradition the Latin word lucifer, unlike the English word, did not necessarily call a fallen angel to mind exist also outside the text of the Vulgate. Two bishops bore that name: Saint Lucifer of Cagliari, and Lucifer of Siena.
In Latin, the word is applied to John the Baptist and is used as a title of Jesus himself in several early Christian hymns. The morning hymn Lucis largitor splendide of Hilary contains the line: “Tu verus mundi lucifer” (you are the true light bringer of the world). Some interpreted the mention of the morning star (lucifer) in Ambrose’s hymn Aeterne rerum conditor as referring allegorically to Jesus and the mention of the cock, the herald of the day (praeco) in the same hymn as referring to John the Baptist. Likewise, in the medieval hymn Christe qui lux es et dies, some manuscripts have the line “Lucifer lucem proferens”.
The Latin word lucifer is also used of Jesus in the Easter Proclamation prayer to God regarding the paschal candle: Flammas eius lucifer matutinus inveniat: ille, inquam, lucifer, qui nescit occasum. Christus Filius tuus, qui, regressus ab inferis, humano generi serenus illuxit, et vivit et regnat in saecula saeculorum (“May this flame be found still burning by the Morning Star: the one Morning Star who never sets, Christ your Son, who, coming back from death’s domain, has shed his peaceful light on humanity, and lives and reigns for ever and ever”). In the works of Latin grammarians, Lucifer, like Daniel, was discussed as an example of a personal name.
Rudolf Steiner’s writings, which formed the basis for Anthroposophy, characterised Lucifer as a spiritual opposite to Ahriman, with Christ between the two forces, mediating a balanced path for humanity. Lucifer represents an intellectual, imaginative, delusional, otherworldly force which might be associated with visions, subjectivity, psychosis and fantasy. He associated Lucifer with the religious/philosophical cultures of Egypt, Rome and Greece. Steiner believed that Lucifer, as a supersensible Being, had incarnated in China about 3000 years before the birth of Christ.
Luciferianism is a belief system that venerates the essential characteristics that are affixed to Lucifer. The tradition, influenced by Gnosticism, usually reveres Lucifer not as the devil, but as a liberator, a guardian or guiding spirit or even the true god as opposed to Jehovah.
In Anton LaVey’s The Satanic Bible, Lucifer is one of the four crown princes of hell, particularly that of the East, the ‘lord of the air’, and is called the bringer of light, the morning star, intellectualism, and enlightenment. The title ‘lord of the air’ is based upon Ephesians 2:2, which uses the phrase ‘prince of the power of the air’ to refer to the pagan god Zeus, but that phrase later became conflated with Satan.
Author Michael W. Ford has written on Lucifer as a “mask” of the adversary, a motivator and illuminating force of the mind and subconscious.
Léo Taxil (1854–1907) claimed that Freemasonry is associated with worshipping Lucifer. In what is known as the Taxil hoax, he alleged that leading Freemason Albert Pike had addressed “The 23 Supreme Confederated Councils of the world” (an invention of Taxil), instructing them that Lucifer was God, and was in opposition to the evil god Adonai. Supporters of Freemasonry contend that, when Albert Pike and other Masonic scholars spoke about the “Luciferian path,” or the “energies of Lucifer,” they were referring to the Morning Star, the light bearer, the search for light; the very antithesis of dark, satanic evil. Taxil promoted a book by Diana Vaughan (actually written by himself, as he later confessed publicly) that purported to reveal a highly secret ruling body called the Palladium, which controlled the organization and had a satanic agenda. As described by Freemasonry Disclosed in 1897:
With frightening cynicism, the miserable person we shall not name here [Taxil] declared before an assembly especially convened for him that for twelve years he had prepared and carried out to the end the most sacrilegious of hoaxes. We have always been careful to publish special articles concerning Palladism and Diana Vaughan. We are now giving in this issue a complete list of these articles, which can now be considered as not having existed.
Taxil’s work and Pike’s address continue to be quoted by anti-masonic groups.
In Devil-Worship in France, Arthur Edward Waite compared Taxil’s work to today’s tabloid journalism, replete with logical and factual inconsistencies.
In Neopagan Witchcraft
In a collection of folklore and magical practices supposedly collected in Italy by Charles Godfrey Leland and published in his Aradia, or the Gospel of the Witches, the figure of Lucifer is featured prominently as both the brother and consort of the goddess Diana, and father of Aradia, at the center of an alleged Italian witch-cult. In Leland’s mythology, Diana pursued her brother Lucifer across the sky as a cat pursues a mouse. According to Leland, after dividing herself into light and darkness:
- "…Diana saw that the light was so beautiful, the light which was her other half, her brother Lucifer, she yearned for it with exceeding great desire. Wishing to receive the light again into her darkness, to swallow it up in rapture, in delight, she trembled with desire. This desire was the Dawn. But Lucifer, the light, fled from her, and would not yield to her wishes; he was the light which flies into the most distant parts of heaven, the mouse which flies before the cat."
Here, the motions of Diana and Lucifer once again mirror the celestial motions of the moon and Venus, respectively. Though Leland’s Lucifer is based on the classical personification of the planet Venus, he also incorporates elements from Christian tradition, as in the following passage:
- “Diana greatly loved her brother Lucifer, the god of the Sun and of the Moon, the god of Light (Splendor), who was so proud of his beauty, and who for his pride was driven from Paradise.”
In the several modern Wiccan traditions based in part on Leland’s work, the figure of Lucifer is usually either omitted or replaced as Diana’s consort with either the Etruscan god Tagni, or Dianus (Janus, following the work of folklorist James Frazer in The Golden Bough).
Adapted from Wikipedia, the free encyclopedia |
The KS3 Physics curriculum is designed to engage pupils and further the idea that Physics explains the world around us. Each module is themed. The Physics of weather, rollercoasters and magic are all considered.
The curriculum spirals so that most key topics, such as forces and electricity, are visited each year. All work is completed using OneNote which gives pupils a valuable opportunity to put their IT skills to good use as well as enabling interactive games and simulations to be part of every lesson.
We place a huge emphasis on practical work with some great opportunities to make hot air balloons, clouds, Cartesian divers as well as standard experiments to improve basic skills.
At GCSE we follow the AQA specification with the opportunity to follow both the separate and combined courses. Again, practical work is at the forefront, and as many real-world links and possible careers as possible are introduced.
Physics is taught using a “flipped classroom” approach throughout. This means that homework is always to look ahead to the next lesson using the resources provided. This develops valuable independent learning skills as well as allowing the lessons to move more quickly, with extra time available for doing experiments, practising exam skills or extending students beyond the curriculum.
For A-level, we follow the Edexcel course. The "flipped classroom" is very much in evidence, with students expected to learn most of the basic material independently. This frees up the lesson time for more advanced practical work, focusing on the trickier concepts and doing exam questions in class, where the teacher is available to support in case of difficulties.
We have a good take up at A level with many girls going on to study related subjects at University. Recent leavers are studying Space Physics, Artificial Intelligence, Engineering, Particle Physics and Architecture.
There is a KS3 STEM-based Challenge Day every year for pupils to practise their design and building skills as well as the important soft skills such as teamwork, time management, problem solving and leadership. Previous days have involved building a theme park, designing and building a working house complete with electricity and plumbing installed, and building and racing fan-powered cars.
There is a now traditional STEM day on the last day of Upper Six when all the Sixth Form build catapults to fire water balloons at targets and teachers. You can see a film of this and other challenge day activities here. |
Writing in Egypt originated for economic purposes but developed for the service of the elite.
Increased social inequality led to everything being devoted to the service of the elite, including writing.
North‐east corner of Africa
Mediterranean Sea to the north
Desert to the south, east and west.
The civilization of Ancient Egypt existed between 3500 BC and 30 BC.
• Water flows from south to north
• Opens up into a wide, triangular, green delta criss-crossed by shallow waterways
• Settled near the Nile
• Floods revitalized agricultural lands
• Wild and domesticated animals
Great means of transport throughout Egypt
Ships were propelled either by oar or sail
Current runs from south to north, into the Mediterranean Sea
Prevailing wind blows from north to south
Easy travel along Nile
Drift downriver and travel north with the current
Sail upriver and travel south with the wind
Facilitates cultural uniformity and political unity
Compared to Mesopotamia, with towns scattered over a plain
The Nile floods regularly every year
It covers the farmland with water
Farmers plant in the mud as the water recedes
Keep the fields wet with small‐scale systems of ditches and retaining ponds.
The nilometer was a system for measuring the height of the Nile in various parts of the country. This monitoring allowed the Egyptians to compare daily river levels with years past and to predict with some accuracy the coming year's high mark.
The nilometer on Elephantine Island, Aswan, consists of stairs and staff gauges
Varieties of stone and metal
Settlements along the Nile didn't lack basic materials like stone for building and carving, the way Mesopotamian sites did. Wood was somewhat scarce.
During the New Kingdom (1552-1070 BC), monuments were decorated with lists of past kings and a few words about their achievements, going back to the Old Kingdom (2686-2250 BC).
Only some written sources on early Egyptian history
List on papyrus (the Turin papyrus) is fragmentary, but gives durations of reigns in the Old Kingdom
A large fragment of a stele known as the Royal Annals of the Old Kingdom of Ancient Egypt. It contains records of the kings of Egypt from the first dynasty through the fifth dynasty.
Engraved toward the end of the fifth dynasty, in the 25th century BC
Inscribed on both sides with the earliest known Egyptian text
Briefly records the principal achievements of the kings of the first 5 dynasties
Manetho, an Egyptian historian of the 3rd century BC, used documents like these to compile a history of kings and events
Contained many errors due to being written almost 3000 years later
Yet much stands up to excavated evidence.
He must have had access to documents and monuments that are now lost, while we may have some that were buried or unknown to Manetho
These records provide a chronological framework starting very early but don’t say much about life and society until later periods
Unlike Mesopotamia, where early documents are accounting records which initially don't help much with chronology or history but do shed some light on economic activities and occasionally other aspects of life.
Human‐like beings might have been in the Nile Valley around 700,000 years ago, if not earlier
Egypt was covered in treed savanna with many herds of game.
The time between the earlier hunter-gatherers and the appearance of the true farming, village-dwelling cultures after 5500 BC. Most of the information from this era comes from the site of El Kab, located between the eastern bank of the Nile and the Red Sea Hills.
The camps at El Kab were most likely occupied only during spring and summer. The annual floods of the Nile, especially given how massive it was then, would make it next to impossible to live in those locations year round. It is apparent that these tribes were still largely nomadic, hunting and gathering, following seasonally available wild plants and game. Despite this, the camps enjoyed many times of prosperity, living near the cool Nile and benefiting from its supply of fish, supplemented by the traditional hunting of savanna wildlife such as wild cattle and gazelles.
These seasonal camps merged together and grew into large concentrations of dwellings over time. There is evidence in these later Epipaleolithic sites of a population explosion around 5500 BC, possibly due to the development of true agriculture as well as animal domestication.
5500 BC: evidence of organized, permanent settlements focused around agriculture. Hunting was no longer a major support for existence now that the Egyptian diet was made up of domesticated cattle, sheep, pigs and goats, as well as cereal grains such as wheat and barley. Artifacts of stone were supplemented by those of metal, and the crafts of basketry, pottery, weaving, and the tanning of animal hides became part of daily life. This was the transition from primitive nomadic tribes to traditional civilization.
4500 BC, during Naqada I: growing influence of the peoples of the North on those of the South. Soon this would result in a truly mixed people and culture, that of the Late Predynastic, or Naqada III. The dead were buried in simple oval pits.
Black‐topped pottery is the typical ware
Painted pottery appears
Some individuals are buried in larger, more elaborate tombs
Beginning of social inequality and different classes
Cemeteries include extremely wealthy burials, revealing stark social differences.
Cylinder jars are characteristic grave goods.
first writing appears
Dates to Naqada III. Contained the largest and the oldest inscribed artifacts so far found in Egypt. Found 200 small bone and ivory tags and more than 100 ceramic jars.
The number of signs on all is about 50. Limited yet well formed writing
Unlikely to be the first writing → prompted excavators to search for earlier antecedents.
Writing as tags of exports
Over a period of about 1,000 years, the Naqada culture developed from a few small farming communities into a powerful civilization whose leaders were in complete control of the people and resources of the Nile valley.
Emergence of complex societies and interactions among polities (subordinate civil authority) followed the formation of a unitary state. Increased social inequality.
Eventually, some places specialized in making certain kinds of goods that were traded up and down the Nile
This must have been based on social factors, rather than better access to resources
Some places had more specialists or larger workshops
Some places developed reputations for certain goods
Ancient Egypt was known as one of the wealthiest countries in the world.
Food produced by Egyptians was more than enough to feed their own people, and this surplus grain played an important role in Egypt's economy, as did fish, fine linen, papyrus and an extended trade in perfume and fine oils. They developed trading routes to faraway places. There is not much doubt that Egypt had reached Assyria (where Syria and Lebanon are located in the present day). The first recorded mention of Greater Syria is in Egyptian annals detailing expeditions to the Syrian coastland to log the cedar, pine, and cypress of the Ammanus and Lebanon mountain ranges in the fourth millennium. Egyptians imported timber suitable for carpentry on a large scale and for boat construction from Syria and Lebanon. They established trade with Nubia to obtain incense.
The Naqada culture manufactured a diverse array of material goods, reflective of the increasing power and wealth of the elite, which included painted pottery, high quality decorative stone vases, cosmetic palettes, and jewelry made of gold, lapis, and ivory. They also developed a ceramic glaze known as faience, which was used well into the Roman Period to decorate cups, amulets, and figurines.
The transition between Predynastic and Dynastic was the result of new social structures, such as cities and individual dwellings, and of technological evolution. Stoneworking, particularly that involved in the making of blades and points, reached a level almost that of the Old Kingdom industries that would follow. Furniture: many artifacts already resembled what would come. Objects began to be made not only with a function, but also with an aesthetic value. Pottery was painted and decorated, particularly the black-topped clay pots and vases that this era is noted for; bone and ivory combs, figurines, and tableware are found in great numbers, as is jewelry of all types and materials. Political unification of Upper and Lower Egypt came under the first pharaoh, and the civilization developed over the next three millennia.
In 31 BC, Egypt fell to the Roman Empire and became a Roman province.
Roman emperor Augustus depicted as an Egyptian pharaoh
The success of ancient Egyptian civilization came partly from its ability to adapt to the conditions of the Nile River Valley. The predictable flooding and controlled irrigation of the fertile valley produced surplus crops, which fueled social development and culture. With resources to spare, the administration sponsored mineral exploitation of the valley and surrounding desert regions, the early development of an independent writing system, the organization of collective construction and agricultural projects, trade with surrounding regions, and a military intended to defeat foreign enemies and assert Egyptian dominance. These activities were organized by a bureaucracy of elite scribes, religious leaders, and administrators under the control of a pharaoh, who ensured the cooperation and unity of the Egyptian people.
The many achievements of the ancient Egyptians include the quarrying and construction techniques that facilitated the building of monumental pyramids, temples, and obelisks. They had a system of mathematics, a practical and effective system of medicine, irrigation systems and agricultural production techniques, and the first known ships. Writing was used for economic purposes.
Tomb U-j was full of Palestinian exports; tags attached information to deliveries.
Writing to serve the pharaohs.
Writing used for records
Stanford astrophysicist Dan Wilkins and his colleagues were studying a supermassive black hole when something caught their attention— a series of bright flares of X-rays. The emission of such high-energy photons from a black hole was intriguing, but not necessarily unprecedented. Yet it was interesting enough for Wilkins to take a closer look.
When he did, Wilkins noticed additional, smaller, flashes of X-rays that were different "colors" than the bright flares. They also appeared to be delayed. This was strange, Wilkins said, as they expected the smaller flashes to be an "echo" of the first flashes.
They set about measuring the color of these X-rays, and the delay between them and the initial X-ray flash.
"We realized that these must be the echo coming from a bit of gas that should be hidden behind the black holes, so the gas on the other side of the black hole to us, " Wilkins said. It was as if they were seeing something on the "far side of the black hole we shouldn't be able to see — because anything that goes into the black hole can't come out," he added. "If something's on the other side of the black hole from us, the light shouldn't be able to get through the black hole towards us."
But black holes do not eclipse light the way a moon or a planet might. Because of their intense mass, light bends and curves around them, like cars driving on a straight street suddenly swerving around a pothole.
It turns out that what Wilkins and his team observed is the black hole warping space, and bending light around itself. (The research is detailed in a paper published July 28 in Nature). Though predicted by Albert Einstein's theory of general relativity, it has never been confirmed on such an extreme scale — in this case, astronomers detecting light [in the X-ray spectrum] being bent from the opposite side of a black hole.
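As a rough quantitative aside (not taken from the article itself): in the weak-field limit, general relativity predicts that a light ray passing a mass M at impact parameter b is deflected by an angle of approximately

theta ≈ 4GM / (c²b)

twice the naive Newtonian value. Close to a black hole the bending becomes far more extreme, so extreme that light from gas on the far side can be curved around into our line of sight, which is the effect described here.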
"This means that these echoes of X-rays from the far side of the black hole don't have to travel through the black hole for us to see them," Wilkins said. "They can actually get bent around the black hole, which is why we can see them."
X-rays are typically observed when gas falls into black holes. Yet in those cases, the X-ray emissions are not from the black hole itself (from which light cannot escape) but from matter interactions near the event horizon, where particles can be accelerated to relativistic speeds and, in collisions, spew tremendous amounts of high-energy particles in all different directions. Typically, astronomers only observe these directly — they had never observed them as they were bent from the opposite side of a black hole, the researchers say.
"Fifty years ago, when astrophysicists starting speculating about how the magnetic field might behave close to a black hole, they had no idea that one day we might have the techniques to observe this directly and see Einstein's general theory of relativity in action," said Roger Blandford, a co-author of the paper and a Stanford professor of physics, in a news release.
Avi Loeb, the former chair of the astronomy department at Harvard University (2011-2020) and founding director of Harvard's Black Hole Initiative, told Salon via email the paper is "interesting," though he questioned its novelty, noting that a similar effect had already been observed in 2019.
"It finds that short flashes of light from behind the black hole are bent around the black hole and magnified by the strong gravitational field," Loeb said. "Observing light bent around the black hole confirms a key prediction of general relativity."
Loeb added that this was confirmed previously when the Event Horizon Telescope "obtained an image of the ring of light around the silhouette of the giant black hole in the galaxy M87." That image was famous for being the first direct image of a black hole, and was painstakingly produced after years of study and data analysis.
"That ring was also produced through bending of light by gravity near the black hole," Loeb noted.
Whether or not you are a stickler about the precise definition of "behind a black hole," the new study is historic in that there have been few such observations in astronomy history. Indeed, there is much to learn from a direct observation of black holes bending light, as black holes emit some of the most intense gravitational and electromagnetic fields of anything in the universe.
"By studying this, we can begin to understand how the brightest light sources in our whole universe work," Wilkins said. "But it's also an important piece of the puzzle to learn about how the galaxies formed and how the galaxy that we live in, the universe that we live in, really came into being."
But is there any way this incredible observation could have been a fluke? Wilkins doesn't think so.
"When we analyze the data, we try to rule out every other possibility, so we think about any other theories or any explanations that could mimic the same result," Wilkins said. "This bending of light around the black hole is the only thing we know off in the laws of science as we understand that's able to explain this." |
Scholars have explored the moral dimensions of human rights for decades. There’s a consensus that human rights are an essential feature of any just and moral society, as well as being universal, inalienable, and indivisible. Many scholars also make the case that human rights should serve as the foundation for ethical and legal discourses. Our time has aptly been described as an ‘age of rights’ in which rights-based morality is consistently promoted. The language of rights seems to occupy the whole spectrum of moral and legal language. Human rights are used in many circumstances to replace ethics and are taken for granted in a way that somehow constitutes another way of ‘doing ethics’.
On the other hand, philosophers such as Michael Sandel, Charles Taylor, and Amartya Sen contend that such an approach not only undermines but also operates against a range of other morally significant human relationships and attitudes. These include community, solidarity, care, compassion, and benevolence, all of which play an essential role in our lives. They argue that focusing on individual rights can lead to a fragmented society in which people are less likely to feel a sense of obligation or responsibility to others. It has also been argued that the concept of rights is a product of historical circumstances that risk turning morality upside down if it encourages self-righteous claims and a sense of entitlement.
Recently, human rights have evolved into a field of study with an interdisciplinary framework, integrating insights from disciplines such as philosophy, ethics, politics, education, psychology, anthropology, and bioethics. The interplay between these disciplines and human rights, understood as a moral framework, is becoming a thriving field of research. Professional ethical codes already include elements related to human rights and social justice. However, some scholars argue that human rights and social justice are not to be contained as simply ‘elements’ but as a foundation upon which these codes are developed and understood. One such example is the role of human rights in the medical field, where medical professionals are obligated to respect the human rights of their patients, including the right to privacy, informed consent, and dignity.
This relationship is underscored by ethical codes, such as the World Medical Association’s Declaration of Helsinki, which sets standards for medical research involving human subjects. Another example is the relationship between human rights and business ethics. Many companies have adopted policies that outline their commitment to respecting the human rights of their employees, suppliers, and customers and avoiding actions that may violate them. This relationship is governed by international standards, such as the United Nations Guiding Principles on Business and Human Rights, which provide guidance on how to align business operations with human rights. The relationship between human rights and ethics can also be seen in many other areas, such as environmental ethics, criminal justice, and education. Human rights and ethics, particularly from an Islamic perspective, have also been a topic of discussion and study among Islamic studies scholars and related fields for several decades.
Many have explored the concept of human rights in light of the Holy Quran, Hadith, and other Islamic religious texts. They have sought to identify ways in which Islamic ethics and values intersect with the idea of human rights. There has also been significant discussion among Muslim scholars and thinkers about the compatibility of Islamic law and human rights norms and the role that Islamic ethics can play in developing a comprehensive theory of human rights. To help shape and guide contemporary debates, the Research Center for Islamic Legislation and Ethics (CILE), based at the College of Islamic Studies, is convening its 10th international conference on the interplay of Islamic ethics and human rights.
The event will gather renowned experts on Islamic studies, law, politics, anthropology, and bioethics to revisit the moral foundations of human rights and explore novel avenues for exploring their intersection and interplay between various disciplines. The conference is not concerned with worn-out questions about the relationship, compatibility, or reconciliation between conventional international human rights and Islamic law. Attention will instead turn to fresh and profound discussions about the interplay between Islamic ethics and social, economic, and political rights. Case studies will focus on the rights of refugees and survivors of mass atrocities, including their psychological and mental health; as well as bioethics and related concepts, such as human dignity, respect for human vulnerability, and personal integrity.
The interplay of Islamic ethics and human rights will also reflect the intricacy of human rights as both moral and legal concepts, a reality that continues to spark complex discussions among philosophers, legal experts, political scientists, and religious scholars. Such multifaceted interplay between the various moral dimensions of human rights raises questions that cross multiple disciplines, such as the development of a sound Islamic rights-based theory of morality. These issues, and more, will be approached from various angles, such as theological ethics, Islamic legal theory, jurisprudence, philosophical ethics, literature, political ethics, applied ethics, and Islamic bioethics. A multi-disciplinary approach is essential for uncovering common and conflicting principles and practices across various fields, disciplines, and cultures.
Jarida Daily is republishing this article from The Peninsula for its readers. |
As K–12 educators seek ways to help students develop social-emotional skills, they are finding that some of the same tools they use in modern learning environments can also facilitate collaboration, empathy and other soft skills. That’s important because students will need these “people skills” to succeed in their educational and professional careers.
In a recent study, 98 percent of principals said they believe students from all backgrounds would benefit from learning SEL skills in the classroom.
What Is SEL?
SEL, or social-emotional learning, is the process by which students develop the necessary soft skills to collaborate effectively with their peers, according to the Collaborative for Academic, Social and Emotional Learning (CASEL).
“Through social-emotional learning, people develop the ability to understand and manage their emotions, handle the emotions of others, and build and maintain relationships, as well as the ability to apply those competencies appropriately,” David Osher, vice president and Institute Fellow at the American Institutes for Research, said in an interview with EdTech.
Through these competencies — self-awareness, self-management, social awareness, relationship skills and responsible decision-making — students can improve their classroom contributions, which can then lead to better academic outcomes.
Why Are Social Emotional Learning Competencies Important?
Although educators began paying attention to social-emotional competencies decades ago, Osher believes the increased focus is due to a larger accumulation of research and an increase in support from educators who have tried SEL and found it to be successful.
“There is now enough data to conduct meta-analyses and, through this, researchers have consistently found that good social-emotional learning programs tend to impact other aspects of academic performance,” Osher said.
In addition, employers have identified a need in the workforce for young adults who have SEL competencies. A Bloomberg report found that only 35 percent of employers are confident that new hires will have the soft skills they need to succeed on the job.
Classroom Technology Creates Opportunity for SEL
Integrated classroom technologies such as Chromebooks and videoconferencing platforms have made it easier for teachers to create teachable moments around the key SEL competencies. A recent study by Microsoft found that three pieces of technology tend to be especially helpful for SEL:
- Collaboration platforms: Platforms such as G-Suite and Office 365 help students learn to work together and facilitate SEL skills. In the Fresno Unified School District, administrators adopted a digital collaboration platform with the intention of improving social emotional learning, but also saw academic benefits. After adopting the platform, participating middle school students were 25 percent more likely to meet or exceed language and math standards.
- Artificial intelligence: Personalized learning has played a significant part in evolving SEL in the classroom. Teachers can now target the SEL competencies students have not yet mastered by adjusting course material to more closely align with the areas they need to work on. AI-enabled technology allows teachers to improve personalized learning, in part by making student assessment and course correction more efficient so that teachers have more time for one-on-one interactions. When the Tacoma Public School District incorporated AI-powered technology and Azure cloud computing in an initiative to “focus on the whole child,” graduation rates increased from 55 percent to 83 percent.
- Mixed reality: Virtual and augmented reality gives students a chance to practice SEL skills in a low-stakes, virtual environment. “One especially effective teaching method is to provide students with opportunities to observe social-emotional skills and then practice those skills,” report Microsoft researchers. “Such experiences can help raise awareness of bias and improve skills, such as empathy and collaboration, among other benefits.”
In previous iterations of SEL, Osher said, educators often focused on creating SEL-specific curricula. Although such classes were often rich in information, they failed to contextualize the lessons. More effective, he said, is to incorporate SEL into daily classroom activities to show students what these competencies look like in practice.
“Students develop these skills at school, but also at home and in the community within environmental contexts that are supportive of social-emotional learning,” Osher said. “It is the interactions in the classroom in which social and emotional learning develops.” |
We have witnessed the use of invisible ink in our favorite Mission: Impossible and James Bond films – now, scientists have discovered a cutting-edge method to encode secret messages with a simple yet brilliant molecular biology technique. Recently, scientists have been able to encode messages in molecules, such as DNA and protein, and now they have found a way to use bacteria. In June 2011, Manuel Palacios, a chemist working at Tufts University, published a study that explained Steganography by Printed Arrays of Microbes (SPAM), the novel technique that allows one to encode messages in the bacterium Escherichia coli.
Escherichia coli (E. coli) is a Gram-negative bacterium particularly useful in molecular biology. In a technique known as transformation, a scientist can introduce foreign DNA into E. coli cells. This DNA molecule allows the bacterium to synthesize specific proteins. E. coli’s potent genetic machinery has been exploited by industrial microbiology to mass-produce useful peptides such as insulin, human growth hormone, and blood clotting factors, among many others.
With this ideal model organism, Palacios began his experiment by transforming E. coli with DNA that codes for fluorescent proteins. This means that after the E. coli take up the foreign DNA, they begin synthesizing the appropriate proteins. To visualize these proteins, one only needs to shine a light-emitting diode on the E. coli cells, which causes these bacteria to glow. Such diodes are readily accessible and available in everyday items, such as in a simple iPhone app. In his experiment, Palacios used many colors, amongst them cyan, green, yellow, orange, tomato, and cherry. After transforming the bacteria, he transferred them to a membrane, which can then be mailed.
However, how does one read the message? The array of colorful dots is based on a dot-only binary Morse code, in which two dots represent an alphanumeric symbol: A-Z and 0-9. For example, an orange dot followed by a cyan dot represent the letter a, and so on, creating a system that resembles a decoder ring. Even though the process of creating and reading the message may seem straightforward, Palacios designed a system with several layers of defense: the message needs to be “developed” under specific conditions. To this end, E. coli cells are especially useful communication tools because they create messages that are both time-sensitive and environment-sensitive.
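The decoder ring can be pictured as a small lookup table. The sketch below is purely illustrative — the article only gives one example pairing (orange followed by cyan for the letter "a"), so the particular assignments generated here are a hypothetical stand-in — but it shows how six colors read two dots at a time are enough to cover the 36 symbols A–Z and 0–9.

```python
from itertools import product
import string

# Six colors named in the article; 6 x 6 ordered pairs = 36 possible symbols.
COLORS = ["cyan", "green", "yellow", "orange", "tomato", "cherry"]
SYMBOLS = string.ascii_lowercase + string.digits  # a-z plus 0-9

# Hypothetical decoder ring: each ordered pair of colors maps to one symbol.
PAIR_TO_CHAR = dict(zip(product(COLORS, repeat=2), SYMBOLS))
CHAR_TO_PAIR = {c: p for p, c in PAIR_TO_CHAR.items()}

def encode(message: str) -> list:
    """Turn a message into the sequence of colored dots to print on the membrane."""
    dots = []
    for ch in message.lower():
        if ch in CHAR_TO_PAIR:
            dots.extend(CHAR_TO_PAIR[ch])
    return dots

def decode(dots: list) -> str:
    """Read the dots two at a time and look each pair up in the decoder ring."""
    pairs = zip(dots[0::2], dots[1::2])
    return "".join(PAIR_TO_CHAR.get(pair, "?") for pair in pairs)

if __name__ == "__main__":
    dots = encode("spam2011")
    print(dots)           # list of colored dots, two per character
    print(decode(dots))   # -> "spam2011"
```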
When the E. coli are mailed, it takes about 24 hours for them to be able to glow, which is the time it takes for the bacteria to synthesize the fluorescent proteins. Thus, the real message may only appear after a period of 24 hours. After this time period, the message will be incomprehensible because the colonies have started to die from the lack of nutrients. Like any living organism, E. coli requires a food source to produce energy for survival. In other words, this is a self-destructing message, very à la 007.
Finally, a person may only be able to read the message if the bacteria are exposed to a specific antibiotic medium, such as ampicillin. Similar to the time-sensitive limitations, if the message were read in the absence of antibiotics, then it would be undecipherable. However, in the presence of a certain antibiotic, one specific colored-colony (or multiple ones) would die, thus clearing up the hidden message.
Even though this newly developed system seems practical, it is still in its infancy. For now, any cryptographer would be able to crack these E. coli-encoded messages. For example, the specialist could grow several batches of the message and apply different antibiotics at different time points. This becomes a manageable task because there are a limited number of antibiotics and all E. coli are likely to die within 48 hours. Yet, Palacios remains unsatisfied. He wants to expand his idea of watermarking E. coli to more complex organisms, such as yeast. And with more complex model organisms, more sophisticated messages can be encoded and more useful applications can be derived from this method. Ultimately, the ways that this communication system can be improved are as limitless as molecular biology itself. |
Earthquakes can be caused by many factors, so their types are also diverse; they are usually divided into five categories: tectonic earthquakes, volcanic earthquakes, collapse earthquakes, induced earthquakes and artificial earthquakes.
Tectonic earthquakes occur under the action of tectonic movement: when crustal stress reaches or exceeds the ultimate strength of the rock stratum, the stratum suddenly deforms or even ruptures, releasing energy at once and making the ground shake. A tectonic earthquake is therefore an earthquake caused by the dislocation and fracture of deep underground rock strata. More than 90 percent of earthquakes worldwide are tectonic earthquakes; they are the most destructive and affect the widest areas.
Volcanic earthquakes occur after volcanic eruptions: because of massive magma loss, the underground pressure decreases or the subsurface magma cannot be replenished in time, leaving a hollow. Volcanic earthquakes can only occur in volcanically active areas and are rare. Modern volcanic belts such as Italy, Japan, the Philippines and Indonesia are more prone to volcanic earthquakes.
Collapse (falling) earthquakes are local earthquakes caused by the collapse of underground caverns or mine goafs. They result from gravity, are small in scale and occur infrequently. In 1935 a collapse earthquake occurred in Baishou County, Guangxi Province: the collapse area was about 40,000 m², the ground fell into deep pools, sounds could be heard dozens of miles away, and roof tiles vibrated nearby. In March 1972, a large roof collapse in the coal goaf of western Datong, Shanxi, caused an earthquake with a maximum magnitude of 3.4; buildings in the central area were slightly damaged.
Induced earthquakes are caused by human activities such as reservoir impoundment and oilfield water injection. As reservoir storage increases, the stress distribution becomes uneven and local pressure builds up; when the rock stratum can no longer bear the additional pressure, rupture and dislocation occur and an earthquake results. Medium and small earthquakes have occurred at China's Xinfengjiang and Danjiangkou reservoirs; the largest, at the Xinfengjiang reservoir in 1962, reached magnitude 6.1.
Artificial earthquakes are man-made ground vibrations caused by events such as underground nuclear explosions and explosive blasting.
Earthquakes can also be classified in the following ways.
(1) Classified by focal depth:
Shallow-focus earthquake: focal depth less than 70 km;
Intermediate-focus earthquake: focal depth between 70 km and 300 km;
Deep-focus earthquake: focal depth more than 300 km.
Worldwide, 90 percent of earthquakes have a focal depth of less than 100 km; only about 3 percent are deep-focus earthquakes.
(2) Classified by magnitude:
Microseismic earthquake: magnitude between 1 and 3;
Minor earthquake: magnitude between 3 and 4.5;
Moderate earthquake: magnitude between 4.5 and 6;
Strong earthquake: magnitude between 6 and 7;
Major earthquake: magnitude greater than 7;
Huge earthquake: magnitude greater than 8.
Felt earthquake: an earthquake that can be felt near the epicenter;
Destructive earthquake: an earthquake that causes casualties and economic damage;
Severely destructive earthquake: an earthquake that causes serious loss of life and property and leaves the disaster area unable, or partly unable, to recover.
(3) Classified by epicentral distance:
Local earthquake: epicentral distance less than 100 km;
Near earthquake: epicentral distance between 100 km and 1,000 km;
Distant earthquake: epicentral distance greater than 1,000 km.
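To make these thresholds concrete, here is a minimal sketch in Python. The category names and cut-offs come from the lists above; how boundary values are assigned is an assumption, since the source does not say which class a value exactly on a threshold belongs to.

```python
def classify_by_depth(depth_km: float) -> str:
    """Classify an earthquake by focal depth (km)."""
    if depth_km < 70:
        return "shallow-focus"
    if depth_km <= 300:
        return "intermediate-focus"
    return "deep-focus"

def classify_by_magnitude(m: float) -> str:
    """Classify an earthquake by magnitude."""
    if m < 3:
        return "microseismic earthquake"
    if m < 4.5:
        return "minor earthquake"
    if m < 6:
        return "moderate earthquake"
    if m < 7:
        return "strong earthquake"
    if m < 8:
        return "major earthquake"
    return "huge earthquake"

def classify_by_epicentral_distance(distance_km: float) -> str:
    """Classify an earthquake by epicentral distance (km)."""
    if distance_km < 100:
        return "local earthquake"
    if distance_km <= 1000:
        return "near earthquake"
    return "distant earthquake"

if __name__ == "__main__":
    # The 1962 Xinfengjiang reservoir event mentioned above reached magnitude 6.1.
    print(classify_by_magnitude(6.1))              # strong earthquake
    print(classify_by_depth(33))                   # shallow-focus
    print(classify_by_epicentral_distance(250))    # near earthquake
```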
You are probably well aware that the haze in the sky this past week was the result of smoke from wildfires out west, but you might be wondering how it got here. After all, we are about 1,500 to 2,000 miles away from the source. Typically, we’re only worried about what we see at the surface, but a few key components higher up in the atmosphere ultimately determine the weather that we see at the surface. The main one is the jet stream, which has the fastest winds aloft and is found several miles up in the atmosphere. In general, it divides cold, drier air to our north and moist, warmer air to our south. Ultimately, the jet stream steers our storm systems. It blows from west to east across the United States with some north and south bends in the flow. As smoke rises, it can get caught up in this jet stream and transported large distances.
Below is an image from NOAA’s Global Systems Laboratory showing the movement of smoke across the United States according to a high-resolution forecast model last week. I made a couple of edits, one showing the source of the smoke from the wildfires out west and another showing the flow of the smoke across the United States (blue line). This blue line followed the same path of the jet stream at the time, which for comparison purposes can be seen in the upper-level map just below. The jet stream ended up directing the smoke northeast into South Central Canada and then southeast into the Ohio Valley Region.
Module: Microsoft Office Access Level 3 (Advanced)
Lesson 1: Structuring Existing Data
Restructure existing data using the Table Analyzer Wizard.
Create a junction table to minimize redundant data.
Modify the structure of tables to meet a change in target
Lesson 2: Writing Advanced Queries
Create unmatched and duplicate queries using wizards.
Filter records using criteria.
Summarize data using a crosstab query.
Create a PivotTable and PivotChart to effectively summarize query results.
Lesson 3: Simplifying Tasks with Macros
Create a macro that opens a form from an existing form to display
Attach a macro to a command button in a form.
Restrict records by adding a condition.
Create a macro that makes data entry mandatory to ensure data integrity.
Create a macro that inputs data automatically when a predefined
condition is fulfilled.
Lesson 4: Creating Effective Reports
Add a chart to a report to increase its visual impact.
Create a multiple-column report that uses functions and operators
to control the printing of data.
Create a macro that cancels the printing of a blank report.
Publish a report as a PDF, so that it can be viewed by users who do
not have the Access program installed on their computer.
Lesson 5: Maintaining an Access Database
Link tables to external data sources.
Manage an Access database.
Determine interdependency of Access objects.
Document a database using the Database Documenter tool.
Analyze the performance of the database. |
What Does Saturated Air Mean?
Saturated air is air that holds the maximum amount of water vapor it can contain. All air contains some moisture, or water vapor, regardless of the pressure and temperature. When more moisture is added to air at a given temperature in an enclosed space, the air absorbs it.
Adding excess moisture eventually saturates the air, and the surplus is converted into dew.
Corrosionpedia Explains Saturated Air
Once a particular point is reached, the air can no longer hold additional moisture and all excess is converted to fog or dew. Air that contains the maximum amount of moisture it can hold at a given temperature is known as saturated air.
The amount of moisture that air is capable of holding depends on the temperature: the higher the air temperature, the more moisture the air can absorb. Accurate measurement of air saturation is essential in preventing the damaging effects of corrosion. For instance, saturation levels can be used to assess the comfort level in industries that make use of evaporators.
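The temperature dependence can be illustrated numerically. The short sketch below uses the Tetens approximation for saturation vapor pressure — an empirical formula that is not part of the original article — to show that warmer air can hold considerably more water vapor before it saturates.

```python
import math

def saturation_vapor_pressure_hpa(temp_c: float) -> float:
    """Tetens approximation for saturation vapor pressure over water (hPa)."""
    return 6.1078 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def relative_humidity(vapor_pressure_hpa: float, temp_c: float) -> float:
    """Relative humidity (%) = actual vapor pressure / saturation vapor pressure."""
    return 100.0 * vapor_pressure_hpa / saturation_vapor_pressure_hpa(temp_c)

if __name__ == "__main__":
    for t in (0, 10, 20, 30, 40):
        print(f"{t:>2} degC -> saturation vapor pressure ~ "
              f"{saturation_vapor_pressure_hpa(t):5.1f} hPa")
    # The same 12 hPa of vapor nearly saturates cool air but not warm air.
    print(relative_humidity(12, 10))   # ~98 %
    print(relative_humidity(12, 30))   # ~28 %
```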
When air saturation is too low, discomfort can result, and the circulation of drier air can lead to cracking in equipment such as pipes and barrels. In industrial operations that require less cooling, a lower level of air saturation can make cooling or evaporation more effective.
3.2 What is the European Convention on Human Rights?
In the aftermath of the Second World War there were public disclosures of huge numbers of cases of brutal, inhuman and tyrannical treatment of people, frequently within the civilian populations of occupied countries. Many serious concerns arose about the way in which millions of people had been mistreated at the instigation of or with the connivance or concurrence of government. There was almost universal disgust and condemnation at the disclosures made, together with a general recognition that such events must not be allowed to happen again. As a result, a number of countries in Europe came together and created the Council of Europe in 1949. At this time it was felt that the United Nations was not likely to be fully effective at protecting individual human rights because of the growing divide between a democratic west and a communist east in Europe. The subsequent division of Germany and the building of the Berlin Wall contributed to this feeling. The terms of the European Convention on Human Rights in 1950 were negotiated over a period of 14 months. States agreed that everyone within their individual jurisdictions should be afforded what were considered to be their basic rights and freedoms, which should be enshrined within the laws of each state. Many nations, including the UK, were opposed to the establishment of a European court of human rights as they did not want an international court judging their own domestic law. To ensure that negotiations did not collapse a compromise position was reached under which nation states signing up to the European Convention on Human Rights could choose whether or not to allow their individual citizens the right to bring a complaint, and also whether they wished to submit to the jurisdiction of the court.
Despite reservations about a human rights court, the UK became the first nation to ratify the European Convention on Human Rights (ECHR). At this time the UK regarded the development of human rights protections within Europe as an important part of its foreign policy. The ECHR had the potential to play an important role in security and in dealings with the new international organisation, the United Nations. It was felt that the UK itself had adequate protection of human rights through existing common law principles.
Activity 2 gives you the opportunity to consolidate your understanding of the ECHR.
Activity 2: The European Convention on Human Rights and the UK
From what you have read of this course so far, make some brief notes about the European Convention on Human Rights in relation to the UK.
The UK has no single source of constitutional rights. It is said that the UK has an ‘unwritten constitution’.
The UK was a member of the Council of Europe, which was created in 1949.
Like many other nations, the UK was opposed to a European Court of Human Rights as it did not want an international court judging its domestic law.
However, the UK was the first nation to ratify the ECHR.
Although the UK saw the importance of the ECHR in terms of foreign policy and security, it considered at this stage that the existing common law provided adequate protection of human rights in the UK.
You may find it helpful to think about these points when you come to consider the English Courts and human rights in Part C of this course.
In this Comment, the notes are listed as a number of key points. Of course, you may have chosen to use an alternative form of notes, for example in the form of a diagram. Use the style of note taking that you find most useful. |
Thickness of thin stellar disk: ≈2 kly (0.6 kpc)
Oldest known star: 13.21 billion years
Type: Sb, Sbc, or SB(rs)bc (barred spiral galaxy)
Diameter: 100–180 kly (31–55 kpc)
Number of stars: 100–400 billion ((2.5 ± 1.5) × 10¹¹)
The Milky Way is the galaxy that contains our Solar System. The descriptive "milky" is derived from the appearance from Earth of the galaxy – a band of light seen in the night sky formed from stars that cannot be individually distinguished by the naked eye. The term "Milky Way" is a translation of the Latin via lactea, from the Greek γαλαξίας κύκλος (galaxías kýklos, "milky circle"). From Earth, the Milky Way appears as a band because its disk-shaped structure is viewed from within. Galileo Galilei first resolved the band of light into individual stars with his telescope in 1610. Until the early 1920s, most astronomers thought that the Milky Way contained all the stars in the Universe. Following the 1920 Great Debate between the astronomers Harlow Shapley and Heber Curtis, observations by Edwin Hubble showed that the Milky Way is just one of many galaxies.
- Size and mass
- Galactic quadrants
- Galactic Center
- Spiral arms
- Gaseous halo
- Sun's location and neighborhood
- Galactic rotation
- Age and cosmological history
- Etymology and mythology
- Astronomical history
The Milky Way is a barred spiral galaxy with a diameter between 100,000 light-years and 180,000 light-years. The Milky Way is estimated to contain 100–400 billion stars. There are probably at least 100 billion planets in the Milky Way. The Solar System is located within the disk, about 26,000 light-years from the Galactic Center, on the inner edge of one of the spiral-shaped concentrations of gas and dust called the Orion Arm. The stars in the inner ≈10,000 light-years form a bulge and one or more bars that radiate from the bulge. The very center is marked by an intense radio source, named Sagittarius A*, which is likely to be a supermassive black hole.
Stars and gases at a wide range of distances from the Galactic Center orbit at approximately 220 kilometers per second. The constant rotation speed contradicts the laws of Keplerian dynamics and suggests that much of the mass of the Milky Way does not emit or absorb electromagnetic radiation. This mass has been termed "dark matter". The rotational period is about 240 million years at the position of the Sun. The Milky Way as a whole is moving at a velocity of approximately 600 km per second with respect to extragalactic frames of reference. The oldest stars in the Milky Way are nearly as old as the Universe itself and thus probably formed shortly after the Dark Ages of the Big Bang.
The "Milky Way" can be seen as a hazy band of white light some 30 degrees wide arcing across the sky. Although all the individual naked-eye stars in the entire sky are part of the Milky Way, the light in this band originates from the accumulation of unresolved stars and other material located in the direction of the galactic plane. Dark regions within the band, such as the Great Rift and the Coalsack, are areas where light from distant stars is blocked by interstellar dust. The area of the sky obscured by the Milky Way is called the Zone of Avoidance.
The Milky Way has a relatively low surface brightness. Its visibility can be greatly reduced by background light such as light pollution or stray light from the Moon. The sky needs to be darker than about 20.2 magnitude per square arcsecond in order for the Milky Way to be seen. It should be visible when the limiting magnitude is approximately +5.1 or better and shows a great deal of detail at +6.1. This makes the Milky Way difficult to see from any brightly lit urban or suburban location, but very prominent when viewed from a rural area when the Moon is below the horizon. The new world atlas of artificial night sky brightness shows that more than one-third of Earth's population cannot see the Milky Way from their homes due to light pollution.
As viewed from Earth, the visible region of the Milky Way's Galactic plane occupies an area of the sky that includes 30 constellations. The center of the Galaxy lies in the direction of the constellation Sagittarius; it is here that the Milky Way is brightest. From Sagittarius, the hazy band of white light appears to pass around to the Galactic anticenter in Auriga. The band then continues the rest of the way around the sky, back to Sagittarius. The band divides the night sky into two roughly equal hemispheres.
The Galactic plane is inclined by about 60 degrees to the ecliptic (the plane of Earth's orbit). Relative to the celestial equator, it passes as far north as the constellation of Cassiopeia and as far south as the constellation of Crux, indicating the high inclination of Earth’s equatorial plane and the plane of the ecliptic, relative to the Galactic plane. The north Galactic pole is situated at right ascension 12h 49m, declination +27.4° (B1950) near β Comae Berenices, and the south Galactic pole is near α Sculptoris. Because of this high inclination, depending on the time of night and year, the arc of Milky Way may appear relatively low or relatively high in the sky. For observers from approximately 65 degrees north to 65 degrees south on Earth's surface, the Milky Way passes directly overhead twice a day.
Size and mass
The Milky Way is the second-largest galaxy in the Local Group, with its stellar disk approximately 100,000 ly (30 kpc) in diameter, and, on average, approximately 1,000 ly (0.3 kpc) thick. As a guide to the relative physical scale of the Milky Way, if the Solar System out to Neptune were the size of a US quarter (24.3 mm (0.955 in)), the Milky Way would be approximately the size of the continental United States. A ring-like filament of stars wrapping around the Milky Way may belong to the Milky Way itself, rippling above and below the relatively flat galactic plane. If so, that would mean a diameter of 150,000–180,000 light-years (46–55 kpc).
Estimates of the mass of the Milky Way vary, depending upon the method and data used. At the low end of the estimate range, the mass of the Milky Way is 5.8×10¹¹ solar masses (M☉), somewhat less than that of the Andromeda Galaxy. Measurements using the Very Long Baseline Array in 2009 found velocities as large as 254 km/s (570,000 mph) for stars at the outer edge of the Milky Way. Because the orbital velocity depends on the total mass inside the orbital radius, this suggests that the Milky Way is more massive, roughly equaling the mass of Andromeda Galaxy at 7×10¹¹ M☉ within 160,000 ly (49 kpc) of its center. In 2010, a measurement of the radial velocity of halo stars found that the mass enclosed within 80 kiloparsecs is 7×10¹¹ M☉. According to a study published in 2014, the mass of the entire Milky Way is estimated to be 8.5×10¹¹ M☉, which is about half the mass of the Andromeda Galaxy.
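The link between orbital velocity and enclosed mass can be checked with a back-of-the-envelope calculation: for a roughly circular orbit, M(<r) ≈ v²r/G. A minimal sketch, using only the figures quoted above, reproduces a value close to the 7×10¹¹ M☉ estimate.

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
KPC = 3.086e19         # m

def enclosed_mass_solar(v_km_s: float, r_kpc: float) -> float:
    """Mass enclosed within radius r for a circular orbit: M = v^2 * r / G."""
    v = v_km_s * 1e3
    r = r_kpc * KPC
    return v**2 * r / G / M_SUN

if __name__ == "__main__":
    # 254 km/s measured at ~49 kpc (160,000 ly), as quoted above.
    print(f"{enclosed_mass_solar(254, 49):.2e} solar masses")   # ~7.3e11
```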
Much of the mass of the Milky Way appears to be dark matter, an unknown and invisible form of matter that interacts gravitationally with ordinary matter. A dark matter halo is spread out relatively uniformly to a distance beyond one hundred kiloparsecs from the Galactic Center. Mathematical models of the Milky Way suggest that the mass of dark matter is 1–1.5×10¹² M☉. Recent studies indicate a range in mass, as large as 4.5×10¹² M☉ and as small as 0.8×10¹² M☉.
The total mass of all the stars in the Milky Way is estimated to be between 4.6×10¹⁰ M☉ and 6.43×10¹⁰ M☉. In addition to the stars, there is also interstellar gas, comprising 90% hydrogen and 10% helium by mass, with two thirds of the hydrogen found in the atomic form and the remaining one-third as molecular hydrogen. The mass of this gas is equal to between 10% and 15% of the total mass of the galaxy's stars. Interstellar dust accounts for an additional 1% of the total mass of the gas.
The Milky Way contains between 200 and 400 billion stars and at least 100 billion planets. The exact figure depends on the number of very-low-mass stars, which are hard to detect, especially at distances of more than 300 ly (90 pc) from the Sun. As a comparison, the neighboring Andromeda Galaxy contains an estimated one trillion (10¹²) stars. Filling the space between the stars is a disk of gas and dust called the interstellar medium. This disk has at least a comparable extent in radius to the stars, whereas the thickness of the gas layer ranges from hundreds of light years for the colder gas to thousands of light years for warmer gas.
The disk of stars in the Milky Way does not have a sharp edge beyond which there are no stars. Rather, the concentration of stars decreases with distance from the center of the Milky Way. For reasons that are not understood, beyond a radius of roughly 40,000 ly (13 kpc) from the center, the number of stars per cubic parsec drops much faster with radius. Surrounding the galactic disk is a spherical Galactic Halo of stars and globular clusters that extends further outward but is limited in size by the orbits of two Milky Way satellites, the Large and Small Magellanic Clouds, whose closest approach to the Galactic Center is about 180,000 ly (55 kpc). At this distance or beyond, the orbits of most halo objects would be disrupted by the Magellanic Clouds. Hence, such objects would probably be ejected from the vicinity of the Milky Way. The integrated absolute visual magnitude of the Milky Way is estimated to be around −20.9.
Both gravitational microlensing and planetary transit observations indicate that there may be at least as many planets bound to stars as there are stars in the Milky Way, and microlensing measurements indicate that there are more rogue planets not bound to host stars than there are stars. The Milky Way contains at least one planet per star, resulting in 100–400 billion planets, according to a January 2013 study of the five-planet star system Kepler-32 with the Kepler space observatory. A different January 2013 analysis of Kepler data estimated that at least 17 billion Earth-sized exoplanets reside in the Milky Way. On November 4, 2013, astronomers reported, based on Kepler space mission data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of Sun-like stars and red dwarfs within the Milky Way. 11 billion of these estimated planets may be orbiting Sun-like stars. The nearest such planet may be 4.2 light-years away, according to a 2016 study. Such Earth-sized planets may be more numerous than gas giants. Besides exoplanets, "exocomets", comets beyond the Solar System, have also been detected and may be common in the Milky Way.
The Milky Way consists of a bar-shaped core region surrounded by a disk of gas, dust and stars. The mass distribution within the Milky Way closely resembles the type Sbc in the Hubble classification, which represents spiral galaxies with relatively loosely wound arms. Astronomers began to suspect that the Milky Way is a barred spiral galaxy, rather than an ordinary spiral galaxy, in the 1990s. Their suspicions were confirmed by the Spitzer Space Telescope observations in 2005 that showed the Milky Way's central bar to be larger than previously thought.
A galactic quadrant, or quadrant of the Milky Way, refers to one of four circular sectors in the division of the Milky Way. In actual astronomical practice, the delineation of the galactic quadrants is based upon the galactic coordinate system, which places the Sun as the origin of the mapping system.
Quadrants are described using ordinals—for example, "1st galactic quadrant", "second galactic quadrant", or "third quadrant of the Milky Way". Viewing from the north galactic pole with 0 degrees (°) as the ray that runs starting from the Sun and through the Galactic Center, the quadrants are as follows: the 1st galactic quadrant covers galactic longitudes 0° to 90°, the 2nd covers 90° to 180°, the 3rd covers 180° to 270°, and the 4th covers 270° to 360°.
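Because the quadrants are defined simply by galactic longitude measured from the Sun toward the Galactic Center, mapping a longitude to its quadrant is a one-line calculation; the minimal sketch below assumes the convention that a boundary longitude belongs to the higher-numbered quadrant.

```python
def galactic_quadrant(longitude_deg: float) -> int:
    """Return the galactic quadrant (1-4) for a galactic longitude in degrees."""
    l = longitude_deg % 360.0          # wrap into [0, 360)
    return int(l // 90) + 1            # 0-90 -> 1, 90-180 -> 2, and so on

if __name__ == "__main__":
    print(galactic_quadrant(10))    # 1 (toward the Galactic Center in Sagittarius)
    print(galactic_quadrant(135))   # 2
    print(galactic_quadrant(200))   # 3 (the anticenter in Auriga lies at 180 deg)
    print(galactic_quadrant(300))   # 4
```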
The Sun is 25,000–28,000 ly (7.7–8.6 kpc) from the Galactic Center. This value is estimated using geometric-based methods or by measuring selected astronomical objects that serve as standard candles, with different techniques yielding various values within this approximate range. In the inner few kpc (around 10,000 light-years radius) is a dense concentration of mostly old stars in a roughly spheroidal shape called the bulge. It has been proposed that the Milky Way lacks a bulge formed due to a collision and merger between previous galaxies and that it instead has a pseudobulge formed by its central bar.
The Galactic Center is marked by an intense radio source named Sagittarius A* (pronounced Sagittarius A-star). The motion of material around the center indicates that Sagittarius A* harbors a massive, compact object. This concentration of mass is best explained as a supermassive black hole (SMBH) with an estimated mass of 4.1–4.5 million times the mass of the Sun. The rate of accretion of the SMBH is consistent with an inactive galactic nucleus, being estimated at around 1×10⁻⁵ M☉ per year. Observations indicate that there are SMBHs located near the center of most normal galaxies.
The nature of the Milky Way's bar is actively debated, with estimates for its half-length and orientation spanning from 1 to 5 kpc (3,000–16,000 ly) and 10–50 degrees relative to the line of sight from Earth to the Galactic Center. Certain authors advocate that the Milky Way features two distinct bars, one nestled within the other. However, RR Lyr variables do not trace a prominent Galactic bar. The bar may be surrounded by a ring called the "5-kpc ring" that contains a large fraction of the molecular hydrogen present in the Milky Way, as well as most of the Milky Way's star-formation activity. Viewed from the Andromeda Galaxy, it would be the brightest feature of the Milky Way. X-ray emission from the core is aligned with the massive stars surrounding the central bar and the Galactic ridge.
In 2010, two gigantic spherical bubbles of high energy emission were detected to the north and the south of the Milky Way core, using data from the Fermi Gamma-ray Space Telescope. The diameter of each of the bubbles is about 25,000 light-years (7.7 kpc); they stretch up to Grus and to Virgo on the night-sky of the southern hemisphere. Subsequently, observations with the Parkes Telescope at radio frequencies identified polarized emission that is associated with the Fermi bubbles. These observations are best interpreted as a magnetized outflow driven by star formation in the central 640 ly (200 pc) of the Milky Way.
Later, on January 5, 2015, NASA reported observing an X-ray flare 400 times brighter than usual, a record-breaker, from Sagittarius A*. The unusual event may have been caused by the breaking apart of an asteroid falling into the black hole or by the entanglement of magnetic field lines within gas flowing into Sagittarius A*, according to astronomers.
Outside the gravitational influence of the Galactic bars, the structure of the interstellar medium and stars in the disk of the Milky Way is organized into four spiral arms. Spiral arms typically contain a higher density of interstellar gas and dust than the Galactic average as well as a greater concentration of star formation, as traced by H II regions and molecular clouds.
The Milky Way's spiral structure is uncertain, and there is currently no consensus on the nature of the Milky Way's spiral arms. Perfect logarithmic spiral patterns only crudely describe features near the Sun, because galaxies commonly have arms that branch, merge, twist unexpectedly, and feature a degree of irregularity. The possible scenario of the Sun within a spur / Local arm emphasizes that point and indicates that such features are probably not unique, and exist elsewhere in the Milky Way. Estimates of the pitch angle of the arms range from about 7° to 25°. There are thought to be four spiral arms that all start near the Milky Way's center. These are named as follows, with the positions of the arms shown in the image at right:
Two spiral arms, the Scutum–Centaurus arm and the Carina–Sagittarius arm, have tangent points inside the Sun's orbit about the center of the Milky Way. If these arms contain an overdensity of stars compared to the average density of stars in the Galactic disk, it would be detectable by counting the stars near the tangent point. Two surveys of near-infrared light, which is sensitive primarily to red giants and not affected by dust extinction, detected the predicted overabundance in the Scutum–Centaurus arm but not in the Carina–Sagittarius arm: the Scutum-Centaurus Arm contains approximately 30% more red giants than would be expected in the absence of a spiral arm. This observation suggests that the Milky Way possesses only two major stellar arms: the Perseus arm and the Scutum–Centaurus arm. The rest of the arms contain excess gas but not excess old stars. In December 2013, astronomers found that the distribution of young stars and star-forming regions matches the four-arm spiral description of the Milky Way. Thus, the Milky Way appears to have two spiral arms as traced by old stars and four spiral arms as traced by gas and young stars. The explanation for this apparent discrepancy is unclear.
The Near 3 kpc Arm (also called Expanding 3 kpc Arm or simply 3 kpc Arm) was discovered in the 1950s by astronomer van Woerden and collaborators through 21-centimeter radio measurements of HI (atomic hydrogen). It was found to be expanding away from the central bulge at more than 50 km/s. It is located in the fourth galactic quadrant at a distance of about 5.2 kpc from the Sun and 3.3 kpc from the Galactic Center. The Far 3 kpc Arm was discovered in 2008 by astronomer Tom Dame (Harvard-Smithsonian CfA). It is located in the first galactic quadrant at a distance of 3 kpc (about 10,000 ly) from the Galactic Center.
A simulation published in 2011 suggested that the Milky Way may have obtained its spiral arm structure as a result of repeated collisions with the Sagittarius Dwarf Elliptical Galaxy.
It has been suggested that the Milky Way contains two different spiral patterns: an inner one, formed by the Sagittarius arm, that rotates fast and an outer one, formed by the Carina and Perseus arms, whose rotation velocity is slower and whose arms are tightly wound. In this scenario, suggested by numerical simulations of the dynamics of the different spiral arms, the outer pattern would form an outer pseudoring, and the two patterns would be connected by the Cygnus arm.
Outside of the major spiral arms is the Monoceros Ring (or Outer Ring), a ring of gas and stars torn from other galaxies billions of years ago. However, several members of the scientific community recently restated their position affirming the Monoceros structure is nothing more than an over-density produced by the flared and warped thick disk of the Milky Way.
The Galactic disk is surrounded by a spheroidal halo of old stars and globular clusters, of which 90% lie within 100,000 light-years (30 kpc) of the Galactic Center. However, a few globular clusters have been found farther, such as PAL 4 and AM1 at more than 200,000 light-years from the Galactic Center. About 40% of the Milky Way's clusters are on retrograde orbits, which means they move in the opposite direction from the Milky Way rotation. The globular clusters can follow rosette orbits about the Milky Way, in contrast to the elliptical orbit of a planet around a star.
Although the disk contains dust that obscures the view in some wavelengths, the halo component does not. Active star formation takes place in the disk (especially in the spiral arms, which represent areas of high density), but does not take place in the halo, as there is little gas cool enough to collapse into stars. Open clusters are also located primarily in the disk.
Discoveries in the early 21st century have added dimension to the knowledge of the Milky Way's structure. With the discovery that the disk of the Andromeda Galaxy (M31) extends much further than previously thought, the possibility of the disk of the Milky Way extending further is apparent, and this is supported by evidence from the discovery of the Outer Arm extension of the Cygnus Arm and of a similar extension of the Scutum-Centaurus Arm. With the discovery of the Sagittarius Dwarf Elliptical Galaxy came the discovery of a ribbon of galactic debris as the polar orbit of the dwarf and its interaction with the Milky Way tears it apart. Similarly, with the discovery of the Canis Major Dwarf Galaxy, it was found that a ring of galactic debris from its interaction with the Milky Way encircles the Galactic disk.
The Sloan Digital Sky Survey of the northern sky shows a huge and diffuse structure (spread out across an area around 5,000 times the size of a full moon) within the Milky Way that does not seem to fit within current models. The collection of stars rises close to perpendicular to the plane of the spiral arms of the Milky Way. The proposed likely interpretation is that a dwarf galaxy is merging with the Milky Way. This galaxy is tentatively named the Virgo Stellar Stream and is found in the direction of Virgo about 30,000 light-years (9 kpc) away.
In addition to the stellar halo, the Chandra X-ray Observatory, XMM-Newton, and Suzaku have provided evidence that there is a gaseous halo with a large amount of hot gas. The halo extends for hundreds of thousands of light-years, much further than the stellar halo and close to the distance of the Large and Small Magellanic Clouds. The mass of this hot halo is nearly equivalent to the mass of the Milky Way itself. The temperature of this halo gas is between 1 and 2.5 million K (1.8 and 4.5 million °F).
Observations of distant galaxies indicate that the Universe had about one-sixth as much baryonic (ordinary) matter as dark matter when it was just a few billion years old. However, only about half of those baryons are accounted for in the modern Universe based on observations of nearby galaxies like the Milky Way. If the finding that the mass of the halo is comparable to the mass of the Milky Way is confirmed, it could be the identity of the missing baryons around the Milky Way.
Sun’s location and neighborhood
The Sun is near the inner rim of the Orion Arm, within the Local Fluff of the Local Bubble, and in the Gould Belt, at a distance of 26.4 ± 1.0 kly (8.09 ± 0.31 kpc) from the Galactic Center. The Sun is currently 5–30 parsecs (16–98 ly) from the central plane of the Galactic disk. The distance between the local arm and the next arm out, the Perseus Arm, is about 2,000 parsecs (6,500 ly). The Sun, and thus the Solar System, is located in the Milky Way's galactic habitable zone.
There are about 208 stars brighter than absolute magnitude 8.5 within a sphere with a radius of 15 parsecs (49 ly) from the Sun, giving a density of one star per 69 cubic parsecs, or one star per 2,360 cubic light-years (from List of nearest bright stars). On the other hand, there are 64 known stars (of any magnitude, not counting 4 brown dwarfs) within 5 parsecs (16 ly) of the Sun, giving a density of about one star per 8.2 cubic parsecs, or one per 284 cubic light-years (from List of nearest stars). This illustrates the fact that there are far more faint stars than bright stars: in the entire sky, there are about 500 stars brighter than apparent magnitude 4 but 15.5 million stars brighter than apparent magnitude 14.
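The density figures in this paragraph follow directly from the volume of a sphere; a quick sketch reproduces them (rounding explains the small difference from the quoted 69 pc³ figure).

```python
import math

PC_TO_LY = 3.2616  # light-years per parsec

def density(stars: int, radius_pc: float):
    """Return (cubic parsecs per star, cubic light-years per star) within a sphere."""
    volume_pc3 = 4.0 / 3.0 * math.pi * radius_pc**3
    pc3_per_star = volume_pc3 / stars
    ly3_per_star = pc3_per_star * PC_TO_LY**3
    return pc3_per_star, ly3_per_star

if __name__ == "__main__":
    print(density(208, 15))   # ~ (68 pc^3, ~2,360 ly^3) per bright star
    print(density(64, 5))     # ~ (8.2 pc^3, ~284 ly^3) per star
```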
The apex of the Sun's way, or the solar apex, is the direction that the Sun travels through space in the Milky Way. The general direction of the Sun's Galactic motion is towards the star Vega near the constellation of Hercules, at an angle of roughly 60 sky degrees to the direction of the Galactic Center. The Sun's orbit about the Milky Way is expected to be roughly elliptical with the addition of perturbations due to the Galactic spiral arms and non-uniform mass distributions. In addition, the Sun passes through the Galactic plane approximately 2.7 times per orbit. This is very similar to how a simple harmonic oscillator works with no drag force (damping) term. These oscillations were until recently thought to coincide with mass lifeform extinction periods on Earth. However, a reanalysis of the effects of the Sun's transit through the spiral structure based on CO data has failed to find a correlation.
It takes the Solar System about 240 million years to complete one orbit of the Milky Way (a galactic year), so the Sun is thought to have completed 18–20 orbits during its lifetime and 1/1250 of a revolution since the origin of humans. The orbital speed of the Solar System about the center of the Milky Way is approximately 220 km/s (490,000 mph) or 0.073% of the speed of light. The Sun moves through the heliosphere at 84,000 km/h (52,000 mph). At this speed, it takes around 1,400 years for the Solar System to travel a distance of 1 light-year, or 8 days to travel 1 AU (astronomical unit). The Solar System is headed in the direction of the zodiacal constellation Scorpius, which follows the ecliptic.
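The travel-time figures quoted above follow from simple unit conversions at an orbital speed of roughly 220 km/s; a quick check:

```python
LY_KM = 9.4607e12     # kilometres in one light-year
AU_KM = 1.496e8       # kilometres in one astronomical unit
YEAR_S = 3.156e7      # seconds in one year
DAY_S = 86400.0

def travel_time_s(distance_km: float, speed_km_s: float = 220.0) -> float:
    """Seconds needed to cover a distance at the given speed."""
    return distance_km / speed_km_s

if __name__ == "__main__":
    print(travel_time_s(LY_KM) / YEAR_S)   # ~1,360 years per light-year (quoted ~1,400)
    print(travel_time_s(AU_KM) / DAY_S)    # ~7.9 days per AU (quoted ~8)
```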
The stars and gas in the Milky Way rotate about its center differentially, meaning that the rotation period varies with location. As is typical for spiral galaxies, the orbital speed of most stars in the Milky Way does not depend strongly on their distance from the center. Away from the central bulge or outer rim, the typical stellar orbital speed is between 210 and 240 km/s (470,000 and 540,000 mph). Hence the orbital period of the typical star is directly proportional only to the length of the path traveled. This is unlike the situation within the Solar System, where two-body gravitational dynamics dominate, and different orbits have significantly different velocities associated with them. The rotation curve (shown in the figure) describes this rotation. Toward the center of the Milky Way the orbit speeds are too low, whereas beyond 7 kpc the speeds are too high to match what would be expected from the universal law of gravitation.
If the Milky Way contained only the mass observed in stars, gas, and other baryonic (ordinary) matter, the rotation speed would decrease with distance from the center. However, the observed curve is relatively flat, indicating that there is additional mass that cannot be detected directly with electromagnetic radiation. This inconsistency is attributed to dark matter. The rotation curve of the Milky Way agrees with the universal rotation curve of spiral galaxies, the best evidence for the existence of dark matter in galaxies. Alternatively, a minority of astronomers propose that a modification of the law of gravity may explain the observed rotation curve.
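The contrast with a Keplerian fall-off can be illustrated numerically. The sketch below assumes, purely for illustration, that about 1×10¹¹ M☉ of ordinary matter lies inside the Sun's orbit; if that were all the mass, the circular speed would drop as 1/√r at larger radii instead of staying near 220 km/s as observed.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
KPC = 3.086e19       # m

# Illustrative assumption: ~1e11 solar masses enclosed within the Sun's orbit (~8 kpc).
M_ENCLOSED = 1e11 * M_SUN

def keplerian_speed_km_s(r_kpc: float, mass_kg: float = M_ENCLOSED) -> float:
    """Circular speed if all mass sat inside radius r: v = sqrt(G * M / r)."""
    return math.sqrt(G * mass_kg / (r_kpc * KPC)) / 1e3

if __name__ == "__main__":
    for r in (8, 16, 32):
        print(f"r = {r:2d} kpc: Keplerian ~ {keplerian_speed_km_s(r):4.0f} km/s, "
              f"observed ~ 220 km/s")
    # The Keplerian prediction falls to ~164 and ~116 km/s; the flat observed
    # curve implies additional unseen (dark) mass at large radii.
```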
The Milky Way began as one or several small overdensities in the mass distribution in the Universe shortly after the Big Bang. Some of these overdensities were the seeds of globular clusters in which the oldest remaining stars in what is now the Milky Way formed. These stars and clusters now comprise the stellar halo of the Milky Way. Within a few billion years of the birth of the first stars, the mass of the Milky Way was large enough so that it was spinning relatively quickly. Due to conservation of angular momentum, this led the gaseous interstellar medium to collapse from a roughly spheroidal shape to a disk. Therefore, later generations of stars formed in this spiral disk. Most younger stars, including the Sun, are observed to be in the disk.
Since the first stars began to form, the Milky Way has grown through both galaxy mergers (particularly early in the Milky Way's growth) and accretion of gas directly from the Galactic halo. The Milky Way is currently accreting material from two of its nearest satellite galaxies, the Large and Small Magellanic Clouds, through the Magellanic Stream. Direct accretion of gas is observed in high-velocity clouds like the Smith Cloud. However, properties of the Milky Way such as stellar mass, angular momentum, and metallicity in its outermost regions suggest it has undergone no mergers with large galaxies in the last 10 billion years. This lack of recent major mergers is unusual among similar spiral galaxies; its neighbour the Andromeda Galaxy appears to have a more typical history shaped by more recent mergers with relatively large galaxies.
According to recent studies, the Milky Way as well as the Andromeda Galaxy lie in what in the galaxy color–magnitude diagram is known as the "green valley", a region populated by galaxies in transition from the "blue cloud" (galaxies actively forming new stars) to the "red sequence" (galaxies that lack star formation). Star-formation activity in green valley galaxies is slowing as they run out of star-forming gas in the interstellar medium. In simulated galaxies with similar properties, star formation will typically have been extinguished within about five billion years from now, even accounting for the expected, short-term increase in the rate of star formation due to the collision between both the Milky Way and the Andromeda Galaxy. In fact, measurements of other galaxies similar to the Milky Way suggest it is among the reddest and brightest spiral galaxies that are still forming new stars and it is just slightly bluer than the bluest red sequence galaxies.
Age and cosmological history
Globular clusters are among the oldest objects in the Milky Way, which thus set a lower limit on the age of the Milky Way. The ages of individual stars in the Milky Way can be estimated by measuring the abundance of long-lived radioactive elements such as thorium-232 and uranium-238, then comparing the results to estimates of their original abundance, a technique called nucleocosmochronology. These yield values of about 12.5 ± 3 billion years for CS 31082-001 and 13.8 ± 4 billion years for BD +17° 3248. A second technique relies on white dwarf cooling: once a white dwarf is formed, it begins to undergo radiative cooling and the surface temperature steadily drops. By measuring the temperatures of the coolest of these white dwarfs and comparing them to their expected initial temperature, an age estimate can be made. With this technique, the age of the globular cluster M4 was estimated as 12.7 ± 0.7 billion years. Age estimates of the oldest of these clusters give a best-fit estimate of 12.6 billion years, and a 95% confidence upper limit of 16 billion years.
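The nucleocosmochronology estimates mentioned above rest on simple radioactive-decay arithmetic: if an isotope's half-life is known and its present abundance can be compared with its inferred original abundance, the elapsed time follows from t = (t_half / ln 2) · ln(N₀/N). The Python sketch below illustrates the idea with thorium-232 (half-life about 14.05 billion years) and a made-up abundance ratio; the ratio is purely illustrative, not a measured value.

```python
import math

# Decay-law age estimate: t = (t_half / ln 2) * ln(N0 / N)
TH232_HALF_LIFE_GYR = 14.05      # half-life of thorium-232 in billions of years

def decay_age_gyr(initial_abundance: float, current_abundance: float,
                  half_life_gyr: float = TH232_HALF_LIFE_GYR) -> float:
    """Age implied by the drop from initial to current abundance."""
    return (half_life_gyr / math.log(2)) * math.log(initial_abundance / current_abundance)

# Hypothetical example: a star retains ~55% of its inferred initial thorium.
print(f"Implied age: {decay_age_gyr(1.00, 0.55):.1f} billion years")  # ~12.1 Gyr
```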
Several individual stars have been found in the Milky Way's halo with measured ages very close to the 13.80-billion-year age of the Universe. In 2007, a star in the galactic halo, HE 1523-0901, was estimated to be about 13.2 billion years old. As the oldest known object in the Milky Way at that time, this measurement placed a lower limit on the age of the Milky Way. This estimate was made using the UV-Visual Echelle Spectrograph of the Very Large Telescope to measure the relative strengths of spectral lines caused by the presence of thorium and other elements created by the R-process. The line strengths yield abundances of different elemental isotopes, from which an estimate of the age of the star can be derived using nucleocosmochronology. Another star, HD 140283, is 14.5 ± 0.7 billion years old and thus formed at least 13.8 billion years ago.
The age of stars in the galactic thin disk has also been estimated using nucleocosmochronology. Measurements of thin disk stars yield an estimate that the thin disk formed 8.8 ± 1.7 billion years ago. These measurements suggest there was a hiatus of almost 5 billion years between the formation of the galactic halo and the thin disk. Recent analysis of the chemical signatures of thousands of stars suggests that stellar formation might have dropped by an order of magnitude at the time of disk formation, 10 to 8 billion years ago, when interstellar gas was too hot to form new stars at the same rate as before.
The satellite galaxies surrounding the Milky Way are not randomly distributed but seem to be the result of a break-up of some larger system, producing a ring structure 500,000 light-years in diameter and 50,000 light-years wide. Close encounters between galaxies, like the one expected in about 4 billion years with the Andromeda Galaxy, rip off huge tails of gas which, over time, can coalesce to form dwarf galaxies in a ring at right angles to the main disc.
The Milky Way and the Andromeda Galaxy are a binary system of giant spiral galaxies belonging to a group of 50 closely bound galaxies known as the Local Group, surrounded by a Local Void, itself being part of the Virgo Supercluster. Surrounding the Virgo Supercluster are a number of voids, devoid of many galaxies: the Microscopium Void to the "north", the Sculptor Void to the "left", the Boötes Void to the "right" and the Canis Major Void to the "south". These voids change shape over time, creating filamentous structures of galaxies. The Virgo Supercluster, for instance, is being drawn towards the Great Attractor, which in turn forms part of a greater structure called Laniakea.
Two smaller galaxies and a number of dwarf galaxies in the Local Group orbit the Milky Way. The largest of these is the Large Magellanic Cloud with a diameter of 14,000 light-years. It has a close companion, the Small Magellanic Cloud. The Magellanic Stream is a stream of neutral hydrogen gas extending from these two small galaxies across 100° of the sky. The stream is thought to have been dragged from the Magellanic Clouds in tidal interactions with the Milky Way. Some of the dwarf galaxies orbiting the Milky Way are Canis Major Dwarf (the closest), Sagittarius Dwarf Elliptical Galaxy, Ursa Minor Dwarf, Sculptor Dwarf, Sextans Dwarf, Fornax Dwarf, and Leo I Dwarf. The smallest dwarf galaxies of the Milky Way are only 500 light-years in diameter. These include Carina Dwarf, Draco Dwarf, and Leo II Dwarf. There may still be undetected dwarf galaxies that are dynamically bound to the Milky Way, which is supported by the detection of nine new satellites of the Milky Way in a relatively small patch of the night sky in 2015. There are also some dwarf galaxies that have already been absorbed by the Milky Way, such as Omega Centauri.
In 2014 researchers reported that most satellite galaxies of the Milky Way actually lie in a very large disk and orbit in the same direction. This came as a surprise: according to standard cosmology, the satellite galaxies should form in dark matter halos, and they should be widely distributed and moving in random directions. This discrepancy is still not fully explained.
In January 2006, researchers reported that the heretofore unexplained warp in the disk of the Milky Way has now been mapped and found to be a ripple or vibration set up by the Large and Small Magellanic Clouds as they orbit the Milky Way, causing vibrations when they pass through its edges. Previously, these two galaxies, at around 2% of the mass of the Milky Way, were considered too small to influence the Milky Way. However, in a computer model, the movement of these two galaxies creates a dark matter wake that amplifies their influence on the larger Milky Way.
Current measurements suggest the Andromeda Galaxy is approaching us at 100 to 140 km/s (220,000 to 310,000 mph). In 3 to 4 billion years, there may be an Andromeda–Milky Way collision, depending on the importance of unknown lateral components to the galaxies' relative motion. If they collide, the chance of individual stars colliding with each other is extremely low, but instead the two galaxies will merge to form a single elliptical galaxy or perhaps a large disk galaxy over the course of about a billion years.
Although special relativity states that there is no "preferred" inertial frame of reference in space with which to compare the Milky Way, the Milky Way does have a velocity with respect to cosmological frames of reference.
One such frame of reference is the Hubble flow, the apparent motions of galaxy clusters due to the expansion of space. Individual galaxies, including the Milky Way, have peculiar velocities relative to the average flow. Thus, to compare the Milky Way to the Hubble flow, one must consider a volume large enough so that the expansion of the Universe dominates over local, random motions. A large enough volume means that the mean motion of galaxies within this volume is equal to the Hubble flow. Astronomers believe the Milky Way is moving at approximately 630 km/s (1,400,000 mph) with respect to this local co-moving frame of reference. The Milky Way is moving in the general direction of the Great Attractor and other galaxy clusters, including the Shapley supercluster, behind it. The Local Group (a cluster of gravitationally bound galaxies containing, among others, the Milky Way and the Andromeda Galaxy) is part of a supercluster called the Local Supercluster, centered near the Virgo Cluster: although they are moving away from each other at 967 km/s (2,160,000 mph) as part of the Hubble flow, this velocity is less than would be expected given the 16.8 million pc distance due to the gravitational attraction between the Local Group and the Virgo Cluster.
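The point about the Virgo Cluster can be illustrated with the Hubble relation v = H₀·d: at the quoted distance of 16.8 million parsecs, unimpeded expansion would predict a larger recession velocity than the 967 km/s observed, with the shortfall attributed to mutual gravitational attraction. The sketch below assumes a Hubble constant of about 70 km/s per Mpc, a commonly used round value that is not taken from this article.

```python
# Compare the observed Local Group - Virgo recession speed with the pure Hubble flow.
H0 = 70.0                 # assumed Hubble constant, km/s per Mpc (illustrative round value)
DISTANCE_MPC = 16.8       # distance quoted in the text, millions of parsecs
OBSERVED_KM_S = 967.0     # observed recession velocity from the text

expected = H0 * DISTANCE_MPC
print(f"Pure Hubble-flow prediction: ~{expected:.0f} km/s")          # ~1,176 km/s
print(f"Observed: {OBSERVED_KM_S:.0f} km/s "
      f"(~{expected - OBSERVED_KM_S:.0f} km/s slower, consistent with mutual attraction)")
```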
Another reference frame is provided by the cosmic microwave background (CMB). The Milky Way is moving at 552 ± 6 km/s (1,235,000 ± 13,000 mph) with respect to the photons of the CMB, toward right ascension 10.5h, declination −24° (J2000 epoch, near the center of Hydra). This motion is observed by satellites such as the Cosmic Background Explorer (COBE) and the Wilkinson Microwave Anisotropy Probe (WMAP) as a dipole contribution to the CMB, as photons in equilibrium in the CMB frame get blue-shifted in the direction of the motion and red-shifted in the opposite direction.
Etymology and mythology
In Babylonia, the Milky Way was said to be the tail of Tiamat, set in the sky by Marduk after he had slain the salt-water goddess. It is believed that this account, from the Enūma Eliš, had Marduk replace an earlier Sumerian story in which Enlil of Nippur had slain the goddess.
In western culture the name "Milky Way" is derived from its appearance as a dim un-resolved "milky" glowing band arching across the night sky. The term is a translation of the Classical Latin via lactea, in turn derived from the Hellenistic Greek γαλαξίας, short for γαλαξίας κύκλος (galaxías kýklos, "milky circle"). The Ancient Greek γαλαξίας (galaxias) – from root γαλακτ-, γάλα ("milk") + -ίας (forming adjectives) – is also the root of "galaxy", the name for our, and later all such, collections of stars. In Greek mythology it was supposedly made from the forceful suckling of Heracles, when Hera acted as a wetnurse for the hero.
The Milky Way, or "milk circle", was just one of 11 "circles" the Greeks identified in the sky, others being the zodiac, the meridian, the horizon, the equator, the tropics of Cancer and Capricorn, Arctic and Antarctic circles, and two colure circles passing through both poles.
In Meteorologica (DK 59 A80), Aristotle (384–322 BC) wrote that the Greek philosophers Anaxagoras (c. 500–428 BC) and Democritus (460–370 BC) proposed that the Milky Way might consist of distant stars. However, Aristotle himself believed the Milky Way to be caused by "the ignition of the fiery exhalation of some stars which were large, numerous and close together" and that the "ignition takes place in the upper part of the atmosphere, in the region of the world which is continuous with the heavenly motions." The Neoplatonist philosopher Olympiodorus the Younger (c. 495–570 A.D.) criticized this view, arguing that if the Milky Way were sublunary, it should appear different at different times and places on Earth, and that it should have parallax, which it does not. In his view, the Milky Way is celestial. This idea would be influential later in the Islamic world.
The Persian astronomer Abū Rayhān al-Bīrūnī (973–1048) proposed that the Milky Way is "a collection of countless fragments of the nature of nebulous stars". The Andalusian astronomer Avempace (d. 1138) proposed that the Milky Way is made up of many stars but appears to be a continuous image due to the effect of refraction in Earth's atmosphere, citing his observation of a conjunction of Jupiter and Mars in 1106 or 1107 as evidence. Ibn Qayyim Al-Jawziyya (1292–1350) proposed that the Milky Way is "a myriad of tiny stars packed together in the sphere of the fixed stars" and that these stars are larger than planets.
According to Jamil Ragep, the Persian astronomer Naṣīr al-Dīn al-Ṭūsī (1201–1274) in his Tadhkira writes: "The Milky Way, i.e. the Galaxy, is made up of a very large number of small, tightly clustered stars, which, on account of their concentration and smallness, seem to be cloudy patches. Because of this, it was likened to milk in color."
Actual proof of the Milky Way consisting of many stars came in 1610 when Galileo Galilei used a telescope to study the Milky Way and discovered that it is composed of a huge number of faint stars. In a treatise in 1755, Immanuel Kant, drawing on earlier work by Thomas Wright, speculated (correctly) that the Milky Way might be a rotating body of a huge number of stars, held together by gravitational forces akin to the Solar System but on much larger scales. The resulting disk of stars would be seen as a band on the sky from our perspective inside the disk. Kant also conjectured that some of the nebulae visible in the night sky might be separate "galaxies" themselves, similar to our own. Kant referred to both the Milky Way and the "extragalactic nebulae" as "island universes", a term still current up to the 1930s.
The first attempt to describe the shape of the Milky Way and the position of the Sun within it was carried out by William Herschel in 1785 by carefully counting the number of stars in different regions of the visible sky. He produced a diagram of the shape of the Milky Way with the Solar System close to the center.
In 1845, Lord Rosse constructed a new telescope and was able to distinguish between elliptical and spiral-shaped nebulae. He also managed to make out individual point sources in some of these nebulae, lending credence to Kant's earlier conjecture.
In 1917, Heber Curtis had observed the nova S Andromedae within the Great Andromeda Nebula (Messier object 31). Searching the photographic record, he found 11 more novae. Curtis noticed that these novae were, on average, 10 magnitudes fainter than those that occurred within the Milky Way. As a result, he was able to come up with a distance estimate of 150,000 parsecs. He became a proponent of the "island universes" hypothesis, which held that the spiral nebulae were actually independent galaxies. In 1920 the Great Debate took place between Harlow Shapley and Heber Curtis, concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the Universe. To support his claim that the Great Andromeda Nebula is an external galaxy, Curtis noted the appearance of dark lanes resembling the dust clouds in the Milky Way, as well as the significant Doppler shift.
The controversy was conclusively settled by Edwin Hubble in the early 1920s using the Mount Wilson observatory 2.5 m (100 in) Hooker telescope. With the light-gathering power of this new telescope, he was able to produce astronomical photographs that resolved the outer parts of some spiral nebulae as collections of individual stars. He was also able to identify some Cepheid variables that he could use as a benchmark to estimate the distance to the nebulae. He found that the Andromeda Nebula is 275,000 parsecs from the Sun, far too distant to be part of the Milky Way. |
Have you ever considered the power of small words? In the English language, even 2-letter words can have a significant impact on communication. In this article, we will explore the world of 2-letter words with ‘C’ and delve into their importance, benefits, and practical applications. Whether you’re a Scrabble enthusiast, a crossword puzzle solver, or simply want to enhance your vocabulary, understanding these short words can prove invaluable. So, let’s embark on this linguistic journey and unlock the potential of 2-letter words with ‘C’.
Importance of 2-Letter Words
Words are the building blocks of language, and 2-letter words hold a special place in the hierarchy. They may be short, but they play a crucial role in constructing sentences, conveying meaning, and creating a cohesive narrative. While longer words tend to steal the spotlight, it is the humble 2-letter words that provide the glue, connecting ideas and enabling fluid communication. By understanding and utilizing these concise words effectively, you can elevate your writing and speech to a whole new level.
Common 2-Letter Words with ‘C’
Among the numerous 2-letter words in English, those featuring the letter ‘C’ are particularly intriguing. Let’s take a look at some of the common 2-letter words with ‘C’:
- On: This versatile word signifies being in contact with or supported by something.
- In: It represents inclusion or being inside a specific location or situation.
- To: Often used as part of an infinitive verb form, indicating movement or direction.
- It: Refers to an object or animal that has been previously mentioned or is easily understood from the context.
- Do: An action word expressing the performance of an activity or task.
- No: Indicates the absence or negation of something.
- Up: Denotes an upward movement or position.
- By: Signifies the means or method used to achieve a particular outcome.
- If: Used to introduce a conditional clause or express uncertainty.
- Or: Represents an alternative choice between two or more options.
These are just a few examples of the diverse 2-letter words with ‘C’ that can enhance your linguistic repertoire.
Benefits of Knowing 2-Letter Words with ‘C’
Understanding and incorporating 2-letter words with ‘C’ into your writing and speech can yield several benefits. Firstly, these short words allow you to convey meaning concisely, making your sentences more precise and impactful. They can also enhance the flow and rhythm of your writing, creating a natural cadence that captivates readers. Additionally, using 2-letter words with ‘C’ strategically can help you emphasize key points, add variety to your language, and demonstrate a nuanced command of the English language.
Strategies for Memorizing 2-Letter Words with ‘C’
Memorizing a list of 2-letter words with ‘C’ may seem daunting at first, but with the right strategies, it can become an enjoyable and achievable task. Consider the following tips to enhance your recall and understanding:
- Chunking: Divide the words into smaller groups based on patterns or similarities. For example, group words that start with ‘C’ together and those that end with ‘C’ separately.
- Mnemonics: Create mnemonic devices or memorable associations for each word to aid retention. You could use vivid mental images or create catchy phrases that incorporate the words.
- Flashcards: Write each word on a flashcard along with its meaning and use them for regular review sessions. Repetition and active recall will reinforce your memory.
- Contextual Learning: Practice using the words in sentences or short paragraphs. By placing them in context, you will develop a deeper understanding of their usage and meaning.
- Word Games: Engage in word games like Scrabble or crossword puzzles to apply your knowledge in a fun and interactive way. These games provide a practical outlet for practicing and refining your skills.
By employing these strategies consistently, you can quickly master the 2-letter words with ‘C’ and integrate them effortlessly into your vocabulary.
Examples of 2-Letter Words with ‘C’ in Context
To grasp the practical applications of 2-letter words with ‘C’, let’s explore some examples of their usage in context:
- On: Sarah placed the book on the table before joining the conversation.
- In: The children sat quietly in the classroom, eagerly awaiting the teacher’s instructions.
- To: Jenny handed the gift to her friend, beaming with excitement.
- It: The cat chased the ball, pounced on it, and triumphantly carried it to its owner.
- Do: Don’t just talk about your dreams, take action and do something to make them a reality.
- No: Jacob received no response to his email, leaving him feeling uncertain.
- Up: The hot air balloon gracefully floated up into the sky, offering a breathtaking view.
- By: He accomplished his goals by staying focused and working diligently.
- If: If you study consistently, you will see improvements in your grades.
- Or: You can choose to visit the museum or explore the nearby botanical gardens.
These examples illustrate how 2-letter words with ‘C’ seamlessly integrate into everyday language, adding depth and precision to communication.
Tips for Using 2-Letter Words with ‘C’ in Writing
While 2-letter words with ‘C’ can enhance your writing, it’s essential to use them judiciously and appropriately. Consider the following tips to make the most of these short yet impactful words:
- Variety: Avoid repetitive use of the same 2-letter words with ‘C’. Instead, strive for variety by incorporating different words to maintain reader engagement.
- Clarity: Ensure that the meaning of your sentences remains clear when using 2-letter words. If there’s a risk of ambiguity, rephrase or provide additional context to eliminate confusion.
- Smooth Transitions: Use 2-letter words with ‘C’ to create smooth transitions between ideas or paragraphs. They can act as connectors, facilitating the flow of your writing.
- Emphasis: Employ 2-letter words with ‘C’ strategically to emphasize key points or add emphasis to specific aspects of your writing. They can serve as linguistic tools to highlight important information.
- Context: Always consider the context in which you are using 2-letter words. Ensure they align with the tone, style, and formality level of your writing.
By applying these tips, you can harness the power of 2-letter words with ‘C’ and elevate your writing to new heights.
In the realm of language, every word counts, no matter how short. 2-letter words with ‘C’ may seem inconspicuous, but they possess immense potential to enhance your communication skills. From providing coherence to adding emphasis, these small words play a vital role in constructing meaning. By familiarizing yourself with common 2-letter words with ‘C’ and incorporating them skillfully into your writing and speech, you can captivate your audience, enrich your vocabulary, and express yourself with precision. |
The Renaissance was the rebirth of Europe after it was terrorized by the plague known as the Black Death. With this rebirth came a desire to redesign Europe into something better. People began studying the 'Classics' of ancient Rome and Greece, and science, math, and the arts were studied and funded in the hope of rebuilding Europe's culture and population. Many great artists emerged from the Renaissance, and among them was Leonardo da Vinci, the most significant Renaissance artist. In addition to the arts, da Vinci studied anatomy, botany, geology, zoology, hydraulics, aeronautics, physics, and architecture. Many of these skills were used in da Vinci's artwork, which is part of what makes his works so eye-catching and intriguing. "Besides painting… Leonardo made scientific studies, dissections, observations, and research" (DBQ Document). His scientific discoveries are still used today for research and to gather an understanding of science. Also, da Vinci was the first to conceive and sketch his ideas for the tank, helicopter, submarine, and crossbow, and versions of many of his inventions are now in use. Leonardo da Vinci was a noteworthy Renaissance artist.
One of his paintings, the "Mona Lisa", is "arguably the most famous painting in the world" (Websource #1). In fact, today the "Mona Lisa" hangs behind bulletproof glass in the Louvre Museum in Paris, France, and is believed by some to be a national treasure. In addition, da Vinci's painting "The Last Supper" is still studied by art historians intrigued by the distinct attitude of the painting. Da Vinci's inventions, paintings, and studies are still marveled at today, which is only one reason why Leonardo da Vinci was the most notable Renaissance artist.
When translated to english the French word renaissance means rebirth. This is a perfect description of the event. The Renaissance was a time of rebirth for people between the 1300s and the 1600s, in Florence, Italy. A time of education and self discovery. The Renaissance served as a transitional time between the Middle Ages and the Modern Age.
Michelangelo was one of the most influential and significant people who lived during the Renaissance period. The Renaissance, meaning 'rebirth', was a significant time in European history that spanned the 14th to the 16th century. It was a time that led to development and change in literature, art, architecture and philosophy. Michelangelo was very fortunate to live in this period as it brought him great success, especially in art and architecture. Not only did he learn from this new way of thinking, but as he progressed in his career, he also had, and still has, a major influence on many other artists worldwide.
The Renaissance was a time of reformation that started after the plague in the 14th and 15th centuries. During this time of rebirth, there was renewed interest in the famous Greek and Roman art. During this cultural time, there were numerous important people who played a big role in the Renaissance. Some examples are, William Shakespeare, Christopher Columbus, Johannes Gutenberg, Henry the VIII, and many more people. But the first person to remember is Leonardo Da Vinci and everything he did in the Renaissance.
During Thanksgiving dinner, there are many delicious dishes sprawled across the table. However, there is one in particular that everyone is waiting for; the turkey. This turkey is the focal point and centerpiece of Thanksgiving, similar to how art was the main focus of the Renaissance. Not only did this time period revolve around the arts, but specifically Leonardo da Vinci, and other reputable painters and sculptors from this era. The Thanksgiving turkey is cooked and carved to perfection, which is comparable to the artistic abilities and expectations of Renaissance artists.
Leonardo da Vinci was the most influential Renaissance artist because he used scientific observations in art by studying human anatomy, observing nature, and using realism in his pieces. By bringing science into the art world, da Vinci made progress in observations and inventions that would become relevant to the modern day. Da Vinci was known as a "Renaissance man" (a man and artist with many curiosities). Not only did he study art, but he wanted to learn more about technology, nature and anatomy. His interest in anatomy led Da Vinci to perform dissections on cadavers (corpses) to learn more about the human body.
He thought of two rotors that would create enough lift on a platform to lift one person up. Not only was Leonardo da Vinci an engineer but also a painter, and quite an extraordinary one as well. His most famous painting, by far, would be the Mona Lisa, which is located in the Louvre in Paris, France, to this day.
The Renaissance by definition is the cultural rebirth that occurred in Europe from roughly the fourteenth through the middle of the seventeenth centuries, based on the rediscovery of the literature of Greece and Rome, but I believe that Renaissance was so much more than that definition. The Renaissance teaches us the power of looking to the past for insights and inspiration, the importance of continual innovation, and how we thrive by connecting the past present and future. The Renaissance included huge leaps in astronomy, literature, art, architecture, and mathematics. This era in time changed the way people learned and thought about life and really changed the course of intelligence in human history. One man named Niccolo Machiavelli had a big part in the Renaissance era.
Leonardo da Vinci Do you know what an artist is? An artist is a person who practices any of the various creative arts, such as a sculptor, novelist, poet, film-maker, architect, inventor, or painter. Well, there is one artist, his name is Leonardo da Vinci; he was called the Renaissance Man. Leonardo da Vinci was referred to as the Renaissance Man because he was an architect, inventor, painter, and scholar of all things scientific. In the following paragraphs, you will read about how Leonardo became who he was, what important things he accomplished, and why what he did was important.
Leonardo di ser Piero da Vinci (Italian: [leoˈnardo di ˌsɛr ˈpjɛːro da (v)ˈvintʃi]; 15 April 1452 – 2 May 1519), more commonly Leonardo da Vinci or simply Leonardo, was an Italian Renaissance polymath whose areas of interest included invention, painting, sculpting, architecture, science, music, mathematics, engineering, literature, anatomy, geology, astronomy, botany, writing, history, and cartography. He has been variously called the father of palaeontology, ichnology, and architecture, and is widely considered one of the greatest painters of all time. Sometimes credited with the inventions of the parachute, helicopter and tank, he epitomised the Renaissance humanist ideal.
The Importance of Leonardo da Vinci's Anatomical Studies The Italian Renaissance was the birthplace of many important advances in the arts, politics, and sciences, and was fueled by Humanism, the belief that humanity has unlocked potential and knowledge. During this time, many men and women came up with world-changing ideas and actions. Niccolo Machiavelli, for example, wrote The Prince, a political guide for Renaissance patrons on how to use their wealth, social status, and patronage to acquire an even higher social status and gain their family popularity. Ippolita Maria Sforza, the Duchess of Calabria, was a humanist scholar whose more enlightened thinking aided her husband, Alfonso the Duke of Calabria, in his courts.
Leonardo Da Vinci The question we are all asking: who is the most significant Renaissance artist? The most significant artist is Leonardo Da Vinci. He is the greatest because he showed Realism and Humanism in most of his art. Leonardo Da Vinci was the most significant because he showed Realism and Humanism in his work.
He made many achievements that changed history. Leonardo da Vinci was one of the greatest masters to have ever lived in the Renaissance. He had talented gifts that were improved by a good education. It didn't take him long to become a master, at the age of 14. Da Vinci's gifts and talents developed and grew stronger over the years.
Donatello did amazing chapel paintings and sculptures. They are just a few of many who made an impact during the Renaissance. They were amazing painters and sculptors, in my opinion. Leonardo da Vinci was born on April 15, 1452, near the mountain village of Vinci in central Italy. It was said that, when he was younger, he drew a dragon so realistic that it frightened his father.
Most people know Leonardo Da Vinci as a painter, but he was also a sculptor, architect, engineer, musician, inventor and scientist. Da Vinci was a hard worker and had a creative soul that is reflected in all his work. From the Renaissance to the present day, Leonardo Da Vinci's work is still admired and constantly influences people all around the world. Leonardo Da Vinci was one of the most gifted, well-rounded artists of the Renaissance. This can be proven through research and collected data.
The Renaissance was a time of art and rebirth. Many great artists appeared during this time, bringing their own individual skills and talent. These artists were Michelangelo, Leonardo, Donatello, and Raphael. However, out of the four, Leonardo was the most significant. Not only was he a great artist, but an inventor, engineer, and scientist.
What Is The Mechanism Of Microwave Heating?
Microwave ovens are a commonplace item in our kitchens, utilized by individuals around the globe to warm food, reheat leftovers, and even cook entire meals. These gadgets are known for their quick cooking times and convenience, yet despite their widespread use, a great many people are still unfamiliar with the mechanism of microwave heating.
Microwave ovens are one of the most commonly used kitchen appliances today. They are convenient, efficient, and widely used for heating and cooking food. But have you ever wondered what the mechanism of microwave heating actually is? How does a microwave oven heat food so quickly, and is it safe to use?
In this article, we explore the science behind the microwave oven, the process of microwave heating, and answer the question of whether they are safe to use for heating food or beverages.
What is the Mechanism that Makes Microwaves Heat Things Up So Quickly?
Microwave ovens use electromagnetic radiation in the microwave frequency range to heat up food. Specifically, they emit high-frequency electromagnetic waves that cause water molecules in the food to rotate and jostle rapidly. The microwaves themselves are not hot; their energy is absorbed by polar molecules in the food, chiefly water, and converted into heat. This is referred to as dielectric heating, which occurs when polar molecules such as water or fat are subjected to an alternating electric field.
Domestic microwave ovens typically operate at about 2.45 GHz, which corresponds to a wavelength of around 12 cm; this lies within the broader microwave band of roughly 300 MHz to 300 GHz. Contrary to a common belief, 2.45 GHz is not a natural resonant frequency of water molecules; it is an internationally designated ISM (industrial, scientific and medical) frequency at which water absorbs microwave energy efficiently through dielectric loss while the waves can still penetrate a useful distance into food. When microwaves interact with water molecules, they induce them to rotate back and forth, and the resulting molecular friction generates heat within the outer layers of the food, from which it spreads inward.
To understand this concept, imagine a water molecule as a tiny electric dipole that tries to rotate to follow an oscillating electric field. As the electromagnetic waves of the microwave pass through the food, the rapidly alternating field causes the water molecules to rotate back and forth and collide with neighbouring molecules, generating heat. This rapid, volumetric heating is what causes your food to heat up so quickly.
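For readers who want the quantitative version, the volumetric power deposited by dielectric heating is commonly written P = 2π·f·ε₀·ε''·E²rms, where ε'' is the material's dielectric loss factor and Erms the local electric-field strength. The Python sketch below plugs in illustrative round numbers (2.45 GHz, a loss factor of about 10 for liquid water, and a field of a few kV/m); these inputs are assumptions for demonstration, not values from this article.

```python
import math

EPS0 = 8.854e-12          # permittivity of free space, F/m

def dielectric_power_density(freq_hz: float, loss_factor: float, e_field_rms: float) -> float:
    """Volumetric heating power in W/m^3: P = 2*pi*f*eps0*eps'' * E_rms^2."""
    return 2 * math.pi * freq_hz * EPS0 * loss_factor * e_field_rms**2

# Illustrative inputs: 2.45 GHz, eps'' ~ 10 for liquid water, E_rms ~ 2 kV/m.
p = dielectric_power_density(2.45e9, 10.0, 2e3)
print(f"~{p/1e6:.1f} MW per cubic metre of water-rich food")  # order-of-magnitude estimate
```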
What Happens to Food When It Is Heated by Microwaves?
Microwave heating has several effects on food that are different from other cooking methods such as baking or frying. One key characteristic of microwave heating is that it is volumetric: energy is deposited throughout the outer few centimetres of the food rather than only at the surface, as happens in a conventional oven, and conduction then carries the heat further inward. This is why microwaved food often warms through faster and more evenly than food heated from the outside alone.
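A useful way to quantify how far the energy actually reaches is the penetration depth, the distance over which the deposited power falls to 1/e of its surface value; a common low-loss approximation is Dp ≈ λ₀·√ε' / (2π·ε''). The sketch below assumes commonly quoted room-temperature values for liquid water at 2.45 GHz (ε' ≈ 78, ε'' ≈ 10); these figures are illustrative assumptions and give a depth of a centimetre or two, which is why thick items still rely on conduction to finish heating in the middle.

```python
import math

C = 299_792_458.0   # speed of light, m/s

def penetration_depth_m(freq_hz: float, eps_real: float, eps_loss: float) -> float:
    """Approximate 1/e power penetration depth: Dp ~ lambda0 * sqrt(eps') / (2*pi*eps'')."""
    wavelength = C / freq_hz
    return wavelength * math.sqrt(eps_real) / (2 * math.pi * eps_loss)

# Assumed values for liquid water at 2.45 GHz and room temperature.
dp = penetration_depth_m(2.45e9, eps_real=78.0, eps_loss=10.0)
print(f"Penetration depth ~ {dp*100:.1f} cm")   # roughly 1-2 cm
```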
Microwave heating is also great for retaining the nutrients in food. The shorter heating time and lower temperature of microwave cooking help to preserve the vitamins and minerals in food that can be lost during conventional cooking methods. Moreover, as microwaves heat the food from the inside out, they cause less moisture loss than conventional methods, reducing the chances of overcooking or drying out the food.
Another effect of microwaves on food is that they can change the texture and flavor of some foods. This is because microwaves preferentially heat water molecules, which make up much of the structure of most foods. As a result, the heat generated by microwave cooking can cause some parts of the food to become soft or mushy as water is driven off as steam, while other parts dry out; on its own, microwave heating rarely produces the crisp, browned surface that dry oven heat or frying gives.
Overall, microwave cooking is a great option for quick, easy, and nutritious meals. It is especially useful for reheating leftovers or cooking frozen foods. However, it is important to use caution when cooking with a microwave, as food can easily become overcooked or unevenly heated if the instructions are not followed carefully.
Is it Safe to Use Microwave Ovens to Heat Food or Beverages?
Microwave ovens have become a popular kitchen appliance in households worldwide due to their convenience and ease of use. However, concerns over their safety have been raised, particularly in terms of their potential impact on the nutritional value of food and the potential risks associated with exposure to the electromagnetic radiation they emit.
Numerous studies have been conducted to assess the safety of using microwave ovens to heat food or beverages, and the general consensus is that they are safe to use when operated according to the manufacturer's instructions. The radiation emitted by microwave ovens is non-ionizing, meaning it does not carry enough energy to ionize atoms or molecules or to damage DNA in the way X-rays can; its main effect on tissue is heating.
While exposure to microwave radiation does not pose any immediate health risks, some concerns have been raised regarding the possibility of long-term exposure leading to negative health effects. However, the current evidence suggests that such risks are minimal and that the biggest potential health risk associated with microwave ovens is burning from contact with hot food or beverages.
One concern often debated is whether microwave ovens reduce the nutritional value of food. Studies have shown that microwave cooking can result in lower levels of certain nutrients, such as vitamin C and some antioxidants. However, the amount of nutrient loss is generally small and does not have a significant impact on overall nutrition. Additionally, microwave cooking can actually help to preserve certain beneficial compounds in food that can be lost when using other cooking methods.
Microwave ovens are generally safe to use for heating food and beverages when used correctly. While some concerns have been raised regarding the potential impact on nutritional value and long-term exposure to microwave radiation, the evidence suggests that such risks are minimal. As with any cooking method, it is important to follow recommended guidelines and safety precautions to ensure the best outcomes.
The mechanism of microwave heating is an interesting yet complex process that has a variety of implications and applications. Though it may seem mysterious, it boils down to a simple idea: polar molecules such as water rotate and jostle faster when exposed to a rapidly alternating electromagnetic field, that molecular motion appears as heat, and consequently the food cooks faster.
Further exploration into this area could involve experimenting with different types of materials to see how they interact with microwaves as well as testing the most effective parameters for achieving efficient heating in a shorter period of time. Ultimately, the understanding of microwave heating is beneficial for proceeding further with its use and creating newer, more efficient products when it comes to food preparation. In the future, perhaps we can come up with new inventions that rely on these concepts to reduce our dependence on traditional cooking processes. |
Table of Contents
- 1 Which organ has maximum power of regeneration?
- 2 What organisms can regenerate?
- 3 Why is the ability to regenerate not shown by higher animals?
- 4 Which is the body part that never grows?
- 5 Can a salamander regrow its head?
- 6 Can a human regenerate?
- 7 Which animal does not show regeneration?
- 8 Can eyes repair themselves?
- 9 What kind of animal has the best regenerative power?
- 10 Which is easier to regenerate younger or older animals?
Which organ has maximum power of regeneration?
The liver is generally considered the organ with the greatest power of regeneration in the human body; it can regrow to its original size after a large portion is removed. However, some patients who have a diseased portion of their liver removed are unable to regrow the tissue and end up needing a transplant. Researchers from Michigan State University believe the blood-clotting factor fibrinogen may be responsible.
What organisms can regenerate?
Animals that Regenerate
- Lizards who lose all or part of their tails can grow new ones.
- Planarians are flat worms.
- Sea cucumbers have bodies that can grow to be three feet long.
- Sharks continually replace lost teeth.
- Spiders can regrow missing legs or parts of legs.
- Sponges can be divided.
Why is the ability to regenerate not shown by higher animals?
Regeneration is not possible for all types of animals; extensive regeneration is mainly seen in animals with relatively simple body plans. In animals with more complex, highly specialised tissues, most cells no longer retain access to the developmental programme needed to rebuild a lost structure, so these organisms cannot regenerate whole body parts.
Which animal has the ability of regeneration of its leg?
A prime example is the axolotl (Ambystoma mexicanum), a species of aquatic salamander. Unlike humans, it has the “superpower” of regenerating its limbs, spinal cord, heart, and other organs.
What is the only body part that Cannot repair itself?
Teeth are the ONLY body part that cannot repair themselves. Repairing means either regrowing what was lost or replacing it with scar tissue. Our teeth cannot do that. Our brain, for example, will not regrow damaged brain cells but can repair an area by laying down other scar-type tissue.
Which is the body part that never grows?
The only part of the human body which does not grow in size from birth to death is the ‘innermost ear ossicle’ or the ‘Stapes’. EXPLANATION: The stapes is 3 mm is size when a person is born. As a person grows or develops, this ossicle does not grow in size.
Can a salamander regrow its head?
It’s this talent that has captured the attention of Uri Frank and colleagues at Galway’s Regenerative Medicine Institute. Many animals can regenerate body parts, from starfish to salamanders.
Can a human regenerate?
Regeneration means the regrowth of a damaged or missing organ part from the remaining tissue. As adults, humans can regenerate some organs, such as the liver. If part of the liver is lost by disease or injury, the liver grows back to its original size, though not its original shape.
What animal regenerates the most?
Urodele amphibians, such as salamanders and newts, display the highest regenerative ability among tetrapods. As such, they can fully regenerate their limbs, tail, jaws, and retina via epimorphic regeneration leading to functional replacement with new tissue. Salamander limb regeneration occurs in two main steps.
Why can’t humans grow back body parts?
In fact, most of our organs have some turnover in cells, which explains why they’re younger than our biological age. The human heart, skin, intestines, and even our bones are slowly replaced over time, meaning that a limited amount of damage can be reduced. However, this doesn’t extend to limbs.
Which animal does not show regeneration?
In reptiles, chelonians, crocodilians and snakes are unable to regenerate lost parts, but many (not all) kinds of lizards, geckos and iguanas possess regeneration capacity in a high degree.
Can eyes repair themselves?
Minor superficial scratches on the cornea will usually heal by themselves within two to three days. In the meantime, some people cover their eye with an eye patch to keep it closed and relaxed.
What kind of animal has the best regenerative power?
A sponge is technically an animal, so some would say that certain sponge species have the best regenerative power of all. A sponge can be ground up into individual cells, and the cells will come back together to form full sponge organisms, or perhaps even a single sponge organism. No other 'animal' has this level of regeneration.
Who are scientists who study regenerative capacity in animals?
Andong Zhao ( [email protected]) and Hua Qin are PhD candidates at the Tianjin Medical University, in P.R. China; they study wound repair and regeneration.
Is it true that sponges have the ability to regenerate?
The ability to regenerate is widespread in the animal kingdom, but the regenerative capacities and mechanisms vary widely. Sponges are known to possess remarkable reconstitutive and regenerative abilities ranging from wound healing or body part regeneration to the impressive re-building of a functional body from dissociated cells.
Which is easier to regenerate younger or older animals?
In addition, the younger animal is usually easier to regenerate than the older. Decades of research are beginning to yield explanations about why regenerative capacity differs markedly, based on cellular and molecular components and evolutionary ideas. |
There are many different types of networks, and which one is used depends largely on the size of the network.
A network that covers a large geographical area is called a wide area network (WAN).
The infrastructure is owned by the PSTN (public switched telephone network) company, but it can be leased or used by an organisation.
The transmission medium is typically fibre optic cable.
Communication within a network is handled by switches. You don't need a router for this, as a switch keeps a record of the MAC address of each device connected to it.
A router is a bit more advanced, but we will talk about it later.
An example is office use.
So a LAN is owned by the organisation, and it enables communication within a building or room.
The transmission media used are wireless or twisted-pair cable.
There are many advantages in setting up a LAN, as it enables communication between computers.
For example, you can communicate with your colleagues without actually accessing the internet.
Also, centralised servers such as application servers and file servers can be connected to a LAN so that people can access files or software from these servers rather than installing the software on each individual computer. This makes it more efficient and cheaper.
These centralised servers highlight a very important model called the client-server model.
This definition is very important; in fact, exams usually ask what the difference is between the internet and the WWW.
All we have to know is that the internet is the largest network: a massive network of networks that connects networks such as WANs together. It also uses the TCP and IP protocols.
The reason these protocols matter is that the internet is an interconnected network linking computers in separate locations, so a common set of rules must be used in order for data to be sent to the correct location.
You only need to know these points
This model is very important as they usually ask this.
Let's take a simple example. You may know the app Google Drive.
So Google Drive helps you back up data or files onto a (file) server. This is the service provided by the server, and the client is the browser, or you. This is a specific type of service called a cloud computing service.
The client server model can be broken into two categories
So with a thin client, the client sends a request or data to a server, and the processing occurs only on the server. The server then sends the result or output back to the client.
Server-side processing like this is often written in a language such as PHP.
So with a thick client, the server sends the application/software to the browser, and the processing occurs in the browser itself rather than on the server.
We use each type for different purposes. For e-commerce or online stores we have to use a thin client, as data must be sent to the server; however, for something like an online calculator, the browser itself can perform the processing.
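As a concrete illustration of the thin-client idea, here is a minimal Python HTTP server that performs a calculation on the server and returns only the result to the browser; the port and parameter names are invented for demonstration. In a thick-client design, the same arithmetic would instead be shipped to the browser and run there.

```python
# Minimal thin-client style server: the client sends numbers, the server does the work.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class CalcHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        try:
            a = float(params["a"][0])          # e.g. request /add?a=2&b=3
            b = float(params["b"][0])
            body = f"result={a + b}"
            self.send_response(200)
        except (KeyError, ValueError):
            body = "error=missing or invalid parameters"
            self.send_response(400)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CalcHandler).serve_forever()
```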
This is fully covered in A2
In this model there is no fixed server or client. Whenever a computer requests data, it acts as a client (in file-sharing terms, a peer); when another computer requests data from it, it acts as a server (a seed).
So in this model, parts of a document are stored on individual computers rather than on a centralised server.
In other words the services are shared.
Indeed, each model has its own advantages, and you need to remember these as exams ask about them.
So, as all the applications and data are stored on a central server, the security of the data improves and access rights can be applied, so users will be able to access data only if they are authorised to.
We can also perform centralised backups
A virus guard can be installed centrally to protect the data
Much more efficient and cheaper, as the services don't need to be installed on individual computers
So usually the data is spread over many computers
Parts of a document can be accessed rather than the full file
It avoids congestion of the medium to the main server, as not everyone will be accessing the same server simultaneously.
And usually there are copies of the data on many hosts
I haven't listed the disadvantages separately because the advantages of each model are effectively the disadvantages of the other.
Whenever they say mode of transmission, there are three types
Simplex: data is sent in one direction only
An example is a radio station
Half-duplex: data is sent in both directions, but only one direction at a time
An example is a walkie-talkie
Full duplex: data is sent in both directions simultaneously
An example is playing online games. The internet is basically full duplex
A message or data can be transmitted in different ways. If data is sent by a single device to all the devices in the network, it is broadcast.
If data is sent to many selected destinations, it is multicast.
If data is sent only to a single device, it is unicast.
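A small sketch may make the difference concrete. The Python snippet below sends the same UDP message twice: once as unicast to one specific address, and once as a broadcast that every device on the local subnet can receive (multicast would instead target a chosen group address). The addresses and port are placeholders, not values from these notes.

```python
import socket

MESSAGE = b"hello from the network notes"
PORT = 5005  # arbitrary example port

# Unicast: the datagram is addressed to a single device.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.sendto(MESSAGE, ("192.168.1.42", PORT))        # one specific host

# Broadcast: every device on the local subnet sees the datagram.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(MESSAGE, ("255.255.255.255", PORT))     # limited broadcast address
```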
There are some main topologies which you must know, and each of them has its own advantages and disadvantages.
Point-to-point: think of a leased line. This is a point-to-point network, as you can communicate only with the one device at the other end.
Bus: in this topology a single shared link is used to connect each device to the network
Mesh: think about it, if you have 3 devices, one device can communicate directly with the other 2, and it's the same for the other 2 devices. In this topology data can be sent to a single device or to all of them
Star: think of a central device such as a router or switch. Usually all the devices are connected to this central device
Advantages and disadvantages of a bus topology:
The performance of the network and the other devices is not affected when a device connected to the network fails
Also, less cabling is used compared to the other topologies, as a single shared wire is used, so it is cheaper
However, if the shared link is damaged, the whole network fails
There is a higher chance of congestion, as a single wire is shared, so transmission can be slow
Also, the message is broadcast, and so it is less secure
Advantages and disadvantages of a mesh topology:
This allows direct communication within the system, and so faster and more secure transmission
If a device fails, it doesn't affect the rest of the network
However, it requires a lot of cabling, so it is highly expensive and often impractical
Advantages and disadvantages of a star topology:
The performance of the network is not affected if an end system fails
More secure, as the central device can filter the data
However, if the central device fails, the whole network fails
It requires a lot of cabling, as every device must be connected to the central device
Depending on the central device, there could be data collisions and corruption, and so it can be less secure
An example: when a bus topology is connected to a star topology, we then call it a hybrid network
Usually as a whole the internet is a hybrid network
Why Add More?
Muscular endurance can be increased by performing more repetitions with a given resistance, while maximizing strength development requires muscles to be subjected to progressively heavier training loads. The process of gradually adding more exercise resistance than the muscles have previously encountered is referred to as overload.
Adding resistance-training intensity based on an absolute or relative load. This can mean working at a percentage of a one-rep maximum, or at the one-rep maximum itself. For example, 70% of a one-rep maximum would typically be a weight that can be lifted for about 12 repetitions to failure. Failure is defined as the inability to complete another repetition with proper form or a full range of motion.
Completing more repetitions at the same level of exercise intensity. If you are completing 10 repetitions with a weight, a progression may be completing 12 repetitions with the same weight. A repetition is a single, individual action of the muscles responsible for creating movement at a joint or series of joints. A repetition involves three phases of muscle action: eccentric lengthening, a momentary isometric pause, and concentric shortening. The number of repetitions assigned for an exercise indicates the number of times an individual should perform that particular movement. As mentioned above, to create the necessary overload to promote specific adaptations, repetitions should be performed until momentary muscle fatigue occurs.
Alternating the speed or tempo of the repetitions. This can increase intensity by taking gravity and momentum out of the picture. Time under tension shows promise in helping to develop certain goals, especially hypertrophy. As a general rule of thumb, use more movement speed on power and strength movements and less movement speed on hypertrophy and endurance lifts.
Changing the rest periods between multiple sets when strength training. By increasing the rest time, an individual can aim to match the repetitions performed on the first set; alternatively, they can decrease the rest time and perform the next set before full recovery. Rest times for each type of goal will be covered in another section.
Gradually increasing the training volume. During each resistance-training session, a certain amount of work is performed. The cumulative work completed is referred to as the training volume. Training volume is calculated in several ways:
Repetition-Volume Calculation: Volume=Sets x Repetitions (for either the muscle group or the session).
Load-Volume Calculation: Volume = Exercise Weight load x Repetitions x Sets (summing the total for each muscle group or the entire session).
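A tiny Python sketch of the two volume formulas above; the workout numbers are invented purely to show the arithmetic.

```python
def repetition_volume(sets: int, reps: int) -> int:
    """Repetition volume = sets x repetitions."""
    return sets * reps

def load_volume(weight: float, reps: int, sets: int) -> float:
    """Load volume = exercise weight load x repetitions x sets."""
    return weight * reps * sets

# Hypothetical session: 3 sets of 10 squats at 100 kg plus 3 sets of 12 presses at 60 kg.
session = [(100.0, 10, 3), (60.0, 12, 3)]          # (weight_kg, reps, sets)
total_reps = sum(repetition_volume(s, r) for w, r, s in session)
total_load = sum(load_volume(w, r, s) for w, r, s in session)
print(f"Repetition volume: {total_reps} reps, load volume: {total_load} kg")
```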
It’s important to emphasize that the best progressions will involve variations in intensity and volume. Later, we will discuss periodization, and the different forms of periodization that allow the framework for implementing progressive overload over a certain time span to complete certain specific goals.
When muscles are stressed beyond their normal demands, they respond in some way to the imposed stress. If the training stress is much greater than normal, the muscles react negatively to high levels of tissue micro trauma. The resulting (large-scale) cell damage requires several days of muscle repair and rebuilding to regain pre-training strength and functional abilities. Weight lifters should allow 48-72 hours of rest between training the same body part and performing the same movements with load.
On the other hand, when muscles are systematically stressed in a progressive manner, they gradually increase in size and strength. That is, if the training stress is slightly greater than normal, the muscles respond positively to low levels of tissue micro trauma. The resulting (small-scale) cell damage elicits muscle-remodeling processes that lead to larger and stronger muscles. Research indicates that muscular strength increases significantly above baseline levels 72 to 96 hours after an appropriately stressful series of resistance training.
When the training program no longer produces gains in muscular strength or size, the exercise protocol should be changed in some way to again elicit the desired neuromuscular adaptations. This is where a well-designed periodization model helps keep results coming.
My Signature Method, Your Signature Move! |
The question how life began on Earth has always been a matter of profound interest to scientists. But just as important as how life emerged is the question of when it emerged. In addition to discerning how non-living elements came together to form the first living organisms (a process known as abiogenesis), scientists have also sought to determine when the first living organisms appeared on Earth.
The surface of Venus has been a mystery to scientists ever since the Space Age began. Thanks to its dense atmosphere, its surface is inaccessible to direct observations. In terms of exploration, the only missions to penetrate the atmosphere or reach the surface were only able to transmit data back for a matter of hours. And what we have managed to learn over the years has served to deepen its mysteries as well.
For instance, for years, scientists have been aware of the fact that Venus experiences volcanic activity similar to Earth (as evidenced by lightning storms in its atmosphere), but very few volcanoes have been detected on its surface. But thanks to a new study from the School of Earth and Environmental Sciences (SEES) at the University of St. Andrews, we may be ready to put that particular mystery to bed.
The study was conducted by Dr. Sami Mikhail, a lecturer with the SEES, with the assistance of researchers from the University of Strasbourg. In examining Venus’ geological past, Mikhail and his colleagues sought to understand how it is that the most Earth-like planet in our Solar System could be considerably less geologically-active than Earth. According to their findings, the answer lies in the nature of Venus’ crust, which has a much higher plasticity.
This is due to the intense heat on Venus' surface, which averages 737 K (462 °C; 864 °F) with very little variation between day and night or over the course of a year. Given that this heat is enough to melt lead, it has the effect of keeping Venus' silicate crust in a softened and semi-viscous state. This prevents magma from being able to move through cracks in the planet's crust and form volcanoes (as it does on Earth).
In fact, since the crust is not particularly solid, cracks are unable to form in the crust at all, which causes magma to get stuck in the soft, malleable crust. This is also what prevents Venus from experiencing tectonic activity similar to what Earth experiences, where plates drift across the surface and collide, occasionally forcing magma up through vents. This cycle, it should be noted, is crucial to Earth’s carbon cycle and plays a vital role in Earth’s climate.
Not only do these findings explain one of the larger mysteries about Venus’ geological past, but they are also an important step towards differentiating between Earth and its “sister planet”. The implications of this go far beyond the Solar System. As Dr. Mikhail said in a St. Andrews University press release:
“If we can understand how and why two, almost identical, planets became so very different, then we as geologists, can inform astronomers how humanity could find other habitable Earth-like planets, and avoid uninhabitable Earth-like planets that turn out to be more Venus-like which is a barren, hot, and hellish wasteland.”
In terms of size, composition, structure, chemistry, and its position within the Solar System (i.e. within the Sun’s habitable zone), Venus is the most Earth-like planet discovered to date. And yet, the fact that it is slightly closer to our Sun has resulted in it having a vastly different atmosphere and geological history. And these differences are what make it the hellish, uninhabitable place that it is today.
Beyond our Solar System, astronomers have discovered thousands of exoplanets orbiting various types of stars. In some cases, where the planets exist close to their sun and are in possession of an atmosphere, the planets have been designated as being “Venus-like“. This naturally sets them apart from the planets that are of particular interest to exoplanet hunters – i.e. the “Earth-like” ones.
Knowing how and why these two very similar planets can differ so dramatically in terms of their geological and environmental conditions is therefore key to being able to tell the difference between planets that are conducive to life and those that are hostile to it. That can only come in handy when we begin to study multiple-planet systems (such as the seven-planet system of TRAPPIST-1) more closely.
Further Reading: University of St. Andrews
So Curiosity has been on Mars for an Earth year and is now, slowly, making its way over to that ginormous mountain — Mount Sharp, or Aeolis Mons — in the distance. The trek is expected to take at least until mid-2014, if not longer, because the rover will make pit stops at interesting science sites along the way. But far-thinking scientists are already thinking about what areas they would like to examine when it gets there.
One of those is an area that appears to have formed in water. There’s a low ridge on the bottom of the mountain that likely includes hematite, a mineral that other Mars rovers have found. (Remember the “blueberries” spotted a few years ago?) Hematite is an iron mineral that forms “in association with water”, a new study reports, and could point the way to the habitable conditions Curiosity is seeking.
The rub is that scientists can’t say for sure how the hematite formed until the rover is practically right next to the ridge. There are plenty of pictures from orbit, but none high-resolution enough for the team to give definitive answers.
“Two alternatives are likely: chemical precipitation within the rocks by underground water that became exposed to an oxidizing environment — or weathering by neutral to slightly acidic water,” wrote Arizona State University’s Red Planet Report. Either way, it shows the ridge likely hosted iron oxidation. Earth’s experience with this type of oxidation shows that it happens “almost exclusively” with microorganisms, but that’s not a guarantee on Mars.
Mars Reconnaissance Orbiter images show that the ridge is about 660 feet (200 meters) wide and four miles (6.5 kilometers) long, with strata or layers in the ridge appearing to be similar to those of layers in Mount Sharp.
While Curiosity is not designed to seek life, it can ferret out details of the environment. Just a few weeks ago, for example, it uncovered pebbles that likely formed in the presence of water. Other Mars missions have also found evidence of that liquid, with perhaps some of it once arising from the subsurface. Where the water came from, and why the environment of Mars changed so much in the last few billion years, are ongoing scientific questions.
Check out more details on the study in Geology.
Source: Red Planet Report
The Ficus genus is home to around 850 species of plants in the dicotyledonous Moraceae family that come in a range of shapes and sizes, but can most easily be recognised by their round or pear-shaped fruit or infructescence (inflorescence before pollination).
Plants in this genus can be sensitive to location change; perhaps if you’ve ever taken a rubber plant home from the nursery and it lost the bulk of its leaves you can relate to this.
Mature fig trees are quite possibly the best trees for kids to play in; there’s just something about their branch structure and buttress roots that makes them fun to climb and hide in.
Fig relatives can grow as a woody tree, shrub or vine. Many species such as Moreton Bay figs have buttress roots that provide additional structural support.
Their true flowers and fruit occur within a hollowed stem that we term a “fruit”. It is in fact a false fruit, or a receptacle holding many fruits.
When mature, the main trunk may be relatively short compared to the numerous branches that reach out in all directions. In shady areas, plants may race for the sun with a single leader, leading to the growth of a tall main trunk.
Thin aerial roots may come off branches, and if given enough time these roots will reach the soil and may eventually grow into a whole new tree, especially in the case of mature strangler figs.
Fig trees have a milky sap when cut or when a leaf is snapped off, as do other members of the Moraceae family.
Flowers, Fruits & Leaves
Flowers: The flowers are borne in an inflorescence enclosed within a hollow stem; most species are monoecious, though some are dioecious.
Fruit: Figs have a composite fruit called a syconium, with the apparent “seeds” being the true fruits (tiny drupes or achenes), each with a true seed within. The part we call the “fruit” is in fact the receptacle.
Leaves: The fig genus has produced some pretty varied leaves. Benjamin fig leaves and rubber fig leaves are vastly different in size, but roughly the same oval (elliptic) shape and waxy texture. Contrast these with the fuzzy-feeling, lobed palmate leaves of the culinary common fig. Leaves are usually alternate, though sometimes opposite.
Because of their morphology (shape/structure), figs require highly specialised pollinating wasps.
Female wasps enter through the fig’s “ostiole” when it opens during pollination, located where the calyx (dry sepals) would be on an apple or pear. This is a one-way mission, as they lose their wings during the process.
They lay their eggs in the female flowers, which form galls in which the offspring develop. Once they hatch, wingless males mate with the females, which become covered in pollen and burrow out to pollinate the next generation of figs.
In monoecious species, there are plants with both sexes, and plants with only female parts. The “male” plants have separate male and female flowers (like their dioecious relatives), but the female flowers never grow fruit and their function is to provide a brooding site for pollinators within the “male” plants.
The female plants of these monoecious species can only receive pollen and create seeds, whereas their “brothers” (who technically do have female organs) produce pollen to send to them.
The bodhi tree F. religiosa has significance as the type of tree beneath which the Buddha gained enlightenment.
Rubber trees F. elastica are cultivated for their sap which is used in the production of rubber. They also make great indoor plants because of their large, dramatically coloured leaves.
Moreton Bay figs F. macrophylla are one of my personal favourite plants. They have beautiful, large elliptic leaves (slightly smaller than F. elastica leaves), red to purple fruits and are huge when they can reach their full potential. This iconic plant is quite possibly my favourite tree and is the inspiration for this blog’s logo.
F. benjamina is one of the most common indoor and outdoor ornamental figs. Sometimes called a weeping fig, Benjamin fig, or simply a ficus or fig tree.
Fiddle-leaf figs F. lyrata make great indoor plants with partial light. They get their name from the shape of their lobed lyrate leaves, which resemble a fiddle.
Banyan trees are strangler figs that begin life growing out of bird poo in a tree crevice as an “epiphyte”, meaning that they live on another tree without being a parasite. They will eventually swallow the host in roots, and the host will die to provide fertiliser for the fig. A plant that spends part of its life as an epiphyte and part of its life with roots in the ground is called a “hemiepiphyte”, a term that applies to banyans.
Examples of banyans are F. benghalensis, the national tree of India, and strangler figs F. watkinsiana native to Queensland.
Almost all figs (as in the fruit) are technically edible, but there are several main types that are generally grown for this purpose, including Ficus carica also known as the common fig.
Ficus is a genus full of beautiful plants that make attractive ornamental plants in our gardens and delicious fruits on our plates. If you’re living in a tropical region, you’ll tend to do better with figs but they can do really well down here in Melbourne, too.
Most fig fruits aren’t that palatable to humans, but birds absolutely love them. Be prepared to pay for increased bird sightings with their purple, seedy poo if you plan on planting a ficus variety.
If you haven’t already read my articles on plant identification and scientific names, I recommend reading those to get a broader picture of the topic. Alternatively, you can browse some of my other plant families, subfamilies and genera below.
Brain stimulation guides people through an invisible maze
By New Scientist | Nov 24, 2016
You’re stuck in a maze. You can’t see the walls, or the floor. All you have to navigate is a device on your head stimulating your brain to tell you which way to go.
In an experiment at the University of Washington in Seattle, participants solved a maze puzzle guided only by transcranial magnetic stimulation (TMS). The findings suggest that this type of brain prompt could be used to augment virtual reality experiences or help give people who are blind “visual” information about their surroundings.
Darby Losey and his colleagues created a virtual maze in the style of a simple 2D video game through which people had to guide an avatar. But they couldn’t actually see the maze – instead, they faced a blank screen. At regular intervals, a question box would pop up asking if they would like to move forward or make a turn. How did they know whether to keep going or change course? Each time their avatar got too close to a wall, they were given a dose of TMS to the primary visual cortex at the back of their brain.
TMS produces small electric currents that can at certain intensities induce the perception of a flash of light called a phosphene. No light actually enters the eye, but the brain still “sees” it. Phosphenes can also occur if you put pressure on your eyeballs when rubbing your eyes. To successfully escape from the maze, all the participants had to do was carry on walking until they experienced a flash of light. When that happened, they knew they had reached a wall and had to turn.
Participants successfully completed an average of about 92 per cent of the steps to get through a variety of different mazes. In contrast, a control group provided with a fake TMS machine that gave them no stimulation completed just 15 per cent, suggesting that TMS was helpful in guiding people and they weren’t just guessing.
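The rule participants followed can be expressed as a few lines of logic. The tiny simulation below is an invented illustration of that rule only; the maze layout, grid representation, and function names are assumptions for demonstration and do not come from the study.

```python
def navigate(maze, start, facing, max_steps=100):
    """Keep moving forward; whenever a 'flash' is signalled (a wall directly ahead), turn instead."""
    directions = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # right, down, left, up
    row, col = start
    path = [(row, col)]
    for _ in range(max_steps):
        d_row, d_col = directions[facing]
        ahead = (row + d_row, col + d_col)
        blocked = ahead in maze["walls"] or not (
            0 <= ahead[0] < maze["rows"] and 0 <= ahead[1] < maze["cols"])
        if blocked:
            facing = (facing + 1) % 4   # phosphene perceived: turn
        else:
            row, col = ahead            # no flash: keep walking forward
            path.append(ahead)
        if (row, col) == maze["exit"]:
            break
    return path

# A 3x3 toy maze: walk along the top row, turn at the edge, then head down to the exit
maze = {"rows": 3, "cols": 3, "walls": {(1, 0), (1, 1)}, "exit": (2, 2)}
print(navigate(maze, start=(0, 0), facing=0))
```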
“A lot of research has been done trying to extract information from the brain,” says Losey. He is more interested in using TMS to put information into it.
Scientists have discovered a new process to make polymers out of sulfur which could provide a way of making plastic that is less harmful to the environment.
Sulfur is an abundant chemical element and can be found as a mineral deposit across the world. It is a waste product from the refining of crude oil and gas in the petrochemicals industry, which generates huge stockpiles of sulfur outside refineries.
While sulfur has been identified as an interesting possible alternative to carbon in the manufacture of polymers, it cannot form a stable polymer on its own. Instead, in a process called ‘inverse vulcanization’, it must be reacted with organic crosslinker molecules to make it stable. This process can require high temperatures and long reaction times, and can produce harmful by-products.
However, researchers from the University of Liverpool’s Stephenson Institute of Renewable Energy, working in the field of materials chemistry, have made a potentially game-changing discovery.
Continue reading at University of Liverpool
Image via University of Liverpool
Overview – Blastomycosis
Blastomycosis is a systemic pyogranulomatous infection usually caused by the inhalation of conidia (spores) of Blastomyces dermatitidis. Clinical presentations vary widely, ranging from an asymptomatic, self-limited pulmonary infection to acute respiratory distress syndrome (ARDS), a life-threatening disease. Blastomycosis is not known to spread from person to person.
Pathogenesis of Blastomycosis
Inhaled conidia of B. dermatitidis are phagocytosed by neutrophils and macrophages in the alveoli. Some of these escape phagocytosis and rapidly transform into the yeast phase. Having thick walls, the yeast cells are resistant to phagocytosis and express the glycoprotein BAD-1, which is a virulence factor as well as an epitope. In lung tissue, they multiply and may disseminate through blood and lymphatics to other organs, including the skin, bone, genitourinary tract, and brain. The incubation period is 30 to 100 days, although infection can be asymptomatic.
What Causes Blastomycosis?
Blastomycosis is caused when spores of the fungus Blastomyces dermatitidis become airborne and are inhaled. Although most of the spores that enter the lungs are killed by specialized cells there, some transform into a yeast-like form that these cells cannot destroy. This transformation is driven largely by body temperature, which converts the spores from their fungal form to a yeast form. The yeasts then multiply and spread through the blood to other organs and parts of the body, resulting in blastomycosis.
What Are the Risk Factors for Blastomycosis?
Although almost anyone can become infected with the fungi, those at the highest risk for blastomycosis are immunosuppressed individuals and those that live in or visit areas where the fungal spores are plentiful. Since the fungi prefer damp forested areas, people who are hunters, forestry workers, campers, and farmers are at higher risk to get blastomycosis. Blastomycosis cannot be spread from person to person or animal to person.
Lung infection may not cause any symptoms. Symptoms may be seen if the infection spreads. Symptoms may include:
- Joint pain
- Chest pain
- Cough (may produce brown or bloody mucus)
- Fever and night sweats
- General discomfort, uneasiness, or ill feeling (malaise)
- Muscle pain
- Unintentional weight loss
Most people develop skin symptoms when the infection spreads. You may get papules, pustules, or nodules on exposed body areas.
- May look like warts or ulcers
- Are usually painless
- Vary in color from gray to violet
- May appear in the nose and mouth
- Bleed easily and form ulcers
Possible Complications of Blastomycosis
Complications of blastomycosis may include:
- Large sores with pus (abscesses)
- Skin sores can lead to scarring and loss of skin color (pigment)
- Return of the infection (relapse or disease recurrence)
- Side effects from drugs such as amphotericin B
Diagnosis and test
Patient history is also important in the diagnosis, particularly in areas where there may be a disease outbreak.
Test and screening
- Fungal cultures and smear
- Blastomyces urine antigen
If blastomycosis is suspected, a chest x-ray should be taken. Focal or diffuse infiltrates may be present, sometimes as patchy bronchopneumonia fanning out from the hilum. These findings must be distinguished from other causes of pneumonia (eg, other mycoses, tuberculosis [TB], tumors).
Skin lesions can be mistaken for sporotrichosis, TB, iodism, or basal cell carcinoma. Genital involvement may mimic TB.
Cultures of infected material are done; they are definitive when positive. Because culturing Blastomyces can pose a severe biohazard to laboratory personnel, the laboratory should be notified of the suspected diagnosis. The organism’s characteristic appearance, seen during the microscopic examination of tissues or sputum, is also frequently diagnostic.
Serologic testing is not sensitive but is useful if positive.
A urine antigen test is useful, but cross-reactivity with Histoplasma is high.
Molecular diagnostic tests (eg, polymerase chain reaction [PCR]) are becoming available.
How is Blastomycosis treated?
Not all patients with Blastomycosis require treatment. Occasionally the symptoms from Blastomycosis may go away without treatment. People with evidence of Blastomycosis spreading beyond the lungs, or whose symptoms do not improve, will require treatment. The type of treatment you will be given depends on how severe your symptoms are and whether you are immunosuppressed (have a weakened immune system).
Treatment of blastomycosis depends on severity of the infection.
Mild to moderate disease: itraconazole 200 mg orally 3 times a day for 3 days, followed by 200 mg orally once a day or 2 times a day for 6 to 12 months is used. Fluconazole appears less effective, but 400 to 800 mg orally once a day may be tried in itraconazole-intolerant patients with mild disease.
Severe, life-threatening infections: IV amphotericin B is usually effective. The Infectious Diseases Society of America’s guidelines recommends a lipid formulation of amphotericin B at a dosage of 3 to 5 mg/kg once a day or amphotericin B deoxycholate 0.7 to 1.0 mg/kg once a day for 1 to 2 weeks or until improvement is noted.
Therapy is changed to oral itraconazole once patients improve; dosage is 200 mg 3 times a day for 3 days, then 200 mg 2 times a day for ≥ 12 months.
Patients with central nervous system blastomycosis, pregnant patients, and immunocompromised patients should be treated with IV amphotericin B (preferably liposomal amphotericin B), using the same dose schedule as for life-threatening infection.
Voriconazole, isavuconazole, and posaconazole are active against B. dermatitidis, but clinical data are limited, and the role of these drugs has not yet been defined.
Are there Home Remedies for Blastomycosis?
Treatment of blastomycosis should not be attempted at home; a physician needs to diagnose, treat, and follow up with the infected patient to be sure the patient receives adequate treatment and does not relapse.
- Unfortunately, there are no known practical measures for the prevention of blastomycosis.
- There are currently no methods to test soil for the presence of Blastomyces species.
- Illness caused by blastomycosis can be minimized by early recognition and appropriate treatment of the disease. Awareness of the disease by both the public and health care providers is the key to early diagnosis.
Fractions may have numerators and denominators that are composite numbers (numbers that have more factors than just 1 and themselves).
How to simplify a fraction:
- Find a common factor of the numerator and denominator. A common factor is a number that will divide into both numbers evenly. Two is a common factor of 4 and 14.
- Divide both the numerator and denominator by the common factor.
- Repeat this process until there are no more common factors.
- The fraction is simplified when no more common factors exist.
Another method to simplify a fraction
- Find the Greatest Common Factor (GCF) of the numerator and denominator
- Divide the numerator and the denominator by the GCF
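Either method can be sketched in a few lines of code. The snippet below is a minimal illustration of the GCF approach; the function name is chosen only for this example.

```python
from math import gcd

def simplify(numerator: int, denominator: int) -> tuple[int, int]:
    """Reduce a fraction to lowest terms by dividing out the greatest common factor."""
    common = gcd(numerator, denominator)
    return numerator // common, denominator // common

print(simplify(4, 14))   # 2 is a common factor (and the GCF), so this gives (2, 7)
print(simplify(24, 36))  # the GCF is 12, giving (2, 3)
```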
CCSS.ELA-LITERACY.RH.11-12.1 Cite specific textual evidence to support analysis of primary and secondary sources, connecting insights gained from specific details to an understanding of the text as a whole.
CCSS.ELA-LITERACY.RH.11-12.2 Determine the central ideas or information of a primary or secondary source; provide an accurate summary that makes clear the relationships among the key details and ideas.
CCSS.ELA-LITERACY.RH.11-12.3 Evaluate various explanations for actions or events and determine which explanation best accords with textual evidence, acknowledging where the text leaves matters uncertain.
CCSS.ELA-LITERACY.RH.11-12.6 Evaluate authors’ differing points of view on the same historical event or issue by assessing the authors’ claims, reasoning, and evidence.
CCSS.ELA-LITERACY.RH.11-12.9 Integrate information from diverse sources, both primary and secondary, into a coherent understanding of an idea or event, noting discrepancies among sources.
Evaluate the historiography of the War of 1812
Compare an American textbook account of the War of 1812 with a Canadian textbook account.
Describe the important differences between the American textbook account of the War of 1812 and the Canadian textbook account (i.e. – each side describes different causes, atrocities, heroes, villains, and outcomes)
Explain to the class that the War of 1812 has been called “the Rodney Dangerfield of wars,” in other words, it doesn’t get any respect. In the realm of American History, it is easily glossed over and does not receive much attention in most textbooks. College Humor lampooned our limited collective knowledge of this event with a spoof preview for a film about the War of 1812. Show the class the College Humor film clip to drive home the point that the U.S. collective memory of the War of 1812 is incredibly shallow. https://www.youtube.com/watch?v=w2AfQ5pa59A
Introduce the central investigative question to the class: “How do American and Canadian accounts of the War of 1812 differ?” Begin the investigation by analyzing your students’ U.S. History textbook account of the war. Here is a sample of James West Davidson’s America: History of Our Nation textbook account of the War of 1812. Ask students to consider the following questions when they read their U.S. History textbook:
According to the text, what are the causes of the War of 1812?
What terms or adjectives does the author use to describe the various groups who participated in the war (ex. Americans, Canadians, British, Native Americans)?
Are there any heroes in the War of 1812? Any villains?
Were there any atrocities committed during the war?
Who were the winners and losers in the War of 1812?
In most American textbooks, British impressment of sailors, British restriction of American trade and British support of Native Americans on the frontier are usually cited as the main causes in the War of 1812. Andrew Jackson and Commodore Perry emerge as heroes and the British soldiers involved in the destruction of Washington, D.C. make for menacing villains. A majority of American textbooks usually downplay the terms of the Treaty of Ghent and end their narrative with Jackson’s victory at New Orleans and the subsequent rise of nationalism, which make the war seem like a last-minute victory for the United States.
After students have an opportunity to share their interpretation of the American textbook, transition into studying the Canadian textbook account. Use the same set of questions to deconstruct the Canadian account. A few key differences usually emerge as students encounter a new point of view:
The Americans are described as the aggressors and invaders.
The Canadians are the clear victors – they successfully defended their country against the American invaders.
Isaac Brock and Laura Secord are Canadian heroes
Washington, D.C. was only destroyed as retaliation after the Americans destroyed York (present day Toronto).
Use the following Venn Diagram to help students visualize the similarities and differences between the U.S. and Canadian accounts: https://docs.google.com/document/d/1vcc4ra3E6lQo7wgPIbkRYOdYxZhs71ideKnRnanoD8M/edit?usp=sharing
Wrap up the lesson in a humorous way, by having students listen to a Canadian song about the War of 1812: The Arrogant Worms’ “War of 1812 Song”. The song re-emphasizes a lot of the main points in the Canadian textbook and also extends the lesson into modern times in a unique way. Students could use the same set of questions to critically analyze the song.
The big idea that I try to drive home with this lesson is that students need to read all sources with a critical eye. They need to understand that their textbook is only one interpretation of an international event. Reading international accounts of events can provide us with a much broader understanding of a topic.
Springer, Paul. The Causes of the War of 1812. Foreign Policy Research Institute. March 31, 2017.
https://www.fpri.org/article/2017/03/causes-war-1812/. Presented at the 2017 Butcher History Institute: Why
Does America Go To War? Video of Presentation: https://www.youtube.com/watch?v=P32jMVRYnmI
Arrogant Worms. The War of 1812 Song. YouTube. June 26, 2011.
College Humor. The War of 1812: The Movie. YouTube. October 4, 2011.
Davidson, James West. America: History of Our Nation, Beginnings to 1914. Prentice Hall. 2011. Pages 327-331.
Lindaman, Dana and Ward, Kyle. History Lessons: How Textbooks from Around the World Portray U.S. History.
The New Press. June 1, 2004. Pages 53-56.
Some types of exercise are worse than others. For example, among different types of exercise that use the same amount of oxygen, some will cause more wheeziness or chest tightness than others.
Running outdoors is usually worse than swimming. In fact, swimming is one of the best forms of exercise for people with asthma because it usually causes the least amount of chest tightness.
Also, if the air you breathe during exercise is cold and dry, then the asthma will be worse. If it is warm and moist, the asthma will be less severe. This explains why swimming usually causes less asthma than outdoor running.
Increased breathing during exercise causes cooling and drying of the lining of the air passages and this is usually necessary for someone to get exercise-induced asthma. This explains why warm moist air protects against exercise-induced asthma. At this stage it is not understood why the drying and cooling of the airway linings causes the asthma episode.
Some people get worsening of their asthma from the chlorine fumes from swimming baths. This is another factor which can affect the result, and for such people swimming in a chlorinated pool is much worse than running.
The timing of the exercise is also important. It usually takes about six minutes of exercising to trigger an exercise-induced asthma, and exercising for less than this may not be enough to trigger the asthma.
In addition, for a few hours after you have had the exercise induced asthma, repeating the same amount of exercise will no longer produce the same amount of asthma symptoms, or may even produce no asthma symptoms at all.
Some people may be able to ‘run through’ their exercise induced asthma either by warming up with short bursts of exercise, or by continuous exercise which does not bring on a severe attack.
Sports and exercises which consist of short bursts of activity with periods of rest in between can be particularly suitable for people with asthma. For example:
- Long-distance or cross-country running is a particularly strong trigger for asthma because it is undertaken outside in cold air without short breaks.
- Team sports such as football or hockey are less likely to cause asthma symptoms as they are played in brief bursts with short breaks in between.
- Swimming is an excellent form of exercise for people with asthma. The warm humid air in the swimming pool is less likely to trigger symptoms of asthma. However, swimming in cold water or heavily chlorinated pools may trigger asthma.
- Yoga is a good type of exercise for people with asthma as it relaxes the body, reduces stress levels, and may also help with breathing.
There is also compelling evidence that gradual athletic training can make you less prone to exercise-induced asthma.
Better treatment with medicines can have a powerful effect on exercise-induced asthma. The better your asthma control, the less you will be troubled by exercise-induced asthma.
A lot of athletes, especially runners, suffer from exercise-induced asthma. This may be partly because an amount of asthma which does not matter to most people can mean the difference between winning and losing for an athlete.
If you are an athlete who suffers from exercise-induced asthma, then it is worth getting top-level specialist advice to help you solve it. Athletes train to levels of fitness which most ordinary people don’t even think about, so it is worth getting the best advice to help you manage the disease.
In the past, many Olympic medal winners have been asthmatic and have suffered from exercise-induced asthma. With the right help, advice, training, treatment, and self-discipline the problems can usually be overcome.
There are several steps that can be taken to help to reduce the symptoms of exercise-induced asthma. These should be used with any medicines that your doctor has prescribed.
- Warm up and down.
- Avoid the cold air. It can also help to cover the nose and mouth with a scarf in cold weather.
- Stay fit. Good aerobic fitness can help to reduce exercise-induced asthma.
This is an image of Pluto with its moon Charon.
After the discovery of Neptune in 1846, mathematical theory suggested that there still might be a ninth planet. Scientists set out to discover it, and it was finally identified in 1930 by Clyde Tombaugh after a careful search of the sky.
Finding Pluto was difficult. It had to be done by noticing its motion against the background of stars. Because Pluto is so small, it is also very dim in the sky. At 39 Astronomical Units from the sun, and with 248 years to complete its orbit around the sun, Pluto also moves very slowly. So it was many years before the 9th planet could be identified by its motion.
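As a quick aside (not part of the original article), the distance and orbital period quoted above are consistent with Kepler's third law, which for bodies orbiting the Sun says that the square of the period in years roughly equals the cube of the average distance in astronomical units. The value of 39.5 AU below is a rounded figure used for illustration.

```python
# Kepler's third law for objects orbiting the Sun: P[years]^2 ~ a[AU]^3
semi_major_axis_au = 39.5          # Pluto's average distance is roughly 39-40 AU
period_years = semi_major_axis_au ** 1.5
print(f"Estimated orbital period: {period_years:.0f} years")  # ~248 years
```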
Pluto is named after the Roman god of the underworld. It has one moon named Charon. The two objects act more like a double planetary system. They orbit each other, as if they were in a standoff, waiting for the other to turn their back. Some people say that Pluto isn't a planet at all, but rather a satellite that escaped Neptune's gravitational pull.
If you’ve ever encountered head lice in your family, you may know that lice afflict some 6-12 million people—mostly children—each year, according to the U.S. Centers for Disease Control and Prevention. Lice are a serious problem and they never seem to go away.
So you may wonder, where did head lice come from in the first place? There is a short answer and a long answer to this question. The short answer is that if you or your child has lice, you got them from another person through head-to-head contact or by wearing something like a hat or using a brush that had live lice on it from someone else. Those are the only ways to get lice.
The longer answer goes back tens of thousands of years. Scientists believe head lice began to evolve on a different path than body lice about the time humans started to wear more clothing. Body lice evolved to attach to clothing fibers which are typically thicker and stronger than human hair. Head lice stuck with the scalp. Researchers have used this information to speculate that humans may have used clothing much earlier than previously believed.
According to researchers, there are three primary “clades” (i.e., categories) of head lice, imaginatively named A, B, and C. Clade B head lice are thought to have originated in North America, and then to have migrated to farther reaches of the world, including Australia and Europe.
DNA from the mitochondria of head lice cells collected from lice around the world has been used to trace back the ancestry of lice to a common lineage about 2 million years ago. Researchers now believe that Clade C then split off from the group. Much later, between 700,000 and 1 million years ago, Clade B split from Clade A.
Scientists believe that lice, due to their relationship with humans, can provide important information about human evolution. Because they live only on human hosts, only feed on human blood, and die shortly after separation from a host, their DNA is a relatively pure link to human evolution.
Lice are continuing to evolve, to the point that many of the traditional lice treatment products used for the past several decades are no longer effective. Multiple studies have shown that the majority of lice in the U.S. have developed resistance to pyrethroids, the active ingredient in over-the-counter lice products.
Treating Head Lice
So now you know where lice come from—the short answer and the long answer. And you can probably guess that they’re not going away any time soon. With 6-12 million cases of head lice each year in the U.S. alone, and increasing resistance to lice medications, the problem may get worse before it gets better.
However, there has been research in a new area of lice treatment. Scientists studying lice at the University of Utah found that lice die when exposed to heated air, as long as the air falls within a specific airflow and temperature. This led to the development of the AirAllé medical device, FDA-cleared and clinically proven to kill live lice and more than 99 percent of eggs. The device uses warm air to dehydrate the lice and eggs, depriving them of the humidity needed to survive. Humidity is a critical factor for lice; the optimal humidity for survival is in the range of 70–90 percent; they cannot survive when this value falls below 40 percent.
The treatment only takes about an hour.
Treatment using the AirAllé device is available at Lice Clinics of America treatment centers—some 150 in the United States and 100 more in other countries. For more information or to find a clinic, visit www.liceclinicsofamerica.com.
Humanity is devouring our planet’s resources in increasingly destructive volumes, according to a new study that reveals we have consumed a year’s worth of carbon, food, water, fiber, land, and timber in a record 212 days.
As a result, the Earth Overshoot Day – which marks the point at which consumption exceeds the capacity of nature to regenerate – has moved forward two days to 1 August, the earliest date ever recorded.
It is the date when humanity’s annual demand on nature exceeds what Earth can regenerate over the entire year. It is calculated by the Global Footprint Network and the World Wide Fund for Nature (WWF).
The increasing burden on natural resources:
Currently, humankind is using 170% of the world’s natural output. That means we are using up the equivalent of 1.7 Earths. And, according to the Global Footprint Network, we’re on track to be using two Earths by the end of the 21st Century.
In 1963, we used 78% of the Earth’s biocapacity. However, by the early 1970s, we began to consume more energy than the planet could produce. By 10 years ago, we were using 144% of the Earth’s biocapacity.
The two greatest contributing factors to humanity’s Ecological Footprint are carbon emissions, which accounts for 60%, and food, 26%.
If we cut our carbon emissions by half, according to the Global Footprint Network, Earth Overshoot Day would come 89 days later in the year.
If we cut food waste in half worldwide, we could move the date back 11 days. By eating less protein-intensive food, we could move it back 31 days.
Earth Overshoot Day is calculated by dividing the world biocapacity (the number of natural resources generated by Earth that year), by the world ecological footprint (humanity’s consumption of Earth’s natural resources for that year), and multiplying by 365, the number of days in one Gregorian common calendar year.
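That formula can be translated directly into a short script. The sketch below is illustrative only: the input numbers are rounded placeholders rather than official Global Footprint Network figures, and the function name is invented for this example.

```python
from datetime import date, timedelta

def overshoot_day(world_biocapacity: float, world_footprint: float, year: int) -> date:
    """Estimate the date by which annual demand exceeds what Earth can regenerate."""
    day_of_year = (world_biocapacity / world_footprint) * 365
    return date(year, 1, 1) + timedelta(days=int(day_of_year) - 1)

# With demand running at roughly 1.7 times biocapacity, as cited above:
print(overshoot_day(world_biocapacity=1.0, world_footprint=1.7, year=2018))  # falls in early August
```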
The Global Footprint Network is an international nonprofit organization founded in 2003 to enable a sustainable future where all people have an opportunity to thrive within the means of one planet.
It develops and promotes tools for advancing sustainability, including ecological footprint and biocapacity, which measure the amount of resources we use and how much we have. These tools aim at bringing ecological limits to the center of decision-making. |
To work with cables and electrical wires, it is best to know the color code of the conductors indicating their functions, the meaning of electrical cable designations, and the cross-section table. Explanations:
The wires are protected by a sheath whose color indicates the function: blue for the neutral, two-color yellow / green for the earth and any other color (but mostly red) for the phase.
There are two types of wires: rigid (denoted U) and semi-rigid, made up of several strands (denoted R).
Their designation makes it possible to recognize them.
Example: H 07 V-U. 1.5 means that the conductor is harmonized (H), that it is rated for a maximum voltage of 450/750 volts (07), that the insulation is PVC (V), that the conductor is rigid (U) and that its cross-section is 1.5 mm².
Cables are multi-conductor assemblies composed of several wires of different colors grouped under an insulating envelope. Their designation makes it possible to know their characteristics.
Example: 3G1.5 means that the cable has three conductors, including one ground wire (the “G”), each with a cross-section of 1.5 mm².
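As an illustrative aside, both naming patterns above are regular enough to decode with a small script. The sketch below handles only the two example formats shown here; the function name and the mappings it knows about are assumptions for demonstration, not a complete catalogue of cable designations.

```python
import re

def describe(designation: str) -> str:
    """Decode the two example designation styles discussed above (illustrative only)."""
    compact = designation.replace(" ", "")
    multi = re.fullmatch(r"(\d+)G([\d.]+)", compact)
    if multi:
        wires, section = multi.groups()
        return f"{wires} conductors including one ground wire, {section} mm² each"
    single = re.fullmatch(r"H(\d{2})([A-Z])-([A-Z])\.?([\d.]+)", compact)
    if single:
        voltage_code, insulation, rigidity, section = single.groups()
        insulation_name = "PVC" if insulation == "V" else insulation
        conductor = "rigid" if rigidity == "U" else "semi-rigid (stranded)"
        return (f"harmonized wire, voltage class {voltage_code}, "
                f"{insulation_name} insulation, {conductor}, {section} mm²")
    return "unrecognized designation"

print(describe("3G1.5"))
print(describe("H 07 V-U. 1.5"))
```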
Table of sections
| Cross-section | Circuits served |
|---|---|
| 1.5 mm² | 8 lighting points |
| | 8 switched (controlled) outlets |
| | 5 direct 16 A sockets |
| | Electric radiator < 2,250 W |
| 2.5 mm² | 8 direct 16 A sockets |
| | Specialized circuits (washing machine, oven, freezer...) |
| | Non-instantaneous electric water heater |
| | Electric radiator < 4,500 W |
| 6 mm² | Cooker, hob |
Approximately 500,000 people seek medical attention for burns every year in the United States, 40,000 of whom require hospitalization. Unlike other types of injury, burn wounds induce metabolic and inflammatory alterations that predispose the patient to various complications. Infection is the most common cause of morbidity and mortality in this population, with almost 61% of deaths being caused by infection.
The skin, one of the largest organs in the body, performs numerous vital functions, including fluid homeostasis, thermoregulation, immunologic functions, neurosensory functions, and metabolic functions (eg, vitamin D synthesis). The skin also provides primary protection against infection by acting as a physical barrier. When this barrier is damaged, pathogens can directly infiltrate the body, resulting in infection.
In addition to the nature and extent of the thermal injury influencing infections, the type and quantity of microorganisms that colonize the burn wound appear to influence the risk of invasive wound infection. The pathogens that infect the wound are primarily gram-positive bacteria such as methicillin-resistant Staphylococcus aureus (MRSA) and gram-negative bacteria such as Acinetobacter baumannii-calcoaceticus complex, Pseudomonas aeruginosa, and Klebsiella species. These latter pathogens are notable for their increasing resistance to a broad array of antimicrobial agents.
Fungal pathogens can also infect burn wounds. These infections occur more frequently after the use of broad-spectrum antibiotics. Among the fungal pathogens, Candida albicans is the most common cause of infection.
At the beginning of the 21st century, the Centre of Fire Statistics estimated that the average number of fires worldwide was 7-8 million, resulting in 70,000-80,000 fire deaths and 500,000-800,000 fire injuries. According to the National Burn Repository’s 10-year rolling data collection from January 1, 1996, through June 30, 2006, the mortality rate associated with burns was 5.3% overall, with older age and higher-percentage total body surface area (TBSA) burned correlating with higher mortality rates.
Diagnosis of wound infection should focus on a careful physical examination that is performed frequently by personnel trained in the management of burns.
Laboratory tests or changes in laboratory values such as white blood cell (WBC) count, neutrophil percentage, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) level are of low yield in detecting or predicting burn infections because of the inflammatory response associated with the burn itself.
Laboratory examinations are useful for the initial risk assessment. Low prealbumin levels (100-150 mg/L) in burned patients are associated with a higher incidence of sepsis and organ dysfunction, lengthier stays, decreased ability of wound healing, and a higher mortality rate. In patients with suspected wound infections, procalcitonin (PCT) levels of 0.56 ng/mL have a reported sensitivity of 75% and a specificity of 80% when compared with quantitative swab culture. Although these levels cannot be considered diagnostic, they should prompt the physician to start searching for an infectious source.
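For context on what those accuracy figures mean at the bedside, the post-test probability implied by a positive PCT result depends on how common wound infection is in the patients being tested. The short sketch below applies Bayes' rule using the reported sensitivity and specificity together with an assumed 30% pre-test probability; that prevalence value is a hypothetical illustration, not a figure from the cited study.

```python
def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(infection | positive test), computed with Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Reported PCT performance at the 0.56 ng/mL cut-off, with an assumed 30% prevalence
ppv = positive_predictive_value(sensitivity=0.75, specificity=0.80, prevalence=0.30)
print(f"Probability of infection after a positive result: {ppv:.0%}")  # about 62%
```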
Diagnosis of a burn wound infection relies on clinical examination and culture data, including the following:
- Quantitative biopsy can be used to confirm infection but is not reliable. This procedure is useful in identifying the infecting pathogen.
- Quantitative swab is of limited value but may aid in identifying the infecting pathogen.
- Tissue histopathology allows for quantification and evaluation of infection depth and involvement of non-burned skin.
The use of routine wound cultures as part of surveillance procedures has been proposed to provide early identification of organisms colonizing the wound, to monitor response to therapy, to guide empiric therapy, and to evaluate for nosocomial transmission.
Multiple biopsy samples from several areas of the burn wound should be obtained and sent for histopathology and microbiological workup of the pathogens and their resistance profiles.
After the wound is cleaned with isopropyl alcohol, two parallel incisions 1-2 cm in length and 1.5 cm apart are made in the skin, deep enough to obtain a portion of the underlying fat.
The goal of medical care is to prevent infection. Early excision and grafting is the current standard of care and the primary surgical method for reducing infection risk and length of hospital stay and increasing graft take. A 2015 meta-analysis of all available randomized controlled studies found that early excision reduced mortality rates in all burned patients who did not have an inhalation injury.
Wound care should be directed at thoroughly removing devitalized tissue, debris, and previously placed topical antimicrobials. A broad-spectrum surgical antimicrobial topical scrub such as chlorhexidine gluconate should be used along with adequate analgesia and preemptive anxiolytic in order to permit adequate wound care.
For analgesia, the use of opiates is debated, as these medications induce tolerance and addiction and may promote pain, a phenomenon known as opioid-induced hyperalgesia. Multimodal pain management should therefore be considered. Opioid-sparing agents include acetaminophen, ketamine, and alpha-adrenergic agonists such as clonidine and dexmedetomidine. Nonsteroidal anti-inflammatory agents should be avoided, as they impair wound healing and increase the risk of acute kidney injury and bleeding.
When an infection is identified, antimicrobial therapy should be directed at the pathogen recovered on culture. In the setting of invasive infection or evidence of sepsis, empiric therapy should be initiated. The incidence of bacteremia in critically ill adult patients with burn wounds is reported to be 4%. The most frequent pathogens in North American burn centers include S aureus and P aeruginosa; therefore, these microorganisms should be considered when choosing empiric therapy.
Antimicrobial-resistant bacterial infection among burn patients is associated with prolonged stays in the hospital. Isolates recovered after 7, 14, and 21 days of hospitalization are considerably more likely to be resistant to the antibiotics tested compared with admission-day isolates. If a multidrug-resistant pathogen is isolated, colistin should be considered. An evaluation of the antimicrobial activities of colistin against gram-negative bacterial isolates worldwide demonstrated that this medication is still effective, with stable resistance levels.
Hyperglycemia is associated with an increase in inflammatory response and occurs in burned patients because of the increased rate of glucose production and impaired tissue glucose extraction. Tight glucose control has been suggested to improve survival and to reduce the sepsis risk.
Propranolol has been studied for its potential benefits in burns. It is suggested that this drug may restore glycemic control, reduce peripheral lipolysis, and enhance the immune response to sepsis by modulation of the catecholamine release during severe burn injury.
The scientific revolution is a radical change in the process and content of scientific knowledge associated with the transition to new theoretical and methodological prerequisites, a new system of fundamental concepts and methods, a new scientific picture of the world, as well as qualitative transformations of the material means of observation and experimentation, the interpretation of empirical data, and the ideals of explanation, validity and organization of knowledge. Historical examples of the scientific revolution include the transition from medieval views of the cosmos to the mechanistic picture of the world on the basis of the mathematical physics of the 16th to 18th centuries, the transition to the evolutionary theory of the origin and development of biological species, the emergence of an electrodynamic picture of the world (19th century), and the creation of quantum relativistic physics at the beginning of the 20th century. Scientific revolutions differ in the depth and breadth of their coverage of the structural elements of science and in the type of changes in its conceptual, methodological and cultural bases. The structure of the foundations of science includes the ideals and norms of research (evidence and validity of knowledge, the norms of explanation and description, the construction and organization of knowledge), the scientific picture of the world and the philosophical foundations of science. Corresponding to this structure, the main types of scientific revolutions are distinguished:
- the restructuring of the world picture without a radical change in the ideals and norms of research and the philosophical foundations of science (for example, the introduction of atomism into concepts of chemical processes in the early 19th century, the transition of modern elementary particle physics to synthetic quark models etc.);
- a change in the scientific picture of the world, accompanied by a partial or radical replacement of the ideals and norms of scientific research, as well as its philosophical foundations (for example, the emergence of relativistic quantum physics or the synergetic model of cosmic evolution).
The scientific revolution is a complicated step-by-step process, having a wide range of internal and external, i.e. socio-cultural, historical, determinants, interacting with each other. Among the “internal” factors of the scientific revolution are: the accumulation of anomalies, facts that are not explained in the conceptual and methodological framework of a particular scientific discipline; antinomies that arise in solving problems that require the restructuring of the conceptual foundations of the theory (for example, the paradox of infinite values that arises when explaining the model of an absolutely “blackbody” within the framework of the classical theory of radiation); improvement of means and methods of research (new instrumentation, new mathematical models, etc.), expanding the range of objects under study; the emergence of alternative theoretical systems that compete with each other in terms of their ability to increase the “empirical content” of science, i.e. the field of facts explained and predicted by it.
The “external” determination of the scientific revolution includes a philosophical rethinking of the scientific picture of the world, a reassessment of the leading cognitive values and ideals of cognition and their place in culture, as well as the processes of changing scientific leaders, the interaction of science with other social institutions, changing the relationships in social production structures, scientific and technical processes, highlighting the fundamentally new needs of people (economic, political, spiritual). Thus, the revolutionary nature of the changes in science can be judged on the basis of a complex “multidimensional” analysis, the object of which is science in the unity of its various dimensions: object-logical, sociological, personal-psychological, institutional. The principles of such analysis are determined by the conceptual apparatus of the epistemological theory, within the framework of which the basic ideas about scientific rationality and its historical development are formulated. The ideas about the scientific revolution vary depending on the choice of such apparatus.
For example, within the framework of the neo-positivist philosophy of science, the concept of the scientific revolution appears only as a methodological metaphor, expressing the conditional division of the essentially cumulative growth of scientific knowledge into periods dominated by certain inductive generalizations that act as “laws of nature”. The transition to “laws” of a higher level and the replacement of previous generalizations take place according to the same methodological canons; knowledge that has already been certified remains valid in any subsequent systematization, possibly as a limiting case (for example, the laws of classical mechanics are treated as limiting cases of relativistic ones, etc.). The concept of the scientific revolution plays the same “metaphorical role” in “critical rationalism” (K. Popper): revolutions in science take place constantly, and every refutation of an accepted hypothesis and promotion of a new “bold” (i.e., even more refutable) one can in principle be considered a scientific revolution. Therefore, the scientific revolution in a critical-rationalist interpretation is a fact of changing scientific (primarily fundamental) theories, viewed through the prism of its logical-methodological (rational) reconstruction, but not an event in the real history of science and culture. The same is the basis for I. Lakatos’s understanding of the scientific revolution. Only “retroactively”, applying the scheme of rational reconstruction to past events, can the historian decide whether a given shift was a transition to a more progressive program (one increasing its empirical content thanks to its built-in heuristic potential) or the result of “irrational” decisions (for example, erroneous evaluation of the program by the scientific community). In science, various programs and methods constantly compete, and those that come to the fore for a time are then pushed out by more successful competitors or substantially reconstructed.
The concept of scientific revolution is also metaphorical in historically oriented concepts of science (T. Kuhn, S. Toulmin), but the meaning of the metaphor here is different: it means a leap across the gulf between “incommensurable” paradigms, accomplished as a gestalt switch in the minds of members of scientific communities. In these concepts, the focus is on the psychological and sociological aspects of conceptual change; the possibility of a “rational reconstruction” of the scientific revolution is either denied or allowed at the expense of such an interpretation of scientific rationality, in which the latter is identified with a set of successful decisions of the scientific elite. In discussions on the problems of scientific revolutions in the late 20th century, a stable trend emerged of interdisciplinary and complex research on scientific revolutions as an object not only of philosophical and methodological analysis but also of historico-scientific and cultural analysis.
Pegasus Satellite was Lofted into Space in 1965
Marshall Space Flight Center Historian
The history of the Marshall Space Flight Center in the 1960s is clearly associated with building the Saturn V moon rocket. Less well-known, however, is the Center's early work in designing scientific payloads.
The Pegasus satellite was named for the winged horse of Greek mythology and was lofted into space by a Marshall-built Saturn I rocket on Feb. 16, 1965. Like its namesake, the Pegasus I satellite was notable for its wings; however, the 96-foot-long, 14-foot-wide wings were not for flying. They carried 208 panels to report punctures by potentially hazardous micrometeoroids at high altitudes where the manned Apollo missions would orbit. Spacecraft designers were keenly interested in the information because the Apollo spacecraft and crew were in jeopardy if tiny particles could puncture a spacecraft skin.
Micrometeoroid detectors and sample protective shields were mounted on the satellite's wing-like solar cell arrays. The sensors successfully measured the frequency, size, direction and penetration of scores of micrometeoroid impacts.
The Marshall Center was responsible for the design, production and operation of Pegasus I and two additional Pegasus satellites which were also launched by Saturn I rockets in 1965. At launch, an Apollo command and service module boilerplate and launch escape system tower were atop the Saturn I, with Pegasus I folded inside the service module. After first-stage separation and second-stage ignition, the launch escape system was jettisoned. When the second stage attained orbit, the 10,000-pound Apollo boilerplate command and service modules were jettisoned into a separate orbit. Then a motor-driven device extended the wing-like panels on the Pegasus to a span of 96 feet. Pegasus I remained attached to the Saturn I's second stage as planned.
A television camera, mounted on the interior of the service module adapter, provided pictures of the satellite deploying in space and, as one historian has written, "captured a vision of the eerie silent wings of Pegasus I as they haltingly deployed." The satellite exposed more than 2,300 square feet of instrumented surface, with thickness varying up to 16/1000 of an inch.
Ernst Stuhlinger, then director of the Center's Research Projects Laboratory, noted that all three Pegasus missions provided more than data on micrometeoroid penetration. Scientists also were able to gather data regarding gyroscopic motion and orbital characteristics of rigid bodies in space, lifetimes of electronic components in the space environment, and thermal control systems and the degrading effects of space on thermal control coatings. Space historian Roger Bilstein reported that for physicists the Pegasus missions provided additional knowledge about the radiation environments of space, the Van Allen belts and other phenomena.
Researchers find declining nitrogen availability in a nitrogen rich world
Since the mid-20th century, research and discussion has focused on the negative effects of excess nitrogen on terrestrial and aquatic ecosystems. However, new evidence indicates that the world is now experiencing a dual trajectory in nitrogen availability with many areas experiencing a hockey-stick shaped decline in the availability of nitrogen. In a new review paper in the journal Science, researchers have described the causes for these declines and the consequences on how ecosystems function.
“There is both too much nitrogen and too little nitrogen on Earth at the same time,” said Rachel Mason, lead author on the paper and former postdoctoral scholar at the National Socio-environmental Synthesis Center.
Over the last century, humans have more than doubled the total global supply of reactive nitrogen through industrial and agricultural activities. This nitrogen becomes concentrated in streams, inland lakes, and coastal bodies of water, sometimes resulting in eutrophication, low-oxygen dead-zones, and harmful algal blooms. These negative impacts of excess nitrogen have led scientists to study nitrogen as a pollutant. However, rising carbon dioxide and other global changes have increased demand for nitrogen by plants and microbes. In many areas of the world that are not subject to excessive inputs of nitrogen from people, long-term records demonstrate that nitrogen availability is declining, with important consequences for plant and animal growth.
Nitrogen is an essential element in proteins and as such its availability is critical to the growth of plants and the animals that eat them. Gardens, forests, and fisheries are almost all more productive when they are fertilized with moderate amounts of nitrogen. If plant nitrogen becomes less available, plants grow more slowly and their leaves are less nutritious to insects, potentially reducing growth and reproduction, not only of insects, but also the birds and bats that feed on them.
“When nitrogen is less available, every living thing holds on to the element for longer, slowing the flow of nitrogen from one organism to another through the food chain. This is why we can say that the nitrogen cycle is slowing down,” said Andrew Elmore, senior author on the paper and a professor of landscape ecology at the University of Maryland Center for Environmental Science and at the National Socio-environmental Synthesis Center.
Researchers reviewed long-term, global and regional studies and found evidence of declining nitrogen availability. For example, grasslands in central North America have been experiencing declining nitrogen availability for a hundred years, and cattle grazing these areas have had less protein in their diets over time. Meanwhile, many forests in North America and Europe have been experiencing nutritional declines for several decades or longer.
These declines are likely caused by multiple environmental changes, one being elevated atmospheric carbon dioxide levels. Atmospheric carbon dioxide has reached its highest level in millions of years, and terrestrial plants are exposed to about 50% more of this essential resource than just 150 years ago. Elevated atmospheric carbon dioxide fertilizes plants, allowing faster growth, but diluting plant nitrogen in the process, leading to a cascade of effects that lower the availability of nitrogen. On top of increasing atmospheric carbon dioxide, warming and disturbances, including wildfire, can also reduce availability over time.
Declining nitrogen availability is also likely constraining the ability of plants to remove carbon dioxide from the atmosphere. Currently global plant biomass stores nearly as much carbon as is contained in the atmosphere, and biomass carbon storage increases each year as carbon dioxide levels increase. However, declining nitrogen availability jeopardizes the annual increase in plant carbon storage by imposing limitations to plant growth. Therefore, climate change models that currently attempt to estimate carbon stored in biomass, including trends over time, need to account for nitrogen availability.
“The strong indications of declining nitrogen availability in many places and contexts is another important reason to rapidly reduce our reliance on fossil fuels,” said Elmore. “Additional management responses that could increase nitrogen availability over large regions are likely to be controversial, but are clearly an important area to be studied.”
In the meantime, the review paper recommends that data be assembled into an annual state-of-the-nitrogen-cycle report, or a global map of changing nitrogen availability, which would represent a comprehensive resource for scientists, managers, and policy-makers.
Anything and everything that can be produced and quantified is referred to as data. In simple words, granular information is data. Automobiles and machinery have long run on oil extracted from the earth; in the Internet age, devices, machines and all mundane activities will be driven by data.
All data that exists can be classified into three forms:
- Structured Data
- Semi-structured Data
- Unstructured Data
Structured Data – Structured data conforms to a standardized, predefined format. Examples: sensor data, machine-generated data, etc.
Semi-structured Data – Semi-structured data does not exist in a standard form, but a structured form can be derived from it with little effort. Examples: JSON, XML, etc.
Unstructured Data – Unstructured data is not organized in any predefined way, but it may contain information that can be extracted. Examples: social media data, human languages, etc.
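To make the three forms concrete, here is a minimal, hypothetical Python sketch (not taken from the original article; the sensor values, field names and regular expression are illustrative assumptions) showing the same temperature reading represented in each form:

```python
import json
import re

# Structured: fixed schema, e.g. a row in a relational table
structured_row = {"sensor_id": 42, "timestamp": "2024-01-01T12:00:00", "temp_c": 21.5}

# Semi-structured: JSON carries its own (flexible) structure and can be parsed
semi_structured = '{"sensor": {"id": 42}, "reading": {"temp_c": 21.5}}'
parsed = json.loads(semi_structured)          # little effort to derive a structured form
temp_from_json = parsed["reading"]["temp_c"]

# Unstructured: free text; useful values must be extracted, e.g. with a regex
unstructured = "Sensor 42 reported a temperature of 21.5 C at noon."
match = re.search(r"temperature of ([\d.]+)", unstructured)
temp_from_text = float(match.group(1)) if match else None

print(structured_row["temp_c"], temp_from_json, temp_from_text)
```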
Most global tech giants operate on data generated by their users. Google, Amazon, Facebook, Uber, et al. come under the same umbrella. The insights derived from structured and semi-structured data can help us in decision making. The magnitude and scale at which these companies generate data is astounding.

Databases play a very important role in storing data. But traditional databases are no longer a viable choice for storing data at this scale in today's fast-moving world. New-age file systems and infrastructure have emerged to cater to the demands of the ever-expanding Internet space.

In the human world, voice, speech, text and even walking speed can all be classified as unstructured data, since a lot of insight can be derived from them. A mobile device per individual is pretty much sufficient to analyse the behavior of a sizable population in a region.

Data collected from a population over a reasonably long period can be used to derive patterns about that population. Hence, data is the driving force that will fuel innovation and the economy from here on.
Compressed Zip File
Be sure that you have an application to open this file type before downloading and/or purchasing. How to unzip files.
This phonics blending interactive activity allows students to click or select and drag letters with ease to help them blend sounds to make words. Simply select a letter...then click on it to drag and move it. This interactive activity was specifically designed as a PowerPoint. The sounds in this PowerPoint activity correspond to Theme 2 of the Houghton Mifflin Reading Series. Theme 2 focuses on the short o, e and u sounds. I have created moveable letters that students can click and drag (using the mouse) around the o, e or u to form words. After dragging a letter to form a new word, I have the student sound the word out. This phonics activity helps students focus on the short vowels as they blend beginning sounds and say the ending sound that goes with the word.
*This PowerPoint uses the VBA code to move letters during the slide show using the mouse. Once you open the file, please select "Enable Content" and you will be able to use and edit it.
In my class I have student numbers. I have sticks with each of the student's numbers on them. I remind the students of the short vowel sound the letter u makes. I choose the ending sound or letter, for example "g". I drag the letter g down to the right of the letter "u" to make the ending sound of "ug". I ask the students what sound the letters u and g make when they are put together. Then, I pull a stick and call a student up to drag down a letter that goes in front of the letters u and g. The student then sounds the word out. I have the class repeat the word and then I pull another stick.
There are 6 pages included in this PowerPoint file. Each one corresponds to the stories in Theme 2 of the Houghton Mifflin Reading Series. The last page has each letter of the alphabet. If your students need more of a challenge, you can select a letter or letters from page 6 and then copy and paste them onto another page. This way the students can make more words. So, if I wanted my students to practice making words with the short vowel u and the letter m as the ending sound, I would copy and paste m from page 6 and then pull a stick to have a student drag a letter to form a new word with the ending sound m.
*Also included are 2 review pages included that review the short a and i sounds from Theme 1.
*I have listed the letters and some sample words to have the students create in each page in small letters under the title (to the upper right of the page). |
Language awareness – sometimes referred to as the language of learning – refers to how we use words in our learning materials. We research language carefully, so that our resources are as accessible as possible.
In a nutshell*…
Why is language awareness important?
Using language that makes new and complex ideas easier to understand is incredibly important when writing textbooks. It is even more crucial when those textbooks are for international curriculums, such as Cambridge IGCSE™ or the International Baccalaureate Diploma programme, and are therefore likely to be read by learners of English as a second language (ESL) or an additional language (EAL).
We recently conducted a piece of research with science teachers on our exclusive research community. We asked:
Which area represents the biggest barrier to your students when it comes to language?
52.8% of teachers answered that interpreting questions was the biggest language barrier to students’ learning: more than all the other options put together.
This reveals just how important it is to frame questions in an understandable way. Students may know the answer, but if they don’t understand what the question is asking, they struggle.
What does language awareness look like?
We work carefully to use language that enables learners to understand new and challenging concepts. This includes clear definitions of key words alongside the text on the page, as well as glossaries of not only subject words, but command words too. For example, words such as ‘analyse’, ‘justify’ or ‘contrast’. This ensures that students are clear on what questions are asking them to do.
In addition, we always give learners opportunities to practise their vocabulary. Studies have shown that it takes 15–20 meaningful exposures to a new word or phrase before it becomes part of a learner's active vocabulary. That's why, when learners see an important new word in our books and digital resources, it is repeated close by in the text to give context, and new words are also often used in student activities.
* ‘In a nutshell’ is an English idiom that means to explain something briefly. An idiom is a phrase that is not directly translatable, but whose meaning is generally understood (if you are a first language speaker). This makes them difficult to understand from a second language perspective. But it also makes them attractive to learn because they feel like a secret language!
An idiom is an example of language we would not use in resources designed for second language speakers – at least, not without an explanation! For more fun with idioms, try our interactive quiz. |
Word Work is such an important skill to practice in the primary classroom. Word Work, or word study, is often considered “spelling practice” in classrooms. While word study does involve learning to spell words, it is not your traditional spelling practice of simply memorizing words for a spelling test. Word Work involves learning the patterns and meanings behind our written language. Whether or not you do “spelling” in your classroom is up to you and your school district. My district uses spelling words and gives spelling tests. My goal is to make them the most useful for my students. Instead of focusing on memorizing the spelling words each week, and then forgetting them right away, I want my students to understand the patterns in our spelling words. Our spelling words are related to our phonics skill for the week, plus a couple of high frequency words. We practice these words during our independent Word Work stations, in our whole group instruction, and small group instruction. I created these Editable Word Work activities to use each week with my students.
These Word Work Activities allow you to use ANY words that you would like for your students to practice. Your students can practice spelling words, sight words, or word families. The possibilities are endless. Simply type in your list of words and 12 Word Work Activities are automatically generated for you to use in whole group, small groups, independent word work centers, or homework. See how it works in the video below.
What are the Editable Word Work activities that are included?
There are 12 Word Work activities that can be used with 10 words and 12 of the same activities that can be used with 5 words. This allows for easy differentiation for your students who are not ready to work with more than 5 words at a time. You will find the following activities:
•Spin & Write
•Using Different Tools
•Roll & Color
•Roll & Write
•Roll & Read
•Tic Tac Toe
•Secret Code Words
“This is a brilliant idea!! Type the list once and instantly get 12 differentiated activities. I can’t wait to use this in my classroom. This has to be one of my all time favorite downloads.” -Darlene
This is truly a huge time saver in my classroom because I can use any list that I want to use for my students. It is easy to differentiate for my students to work on sight words, word families, or spelling words. I also included activities that can be used with 10 or 5 words, which allows me to cut my word list in half for struggling students.
You can learn more about these Editable Word Work activities by clicking on the picture below. |
The quality of a machined surface is characterized by the accuracy of its manufacture with respect to the dimensions specified by the designer. Every machining operation leaves characteristic evidence on the machined surface. This evidence takes the form of finely spaced micro-irregularities left by the cutting tool. Each type of cutting tool leaves its own individual pattern, which can therefore be identified. This pattern is known as surface finish or surface roughness.

1. Roughness:
Roughness consists of the surface irregularities which result from the various machining processes. These irregularities combine to form surface texture.
2. Roughness Height:
It is the height of the irregularities with respect to a reference line. It is measured in millimeters, microns or microinches. It is also known as the height of unevenness.

3. Roughness Width:
The roughness width is the distance parallel to the nominal surface between successive peaks or ridges which constitute the predominant pattern of the roughness. It is measured in millimeters.

4. Roughness Width Cut Off:
Roughness width cut off is the greatest spacing of respective surface irregularities to be included in the measurement of the average roughness height. It should always be greater than the roughness width in order to obtain the total roughness height rating.

5. Lay:
Lay represents the direction of the predominant surface pattern produced, and it reflects the machining operation used to produce it.

6. Waviness:
This refers to the irregularities which fall outside the roughness width cut off values. Waviness is the widely spaced component of the surface texture. It may be the result of workpiece or tool deflection during machining, vibrations or tool runout.
7. Waviness Height:
Waviness height is the peak-to-valley distance of the surface profile, measured in millimeters.
8. Arithmetic Average (AA):
A close approximation of the arithmetic average roughness height can be calculated from the profile chart of the surface. Averaging from a mean centerline may also be performed automatically by electronic instruments using appropriate circuitry, through a meter or chart recorder. If X is the measured value from the profilometer, then the AA value can be calculated as shown below.
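The equation itself is not reproduced in this copy, so the following is a standard form offered as a hedged reconstruction; X_i denotes the i-th profilometer reading measured from the mean centerline and n the number of readings (symbol names are assumptions, not taken from the original):

$$ AA = \frac{1}{n}\sum_{i=1}^{n}\lvert X_i \rvert $$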
9. Root Mean Square (rms):
The rms value can be calculated as shown below. Its numerical value is about 11% higher than that of AA.
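The original expression is likewise missing; a standard form, using the same assumed symbols, is:

$$ rms = \sqrt{\frac{1}{n}\sum_{i=1}^{n} X_i^{2}} $$

For a sinusoidal profile this works out to roughly 1.11 times the AA value, consistent with the "about 11% higher" figure quoted above.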
SURFACE FINISH IN MACHINING
The resultant roughness produced by a machining process can be thought of as the combination of two independent quantities:
1. Ideal roughness, and
2. Natural roughness
Ideal surface roughness is a function of only feed and geometry. It represents the best possible finish which can be obtained for a given tool shape and feed. It can be achieved only if the built-up edge, chatter and inaccuracies in the machine tool movements are eliminated completely. For a sharp tool without nose radius, the maximum height of unevenness is given by:
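The expression is missing from this copy; the commonly quoted textbook form, offered as a hedged reconstruction (with f the feed per revolution, κ_r the major cutting edge angle and κ_r' the end cutting edge angle — symbols assumed, not from the original), is:

$$ R_{max} = \frac{f}{\cot\kappa_r + \cot\kappa_r'} $$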
The surface roughness value is given by:
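For the triangular profile this produces, the arithmetic mean value is one quarter of the peak-to-valley height, so a hedged reconstruction of the missing expression is:

$$ R_a = \frac{R_{max}}{4} = \frac{f}{4\,(\cot\kappa_r + \cot\kappa_r')} $$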
Practical cutting tools are usually provided with a rounded corner, and the figure below shows the surface produced by such a tool under ideal conditions. It can be shown that the roughness value is closely related to the feed and corner radius by the following expression, where r is the corner radius:
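The expression referred to here is also missing; the form usually quoted for ideal roughness with a corner radius r and feed f, again offered as a hedged reconstruction of a standard result, is:

$$ R_a \approx \frac{f^{2}}{18\sqrt{3}\,r} \approx 0.032\,\frac{f^{2}}{r}, \qquad R_{max} \approx \frac{f^{2}}{8r} $$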
In practice, it is not usually possible to achieve conditions such as those described above, and normally the natural surface roughness forms a large proportion of the actual roughness. One of the main factors contributing to natural roughness is the occurrence of a built-up edge. Thus, the larger the built-up edge, the rougher the surface produced, and factors tending to reduce chip-tool friction and to eliminate or reduce the built-up edge will give improved surface finish.
Factors Affecting the Surface Finish
Whenever two machined surfaces come in contact with one another, the quality of the mating parts plays an important role in their performance and wear. The height, shape, arrangement and direction of these surface irregularities on the workpiece depend upon a number of factors, such as:
A) The machining variables which include
a) cutting speed
b) feed, and
c) depth of cut.
B) The tool geometry
The design and geometry of the cutting tool also play a vital role in determining the quality of the surface. Some geometric factors which affect the achieved surface finish include:
a) nose radius
b) rake angle
c) side cutting edge angle, and
d) cutting edge.
C) Workpiece and tool material combination and their mechanical properties
D) Quality and type of the machine tool used,
E) Auxiliary tooling, and lubricant used, and
F) Vibrations between the workpiece, machine tool and cutting tool. |
Most scientists share the gripping compulsion to place things into tidy categories. Fish ecologists are no exception.
Grouping species helps improve our understanding and management of aquatic ecosystems. Because local fish diversity can be quite high, predicting responses of individual species to environmental change or nonnative introductions can be difficult. Grouping fishes based on biological or ecological similarities can help reduce that complexity because species in the same group often respond similarly to same threats.
Didn’t we already have groups?
By the time we finish elementary school, we’re all familiar with the hierarchical Linnaean classification system that groups organisms by their common ancestry. Of course, this system is one of the greatest contributions to the natural sciences, and is the basis for phylogenetic analysis.
However, closely-related species often perform starkly different ecological functions. For example, darters are much more closely related to walleye (they’re in the same family, Percidae) than they are to sculpins. Yet, darters eat, behave and function much more similarly to sculpins than to walleye.
Thus, in ecological analyses, phylogenetic groupings may not always be as effective as other types of groups.
Redfin darter (left) and banded sculpin (right) are small benthic insectivores that inhabit riffle habitats; walleye (center) are large pelagic predators.
What types of groups?
Fishes are often grouped according to guilds—species that exploit the same resources typically in similar ways. Guilds can be based on any ecological requirement (some of these include flow, habitat type, etc…). The two most commonly applied guilds are trophic (feeding) and reproductive.
Trophic. Trophic guilds are classified by what fishes eat and are subdivided by how they eat it (modes of feeding). A well-accepted classification system* of freshwater fishes recognizes five trophic guilds and 26 feeding modes. For example, there are 10 different feeding modes that invertivores use. A few of these include surface drift feeding, grazing and digging.
Carmine shiners are drift-feeding insectivores, and can be affected by riparian deforestation.
Reproductive. Reproductive guilds are firstly classified by what is done after eggs are laid—they are either guarded, unguarded, or born on/inside the fish. Within this scheme, classifications are subdivided based on the substrate needed for spawning**. A few examples include lithophils (requiring gravel), psammophils (requiring sand), and speleophils (requiring cavities).
How do fish ecologists use groups?
Human impacts. Groupings have greatly improved our understanding of the impacts of human development on fishes. In fact, groupings are the basis of community-level bioassessment. For instance, abundances of lithophilic fishes predictably decrease with increasing sedimentation from human development.
Simple lithophils like this blacktip jumprock are vulnerable to sedimentation. Photo by Brandon Peoples.
Indirect effects. Groupings help us to conceptually simplify food webs to improve fisheries management. Many fisheries-related problems, especially in smaller systems, are related to unbalanced food webs. Understanding which group’s biomass needs to be increased or decreased can help us solve the problem.
Improving our groups
Groupings are only as good as the information used to build them. Oftentimes information on a species’ feeding or reproduction only comes from one study. However, species traits are plastic—they can vary through space and time. For widely-distributed species, scant life history information may lead to poor classifications.
As we continue to learn more about the basic biology of fishes, our groupings will become more precise so that we can better predict fishes’ response to environmental change and nonnative introductions.
*Goldstein, R. M. and T. P. Simon. 1999. Toward a united definition of guild structure for feeding ecology of North American freshwater fishes. Pages 123-220 in T. P. Simon, editor. Assessing the Sustainability and Biological Integrity of Water Resources using Fish Communities. CRC Press, Boca Raton.
**Balon, E. K. 1975. Reproductive guilds of fishes - proposal and definition. Journal of the Fisheries Research Board of Canada 32:821-864. |
The material, constructed of two different compounds, might one day allow computers to use the magnetic spin of electrons, in addition to their charge, for computation. A host of innovations could result, including fast memory devices that use considerably less power than conventional systems and still retain data when the power is off. The team's effort not only demonstrates that the custom-made material's properties can be engineered precisely, but in creating a virtually perfect sample of the material, the team also has revealed a fundamental characteristic of devices that can be made from it.
Manganite oxide lattices (purple) doped with lanthanum (magenta) and strontium (green) have potential for use in spintronic memory devices, but their usual disorderly arrangement (left) makes it difficult to explore their properties. The ANL/NIST team's use of a novel orderly lattice (right) allowed them to measure some of the material's fundamental characteristics.
Team members from ANL began by doing something that had never been done before: they engineered a highly ordered version of a magnetic oxide compound that naturally contains two randomly distributed elements, lanthanum and strontium. Stronger magnetic properties are found in those places in the lattice where extra lanthanum atoms are added. Precise placement of the strontium and lanthanum within the lattice can enable understanding of what is needed to harness the interaction of the magnetic forces among the layers for memory storage applications, but such control has been elusive up to this point.
"These oxides are physically messy to work with, and until very recently, it was not possible to control the local atomic structure so precisely," says Brian Kirby, a physicist at the NIST Center for Neutron Research (NCNR). "Doing so gives us access to important fundamental properties, which are critical to understand if you really want to make optimal use of a material."
The team members from ANL have mastered a technique for laying down the oxides one atomic layer at a time, allowing them to construct an exceptionally organized lattice in which each layer contains only strontium or lanthanum, so that the interface between the two components could be studied. The NIST team members then used the NCNR's polarized neutron reflectometer to analyze how the magnetic properties within this oxide lattice changed as a consequence of the near-perfect placement of atoms.
They found that the influence of electrons near the additional lanthanum layers was spread out across three magnetic layers in either direction, but fell off sharply further away than that. Tiffany Santos, lead scientist on the study from ANL, says that the measurement will be important for the emerging field of oxide spintronics, as it reveals a fundamental size unit for electronic and magnetic effects in memory devices made from the material.
"For electrons to share spin information-something required in a memory system-they will need to be physically close enough to influence each other," Kirby says. "By ordering this material in such a precise way, we were able to see just how big that range of influence is." |
April 13, 2011
Bacteria In Wasp Antennae Produce Antibiotic Cocktail
Bacteria that grow in the antennae of wasps help ward off fungal threats by secreting a 'cocktail' of antibiotics explains a scientist at the Society for General Microbiology's Spring Conference in Harrogate.
Dr Martin Kaltenpoth describes how this is the first known example of non-human animals using a combination prophylaxis strategy similar to the one used in human medicine. This discovery could help us find novel antimicrobials for human use and lead to more effective strategies for using them.

Female beewolf digger wasps cultivate symbiotic Streptomyces bacteria in unique antennal glands and secrete them into their larval brood cells. The larvae take up the bacteria and incorporate them in the cocoon while spinning it. On the cocoon, the bacteria provide protection against a wide range of potentially detrimental fungi and bacteria by producing a cocktail of at least nine different antibiotic substances.
Dr Kaltenpoth from the Max Planck Institute for Chemical Ecology in Germany explains why the results are so fascinating. "We are studying one of the few known symbiotic interactions in which the bacterial partner defends its host against pathogens - rather than providing a nutritional benefit, which is more common. Such defensive interactions have so far been largely overlooked, yet they are probably much more widespread than is currently recognized."
Studying more about insect-bacteria symbioses also has potential clinical benefits. "Learning about defensive symbioses will help us understand how mutualistic interactions between insects and bacteria evolve," suggested Dr Kaltenpoth. "Importantly, it will also tell us more about the exploitation of bacterial antibiotics by animals other than humans and about the, as yet, surprisingly little-studied role of antibiotics in nature."
The symbiotic bacteria secrete a mixture of antimicrobial compounds on the walls of the cocoon to ward off microbial threats. A similar combination prophylaxis (also known as combination therapy) approach is increasingly used in human medicine. Such a treatment exploits the complementary action of two or more antibiotics. It results in a higher efficacy against a broader spectrum of pathogens and is known to prevent micro-organisms from developing resistance to the antibiotic substance. "Understanding how insects use this approach to defend themselves against pathogens may provide us with insight to help design better strategies to combat increasingly resistant human pathogens," explained Dr Kaltenpoth. "What's more, identifying the components of the antibiotic cocktails produced by a wide range of beewolf species may yield novel antimicrobial compounds that might be useful for human medicine."
Image Caption: The beewolf larva hibernates for several months in its cocoon before the adult insect hatches. Antibiotics on the surface of the cocoon, produced by symbionts, guarantee protection against microbial pests during such a protracted developmental stage. The amount of antibiotics was visualized by means of imaging techniques based on mass spectrometry (LDI imaging) and merged as pseudocolors onto the cocoon. Credit: Johannes Kroiss and Martin Kaltenpoth, MPI for Chemical Ecology, Jena (Photomontage).
Training is to cause the branches to grow in a particular direction or fashion. The primary object of training and pruning fruit trees is to manage light. Shading of one leaf by another reduces light interception of the shaded leaf by 90% and thus reduces photosynthesis by 28%. About 30% of full sunlight is required to achieve the maximum rate of photosynthesis. In gardens, a secondary reason to train and prune trees is to improve aesthetics.
Limb orientation is an important factor in productivity. Branches that are growing mostly upward are vegetatively vigorous, but not fruitful. Branches that are growing mostly horizontal are very fruitful, but have too little vegetative vigor. The ideal branch angle is about 30° to 45° above horizontal. This allows for vegetative growth while providing as much fruitfulness as possible. It creates a balance between growth and fruiting. Careful bending of young branches, and holding these branches horizontally with string or weights, will help bring young trees into production.
Many different approaches have been used to train or position limbs. Spreading involves using wooden sticks or metal rods to push branches downward. Tying involves using string or twine to pull branches downward. The twine can be connected to a stout stake or to a nail or screw in the base of the tree stake. Weighting involves fastening small weights on branches to pull them downward. Filling small paper cups with concrete makes a simple weight. Put a J-shaped piece of wire in the cup while the cement is wet to form a hook for hanging the weight in the tree. Trellising can also be used for limb positioning. Many different types of trellises can be used for fruit trees. Some are very elaborate such as espalier. Others simply consist of two to three horizontal wires connected to strong posts. Limbs are attached to the wires to hold them in the desired position. Trellises can also help support a growing crop. Trellis systems don’t work very well for stone fruits. |
This is a cutaway drawing of the proposed structure of Ganymede's interior.
Differentiation is a scientific term which really means "to separate". Early in their histories, the elements which made up the planets would separate into distinct regions, if the planet were warm enough. This is akin to the way an oil and vinegar salad dressing will separate into regions made only of oil and only of vinegar.
Planetary elements which separate include iron, which is heavy, and silicate rock, which is lighter. Iron falls to the center of a planet and forms a core; silicate material stays in the middle; ice stays near the top, as shown in this drawing of the moon Ganymede.
Earth/Mars are examples of planets which did/did not differentiate early in their histories. Ganymede/Callisto are examples of moons which did/did not differentiate early in their histories. The lack of differentiation may say something about how warm these planets were to start with.
Ground foraging is the Quails' method of feeding. They are an omnivorous species that consumes nuts, seeds, and the occasional insect. Juveniles and adult females tend to eat more insects than adult males, and young males will become more and more herbivorous as they age.
Mountain Quails are monogamous, and both parents incubate and care for the brood of 10-12. Chicks are precocial, meaning that they are up and about following their parents very soon after birth. Mountain Quails live in very small groups (called Coveys) that typically number fewer than 10 adult birds.
Due to their small Covey sizes and elusive behavior, it is difficult to determine exactly how many Mountain Quails are out there. We do know that habitat loss has been a contributor to population decline, as numbers have continued to shrink even in the states that have banned their hunting. Despite some of these local declines, the large range of the Mountain Quail has kept them listed as being of Least Concern.
IUCN Status : Least Concern
Location : North America, west of the Rocky Mountains
Size : Weighs up to 9oz (255g), Length 12in (30cm)
Classification : Phylum : Chordata -- Class : Aves -- Order : Galliformes
Family : Odontophoridae -- Genus : Oreortyx -- Species : O. pictus |
The Alpine Fault is a geological fault, specifically a right-lateral strike-slip fault, that runs almost the entire length of New Zealand's South Island. It forms a transform boundary between the Pacific Plate and the Indo-Australian Plate. Earthquakes along the fault, and the associated earth movements, have formed the Southern Alps. The uplift to the southeast of the fault is due to an element of convergence between the plates, meaning that the fault has a significant high-angle reverse oblique component to its displacement.
The Alpine Fault is believed to align with the Macquarie Fault Zone in the Puysegur Trench off the southwestern corner of the South Island. From there, the Alpine Fault runs along the western edge of the Southern Alps, before splitting into a set of smaller dextral strike-slip faults north of Arthur's Pass, known as the Marlborough Fault System. This set of faults, which includes the Wairau Fault, the Hope Fault, the Awatere Fault, and the Clarence Fault, transfer displacement between the Alpine Fault and the Hikurangi subduction zone to the north. The Hope fault is thought to represent the primary continuation of the Alpine fault.
Average slip rates in the fault's central region are about 30 mm a year, very fast by global standards.
The Alpine Fault and its northern offshoots have experienced sizable earthquakes in historic times:
- 1848 - Marlborough, estimated magnitude = 7.5
- 1888 - North Canterbury, estimated magnitude = 7.3
- 1929 - Arthur's Pass, estimated magnitude = 7.1
- 1929 - Murchison, estimated magnitude = 7.8
- 1968 - Inangahua, estimated magnitude = 7.1
- 2003 - Fiordland, estimated magnitude = 7.1
- 2009 - Fiordland, estimated magnitude = 7.8
Over the last thousand years, there have been four major ruptures along the Alpine Fault causing earthquakes of about magnitude 8. These occurred in approximately 1100, 1450, 1620 and 1717 CE, at intervals between 100 and 350 years. The 1717 quake appears to have involved a rupture along nearly 400 km of the southern two thirds of the fault. Scientists say that a similar earthquake could happen at any time as the interval since 1717 is longer than between the earlier events.
GNS Science researchers have compiled an 8000-year timeline of 24 major quakes on the (southern end of the) fault from sediments at Hokuri Creek, near Lake McKerrow in north Fiordland. In earthquake terms, the 850km-long fault is remarkably consistent, rupturing on average each 330 years, at intervals ranging from 140 years to 510 years.
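As a rough, illustrative consistency check (not a figure taken from the studies themselves): combining the central-section slip rate of about 30 mm per year quoted above with the average recurrence interval of roughly 330 years implies on the order of

$$ 30\ \text{mm/yr} \times 330\ \text{yr} \approx 10\ \text{m} $$

of accumulated displacement released per major rupture.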
Large ruptures can also trigger earthquakes on the faults continuing north from the Alpine Fault. There is paleotsunami evidence of near-simultaneous ruptures of the Alpine fault and Wellington (and/or other major) faults to the North having occurred at least twice in the past 1,000 years.
– Ball of fire –
The first thing people noticed was an “intense ball of fire” according to the International Committee of the Red Cross (ICRC).
The atomic bomb had a yield of 15 kilotonnes, equal to 15,000 tonnes of TNT, yet was 3,300 times less powerful than the biggest hydrogen bomb tested by the Soviet Union in 1961.
Temperatures at the epicentre of the blast reached an estimated 7,000 degrees Celsius (12,600 Fahrenheit), which caused fatal burns within a radius of about three kilometres (five miles).
ICRC experts say there were cases of temporary or permanent blindness due to the intense flash of light, and subsequent related damage such as cataracts.
A whirlwind of heat generated by the explosion also ignited thousands of fires that burned several square kilometres (miles) of the largely wooden city. A firestorm that consumed all available oxygen caused more deaths by suffocation.
It has been estimated that burn- and fire-related casualties accounted for more than half of the immediate deaths in Hiroshima.
– Shock wave –
The explosion generated an enormous shock wave and almost instantaneous expansion of air which also caused a huge number of deaths.
Some people were literally blown away while others were crushed inside collapsed buildings or perforated by flying debris.
The ICRC recorded many victims with ruptured internal organs, open fractures, broken skulls and penetration wounds.
– Radiation –
Another deadly effect of the atomic bomb was the emission of radiation that proved harmful in both the short and long term.
Radiation sickness was reported in the attack’s aftermath by many who survived the initial blast and firestorm.
Acute radiation symptoms include vomiting, headaches, nausea, diarrhoea, haemorrhaging and hair loss. Radiation sickness can lead to death within a few weeks or months.
Longer-term effects noted among “hibakusha”, or bomb survivors, are increased risks of thyroid cancer or leukaemia.
In both Hiroshima and Nagasaki, which was hit by an atomic bomb on August 9, 1945, the rate of various cancers and leukaemia have risen.
Of 50,000 radiation victims from both cities studied by the Japanese-US Radiation Effects Research Foundation, about 100 died of leukaemia and 850 suffered from radiation-induced cancers.
The foundation found no evidence of a “significant increase” in serious birth defects among survivors’ children, however. |