In a new Frontiers in Nanotechnology study, researchers synthesized carbon dots from different precursors and studied their antimicrobial effects against gram-negative bacteria. The work comes amid an intense global battle against the spread of infectious bacterial diseases, particularly those that are multidrug resistant (MDR).
The researchers determined the optimal concentration and incubation time for each bacterial species. They found that not all carbon dots carry antimicrobial properties and, interestingly, that different bacterial species vary in how strongly they resist these carbon dots.
Study: Amine-Coated Carbon Dots (NH2-FCDs) as Novel Antimicrobial Agent for Gram-Negative Bacteria. Image Credit: Rost9 / Shutterstock.com
According to the World Health Organization (WHO), antibiotic resistance is one of the greatest health threats globally. Due to the widespread availability of various antibiotics and the exposure of the bacteria to these drugs for extended periods of time, MDR can easily develop in bacteria.
The United States Centers for Disease Control and Prevention (CDC) Antibiotic Resistance Threats Report for 2019 counts more than 2.8 million antibiotic-resistant infections in the United States each year. The report highlights a death toll of more than 35,000 in the United States attributable to MDR infections.
Therefore, alternative antimicrobial strategies need to be developed. One promising approach exploits the photodynamic inhibition (PDI) properties of materials, which cause nonspecific oxidative damage to vital molecules inside the bacteria.
This process, which is also known as photodynamic therapy (PDT), locally releases free radicals such as reactive oxygen species (ROS) and reactive chlorine species (RCS) produced by a photosensitizer (PS) into the infected cells. All bacteria, including MDR bacteria, are therefore killed in the process.
Importantly, because the bacteria do not develop resistance to PDT, this method is a useful antibacterial strategy. However, current PDI compounds have some limitations, including poor photostability, poor water dispersibility, and limited absorption in the relevant spectral region. In this context, carbon nanoparticles (carbon dots) are considered novel PDI compounds with excellent photostability and autofluorescence properties.
Carbon dots (CDs) are zero-dimensional photoluminescent particles less than 20 nanometers (nm) in diameter. CDs produce ROS, which can be used effectively against bacteria. Their mechanism of action may include adhesion to the bacterial surface, disruption of the bacterial cell wall, random oxidative damage to nucleic acids and proteins, alteration of gene expression patterns, and effects on other biomolecules.
CDs from diverse origins have been synthesized and tested against bacteria. In the present study, the researchers evaluated the antibacterial properties of four carbon dots previously reported for cell imaging.
The four CDs were: amine-coated carbon dots (NH2-FCDs), synthesized from glucosamine HCl (GlcNH2·HCl) and 4,7,10-trioxa-1,13-tridecanediamine (TTDDA); carboxyl-coated carbon dots (COOH-FCDs), synthesized from glucosamine and β-alanine; nitrogen-coated carbon dots (N-CDs), synthesized from glucose and L-arginine; and polyethyleneimine-functionalized (PEI) carbon dots, synthesized from citric acid and PEI.
The researchers characterized the functional groups of the CDs using Fourier transform infrared spectroscopy – attenuated total reflectance (FTIR-ATR) analysis. They also studied photoluminescence properties including absorbance, emission, and excitation wavelength, and measured the electrostatic charges carried by CDs.
To test for antimicrobial properties, the researchers used the agar plate well-diffusion method to determine the minimum inhibitory concentration (MIC) of the CDs at concentrations of 0.5, 1, 5, and 10 mg/ml. They used six bacterial strains from different taxonomic groups: E. coli (DH5α), Pectobacterium carotovorum (Ecc7), Agrobacterium tumefaciens (EHA101), Agrobacterium rhizogenes (K599), Pseudomonas syringae, and Salmonella enterica subsp. enterica serovar Typhimurium.
The researchers found that all the CDs, except PEI-CD, had differing degrees of bacterial growth inhibition at different concentrations. At higher concentrations, more inhibition was observed.
Using confocal microscopy, the researchers established the mechanism of the antibacterial action: the CDs interacted with the bacterial cell surface, resulting in complete dissolution of the membrane. Even after multiple washing steps, the CDs remained adhered to the cell membranes. In the case of the sensitive Pseudomonas bacteria, zero colonies were observed after an incubation time of one hour.
Testing both the precursor solutions and the CDs against the bacteria showed that the antimicrobial capacity resided in the NH2-FCDs and not in their precursors.
The researchers also determined the incubation time needed for complete inhibition of bacterial cells. Depending on the species, different bacterial strains required different incubation times for complete inhibition by NH2-FCDs.
In conclusion, the researchers said: "Considering the antibiotic properties of these carbon dots against Agrobacterium which is a common tool in the delivery of foreign DNA into plants, this carbon dot can be used for the eradication of Agrobacterium from the cell and tissue culture media, instead of common antibiotics that are used for this purpose."
- Devkota, A., Pandey, A., Yadegari, Z., et al. (2021). Amine-Coated Carbon Dots (NH2-FCDs) as Novel Antimicrobial Agent for Gram-Negative Bacteria. Frontiers in Nanotechnology. doi:10.3389/fnano.2021.768487.
Dr. Ramya Dwivedi
Ramya has a Ph.D. in Biotechnology from the National Chemical Laboratories (CSIR-NCL), in Pune. Her work consisted of functionalizing nanoparticles with different molecules of biological interest, studying the reaction system and establishing useful applications.
|
Models, predictions and pathways
Tools for analysing alternative climate futures. Scientists use climate and atmospheric modelling to understand how the climate works and how greenhouse gas concentrations and other triggers lead to climate change. Models help scientists make predictions about climate changes resulting from biological, physical, and chemical variables, such as greenhouse gas emissions and land use changes. Emission pathway scenarios are developed to understand what emission limits are needed to meet climate stabilisation points, such as avoiding a two-degree rise in surface temperature.
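To make the idea concrete, the sketch below implements the simplest possible climate model in Python: a zero-dimensional energy balance in which absorbed sunlight equals outgoing thermal radiation. It is a toy for illustration only, not any agency's production model, and the `greenhouse_factor` knob is an assumed stand-in for the radiative effect of greenhouse gas concentrations.

```python
# Toy zero-dimensional energy-balance model: equilibrium temperature is
# where absorbed sunlight balances outgoing thermal (blackbody) radiation.
SOLAR_CONSTANT = 1361.0  # incoming solar flux at Earth's orbit, W/m^2
ALBEDO = 0.30            # fraction of sunlight reflected back to space
SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temperature(greenhouse_factor: float) -> float:
    """Equilibrium temperature in kelvin.

    greenhouse_factor is an assumed, illustrative parameter: 1.0 means no
    greenhouse effect; larger values mean more outgoing radiation trapped.
    """
    absorbed = SOLAR_CONSTANT * (1.0 - ALBEDO) / 4.0  # averaged over the sphere
    return (greenhouse_factor * absorbed / SIGMA) ** 0.25

print(equilibrium_temperature(1.00))  # ~255 K: no-greenhouse baseline
print(equilibrium_temperature(1.64))  # ~288 K: roughly today's global mean
```

Real climate models couple vast numbers of such balance equations across atmosphere, ocean, and land grids, but the logic is the same: specify the forcings (an emission pathway) and solve for the resulting climate state.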
|
The Nile, the world's longest river, flows 4,132 miles (6,650 kilometres) northward through northeastern Africa to the Mediterranean Sea. It was considered the source of life by the ancient Egyptians and has played a vital role in the country's history. The Nile flows from two separate sources: the White Nile from equatorial Africa and the Blue Nile from the Abyssinian highlands. The historian Waterson notes, "The Nile has played a vital part in the creation of Egypt, a process which started about five million years ago when the river began to flow northwards into Egypt" (7-8). Permanent settlements gradually rose along the banks of the river beginning c. 6000 BCE, and this was the beginning of the Egyptian civilization and culture that became the world's first recognizable nation state by c. 3150 BCE. As the Nile River was seen as the source of all life, many of the most important myths of the Egyptians concern the Nile or make significant mention of it; among these is the story of Osiris, Isis, and Set and how order was established in the land.
The Nile in the Osiris Myth
Among the most popular tales in ancient Egypt concerning the Nile is that of the god Osiris and his betrayal and murder by his brother-god Set. Set was jealous of Osiris' power and popularity and so tricked him into laying down inside an elaborate coffin (sarcophagus) pretending he would give it as a gift to the one who fit into it the best. Once Osiris was inside, Set slammed the lid down and threw Osiris into the Nile River. Osiris' wife, Isis, went searching for her husband's body in order to give it proper burial and, after looking in many places, came upon some children playing by the Nile who told her where she could find the coffin. From this story comes the ancient belief of the Egyptians that children possessed the gift of divination as they were able to tell the goddess something which she could not discover herself.
The coffin floated down the Nile until it lodged in a tree at Byblos (in Phoenicia) which grew quickly around and enclosed it. The king of Byblos admired the strong, stout-looking tree and had it brought to his court and erected as a pillar. When Isis arrived at Byblos, in the course of her search, she recognized her husband's corpse was inside the tree and, after endearing herself to the king, requested the pillar as a favor. Isis then brought her dead husband back to Egypt to return him to life. This sequence of events would inspire the Djed column, a symbol which appears in Egyptian architecture and art throughout the history of the country, which symbolizes stability. The Djed, according to some interpretations, represents Osiris' backbone when he was encased in the tree or, according to others, the tree itself from which Isis removed Osiris' body to bring him back to life.
Once back in Egypt, Isis left Osiris in his coffin by the Nile to prepare the herbs and potions to bring him back to life. She left her sister, Nepthys, to guard the body from Set. Set, however, hearing that Isis had gone searching for Osiris, was looking for the body himself. He came upon Nepthys and forced her to tell him where his brother's body was hidden. Finding it, he hacked the corpse into pieces, and scattered them throughout Egypt. When Isis returned to revive her husband, Nepthys tearfully confessed what had happened and vowed to help her sister find out what Set had done with Osiris' body.
Isis and Nepthys went in search of Osiris' remains and, wherever they found a piece of him, they buried it according to the proper rituals and erected a shrine. This accounts for the many tombs of Osiris throughout ancient Egypt and was also said to have established the nomes, the thirty-six territorial divisions of ancient Egypt (similar to a county or province). Wherever a part of Osiris was buried, there a nome eventually grew up. She managed to find and bury every part of him except for his penis which Set had thrown into the Nile and which had been eaten by a crocodile. It is for this reason the crocodile came to be associated with the god of fertility, Sobek, and anyone eaten by a crocodile was considered fortunate in a happy death.
Since he was incomplete, Osiris could not return to life but became Lord of the Afterlife and Judge of the Dead. The Nile, which had received Osiris' penis, was made fertile because of this and gave life to the people of the land. Osiris' son, Horus, avenged his father by defeating Set and casting him out of the land (in some versions of the tale, killing him) and so restored balance and order to the region. Horus and Isis then ruled the land in harmony.
Importance to Egypt
Through this myth and others like it the Nile was held up to the ancient people as the source of all life in Egypt and an integral part of the lives of the gods. The Milky Way was considered a celestial mirror of the Nile and it was believed the sun god Ra drove his ship across it. The gods were intimately involved in the lives of the ancient Egyptians and it was believed that they caused the river's annual floods which deposited the fertile black soil along the arid banks. According to some myths, it was Isis who taught the people the skills of agriculture (in others, it is Osiris) and, in time, the people would develop canals, irrigation, and sophisticated systems to work the land. The Nile was also an important recreational resource for the Egyptians.
Besides swimming, the people enjoyed water jousting, in which two-man teams in canoes, a 'fighter' and a 'rower', would compete, trying to knock each other's fighter out of the boat. Another popular river sport was boat racing, with displays of skill such as those described by the Roman playwright Seneca the Younger (1st century CE), who owned land in Egypt:
The people embark [on the Nile] on small boats, two to a boat, and one rows while the other bails out water. Then they are violently tossed about in the raging rapids. At length they reach the narrower channels and, swept along by the whole force of the river, they control the rushing boat by hand and plunge head downward to the great terror of the onlookers. You would believe sorrowfully that by now they were drowned and overwhelmed by such a mass of water, when far from the place where they fell, they shoot out as from a catapult, still sailing, and the subsiding wave does not submerge them, but carries them on to smooth waters.
The river became known as the "Father of Life" and the "Mother of All Men" and was considered a manifestation of the god Hapi, who blessed the land with life, as well as of the goddess Ma'at, who embodied the concepts of truth, harmony, and balance. The Nile was also linked to the ancient goddess Hathor and, later, as noted, to Isis and Osiris. The god Khnum, who became the god of rebirth and creation in later dynasties, was originally the god of the source of the Nile who controlled its flow and sent the necessary yearly flood on which the people depended to fertilize the land.
Source of Life
During the reign of King Djoser (c. 2670 BCE) the land was struck with famine. Djoser had a dream in which the god Khnum came to him to complain that his shrine on the island of Elephantine in the river had fallen into disrepair and he was displeased at the neglect. Djoser's vizier, Imhotep, suggested the king travel to Elephantine to see whether the dream's message was true. Djoser found the temple shrine in poor condition and ordered it rebuilt and the complex around it renovated. Afterwards, the famine was lifted and Egypt was fertile again. This story is told on the Famine Stele of the Ptolemaic Dynasty (332-30 BCE), long after Djoser's reign, and is testimony to the great honor the king was still held in at that time. It also illustrates the long-standing importance of the Nile to the Egyptians in that the god of the river, and no other, had to be satisfied for the famine to end.
The Nile River remains an integral part of Egyptian life, lore, and commerce today, and it is said by the Egyptians that, should a visitor once look upon the beauty of the Nile, that visitor's return to Egypt is assured (a claim also made in antiquity). Seneca described the Nile as an amazing wonder and a "remarkable spectacle," an opinion shared by many ancient writers who visited this "mother of all men" of Egypt, and by many who experience it even today.
|
At Hay on Wye CP School, we use the Jolly Phonics and Letters and Sounds approaches to learn to read and write.
The letter sounds are taught in the Letters and Sounds order. Phase 2 introduces most of the single letter sounds. These are:
Set 1: s, a, t, p
Set 2: i, n, m, d
Set 3: g, o, c, k
Set 4: ck, e, u, r
Set 5: h, b, f, ff, l, ll, ss
Learning the letter sounds in groups as above, rather than alphabetically, enables children to begin building words as early as possible.
More information about the words that can be made using these letters can be found on the Letters and Sounds website.
Phase 3 introduces the remaining single sounds and the two-letter sounds, known as digraphs.
Set 6: j, v, w, x
Set 7: y, z, zz, qu
Consonant digraphs: ch, sh, th, ng
Vowel digraphs: ai, ee, or, oa, oo, ar, ou, oi, er, ue, ie
The Jolly Phonics approach offers a multi-sensory way of learning letter sounds.
There are actions and songs linked to each of the letter sounds.
Please click here for a printable copy of these actions.
Playing games at home
To practise phonics at home, please click on the links below:
|
Homer’s Iliad presents a conflict between fighting to gain honor for oneself while also being committed to others in the larger community. Honor and glory primarily drove the characters in the Iliad, while each character’s individual circumstances determined the influence of the larger community in their actions. Both the Greeks and the Trojans, despite fighting with each other, share in this struggle and find their own balance. Agamemnon is driven almost entirely by glory and selfishness, Achilles by honor but also a commitment to his friendship with Patroclus, and Hector by both glory and a commitment to his family.
In the Euthyphro, Euthyphro himself gives three proposals of piety. First, the pious is to prosecute the wrongdoer and the impious is not to prosecute the wrongdoer. Socrates disputes this example as lacking generality. He believed that in order to define piety, one had to find the form that made all pious acts pious. An example of a pious act does not in turn define piety. Euthyphro’s second attempt stated that the pious is loved by the gods, while the impious was hated by them. Again, Socrates objects, saying that although it passed the generality requirement, there was no conformity among the objects dear to the gods. After all, the gods had different opinions as did humans. Euthyphro then
The main question of this dialogue is the definition of the word holy or piety. Euthyphro brags that he is more knowledgeable than his father on matters relating to religion. In this case, Socrates suggests to Euthyphro to define that term. The first definition fails to satisfy Socrates because of its limitation in application. Apparently, Socrates perceives this definition as an example rather than a definition. Subsequent arguments and line of questioning lead to five sets of definitions that are refined to find the general definition. Socrates expects that the acceptable general definition of the question will act as a reference point in his defense.
Euthyphro’s first definition of piety was: “Piety means prosecuting the unjust individual who has committed murder or sacrilege, or any other such crime, as I am now, whether he is your father or mother or whoever he is” (5d). Here Euthyphro is attempting to assure Socrates that piety means prosecuting the wrongdoer, in this case, as in the case of Zeus, his own father. This is just an example; Socrates wanted a definition, not a pious action. Definition one is inconsistent with the question because nothing about it holds in all similar cases, so we can’t use this definition as what piety is. But what we
Throughout the dialogue between Euthyphro and Socrates, they both try to come up with an understanding of the relationship between piety and justice. Within the discussion, Socrates questions Euthyphro to see if he can define the difference and similarities between justice and piety, and if they interact with each other. Eventually, Euthyphro and Socrates came up with the conclusion that justice is a part of piety. This is the relationship that I agree most with because in my own opinion, I believe that all of the gods and people agree that human beings who commit unjust actions need to be punished for their actions.
In this interaction, Socrates considers Euthyphro to help in explaining all there is to be known about piety and the related impiety. Euthyphro confirms that he is indeed an expert in the matter relating to religious issues and can thus assist Socrates in the charges that face him. In their argument in the efforts to define the true meaning of piety, Socrates and Euthyphro engage in the analysis of issues that threaten to confuse human understanding about the whole issue of holiness and impiety in the society, (Plato & Gallop, 2008). To understand the true meaning of piety, it is of great importance to take a holistic analysis of the beliefs of the people about
Homer’s epic The Iliad, is a great tale of war and glory. It takes place during the last year of the ten year Greek-Trojan war. The Greeks have been fighting with the Trojans for quite some time, and just when peace seemed like a possibility, the youngest prince of Troy, Paris, acts out selfishly and steals the beautiful wife of Menelaus, Helen. This instigates the fighting again. Throughout The Iliad, Homer tells of two heroes, both similar, but also very different in their character; the great and powerful Greek, Achilles, and the strong, loving father, Prince Hector of Troy. In Homer’s The Iliad, Hector and Achilles differ as heroes in regards to pride, duty, and family love, the latter being self-centered and prideful, while the
According to Euthyphro, piety is whatever the gods love, and the impious whatever the gods hate. At first this seems like a good definition of piety; however, further inquiry from Socrates showed that the gods have different perspectives vis-à-vis certain actions. As the gods often quarrel with one another, piety cannot simply be what is loved by gods, since they differ in opinions. For, if the gods agreed on what is just, surely they would not constantly fight with one another. Therefore, the first proposition of Euthyphro is wanting. Socrates, thus, is teaching a particular style of inquiry whereby facile statements are challenged by their own propositions. Socrates does not make any claims initially, but rather questions the logical consequences of Euthyphro’s answer.
When Socrates asked Euthyphro what the meaning of piety is, Euthyphro tells him that, “piety is what the gods love.”(Shafer-Landau 57). This answer leads Socrates into asking, “are acts pious because the gods love them, or do the gods love actions because they are pious?”(Shafer-Landau 57). The issue at hand is Socrates is merely trying to determine exactly what determines if acts are pious or not pious and if there is any relation to the gods. Socrates question is important because if the gods aren’t what determines if acts are pious or not, then there would be no proof as to what is pious and what isn’t. This would mean that each person would have their own justification as to what is right or wrong.
Due to social utility gained from following religion and fear of eternal backlash, pious mortal characters experience an indirect relationship in heeding commands of the Gods and the level of free will–the condition of acting without fate or necessity. Throughout The Odyssey, Homer brings light to the value of piety in Ancient Greece. In good faith, Gods often reward those who are religious, especially heroes. Pious individuals are revered both by their peer mortals and the Gods, causing the emergence of a feedback loop that rewards those who are pious. Furthermore, there are different forms of piety present in The Odyssey, but it is evident that pious individuals gain glory from their actions, usually as a result of receiving favorable treatment from the Gods.
Honor: honesty, fairness, or integrity in one 's beliefs and actions; this is the definition by which these two characters, Hector and Achilles, ought to be judged. By taking this definition to heart, Achilles is far from honorable. Throughout the Iliad, Achilles acts on rage and revenge. “Rage-Goddess, sing the rage of Peleus’ son Achilles, murderous, doomed, that cost the Achaens countless losses, hurling down to the House of Death so many sturdy souls, great fighters’ souls, but made their bodies carrion, feasts for the dogs and birds…” (1, 1-5) From the beginning of the epic the reader learns of Achilles rage and wants for
However in Plato’s Euthyphro, it can be argued that Socrates plays a similar role. In the Euthyphro, Socrates discusses piety in general and what makes things and people pious. Socrates claims he wants to learn more on the subject so that he may better defend himself against the treasonous charges against him. In a way, Euthyphro represents the traditional Athenian way of thinking. He believes in and supports all of the gods and does not submit to Socrates’ prodding of the subject, although he does walk away from him in frustration at the end of the dialogue. However it can safely be said that most Athenians would agree with Euthyphro’s opinion of the gods and to disagree could most certainly be punishable by law, as Socrates was. Socrates’ search for the definition of piety is a difficult one that tests Euthyphro’s patience and ultimately leaves the characters and the reader without an answer. Every time Euthyphro proposes an answer, Socrates is quick to counter it with some thought. Interpreting Socrates’ tone and meaning here is important. Some may see Socrates to be quite demeaning in these instances, almost teasing Euthyphro because he claims to be so pious yet he cannot even define the word. In this way, similar to Aristophanes’ Clouds, Socrates plays a subversive role in the Euthyphro.
Plato's "Euthyphro" introduces the Socratic student both to the Socratic Method of inquiry and to, or at least towards, a definition of piety. Because the character of Euthyphro exits the dialogue before Socrates can arrive at a reasonable definition, an adequate understanding of piety is never given. However, what piety is not is certainly demonstrated. Euthyphro gives three definitions of piety that fail to mean much to Socrates, who refutes each one. In this paper, I will present Euthyphro's definitions along with Socrates' rebuttals. I will also show that Socrates goal in the dialogue is two-fold: 1) to arrive at a true definition, and 2) to exercise his method of teaching/inquiry. At the conclusion of this paper, I will give my own definition of piety and imagine what Socrates might say in response.
This decision of prideful betrayal brings many casualties to the Achaean army. Once Agamemnon apologetically offers Achilles many valuable gifts along with the return of his war prize, Achilles refuses. In this rejection, Achilles is putting his own animosity toward Agamemnon above the needs of his fellow Achaeans. His friend Phoenix tells him to think of his diminishing honor, but Achilles answers, “…what do I need with honor such as that ?/ … It degrades you to curry favor with [Agamemnon],/ and I will hate you for it, I who love you./ It does you proud to stand by me, my friend,/ to attack the man who attacks me…”(p 147). Not only does Achilles reject honor, but he egotistically asks his father figure, Phoenix, to give up his in order to take his side.
The reason why Achilles' honor is determined by Zeus is that Achilles feels that, since he has a short lifespan, “Zeus should give him honor” to make his contributions during the war in a short period of time worthwhile. However, Achilles is currently neglected by Zeus: the God of Gods gives the Swift Runner “nothing” after all his achievements during the war, leaving him feeling mistreated and denied of honor as “the best of Achaeans,” the hero of the Argives. This implies that Achilles, as a hero, values his achievements and contributions as something worthy of honor, building blocks that construct his honor not only among men but also among the gods. In short, having the deities bestow honor onto a hero is an extravagant recognition, meaning that the hard work the hero puts in is greatly appreciated and honored, even by the gods, and is therefore considered a heroic outlook on life.
|
The word “light” brings up familiar images. The common image of “light” is that of something we see. Yet, the “light” that we can see is only a small slice of a much larger range of waves known as the “electromagnetic” spectrum. The entire electromagnetic spectrum is considered to be “light,” yet the vast majority of it is invisible.
This concept, and many more concepts pertaining to light, are explained in the 2017 book by astronomer Bob Berman, titled (1):
Zapped: From Infrared to X-rays, the Curious History of Invisible Light
There is a “whole world of light outside our range of vision.”
Berman also explains that all light is in packets of energy (photons) that travel in waves. He states:
“Photons constitute 99.9999999 percent of everything.”
The waves of light have different lengths (known as wavelengths). The wavelength is the distance from crest to crest between two adjacent waves. What makes light visible or invisible is its wavelength: wavelengths shorter than the visible spectrum are invisible, and wavelengths longer than the visible spectrum are also invisible. A typical chart of the electromagnetic spectrum orders the wave bands from shortest to longest wavelength.
We can only see the wavelengths in the visible spectrum of light; all of the other wavelengths are invisible to the human eye. The different wavelengths of the electromagnetic spectrum have different characteristics (positive and negative applications and influences), as the list below and the short calculation that follows it illustrate:
- Gamma: They have so much energy that they damage genetic material, causing mutations.
- X-Rays: They also have so much energy that they damage genetic material and cause mutations. But, they also have beneficial applications, such as making diagnostic images like x-rays and CAT scans.
- Ultraviolet: They also have sufficient energy to damage genetic material and cause mutations, particularly skin cancers. However, they are required to help the body make the critically important vitamin D.
- Visible: These are the wavelengths that we can see, including all of the various colors around us. The different wavelengths within the visible spectrum account for the different colors we can perceive.
- Infrared: These wavelengths have some therapeutic applications, and they are most notorious for generating heat.
- Microwave: These wavelengths can rapidly cook food and boil water. They also have a number of technological applications, such as wireless communications and radar.
- Radio: Best known for their ability to be used to broadcast radio, hence the name.
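What separates the harmful bands from the harmless ones is photon energy, which rises as wavelength shrinks. A back-of-the-envelope sketch in Python makes the point; the band wavelengths below are assumed, order-of-magnitude values for illustration, not figures from the text:

```python
# Photon energy E = h*c / wavelength, converted to electronvolts (eV).
PLANCK = 6.626e-34        # Planck constant, J*s
LIGHT_SPEED = 2.998e8     # speed of light, m/s
JOULES_PER_EV = 1.602e-19 # joules per electronvolt

# Representative wavelengths for each band (assumed, order-of-magnitude).
bands = {
    "gamma":       1e-12,
    "x-ray":       1e-10,
    "ultraviolet": 2e-7,
    "visible":     5.5e-7,
    "infrared":    1e-5,
    "microwave":   1e-2,
    "radio":       1e0,
}

for name, wavelength in bands.items():
    energy_ev = PLANCK * LIGHT_SPEED / wavelength / JOULES_PER_EV
    print(f"{name:12s} {wavelength:7.0e} m  ->  {energy_ev:10.3g} eV")
```

Gamma and x-ray photons (thousands to millions of eV) carry enough energy to ionize atoms outright, and ultraviolet photons (a few eV) carry enough to break chemical bonds, which is consistent with the genetic damage described above. Infrared, microwave, and radio photons individually carry far too little energy to do either, however intense the beam.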
The wavelengths of x-rays have diagnostic significance. Their discovery in 1895 allowed health care providers to visualize bones, joints, and other structures. X-rays were discovered by the German physicist Wilhelm Conrad Rontgen, who was awarded the first Nobel Prize in Physics in 1901 for the discovery. Rontgen did not understand that the high energy of x-rays would damage genetic material and cause genetic mutations; he died of intestinal carcinoma in 1923.
Polish scientist Marie Curie earned two Nobel Prizes. Her first, in Physics, was awarded in 1903 for her pioneering work in radioactivity, a word she coined. Her second, in Chemistry, was awarded in 1911 for her discovery of the radioactive elements radium and polonium. Curie likewise did not understand that the high-energy radiation emitted by radioactive materials, including her elements radium and polonium, damaged genetic material and caused genetic mutations. She died of aplastic anemia brought on by her long exposure to radiation.
Walter Bradford Cannon was an American physician and physiologist. He was chairman of Harvard Medical School's Department of Physiology for about forty years. His research expanded the concepts and importance of physiological homeostasis (a word he also coined) and the "fight-or-flight" response of the sympathetic nervous system. His investigations included exposing volunteer subjects to diagnostic x-rays to study the digestion of food during periods of calm versus stress. Like Rontgen and Curie, Cannon did not understand that the high energy of x-rays would damage genetic material and cause genetic mutations; he died of a malignancy attributed to his cumulative x-ray exposure.
Computed Axial Tomography (CAT or CT scan) is technology that allows for rapid axial (“slice-of-bread”) x-rays of parts of the body. Because many slices of x-ray exposure are made, the subject is exposed to higher levels of x-ray radiation. This allows for a unique and often valuable imaging of the body, but at the cost of increased risks of genetic damage and mutations for the subject.
Magnetic Resonance Imaging (MRI or MR) also images the inner body, but without causing genetic damage. However, MRI scans take much longer and are more expensive.
Some of the first applications for x-rays were to image the human spinal column; these early spinal x-rays were known as spinographs. Today, chiropractors often make great use of all three modalities (x-ray, CAT, MRI), and many have x-ray machines in their clinical offices.
X-ray is the imaging modality chiropractors use most. Chiropractors take x-rays of patients for a number of reasons, including:
- To discover and/or rule out diseases that should be referred to medical specialists, such as cancer or infection or some metabolic diseases.
- To document degenerative diseases of the spine, particularly pertaining to the discs, facet, central canal and lateral recess.
- To document congenital anomalies such as facet tropism, lumbosacral transitional segments, hemi and/or demi vertebrae, additional or missing ribs, block vertebrae, central canal stenosis, etc.
- To document prior spinal pathologies, such as Schmorl’s nodes, spondylolysis, spondylolisthesis, fractures, etc.
- To assess the unique and specific biomechanics of an individual patient in order to optimize the specific line-of-drive for the spinal adjustment, as well as to inform various rehabilitation interventions that assist with spinal remodeling programs.
The structure most critical to spinal health is the intervertebral disk. The intervertebral disk functions as a shock absorber while allowing for the incredible mobility of the spinal column. The disk itself is not seen on x-rays, but the disk space can be viewed; the size of the disk space is a window into the integrity of disk function.
The internal disk structure can be viewed with both CAT and/or MRI scans.
The intervertebral disk can suffer from a number of acquired and/or pathological changes that may have clinical importance. These changes include degeneration, desiccation (drying out), fissures (cracks that breach the integrity), and herniation (movement of diskal material towards the nervous system: spinal cord, cauda equina, nerve root).
The terminology pertaining to lumbar spinal disk herniations has been confusing, inconsistent, and contradictory. In 2014, the North American Spine Society, the American Society of Spine Radiology, and the American Society of Neuroradiology convened a combined task force to agree upon the nomenclature, including (2):
The Normal Disk
The Bulging Disk
If any part of the annulus extends beyond the normal disc space, it is considered to be a bulging disk.
The Protrusion Disk
In the protrusion disk, the base of the displaced material is greater than the distance the disk material had moved towards the intervertebral foramen and/or the central neural canal.
The Extrusion Disk
In the extrusion disk, the greatest measure of the displaced disk material is greater than the measure of the base of the displaced material.
The Sequestration Disk
In the sequestration disk, the disk material has lost all connection with the original disk material.
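The protrusion/extrusion/sequestration definitions above amount to a simple decision rule: compare the width of the base of the displaced material with its greatest extent beyond the disk space. The minimal sketch below encodes that rule; the function name and the millimeter measurements are illustrative assumptions, not part of the task-force document, and the bulge category (which depends on how much of the disk circumference is involved) is deliberately omitted.

```python
def classify_displaced_disk(base_width_mm: float,
                            displaced_extent_mm: float,
                            connected_to_disk: bool = True) -> str:
    """Classify displaced disk material using the comparison logic above."""
    if not connected_to_disk:
        return "sequestration"  # fragment has lost all connection to the disk
    if displaced_extent_mm <= 0:
        return "normal"
    if displaced_extent_mm <= base_width_mm:
        return "protrusion"     # base is wider than the displacement
    return "extrusion"          # displacement exceeds the width of its base

print(classify_displaced_disk(8.0, 5.0))  # protrusion
print(classify_displaced_disk(4.0, 9.0))  # extrusion
print(classify_displaced_disk(4.0, 9.0, connected_to_disk=False))  # sequestration
```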
How Important Are Imaging Findings in the Genesis of Low Back Pain?
Spinal imaging findings are important for many reasons, including all of those mentioned above. Ironically, one of the least important reasons for spinal imaging is to determine the source of a patient's pain.
In 1990, researchers from the Department of Orthopaedic Surgery at George Washington University Medical Center produced a study where 67 subjects who had never had low-back pain, sciatica, or neurogenic claudication received MRIs (3).
About one-third of the subjects had a substantial abnormality on imaging:
- Of those who were less than 60 years old, 20% had a herniated nucleus pulposus.
- In the group that was more than 60 years old, 57% had abnormal imaging: 36% had a herniated nucleus pulposus and 21% had spinal stenosis.
- Disc degeneration or bulging occurred in at least one lumbar level in 35% of the subjects between 20 and 39 years of age.
- Disc degeneration or bulging occurred in all but one of the 60 to 80 year-old subjects.
In 1991, researchers from the Department of Neurology at the Medical College of Pennsylvania, performed magnetic resonance imaging of the lumbar spine on 66 asymptomatic subjects and found that 18% had either a disc protrusion or herniation (4).
An additional 39% had a bulge that was associated with degenerative disc disease. The authors also found examples of spinal stenosis, narrowed nerve root canals, osteophytes, and vertebral body involvement with multiple myeloma. The authors concluded:
“Degenerative disc disease is a common finding in asymptomatic adults that increases in frequency with age. It occurs more frequently in men and usually involves more than one level. The most common location is L5-S1.”
In 1994, researchers from Hoag Memorial Hospital, Newport Beach, California, performed MRI examinations on 98 asymptomatic subjects aged 20-80 years. The authors documented both disk abnormalities and non-disk abnormalities (5):
- 36% of the 98 asymptomatic subjects had normal disks at all levels.
- 52% of the subjects had a bulge at one or more level.
- 27% had a protrusion.
- 1% had an extrusion.
- 38% had an abnormality of more than one intervertebral disk.
- 19% had Schmorl’s nodes.
- 14% had annular defects.
- 8% had facet arthropathy.
- Again, only 36% had a normal disk at all levels.
The authors concluded that there was a “high prevalence of abnormalities in the lumbar spine on MRI examination of people without back pain.”
In 2009, researchers from Hakodate Central General Hospital in Japan performed MRIs on 200 healthy subjects to establish baseline data on degenerative disc disease. The subjects included 68 men and 132 women with a mean age of 40 years (range 30-55 years). For the entire group, the incidence of disc degeneration by spinal segmental level was (6):
- L1-L2 7.0%
- L2-L3 12.0%
- L3-L4 15.5%
- L4-L5 49.5%
- L5-S1 53.0%
In 2010, researchers from Boston University School of Medicine assessed the role of radiographic abnormalities in the etiology of nonspecific low back pain because of the strong influence radiographic findings have on medical decision making. They specifically looked at the prevalence of lumbar spine degeneration as evaluated with computed tomography (CT) (7):
- Intervertebral disc narrowing
- Facet joint osteoarthritis
- Central canal spinal stenosis
One hundred eighty seven subjects participated in the study: 104 men and 83 women, with a mean age of 53 years. Findings include:
- 64% had disk degeneration
- 65% had facet joint osteoarthritis
- 12% had spondylolysis
In this study, the only imaging finding that showed statistical significance associated with low back pain was central canal spinal stenosis.
Degenerative features of the lumbar spine were extremely prevalent in this community-based sample, but they were essentially not associated with low back pain.
In 2015, researchers from Mayo Clinic, the University of Washington, Oregon Health and Science University, Henry Ford Hospital in Detroit, University of California San Francisco, Kaiser Permanente in Oakland, California, reviewed the literature to estimate the prevalence, by decade age (20, 30, 40, 50, 60, 70, 80 years), of common degenerative spine conditions (8). Their systematic literature review used either CT and/or MRI imaging of 3,110 asymptomatic individuals from 33 studies that met the study inclusion criteria. The authors specifically identified and quantified the following degenerative markers:
- Disk degeneration
- Disk signal loss
- Disk height loss
- Disk bulge
- Disk protrusion
- Annular fissures
- Facet degeneration
The authors began with this premise:
“Given the large number of adults who undergo advanced imaging to help determine the etiology of their back pain, it is important to know the prevalence of imaging findings of degenerative disease in asymptomatic populations.”
All study subjects were asymptomatic at the time of their imaging, and all had no history of back pain. Studies with patients with minor or low-grade back pain were excluded, as were patients with motor or sensory symptoms, tumors, or trauma.
The authors note that MR imaging is highly sensitive in detecting spinal degenerative changes, and that spinal degenerative changes often occur in pain-free individuals as well as those with back pain. They state:
“Findings such as disk degeneration, facet hypertrophy, and disk protrusion are often interpreted as causes of back pain, triggering both medical and surgical interventions, which are sometimes unsuccessful in alleviating the patient’s symptoms.”
“Prior studies have demonstrated that imaging findings of spinal degeneration associated with back pain are also present in a large proportion of asymptomatic individuals.”
“Imaging findings of spine degeneration are present in high proportions of asymptomatic individuals, increasing with age.”
“Many imaging-based degenerative features are likely part of normal aging and unassociated with pain.”
Findings by age decade included:
- “Disk degeneration prevalence ranged from 37% of asymptomatic individuals 20 years of age to 96% of those 80 years of age, with a large increase in the prevalence through 50 years.”
- “Disk bulge prevalence increased from 30% of those 20 years of age to 84% of those 80 years of age.”
- “Disk protrusion prevalence increased from 29% of those 20 years of age to 43% of those 80 years of age.”
- “The prevalence of annular fissure increased from 19% of those 20 years of age to 29% of those 80 years of age.”
- “Disk signal loss (“black disk”) was similarly present in more than half of individuals older than 40 years of age, and by 60 years, 86% of individuals had disk signal loss.”
- “Disk height loss and disk bulge were moderately prevalent among younger individuals, and prevalence estimates for these findings increased steadily by approximately 1% per year.”
- “Disk protrusion and annular fissures were moderately prevalent across all age categories but did not substantially increase with age.”
- Facet degeneration was “rare in younger individuals (4%–9% in those 20 and 30 years of age), but the prevalence increased sharply with age.”
- “Spondylolisthesis was not commonly found in asymptomatic individuals until 60 years, when prevalence was 23%; prevalence increased substantially at 70 and 80 years of age.”
These findings are summarized in the following chart (prevalence by age):

| Finding | 20 yr | 30 yr | 40 yr | 50 yr | 60 yr | 70 yr | 80 yr |
|---|---|---|---|---|---|---|---|
| Disc signal loss | 17% | 33% | 54% | 73% | 86% | 94% | 97% |
| Disc height loss | 24% | 34% | 45% | 56% | 67% | 76% | 84% |
This systematic review indicates that many imaging findings of degenerative spine disease have a high prevalence among asymptomatic individuals. Disk degeneration and signal loss were present in nearly 90% of individuals 60 years of age or older. The authors concluded:
“Our study suggests that imaging findings of degenerative changes such as disk degeneration, disk signal loss, disk height loss, disk protrusion, and facet arthropathy are generally part of the normal aging process rather than pathologic processes requiring intervention.”
“With a prevalence of degenerative findings of >90% in asymptomatic individuals 60 years of age or older, our study supports the hypothesis that degenerative changes observed on CT and MR imaging are often seen with normal aging.”
There are many reasons for spinal imaging, some of them critical. Because imaged spinal degenerative changes correlate poorly with a patient's back-pain complaints, many chiropractors initially choose to forgo spinal imaging until the response to clinical intervention is ascertained. Ultimately, the choice for or against spinal imaging is the call of the treating chiropractor, who is looking out for the best interests of the patient.
- Berman B; Zapped: From Infrared to X-rays, the Curious History of Invisible Light; Little Brown and Company, 2017.
- Fardon DR, Williams LA, Dohring EJ, Rothan SL, Sze GK; Lumbar Disc Nomenclature: Version 2.0: Recommendations of the Combined task forces of the North American Spine Society, the American Society of Spine Radiology, and the American Society of Neuroradiology; Spine Journal; 2014; No. 14; pp. 2525-2545.
- Boden SD, Davis DO, Dina TS, Patronas NJ, Wiesel SW; Abnormal magnetic-resonance scans of the lumbar spine in asymptomatic subjects. A prospective investigation; Journal of Bone and Joint Surgery; March 1990; Vol. 72; No. 3; pp. 403-408.
- Greenberg JO, Schnell; Magnetic resonance imaging of the lumbar spine in asymptomatic adults. Cooperative study–American Society of Neuroimaging; Journal of Neuroimaging; February 1991; Vol. 1; No. 1; pp. 2-7.
- Jensen MC, Brant-Zawadzki MN, Obuchowski N, Modic MT, Malkasian G, Ross JS; Magnetic resonance imaging of the lumbar spine in people without back pain; New England Journal of Medicine; July 14, 1994; Vol. 331; No. 2; pp. 69-73.
- Kanayama M, Togawa D, Takahashi C, Terai T, Hashimoto T; Cross-sectional magnetic resonance imaging study of lumbar disc degeneration in 200 healthy individuals; Journal of Neurosurgery Spine; October 2009; Vol. 11; No. 4; pp. 501-507.
- Kalichman L, Kim DH, Li L, Guermazi A, Hunter DJ; Computed tomography-evaluated features of spinal degeneration: prevalence, intercorrelation, and association with self-reported low back pain; Spine Journal; March 2010; Vol. 10; No. 3; pp. 200-208.
- Brinjikji W, Luetmer PH, Comstock B, Bresnahan BW, Chen LE, Deyo RA, Halabi S, Turner JA, Avins AL, James K, Wald JT, Kallmes DF, Jarvik JG; Systematic Literature Review of Imaging Features of Spinal Degeneration in Asymptomatic Populations; American Journal of Neuroradiology (AJNR); April 2015; Vol. 36; No. 4; pp. 811–816.
“Authored by Dan Murphy, D.C.. Published by ChiroTrust® – This publication is not meant to offer treatment advice or protocols. Cited material is not necessarily the opinion of the author or publisher.”
|
The Address Resolution Protocol (ARP) allows for conversion from a network layer address to a hardware layer address (e.g., from an IP address to a MAC address). It is defined by RFC 826 and operates at layer 2 of the OSI model. For simplicity, this article uses IP address resolution in its examples. Performing an ARP resolution requires (see the packet-layout sketch after this list):
- Two systems that each know their own IP address and MAC address
- A usable network layer path between the two systems
- The sending system must know the IP address of the destination system
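To make the address mapping concrete, here is a minimal sketch of how an ARP request for an IPv4/Ethernet pair is laid out on the wire, following the field order in RFC 826. The function and the example addresses are illustrative assumptions; real use would also require wrapping this payload in an Ethernet frame and sending it over a raw socket.

```python
import struct

def build_arp_request(sender_mac: bytes, sender_ip: bytes,
                      target_ip: bytes) -> bytes:
    """Serialize an ARP request payload (RFC 826 field order)."""
    htype = 1          # hardware type: 1 = Ethernet
    ptype = 0x0800     # protocol type: 0x0800 = IPv4
    hlen, plen = 6, 4  # MAC is 6 bytes, IPv4 address is 4 bytes
    oper = 1           # operation: 1 = request, 2 = reply
    target_mac = b"\x00" * 6  # unknown: this is what we are asking for
    header = struct.pack("!HHBBH", htype, ptype, hlen, plen, oper)
    return header + sender_mac + sender_ip + target_mac + target_ip

# Illustrative addresses only.
packet = build_arp_request(
    sender_mac=bytes.fromhex("aabbccddeeff"),
    sender_ip=bytes([192, 168, 1, 10]),
    target_ip=bytes([192, 168, 1, 1]),
)
print(len(packet), packet.hex())  # 28-byte ARP payload
```

The system that holds the target IP address replies with the same layout, setting `oper` to 2 and filling in its own MAC address, which the sender then caches.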
|
Flower characteristics are one basis for deciding how to group, classify and name plants. The milkweeds provide a clear example of this. Let’s look at four plants with a similar flower structure — which led to their being grouped in the genus Asclepias. Starting with Common milkweed — Asclepias syriaca — we can see the ball of flowers growing at and near the top of the plant (above photo).
When we look closely at a single flower within the inflorescence, we notice its unique shape. There’s a top portion of the flower with five corona limbs, and a bottom portion with five corolla lobes. The reproductive parts are in the center. To get a better understanding of this type of flower, study this description from backyardnature.net.
Let’s look at another Asclepias example — Asclepias tuberosa — commonly known as Butterfly weed. Here’s the plant in flower:
And a closer view of the flowers:
Now for two more Asclepias species which can be found growing in the same region: Asclepias quadrifolia, commonly known as Four-leaved milkweed — and Asclepias incarnata, commonly known as Swamp milkweed.
Each of these four plants has different preferred habitats along with different leaf shapes, flower colors and plant heights. Yet they are clearly related plants — given their similar flower shape and arrangement. This final set of photos shows the similarity in their flower buds.
|
Future long-duration missions will be conducted with knowledge obtained from LEO missions and the lunar landings of 50 years ago. Scientific knowledge from life sciences research and aerospace medicine practice provides a wealth of information on physiological and psychological issues in outer space to practitioners, crew members, and mission designers today. Selecting appropriate individuals and matching them into highly productive crews has resulted in successful missions. Living and working in space requires life support systems to maintain health and human performance. Onboard medical systems must exist to support the crew's medical needs, from basic first aid and ambulatory care to emergent care.
Challenges of Human Space Flight
The crew members aboard a spacecraft destined for a great distance from the Earth will experience acceleration and radiation, as well as physiological and psychological impacts owing to confinement, remoteness, and isolation. The crew members will leave behind not only the physical environment of Earth but also the work, family, social, and interpersonal environments, networks, and resources in which they live. As the Earth recedes, so will all the physical, professional, and personal resources on which the crew members grew and trained.
As the Earth recedes further and further, it will be seen only as another small dot in the blackness of space. To ameliorate these challenges, life scientists and aerospace medicine specialists are developing specific onboard systems that will address inflight medical and environmental monitoring. Countermeasures to meet the physiological and psychological impacts on the cardiovascular, musculoskeletal, neurosensory, and other human systems will be integrated. Research over the last several decades has provided considerable data, leading to the design and evaluation of appropriate systems to meet the specific needs of various programs and mission profiles.
As indicated earlier, crews on the ISS and previous human spaceflight missions have been linked to the ground through a network that permits telemedicine communication. This capability in LEO is synchronous (near real-time), as the bandwidth is available and the distance the data must travel is unlike that seen in deep space travel. This permits interaction between the crew members and ground personnel to address health concerns.
Psychological Issues in Crew members
Spaceflight affects human systems in many ways. The neurovestibular system is the first system affected when an individual reaches orbit. Almost all crew members are affected by space motion sickness (SMS) in the first few days. The condition, characterized by vertigo, nausea, fatigue, dizziness, and disorientation, can be mild, moderate, or severe. Musculoskeletal, neurovestibular, ocular, and cardiovascular changes can affect crew health. Aside from private medical records, methodical evaluation of crew members from a mental health perspective has not been conducted, as personal medical records are protected. However, the following documented events occurred on the Mir Space Station.
In 1997, with the American and Russian crew on board Mir, a solid-fuel oxygen generator ignited, starting a fire in the Kvant module. The event occurred during the six-member crew's dinner. The crew responded by donning protective gear and extinguishing the fire. Communications with mission controllers in Russia were not synchronous and were deemed poor. The event had an immediate and lasting impact on the crew. In July 1990, two Russian cosmonauts completed their assigned external repairs on Mir but could not close the airlock door, forcing them to re-enter the space station at a different entry point. This caused physical and mental exhaustion, as the spacewalk took two hours longer than originally scheduled.
These illustrations demonstrate how even a well-trained crew can be pushed to its limits and become physically and mentally exhausted. Either event could have resulted in a catastrophic failure. The immediate responses and post-event analyses have served as useful scenarios in subsequent system design and crew training. Asthenia, a related concern, can make an individual hypersensitive, irritable, and hypoactive.
|
A computed tomography (CT) scan (formerly known as computed axial tomography or CAT scan) is a medical imaging technique used in radiology to obtain detailed internal images of the body for diagnostic purposes.
The personnel that perform CT scans are called radiographers or radiology technologists.
History of the CT Scan
This extraordinary invention was made possible through the work of several scientists over the twentieth century, most notably Godfrey Newbold Hounsfield and Allan MacLeod Cormack, a physics professor at Tufts University in Medford, Massachusetts. The two shared the Nobel Prize in Physiology or Medicine in 1979 for their outstanding contributions to the development of computed tomography.
The history of computed tomography goes back to 1917, when Johann Radon developed the basic mathematical equation in his theory of the Radon transform. In the early 1920s, many scientists were developing methods to image a specific layer or section of the body; at that time, "body section radiography" and "stratigraphy" (from stratum, meaning 'layer') were used to describe the technique. In 1935, Grossman refined the technique and named it "tomography."
Building on this tomographic concept, Watson developed another technique in 1936, based on transverse/axial scans, which was referred to as "transverse axial tomography."
Another mathematical advancement was the "algebraic reconstruction technique" (ART), formulated by the Polish mathematician Stefan Kaczmarz in 1937. Hounsfield adopted both this reconstruction technique and Radon transform theory to create one of the greatest inventions in medical history.
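Kaczmarz's method is easy to state: repeatedly project the current estimate onto the hyperplane defined by each measurement equation until the estimates settle. The sketch below shows the idea on a tiny synthetic system; it is a didactic illustration of the technique, not Hounsfield's actual implementation, and the 3-pixel "image" is an assumed toy example.

```python
import numpy as np

def kaczmarz(A: np.ndarray, b: np.ndarray, sweeps: int = 100) -> np.ndarray:
    """Algebraic Reconstruction Technique: each row of A is one ray's
    measurement; cyclically project x onto each row's hyperplane."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Tiny synthetic "scan": 3 attenuation values observed through 3 ray sums.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
true_image = np.array([2.0, 5.0, 3.0])
measurements = A @ true_image
print(kaczmarz(A, measurements))  # converges toward [2, 5, 3]
```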
The basic principle of the tomographic scan, on which tomographic scanning would be based, was published by Frank and Takahashi in 1940, a revolutionary step toward the scanner's creation; Allan MacLeod Cormack supplied his theory of image reconstruction in 1956.
In 1967, Hounsfield was investigating pattern recognition and reconstruction techniques by using the computer. From this work, he deduced that, if an x-ray beam were passed through an object from all directions and measurements were made of all the x-ray transmission, information about the internal structures of that body could be obtained. This information would be presented to the radiologist in the form of pictures that would show 3D representations.
With encouragement from the British Department of Health and Social Security, an experimental apparatus was constructed to investigate the clinical feasibility of the technique.
The radiation used came from an americium gamma source coupled with a crystal detector. Because of the low radiation output, the apparatus took about nine days to scan the object, and the computer needed 2.5 hours to process the 28,000 measurements collected by the detector. This procedure was far too long, so various modifications were made, and the gamma source was replaced by a powerful x-ray tube. The results of these experiments were more accurate, and producing a picture took one day.
In 1971, the first clinical prototype CT brain scanner (EMI Mark 1) was installed at Atkinson Morley's Hospital, and clinical studies were conducted under the direction of Dr. James Ambrose. The processing time for a picture was reduced to about 20 minutes. Later, with the introduction of minicomputers, it was reduced further, to 4.5 minutes.
Godfrey Newbold Hounsfield
Godfrey Newbold Hounsfield was born in 1919 in Nottinghamshire, England. He studied electronics and electrical and mechanical engineering. In 1951, Hounsfield joined the staff at EMI Limited (Electric and Musical Industries) where he began work on radar systems and later on computer technology. Hounsfield died on August 12, 2004, at the age of 84 years.
In 1972, the first clinical brain scanner was developed. The first patient scanned by this machine was a woman with a suspected brain lesion, and the picture clearly showed a dark circular cyst in the brain. By developing the first practical CT scanner, Hounsfield opened up a new domain for technologists, radiologists, medical physicists, engineers, and other related scientists.
Evolution of terms
Hounsfield’s invention revolutionized medicine and diagnostic radiology. It was he who called the technique ‘computerized transverse axial scanning (tomography)’ in his description of the system, first published in the British Journal of Radiology in 1973.
Terms such as ‘‘computerized transverse axial tomography,’’ ‘‘computer-assisted tomography or computerized axial tomography,’’ ‘‘computerized transaxial transmission reconstructive tomography,’’ ‘‘computerized tomography,’’ and ‘‘reconstructive tomography’’ were also used.
The term computed tomography and its acronym, CT, were established by the Radiological Society of North America in its major journal, Radiology. The American Journal of Roentgenology also accepted the term, which is now used worldwide within radiology.
Between 1973 and 1983, a number of CT units were installed globally. The first significant technical development came in 1974, when Dr. Robert Ledley, a professor of radiology, physiology, and biophysics at Georgetown University, developed the first whole-body CT scanner.
High-Speed CT Scanners
In 1975, the Dynamic Spatial Reconstructor (DSR) was installed at the Mayo Clinic. The goal of the DSR was to carry out dynamic volume scanning: imaging the dynamics of organ systems and the functional aspects of the cardiovascular and pulmonary systems with high temporal resolution, as well as imaging anatomic details.
In the mid-1980s, a high-speed CT scanner using electron beam technology was introduced, the result of work by Dr. Douglas Boyd and colleagues at the University of California, San Francisco. The scanner was invented to image the cardiovascular system while overcoming motion artifacts.
At the time, this scanner was called the cardiovascular CT scanner. It was later acquired and marketed by Siemens Medical Systems under the name Evolution, and was subsequently referred to as the “electron beam CT” (EBCT) scanner.
The U.S. Food and Drug Administration (FDA) cleared the EBCT scanner in 1983. As of 2007, the EBCT scanner is marketed by General Electric (GE) Healthcare under the name e-Speed and it now features proprietary technologies that play a significant role in imaging the heart.
During the 1990s, “portable/mobile CT” scanners grew in popularity. Portable CT scanners allow clinicians to maximize the availability of stationary CT equipment in a hospital, since improving the workflow of standard scanners speeds imaging for ICU and non-ICU patients. In addition, minimizing the need for patient transport brings economic benefits, improves the use of other equipment, and enhances the overall quality of patient care.
Spiral/Helical CT scanners
Spiral/helical CT scanners were developed after 1989 and were referred to as “single-slice” spiral/helical (SSCT) or volume CT scanners. In 1992, the “dual-slice” spiral/helical (volume) CT scanner was introduced, scanning two slices per 360-degree rotation and thus increasing volume coverage speed compared with single-slice volume CT scanners.
In 1998, “multi-slice” CT (MSCT) was introduced at the RSNA meeting in Chicago. Based on multi-detector technology, it scans four or more slices per revolution of the x-ray tube and detectors, exceeding the volume coverage speed of single-slice and dual-slice volume CT scanners.
Evolution of MSCT scanner
- 2000 = 8- to 16-slice and 32- to 40-slice CT scanners introduced
- 2004 = 64-slice CT scanner introduced
- 2006-2007 = 256-slice CT scanner (prototype) and 320-slice CT scanner introduced
By 2005, 90% of PET scanners were actually “PET-CT” fusion imaging scanners, and in 2006 the “dual-source” CT (DSCT) scanner, with two x-ray tubes coupled to two detector arrays, was introduced. In 2009, at the International Symposium on Multi-detector-Row CT (MDCT), Dr. Mathias Prokop discussed the clinical implications of the 16 cm wide-detector CT: the wider coverage per gantry rotation enabled more dynamic scanning and the ability to perform multiple acquisitions in less time.
The Aquilion One Prism 640-Slice CT Scanner
The 640-slice CT scanner is a testament to the application of modern technology: it is equipped with a 0.275-second gantry rotation, a 100 kW generator and 320 detector rows (640 unique slices) covering 16 cm in a single rotation, with the industry’s thinnest slices at 500 microns (0.5 mm).
It has emerged as a powerful tool for diagnosing cardiovascular disease and for detecting diseases such as cancer at an early stage, and is used in cardiology, neurology, oncology, gastroenterology and paediatrics. Abdominal, neck, brain and lower-extremity angiograms can also be performed rapidly with 4D DSA, with greater precision and contrast and less radiation.
Toshiba unveiled a 640-slice CT scanner at the 2012 annual meeting of the RSNA. The system was cleared by the U.S. Food and Drug Administration (FDA) in September 2012 and currently has one installation in the United States, at the National Institutes of Health (NIH). Apollo Hospitals, Chennai, has introduced India’s first state-of-the-art Aquilion One Prism 640-slice CT scanner for non-invasive assessments.
|
Pablo Picasso was one of the most influential and famous artists of the twentieth century. Praised as a father of Cubism alongside Georges Braque, Picasso also made major contributions to Symbolism and Surrealism. However, like many great artists, Picasso faced poverty and hardship in the early years of his career. He eventually found a way to overcome the obstacles in his path and became one of the most noted names in the history of art.
According to his mother Maria, the first words that Picasso had uttered were “piz piz,” a shortening of the Spanish word for pencil, lápiz. From a young age, Picasso showed great passion and skill for drawing. His father Don José Ruiz y Blasco, himself a painter and professor of art at School of Crafts, specialized in naturalistic depictions of birds.
Young Picasso was trained by his father as a traditional academic artist. During this period, Picasso demonstrated extraordinary artistic talent, and by 1894 he began his career as a painter.
At the turn of the century, the young painter headed to Paris, which at the time was considered “the art capital of Europe.” There, Picasso met his friend, the poet and journalist Max Jacob, who soon became his roommate.
They lived in a small, poor apartment, possibly with just one bed. The poet slept at night and worked during the day, while the nocturnal artist got his creative juices flowing at night and slept during the day. Even though Picasso arrived in Paris at the time of the Belle Epoque, a period of great economic prosperity and cultural advancement, he experienced severe poverty, cold, and desperation. To keep the apartment warm, Picasso was forced to do the unthinkable and burn a huge amount of his artwork.
Shortly after this troubled period of his life, Picasso moved to Madrid, where he began his Blue Period, characterized by somber paintings that made extensive use of different shades of blue and blue-green, only occasionally featuring warmer colors. Perhaps it was that fierce Parisian winter that inspired this era of Picasso’s work.
|
Ameloblastoma is a rare, noncancerous (benign) tumor that develops most often in the jaw near the molars. Ameloblastoma begins in the cells that form the protective enamel lining on your teeth.
The most common type of ameloblastoma is aggressive, forming a large tumor and growing into the jawbone. Treatment may include surgery and radiation. In some cases, reconstruction may be necessary to restore your teeth, jaw and facial appearance. Some types of ameloblastoma are less aggressive.
Though ameloblastoma is most often diagnosed in adults in their 30s through 60s, ameloblastoma can occur in children and young adults.
Ameloblastoma often causes no symptoms, but signs and symptoms may include pain and a lump or swelling in the jaw.
If left untreated, the tumor can grow very large, distorting the shape of the lower face and jaw and shifting teeth out of position.
When to see a doctor
Talk to your dentist or health care provider if you have jaw swelling or pain or any other concerns with your oral health.
Ameloblastoma begins in the cells that form the protective enamel lining on your teeth. Rarely, it may start in gum tissue. The exact cause of the tumor is unclear, but several genetic changes (mutations) may be involved in the development of ameloblastoma. These changes may impact the location of the tumor, the type of cells involved and how fast the tumor grows.
Ameloblastomas are generally classified by type, but they can also be classified by cell type. The four main types include:
- Conventional ameloblastoma. This is the most common type and grows aggressively, usually in the lower jawbone, and approximately 10% recur after treatment.
- Unicystic ameloblastoma. This type is less aggressive, but typically occurs at a younger age. The tumor is often in the back of the lower jawbone at the molars. Recurrence is possible after treatment.
- Peripheral ameloblastoma. This type is rare and affects the gums and oral tissue in the upper or lower jaw. The tumor has a low risk of recurrence after treatment.
- Metastasizing ameloblastoma. This type is very rare and is defined by tumor cells that occur away from the primary site in the jaw.
Rarely, ameloblastoma can become cancerous (malignant). Very rarely, ameloblastoma cells can spread to other areas of the body (metastasize), such as the lymph nodes in the neck and lungs.
Ameloblastoma may recur after treatment.
Ameloblastoma diagnosis might begin with tests such as:
- Imaging tests. X-ray, CT and MRI scans help doctors determine the extent of an ameloblastoma. The tumor can sometimes be found on routine X-rays at the dentist's office.
- Tissue test. To confirm the diagnosis, doctors may remove a sample of tissue or a sample of cells and send it to a lab for testing.
Ameloblastoma treatment may depend on your tumor's size and location, and the type and appearance of the cells involved. Treatment may include:
- Surgery to remove the tumor. Ameloblastoma treatment usually includes surgery to remove the tumor. Ameloblastoma often grows into the nearby jawbone, so surgeons may need to remove the affected part of the jawbone. An aggressive approach to surgery reduces the risk that ameloblastoma will come back.
- Surgery to repair the jaw. If surgery involves removing part of your jawbone, surgeons can repair and reconstruct the jaw. This can help improve how your jaw looks and works afterward. The surgery can also help you to be able to eat and speak.
- Radiation therapy. Radiation therapy using high-powered energy beams might be needed after surgery or if surgery isn't an option.
- Prosthetics. Specialists called prosthodontists can make artificial replacements for missing teeth or other damaged natural structures in the mouth.
- Supportive care. A variety of specialists can help you work through speaking, swallowing and eating problems during and after treatment. These specialists may include dietitians, speech and language therapists, and physical therapists.
Due to the risk of recurrence after treatment, lifelong, regular follow-up appointments are important.
Last Updated Nov 17, 2021
|
When people think of the martial arts, immediately the Eastern martial arts tradition and disciplines come to mind, but did you know that Africa had its own martial arts traditions and disciplines as well? Get your weight up! Knowledge is power!
The presence of African influence and tradition in the Americas has long been recognized in art, music, language, agriculture, and religion. T. J. Desch Obi explores another cultural continuity that is as old as eighteenth-century slave settlements in South America and as contemporary as hip-hop culture.
In this thorough survey of the history of African martial arts techniques, Obi maps the translation of numerous physical combat techniques across three continents and several centuries to illustrate how these practices evolved over time and are still recognizable in American culture today. Some of these art traditions were part of African military training while others were for self-defense and spiritual discipline.
Grounded in historical and cultural anthropological methodologies, Obi’s investigation traces the influence of well-delineated African traditions on long-observed but misunderstood African and African American cultural activities in North America, Brazil, and the Caribbean. He links the Brazilian martial art capoeira to reports of slave activities recorded in colonial and antebellum North America. Likewise Obi connects images of the kalenda African stick-fighting techniques to the Haitian Revolution. Throughout the study Obi examines the ties between physical mastery of these arts and changing perceptions of honor.
Including forty-five illustrations, this rich history of the arrival and dissemination of African martial arts in the Atlantic world offers a new vantage for furthering our understanding of the powerful influence of enslaved populations on our collective social history.
|
Triple Alliance, secret agreement between Germany, Austria-Hungary, and Italy formed in May 1882 and renewed periodically until World War I. Germany and Austria-Hungary had been closely allied since 1879. Italy sought their support against France shortly after losing North African ambitions to the French. The treaty provided that Germany and Austria-Hungary were to assist Italy if it were attacked by France without Italian provocation; Italy would assist Germany if Germany were attacked by France. In the event of a war between Austria-Hungary and Russia, Italy promised to remain neutral. This abstention would have the effect of freeing Austrian troops that would otherwise have been needed to guard the Austrian-Italian border.
When the treaty was renewed in February 1887, Italy gained an empty promise of German support of Italian colonial ambitions in North Africa in return for Italy’s continued friendship. Austria-Hungary had to be pressured by German chancellor Otto von Bismarck into accepting the principles of consultation and mutual agreement with Italy on any territorial changes initiated in the Balkans or on the coasts and islands of the Adriatic and Aegean seas. Italy and Austria-Hungary did not overcome their basic conflict of interest in that region, the treaty notwithstanding. On November 1, 1902, five months after the Triple Alliance was renewed, Italy reached an understanding with France that each would remain neutral in the event of an attack on the other. Although the alliance was again renewed in 1907 and 1912, Italy entered World War I in May 1915 in opposition to Germany and Austria-Hungary.
|
‘Dialogic’ is related to the word ‘dialogue’ and describes ‘having a conversation’. In education, dialogic reading refers to the conversation adults have with students around the text they are reading together.
Adults typically ask questions and make comments to help young readers explore the shared story at a deeper level.
Because it is the dialogue that happens around a story that results in the most learning, the medium of the story is far less important. Adults can practise dialogic reading with their children whether they are reading a paper story or an app. The key here is that the story progresses via ‘page turns’ (or the digital equivalent thereof). When the reader is able to control the pace at which they read the story, they are afforded more time to reflect.
Kathy Hirsh-Pasek, a professor of psychology at Temple University and an author of “Einstein Never Used Flashcards,” has studied use of e-books and other electronics by parents and children and said the lesson of all the studies is that what really matters is the back-and-forth relationship.
“Look for something that’s active, engaging, meaningful and interactive,” she said. “The bad news and the good news is, you can’t outsource learning to an app, but the good news is there’s really room for us, two minutes of time, five minutes of time, look into our children’s eyes, have the conversation.” (Perri Klass)
Dialogic ‘reading’ can also happen when adults and children watch TV and movies together, but it’s more difficult to find quiet spots in which to have a conversation, as stories told on screen are not inherently designed for the pause.
EXAMPLES OF DIALOGIC QUESTIONS
The key to asking questions which prompt dialogue: ‘might’ and ‘think’. Don’t ask what will happen next. Ask what the child thinks might happen next. The young reader needs reassurance that there are no wrong answers. We are never wrong about what we think might happen. A mixture of open and closed (yes/no) questions is fine and natural. Just be sure to include the open questions.
- What do you think this word means?
- Can you see the [Green Sheep]?
- What do you think might happen next?
- What is this pig doing?
- Why did she do that?
- Can you make a cow sound?
- Do you think he wants to share the hat?
- Would you ever want to keep a moose as a pet?
Including questions about feelings helps students to become emotionally literate. Once you start engaging with children about the books you’re reading together, you’ll find there are some books they like more than others.
Although children are often positively engaged with picture-books, they may also respond with negative comments. While some educators or other adults may interpret this as dislike for the book, Sipe and McGuire (2006b) report that young children’s oppositional responses or resistance to picture books read aloud can reveal ways in which this audience perceives the book’s literary and visual elements and their relationships as they convey meaning. From their observations of interactive read-alouds, the researchers identified six conceptual categories that describe how and why children may express resistance to a picture book:
- Intertextual Resistance. A new version of the story is different from the version with which the child is familiar.
- Preferential or Categorical Resistance. Children attribute their dislike of the book to some personal construct: e.g., the book is for someone much younger or the topic is boring.
- Reality Testing. Children perceive a conflict between the world of the story and their understanding of reality.
- Engaged or Kinetic Resistance. Events or characters in the story are rejected because they are painful for the reader to consider.
- Exclusionary Resistance. Children can’t imagine themselves in the story.
- Literary Critical Resistance. Children have constructed criteria of what makes a good story, and the present story doesn’t meet those criteria.
Sipe and McGuire (2006b) theorize that “the match or mismatch between the assumptions and perspective embedded in the text [and/or the art] and those held by the reader contribute to the expression of resistance” (p. 10). The researchers suggest that “Observant teachers can capitalize on instances of resistance…to initiate deeper discussions” (p. 10) about both text and art in picture-books. Such discussions encourage students to learn from one another’s perspectives, to question and develop rationales for artists’ and authors’ decisions, and to create their own versions of stories and images. Resistance is not the same as indifference; rather, it is another form of engagement that is likely to inspire considerable interest and analysis among readers. (Recent Research on Picturebooks)
Header painting: Jessie Willcox Smith (1863 – 1935), American illustrator
|
Research methodology defined
Methodology can be defined as a body of methods, rules, and postulates employed by a discipline: a particular procedure or set of procedures. The way in which research is conducted may be conceived of in terms of the research philosophy subscribed to. Research methodology is thus a systematic way to solve a problem; it is the science of studying how research is to be carried out, essentially the procedures by which researchers go about describing and explaining phenomena.

Several related terms follow from this. A research population is a well-defined collection of individuals or objects of interest, and a sample group is a subset selected from that population. An operational definition is how the researcher decides to measure a variable. Research itself can be defined as a careful or diligent search: a systematic investigative process employed to increase or revise current knowledge by discovering new facts.

Quantitative research, which relies on tightly defined measures, is usually put in opposition to qualitative research, a method built on open-ended responses; methodology and findings can only be compared when measures are well defined. A clinical trial, for example, involves research participants and follows a pre-defined plan or protocol to evaluate the effects of a medical or behavioral intervention on health.

Discussion of the research approach is a vital part of any scientific study, regardless of the research area, and belongs within the methodology chapter of a dissertation. Understanding the significance of the scientific method is the key concept underlying research methodology.
|
Walnut trees (Juglans spp) are deciduous trees belonging to the walnut family (Juglandaceae), which includes hickory trees (Carya). The 21 Juglans species all provide edible walnuts which can be further divided into the following:
- English/Persian walnuts (Juglans section, containing only J. regia)
- Black walnuts (Rhysocaryon section, typically J. nigra)
- Butternuts (Trachycaryon section, containing only J. cinerea)
- Heartnuts (Cardiocaryon section).
The name "walnut" derives is a corruption of the French "Gaul nut" and are native to Asia Minor and southeastern Europe. Most are grown for their beauty, timber and some for their edible nuts. Walnuts produce both male and female flowers on the same tree which open at different times; because of their offset flowering cycles, trees must be cross-pollinated. The flowers are green, with males growing in long catkins and females forming small clusters. Walnut leaves are one- to two-feet long, pinnate, and formed of 2"-4" ovoid lance-shaped leaflets. Walnuts are typically 2"-3" in diameter and surrounded by a fleshy, greenish husk during the summer; by fall, they ripen and drop from the trees.
Black walnuts seem to be all over Davis. Black walnut wood is reddish in color and is frequently used in joinery because of its rot resistance and ease of use in woodworking. Squirrels, rats and crows drop and bury the nuts, which sprout in spring. Though the English walnut is a non-native species, California produces about 20% of the world's English walnut crop and is the only US state producing commercially; most of this production comes from the Central Valley, and about 50% of it is controlled by the Sun Diamond growers' co-op headquartered in Stockton. Most walnut tree species will flourish in well-drained, loamy soil, and should be pruned between June and December to encourage the formation of a single, central shoot.
|
Adding Another Dimension to Computer Simulations
Four-dimensional space is a difficult concept, but the idea is driving a new wave of programming today. Individuals familiar with August Ferdinand Möbius’ research know that an additional dimension would allow a three-dimensional form to be rotated onto its mirror image; Möbius also gave us the one-sided surface that bears his name, the Möbius strip. While computer algorithms that truly simulate scalable four-dimensional space are still in their infancy, they’re already making a big splash.
Mobius Strip. Credit: http://paulbourke.net/geometry/mobius/
It’s important to remember that abstract mathematical concepts need not describe the actual universe. Texts on theoretical physics use “four-dimensional space” to describe the phenomenon of three-dimensional objects moving through time, and this physical notion of a fourth dimension is far different from the purely spatial one defined by computer scientists. While additional dimensions are valid mathematical constructs, they have little to do with the world around us. Software merely produces two-dimensional output anyway, so it’s safe to assume that nothing a TV screen shows is going to break the space-time continuum.
Image Credit: John Hopkins
Computers provide mathematicians with the opportunity to produce very complex geometrical forms. In three dimensions, polyhedra are made up of distinct two-dimensional polygons. Four-dimensional space grants engineers the freedom to create polychora made up of three-dimensional polyhedra. While this might be complicated, it’s actually useful outside of the world of mathematical research.
Mapping Euclidean space gives scientists the opportunity to produce stereographic projection diagrams of theoretical objects like the Clifford torus. This could be useful in the construction of space colonies, for instance. Puzzles based around 120-cell hecatonicosachoron objects became popular for a time, and illustrate the advantages of constructing objects in a virtual world.
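As a rough illustration of how such stereographic projection diagrams can be produced, the sketch below samples the Clifford torus on the unit 3-sphere and projects the points into ordinary 3D space; the grid resolution and choice of projection pole are arbitrary.

```python
import numpy as np

# Sample the Clifford torus: points (cos a, sin a, cos b, sin b)/sqrt(2)
# all lie on the unit 3-sphere in four-dimensional space.
a, b = np.meshgrid(np.linspace(0, 2 * np.pi, 40),
                   np.linspace(0, 2 * np.pi, 40))
pts4 = np.stack([np.cos(a), np.sin(a), np.cos(b), np.sin(b)], -1) / np.sqrt(2)

# Stereographic projection from the pole (0, 0, 0, 1):
# (x, y, z, w) -> (x, y, z) / (1 - w). Since |w| <= 1/sqrt(2) here,
# the denominator never vanishes.
x, y, z, w = np.moveaxis(pts4, -1, 0)
proj = np.stack([x, y, z], -1) / (1 - w)[..., None]
print(proj.shape)   # (40, 40, 3): a torus-shaped surface in 3D
```

Feeding `proj` to any 3D plotting library produces the familiar doughnut-shaped picture of the Clifford torus.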
Average computer users probably aren’t too interested in this type of research either. They might be more pleased to hear that four-dimensional simulations are revolutionizing video games. While virtual reality might not actually be the future, a simulation of it very well could be.
Edwin A. Abbott popularized the concept of different dimensions in fiction, and Marc Ten Bosch’s independent video game is taking it to the next level. Miegakure is a platformer essentially set in a three-dimensional environment, but players can pass through walls by stepping into an additional dimension. The game has yet to be released to the general public, but it illustrates the possibilities programmers have when they leave the confines of our limited universe. Just as an author isn’t limited when writing a novel, computer programmers can create simulations that aren’t defined by what real individuals can and cannot do.
|
13th Amendment to the U.S. Constitution
The 13th Amendment to the Constitution declared that “Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.” Formally abolishing slavery in the United States, the 13th Amendment was passed by the Congress on January 31, 1865, and ratified by the states on December 6, 1865.
Library of Congress Web Site | External Web Sites | Selected Bibliography
A Century of Lawmaking for a New Nation
This collection contains congressional publications from 1774 to 1875, including debates, bills, laws, and journals.
References to debate on the 13th Amendment (S.J. Res. 16) can be found in the Congressional Globe on the following dates:
- March 31, 1864 – Debated in the Senate (S.J. Res. 16).
- April 4, 1864 – Debated in the Senate.
- April 5, 1864 – Debated in the Senate.
- April 6, 1864 – Debated in the Senate.
- April 7, 1864 – Debated in the Senate.
- April 8, 1864 – The Senate passed the 13th Amendment (S.J. Res. 16) by a vote of 38 to 6.
- June 14, 1864 – Debated in the House of Representatives.
- June 15, 1864 – The House of Representatives initially defeated the 13th Amendment (S.J. Res. 16) by a vote of 93 in favor, 65 opposed, and 23 not voting, which is less than the two-thirds majority needed to pass a Constitutional Amendment.
- December 6, 1864 – Abraham Lincoln’s Fourth Annual Message to Congress was printed in the Congressional Globe: “At the last session of Congress a proposed amendment of the Constitution, abolishing slavery throughout the United States, passed the Senate, but failed for lack of the requisite two-thirds vote in the House of Representatives. Although the present is the same Congress, and nearly the same members, and without questioning the wisdom or patriotism of those who stood in opposition, I venture to recommend the reconsideration and passage of the measure at the present session.“
- January 6, 1865 – Debated in the House of Representatives (S.J. Res. 16).
- January 7, 1865 – Debated in the House of Representatives.
- January 9, 1865 – Debated in the House of Representatives.
- January 10, 1865 – Debated in the House of Representatives.
- January 11, 1865 – Debated in the House of Representatives.
- January 12, 1865 – Debated in the House of Representatives.
- January 13, 1865 – Debated in the House of Representatives.
- January 28, 1865 – Debated in the House of Representatives.
- January 31, 1865 – The House of Representatives passed the 13th Amendment (S.J. Res. 16) by a vote of 119 to 56.
- February 1, 1865 – President Abraham Lincoln signed a Joint Resolution submitting the proposed 13th Amendment to the states.
- December 18, 1865 – Secretary of State William Seward issued a statement verifying the ratification of the 13th Amendment.
Abraham Lincoln Papers at the Library of Congress
The complete Abraham Lincoln Papers at the Library of Congress consists of approximately 20,000 documents. The collection is organized into three “General Correspondence” series which include incoming and outgoing correspondence and enclosures, drafts of speeches, and notes and printed material. Most of the 20,000 items are from the 1850s through Lincoln’s presidential years, 1860-65.
A selection of highlights from this collection is available online.
Search the Abraham Lincoln Papers using the phrase “13th amendment” to locate additional documents on this topic.
The Alfred Whital Stern Collection of Lincolniana
This collection documents the life of Abraham Lincoln both through writings by and about Lincoln as well as a large body of publications concerning the issues of the times including slavery, the Civil War, Reconstruction, and related topics.
From Slavery to Freedom: The African-American Pamphlet Collection, 1822-1909
This collection presents 396 pamphlets from the Rare Book and Special Collections Division, published from 1822 through 1909, by African-American authors and others who wrote about slavery, African colonization, Emancipation, Reconstruction, and related topics.
Chronicling America: Historic American Newspapers
This site allows you to search and view millions of historic American newspaper pages from 1836 to 1922. Search this collection to find newspaper articles about the 13th Amendment.
A selection of articles on the 13th Amendment includes:
- “Freedom Triumphant,” New-York Daily Tribune. (New York, NY), February 1, 1865.
- “Glory to God! The Constitutional Amendment Passed the House by a Vote of 119 to 56,” Fremont Journal. (Fremont, OH), February 3, 1865.
- “The Constitutional Amendment,” The Daily Phoenix. (Columbia, SC), December 14, 1865.
- “The Official Announcement of the Adoption of the Constitutional Amendment–Opinions of the Leading Press,” Daily National Republican. (Washington, D.C.), December 21, 1865.
Constitution of the United States of America: Analysis and Interpretation
The Constitution of the United States of America: Analysis and Interpretation (popularly known as the Constitution Annotated) contains legal analysis and interpretation of the United States Constitution, based primarily on Supreme Court case law. This regularly updated resource is especially useful when researching the constitutional implications of a specific issue or topic. It includes a chapter on the 13th Amendment. (PDF, 201 KB)
The African-American Mosaic
This exhibit marks the publication of The African-American Mosaic: A Library of Congress Resource Guide for the Study of Black History and Culture. This exhibit is a sampler of the kinds of materials and themes covered by this publication. Includes a section on the abolition movement and the end of slavery.
African American Odyssey: A Quest for Full Citizenship
This exhibition showcases the African American collections of the Library of Congress. Displays more than 240 items, including books, government documents, manuscripts, maps, musical scores, plays, films, and recordings. Includes a brochure from an exhibit at the Library of Congress to mark the 75th Anniversary of the 13th Amendment.
American Treasures of the Library of Congress: Abolition of Slavery
An online exhibit of the engrossed copy of the 13th Amendment as signed by Abraham Lincoln and members of Congress.
The Civil Rights Act of 1964: A Long Struggle for Freedom
This exhibition, which commemorates the fiftieth anniversary of the landmark Civil Rights Act of 1964, explores the events that shaped the civil rights movement, as well as the far-reaching impact the act had on a changing society.
American Memory Timeline: The Freedmen
The Emancipation Proclamation and Thirteenth Amendment freed all slaves in the United States. This page links to related primary source documents.
The Collected Works of Abraham Lincoln, Abraham Lincoln Association
Documents from Freedom: A Documentary History of Emancipation, 1861-1867, University of Maryland
End of Slavery: The Creation of the 13th Amendment, HarpWeek
“I Will Be Heard!” Abolitionism in America, Cornell University Library, Division of Rare and Manuscript Collections
Landmark Legislation: Thirteenth, Fourteenth, & Fifteenth Amendments, U.S. Senate
Mr. Lincoln and Freedom, The Lincoln Institute
Our Documents, 13th Amendment to the U.S. Constitution, National Archives and Records Administration
Proclamation of the Secretary of State Regarding the Ratification of the Thirteenth Amendment, National Archives and Records Administration
Proposed Thirteenth Amendment Regarding the Abolition of Slavery, National Archives and Records Administration
The Thirteenth Amendment, National Constitution Center
Avins, Alfred, comp. The Reconstruction Amendments’ Debates: The Legislative History and Contemporary Debates in Congress on the 13th, 14th, and 15th Amendments. Richmond: Virginia Commission on Constitutional Government, 1967. [Catalog Record]
Hoemann, George H. What God Hath Wrought: The Embodiment of Freedom in the Thirteenth Amendment. New York: Garland Pub., 1987. [Catalog Record]
Holzer, Harold, and Sara Vaughn Gabbard, eds. Lincoln and Freedom: Slavery, Emancipation, and the Thirteenth Amendment. Carbondale: Southern Illinois University Press, 2007. [Catalog Record]
Maltz, Earl M. Civil Rights, the Constitution, and Congress, 1863-1869. Lawrence, Kan.: University Press of Kansas, 1990. [Catalog Record]
Richards, Leonard L. Who Freed the Slaves?: The Fight Over the Thirteenth Amendment. Chicago: The University of Chicago Press, 2015. [Catalog Record]
Tsesis, Alexander, ed. The Promises of Liberty: The History and Contemporary Relevance of the Thirteenth Amendment. New York: Columbia University Press, 2010. [Catalog Record]
—–. The Thirteenth Amendment and American Freedom: A Legal History. New York: New York University Press, 2004. [Catalog Record]
Vorenberg, Michael. Final Freedom: The Civil War, the Abolition of Slavery, and the Thirteenth Amendment. Cambridge; New York: Cambridge University Press, 2001. [Catalog Record]
Biscontini, Tracey and Rebecca Sparling, eds. Amendment XIII: Abolishing Slavery. Detroit: Greenhaven Press, 2009. [Catalog Record]
Burgan, Michael. The Reconstruction Amendments. Minneapolis: Compass Point Books, 2006. [Catalog Record]
Schleichert, Elizabeth. The Thirteenth Amendment: Ending Slavery. Springfield, N.J.: Enslow Publishers, 1998. [Catalog Record]
|
Learning Korean alphabet symbols? Discover the history and decipher the Korean alphabet to learn the language...
The Korean language has a distinctive alphabet. It is known as Hangul in South Korea and Joseongeul in North Korea, making it the native alphabet of the region. It is completely different from the logographic Chinese characters used for Sino-Korean vocabulary. It originated in the middle of the 15th century and over time became the official script of both South Korea and North Korea.
It can best be described as a ‘phonemic’ alphabet organized into syllable blocks. The alphabet comprises 24 basic letters, of which 14 are consonants and 10 are vowels, and each syllable block combines at least one consonant and one vowel. The beauty of the script is that it can be written vertically in columns running from top to bottom, with the columns ordered from right to left, or horizontally in rows read from left to right.
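Unicode encodes this block structure arithmetically, which makes the composition easy to demonstrate. The sketch below builds a syllable from jamo indices using the standard Hangul syllable formula (the precomposed syllables start at code point U+AC00):

```python
# Hangul syllable = 0xAC00 + (initial * 21 + medial) * 28 + final,
# with 19 initial consonants, 21 medial vowels, and 28 finals
# (final index 0 meaning "no final consonant").
JUNGSEONG, JONGSEONG = 21, 28

def compose(initial, medial, final=0):
    return chr(0xAC00 + (initial * JUNGSEONG + medial) * JONGSEONG + final)

# initial 18 = ㅎ, medial 0 = ㅏ, final 4 = ㄴ  ->  the block "한" (han)
print(compose(18, 0, 4))
```

One consonant and one vowel (plus an optional final consonant) stack into a single square block, which is exactly the structure this arithmetic mirrors.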
The earlier Korean alphabet had several more letters, but over time some of these jamo became obsolete.
There have been many official names for the script, starting with the one created in 1912 by Ju Sigyeong. The name breaks into two parts: the first syllable, han, means ‘great’ in Korean, while the second syllable, geul, is the Korean word for ‘script’. ‘Hangul’ is the spelling used in South Korea under the Revised Romanization of the language. The alphabet was considered vulgar, the script of the illiterate, all the way up to the first quarter of the 20th century; the elite and literate class preferred the Chinese-character writing system known as Hanja for a more polished effect. Today, however, Hanja has almost completely disappeared from North Korea and is rarely used in South Korea anymore.
The History of Korean Alphabet Symbols
The Korean alphabet known as Hangul was created by Sejong the Great who was part of the Joseon dynasty and ruled as the fourth King representing that dynasty. By early 1444 the alphabet was completed and then documented in Hunmin Jeongeum, which was written by the Hall of Worthies.
The National Holiday that Commemorates the Korean Alphabet
In North Korea, January 15 is celebrated as the national Hangul Day, while the same celebrations are observed in South Korea on 9 October every year. The Hunmin Jeongeum document is a comprehensive explanation of the design of each consonant letter with reference to its articulatory phonetics, along with the vowels, which follow the yin-yang principle and create harmony in the alphabet.
Ease of Use of the Korean Alphabet
The document clearly differentiated this alphabet from the Chinese script and explained it in detail. In fact, before the alphabet was created, a large part of the local population was illiterate. The foresight of the fourth King was evident in the creation of an alphabet that could be learned by even the most unlettered person, emancipating everyone in Korea in terms of literacy.
|
During 1980-81, the UK entered a recession, with falling output, rising unemployment and a fall in the inflation rate. The recession particularly hit the manufacturing sector. It was caused by high interest rates, an appreciation in Sterling and tight fiscal policy.
In 1979, the incoming Conservative government inherited an economy with inflation in double figures. (April 1979 inflation was 10.1%). Also, many industries were considered inefficient, and trades unions were powerful. There had been a winter of discontent with many strikes taking place in the late 1970s. See: Economy of the 1970s
UK economic growth 1979-1995
On coming to power, the government adopted a Monetarist approach to try and tackle the various economic problems of the UK.
In particular, their overriding macroeconomic objective was to reduce inflation which peaked in 1980 at around 18%.
UK Inflation Since 1979
To Reduce Inflation, the Government
- Raised interest rates (a tightening of monetary policy)
- Tightened fiscal policy to reduce the budget deficit. This involved increasing taxes and restricting government spending. Higher taxes reduced disposable income and therefore reduced consumer spending. (Monetarists believe that a budget deficit is likely to cause inflation because, to finance it, the government may be tempted to increase the money supply.)
- Stuck to strict money supply targets for M3. Monetarist theory states that inflation is caused by excess growth in the money supply; therefore, it is necessary to control the growth of the money supply to reduce inflation (see the sketch below this list).
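As a back-of-the-envelope sketch of that monetarist logic (with hypothetical figures, not the government's actual model), the quantity theory of money, MV = PQ, implies that if velocity V is stable, inflation is roughly money growth minus real output growth:

```python
# Quantity theory: M * V = P * Q. Holding velocity V constant,
# the growth rates satisfy  %dP ~= %dM - %dQ.
def implied_inflation(money_growth, real_output_growth):
    return money_growth - real_output_growth

# Hypothetical illustration: 15% M3 growth with 2% real growth
# implies roughly 13% inflation, hence the case for strict targets.
print(implied_inflation(0.15, 0.02))   # 0.13
```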
The effect of these policies was:
- A reduction in consumer spending, investment and exports, and therefore a decline in aggregate demand and hence economic growth.
- An increase in the exchange rate (alongside higher interest rates, the production of oil in the North Sea caused a rapid appreciation in Pound Sterling). Exporters struggled to cope with the rapid appreciation because it made their exports less competitive, and many industries that relied on exports went bankrupt. Manufacturing output fell by almost a third during the recession.
- Inflation was brought under control
- Rise in unemployment. Unemployment rose to over 3 million and didn’t fall below 3 million until 1986.
The recession of 1980-81 caused a sharp rise in unemployment. Initially this was demand-deficient unemployment, but it also caused structural unemployment, with many people who lost jobs in manufacturing struggling to find new work.
- Real GDP fell by 2.2% in 1981
The government argued that the recession was necessary to shake up the economy and get rid of inefficient firms. It is true that some firms were inefficient, but most economists would argue that the recession was deeper than it needed to be: with the rapid appreciation of Sterling, many good firms also went bankrupt.
Targeting the money supply proved futile because money supply growth was very erratic and there was no direct link between inflation and the money supply. By 1985 the government had effectively abandoned monetary targets.
Trade union power was reduced both by legislation and by the decline of the manufacturing firms where trade unions used to be very strong.
Last updated 11 Nov 2017
|
As we mark the 60th anniversary of Brown v. Board of Education, we know how poor America’s public school students are. We also know from the Census and a recent Southern Poverty Foundation report how dramatically poverty among public school students has grown in the past decade. Student poverty makes it incredibly hard to improve student and school performance, given its link to chronic absence, housing instability, difficulty attracting and retaining strong teachers, and insufficient school resources.
In addition to growing poverty, we can see how much ground we have lost since the 1960s and 1970s in desegregating our schools. They’re intensely racially segregated not only in former Jim Crow states like Mississippi, but in New York, which now has the most segregated schools in the entire country—as measured by students’ exposure to peers of other races.
What is most critical, however, is how racial and income segregation interact with one another. Indeed, William Julius Wilson’s seminal 1987 book, The Truly Disadvantaged, jumpstarted an entire body of research on this issue. Recently, Richard Rothstein and Patrick Sharkey discussed both neighborhood- and school-level links between segregation, poverty, and related factors that particularly harm black and brown families and children. Their work prompted the Economic Policy Institute and the Broader Bolder Approach to Education to explore what that interaction looks like for kids who are starting school now; our new paper uses data from US children who entered kindergarten in the 2010-2011 school year.
Our findings affirm those of Wilson, Rothstein, and Sharkey: due to racial segregation, minority status conveys multiple disadvantages. Chief among them, black and Hispanic kindergartners are disproportionately in schools in which the majority of their peers are poor. (The definition of poverty in this paper is that used by many policymakers to establish eligibility for many government supports: 200% of the federal poverty line, or less than about $37,000 annually for a family of three.)
If our kindergarten classrooms were not economically and racially segregated, we would expect most students to be in classes in which about a quarter of their peers were low-income, since overall about 25 percent of all kindergartners are from low-income households. But in our segregated society, classrooms don’t look like that at all: three in five white students are in classrooms in which just over 10 percent of their classmates are poor. This means they are likely to be in schools that do not face obstacles like classmates whose lack of preparation demands extra teacher attention, or peers whose hunger and toothaches prompt them to act out and disrupt class. They are less likely to suffer from shuttered school libraries, counselors who must each support 1,000 students, or a lack of nurses to treat ordinary and emergency medical needs, all of which are increasingly common in low-income and heavily minority schools.
For black and brown students, the story is flipped: Only 11 percent of Hispanic and 7 percent of black students make it into such low-poverty kindergarten classrooms; most are in classrooms in which at least 75 percent of their peers are minorities, and the majority of those peers are poor. More than 56 percent of black students, and more than 55 percent of Hispanic students, enter kindergarten classes in which half of the kids are poor. Moreover, one-third of their classmates do not speak English at home, and the percentage of their peers’ mothers who have at least a bachelor’s degree is in the single digits. Less than 5 percent of white kindergartners attend schools facing these multiple disadvantages.
This pattern of concentrating black and Hispanic children in our poorest schools poses major obstacles to attaining the integrated schools and equal access to opportunity that our democracy demands. Reducing child poverty must be our ultimate goal, but if today’s students are to reap the benefits of schools with a diverse mix of peers, we must immediately enact education policies focused on deconcentrating poverty.
Revamping “choice” to incentivize integration by promoting socioeconomically mixed schools – at the federal, state, and local levels – would be a good start. For example, laws that authorize and evaluate charter schools could make socioeconomic integration a key metric, and districts that encourage choice among schools should also establish integration as a criterion for students who want to move out of their neighborhoods. At least one example suggests it’s good policy all around: students in Chicago’s non-selective magnet schools – which tend to integrate rather than further segregate students – see larger test score gains than their charter school peers. Finally, the obsessive focus on test scores as a measure not only of student, but of school success, has exacerbated segregation by unfairly weakening and stigmatizing schools. Dialing that pressure back in federal and state policies would also promote integration. Policies such as these would help ensure that all schools are well-resourced, attractive options for parents, and conducive spaces for children to learn.
Elaine Weiss is the National Coordinator of the Broader, Bolder Approach to Education (BBA). She works with BBA’s three co-chairs and a diverse task force to shift the focus of education policy discussions from achievement gaps, which are symptoms of our education problems, to opportunity gaps – the root of the problem, in BBA’s view – and to advance policies that address poverty-related impediments to teaching and learning.
|
Asia is currently the region most affected by acidification.
Acid rain primarily results from the transformation of sulphur dioxide (SO2) and nitrogen oxides into sulphuric acid (H2SO4), ammonium nitrate (NH4NO3) and nitric acid (HNO3). The transformation of SO2 and NO2 to acidic particles and vapours occurs as these pollutants are transported in the atmosphere over distances of hundreds to thousands of kilometres. Wet deposition is acid rain, the process by which acids with a pH normally below 5.6 are removed from the atmosphere in rain, snow, sleet or hail. The gases can then be converted into acids when they contact water.
The pH scale is used to measure the amount of acid in a liquid such as water. Because acids release hydrogen ions, the acid content of a solution is based on the concentration of hydrogen ions and is expressed as "pH." This scale is used to measure the acidity of rain samples.
The smaller the number on the pH scale, the more acidic the substance is. Rain measuring below about 5.6 on the pH scale is acidic and is therefore called "acid rain." Small number changes on the pH scale actually mean large changes in acidity: each whole pH unit represents a tenfold change.
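Because pH is the negative base-10 logarithm of the hydrogen-ion concentration, that tenfold relationship is easy to verify; the concentrations below are illustrative values only.

```python
import math

# pH = -log10 of the hydrogen-ion concentration in mol/L.
def ph(hydrogen_ion_concentration):
    return -math.log10(hydrogen_ion_concentration)

clean_rain = 2.5e-6   # mol/L: even clean rain is mildly acidic (~pH 5.6)
acid_rain = 2.5e-5    # ten times the hydrogen ions

print(round(ph(clean_rain), 1))   # 5.6
print(round(ph(acid_rain), 1))    # 4.6 -- one unit lower, 10x more acidic
```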
Causes of Acid rain
Sulphur dioxide (SO2) is generally a by-product of industrial processes and burning of fossil fuels. Ore smelting, coal-fired power generators and natural gas processing are the main contributors.
The main source of NOx emissions is the combustion of fuels in motor vehicles, residential and commercial furnaces, industrial and electrical-utility boilers and engines, and other equipment.
Effect of acidic rain on aquatic ecosystems
Most biological life survives best within a narrow range of pH levels, near neutral or 7.0. Aquatic vegetation and animal life vary in their susceptibility to changes in pH; some species are more acid-tolerant than others. Species higher up the food chain that rely on these organisms for food will also be affected. If pH levels drop below 5.0, most fish species are affected.
Effect of acidic rain on soils and plant growth
Some plants are tolerant of acidic conditions, while others are not. Acidic soils may affect microorganisms in the soil, which play important roles in plant growth, and acidity affects the availability of nutrients that are essential for plant growth. Nitrogen is a nutrient, and at certain levels nitrogen deposition from air emissions has increased the growth of vegetation; at higher levels, however, excess nutrients can reduce plant growth, and plant leaves get burnt and dry.
Effect of acid rain on buildings and materials
Acidic rain is corrosive to metals and to alkaline building materials such as marble and limestone. Urban areas subject to high levels of automobile exhaust and other sources of acidic rain have experienced significant weathering of statues and building materials. An important example is the Taj Mahal, which looks darkened and yellowed due to acid rain caused by a nearby oil refinery.
Effect of acid rain on health
Acidic rain does not affect human health directly; however, the particulate matter associated with acid rain has been shown to have adverse health effects, particularly among those with respiratory disorders. There is also some concern that acidic rain could contribute to the leaching of toxins such as mercury, which could be carried by runoff into bodies of water, adding to environmental sources of this toxin.
The following are some more specific suggestions on what you, as an individual, can do:
In the home
* Run the washing machine with a full load.
* Hang dry some-or all-of the laundry.
* Buy energy-efficient appliances.
* Avoid the use of air conditioners altogether.
* Turn out the lights in empty rooms and when away from home.
* Consider installing compact fluorescent bulbs instead of high-wattage incandescent bulbs.
* If you have a forced-air furnace, change or clean its filters at least once a year.
* Avoid burning trash or leaves
* Look for products bearing the EcoLogo. They minimize the use of environmentally hazardous substances and maximize energy efficiency and the use of recycled materials.
* Buy locally produced or grown items from local stores and businesses. They don't require the transportation energy of imported products.
* Walk, ride your bike or take a bus to work.
* Share a ride with a friend or co-worker.
* Have your engine tuned at least once every six months.
* Check your car tyre pressure regularly.
* Use alternative fuels, such as ethanol, propane or natural gas.
* Avoid unnecessary idling.
* Reduce the number of trips you make in your car.
* Drive at moderate speeds.
* Take the train or bus on long trips.
* Go CFC-free. Control emissions from your vehicle and check it regularly.
Conserve energy. Reduction in demand for oil and coal reduces the amount of acid rain.
Contributed by Nitish Kulkarni, Std. 6th, 11 yrs.
|
As you know, to an observer on earth, the entire sky appears to rotate from East to West during the course of a night (actually during the day too, but the only celestial object we can see during the day is the sun), which is really due to the fact that the Earth is rotating.
Now imagine that, at the same time every night for a year--say, at midnight--you were to go outside and make a perfect sketch of every object you could see in the sky: stars and planets.
What you would soon notice is that, while the stars appear to be in the same position relative to one another *every night*, the planets actually change their positions from night to night. That is, the planets appear to *move* relative to the stars, and this motion is fast enough to become noticeable after only a few days of observations in most cases.
What we mean by "retrograde motion" is this: *most* of the time, this motion of the planets relative to the fixed stars is West to East. During retrograde motion, however, a planet appears to reverse course and drift East to West for a time before resuming its usual eastward path.
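A minimal simulation makes this visible. The sketch below assumes circular, coplanar orbits (real orbits are elliptical and slightly inclined) and tracks the direction of Mars's apparent longitude as seen from the moving Earth:

```python
import numpy as np

AU_EARTH, AU_MARS = 1.0, 1.52   # orbital radii in astronomical units
YR_EARTH, YR_MARS = 1.0, 1.88   # orbital periods in years

# Two Earth years of nightly "sketches", with positions in the orbital
# plane represented as complex numbers.
t = np.linspace(0.0, 2.0, 730)
earth = AU_EARTH * np.exp(2j * np.pi * t / YR_EARTH)
mars = AU_MARS * np.exp(2j * np.pi * t / YR_MARS)

# Apparent (geocentric) longitude of Mars: the angle of the
# Earth-to-Mars line, unwrapped so it varies smoothly.
longitude = np.unwrap(np.angle(mars - earth))

# Nights on which the longitude decreases are nights of retrograde
# (East-to-West) motion against the fixed stars.
retrograde = np.diff(longitude) < 0
print(f"retrograde on {retrograde.sum()} of {len(t) - 1} nights")
```

Running this shows Mars spending a minority of nights in retrograde, clustered around opposition, which is exactly the pattern your nightly sketches would reveal.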
|
Every foreign language makes use of certain preferred “bandwidths” or frequency zones.
These differences are caused by the different acoustic impedances of air. Depending on altitude, vegetation, humidity, and other geographical factors, the air will propagate certain frequencies and attenuate others. Spoken language naturally modulates around and according to these frequencies, giving rise to different languages and also different accents. For example, in England the air is an excellent conductor of high frequencies but attenuates the lower frequencies. It is thus typically more difficult for an Englishman to speak French, where the frequency band is very narrow and completely outside the English bandwidth.
The narrowness of the French band explains why the French in general have difficulty learning foreign languages and especially English which is entirely in the high frequencies. The Slavs have a greater aptitude for foreign languages owing to their very wide bandwidth.
Two muscles in the middle ear are mobilized during the listening process (as opposed to the mechanism of audition which is purely passive).
It is thanks to these muscles that we can select and analyse the sounds to which we want to listen. For each foreign language, the muscles of the ear work in a different manner, adapted to the frequencies of the language.
Depending on age and personal history, the ear will be more or less closed to certain sounds. Even after long stays abroad, many people have the greatest difficulty understanding and speaking a foreign language correctly, even when their grammar, vocabulary and knowledge of the written language are correct.
How to accelerate foreign language learning
The ear can be trained, very quickly, to enter into the bandwidth of another language through audio-vocal training. Listening machine simulators with an audio-psycho-phonological effect are programmed not only to make the ear work in the frequencies and rhythms of the desired language but also to correct any distortions in the individual ear of the subject. After a sufficient number of sessions, the ear is capable of decoding the sounds, enabling their instantaneous reproduction with an ever-decreasing accent and distortion. The language in the process becomes less and less “foreign”, blockages are removed, and the memorisation of words and grammar is achieved more quickly.
|
All people, regardless of their ancestry or heritage, come with unique histories and worldviews. These are integral to their personal identities – so much so that it is often difficult for them to state exactly what their histories and worldviews are and how they affect how they think and act on a daily basis. Very simply, the term “Worldview” refers to how people see and interpret the world and their place within it. This perspective is influenced by family, culture, personal experience and education.
What is sometimes referred to as Traditional Knowledge includes an immense body of cultural knowledge that can include: mythic stories of the days when animals could talk; detailed knowledge of the land and its resources; and practical information about hunting areas, trapping techniques, food preparation, etc. While this term is usually applied to aboriginal cultures, every society possesses a body of traditional knowledge that it transmits in various ways.
Code of Conduct
The terms “law” and “code” suggest a formal legal system. The reality is both simpler and more complex. Cultural practices based on reciprocity, kinship, and commonly held values made it possible for people to live together on the land. Everyone was cared for wherever they went, and everyone was expected to care for themselves and contribute to the greater good of the whole group.
Principles and Values
Most of our principles are based on respect and the give and take of social interaction. Our values reflect the way our community has developed through the millennia.
Traditionally the people who lived at the mouth of the Klondike River spoke a dialect of Hän, a language spoken in an area centred on the Yukon River drainage in the western Yukon and eastern Alaska. Today our First Nation includes people whose ancestors spoke Gwich’in, Northern Tutchone and other languages. Most of these are part of the Athapaskan family of languages that extends across the north and can even be found in parts of the USA in languages such as Navajo and Apache.
The View from Here
Our worldview centres on core beliefs and ideas that are still relevant to us today. Many of our citizens still live according to these core values and it can be difficult for them to engage in activities that run counter to these values. In our self-government we are finding ways to adapt this worldview to contemporary realities, realizing that these current beliefs may also have to be adapted.
|
Each chapter in each book has its own assignment. Each assignment includes a vocabulary list of words used in that chapter. We present the words not in alphabetical order, but in the order you will hear them. Instead of providing definitions, we thought you might want the listener to look each word up, or put the words in alphabetical order, or write their own definitions based on what they think each word might mean (from context) and then look it up and see how they did.
Each vocabulary list is followed up with questions related to the content of the chapter. Our goal is to ask at least one question in each of the following categories: History, Geography, Language, and Character.
After the questions, we provide suggestions for Activities the listener might do that will reinforce the information in that chapter.
|
Back-of-the-Envelope Calculations: Volume of the Earth and Sun
Suppose you and your friends wanted to make a scale model of the Earth and the Sun. You start by cutting a one-inch cube of Play-Doh to represent the volume of the Earth.
- How many one-inch Play-Doh cubes would you have to cut in order to represent the volume of the Sun at the same scale?
- If you stacked the blocks up into a cube, how big would the cube be?
- And, finally, if you and all your friends mashed and shaped that huge cube into a sphere, and you made a sphere out of the Earth cube as well, how far away from your Play-Doh Sun would you have to hold your scale Earth to match the true scale of the solar system?
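One way to check all three questions numerically (a sketch of my own, using round values for the radii and the Earth-Sun distance):

```python
import math

R_EARTH_KM = 6_371        # mean radius of Earth
R_SUN_KM = 696_000        # mean radius of the Sun
AU_KM = 149_600_000       # Earth-Sun distance

# Volume scales as the cube of the radius, so the number of 1-inch
# Earth-cubes needed for the Sun is simply (R_sun / R_earth)**3.
n_cubes = (R_SUN_KM / R_EARTH_KM) ** 3
print(f"Play-Doh cubes needed: {n_cubes:,.0f}")           # ~1.3 million

# Stacked into a single cube, the side is the cube root of that count.
side_in = n_cubes ** (1 / 3)
print(f"Side of stacked cube: {side_in:.0f} inches (~{side_in / 12:.0f} ft)")

# A 1 cubic inch Earth-ball has radius (3/(4*pi))**(1/3) inches.
r_model_earth_in = (3 / (4 * math.pi)) ** (1 / 3)
scale = r_model_earth_in / R_EARTH_KM          # model inches per real km
dist_in = AU_KM * scale
print(f"Model Earth-Sun distance: {dist_in:,.0f} in (~{dist_in * 0.0254:,.0f} m)")
```

The ratio of radii is about 109, so roughly 1.3 million one-inch cubes are needed, the stacked cube is about nine feet on a side, and the model Earth must be held roughly 370 metres from the Play-Doh Sun.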
References and Resources
This SERC page describes the use of Back of the Envelope Calculations
The Back of the Envelope : This page outlines one of the essays in the book "Programming Pearls" (ISBN 0-201-65788-0). The book is written for computer science faculty and students, but this portion speaks very well to back of the envelope calculations in general.
|
A satellite S is in a geosynchronous orbit; that is, it stays over the same point T on Earth as Earth rotates on its axis. From the satellite, a spherical cap of Earth is visible. The circle bounding this cap is called the horizon circle. The line of sight from the satellite is tangent to a point Q on the surface of Earth. If the radius of Earth, CQ, is 3963 miles and the satellite is 23300 miles from the surface of the Earth at Q, what is the radius of the horizon circle, PQ, in miles?
I've been staring at this problem for hours. I'm completely stumped.
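One standard route through the problem (a sketch, not an official solution): since the line of sight is tangent at Q, triangle CQS has a right angle at Q, and P, the centre of the horizon circle, lies on segment CS. The Pythagorean theorem gives SQ, and equating the two expressions for the triangle's area gives PQ:

```python
import math

# Right triangle CQS: the tangent line SQ is perpendicular to radius CQ.
CQ = 3963              # Earth's radius, miles
CS = 3963 + 23300      # centre of Earth to satellite, miles

SQ = math.sqrt(CS**2 - CQ**2)   # line of sight, by Pythagoras
# PQ is the altitude from Q to the hypotenuse CS (P lies on CS), so
# (1/2)*CQ*SQ = (1/2)*CS*PQ, giving PQ = CQ*SQ/CS.
PQ = CQ * SQ / CS
print(f"SQ = {SQ:.1f} mi, horizon-circle radius PQ = {PQ:.1f} mi")
```

So the horizon circle has a radius of roughly 3,921 miles.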
|
Major content provider: U.S. National Cancer Institute
Muscle tissue is composed of cells that have the special ability to shorten or contract in order to produce movement of the body parts. The tissue is highly cellular and is well supplied with blood vessels. The cells are long and slender so they are sometimes called muscle fibers, and these are usually arranged in bundles or layers that are surrounded by connective tissue. Actin and myosin are contractile proteins in muscle tissue.
Muscle tissue can be categorized into skeletal muscle tissue, smooth muscle tissue, and cardiac muscle tissue.
Skeletal muscle fibers are cylindrical, multinucleated, striated, and under voluntary control. Smooth muscle cells are spindle shaped, have a single, centrally located nucleus, and lack striations. They are called involuntary muscles. Cardiac muscle has branching fibers, one nucleus per cell, striations, and intercalated disks. Its contraction is not under voluntary control.
|
Ten steps to establishing a PBL-friendly culture
To experts in the field of human performance, there is no mystery as to why PBL succeeds—or doesn’t. Three decades of research has established the factors that maximize individual effort and the desire to achieve:
- Caring relationships. Whether growing up in a household, studying in school, or working in a job, people perform better when they feel cared for and attended to. A caring relationship begins with recognizing and respecting the autonomy of the individual.
- The desire for meaning and purpose. Human beings work harder when they have a goal and purpose. The goal must be relevant to the person’s needs and desires.
- The power of mastery. Achievement is a natural state of being. People enjoy doing tasks well, and feel intrinsic rewards that sustain more effort.
Carefully-designed projects tap into these intangibles. That is the core strength of PBL; it can inspire drive, passion, and purpose in students.
- Trust. Trust encourages peak cognition and intelligent behavior. Successful PBL depends very much on your belief that young people desire to learn and will perform well when respected by an adult and guided appropriately.
- Use the language of peak performance. IQ is malleable and performance is driven by self-fulfilling belief systems. Students who move from a ‘fixed mindset’ to a ‘growth mindset’ will believe in themselves, and in their creative potential. Your language will shape their beliefs.
- Treat ‘soft’ skills as ‘hard’ skills. Writing an essay or solving a math problem is traditionally regarded as a ‘hard’ skill, while communicating with someone who disagrees with you is a ‘soft’ skill. The reverse is actually true: Communication and collaboration are the most difficult of human skills—and need to be taught and practiced relentlessly.
- Expect mastery. Setting high expectations for academic performance is usual in good teaching. But setting high expectations for performance is crucial in PBL. Expect students to communicate and collaborate according to the standards of high performing industries.
- Train the imagination. Teaching innovation, problem solving, and creativity to the global generation is now a primary goal. Creativity will soon be valued as a basic skill and has been identified as the number one leadership competency of the future. Use creativity exercises, encourage brainstorming and—most important—design projects that challenge the imagination.
- Encourage peak performance. Currently, we have no measure for peak performance in schools. But you can design rubrics with a ‘breakthrough’ category—a blank column that invites students to deliver a product that cannot be anticipated or easily defined in words. The breakthrough column goes beyond the A, rewarding innovation, creativity, and unusual performance—a kind of ‘wow’ column.
- Pass along the 10,000 hour rule. Mastering a skill at a high level takes 10,000 hours of practice. Your students aren’t likely to put that many hours into Algebra 1. But let them know that practice works—and the more they practice, the better they will be. Most important, let them know that achievement comes from hard work, not a special gene for brilliance.
- Teach to the iceberg. Remember that the deeper self—the domain of creativity and motivation—is not immediately accessible or public. Think in terms of an iceberg. Below the tip of the iceberg is 90% of the human being. If we want skillful, motivated creators, we need to pay attention to empathy, bias, and all the normal variations in a young person’s emotional makeup. Take time and care to surface the deeper aspects of learning.
- Be aware of your ‘emotional content.’ PBL involves ‘up close and personal’ teaching. As you work side by side with students, they will closely observe your own attitude toward skills, lifelong learning, and emotional balance. Be aware. Be positive.
- Do the small things. Small acts of kindness and respect can leverage larger shifts in your classroom culture. Stand at the door and greet students at the beginning of the period. Wish them well as they exit. Reward them with unexpected five-minute breaks when they perform well. Celebrate on occasion.
And let’s add an eleventh step for good measure:
- No ‘teacher’ talk. Sarcasm and put-downs by teachers are all too common in classrooms. Be firm when necessary—but don’t question character or use a tone of voice that a respected friend would find offensive. This violates the first rule of performance: care.
|
Changes in land use and management affect the amount of carbon in plant biomass and soils. Historical cumulative carbon losses due to changes in land use have been estimated to be 180 to 200 PgC by comparing maps of "natural" vegetation in the absence of human disturbance (derived from ground-based information (Matthews, 1983) or from modelled potential vegetation based on climate (Leemans, 1990)) to a map of current vegetation derived from 1987 satellite data (de Fries et al., 1999). Houghton (1999, 2000) estimated emissions of 121 PgC (approximately 60% in tropical areas and 40% in temperate areas) for the period 1850 to 1990 from statistics on land-use change, and a simple model tracking rates of decomposition from different pools and rates of regrowth on abandoned or reforested land. There was substantial deforestation in temperate areas prior to 1850, and this may be partially reflected in the difference between these two analyses. The estimated land-use emissions during 1850 to 1990 of 121 PgC (Houghton, 1999, 2000) can be compared to estimated net terrestrial flux of 39 PgC to the atmosphere over the same period inferred from an atmospheric increase of 144 PgC (Etheridge et al., 1996; Keeling and Whorf, 2000), a release of 212 PgC due to fossil fuel burning (Marland et al., 2000), and a modelled ocean-atmosphere flux of about -107 PgC (Gruber, 1998, Sabine et al., 1999, Feely et al., 1999a). The difference between the net terrestrial flux and estimated land-use change emissions implies a residual land-atmosphere flux of -82 PgC (i.e., a terrestrial sink) over the same period. Box 3.2 indicates the theoretical upper bounds for additional carbon storage due to land-use change, similar bounds for carbon loss by continuing deforestation, and the implications of these calculations for atmospheric CO2.
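The flux bookkeeping in that last step can be checked directly; here is a minimal sketch using the figures quoted above (fluxes into the atmosphere positive):

```python
# Global carbon bookkeeping for 1850-1990, PgC figures as quoted above.
atmospheric_increase = 144   # Etheridge et al. 1996; Keeling and Whorf 2000
fossil_fuel = 212            # Marland et al. 2000
ocean_flux = -107            # modelled ocean uptake (negative = out of atmosphere)
land_use_emissions = 121     # Houghton 1999, 2000

# Mass balance: increase = fossil + net_terrestrial + ocean
net_terrestrial = atmospheric_increase - fossil_fuel - ocean_flux
residual_sink = net_terrestrial - land_use_emissions
print(f"net terrestrial flux: {net_terrestrial:+} PgC")                  # +39 PgC
print(f"residual land-atmosphere flux: {residual_sink:+} PgC (a sink)")  # -82 PgC
```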
Box 3.2: Maximum impacts of reforestation and deforestation on atmospheric CO2.
Rough upper bounds for the impact of reforestation on atmospheric CO2 concentration over a century time-scale can be calculated as follows. Cumulative carbon losses to the atmosphere due to land-use change during the past 1 to 2 centuries are estimated as 180 to 200 PgC (de Fries et al., 1999) and cumulative fossil fuel emissions to year 2000 as 280 PgC (Marland et al., 2000), giving cumulative anthropogenic emissions of 480 to 500 PgC. Atmospheric CO2 content has increased by 90 ppm (190 PgC). Approximately 40% of anthropogenic CO2 emissions has thus remained in the atmosphere; the rest has been taken up by the land and oceans in roughly equal proportions (see main text). Conversely, if land-use change were completely reversed over the 21st century, a CO2 reduction of 0.40 x 200 = 80 PgC (about 40 ppm) might be expected. This calculation assumes that future ecosystems will not store more carbon than pre-industrial ecosystems, and that ocean uptake will be less because of lower CO2 concentration in the atmosphere (see the discussion of ocean carbon uptake elsewhere in this chapter).
A higher bound can be obtained by assuming that the carbon taken up by the land during the past 1 to 2 centuries, i.e. about half of the carbon taken up by the land and ocean combined, will be retained there. This calculation yields a CO2 reduction of 0.70 x 200 = 140 PgC (about 70 ppm). These calculations are not greatly influenced by the choice of reference period. Both calculations require the extreme assumption that a large proportion of today's agricultural land is returned to forest.
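The arithmetic behind both bounds is compact enough to verify in a few lines (a sketch using the box's own cumulative figures):

```python
# Back-of-envelope bounds from the box, using the quoted cumulative figures.
land_use_loss = 200     # PgC, cumulative land-use losses (de Fries et al., 1999)
fossil_to_2000 = 280    # PgC, cumulative fossil emissions (Marland et al., 2000)
atmos_increase = 190    # PgC, the observed ~90 ppm rise in atmospheric CO2

airborne_fraction = atmos_increase / (land_use_loss + fossil_to_2000)
print(f"airborne fraction: {airborne_fraction:.2f}")                 # ~0.40

# Lower bound: reversing land use removes only the airborne share of 200 PgC.
print(f"lower bound: {airborne_fraction * land_use_loss:.0f} PgC")   # ~80 PgC

# Higher bound: assume the past land sink (roughly another 30%) is retained,
# so ~70% of the 200 PgC comes back out of the atmosphere.
print(f"higher bound: {0.70 * land_use_loss:.0f} PgC")               # 140 PgC
```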
The maximum impact of total deforestation can be calculated in a similar way. Depending on different assumptions about vegetation and soil carbon density in different ecosystem types (Table 3.2) and the proportion of soil carbon lost during deforestation (20 to 50%; IPCC, 1997), complete conversion of forests to climatically equivalent grasslands would add 400 to 800 PgC to the atmosphere. Thus, global deforestation could theoretically add two to four times more CO2 to the atmosphere than could be subtracted by reforestation of cleared areas.
Land use responds to social and economic pressures to provide food, fuel and wood products, for subsistence use or for export. Land clearing can lead to soil degradation, erosion and leaching of nutrients, and may therefore reduce the subsequent ability of the ecosystem to act as a carbon sink (Taylor and Lloyd, 1992). Ecosystem conservation and management practices can restore, maintain and enlarge carbon stocks (IPCC, 2000a). Fire is important in the carbon budget of some ecosystems (e.g., boreal forests, grasslands, tropical savannas and woodlands) and is affected directly by management and indirectly by land-use change (Apps et al., 1993). Fire is a major short-term source of carbon, but adds to a small longer-term sink (<0.1 PgC/yr) through production of slowly decomposing and inert black carbon.
Deforestation has been responsible for almost 90% of the estimated emissions due to land-use change since 1850, with a 20% decrease of the global forest area (Houghton, 1999). Deforestation appears to be slowing slightly in tropical countries (FAO, 1997; Houghton, 2000), and some deforested areas in Europe and North America have been reforested in recent decades (FAO, 1997). Managed or regenerated forests generally store less carbon than natural forests, even at maturity. New trees take up carbon rapidly, but this slows down towards maturity when forests can be slight sources or sinks (Buchmann and Schulze, 1999). To use land continuously in order to take up carbon, the wood must be harvested and turned into long-lived products and trees must be re-planted. The trees may also be used for biomass energy to avoid future fossil fuel emissions (Hall et al., 2000). Analyses of scenarios for future development show that expanded use of biomass energy could reduce the rate of atmospheric CO2 increase (IPCC 1996b; Leemans et al., 1996; Edmonds et al., 1996; Ishitani et al., 1996; IPCC, 2000a). IPCC (1996b) estimated that slowing deforestation and promoting natural forest regeneration and afforestation could increase carbon stocks by about 60 to 87 PgC over the period 1995 to 2050, mostly in the tropics (Brown et al., 1996).
Savannas and grasslands - fire and grazing
Grasslands and mixed tree-grass systems are vulnerable to subtle environmental and management changes that can lead to shifts in vegetation state (Scholes and Archer, 1997; House and Hall, 2001). Livestock grazing on these lands is the land use with the largest global areal extent (FAO, 1993a). Extensive clearing of trees (for agricultural expansion) has occurred in some areas. In other areas, fire suppression, eradication of indigenous browsers and the introduction of intensive grazing and exotic trees and shrubs have caused an increase in woody plant density known as woody encroachment or tree thickening (Archer et al., 2001). This process has been estimated to result in a CO2 sink of up to 0.17 PgC/yr in the USA during the 1980s (Houghton et al., 1999) and at least 0.03 PgC/yr in Australia (Burrows, 1998). Grassland ecosystems have high root production and store most of their carbon in soils where turnover is relatively slow, allowing the possibility of enhancement through management (e.g., Fisher et al., 1994).
Peatlands/wetlands are large reserves of carbon, because anaerobic soil conditions and (in northern peatlands) low temperatures reduce decomposition and promote accumulation of organic matter. Total carbon stored in northern peatlands has been estimated as about 455 PgC (Gorham, 1991) with a current uptake rate in extant northern peatlands of 0.07 PgC/yr (Clymo et al., 1998). Anaerobic decomposition releases methane (CH4) which has a global warming potential (GWP) about 23 times that of CO2 (Chapter 6). The balance between CH4 release and CO2 uptake and release is highly variable and poorly understood. Draining peatlands for agriculture increases total carbon released by decomposition, although less is in the form of CH4. Forests grown on drained peatlands may be sources or sinks of CO2 depending on the balance of decomposition and tree growth (Minkkinen and Laine, 1998).
Conversion of natural vegetation to agriculture is a major source of CO2, not only due to losses of plant biomass but also to increased decomposition of soil organic matter caused by disturbance, and to the energy costs of various agricultural practices (e.g., fertilisation and irrigation; Schlesinger, 2000). Conversely, the use of high-yielding plant varieties, fertilisers, irrigation, residue management and reduced tillage can reduce losses and enhance uptake within managed areas (Cole et al., 1996; Blume et al., 1998). These processes have led to an estimated increase of soil carbon in agricultural soils in the USA of 0.14 PgC/yr during the 1980s (Houghton et al., 1999). IPCC (1996b) estimated that appropriate management practices could increase carbon sinks by 0.4 to 0.9 PgC/yr, or a cumulative carbon storage of 24 to 43 PgC over 50 years; energy efficiency improvements and production of energy from dedicated crops and residues would result in a further mitigation potential of 0.3 to 1.4 PgC/yr, or a cumulative carbon storage of 16 to 68 PgC over 50 years (Cole et al., 1996).
The IPCC Special Report on Land Use, Land-Use Change and Forestry (IPCC, 2000a) (hereafter SRLULUCF) derived scenarios of land-use emissions for the period 2008 to 2012. It was estimated that a deforestation flux of 1.79 PgC/yr is likely to be partly offset by a reforestation and afforestation flux of -0.20 to -0.58 PgC/yr, yielding a net release of 1.59 to 1.20 PgC/yr (Schlamadinger et al., 2000). The potential for net carbon storage from several "additional activities" such as improved land management and other land-use changes was estimated to amount to a global land-atmosphere flux in the region of -1.3 PgC/yr in 2010 and -2.5 PgC/yr in 2040, not including wood products and bioenergy (Sampson et al., 2000).
|
Tinnitus is the perception of sound in the ears or head not caused by an external sound source. Ringing and buzzing sounds may be heard in one or both ears, or seem to come from somewhere in the head region; the apparent location can be variable and difficult to pin down.
Almost always, it is a totally subjective noise which only the person who has it can hear. On rare occasions, it can be heard by others as well; this is called objective tinnitus but is not associated with the effects of noise exposure.
It’s not an illness or a disease in itself, but it is often a symptom of a problem with the ear or the hearing pathways to the brain. Usually, it occurs when the inner ear is damaged or impaired in some way.
Common causes include exposure to loud noise, and it can also be a side-effect of medication or a result of other health concerns, such as high blood pressure. It is also commonly associated with age-related hearing loss, although it can affect anyone at any age.
It is often described as a "ringing in the ears," but what people with this condition hear is extremely variable. Some people hear hissing, whooshing, roaring, whistling or clicking. It can be intermittent or constant, single or multiple tones or more noise-like. Probably the most common description for noise-induced tinnitus is a high pitched tone or noise.
The volume or loudness is very individual and can range from very quiet to disturbingly loud. Some people say that it comes and goes, or that the tone changes pitch through the day, but for most it is a steady, unchanging noise every waking minute.
Tinnitus is not a disease itself or a cause of hearing loss. It is a symptom that something is wrong somewhere in the auditory system, which can include the cochlea of the inner ear, the auditory nerve and the areas of the brain that process sound. In about 90% of cases, it accompanies hearing loss and an individual can have both hearing loss and tinnitus from noise damage. However the two do not always occur together. It is possible to have no measurable hearing loss but suffer from the condition.
The World Health Organisation (WHO) now lists tinnitus as a distinct disorder and states that noise exposure is a major cause of permanent hearing loss around the world.
Recent research confirms that noise-induced hearing loss is the second most common form of hearing loss, after age-related hearing loss.
Persistent tinnitus is experienced by approximately 10% of the adult UK population. Prevalence increases with age, but the condition is common in all age groups, especially following exposure to loud noise. About half of those who live with it find it moderately or severely distressing, and about 0.5% of adults in the UK (242,000 people) report that it severely affects their ability to lead a normal life.
About 8% of the population actively seek medical advice, accounting for approximately 750,000 primary care consultations in England each year. Sufferers may experience debilitating symptoms such as anxiety, depression or sleep disturbance, yet only about 2.5% attend hospital for this purpose.
It can be confusing and even frightening when it occurs for the first time, but it is rarely a symptom of a serious disorder. If it lasts for longer than a week, or if it is affecting your concentration, sleep or anxiety levels, book an appointment with your GP or with your local Amplifon Audiologist.
In some cases, the problem can be managed with relaxation exercises. There are also specialist hearing solutions available that can provide soothing tones to distract you from the noise of it.
For more information, visit the British Tinnitus Association, which can provide support and authoritative information, much of it written by medical and audiology professionals or clinical researchers. Its support network can also put you in touch with other people who share similar experiences.
To learn more about the causes, symptoms and treatments, refer to our individual guides.
|
In preschool, science concepts focus on plants, animals and weather. At home, reinforce these concepts by sharing a science activity with your child. Weed the flowerbed together, and as you dig up dandelions point out the plant's roots, leaves, stem and seeds. Dissect a tulip together, and show your child the petals, stamens and pistil. While in the flowerbed, look for worms and bees and talk about their role in the garden. In preschool, science activities utilize the senses, so be sure to encourage your child to touch, smell and observe. Kids' toys and games, such as bug jars and magnifying glasses, are excellent tools for future flowerbed science activities.
|
Pluto hogs the spotlight in the continuing scientific debate over what is and what is not a planet, but a less conspicuous argument rages on about the planetary status of massive objects outside our solar system. The dispute is not just about semantics, as it is closely related to how giant planets like Jupiter form.
When a star passes within a certain distance of a black hole, the stellar material gets stretched and compressed as the black hole swallows it, briefly releasing an enormous amount of energy as a flare. Astronomers have now observed infrared light echoes from these “stellar tidal disruption” events reflected by dust encircling a black hole.
A matter of scientific speculation since the 1930s, dark matter itself cannot yet be detected, but its gravitational effects can be. Now, eight scientists from Johns Hopkins University consider the possibility that the first black hole binary detected by LIGO could be part of this mysterious substance known to make up about 85 percent of the mass of the universe.
Astrophysicists from Germany and America have for the first time measured the rotation periods of stars in a cluster nearly as old as the Sun. It turns out that these stars spin once in about twenty-six days — just like our Sun. This discovery significantly strengthens what is known as the solar-stellar connection, a fundamental principle that guides much of modern solar and stellar astrophysics.
Galaxies reached their busiest star-making pace about 11 billion years ago, then slowed down. Scientists have puzzled for years over the question of what happened. Now researchers have found evidence supporting the argument that the answer was energy feedback from quasars within the galaxies where stars are born.
Delta (δ) Cephei, the prototype of the Cepheids, which has given its name to all similar variable stars, was discovered 230 years ago by the English astronomer John Goodricke. The Cepheids' role as distance calibrators for more than a century has made δ Cephei one of the most studied stars, yet astronomers were shocked to discover that they lacked an essential piece of information…
|
Developing Arts Literacies:
Understanding Genres, Analyzing and Evaluating - Critique
Composing and Planning, Producing, Executing and Performing
Connecting to History and Culture
Students will learn about the traditional Mexican musical form of corridos, which dates back to the 1800s and continues to be very popular. They will analyze the themes and literary devices used in corridos such as "El Corrido de Gregorio Cortez" and "El Moro de Cumpas". The lesson will culminate in students writing their own corridos based on the traditional form.
- Analyze corridos to gain a sense of the traditional form.
- Analyze theme and literary devices in corridos.
- Write original corridos based on the traditional form.
- Project-Based Learning
- Arts Integration
- Discovery Learning
- Experiential Learning
What You'll Need
Teachers should familiarize themselves with the corrido genre and its place in history using the following sources:
- Keen, Benjamin. A History of Latin America. 7th Edition. Houghton Mifflin Company, 2003.
- Paredes, Américo. A Texas-Mexican Cancionero. Austin: University of Texas Press, 1976.
- Paredes, Américo. With his Pistol in his Hand. Austin: University of Texas Press, 1958.
See the ArtsEdge How-to: Turning Students into Songwriters: Tips on Writing Corrido Lyrics for helpful guidelines on writing lyrics. Inspire students by sharing corridos written by fellow high school students. Winners of the annual Bilingual Corrido Contest in Arizona, a program conducted by the University of Arizona Poetry Center, have written some excellent corridos. See the 2003 winner, "El rancho de los pinos" by Julianna Echerivel Prieto, a corrido about a family that gathers on Sundays to spend time together, and "El corrido de caballo con hambre y sed" by Adriana Aguilar, a corrido about a child who is tasked with feeding a hungry, thirsty horse. Both student corridos and additional examples of corridos are available on the ARTSEDGE Look-Listen-Learn resource, Corridos.
Prior Student Knowledge
Students should be familiar with the geography and general history of Mexico. Students should be familiar with current events.
Small Group Instruction
Students with visual impairments or disabilities may need modified handouts or texts.
Resources in Reach
Here are the resources you'll need for each activity, in order of instruction.
1. Begin with a free-writing exercise (or if using class journals, ask students to write in their journals). Ask students to describe what their everyday life is like. Then ask them to write about a time when their everyday life was disrupted in some way–anything from a humorous anecdote to a significant event.
2. Ask students how the language they used in their freewriting exercise may differ from essays they turn in as homework assignments or from business letters. Explain that many forms of literature are written in everyday, ordinary language (i.e., poems by Langston Hughes, contemporary slam poets, etc.); the Mexican corrido is one example of a literary tradition written in everyday language.
1. Pass out the What is the Corrido? information sheet located in the Resource Carousel and discuss the characteristics of corridos. Explain that the corrido is a type of ballad or short narrative, a story usually based in real life. Ballads have been written in cultures all over the world, and the form dates back to the 14th and 15th centuries. The ballad has roots in the oral tradition, and thus the form is simple and direct, and uses ordinary, everyday speech and dialogue. The subjects in ballads tend to be about lost love and recent events. Some traditional corridos, in particular, tended to focus on events due to the clashing of cultures—that of the United States and Mexico. However, almost any subject can be the focus of a corrido.
2. Explain that although most traditional corridos were written about historical events (wars and revolutions) and heroes (John F. Kennedy and Fernando Valenzuela), and major catastrophes (earthquakes and train wrecks), many corridos were written about the common aspects of everyday life, and the ways that everyday life is disrupted. Subjects of such corridos have included the struggles and joys in relationships and employment, the characteristics of a hometown or region, and stories of individuals who defend themselves from outside forces.
Corridos sung along the U.S.-Mexico border in the 19th and early 20th centuries, for instance, often dealt with conflict between the U.S. and Mexico that affected their daily lives.
3. Pass out the lyrics to the famous corrido, "El Corrido de Gregorio Cortez" and the "Story of Gregorio Cortez" information sheets located in the Resource Carousel. Provide students with some historical context surrounding Cortez's story. Cortez epitomized the border hero for Mexicans and Mexican-Americans because he represented a man who stood on principles and defended his rights against the rinches, the name given to the Texas Rangers (a "law enforcement" group that was founded in 1823 to fight Native Americans). The Texas Rangers had achieved worldwide fame as a fighting force during the Mexican-American War, but when the war ended, the Rangers no longer had an official function since it was up to the U.S. military to defend Texas. The Texas Rangers continued to participate in fights with Mexican nationals. In 1916, Pancho Villa raided Columbus, New Mexico, and intensified tensions between Anglos and Hispanics. The Rangers, along with hundreds of special Rangers appointed by Texas governors, killed approximately 5,000 Mexicans and Mexican Americans between 1914 and 1919. Stories of brutality and injustice among the Rangers were common.
4. Discuss how Gregorio Cortez is depicted as a border hero in the corrido. Ask students whether they think he would be such a hero if he were not a farm hand and vaquero, but an outlaw prior to his encounter with the sheriff. Although Cortez's story is a more extreme example of conflict between cultures, Mexican and Mexican-American people could relate to Cortez's struggle because it mirrored their own everyday struggles living under poor employment and economic conditions, and their own conflicts with the rinches. Discuss how the theme of "his pistol in his hand" is linked to the oppression of Mexicans by the United States.
5. Examine how the lyricist glorifies Cortez through simile (comparison of two unlike objects with the word "like" or "as") and hyperbole (exaggeration for emphasis) in "El Corrido de Gregorio Cortez." Ask students for examples ("leaped out of the corral", "His voice was like a bell", "trying to catch Cortez/ Was like following a star.") Remind students that the characteristics of heroes in many literary traditions (i.e., tall tales) are often depicted in hyperboles (obvious exaggeration for effect).
1. Inform students that corridos were usually written in a timely fashion in response to current events. Pass out the lyrics to the corrido "El Moro de Cumpas" by Leonardo Yañez, which tells the story of a very famous horse race that took place in 1957 in the town of Agua Prieta, Mexico, which borders Douglas, Arizona. Composer Leonardo Yañez (nicknamed "El Nano"), a member of the Mariachi Copacabana, wrote this corrido after watching the race from the finish line. The handout is located above under 'Resources in Reach'. "El Moro de Cumpas" is one of the best known corridos; almost every singer of this tradition knows it. This corrido also served as the inspiration for a feature-length commercial film about the horse race it documents.
2. Tell students that the horse is an important symbol in Mexican culture. First of all, horses are essential to the cattle industry, a widespread source of employment in many parts of Mexico, including the northern border states. Secondly, without horses, the Spanish conquistadors would not have been able to defeat the native peoples and occupy their land. Point out that a horse is also important in "El Corrido de Gregorio Cortez" since without the horse, he would not have been able to outrun the rinches. Also, in rural Mexico, the horse used to be an important means of transportation. Horseracing became a popular form of entertainment for Mexicans and Mexican Americans, and remains so today. In Mexican horse races, usually only two horses are raced against each other.
3. In corridos about horse races, the horses are often given human characteristics that describe them as brave, respected men. And although there is one winner in the competition, both participants are honored. Read the lyrics of "El Moro de Cumpas" aloud. You may also wish to play an excerpt of the corrido for your students (see the ArtsEdge Look-Listen-Learn resource Corridos.) Ask students to point out instances in the lyrics in which the horses were given human characteristics (i.e., El Moro is described as handsome, both horses are "two seekers after triumph").
4. Discuss the themes of the corrido. One theme involves the way people are prone to make judgments based on appearances. Point out the lines "Everyone kept saying / that that horse came / especially to win." Ask students why everyone thought El Moro was going to win (they were charmed by his good looks). Point out that Relámpago surprised many of the bettors ("relámpago" translates to "lightning").
5. Discuss the theme of gambling in the poem. Note how many people from Agua Prieta and neighboring towns bet on the match. Discuss the allure of gambling in general, including at American horse and dog tracks as well as the lottery and casinos. Ask students if they think economic conditions in the border town of Agua Prieta may have influenced more people to gamble. Tell students that many who bet on Moro lost not only money but also vehicles and ranches. Discuss how the descriptions of the horses and the horserace reveal a general respect for horses. Gambling is not criticized in the lyrics, for example. The horses are lauded for their beauty and speed. Ask students how the description of the horses reflects the important position of the horse in Mexican culture.
6. Ask students how Yañez builds suspense in this corrido (by waiting until the last two stanzas to state the winner of the race). Discuss how writers are able to create "page turners" through suspense.
7. Challenge students to write their own corridos. You may wish to encourage them to write on whatever they would like, or provide some options (see the What is a Corrido? handout, located above under 'Resources in Reach' for potential themes). The main criterion is that the corrido should be centered on an event, character, or story that is happening in the present time. Tell students that they must follow the form of traditional corridos and use colloquial language. You may also wish to ask students to make their corridos suspenseful.
1. Organize a corrido concert, asking students to read the lyrics of their songs aloud—or better yet, to sing or perform them if a student in your class plays an appropriate instrument (i.e., guitar, accordion). See the ArtsEdge How-to: There's a Song in Everyone: Tips on Composing a Simple Corrido for useful guidelines on helping students to compose music.
Assess the students based on the following criteria:
- Evidence of understanding through insightful and frequent participation in class discussions.
- Evidence of understanding of the corrido form.
- Wrote an original corrido in the traditional form that reflects the student's understanding of the corrido tradition.
You may also use the Assessment Rubric located above under Resources in Reach.
|
In ionic bonds one atom is positively charged (the atom that gives up its electrons), and the other atom is negatively charged (the atom that receives the electrons); therefore, it is the opposite charge of the ions that attracts them to one another and holds them together. This electrostatic force is quite strong, which accounts for why ionic bonds are difficult to break apart. Metals bonded to nonmetals compose the vast majority of ionic bonds, where the metal is the cation (giver of electrons) and the nonmetal is the anion (taker of electrons). The formation of an ionic bond is an exothermic process.
The atoms are held together by opposite charges.
|
Palestinian children are taught through formal and informal education to see a world in which "Palestine" exists and replaces all of Israel. Two children's quizzes broadcast last week on Fatah-controlled Palestinian Authority television show how the PA routinely teaches its children to identify all Israeli cities as Palestinian cities.
In the two quizzes on PA TV, children were asked to identify “Palestinian” cities. The suggested answers were exclusively Israeli cities: Haifa, Jaffa, Acre and Tiberias.
The following are the transcripts of the PA TV quizzes:
Quiz 1: Haifa, Jaffa and Acre are “Palestine”
Host reads clue: “Where is Palestine's most important port, in Haifa, Jaffa or Acre?”
Host: “Is it correct?” [Checks computer.]
[PATV (Fatah), Aug. 26, 2009]
Quiz 2: Jaffa, Acre and Tiberias are “Palestine”
Host reads clue: “There's a Palestinian city whose walls are very high and strong, and Napoleon, whom we all know, stopped his battle, because he was unable [to breach] the solid walls.
Which city is it, Jaffa, Acre or Tiberias?”
Host: “Applause, bravo!”
[PATV (Fatah), Aug. 30, 2009]
The following are examples from PA schoolbooks, which also teach Palestinian children to imagine a world without Israel:
"Coastal states differ in terms of their access to water sources, such as...: states located on sea coasts with accesses to two seas, for example: Palestine and Egypt to the Mediterranean Sea and the Red Sea."
[Physical Geography and Human Geography, Grade 12, p. 105]
"Palestine has a long coast facing the Mediterranean Sea and a short coast on the Gulf of Aqaba."
[Health and Environment Studies, Grade 8 (2003), p. 130, the Israeli city of Eilat is on the Gulf of Eilat (Aqaba) - Ed.]
"The Tiberias Lake [Sea of Galilee], in Palestine"
[Physical Geography, Grade 5, p. 25]
The following are examples of previous PA TV quizzes teaching the identical message of a world without Israel:
Child host: “List three Palestinian ports… we have the Haifa port, Jaffa, Ashkelon, Eilat, Ashdod & Gaza.” [Note: All are Israeli cities except Gaza.]
Child host: “What is the size of the state of Palestine?”
(On phone) Haidar: “27,000 sq. km.”
[Note: The size of the West Bank and Gaza is 6,220 sq. km. The figure of 27,000 sq. km. includes all of Israel.]
Child host: “Name three countries bordering Palestine.”
(On phone) Muhammad: “Lebanon, Jordan, Egypt.”
[Note: Only Israel borders Lebanon.]
Child host: “The Palestinian borders overlook two important seas. What are their names?”
(On phone) Lama: “The Mediterranean and the Red Sea.”
[Note: The Red Sea borders Israel's southern tip.]
Child host: “What's the name of the only sweet-water lake in Palestine?”
On phone, Ayyam: “The Tiberias Sea [the Sea of Galilee].”
[Note: The Sea of Galilee is in Israel.]
[PATV (Fatah), Sept. 1, 2008]
Child host: “Where is Ein Harod [Israeli town]?”
Child: “In Palestine.”
Child host: “Good. Which mountain is the tallest in Palestine? And where is it located?”
Child: “Mount Meron [in Israel].”
Child host: “That’s half an answer, where is it located? In Nablus, Hebron or Galilee?”
Child: “In the Galilee [in Israel].”
Child host: “Correct answer. The tallest mountain in Palestine is Mount Meron, in the Upper Galilee, east of the mountain is the city of Safed [Israeli city], the capital of northern Palestine.”
[PA TV (Fatah), Sept. 9, 2008]
Child host: "What is the name of the desert in Palestine?"
Nahad: "The Negev desert" [southern Israel]
Child host: "Correct answer."
[PATV (Fatah), Sept. 18, 2008]
Child host: "Which of the Palestinian cities is named after the Roman ruler, the Emperor Tiberius?"
Child: "Tiberias." [Israeli city]
Child host: "Your answer is correct."
[PATV (Fatah), Sept. 20, 2008]
Child host: “Yesterday’s riddle: O bride of Palestine, the most beautiful in the garden, O you who sit on the shore waiting for time to return. Don’t cry, my dear, beautiful daughter of Canaan … And the answer is: Jaffa. [Israeli city – part of Tel Aviv]”
Child host: “Which Palestinian city is called ‘the flower of Galilee’? The possibilities are: Tiberias, Nazareth, Acre.” [All are Israeli cities]
[PA TV (Fatah) Sept. 15, 2008]
|
SpaceX have a striking video showing Mars spinning faster and faster, transforming from the current red Mars to a planet with a small ocean and with the deserts tinged with green in seven revolutions.
Of course that is poetic exaggeration - it wouldn't terraform in a week. So how long would it take? Science fiction enthusiasts who have read Kim Stanley Robinson's "Mars Trilogy" may remember that in his books, it is terraformed in a couple of centuries. But that's science fiction, not a terraforming blueprint.
Is it possible at all, and how long do scientists think it would take? The answer is that if it is possible, it would take thousands of years. Here is how they envision it in the Mars Society - a thousand years to get to trees, water and a landscape where humans can go out of doors without a spacesuit, using scuba gear:
Images from The Big Idea (National Geographic Magazine). See also How We Will Terraform Mars by Jason Shankel, and Terraforming Mars by Nicole Willet.
Note that after a thousand years, with trees, you still can't breathe the atmosphere (that's why they are wearing scuba gear in the artist's impression), and there are no animals or birds yet. Also, if you look into their plans, you find that behind the scenes there is a lot of megatechnology needed to make this possible, if it works. For one thing, they will need hundreds of factories constantly producing greenhouse gases, and / or planet-sized thin-film mirrors in space to reflect extra sunlight onto the planet to keep it warm.
This may perhaps seem a minor achievement to aim for, used as we are to breathing on Earth, but for a prospective Mars colonist it is a big step forward. At the moment the air on Mars is so thin that the moisture lining your lungs would boil. This is not something you can adapt to. It is simple physics that no warm-blooded creature with lungs can breathe there. If someone put you on the surface of Mars with scuba gear, you couldn't take one breath of air, because your lungs would stop working immediately in the low atmospheric pressure of 1% of Earth's or less.
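The claim about lung moisture is straightforward vapour-pressure physics (the so-called Armstrong limit). Here is a sketch using the Antoine equation for water; the coefficients are a standard tabulation valid for roughly 1-100 °C, not taken from this article:

```python
def water_vapor_pressure_kpa(t_celsius):
    """Antoine equation for water, standard mmHg-form coefficients (1-100 degC)."""
    A, B, C = 8.07131, 1730.63, 233.426
    p_mmhg = 10 ** (A - B / (C + t_celsius))
    return p_mmhg * 0.133322  # mmHg -> kPa

body_temp = 37.0
p_boil = water_vapor_pressure_kpa(body_temp)  # ~6.3 kPa
mars_ambient = 0.6                            # kPa, typical Mars surface pressure

print(f"Vapor pressure of water at {body_temp} C: {p_boil:.1f} kPa")
print("lung moisture boils" if mars_ambient < p_boil else "stable")
```

Water at body temperature has a vapour pressure of about 6.3 kPa, roughly ten times the typical Martian surface pressure, so exposed body fluids boil no matter how much oxygen a mask supplies.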
Could we do this much?
Our biosphere took hundreds of millions of years to develop on Earth and it might have gone in many different directions. The hope is that the development can be speeded up to a few thousand years, and directed so that you end up with a biosphere like that of Earth at the end, to pretty much the planet you want.
Suppose we had another planet exactly like Earth, but with a thin carbon dioxide atmosphere instead of an oxygen / nitrogen one. Let's stack all the cards in our favour and make it as easy to terraform as a lifeless planet can be: it's got continental drift, a magnetic field, the works. Now imagine it in the place of the Moon, as close and easy of access as that. I suggest that we are still nowhere near the level of understanding needed to terraform it with any assurance of success. And Mars is going to be much harder: a planet that, though related, is also very different from Earth, further from the sun, with much less water, no continental drift, no active volcanism right now (though not quite inactive), lower gravity, and a different surface chemistry, superoxygenated with perchlorates and hydrogen peroxide, etc.
But we need not be disheartened if we can't do it. Perhaps terraforming is more like a grand goal for a mature civilization thousands of years old. But if so, there is much we can do as a young civilization, by way of building city domes, lava tube caves and free-flying space settlements. We could settle the entire solar system right out to beyond Pluto with space settlements spinning for artificial gravity before the terraforming project has got off its starting blocks.
Sunlight is usually brought to the habitat via mirrors in these designs e.g. here is how it's done for the Stanford Torus.
This is a modern update of the design, Habitat 2, with a big mirror 2 km across (aluminised Mylar); the animation leaves out the cosmic radiation shielding for artistic reasons:
Video of Habitat 2 and mirror
Sunlight gets reflected around the cosmic radiation shielding into the habitat. UV light can be absorbed on the way. Cosmic radiation, as highly energetic particles, goes right through the mirror.
"At all distances out to the orbit of Pluto and beyond, it is possible to obtain Earth-normal solar intensity with a concentrating mirror whose mass is small compared to that of the habitat.”
Indeed, I wonder a bit if we need to start thinking about galaxy protection, as there isn't much after that to stop us spreading to the entire galaxy - is that such a good idea at our young stage in our civilization?
Also it may be more interesting to find out what happens on Mars if we work with what comes naturally to Mars instead of against it. Perhaps the issues with ideas for terraforming it are partly due to fighting against "Gaia". What if the natural ecosystem for Mars is different from that for Earth? Anyway let's start by looking at conventional terraforming there.
TERRAFORMING ATTEMPTS CAN GO WRONG - BADLY WRONG
There would be so much to go wrong. And I mean badly wrong: mistakes that would make it impossible for anyone to terraform it in the future. Especially if they involve introducing the wrong kind of microbe, there'd be no way to roll that back.
This can be something that builds up slowly, underground, or a few microbes spreading in the wind and weather. By the time you notice it is going wrong, it could have spread far. Indeed, it may be a while before you notice the microbe at all amongst the hundreds of billions or even a trillion distinct species of microbes, on a partially terraformed world. It may well have spread throughout the planet before you know it is there.
The biologist Cassie Conley, NASA's planetary protection officer, gave a simple example, and this is just for an ordinary expedition to present-day Mars, with no attempt to terraform yet. Some Earth microbes, in the anoxic conditions on Mars and in the presence of methane (which may well be present there), could form calcite in underground aquifers, turning them to cement.
"Conley also warns that water contaminated with Earth microbes could pose serious problems if astronauts ever establish a base on Mars. Most current plans call for expeditions that rely on indigenous resources to sustain astronauts and reduce the supplies they would need to haul from Earth."
"What if, for example, an advance mission carried certain types of bacteria known to create calcite when exposed to water? If such bacteria could survive on Mars, Conley says, future explorers prospecting for liquid water instead might find that underground aquifers have been turned into cement."
In more detail, what she is talking about there is the anaerobic oxidation of methane that leads to formation of calcium carbonate in anoxic conditions. It's done by a consortium of methane-oxidising and sulfate-reducing bacteria. See the summary in Wikipedia: Calcite - formation process - which links to a technical paper that goes into more detail.
Calcite - calcium carbonate. In the anoxic conditions on Mars, in presence of methane, a combination of methane oxidizing and sulfate reducing microbes can cause calcite to form and so, basically, could turn underground aquifers on Mars into cement. Cassie Conley’s example of one way that accidentally introduced microbes could have unpredictable effects on Mars.
When it comes to microbes introduced to an unfamiliar planet that behaves differently from Earth, with many differences in the chemistry, atmosphere, environment - any number of unexpected interactions could happen.
Here are a few more examples
- You want to grow green algae to take the carbon out of the atmosphere and generate oxygen, but haloarchaea take over. These salt loving microbes are likely to feel at home on Mars, and they convert the sunlight directly to energy by a process using bacteriorhodopsin similar to the way our retina works - and produce no oxygen at all. How do you change the balance back to the green algae?
- As the climate warms up and it gets damp, but with no oxygen - those are ideal conditions for microbes that produce bad egg gas (H2S). Hydrogen sulfide smells like a sewer - the whole planet would stink.
Maybe you want that (it’s a greenhouse gas and this is one suggestion for a way to start a process of terraforming). But maybe instead you are trying to get a carbon dioxide / oxygen ecology warmed by those planet sized mirrors, and the hydrogen sulfide producers take over and kill nearly everything.
One theory of many for the Permian extinction, the largest mass extinction in world history, 251 million years ago, is that it may have been caused by an initial upwelling of hydrogen sulfide that was then maintained by purple and green sulfur bacteria that thrived in the anoxic conditions. See description of their research here, and paper here. Whether or not that's what happened on Earth at the end of the Permian period, that it's a possibility for Earth might suggest that something similar could happen while terraforming Mars, which would start off naturally anoxic.
Or, maybe this happens the other way around, you are trying to produce H2S as a greenhouse gas and the cyanobacteria take over.
Then, there's the possibility that there might be native life as well, with unexpected capabilities. They could interact with your ecosystem and may not behave in the same way as the microbe whose niche they take over. Or they may hybridize with Earth life via gene transfer. This is an ancient mechanism (GTAs) which works between organisms as different as fungi and aphids, and between microbes that split from each other billions of years ago. If Mars has any life that split off from Earth life after the development of DNA, it may well be able to share genes with Earth microbes in the same way - and indeed in suitable environments such as warm salty water, it could do this rapidly, overnight.
Or some microbe from Earth that's harmless here, and has never even been noticed, could find the different and unusual Mars environment to its liking and spread everywhere. Spreading through a new ecosystem, within a few years or a decade or two it could become the most important microbe on Mars - and again, maybe it doesn't behave as you'd like it to.
SO MUCH EASIER WITH A SMALLER ECOSYSTEM
With a small free-space or lunar habitat of a few cubic kilometers, we'd be bound to make many mistakes even with an experiment that small. But a mistake on that scale is (comparatively) easily reversed.
If there’s a build up of some problem gas, you can scrub the atmosphere. If there’s an infestation of some diseases, insects, or mold, say, you can treat it with chemicals or tackle it biologically and eliminate it. In the very worst case, if something comes up which you can’t fix - you can purge the atmosphere, sterilize it if necessary and start again, learning from your mistakes.
You can't sterilize a planet or purge its atmosphere and start again. Nor can you scrub it of problem gases or easily eliminate some problematical organism. Look at how hard it is for us to do anything about carbon dioxide amounting to an extra hundredth of a percent of the Earth's atmosphere, or to keep out invasive species from an island or continent, even for higher animals never mind microbes.
It might be a species introduced deliberately, because you think it will help with the terraforming, and then it causes problems you never expected. The European starling in the US, for instance, introduced by Shakespeare enthusiasts in the 1890s. That's not going to happen on Mars any time soon, as there won't be any birds in an atmosphere with little or no oxygen. But plants, yes, if they were successful. Kudzu might be a good analogy. US citizens were encouraged to plant it for erosion control, as a livestock feed and to make paper. Then it became a problem, smothering large areas, and it grows so quickly it is hard to keep it under check. Imagine a situation where some plant like that is out of control on Mars, what do you do?
Even feral camels are an issue in Australia. Deliberately introduced, but in such a vast continent, it's hard to get rid of them. Rabbits also, famously. Then moving in the other direction to the very small, diseases of microbes might also become a terraforming issue. For instance bacteriophages, viruses of bacteria, significantly reduce the amount of hydrogen sulfide produced by sulfur bacteria. This can be used as a biological control if the problem is too much H2S - but it may be a nuisance on Mars if your aim is to produce as much of the gas as possible to warm up the planet.
With a Stanford Torus or O'Neill habitat, or a lava tube cave, or a city dome, none of these are insoluble issues. You are not going to have a problem with feral camels, and rabbits wouldn't be hard to deal with either. With invasive plants like kudzu, at worst you have to sift the soil to remove the roots. The same goes for sulfur bacteria, cyanobacteria, or microbes that could turn your water supply to cement. Even with bacteriophages, if you can't control them in any other way, you can press the "reset button": sterilize the habitat, analyse what went wrong, and start again.
VAST TIMESCALES OF MILLENNIA TO COMPLETE THE PROJECT
The plans to terraform Mars, as proposed by the Mars Society, count on everything going right for a thousand years just to get to the point where you have an atmosphere suitable for trees.
Humans still couldn't live there, even with an oxygen supply. It turns out that carbon dioxide is toxic to us at concentrations of a few percent, and rapidly fatal above about 10% (not many know that, and I am not mixing up carbon monoxide with carbon dioxide). If the Apollo 13 crew hadn't found their duct tape MacGyver-type solution to the problems with their carbon dioxide scrubbers, they risked dying in a carbon dioxide rich atmosphere, with plenty of oxygen still available.
You can't live in a mixed CO2 / O2 atmosphere like that. Colonists would need closed-system breathing gear, more like an aqualung than a simple oxygen mask.
Then, there is not enough CO2 in the ice caps to do more than double the atmospheric pressure to around 12 millibars - under 2% of Earth's, far too little for humans to take off their full body spacesuits. So to go any further they have to assume lots of CO2 in the form of dry ice mixed in with the soil to considerable depths - but there's no evidence either way about this yet.
They usually suggest using greenhouse gases to warm it up to the point where the dry ice sublimates into the atmosphere. That's a big megatechnology project: 500 half-gigawatt power stations according to one estimate, and mining cubic kilometers of fluorite ore on Mars per century to make the gases. That's the 'easy solution'. The harder solution is to use planet sized thin film mirrors in space to reflect extra sunlight onto Mars.
Chris McKay et al estimate that you could get Mars warm enough to trigger a runaway greenhouse effect (if it has the dry ice available for it) by producing greenhouse gases 24/7 for a century, using 245 power stations of half a gigawatt each. The amount of fluorite you'd need to mine on Mars would be equivalent to the contents of eleven cubic kilometers of pure fluorite ore. That's in his 2004 paper with other authors. At around forty billion metric tons, it is just not practical to transport the gases from Earth.
In a 2005 paper with Margarita Marinova as the principal author, they do detailed modeling of various greenhouse gases, and come up with a mixture that can tip Mars into a runaway greenhouse at a partial pressure of 0.2 Pa. Since 0.2 Pa is only a tiny fraction of the current ~600 Pa Mars atmosphere (100,000 Pascals to a bar), that works out at roughly 8 billion tons of gases, a much smaller figure.
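As a sanity check on those tonnages, here's a minimal back-of-envelope sketch in Python. The input figures are my own assumptions, not from the McKay or Marinova papers: Mars surface area ~1.45×10^14 m², surface gravity 3.71 m/s², fluorite density ~3.2 t/m³.

```python
# Back-of-envelope check on the greenhouse gas tonnages.
# Assumed figures (not from the McKay / Marinova papers):
A_MARS = 1.45e14   # Mars surface area, m^2
G_MARS = 3.71      # Mars surface gravity, m/s^2

# Mass needed for a 0.2 Pa partial pressure: P = m*g/A  =>  m = P*A/g
m_gas = 0.2 * A_MARS / G_MARS                 # kg
print(f"0.2 Pa of gas: {m_gas / 1e12:.0f} billion tonnes")        # ~8

# Mass of eleven cubic kilometers of pure fluorite ore
m_ore = 11e9 * 3.2                            # m^3 times t/m^3
print(f"11 km^3 of fluorite: {m_ore / 1e9:.0f} billion tonnes")   # ~35
```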
If this succeeds, then a thousand years later you end up with an atmosphere that only trees and plants can tolerate, and which is poisonous to humans - if everything breaks in your favour. At that point you have to continue production of the greenhouse gases, however, if you want Mars to stay warm, because with half the sunlight levels of Earth it takes a far stronger greenhouse to keep the planet warm at that distance. You might be surprised at the difference, but it's because the equilibrium temperature depends on the balance between the input sunlight and the heat radiated, and it is the absolute temperature, measured from absolute zero, that matters for that balance. Earth's absolute temperature is very high, and it doesn't take much of a reduction to make the average temperatures uninhabitable for humans and most vegetation.
A recent study, looking at all the evidence, comes to the conclusion that even this much is probably not possible. It finds that:
- The adsorbed CO2 in the regolith, if it exists, can't be released quickly
- Even if the atmosphere could be increased to 100 mbar, it would raise the temperature by only 10 K and would still not lead to stable liquid water
- If the polar CO2 was sublimed, e.g. by covering it with soot, doubling the pressure to 12 mbar, it would not be an equilibrium situation, and the CO2 would likely soon recondense onto the surface
Their conclusion is:
The ability to release enough CO2 into the Mars atmosphere to provide any significant greenhouse warming is extremely limited. This is the case even if most of the CO2 present on early Mars still remained on the planet, locked up in adsorbed gas and carbonates. Greenhouse warming is further limited in the likely event that the bulk of the early CO2 has been lost to space, as suggested by recent measurements.
While greenhouse warming is still conceivable by the mechanism described by McKay et al., large scale manufacturing of chlorofluorocarbons, that approach is very far into the future at best.
It is not feasible today, using existing technology or concepts, to carry out any activities that significantly increase the atmospheric CO2 pressure and/or provide any significant warming of the planet. Terraforming in the near term is not feasible.
Perhaps you get the carbon dioxide from comets from the outer solar system? At present that's not very feasible but maybe in the future we can move comets around the solar system with ease. So, let's stick with that and explore a bit further.
GENERATING AN OXYGEN RICH ATMOSPHERE - IS IT 900 YEARS (ZUBRIN) OR 100,000 YEARS (MCKAY)?
There are no natural sources of oxygen outside of Earth - so you won't find an oxygen rich comet. The usual suggestion is to produce the oxygen from carbon dioxide: you start with a carbon dioxide atmosphere and then use plants to extract the carbon as organics. If you use photosynthesis, it's around 100,000 years to pull all the carbon out of the atmosphere, according to Chris McKay. Perhaps that can be sped up, but one way or another, you have to create a layer of organics meters thick, averaged over the entire planet, to get rid of the carbon.
To get a rough idea of how much carbon you need to pull out of the atmosphere: if you achieve an Earth pressure atmosphere, it's about 2.6 times as much mass as on Earth because of the low gravity, so about 26 tons of atmosphere per square meter. If it's a pure carbon dioxide atmosphere, then since the molecular mass of CO2 is 44 and the atomic mass of carbon is 12, the mass of carbon in the atmosphere is 12×26/44 ≈ 7 tons. So you need to extract about 7 tons of carbon per square meter from the atmosphere.

Since organic matter contains other elements as well, hydrogen and nitrogen, and is typically lighter than water even when dried, you are talking about well over 7 meters of thickness of organics. For instance, the elemental content of wood is approximately 50% carbon, 42% oxygen, 6% hydrogen, 1% nitrogen, and 1% other elements, and its density is typically up to 0.7. So that would make it around 20 meters thickness of solid wood that you'd need to accumulate, at a mass of 14 tons per square meter. If the atmosphere is only a tenth of Earth's atmospheric pressure, the thinnest that humans can breathe, it's only about 2 meters - but then you have the issue that at those pressures it has to be pure oxygen, which is a fire hazard. I think you can understand why Chris McKay worked out that this would take 100,000 years using natural processes.
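Here's the same arithmetic as a short Python sketch, so it's easy to vary the assumptions (Earth sea-level pressure, Mars gravity 3.71 m/s², wood at 50% carbon with density 0.7 t/m³, as above):

```python
# How much carbon a full terraforming would have to bury, per square meter
P_EARTH = 101_325  # Earth sea level pressure, Pa
G_MARS = 3.71      # Mars surface gravity, m/s^2

column = P_EARTH / G_MARS / 1000    # tonnes of atmosphere per m^2, ~27
carbon = column * 12 / 44           # carbon fraction of CO2, ~7.4 t/m^2
wood = carbon / 0.50                # tonnes of dry wood per m^2, ~15
depth = wood / 0.7                  # metres of solid wood, ~21

print(f"{column:.0f} t/m^2 of atmosphere, {carbon:.1f} t/m^2 of carbon")
print(f"{wood:.0f} t/m^2 of wood, about {depth:.0f} m deep")
```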
If you use photosynthesis to extract the carbon, this will bury nitrogen and water as well, taking them out of the atmosphere and hydrosphere. If concentrations are similar to those in wood, then getting on for half of the buried mass is, in effect, water, so you would need a lot of water available before you can extract that much carbon - enough to cover the surface of Mars to a depth of around 10 meters. That is, unless you find a way to burn the biomass to form biochar, all over the planet, regularly. That would deal with the water and nitrogen problem, but it would be hard to arrange.
Zubrin is much more optimistic: he estimates it could be accomplished in as little as 900 years, with mega-engineering.
His proposal is to first release one millibar of oxygen from the perchlorates, which he estimates requires 2,200 terawatt-years of power (the equivalent of 20,000 half-gigawatt power stations operating for 220 years). It's a lot of power, but he would use space mirrors, assuming a 3,125 km radius space mirror focusing the power of the sun as the source of energy.
After that he envisions higher plants, genetically engineered for an efficiency of 1%, spread over the planet. He doesn't spell it out, but for this to work - for all that photosynthesis to be dedicated to increasing the oxygen content of the atmosphere - the plants have to be harvested and buried, and more grown on top. For his objective, it's no good just having them in a cycle that returns the material to the atmosphere when they die, as that is just a seasonal oscillation of more and then less oxygen.
With this background he writes:
“… they would represent an equivalent oxygen producing power source of about 200 TW. By combining the efforts of such biological systems with perhaps 90 TW of space based reflectors and 10 TW of installed power on the surface (terrestrial civilization today uses about 12 TW) the required 120 mb of oxygen needed to support humans and other advanced animals in the open could be produced in about 900 years. If more powerful artificial energy sources or still more efficient plants were engineered, then this schedule could be accelerated accordingly….”
As he goes on to say, if we had easy access to fusion power we’d have far higher levels of power available which could change many things. But as it is now it’s a huge megatechnology project.
Photosynthesis also cools down the planet, by removing carbon dioxide (a greenhouse gas, of course). So now you have to step up your greenhouse gas production even more. You are committed to greenhouse gases or large thin film mirrors for as long as you want Mars to remain habitable.
An Earth atmosphere transferred to Mars would not behave anything like it does on Earth without artificial means - the water would all freeze, and it would be too cold for trees even at the equator. That's simple physics: Mars is too far from the Sun for Earth's atmosphere to keep it warm there.
Also, plants on Mars have to work nearly three times as hard as on Earth to maintain the same level of oxygen, because in the low gravity you need nearly three times the mass of gas for the same atmospheric pressure - and they have to do it with half the level of sunlight. This is why Chris McKay says it would take so long, 100,000 years, to generate an oxygen atmosphere there. On Earth, in similar conditions, it could happen in around a sixth of the time.
MAGNETIC SHIELD DOESN’T HELP
The idea of a giant magnetic shield got some publicity, but it would only be useful to protect the Mars atmosphere long term. It is not going to thicken it up in the near term. Even if you had Earth's volcanic production of CO2 on Mars, it would take around a million years to reach 60 millibars, which is the point at which they think a runaway effect could happen.
The reason Mars is so inactive is probably its lack of continental drift. It may well have thick deposits of carbonates which, if they were subducted beneath another plate as happens on Earth, would lead to volcanic eruptions and carbon dioxide building up in the atmosphere. But even on Earth, that is a very long cycle. There are some figures for Earth here:
"On average, 10^13 to 10^14 grams (10–100 million metric tons) of carbon move through the slow carbon cycle every year"
So, with the upper estimate, that's up to 10^11 kg of carbon from Earth's volcanoes per year, or about 3.7×10^11 kg of CO2. The Mars atmosphere has a mass of about 2.5×10^16 kg.
So you are talking about getting on for 100,000 years of Earth levels of volcanism to duplicate the current atmosphere, at about 0.6% of Earth normal - and about a million years to reach 6% of Earth normal, which is the point at which a runaway greenhouse might be possible (if there is enough dry ice there to be liberated, which is unknown).
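A rough sketch of that timescale calculation, assuming the upper estimate of 10^11 kg of volcanic carbon per year and a Mars atmosphere of ~2.5×10^16 kg:

```python
# How long Earth levels of volcanic outgassing would take to rebuild
# the current Mars atmosphere, and a ten times thicker one
carbon_per_year = 1e11                     # kg of carbon per year, upper estimate
co2_per_year = carbon_per_year * 44 / 12   # as CO2, ~3.7e11 kg/yr
mars_atm = 2.5e16                          # current ~6 mbar atmosphere, kg

years = mars_atm / co2_per_year
print(f"current 6 mbar atmosphere: ~{years:,.0f} years")       # ~70,000
print(f"ten times that (~60 mbar): ~{10 * years:,.0f} years")  # ~700,000
```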
But Earth has dozens of active volcanoes. Mars, despite many searches using thermal imaging from orbit, has turned up no active volcanoes to date - not even a geothermal hotspot or a fumarole. It could conceivably have some ice fumaroles with their heat signatures hidden by towers of ice, but that's a long shot.
There is good evidence that Olympus Mons erupted 25 million years ago; as far as I know, that is the most recent eruption there is any evidence for. With the current level of volcanism on Mars, you must be talking about getting on for billions of years. And indeed, estimates of current outgassing rates confirm that.
In this paper studying outgassing from Mars volcanism, updated with the latest Curiosity results (here "Ga" means billions of years ago), they estimate 70 mbar outgassed in the last 3 billion years with their most extreme outgassing scenario. The more likely scenario is essentially 0 mbar in the last 3 billion years. In both scenarios, most of the outgassing happens over 2 billion years ago:
For completeness, we test the evolution scenarios by both increasing the total outgassing rate and prolonging the outgassing activity. Specifically, we assume the mantle plume scenario of Grott for outgassing, at an oxygen fugacity of IW+1 and a degassing efficiency of 0.4. The total outgassing amount since 3.8 Ga would then be 420 mbar, in which 350 mbar would be outgassed between 3.8 and 3.0 Ga and 70 mbar would be outgassed between 3.0 Ga and present. This is compared with the standard models in which 48 mbar would be outgassed between 3.8 and 3.0 Ga, and essentially 0 would be outgassed after 3.0 Ga.
If you look at their figure 3, you see that even with the highest figure (the red solid line in the diagram), most of the outgassing happened over 2 billion years ago, so most of that 70 mbar was released before then - almost none in the last 2 billion years.
Over a time long enough for humans to evolve from shrew-like mammals scurrying at the feet of dinosaurs, the atmosphere still wouldn't thicken noticeably. Indeed, even over the half billion years it took for humans to evolve all the way from our first microscopic multicellular ancestors, you still wouldn't notice a significant thickening. This idea is a non-starter. It could be useful if Mars already had a thick atmosphere and you needed to retain it over millions of years - but it can't create one.
NITROGEN AND WATER
Finally, an oxygen-only atmosphere is a fire hazard, so you have to add nitrogen as a buffer gas (it absorbs some of the heat from a fire and so acts as a fire retardant). Mars seems to be rather deficient in nitrogen. Perhaps its original atmospheric nitrogen is partly buried underground, as nitrates beneath its former northern sea? Nobody yet has a clear idea of where to find it, except to import it by hitting Mars with ammonia rich comets.
There's enough ice in the polar caps for a few meters of water over the surface of Mars - but that's assuming a Mars with no ice caps at all, and how likely is that?
Its ice caps are much smaller than Earth's and an Earth-like Mars would have larger ice caps, not smaller ones.
(Image: Mars north pole ice cap - composite of 1000 Viking Orbiter red- and violet-filter images. NASA / JPL / USGS. Found via: When Humans Begin Colonizing Other Planets, Who Should Be in Charge?)
A terraformed Mars would still have ice caps, larger if anything. Terraformers hope that there are large supplies of ice beneath the surface in the southern uplands. Some of this at least has been found, so there may be some reason for optimism - but how much is there?
Also, the equatorial regions are dry to considerable depth, and any water would have to fill the desert sands to depths of hundreds of meters to kilometers before any appeared on the surface.
If you are optimistic, you hypothesize enough ice from the earlier oceans trapped in the southern uplands to pour out into the northern ocean area - maybe even the ocean that used to cover the entire northern hemisphere is still there somewhere, trapped as ice underground.
If you are pessimistic, then nearly all that ice got lost to space: the water vapour split by solar storms, the hydrogen escaping, and the oxygen reacting with the surface of Mars and rusting it - or it ended up in the hydrosphere kilometers below the surface. In that case, after terraforming, unless you import ice from comets, at most you have a few flowing streams of water and a lake or two, on a still largely dry planet covered mostly in desert.
NOT AN AUTOMATIC OUTCOME
As well as all that, supposing you do find all the water you need: it took hundreds of millions of years for Earth to develop a microbial biosphere able to support an oxygen rich atmosphere.
And we don't know that Earth's present biosphere is the automatic outcome. There are probably thousands of possible ways our planet could have gone, with a variety of ecosystems, and perhaps only a few of those habitable to humans - some perhaps totally uninhabitable, as a result of some runaway effect that tipped the planet out of habitability.
And we are hoping to compress those hundreds of millions of years into centuries. If it can be terraformed that quickly, it can probably unterraform just as quickly.
PAUL BIRCH'S TERRAFORMING MARS QUICKLY
The article is here: Terraforming Mars quickly.
This is a bit like Robert Zubrin's 3,125 km radius space mirror, but far more ambitious, with the aim of terraforming Mars in 120 years. He uses his mirror to boil the rock in the regolith at over 3000 °C. The largest component is an annular mirror 25,000 km in diameter with a ring 300 km wide, which focuses light via a secondary mirror. The total mass of that plus the soletta is 50 million tons.
It also has a very low density 920 km diameter "lens" floating at a height of 400 km in the Mars upper atmosphere. This is too flimsy to be built at ground level, so it must be built in orbit and somehow "lowered" into the atmosphere, where it rests floating partly through buoyancy in the upper atmosphere and mainly on the hot air generated beneath it. The aim is to focus all the sunlight into an 80 km diameter spot on the Mars surface, so hot that it volatilizes the regolith into vapour.
Once an atmosphere forms, he has to get rid of the carbon dioxide. He proposes to use photosynthesis, and to inhibit the uptake of oxygen by giving the plants sunlight 24/7, using mirrors to illuminate the night side of the planet.
He doesn't really explain how he generates a growing medium for the plants - he just says the elements needed are there. But he has to generate meters of thickness of organics, so the plants have to grow in the remains of previous plants - and how do they do that except by decomposition? But if there is decomposition, then there are microbes returning carbon dioxide to the atmosphere and using up oxygen.
He assumes that we can control which microbes are on Mars so that there is an
"absence of animals, pests, heterotrophic organisms or other competitors".
But he also talks about humans already being on the surface, saying they can live there during the terraforming.
This is the most implausible part of it for me. How can you have humans on the surface without heterotrophs (organisms that live off organic matter, including the microbes that feed on other microbes)? Yet that is essential to his plan: if there are unexpected microbes there, then there is no way to be in control of what the plants do. There's also the possibility of native life, and who knows what that would be able to do, if it is there.
On nitrogen, he proposes a much lower level than on Earth, and claims that the low gravity will reduce convection and so reduce the risk of fire. But a pure oxygen or near pure oxygen atmosphere is generally considered a fire risk in space habitats - that's why the ISS has an Earth normal atmosphere rather than an oxygen one, even though an oxygen atmosphere at the same pressure as the spacesuits would make EVAs much easier.
I don't know of any study saying that an oxygen atmosphere without nitrogen is only a fire hazard in Earth gravity, and he doesn't give any cite to support it. If it is a fire hazard in zero g, then surely it is a fire hazard in Martian g too, so I'm not convinced by that.
When it comes to water, he is not sure there is enough of it. He talks about releasing the water from the polar ice caps - which has to mean the final terraformed world has no ice caps. Is it warmed so much, at least at the poles, that it no longer has them? He also talks about finding water in the regolith, and possibly importing it by transporting Saturn's moon Hyperion to Mars (he has already earmarked Enceladus for Venus).
It's a fun paper, and it gives an idea of what lengths some future megatechnology civilization would need to go to to terraform Mars quickly. It is major megaengineering, and it assumes a great deal of confidence in one's ability to predict, or somehow control, what microbes introduced to a planet will do there.
I do recommend reading his papers. They are a fun read for a space geek, and though I don't think we will be "terraforming Mars quickly" any time soon, they could spark imaginative science fiction books, and maybe give an idea of what other, millions of years old, extraterrestrial civilizations may be able to do. His page with his papers, all available for download, is here.
KEEPING MARS WARM
In Chris McKay's analysis, he assumes continued production of those very potent fluorine-based greenhouse gases once you get to an oxygen rich atmosphere. Once the CO2 is gone, the natural equilibrium average temperature of Mars, even with a thick Earth-like atmosphere, is very low. That's because nitrogen and oxygen do almost nothing to help retain heat, Mars gets half the sunlight Earth does, and the temperature's dependence on distance works on the temperature measured from absolute zero.
Chris McKay et al, in Making Mars Habitable, estimate that Mars in thermal equilibrium with an Earth atmosphere would average -55°C - and that includes a greenhouse effect from CO2 set at 1%, much more than for Earth, which in the lower gravity also means nearly three times the column mass of CO2 for the same pressure.
Earth's average temperature is 16°C, which includes a greenhouse effect of about 33°C, much of it due to water vapour. But on a Mars below -50°C, the greenhouse effect due to water vapour would be small.
For trees you need a temperature of around 3°C (page 105 of Chris McKay's paper). On a terraformed Mars without substantial continuous warming by greenhouse gases or orbital mirrors, trees couldn't grow even in equatorial regions in summer. Even to have trees growing at low altitudes in equatorial regions would require considerable warming.
To reach average temperature levels similar to Earth's, you have to raise the temperature by about 88°C. Around 33°C of that can come from the feedback warming of water vapour, if Mars ends up with as much water vapour in its atmosphere as Earth, leaving around 55°C for your mirrors or manufactured greenhouse gases to supply. And they have to do this in perpetuity, for as long as the planet is to remain habitable.
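The underlying physics is just radiative balance: equilibrium temperature goes as the fourth root of absorbed sunlight, so halving the sunlight only drops the temperature by a factor of about 0.84 - but that's 0.84 of the absolute temperature, which is a big drop in everyday terms. Here's a minimal sketch, assuming for simplicity an Earth-like albedo of 0.3 for both planets (not exact for Mars):

```python
# Equilibrium (no greenhouse) temperature from radiative balance:
# T = (S * (1 - albedo) / (4 * sigma)) ** 0.25
SIGMA = 5.67e-8                     # Stefan-Boltzmann constant, W m^-2 K^-4

def t_eq(solar_flux, albedo=0.3):
    return (solar_flux * (1 - albedo) / (4 * SIGMA)) ** 0.25

print(f"Earth, 1361 W/m^2: {t_eq(1361):.0f} K")   # ~255 K, i.e. -18 C
print(f"Mars,   586 W/m^2: {t_eq(586):.0f} K")    # ~206 K, i.e. -67 C
```

So even before any greenhouse effects, Mars starts out around 50°C colder than Earth, and every degree of that shortfall has to be made up artificially.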
WASTEFUL OF RESOURCES - PLANETARY CHAUVINISM
It is just so incredibly wasteful of resources to try to terraform a planet. All that water, all that carbon dioxide, all the resources of Mars devoted to habitability - together with, probably, gathering thousands of comets for both water and carbon dioxide and smashing them into Mars - all to create the 2-3 meters thickness of breathable air just above the surface (the rest of the atmosphere is only there to hold it in place), and all that water poured into the dry sands just to have a tiny amount of it at the surface.
Then there are all those greenhouse gases, or large planet sized mirrors, to keep it warm enough to be habitable. And that's not just for the start: you have to keep the mirrors maintained, or keep producing the greenhouse gases, into the indefinite future, for as long as you want the planet to remain habitable.
This is why, in the 1970s, many space settlement advocates turned their attention away from Mars and looked into space habitats. They calculated that you can do settlement far faster, more efficiently, more easily and with fewer resources if you build space habitats. And if you are doing that, you don't need a planetary surface at all.
We may be planetary chauvinists because of our familiarity with living on a planet. Isaac Asimov explains that he got the term "planetary chauvinism" from Carl Sagan - he can't say for sure that Sagan invented it, but that was the first he heard of it. He talks about this 35 minutes into this video:
Settlements in space provide much more living area than planets can, for much less investment of effort and much less use of resources. As he says in that interview, our future, for most of humanity, is likely to lie in space habitats rather than on the surface of planets. From his interview in that video:
"I'm convinced that we will build space settlements in space, we will live inside small worlds, and we will eventually recognize that as the natural way to live. It is economical. You have just a relatively small amount of mass, and it is all used. In the case of the Earth, you've got an enormous mass, and almost all of it is not used. It's down deep where we can't get at it, and the only purpose is to supply enough mass to produce enough gravitational intensity to hold stuff onto the outside. And that's a waste! With the same mass you can build a trillion space stations carrying incredible numbers of people inside. And this is what we will eventually come to. I'm sure we will use the asteroid belt to build any number, thousands upon thousands, hundreds of thousands of space stations, which will eventually flee the solar system altogether."
Anyway, that's the idea: that planets are not where the future of space settlement lies. One calculation they did back then is that there are enough resources in the asteroid belt to build habitats with a thousand times the surface area of Mars (whose surface area, in turn, is about the same as the land area of Earth).
For anyone here who is unfamiliar with it, the idea of mining the asteroid belt is to turn the materials there into habitats that spin to create artificial gravity. This is an idea developed by many engineers and scientists in the 1970s, including O'Neill and scientists at Stanford University, who drew up detailed engineering plans for how to do it. It has been rather forgotten recently, with all the fanfare about Mars.
It's not a case of living on Ceres. Nor is it a case of hollowing out an asteroid, spinning it, and living inside. It's a case of making a habitat out of asteroid materials and then spinning the habitat - and you can generate any level of gravity you want. Typically the design is for full Earth gravity, because the designers assume that's best for human health; if a lower level of gravity turns out to be better, they can just spin it more slowly. You could have a thousand times the land area of Mars as space habitats with Mars gravity, if that was preferred. And you can have any level of sunlight you like, and any length of day or night, using thin film mirrors to reflect sunlight into the habitat, and shades to simulate night and day.
If you do it that way, you end up with a thousand terraformed planets in terms of living area, for much less megatechnology than would be needed to attempt to 'terraform' the single planet Mars. You can do it far faster too, with the first habitats completed within decades.
And what's more, they can be customized to whatever gravity level you like. The atmosphere, temperature, ecology, all easily regulated and within your control.
If this is not feasible, there is no way that terraforming Mars is feasible.
I think myself that it is only after we have thoroughly mastered the art of making space settlements like that, of a few cubic kilometers, that we should even contemplate terraforming.
These are my articles about it on my Science 2.0 blog:
- To Terraform Mars with Present Technology - Far into Realms of Magical Thinking - Opinion Piece
- Trouble With Terraforming Mars
- Imagined Colours Of Future Mars - What Happens If We Treat A Planet As A Giant Petri Dish?
MARSFORMING MARS - AND MARTIAN GAIA OR ANTI-GAIA
Perhaps the problem with terraforming Mars is that we are fighting against Gaia? Perhaps that is why we would need all that technology?
Gaia is James Lovelock's idea that a planetary biosphere can be self maintaining: as conditions change, it responds to stay in balance, much as a microbe does, but on a vaster scale.
This leads in to an idea for Mars from Chris McKay: if we find interesting Martian life there, perhaps independently evolved, we could try to "turn back the clock" and recreate conditions hospitable to Mars life.
Well, if we do this, perhaps the answer is to set up an ecosystem with hydrogen sulfide as a major component of the atmosphere, as it is a strong greenhouse gas. Or methane, or both.
Perhaps early Mars had powerful greenhouse gases to keep it warm. Or perhaps it was habitable only briefly, when its orbit was much more eccentric than it is now and its axis was optimally tilted to reduce the polar caps to a minimum. Whether or not that is how it happened originally, maybe we could set up an ecosystem like that today?
So what if we make this the objective anyway, whether or not there is life there already? If we work with Mars rather than against it, and find where the Mars climate "wants to go" rather than trying to make it into a copy of Earth, it may be easier to achieve. We need to find what kind of 'Gaia' comes naturally to Mars. Then, rather than a pale shadow of Earth, Mars becomes whatever it is best suited to become. We help it to realize its own potential - which may well benefit us too.
An Earth-like ecosystem on Mars, without megatechnology, might even be a kind of "anti-Gaia" (my own suggestion here - do let me know if you have seen someone else suggest it).
Photosynthetic life would cool down the planet, just as it does on Earth, by pulling carbon dioxide out of the atmosphere. But that is the opposite of what you want on Mars to maintain a Gaia-like balance: it will keep pushing the planet back towards colder conditions as soon as it gets warm enough for life.
That's why all the terraforming plans need artificial greenhouse gases or planetary sized mirrors, and why none of them can do it with biology alone. Mars is too cold, not too hot, so a photosynthesis-based thermostat - one that 'Gaia' could use to cool a planet down when life flourishes - can't work there.
But perhaps some thermostat based on hydrogen sulfide producing bacteria could? Maybe biologists can think of a way this could work - so that the cooler it gets, the more hydrogen sulfide is produced? Anyone got any thoughts about how that could work, or any other way to help it become self regulating?
If we can't make it self regulating, it would need constant attention and megatechnology to keep it in balance, and as soon as we stop, it reverts to its current state. So, to get a sustainable world warmed by greenhouse gases, we would need to establish something more, some form of 'Gaia' in the "weak Gaia" sense. We'd also need the magnetic shield on a very long timescale - otherwise any attempt at "Marsforming" is going to leave it even drier, once the atmosphere is stripped by solar storms over long time periods.
To be clear, I'm not suggesting we do this right now. But we can certainly think about it, just as we do for the terraforming. We may learn a fair bit just by thinking it over.
There'd be no hurry to do that on Mars - we can try out such things in the small habitats, and there it's a real experiment where you can change things and try again.
When it comes to Mars - well, it's the only other terrestrial planet with potentially habitable environments in our solar system. Even if it is such a common type of planet that nearly every star has a Mars analogue, it's still the only example we have at hand to study. So for us it is unique. We probably want to study it "as is" before changing it into something else we think is "better" - who knows what discoveries we might make. Even if there is no native Mars life, it's our only chance to see what happens to a terrestrial planet left for billions of years without developing life.
PARATERRAFORMING THE MOON AS THE NATURAL STARTING POINT
We can also do "paraterraforming", which is covering a body with habitats. The idea is to eventually cover the entire planet with a transparent "roof" to hold in the atmosphere. To start with, you just build city domes and greenhouses, covering larger and larger areas.
The main disadvantage compared with free space settlements is that you have to use the local gravity and also work with the local levels of light.
- Mars has half the sunlight levels of Earth, so photosynthesis has to work twice as hard or be supplemented by artificial light
- The sunlight is occasionally blocked for weeks on end during the dust storms, which go global every decade or so, sometimes more often
Compare this with the Moon:
- The Moon has plenty of sunlight and no dust storms.
- But it has a cycle of a 14 Earth day night followed by a similarly long day
- Though some favoured areas at the two poles get close to 24/7 sunlight nearly year round
I suggest that the natural starting point is a habitat at the lunar poles, perhaps eventually even a city dome there. This is where ESA plan to build their "Lunar village" in collaboration with other space agencies worldwide, and also many private companies. It's much more like a village than like the ISS, with separate habitats sharing some common facilities. I think it is perhaps the most exciting idea we've had for ages in the field of humans in space.
After that, the next natural step is to paraterraform a lunar cave. These are known to exist, and in the low lunar gravity they may be vast - kilometers in diameter and over 100 km long.
The 14 Earth day night is not such a drawback for the lunar caves as you might think. Modern LED lighting - especially the purple lights optimized to produce only the light needed for photosynthesis - greatly reduces the lighting power you need, to only around a hundred watts per square meter.
I'm basing that on this High Efficiency Full Spectrum LED Grow Light, which uses 20 watts of power to illuminate 0.2 square meters - so 100 watts of supplied power per square meter. It is recommended for crops that require bright sunlight, such as lettuces, in this 2015 roundup: Top 10 Best LED Grow Lights.
Remember that on Mars you have to supplement the low light levels anyway if you want to grow crops that need a lot of sunlight (tropical plants), and you also have to supply light through the weeks-long global dust storms, unless you simply stop all agriculture whenever they happen. It's not much of an engineering difference whether you do this for an unpredictable few weeks every few years, or for 14 days in every 28. Either way you need some way to supply the power for two weeks - and on Mars, probably longer.
You can grow just about all the food for one person, and provide all their oxygen, from 30 square meters of crops - actually tested in the BIOS-3 experiments in Russia. Also, there are many crops that can withstand 14 days of darkness, so long as you cool the roots during the lunar night, and they continue to crop as normal with a double length growing season.
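Putting the LED figure and the BIOS-3 figure together gives a feel for the power engineering involved. A sketch, assuming the ~100 W/m² and 30 m² per person figures above:

```python
# Lighting power and night-time storage for LED-lit crops in a lunar cave
WATTS_PER_M2 = 100                  # from the grow light figure above
AREA_PER_PERSON = 30                # m^2 of crops per person, BIOS-3

power = WATTS_PER_M2 * AREA_PER_PERSON        # 3,000 W continuous
night_kwh = power * 14 * 24 / 1000            # through the 14 day night

print(f"lighting power: {power / 1000:.0f} kW per person")
print(f"storage for the lunar night: ~{night_kwh:,.0f} kWh per person")
```

Around 3 kW and a megawatt-hour of storage per person is substantial, but well within the range of ordinary engineering - nothing like terraforming scale.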
So the lunar caves are far more manageable than they seem at first. The Moon also has the major advantages that it is within easy access of Earth, has tourist potential, and may have commercial potential (enthusiasts struggle to find commercial potential for Mars). In one comparison after another, it comes out better than Mars.
I see lunar cave agriculture as a mix of LED-lit areas, some areas of crops that simply go dark at night, and light piped down from the surface during the lunar day.
There are many ways to store enough energy for the night. But longer term, they are bound to build solar panels that are constantly in sunlight, as solar panels are easy to construct on the Moon. At any time, one of those arrays will be in sunlight, and the electricity can be transferred to settlements throughout the Moon using microwaves, or else by long distance power transmission using high voltage direct current, which should be easy to do with cables laid over the lunar surface. They might well run alongside the lunar railways that take you around the Moon.
This is a fair bit of engineering - large solar arrays, long distance HVDC cables, eventually city domes, lava tube cave settlements, and lunar railways - but it is nothing compared to what is needed to terraform a planet.
Of course you can paraterraform Mars as well - live in Martian lava tube caves much like the lunar caves, or build city domes. We know that Mars does have such caves, though we don't know how large they are, as we can only see the entrances from orbit (the same situation as for the Moon).
However, Mars is by far the most vulnerable place in the solar system to Earth microbes, if you are interested in searching for extraterrestrial life.
WHAT IS THE BUSINESS MODEL FOR MARS? OR THE MOON?
Mars also has many other disadvantages. For instance, when asked how his Mars colony is going to support itself commercially, Elon Musk says that prospective colonists will sell their half million dollar homes to go to Mars, and that this is how the colony will sustain itself to start with. Here is the interview where he says so (on the SpaceX channel):
But spacesuits currently cost two million dollars each, are good for only a couple of dozen or so spacewalks from the ISS (much less wear and tear than a Mars surface suit would get), and need constant servicing.
It's no surprise they cost so much, as they have many tasks to perform. A suit has to give thermal protection; protect against micrometeorites (usually with multiple layers); cool the astronaut, as you easily get too hot in a spacesuit, at present usually done with a Liquid Cooling and Ventilation Garment; hold in the pressure; supply the oxygen; and scrub carbon dioxide. It needs a power supply and a fan to circulate the air. It needs joints that remain flexible in a vacuum - or, if it is one of the form fitting suits proposed by Dava Newman, it has to be designed to fit the astronaut snugly, basically tailor made. It needs strong transparent material for the face plate, able to hold in the outwards pressure of the oxygen and to withstand micrometeorite impacts. And somehow it needs to protect the hands from damage when working inside very stiff gloves (unless those are also tailored, biosuit style). See What is a Spacesuit? (NASA).
Surely they will be reduced in cost with mass production - but by how much? Let's guess a ten-fold reduction in cost to $200,000 but that may be optimistic for the near future.
Even if they are reduced in cost ten-fold, they will need to be repaired, and replaced from time to time. Maybe they can be made more durable, but they are not likely to become so durable that they last for years any time soon. Then what about the cost of your habitat on Mars? It will probably cost hundreds of millions of dollars, and it will have a finite life - if it is like the ISS, most of it will need replacing after a few decades.
You have got to Mars, but you are planning to live there for the rest of your life, and presumably raise a family there too - or it is not really a colony.
Your half million dollars won't last long. Some goes on the ticket out there - Elon Musk says the ticket price is $200,000. After buying your spacesuit as well, you have a $100,000 deposit left for your (much smaller) habitat on Mars, and that's your half million dollars gone before you have done anything on Mars.
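The budget arithmetic, in runnable form - using Musk's quoted ticket price, and my guessed mass production suit price from above (not a quoted figure):

```python
# A colonist's budget under these assumptions (the suit price is my
# guessed tenfold mass production reduction, not a quoted figure)
home_sale = 500_000
ticket = 200_000
spacesuit = 200_000

remainder = home_sale - ticket - spacesuit
print(f"left over on arrival: ${remainder:,}")   # $100,000
```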
He says in that interview that a colony can't support itself by exports from Mars.
The only other source of income he suggests for the colony is intellectual property rights - the income from inventions and other IP exported from Mars. Both he and Robert Zubrin think that, by living in such harsh conditions, the Martian colonists will become extraordinarily inventive and make many extraordinary inventions that will earn them big bucks back on Earth.
I don't "get" that myself. Here in the UK we pride ourselves on being a nation of inventors, and unlike the US we don't credit it to a "frontier spirit" - we just think we are naturally inventive. The French also think of themselves as great inventors, and so do many other countries.
But - let’s give him this for now and see where it takes him. So how is that supposed to work?
Presumably the idea is that some of the billionaires there employ others, from their earnings from sales of their inventions on Earth, and so keep the colony going. But what's to stop a new billionaire who has invented something on Mars from emigrating back to Earth, which is where the invention will be marketed anyway? Earth will seem a paradise to them after Mars. And then why would they pay billions of dollars of their income to Mars?
Sorry, I just don't "get" their business model.
Anyway, whether that would work or not, in one way after another the Moon actually wins over Mars - I was surprised. There's a lot of work to be done, but at least on paper it looks promising.
Several enthusiastic space engineers, geologists and scientists have written books with detailed workings-out of what seem, on paper, to be practical economics for the Moon - involving exports of platinum, or exports of ice from the lunar poles to LEO (if that is easily mined), or tourist hotels on the Moon, and many other ideas.
See for instance:
- The Value of the Moon by Paul Spudis
- Moonrush by Dennis Wingo
- The Moon: Resources, Future Development and Settlement by Madhu Thangavelu, David Schrunk, and many other contributing scientists
Much of the material in these books is devoted to a detailed business case for the Moon.
You just don't get this for Mars. The nearest I've seen is David Kuck's idea of the Deimos water company - and that could be feasible, if Deimos does indeed have ice in it (as it might) - though if there is ice at the lunar poles, it would be hard to compete (except for export to Mars itself, which can't work unless Mars has a business case).
For exports from the Mars surface - just not much. Robert Zubrin talks mainly about the same idea of exporting intellectual property. His deuterium export idea, from the section on Interplanetary Commerce in "Case for Mars" (page 239), simply doesn't work: it only saves one step in a process that on Earth is done in bulk, in a 27,000 ton deuterium factory the size of a skyscraper, powered by a large hydro-electric scheme with an output of 128 MW. Hard to compete with that from Mars. He has other speculative ideas there, but none with the detail of the lunar proposals.
FAR EASIER TO EXPORT FROM THE MOON
It's far easier to export from the Moon because of the low delta v to get back to Earth. You also have the fixed distance, the short travel time there and back (especially important for tourism), and the fact that you can go there any day of the week, any month of any year. There's even the Hoyt cislunar tether system, which acts a bit like a siphon feeding a waterwheel. It takes materials from the surface of the Moon and, through two rotating orbital tethers - one in low lunar orbit and one in LEO - transfers them to LEO, lower in Earth's gravitational gradient. If you time things carefully, with a net flow of material in the Moon-to-Earth direction, it actually generates power instead of using power and fuel.
It is hard to see how Mars could compete. And if there really is a premium on inventions from living in a hard place in space - well, what about inventions from lunar colonists? You have all the other exports from the Moon, and then, if Zubrin and Musk are right, you have the inventions too.
I go into this question of whether there is any commercial case for Mars and compare it with the Moon in the sections in Case for Moon First:
- Commercial value for Mars
- Would a space colony survive with only exports of intellectual property to pay for imports?
I based my answer on those sections of the book, which go into a lot more detail, with quotes and cites. I searched the online forums, and included every method of paying for a Mars colony suggested by Mars enthusiasts that I could find, along with the few suggestions in the short chapter on the topic in Robert Zubrin's Case for Mars.
It originated as a Quora answer, and was also run by Forbes magazine.
PLANETARY PROTECTION AND CASE FOR MOON
I think there are also serious planetary protection issues involved in sending humans to the Mars surface. Not us as such, but the microbes that come with us may make native Mars life extinct, maybe before we even know it is there. That would be tragic - there is so much we could learn from it, and we simply haven't looked yet.
For these and many other reasons, I suggest that the Moon is the best place to start with human settlement. I cover it in Case For Moon First: Gateway to Entire Solar System - Open Ended Exploration, Planetary Protection at its Heart
IDEA THAT MARS WILL HAVE ITS OWN INTERNAL ECONOMY LIKE EARTH - WHICH DOESN’T NEED EXPORTS
This is a nice idea in principle: set Mars up with its own self-contained internal economy, like another Earth. Earth doesn't need exports, so why would Mars?
However, the difference is that conditions are so hostile on Mars. Remember, anything you could do there, you could do far more easily in a desert on Earth - and we have not yet reached the point where anyone runs a self-contained economy even in an Earth desert.
Even if you had, say, a spacesuit making factory in the middle of the Gobi desert, it would need imports: the people in the factory would need to be fed, their clothes imported, and generally it would have loads of imports, which it would pay for by selling its spacesuits to the rest of the world.
But you can't do that on Mars - not until you already have all that infrastructure there.
I don't know if eventually you could - if you had big city domes with ultra low maintenance outer skins, and could do Earth-style industry inside - or if you paraterraformed it, covering the Valles Marineris or somewhere similar with a low maintenance, reliable, meteorite proof transparent roof. Maybe then you'd have areas where humans can work on Mars as easily as on Earth, and produce things at low cost locally. It might then have an economy that can work on its own, without imports or exports to other planets.
But if so, there is a long way from here to there. Right now, it can't work unless you have exports from Mars to Earth to pay for your spacesuits and other high tech imports - or people on Earth with big pockets, willing to pay out trillions to get a Mars colony underway.
TERRAFORMING A SMALL AREA OF A PLANET WITH A "GREAT WALL OF MARS"
This is an idea described by Isaac Arthur towards the end of his video. I hadn't come across it before, so I thought I'd share it here.
The idea is to have a "Great Wall of Mars" enclosing a large area, say the summit of Olympus Mons, which you then terraform normally. You have the complete depth of atmosphere, but only above the area where your colonists live. This could fit in with ideas for an orbital mirror that warms just one spot instead of the entire planet, in geostationary (or in this case areostationary) orbit above it.
My main concern for a project like this would be the same as for any Mars colonization: the issue of bringing Earth microbes to a planet that may have its own indigenous, scientifically interesting life, as well as the impact of introduced Earth microbes on the planet generally - such as Cassie Conley's scenario of microbes accidentally converting subsurface aquifers to cement.
But it's an interesting idea, if we can work through such issues somehow.
He leaves out one final option, although it would not fit with the story line of the other approaches and couldn't be done simultaneously with them.
If we find native Mars life that is of outstanding interest, or decide that Mars is of great interest 'as is', as the only terrestrial planet untransformed by life available for study - or if we just want to delay modifying it until we know what we are doing - then we might want to stay in orbit. We could operate rovers and humanoid robots on the surface, but live in orbital colonies. You'd work on Mars by getting into a VR suite in orbit, walking around Mars and experiencing it far more directly than you could in a spacesuit. The robots on the surface would be a bit like the inhabitants of civilizations in the game "Civilization": you leave them doing things automatically, and step in from time to time when they need direction.
You could still grow plants on the surface, using sterilized seeds and sterile aeroponics, and export the produce to orbit using sterilized spacecraft. It is possible to have 100% sterile robots, in principle: heat them to 500 °C, and so long as the electronics can withstand such temperatures, leave them there for a while and they will be sterile. Or assemble them in sterile conditions, in a vacuum, on Phobos or Deimos, or on the Moon.
WHAT ABOUT THE GRAVITY LEVELS?
As for gravity levels - first, nobody knows what is needed for health. We may even be healthier in lunar gravity. And though there is no way to know for sure, it seems likely that after months or a year in lunar gravity it would be much easier to readapt to Earth on returning home than after the same time in zero g - not, as happens to many astronauts after long periods in zero g, being unable to stand at all and having to be carried off the spaceship on a stretcher until they adapt back to Earth gravity.
But if we do need more gravity, we may well need it for only a short time per day - e.g. while exercising, eating or sleeping - which we could do while spinning for artificial gravity.
Spinning is much better tolerated in zero g than on Earth, with astronauts able to tumble end over end without any feeling of nausea or dizziness, and this might be the case for the Moon too.
HIGH TOLERANCE OF ASTRONAUTS IN ZERO G TO SPINNING RAPIDLY - WITHOUT FEELING SICK
Rats don't get nauseous when they spin, so this is something to do with human physiology. On page 95 of Packing for Mars, Mary Roach mentions that NASA Ames researcher Bill Toscano has a defective vestibular system. He only realised this when they put him on the spinning chair and he experienced no nausea at all from the spinning.
So, some people won't get nauseous or dizzy anyway. But most will, and a habitat has to be built for everyone, of course. However, there's some intriguing evidence that in zero g perhaps everyone might be nausea free like Bill Toscano - or nearly so, at least tolerant of much higher spin rates than on Earth.
Tim Peake tried tumbling end over end in the ISS and he could tolerate fast spin rates he couldn't tolerate before or after on Earth.
Of course, astronauts are trained to tolerate tumbling motions better than most. But he says he couldn't have handled this on the ground. Spinning so fast is a challenge even for an astronaut - they are not trained to withstand rapid spins to the same degree as, say, an ice dancer.
He is not the first to notice this - indeed it's well known among astronauts - and it actually goes back to the Skylab experiments (Chapter 11, Experiment M131, Human Vestibular Function, from: Biomedical Results from Skylab).
The crew, as they ran around inside Skylab, found they could tolerate spins very well - and there was a series of experiments in a spinning chair, the "litter chair", designed to give a better understanding of space sickness. In the process, the experimenters also found out about astronaut tolerance of spins in zero g. All the astronauts tested tolerated rapid spins better in zero g than they did either before or after, on the ground. They were all tested up to 30 rpm in space, the highest spin rate available, and all were symptom free, while they couldn't tolerate such high spin rates before or after the flight.
Interestingly, the lack of susceptibility to nausea actually persisted for a day or two after the flight. There was only one exception in all their experiments: the commander of Skylab 3 did experience a "vague malaise" in one experiment at 30 rpm, on mission day 52, which persisted for around 30 minutes - but it was not typical of acute motion sickness, and so might have had other causes.
His explanation is probably wrong, however. It's not the whole vestibular system that shuts down - it's the otoliths, which sense linear motion, according to the Skylab experimenters. The rest of the vestibular system must still be active, because an astronaut can still sense his orientation in space very well.
During the Skylab rotating litter chair experiments, the experimenters found evidence that the otoliths are not stimulated, due to the lack of any linear acceleration along the axis of the spin. They hypothesized that the reason astronauts don't feel nauseous when they spin in space is that there is no conflict between the spin, which is a circular motion, and gravity along the axis, which is a linear acceleration.
Whether that's the reason or not, it does seem that astronauts in zero g can tolerate spins that they couldn't before or after the flight.
WHAT ABOUT PEOPLE WHO ARE VERY SUSCEPTIBLE TO LOW SPIN RATES OF ONLY A FEW RPM?
Some people are very susceptible to spinning, and some are the opposite - I'm hardly susceptible to it at all myself. I tried spinning on the spot at 30 rpm for an hour, like a whirling dervish, to test my susceptibility, and felt only mild dizziness on stopping and no nausea at all :). That may be why I find it easier to accept that spinning for AG would work in zero g.
But in the Skylab zero g experiments mentioned in the last section, note that NONE OF THE ASTRONAUTS experienced any nausea or symptoms at all at 30 rpm, although most of them did before or after the flight (apart from the one instance which seems a coincidence, as it didn't match the typical symptoms of spin nausea).
Probably someone susceptible at only 4 rpm wouldn't pass astronaut training, so I know it's a selected group. But if it is true that the otoliths that sense linear acceleration are de-activated in zero g, and if this applies to everyone, and if it is true that nausea from spinning is triggered by the conflict between them and the spin - I know it's a lot of ifs, but each is individually quite plausible - then it's possible that NOBODY SUFFERS FROM SPIN INDUCED NAUSEA IN SETTLEMENTS SPUN FOR AG, AT LEAST UP TO 30 RPM.
Of course, this also depends on the otoliths remaining shut down after long term residence in a spinning habitat.
30 rpm is enough for full g with a radius of only one meter, so that would mean a spinning habitat can effectively be as small as you like. It's just impossible to know without more experiments.
"In order to truly address the operational aspects of short-radius AG, a centrifuge must be made available on orbit. It's time to start truly answering the questions of "how long", "how strong", "how often", and "under what limitations" artificial gravity can be provided by a short radius device."
EASY TO DO ARTIFICIAL GRAVITY TESTS IN LEO - IDEA FIRST PROPOSED IN THE 1960S FOR A VOSKHOD IN RUSSIA
Artificial gravity was a priority for the ISS up until the loss of Columbia in 2003 - first at NASA Ames, and later the project was passed on to the Japanese space agency (then called NASDA, now JAXA), who built a Centrifuge Accommodations Module. It never flew, however, because the Space Shuttle was needed to get it into orbit (see page 55 of this paper). Then in 2010 there were proposals to send a centrifuge to the ISS, but it never happened.
Joe Carroll's idea is to develop an artificial gravity research facility in LEO, to let us test multiple levels of gravity at multiple spin rates. It can start off as simply as a single space module with a counterweight - and the first experiments are even simpler than that. He has been advocating this for years. He's an expert on space tethers, and several of his tethers have flown in space.
The idea actually dates back to the 1960s. We now know that Sergey Korolev had a plan to tether a Voskhod to its spent final stage, which he put forward in 1965-6. It was going to be a 20 day flight to upstage the Americans. It would have included a pilot and a physician, and the artificial gravity experiments would have lasted for 3-4 days of the flight. He died unexpectedly in January 1966, and the mission was postponed to February 1966 and then cancelled outright. So we came very close to doing this experiment way back in 1966. (See page 17 of this thesis).
Joe Carroll's idea similarly is to start with a tether spin experiment with a module tethered to its final stage, as this goes into orbit anyway. He keeps it attached to the spaceship by a tether, then spins the combination up with a series of thrusts at perigee, when it is closest to Earth (spinning in the plane of the orbit). Those thrusts boost its apogee while at the same time setting the combination spinning for AG. After staying in that elliptical orbit for a while to test AG (an orbit with much less drag than a final stage normally experiences), he cuts the tether at apogee, circularizing the orbit of the spaceship at a higher altitude and meanwhile sending the final stage down to a targeted re-entry in the southern Pacific (at present its re-entry is uncontrolled). It's a really neat idea!
The nice thing is that all the delta v put into spinning up the assembly gets released at the end of the experiment. It uses no extra fuel unless the tether is severed by space debris, which, from his experience in improving tether design, is now a very low probability event. The Soyuz always carries extra fuel in the event that more is needed than expected during the launch. So he would use this, and only if it hasn't been used up during the launch (the usual situation). As a result, the Soyuz would still get to the ISS even in the worst case scenario where the tether is cut at the worst possible time by space debris. Similarly, the crew could cut the tether at any time in an emergency and simply continue with the normal approach to the ISS. It would also be done in such a way that even if the tether is cut by orbital debris, there is no chance of it going near the ISS without an extra burn. So there are no safety issues.
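To get a feel for the numbers in a tether spin experiment like this, here's a rough Python sketch of how the spin divides between the two bodies, and the delta v the crew module keeps when the tether is cut. The masses, tether length and spin rate are made-up illustrative values of mine, not Joe Carroll's actual figures:

```python
import math

def tether_spin(m1_kg, m2_kg, length_m, rpm):
    """Two bodies on a tether, spinning about their common centre of mass."""
    omega = rpm * 2 * math.pi / 60
    r1 = length_m * m2_kg / (m1_kg + m2_kg)  # body 1's distance from the barycentre
    r2 = length_m - r1
    g1 = omega ** 2 * r1 / 9.81              # artificial gravity at body 1, in g
    g2 = omega ** 2 * r2 / 9.81
    dv1 = omega * r1  # body 1's speed about the barycentre, kept when the tether is cut
    return r1, r2, g1, g2, dv1

# Illustrative only: a ~7 t Soyuz-class craft, ~2 t spent stage, 200 m tether, 2 rpm
r1, r2, g1, g2, dv = tether_spin(7000, 2000, 200, 2)
print(f"crew module at {r1:.0f} m, {g1:.2f} g; stage at {r2:.0f} m, {g2:.2f} g")
print(f"delta v released to the crew module at tether cut: {dv:.1f} m/s")
```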
It is worked out in detail and could be done right away, as quickly as the Gemini tether mission was put together, on a near future crew mission to the ISS. They'd use the longer phasing approach of several days, so you could test several days of artificial gravity. It could be done with a Soyuz TMA or any other crewed mission to the ISS. The cost wouldn't be much, as human spaceflight experiments go: just the cost of adding a tether to a Soyuz TMA mission that is going to the ISS anyway.
Though this would be a short experiment, there are many things you could test in it. It would of course test tether dynamics and tether spin up, and also radio communications during tether spins, and orientation of the panels to achieve adequate solar power throughout the orbit.
Also, in particular, it would give us the first real data on human spin tolerances in artificial gravity over days rather than minutes. It's a different experiment from the short arm centrifuges proposed for the ISS, because a tether can be much longer than anything that fits inside the ISS, and so allows much slower spin rates.
MIGHT TOURISTS BE THE FIRST TO TEST ARTIFICIAL GRAVITY IN SPACE?
There is so little interest in testing AG amongst the space agencies at present that I wonder if the first to test it may be tourists, in private sector space hotels.
In the near future, I think we may well get spinning toilets in zero g, if it is true that everyone can tolerate 30 rpm. A toilet with a 2 meter radius in, say, a Bigelow habitat would need only about 22 rpm for full g. I see that as likely to be one of the first applications, because zero g toilets are difficult and unpleasant to use - so once we have tourist hotels in space, an AG toilet is likely to be one of the first luxuries they develop. And you only need to spend a few minutes in it, you can set whatever spin rate you like, and even a fraction of a g would make a huge difference.
It is the same for sleeping and for eating: eating is far easier if you have at least some g. Exercise too, as it's easier to keep fit with some gravity to work against. And even if you can only tolerate minutes at 22 rpm (say), it may make a big difference to health.
So, perhaps our first AG experiments in space will be done "accidentally" by the space tourism industry?
If you want to have a go at a "design your own" spinning habitat, and try different spin rates and sizes to check the artificial gravity levels, here is a fun web browser app by Tom Lechner together with many presets. You can even try the effect of throwing a ball in AG :).
Press and hold to "throw" a constant stream of dots, and you can vary the strength of the throw by repeatedly pressing "throw harder" or "throw softer". You can design your own or choose from lots of presets such as the Kalpana One, the Stanford Torus, and other space stations from engineering studies and fiction.
- Space Station Numbers (if the numbers don't seem to "take" - enter the number as text and then use the up / down arrows - it seems to respond when you use those but not immediately when you type in the numbers by hand).
Also, there's a good online calculator here, which works out the level of AG from the spin rate and radius.
COULD A "STANFORD TORUS" TYPE HABITAT BE REALLY SMALL, JUST A FEW METERS ACROSS
If the data from Skylab is right, and stands up for longer duration spins - if astronauts can spin for hours at 30 rpm symptom free - then maybe the Stanford Torus doesn't need to be that large at all. Maybe it can be just 10 meters, or even 4 meters, who knows. We simply have never done the experiments needed to find out the answer to this either way.
Note that even on Earth many people can tolerate very high spin rates, and you can learn to do so. Ice dancers are noted for this - also skiers etc. There were lots of people doing multiple tumbles and spins in the Winter Olympics. That takes them years of training of course, but many ordinary folk can also come to tolerate spin rates up to 30 rpm with just three training sessions, according to an experiment done by some MIT researchers. Others who can tolerate high spin rates are the whirling dervishes, who are used to twirling for long periods of time too.
So, could we all tolerate high spin rates of 30 rpm or even more in zero g, for hours on end, symptom free? We have simply never done the experiments needed to find out. For more on this see my:
- Need for adventurous experiments in life support and artificial gravity in LEO first
- Astronauts don't get nauseous when spun rapidly in zero g - so could a device as simple as a spinning hammock be all that is needed to keep us healthy in space?
- Spaceship tethered to final stage and then spun up - experiment first proposed for a Voskhod in 1965-6 but never flown - could this solve all the zero g health problems?
in my online book Touch Mars?
However we don't have to have massive Stanford Torus type habitats even for slow spin rates. If you use tethers between habitats to spin up, then they can be as far from the center of gravity of the system as you like.
So, when you want a habitat spinning for artificial gravity but want it small, for instance for an interplanetary journey, just set up a tether between your habitat and another one. Or, when traveling to Mars: nearly always the spaceship's final stage is on the same trajectory, so tether the ship to its final stage and spin them slowly around their common center of gravity. With a tether system like that, you can also change course while still spinning.
It's not like a wheel spinning under gravity - there isn't anything rigid there and so it won't resist a change of direction by wobbling like a bicycle wheel. It's more like running and changing direction while whirling a stone around on a rope in one hand. Indeed you can even change course with only minor interruption in the passenger's sense of artificial gravity if you boost carefully.
LOWEST MAINTENANCE HABITATS AND HABITATS WITH LEAST OUTLAY
Any space habitat requires some level of constant maintenance, even if it's just the airlocks and the spacesuits. The inhabitants probably do have to be able to get out of it occasionally, however maintenance free it is inside - or send autonomous robots or semiautonomous telerobots to do the necessary maintenance tasks.
However, remember, so does a terraformed Mars, with constant production of greenhouse gases or maintenance of the planet sized mirrors to reflect more sunlight onto it. If you do succeed in terraforming it, then on the surface it may seem an easy place to live, but you are dependent on a lot of technology working "behind the scenes" to keep it going long term.
There are also many biogeochemical cycles you need to complete - carbon cycle, nitrogen cycle, oxygen cycle, phosphorus cycle etc. - and those won't necessarily work automatically as they do on Earth. It might be a constant on-going challenge to keep its ecology on track and stop it "unterraforming" - if you succeed in terraforming in the first place.
It is also a very long term commitment. In the Middle Ages there were some projects to complete cathedrals on a timescale of centuries. But this far exceeds any of those projects. It’s like the inhabitants of the Lascaux caves starting a project that would take so long that we’d still be at the early stages of it 17,000 years later.
Any civilization that can contemplate such immensely long timescales has to be very mature. I think that a civilization that takes on a terraforming project with confidence of success, and of seeing it through to completion would probably be at least thousands of years old, and more likely millions of years old. It’s probably completed many centuries and millennia long projects before it tries this one.
Even houses on Earth of course take time to build, and need maintenance. However, Earth is the only place where humans can survive without any technology at all, like the gorillas do, in at least some places. Then, with minimal pre-industrial technology, we can survive anywhere from the Kalahari desert to the Arctic (from the San people to the Inuit).
There are some places outside of Earth where we can live with fairly low levels of technology, though nowhere we can live without any at all, not anywhere that we know of.
Saturn's largest moon Titan, with its dense nitrogen atmosphere (a few percent methane), actually has an atmospheric pressure greater than Earth's. You need thermal insulation, and you need an Earth atmosphere inside your habitat, but you do not need to hold in the internal pressure. Habitats on Titan could be any shape and be lightweight flimsy things like houses on Earth - indeed flimsier and easier to construct in the lower gravity (apart from any artificial gravity requirements, if you need spinning for AG for health).
The Venus cloud colonies are similarly lightweight and also arguably low tech. These float just above the clouds at the level where the temperature and pressure is similar to that on Earth. It needs the technology of an airship + sulfuric acid resistance. But the acid protection is only for the outer skin of a large habitat. Arguably acid resistance is easier to engineer for than holding in Earth’s atmospheric pressure against a vacuum - and is far less mass at least, just a thin layer of teflon or similar.
When it comes to paraterraforming or the large spinning habitats in space, then as for the Venus cloud colonies what matters is how easy it is to maintain the outer skin.
If you think about it that way, then perhaps the lava tube caves are strong contenders too. Most of the mass is already there - in the form of the lava tube itself. Perhaps you just need to fill in cracks and make it impervious to air. If so, the launch mass from Earth could be very low, lower even than a cloud city or Titan dwelling.
If you can make any of the big structures nearly maintenance free, with an Earth normal atmosphere inside, it might well end up being lower maintenance than e.g. living on Titan without a city dome - once it is built, that is. If it is exceptionally low maintenance, it could be easier than living in a house on Earth.
However nowhere in space can be lower maintenance than living in a tropical jungle on Earth, unless you find a way to make the maintenance totally automated with robotic machinery (as is the case in many science fiction stories).
Note - large spinning habitats do not need any form of propulsion to keep spinning. Maintenance, and the level of technology needed to live in such a habitat would be similar to a city dome, or a lava tube cave.
That’s just the external structure. The internal ecology is likely to require constant monitoring and “gardening”. But that again is the same for a planet.
Only a very mature civilization would have confidence that the ecology of their terraformed planet could continue long term without constant vigilance, monitoring, and correction of issues as they arise, I think. And it would probably gain that confidence at least in part through working with larger and larger enclosed habitats, starting with much smaller scale closed cycle ecosystems of up to a few cubic kilometers, gradually gaining confidence through those experiences and also through study of exoplanets.
That is, unless, of course, we make contact somehow with a mature ETI that has solved these problems already, long ago. Even then, their solutions may need adaptation to terrestrial biology.
WHAT ABOUT ELON MUSK’S “NUKES TO TERRAFORM MARS”?
This was just an off the cuff joking remark he made. He talks about it at 2 minutes in to this video, where he called Mars a “fixer upper of a planet”. He says:
“There’s a fast way and a slow way… The fast way is to drop thermonuclear weapons on the poles":
He gave no details, yet it was taken up in many news stories, presented half seriously as a way to terraform Mars. It was not based on any research, just an off the cuff joking remark by the CEO of a space company. It was soon followed by other news stories, from the more techy and geeky journalists, saying it was impossible to do it that way.
Perhaps he based his remark on the idea that if you liberate enough dry ice, you could kick start a runaway greenhouse. But there isn’t enough dry ice at the Martian poles anyway to reach the magic 6 kPa needed to start a runaway greenhouse; at most you could double the current 0.6 kPa. And the number of nuclear bombs you’d need would make this a vast megaproject, hardly an “easy” solution. You are talking here about hundreds of thousands of hydrogen bombs as powerful as the 50 megaton Tsar Bomba - the largest nuclear bomb ever tested.
This was my article about his idea, in response to those many journalist stories:
When asked for clarification, he later explained that he meant continuously exploding nuclear fusion bombs to form two “mini suns” above the Martian poles - rather a science fiction scenario. See Elon Musk Clarifies His Plan to "Nuke Mars".
Probably many of you have seen this as the screen saver while you wait for a SpaceX video to start - it doesn't give any timescale, however.
As far as I know, SpaceX is focused on space engineering and is not actively researching terraforming. They leave that to the likes of the Mars Society and keen scientists who are researching it anyway.
WHAT ABOUT KIM STANLEY ROBINSON’S ‘THE MARS TRILOGY’?
This is a series of three books (Red Mars, Green Mars and Blue Mars) that came out in the mid 1990s, together with a book of short stories, “The Martians”, which uses the same history as a backdrop. In it he envisions Mars terraformed, developing a planet spanning civilization in a couple of centuries. The main focus is on social issues, but he has a backdrop of terraforming with plausible sounding science, and this has influenced many people to think that terraforming Mars would be easy, accomplished in as little as a couple of centuries. See Mars trilogy - Wikipedia
Kim Stanley Robinson himself says that it would take far longer than his trilogy suggests, which is based on 1980s ideas. In a podcast he gives these as his main points:
- Mars seems to have lost its nitrogen. We need nitrogen
- There could be life in the basement regolith a hundred meters or a kilometer underground and that’s going to be very hard to disprove. So we may be intruding on alien life.
- Its surface is covered by perchlorates, poisonous to humans in the parts per billion. They could be changed into something more benign to humans, by introducing something to eat them, but that would take time.
- The best analogy is Antarctica - beautiful, scientifically interesting, and for Mars, especially of interest for comparative planetology - going to Mars is a way to study Earth
- Mars trilogy is a kind of allegory of people on Earth. We have over 7 billion people and may end up with 9 or 10 billion. There is no way we can use Mars as an escape valve in less than thousands of years.
- He was following Carl Sagan - and Martin Fox, who suggested exploding thousands of thermonuclear bombs buried so deep that they heat up the planet. Then you introduce genomes from Earth.
- He thinks terraforming is for later - once we have a sustainable civilization on Earth and proved we will not wreck this one, we can then consider the next great project.
- He does not think that Mars is in the same relation to Earth as the New World was to the Old World. The New World wasn't really a pioneering colonization anyway, as the first people were there already. Also, Mars is not habitable without terraforming, which would need thousands of years. All of this makes the analogy not applicable in his view. It's not going to be a solution in decades. He says he has a profound disagreement with Robert Zubrin on the New World analogy, and that this is not what Mars is about.
He says that we can’t use Mars as a ‘backup planet’. We have to fix our problems on Earth to have any hope of surviving on the timescales of the book. See the podcast here and a summary on Io9 here.
There are many other issues with his book though, if you consider it as science rather than science fiction:
- He assumes that humans can be genetically engineered to tolerate carbon dioxide instead of nitrogen in the atmosphere - supposedly, by using crocodilian hemoglobin, humans become able to tolerate high levels of CO2, as well as becoming nearly immortal
- He greatly accelerates the timescales, e.g. the effects of photosynthesis, a hundred fold or a thousand fold.
- He doesn’t explain how Mars is kept warm with a CO2 / O2 atmosphere
- His Mohole wouldn’t work - he fudges the numbers, and there is no serious scientific paper suggesting the use of moholes to terraform Mars. He proposes to warm the atmosphere of an entire planet using a number of large radiators at a temperature of around 50°C.
They are typically one kilometer in diameter; let's call the area one square kilometer. There are ten of them, so call that ten square kilometers. With this he proposes to warm up a planet with a surface area of 144.8 million km². The atmosphere is in thermal equilibrium with the surface, so he has to warm up at least the top few millimeters of the regolith, not just the atmosphere. I'm not sure how to do the detailed calculations to see what temperature difference there would be, but it's going to be minute.
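I can at least do a back-of-envelope version, though. Here's a Python sketch using the Stefan-Boltzmann law; the Mars albedo and insolation are standard round figures, and the comparison is my own, not from the book:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

# Ten radiators of ~1 km^2 each, at 50 C (323 K)
mohole_power = 10 * 1e6 * SIGMA * 323 ** 4  # watts, ~6e9 W

# Sunlight absorbed by Mars: ~590 W/m^2 at Mars's distance, albedo ~0.25,
# averaged over the whole sphere (divide by 4), surface area 144.8e6 km^2
solar_power = 590 * (1 - 0.25) / 4 * 144.8e6 * 1e6  # watts, ~1.6e16 W

ratio = mohole_power / solar_power
print(f"moholes {mohole_power:.2e} W vs sunlight {solar_power:.2e} W, ratio {ratio:.1e}")
# In radiative balance, T^4 scales with absorbed power, so dT/T ~ (1/4) dP/P.
print(f"rough warming: {0.25 * ratio * 210:.5f} K")  # of order 0.00002 K
```

So the moholes add heating at the level of a few parts in ten million of what the sun already delivers: a warming of order a hundred-thousandth of a degree.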
- His windmills idea is plain silly for a physicist - I know it's meant as a deception to illegally spread algae - but how could it deceive the other scientists in his plot line?
The wind is going to be slowed down anyway. Slowing it down prematurely using windmills is just a way of concentrating the energy dissipated by slowing it down into a single place on the surface. So there would be no net heat input into the atmosphere as a result of the windmills. It would only make a difference if the atmosphere was moving as an ideal fluid without friction.
Fun scientific quibble for geeks: this is a simplification. Actually, as Lee Weinstein (mechanical engineer and energy researcher at MIT) wrote in his blog post "Windmills on Mars", there would be a very minute, but temporary, warming effect. Slowing down the winds with the windmills reduces the kinetic energy of the Mars atmosphere, and by conservation of energy this has to mean a slight increase in the temperature of Mars. However, this is just a temporary effect while the wind is slowing. Once the average wind speed reaches a new, lower equilibrium, Mars returns to thermal equilibrium with the rest of the universe. So there would be a really tiny increase in temperature for a short while after the windmills are deployed, after which the temperature returns to normal. When you stop the windmills, the opposite happens: it cools down slightly, then returns to normal.
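To put a rough number on "very minute", here's a sketch of the absolute upper bound: the one-off warming if all the kinetic energy of Mars's winds were dissipated as heat. The atmosphere mass and wind speed are round figures of my own, not Weinstein's:

```python
# One-off temperature rise if ALL the kinetic energy of Mars's winds
# were dissipated as heat (an extreme upper bound on the windmill effect)
atm_mass = 2.5e16   # kg, approximate mass of the Mars atmosphere
wind_speed = 10.0   # m/s, an assumed representative average wind speed
cp_co2 = 750.0      # J/(kg K), approximate heat capacity of CO2

kinetic_energy = 0.5 * atm_mass * wind_speed ** 2
delta_T = kinetic_energy / (atm_mass * cp_co2)  # the mass cancels: 0.5*v^2/cp
print(f"KE of the winds: {kinetic_energy:.2e} J")
print(f"upper-bound warming: {delta_T:.2f} K")  # ~0.07 K, and only transient
```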
You could use the same argument for the moholes. The temperature of Mars can only increase temporarily as a result, because no heat is being created. The heat from the interior is just being lost more quickly at that point. The rest of the crust of Mars must be getting slightly less heat radiated through it, so eventually this is going to cause Mars to cool down slightly elsewhere, by tiny immeasurable amounts but probably only on long timescales.
"Robinson has certainly set up the puzzle correctly, but the physics behind many of the solutions his characters propose is silly. Silliest of all are the windmills, which are supposed to heat the planet by using wind-generated electricity to drive heating coils. (I won’t insult the reader’s intelligence by spelling out why this wouldn’t work.) One could argue that the windmills were really just a ruse for illegally dispersing Mars-adapted algae, but it’s more than a little implausible that all the high-powered physicists among the Mars colonists would be taken in. There is other silliness. Polar caps are dissipated by albedo-reducing algae, and water vapor is added to the atmosphere by cometary impacts and ”moholes” without regard to the constraints imposed by Clausius-Clapeyron.
"On the other hand, there are some interesting and workable ideas in Robinson’s book. There is a space mirror to catch sunlight and turn Martian night into day, but to bring Martian insolation up to Earth levels would require a mirror with a cross sectional area equal to Mars itself; still, a more modest mirror with 10% of the Martian cross-section could make a useful contribution. The question of the microclimate of low-lying areas like the Hellas Basin, where surface pressure will be greatest, bears thinking about, as does the circulation one would get around the rim of the basin. It would be rather like Death Valley (how jolly!) or a drained Mediterranean, only more so. If it were up to me, I’d make some use of algae bio-engineered to release HFC’s, and perhaps also synthetic cloud particles optimized to reflect infrared while letting through a lot of sunlight"
(Here Clausius-Clapeyron refers to an equation used to estimate the vapour pressure of liquids such as water, and how it depends on temperature.)
All of this is acceptable in a science fiction book written for entertainment, especially given that he says his main aim was to reflect on conditions on Earth. But it is not a scientific blueprint for terraforming Mars, and was surely never intended as such.
On the issue of intruding on alien life on Mars, then there are many suggestions now for habitats not just at the base of the regolith, but on the surface or in the top cm or so of the soil as well. There is an almost bewildering variety of possible habitats for surface, near surface and subsurface life. None confirmed but many to be investigated. Here are some of them - the links take you to the section of my online Touch Mars? book.
- Nilton Renno's droplets that form where salt touches ice - why did he call a droplet of salty water on Mars "a swimming pool for a bacteria"?
- Recurring Slope Lineae
- Lichens and cyanobacteria able to take in water vapour directly from the 100% night time humidity of the Mars atmosphere
- Liquid brines beneath the surface of sand dunes at night - beneath the sand that Curiosity drives over - that one was reported as uninhabitable but Nilton Renno was not so sure, biofilms might make it habitable
- Transgressing sand dunes bioreactor
- Desert varnish
- Sun warmed dust grains embedded in ice
- Southern hemisphere flow-like features - these may involve fresh water!
- Methane plumes on Mars and the possibility of water deep below the surface in its hydrosphere
- Porous basalt
- Two ways Curiosity's methane spikes could be generated in the shallow subsurface (centimeters deep at most)
- Ice covered lakes habitable for thousands of years after large impacts
- Ice covered lakes from volcanic activity
- Possibility of geological hot spots in present day Mars
- Life in ice towers hiding volcanic vents
Also see Modern Mars Habitability in Wikipedia (one of my contributions to the encyclopedia)
NATIVE MARS LIFE IS NOT NECESSARILY SAFE FOR US, OUR ANIMALS, OR CROPS
The terraforming plans assume that there is nothing on Mars that can harm us already. If there is - or some ancient microbe is activated as a result of the terraforming - there is nothing to say it has to be safe for us or our animals or crops. They do not have to be adapted to us to harm us. Indeed microbes normally become less harmful when they adapt to humans, and may eventually become symbionts.
For instance, the bacterium that causes legionnaires’ disease infects amoebas and biofilms, and it uses the same mechanism to attack the lungs of humans. So a disease of Martian biofilms could easily attack human lungs too.
Microbes can also harm us indirectly by producing a toxin. Examples include botulism, ergot disease, tetanus, and aspergillosis (a fungal disease that can cause allergic reactions such as asthma, and can be fatal to people with damaged immune systems). None of these are adapted to us.
For another example that is plausible for some alien biochemistry: Alzheimer's disease may perhaps be caused by cyanobacteria which produce BMAA, a mimic of the amino acid L-serine, which is not exactly the same and gets misincorporated into the proteins of our body. This is ongoing research - but it highlights something that could easily happen in response to an alien biology with similar, but not identical, chemicals to those used by Earth biology.
Similarly, some algal blooms that form in Lake Erie in the States kill cows. There is no evolutionary advantage - cows are not their natural “prey”. It’s just a coincidence. The same could happen with Mars life and ourselves. For more on all this, see this study, led by David Warmflash of the NASA Johnson Space Center: Assessing the biohazard potential of putative martian organisms for exploration class human space missions, and see the section Many microbes harmful to humans are not "keyed to their hosts" in my online book.
LONG TERM SPACE SETTLEMENT IN FREE SPACE HABITATS
I think one way or another we are likely to find a way to live in large habitats on the Moon. But if not, we have the large habitats in free space, such as the Stanford Torus. They can be positioned anywhere in the solar system, have whatever gravity levels you want, and, with thin film mirrors, as much sunlight as you want, whenever you want it. I think they are the natural end point for space settlement myself.
You could make a habitat like this using materials mined from a small 300 meter diameter NEO such as 4660 Nereus:
4660 Nereus, 300 meters diameter, NEO, easier to get to than the Moon, has more than enough material for the cosmic radiation shielding (main part of the mass) for an entire Stanford Torus with 10,000 inhabitants.
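As a quick sanity check on that claim, here's a Python sketch; the asteroid's bulk density and the torus shielding requirement are round figures I'm assuming, roughly 2 g/cm³ and ten million tonnes for a 10,000 person torus:

```python
import math

diameter_m = 300.0
density_kg_m3 = 2000.0  # assumed bulk density, ~2 g/cm^3
volume_m3 = (4 / 3) * math.pi * (diameter_m / 2) ** 3
mass_tonnes = volume_m3 * density_kg_m3 / 1000

shield_tonnes = 10e6  # assumed shielding mass for a 10,000 person Stanford Torus
print(f"asteroid mass: {mass_tonnes:.2e} tonnes")                       # ~2.8e7 tonnes
print(f"vs shielding requirement: {mass_tonnes / shield_tonnes:.1f}x")  # ~2.8x
```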
Long before you have the capability for terraforming, or can have got even a fraction of the way towards terraforming your first planet, if you ever do succeed at it, you have the capability for these free space habitats.
FREE SPACE HABITATS CAN FILL THE ENTIRE SOLAR SYSTEM TO BEYOND PLUTO - AND GALAXY PROTECTION
Once you have them, just by using larger and larger thin film mirrors to concentrate the sunlight, you can live anywhere in the solar system right out to well beyond Pluto. The mass for larger thin film mirrors will be only a small fraction of the mass of the habitat.
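Roughly how big do those mirrors get? Sunlight falls off as the square of the distance, so at d AU you need d² times the collecting area you'd need at Earth's distance. A quick Python sketch (the habitat's collecting area and the film's areal density are illustrative assumptions of mine):

```python
def mirror_mass(distance_au, collect_area_m2_at_1au, film_g_per_m2=7.0):
    """Area and mass of thin film mirror for Earth-level sunlight at a given distance.

    Solar flux scales as 1/d^2, so the required area scales as d^2.
    film_g_per_m2 is an assumed areal density for thin aluminised film.
    """
    area_m2 = collect_area_m2_at_1au * distance_au ** 2
    return area_m2, area_m2 * film_g_per_m2 / 1000  # m^2, kg

# Illustrative: a habitat needing 1 km^2 of Earth-strength sunlight, out at ~40 AU
area, mass = mirror_mass(40, 1e6)
print(f"mirror area: {area / 1e6:.0f} km^2, film mass: {mass:.2e} kg")
# ~1600 km^2 and ~1.1e7 kg (about 11,000 tonnes) - a lot of film, but still small
# next to the millions of tonnes of shielding for the habitat itself
```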
It's a case of do it once, colonize the entire solar system. Indeed, it would become so easy to colonize that I am concerned about the effect it could have on the galaxy, and I wonder how we will achieve galaxy protection - a long way into the future, that is, at present. See my draft article: Galaxy Protection Solutions to Fermi's Paradox - No Need to be Scared of 'Great Filter' - I'm working on that one at present and plan to post it in my blog in the near future. It's based on a long section on galaxy planetary protection in my Touch Mars? book. See:
- Travel to other stars
- Planetary protection for other stars and exoplanets
- Galaxy protection - what about colonizing other star systems?
I think we will find a way through it, though, and if so, we could end up with a solar system of trillions upon trillions of humans living sustainably, if we so wish.
And such a civilization would be resilient to almost anything. When the sun goes red giant, just move the habitats further out - or, for the already distant settlements, simply reduce the size of the thin film mirrors reflecting sunlight into the habitats.
I cover the lunar gardening in detail in my
I cover the asteroid habitats in my
I cover the idea of space habitats right out to Pluto and beyond in this section of my Touch Mars? book.
- Space habitats made from asteroid and comet materials get plenty of sunlight - right out to Pluto (using thin film mirrors to concentrate the light) (above)
My original motivation for exploring alternatives to Mars was for reasons of planetary protection. For someone keen on humans in space but also keen on science and interested in the possibility of finding life based on a different biology, it would be tragic to make life on Mars extinct in our eagerness to send humans there.
But I then realized that the Moon is greatly favoured anyway and that Mars is not the natural next place for humans that it seems to be at first.
DO COMMENT ON THIS POST
Any thoughts, ideas, any questions or suggestions, anything I've missed out? Do say in the comments area on this post.
Also if you spot any mistakes however minor, be sure to say. Thanks!
TOUCH MARS? BOOK
You can read my Touch Mars? book free online here:
Touch Mars? Europa? Enceladus? Or a tale of Missteps? (equivalent to 1938 printed pages in a single web page, takes a while to load) also available on Amazon kindle
The other ones are
- Case For Moon First: Gateway to Entire Solar System - Open Ended Exploration, Planetary Protection at its Heart - and on kindle
- MOON FIRST Why Humans on Mars Right Now Are Bad for Science - and on kindle.
This includes my An astronaut gardener on the Moon
|
The genomes of all cellular organisms, from bacteria to humans, consist of double-stranded DNA. In viruses, by contrast, there is tremendous diversity of genomes: double-stranded or single-stranded, DNA or RNA, positive- or negative-sense. Only viruses have RNA genomes.
In terms of virus genomes, “negative sense” means that a single-stranded nucleic acid molecule has the opposite sequence to messenger RNA (mRNA), and so cannot be translated into protein until it has been copied. This has important biological implications for viruses with negative-sense RNA genomes. Since host cells have no biochemical mechanism for copying RNA from an RNA template, every negative-sense RNA virus must carry an RNA-dependent RNA polymerase (or “replicase”, as it is frequently called) within the virus particle, or the virus genome would be biologically inert once in a host cell.
There are seven virus families and 31 genera of viruses with negative-stranded RNA genomes, and these groups contain some very important pathogens. Based on their similar genetic structure, four of these virus families are believed to have arisen from a common ancestor, and are grouped into a taxonomic order, the Mononegavirales (non-segmented negative-strand viruses).
The Bornaviruses are a relatively little-studied group, giving rise to Borna disease, a neurological syndrome of warm-blooded animals. The Filoviruses have pleiomorphic (variably shaped), elongated particles approximately 80 nm in diameter and between 130 and 14,000 nm long – hence their name, which means “thread-like” viruses. The Filovirus genome encodes seven proteins on monocistronic mRNAs which are complementary to vRNA (the virus genome). Until recently, relatively little work had been performed on these viruses because of the difficulties of working with them, but their replication is known to be similar to that of the rhabdoviruses and paramyxoviruses (also members of the Mononegavirales).
The Paramyxoviruses have enveloped particles which are 125-250 nm in diameter. Their genome contains a linear arrangement of six genes, separated by repeated sequences. Paramyxoviruses include:
- Parainfluenzaviruses: These cause acute respiratory infections ranging from relatively mild influenza-like illness to bronchitis, croup and pneumonia.
- Respiratory Syncytial Virus (RSV): A major cause of lower respiratory tract disease in infants.
- Measles: A highly infectious virus spread by aerosols, which causes a systemic infection with complications including ear infections (1 in 20 cases), pneumonia (1 in 25), convulsions (1 in 200), meningitis/encephalitis (1 in 1,000), subacute sclerosing panencephalitis (SSPE) (1 in 1,000,000), and even death (1 in 2,500-5,000 cases).
The Rhabdoviruses have distinctive bullet-shaped particles with prominent protein spikes on the surface of their lipid envelope. Rhabdovirus genomes are around 11 kilobases long and contain five genes. Diseases caused by rhabdoviruses include vesicular stomatitis in cattle, pigs, horses and wildlife, and rabies, which causes a fatal encephalitis.
In addition to the Mononegavirales, other negative-sense RNA viruses have segmented genomes, i.e. their genomes comprise a number of separate molecules, all of which must be packaged into a particle in order to give rise to an infectious virus.
The best known of these are the Orthomyxoviruses, which include influenza virus. Influenza viruses can infect a wide variety of mammals, including humans, horses, pigs, ferrets and birds, and are a major human pathogen. Unlike other negative-sense RNA viruses, Orthomyxovirus genomes are replicated in the nucleus of the host cell, rather than in the cytoplasm.
In summary, this is possibly the most biologically diverse class of viruses.
- Mononegavirales (replication in cytoplasm)
- Orthomyxoviruses (replication in nucleus)
- Arenaviruses, Bunyaviruses (ambisense genome)
- Different strategies of gene expression to cope with genome coding patterns.
- They include some of the most important virus pathogens.
|
While completing a novel study of the novella A Christmas Carol by Charles Dickens, have your students answer reading comprehension questions for every chapter (stave) using this organized packet.
Included are 5 sets of questions (one for each stave) which analyze character motives, themes, symbols, irony, dialogue, and basic comprehension of the plot. Detailed answer keys are provided for all questions. There are also after-reading discussion questions included in the Stave 5 section. You can print this resource as one all-encompassing packet, or you can provide students worksheets one stave at a time.
This resource includes the following formats:
READY TO PRINT Student Copy of Questions (PDF)
EDITABLE Student Copy of Questions (Word Document)
ANSWER KEY TEACHER COPY of Questions (PDF)
|
Hey there, language lovers!
Ready for a deep dive into the intriguing world of the passive voice?
Buckle up because we’re about to unveil the mystery behind this often misunderstood part of English grammar.
By the end of this session, you’ll not only understand what the passive voice is, but you’ll also know when and how to use it effectively in your writing.
So, let’s get started!
What Is Passive Voice?
Imagine you and your friends are playing a game of catch in the park. You might say, “I threw the ball.” That’s what we call an active voice sentence because you (the subject) are doing the action.
But what if we flipped things around and made the ball the star of the sentence? We’d say, “The ball was thrown by me.” Suddenly, the ball is getting all the attention. This, my friend, is what we call the passive voice.
So, in the simplest terms, passive voice is when what’s usually getting the action turns into the one leading the sentence.
Whether it’s in day-to-day conversation, in an exciting novel, or even in a formal report, you’ll find the passive voice at work everywhere once you start looking for it.
Active Voice Vs Passive Voice: What’s The Difference?
Let’s break down this whole active vs passive thing.
| Active Voice | Passive Voice | Examples Of Passive Voice |
| --- | --- | --- |
| Active voice is direct and clear. It tells the reader exactly who is doing what. | Passive voice can sometimes be less clear about who is performing the action. | “My bike was stolen.” (We don’t know who stole it.) |
| Active voice is less formal and often used in casual or conversational writing. | Passive voice can sound more formal or academic, making it a common choice in scientific or scholarly writing. | “The experiment was conducted carefully.” (Sounds like a science report, right?) |
| Active voice emphasizes the ‘doer’ of the action. It’s about who is performing the action. | Passive voice puts more emphasis on the action itself or the receiver of the action. It’s about what is happening. | “The gift was opened with excitement.” (The focus is on the action of opening the gift and the receiver’s reaction.) |
| In active voice, the doer is known and is important to the context of the sentence. | In passive voice, the doer may be unknown, unimportant, or obvious from the context, so it’s often left out. | “A new record was set at the race.” (It’s more about the record, not who set it.) |
| Active voice usually uses fewer words, making sentences more concise. | Passive voice often uses more words, as additional words are needed to indicate who or what is performing the action (if included). | “The cake was baked by John.” (6 words) |
| Active voice directly attributes actions, making responsibilities clear. | Passive voice can obscure responsibility, which can be useful if you want to avoid placing blame. | “Mistakes were made in the project.” (We’re not pointing fingers at any specific person or group. It could have been anyone involved in the project.) |
Easy Steps To Form A Passive Sentence
Making a sentence passive isn’t hard. Just follow these steps:
| Step | Example |
| --- | --- |
| Start with an active sentence. | Active: “The cat chased the mouse.” |
| Identify the object of the sentence, which is the thing that the action is being done to. | Passive: “The mouse was chased…” |
| Move the object to the beginning of the sentence and make it the subject. | Passive: “The mouse was chased…” |
| Add the correct form of the verb “to be” (like “is,” “was,” or “has been”) followed by the past participle of the main verb. | Passive: “The mouse was chased…” |
| Add ‘by’ and the original subject at the end (if you want to mention who did the action). | Passive: “The mouse was chased by the cat.” |
And there you have it! You’ve just made a passive sentence.
The Word ‘By’ In Passive Voice
You know how in a movie they always say, “Directed by…” before naming the director? That’s kind of how ‘by’ works in passive voice!
In passive sentences, we use the word ‘by’ when we want to tell who did the action.
So if your whole cake disappeared and you found out your brother was the culprit, you’d say, “The cake was eaten by my brother!” Here, ‘by’ helps us point out the sneaky cake-eater.
Just remember, ‘by’ is like the arrow that points to who did the action in a passive sentence.
Using Passive Voice In Different Tenses
| Tense | Passive Voice Example |
| --- | --- |
| Present simple | Text abbreviations are frequently used by smartphone users in informal communication. |
| Present perfect | Many songs have been sung by that famous singer. |
| Past simple | The Truth or Dare questions were asked by the group during the party. |
| Past perfect | All the tickets had been sold for the concert before we arrived. |
| Future simple | A new skate park will be built in our neighborhood next year. |
| Future perfect | By the end of the summer, all the books will have been read by the book club. |
Myths About Passive Voice
1. Passive voice is always wrong: Nope, that’s not true! Passive voice is just another way to structure sentences. It can be very useful in certain situations, like when the focus is on the action or the receiver of the action.
2. Using passive voice makes your writing weak: Not necessarily! Both active and passive voice have their strengths. It’s a way to change up your sentence structure and create different effects.
3. Passive voice should always be avoided: That’s a myth too! Sometimes passive voice is the best choice. It can be handy when you don’t know who did the action or when it’s not important to mention the doer.
4. Passive voice is always wordy: While passive voice can sometimes use more words, it’s not always the case. It depends on how you construct your sentence.
5. Passive voice is formal and boring: Not true! Passive voice can add variety and make your writing more interesting. It’s like adding a splash of color to your sentences.
6. Passive voice always hides the doer: While passive voice can sometimes omit the doer or make it less clear, it doesn’t always hide the doer. You can include the doer in a passive sentence if it’s important or if you want to give credit where it’s due.
Remember, using passive voice has its own purpose and can be used effectively in different situations. Don’t be afraid to explore and experiment with it!
Fun Passive Voice Exercises With Answers
Let’s have some fun with the passive voice. I’ll give you a sentence in the active voice, and your job is to switch it over to the passive voice. I’ll provide the answers too, but give it a shot first, okay?
1. Active: She wrote an interesting book.
2. Active: My dog chased the postman.
3. Active: The teachers will mark the exams next week.
4. Active: The mechanic fixed my car.
5. Active: The director will release the new movie next month.
And here are the answers:
- Passive: An interesting book was written by her.
- Passive: The postman was chased by my dog.
- Passive: The exams will be marked by the teachers next week.
- Passive: My car was fixed by the mechanic.
- Passive: The new movie will be released by the director next month.
How did you do? Remember, the key to mastering the passive voice is understanding its structure and purpose: to emphasize the action, or the recipient of the action, rather than the doer of the action.
Quick Quiz: Test Your Passive Voice Skills
Now I’ll give you a sentence and your task is to identify whether it’s in active or passive voice. Ready?
- The movie was watched by millions of people.
- You should finish your homework.
- The award was won by the youngest contestant.
- Mom is baking an apple pie.
- A novel was written by the famous author.
Wrap-Up: Understanding Passive Voice Better
Whew! What a journey we’ve been on, huh?
We’ve unraveled the complexities of the passive voice together, and hopefully, it’s not as intimidating as it may have seemed at first.
The key takeaway here? Use the passive voice when the action is more important than who performed it or when you don’t know or want to hide who’s responsible. It’s all about adding variety to your writing and being intentional with your words.
Now that you’ve leveled up your passive voice skills, why not share this knowledge with your friends?
Sharing is caring, and your pals might also benefit from these handy grammar tips. Go ahead and click on that share button!
And don’t forget to follow Hi English Hub on Pinterest and Twitter for more exciting language adventures. I’m constantly bringing you all sorts of fun language and grammar tidbits to keep you sharp and savvy.
So, keep practicing, stay curious, and until our next linguistic deep-dive, happy writing, everyone!
|
Ideally, the law is moral. However, that is not always the case; laws can be unjust. For example, anti-miscegenation laws prevented people of different races from marrying, making interracial relationships a crime, which meant police could arrest someone for them. Ultimately, the Supreme Court struck these laws down as unconstitutional because they discriminated against people based on race.
Today, some laws are still unjust, resulting in a trickle-down effect. For example, crack cocaine triggers a prison sentence far more easily than powder cocaine. African American communities have historically been more likely to use this form of cocaine, making it more likely that courts incarcerate them for this crime. Legislation is tackling this issue, but it is still a problem. Here is more on the trickle-down effect and how incarceration hurts.
Criminal justice system
The criminal justice system comprises the institutions that create and enforce policies around crime: the police, the courts, and prisons. Local, state, and federal governments all have criminal justice systems.
Right now, legislators are debating which crime classifications should send someone to prison, and for how long. The main motivator behind their decisions is deterring future crime, both by others and by the individual who committed the crime.
Whether, and for how long, someone should go to prison for a crime is a hot topic. For youth, incarceration can encourage future criminal activity, so it is often ineffective. It also reduces the chances of an individual returning to school and earning a decent living. For these reasons, many minors go to diversion programs instead.
Laws define crimes. Due to the trickle-down effect, these can be immoral and contribute to other social problems.
|
The Acts of the Apostles, often referred to simply as Acts, or formally the Book of Acts, is the fifth book of the New Testament; it tells of the founding of the Christian church and the spread of its message to the Roman Empire.
Acts and the Gospel of Luke make up a two-part work, Luke-Acts, by the same anonymous author, usually dated to around 80-90 AD, although some experts now suggest 90-110. The first part, the Gospel of Luke, tells how God fulfilled his plan for the world's salvation through the life, death, and resurrection of Jesus of Nazareth, the promised Messiah. Acts continues the story of Christianity in the 1st century, beginning with the ascension of Jesus to Heaven. The early chapters, set in Jerusalem, describe the Day of Pentecost (the coming of the Holy Spirit) and the growth of the church in Jerusalem. Initially, the Jews are receptive to the Christian message, but later they turn against the followers of Jesus. Rejected by the Jews, the message is taken to the Gentiles under the guidance of the Apostle Peter. The later chapters tell of Paul's conversion, his mission in Asia Minor and the Aegean, and finally his imprisonment in Rome, where, as the book ends, he awaits trial.
Luke-Acts is an attempt to answer a theological problem, namely how the Messiah of the Jews came to have an overwhelmingly non-Jewish church; the answer it provides is that the message of Christ was sent to the Gentiles because the Jews rejected it. Luke-Acts can also be seen as a defense of (or "apology" for) the Jesus movement addressed to the Jews: the bulk of the speeches and sermons in Acts are addressed to Jewish audiences, with the Romans serving as external arbiters on disputes concerning Jewish customs and law. On the one hand, Luke portrays the followers of Jesus as a sect of the Jews, and therefore entitled to legal protection as a recognised religion; on the other, Luke seems unclear as to the future God intends for Jews and Christians, celebrating the Jewishness of Jesus and his immediate followers while also stressing how the Jews had rejected God's promised Messiah.
|
Letter Z Lesson Plan
Nouns are the words that we give to objects and places. They are our labelling words. Certain nouns have really powerful connotations. Connotations are all of the things that a reader thinks about when they come across a certain word.
The prison loomed over the horizon.
The word ‘prison’ might have negative connotations, even threatening ones, as it is associated with crime and criminals. If we change the noun in the sentence but keep the verb, we can create a different impression:
The school loomed over the horizon.
This time, the reader’s impression will be more informed by the noun, and the verb being used in the sentence will add to this. A ‘school that looms’ arguably has more positive connotations.
When you have a go at your story or poem today, consider really carefully how your verbs and nouns will work together to create an impression.
If you struggle to remember the differences between the different word classes, this helpful video will remind you.
|
How is particle size determined?
For 3D non-spherical or non-cubic particles, more than one parameter is required to define their dimensions. For regularly-shaped objects such as a rectangular box (2 or 3 dimensional numbers) or a cylinder (2 dimensional numbers), you can select a few dimensional numbers to describe the particle. For irregular particles, however, dimensions can’t be described with a few parameters, and when working with millions of particles, describing them individually isn’t practical. Only one number should be used to characterize each particle, and this number is size. The definition used to define size will affect the data obtained.
The most common size definition used to describe irregularly-shaped 3D particles is equivalent spherical representation. When using only one value (e.g., diameter) to describe size, all the dimensional information is condensed into a single number and will contain distorted or summarized information about the particle due to its shape.
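As a concrete illustration, here is a minimal Python sketch (the function names are my own) of two common equivalent-size definitions. They give different answers for the same irregular particle, which is exactly the distortion described above:

```python
import math

def equiv_sphere_diameter(volume: float) -> float:
    """Diameter of the sphere with the same volume: V = (pi/6) * d^3."""
    return (6 * volume / math.pi) ** (1 / 3)

def equiv_circle_diameter(projected_area: float) -> float:
    """Diameter of the circle with the same projected area: A = (pi/4) * d^2."""
    return math.sqrt(4 * projected_area / math.pi)

# A hypothetical 10 x 5 x 2 um box-shaped particle:
d_volume = equiv_sphere_diameter(10 * 5 * 2)  # from its volume
d_area = equiv_circle_diameter(10 * 5)        # from its largest projected face
print(f"volume-equivalent: {d_volume:.2f} um, area-equivalent: {d_area:.2f} um")
# ~5.76 um vs ~7.98 um for the same particle
```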
|
In 1974, near the village of Hadar in the northern Afar region in Ethiopia, a marvelous fossil was discovered. It had an estimated age of 3.2 million years and represents the first fossil of its kind ever found by scientists.
The fossil was found by Donald Johanson, an American paleoanthropologist, currently a professor at the School of Human Evolution and Social Change at Arizona State University College of Liberal Arts and Sciences and the Founding Director of the Institute of Human Origins. The fossil he uncovered was the Lucy specimen of Australopithecus afarensis, which is thought to be very closely related to humans on the tree of life. Some speculate that A. afarensis could even be a direct ancestor to Homo, but that is currently unknown.
The Institute of Human Origins built an intriguing website around early hominin evolution called Becoming Human, with a lot of exciting features. At its center is a video documentary, narrated by Johanson, also called Becoming Human. It examines the evidence from early hominins, the anatomy of different hominin fossils with a particular focus on bipedalism, the origin and extinction of the Neanderthals and their genetic legacy to modern humans, as well as the development of early human culture. The first chapter starts off with a fascinating introduction to whet our appetite:
What is it that makes us human? That gives us the ability to reflect on the past and ponder the future? Who we are as a species and where we came from make up the basis of a fantastic story, spanning more than 4 million years.
Johanson takes us on a journey back to Hadar in Ethiopia and describes the different kinds of field work he performed there, reaching a crescendo with the discovery of the Lucy specimen, remarkably complete (40%) given its age. There have been many fossil discoveries since, but this one became a key catalyst for future developments.
This video also attempts to answer many crucial questions: how do scientists reconstruct ancestral environments? How are bones fossilized? How did early hominins avoid predators like hyenas? How does anatomic and molecular evidence demonstrate common ancestry between humans and chimps? How do large volcanic eruptions and rainfall help scientists identify fossil footprints? What positive feedback mechanisms were involved in bipedalism?
What other early hominin fossils have been uncovered? How did Homo erectus disperse around the planet? What is the origin of the neanderthals? What happened to them? Have they contributed any genes to modern humans?
Besides the core documentary, there is also a ~20 minute side-documentary about early craftsmanship that goes into additional detail about what kind of stone tools were used by early hominins. This video is also accompanied by many other features. There is a regularly updated news section and a section with book reviews.
There are also interactive browser games that let you build skeletons or align chromosomes, and about a dozen classroom materials that you can cut out and use during class. An interactive timeline lets you look at human fossils across a period of seven million years, and a robust glossary will help you understand core concepts.
|
Chemical Engineering: Principles of Chemical Processes
SO2 Removal from Power-Plant Stack Gases∗
Numerous inventories of the world’s energy reserves have shown that coal is the most abundant
practical source of energy for the next several decades. Two immediate problems have become
apparent as the use of coal has increased: mining can be costly, both economically and
environmentally, and air pollutant emissions are relatively high when coal is burned.
Stack gases from coal-fired furnaces contain large quantities of soot (fine unburned carbon particles) and
ash; moreover, most coals contain significant amounts of sulfur, which when burned forms sulfur
dioxide, a hazardous pollutant. In this case study we examine a process to reduce pollutant
emissions from coal-fired power plant boiler furnaces.
SO2 emissions from coal-fired furnaces whose construction began after
August 17, 1971, must, by Environmental Protection Agency (EPA) regulation, contain less
than 1.2 lbm SO2 per 10^6 Btu (heating value of fuel fed to the boiler). When coal containing a
relatively high quantity of sulfur is to be burned, the emissions standard may be satisfied
by removing sulfur from the coal prior to combustion or by removing SO2 from the product
gases before they are released to the atmosphere.
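To see what the standard implies, note that essentially all of the fuel sulfur burns to SO2, and each lbm of sulfur yields 2 lbm of SO2 (molecular weights 64 and 32). The following Python sketch uses the heating value from Table CS 2.1; the sulfur contents shown are illustrative, not necessarily those of the case study coal:

```python
def so2_per_million_btu(sulfur_wt_pct, heating_value=13240.0):
    """lbm SO2 emitted per 10^6 Btu of fuel heating value, assuming S -> SO2.

    heating_value is in Btu/lbm dry coal (13,240 from Table CS 2.1);
    sulfur_wt_pct is the dry-basis sulfur content of the coal.
    """
    so2_per_lbm_coal = 2.0 * sulfur_wt_pct / 100.0  # MW ratio SO2/S = 64/32 = 2
    return so2_per_lbm_coal / heating_value * 1.0e6

for s in (0.5, 0.8, 2.0, 4.0):
    print(f"{s}% S coal: {so2_per_million_btu(s):.2f} lbm SO2 per 10^6 Btu")
# The 1.2 lbm per 10^6 Btu limit is met without scrubbing only below about 0.8% S.
```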
The technology for removing SO2 from
stack gases is currently more advanced than that for sulfur removal from coal, and a large
number of stack gas desulfurization processes are currently in various stages of commercial development.
Sulfur dioxide removal processes are classified as regenerative or throwaway, according to
whether the agent used to remove SO2 is reusable.1 Regenerative processes have two major steps:
the removal of SO2 from stack gases by a separating agent, and removal of SO2 from the
separating agent. An example of such a procedure is the Wellman-Lord process: absorption of SO2
by a solution of Na2SO3 to produce NaHSO3, followed by the release of SO2 by partial
vaporization of the NaHSO3 solution. In this process the Na2SO3 solution is regenerated for
reuse as the absorbent.
Na2SO3 + SO2 + H2O → 2NaHSO3 (absorption)
2NaHSO3 → Na2SO3 + SO2 + H2O (regeneration)
Throwaway processes utilize a separating agent to remove SO2, followed by the disposal
of both SO2 and the separating agent. Wet limestone scrubbing is one of the most advanced
throwaway processes in terms of industrial acceptance. Several versions of this process have
been developed, one of which is examined in detail in this case study. Parts of the process
have proved troublesome, particularly those involving deposition of solids on surfaces of the process equipment.
Participants in the case study may find it interesting to learn where the
trouble spots are and to check recent technical articles on SO2 removal processes for
discussions of approaches to solving these problems. Such articles are published frequently
in Chemical Engineering Progress, Environmental Science and Technology, and other journals.
∗ This case study was prepared for the first edition of the text with the assistance of Norman Kaplan of the U.S.
Environmental Protection Agency.
1 Alternatively, the processes may be classified according to whether the sulfur is recovered as a saleable product.
BOILER-INJECTION, WET-LIMESTONE PROCESS DESCRIPTION
The plant to be described is to produce 500 MW of electrical power. The flow rates, compositions,
stream conditions, and other details to be given are representative of such installations. The key
step in removing SO2 from the stack gas is the reaction of SO2 with CaO and oxygen to produce
CaSO4, an insoluble stable compound. Four major components of the process will be traced: the
coal-limestone-stack gas streams, the scrubber water, the cooling-heating water cycle, and the
generated steam cycle.
The composition of coal can vary considerably, but that shown in Table CS 2.1 is typical of
that used in this process. During coal combustion the sulfur in the coal reacts to form SO2 and very
small amounts of SO3. Eighty-five percent of the ash in the coal leaves the boiler in the stack gas as
fly ash; nitrogen emerges as N2, and the carbon, hydrogen, and sulfur in the fuel are oxidized
completely to CO2, H2O, and SO2.
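The element balances described above can be sketched in a few lines of Python. The ultimate analysis below is an assumed placeholder (the real values belong in Table CS 2.1); the sketch mirrors the kind of calculation asked for in problems CS 2.2 to CS 2.4.

```python
# Illustrative combustion bookkeeping for 100 lbm/min of dry coal.
# The ultimate analysis is an assumed placeholder, not the Table CS 2.1 data.
ultimate = {"C": 0.75, "H": 0.05, "S": 0.02, "O": 0.07, "N": 0.015}  # mass fractions
atomic_wt = {"C": 12.01, "H": 1.008, "S": 32.07, "O": 16.00, "N": 14.01}

basis = 100.0  # lbm dry coal per minute
moles = {el: basis * frac / atomic_wt[el] for el, frac in ultimate.items()}
# moles[el] = lb-mol/min of each atomic species in the coal

# Theoretical O2 demand: C -> CO2 needs 1 O2 per C, H -> H2O needs 1/4 O2 per H,
# S -> SO2 needs 1 O2 per S, with credit for oxygen already bound in the coal.
o2_theoretical = moles["C"] + moles["H"] / 4 + moles["S"] - moles["O"] / 2
o2_fed = 1.40 * o2_theoretical        # 40% excess, as in the process description
air_fed = o2_fed / 0.21               # dry air is about 21 mol% O2

print(f"Theoretical O2 demand: {o2_theoretical:.2f} lb-mol/min")
print(f"Air fed at 40% excess: {air_fed:.1f} lb-mol/min")
```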
Finely ground limestone, whose composition is given in Table CS 2.2, is injected directly into
the furnace where complete calcination occurs.²
CaCO3 → CaO + CO2
The limestone feed rate to the furnace is 10% in excess of that required for complete consumption
of the generated SO2. Both limestone and coal enter the process at about 77°F. A waste stream
consisting of 15% of the limestone inerts and coal ash is removed from the furnace at 1650°F.
2 Direct injection of limestone is the method of operation in this case study, but in conventional practice the flue gas and a
limestone slurry are contacted in an external scrubber.
TABLE CS 2.2 Limestone Properties
Component Dry Wt%
Moisture: 10 wt% water
2. Heat capacity of inerts
Cp [Btu/(lbm·°F)] = 0.180 + 6.00 × 10⁻⁵ T (°F)
TABLE CS 2.1 Coal Properties
1. Composition (ultimate analysis)
Component Dry Wt%
Moisture: 4.58 lbm/100 lbm wet coal
2. Heat capacities
Dry coal: Cp = 0.25 Btu/(lbm·°F)
Ash: Cp = 0.22 Btu/(lbm·°F)
3. Heating value of coal: 13,240 Btu/lbm dry coal
Air at 110°F and 30% relative humidity is brought to 610°F in an air preheater, and the heated
air is fed to the furnace. The air feed rate is 40% in excess of that required to burn the coal
completely. Gases from the furnace containing fly ash, CaO, and CaSO4 and at 890°F are cooled in
the air preheater and then split into three trains. The gas in each train is cooled further to 177°F, and
fed to a scrubber where it is contacted with an aqueous slurry of CaO and CaSO4. Sulfur dioxide is
absorbed in the slurry and reacts with the CaO.
The gas leaving each scrubber contains 3.333% of
the SO2 and 0.3% of the fly ash emitted from the boiler furnace. The effluent gas from the scrubber,
which is at 120°F and saturated with water, is heated and mixed with the gas streams from the other
trains. The combined gas stream is sent to a blower where its pressure is increased from 13.3 psia
to 14.8 psia; it is then exhausted through a stack to the atmosphere.
The liquid feed enters the scrubber at 117°F and contains 10.00 wt% solids; it is fed at a rate
such that there are 6.12 lbm liquid per lbm inlet gas. Liquid scrubber effluent at 120°F is sent to a
holding tank where it is mixed with fresh makeup water and water recycled from a settling pond.
From the holding tank, one stream is recycled to serve as liquid feed to the scrubber and another is
pumped to the settling pond for solids removal.
Generation of steam and its utilization in the production of electricity in this plant is typical of
many power cycles. Steam is generated in the boiler and leaves the boiler and superheater tubes at
1400°F and 2700 psia. It is expanded through a turbine where its pressure and temperature are
reduced to 5 psia and 200°F. The low-pressure steam is then condensed at constant pressure and
pumped isothermally to the inlet boiler tubes.
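A rough first-law sketch of the turbine step (illustrative only: the enthalpies and steam rate below are assumed round numbers, not case-study data; the true enthalpies at 2700 psia/1400°F and 5 psia/200°F should be read from steam tables):

```python
# Turbine work from an energy balance: W = m_dot * (h_in - h_out).
# All three numbers below are assumed placeholders for illustration.
h_in = 1690.0    # Btu/lbm, assumed enthalpy of steam at 2700 psia, 1400°F
h_out = 1148.0   # Btu/lbm, assumed enthalpy of exhaust at 5 psia, 200°F
m_dot = 3.5e6    # lbm/h, assumed steam circulation rate

work_btu_per_h = m_dot * (h_in - h_out)
work_mw = work_btu_per_h / 3.412e6    # 1 MW is about 3.412e6 Btu/h
print(f"Turbine output: {work_mw:.0f} MW")   # ~556 MW with these placeholders
```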
The temperature of the water used to cool the gas entering the scrubber is 148°F. The hot
water at 425°F is then used to reheat the effluent gas stream from the scrubber. (The water thus
undergoes a closed cycle.)
The power company for which you work is contemplating adding an SO2 scrubber to one
of its generation stations and has asked you to do the preliminary process evaluation. In
solving the following problems, you may neglect the formation of SO3 in the furnace, and
assume that CaSO4 and CaO are the only calcium compounds present in the slurry used in the
scrubber (i.e., neglect the sulfite, bisulfite, and bisulfate compounds that are present to some
extent in the real process).
Problems CS 2.2 through CS 2.7 should be answered using a basis of 100 lbm/min of wet coal fed to the furnace.
CS 2.1. Construct a flowchart of the process, labeling all process streams. Show the details of only one
train in the SO2 scrubber operation.
CS 2.2. From the data on coal composition given in Table CS 2.1, determine the molar flow rate of each
element other than ash in the dry coal.
CS 2.3. Determine the feed rate of O2 required for complete combustion.
CS 2.4. If 40% excess O2 is fed to the boiler, calculate:
(a) The air feed in
(ii) Standard cubic feet/min.
(iii) Actual cubic feet/min.
(b) The molar flow rate of water in the air stream.
CS 2.5. Determine the rate of flow of CaCO3, inerts, and H2O in the limestone feed.
CS 2.6. Estimate the rate at which each component in the gas leaves the furnace. What is the waste removal
rate from the boiler?
CS 2.7. At what rate must heat be removed from the furnace?
CS 2.8. Plants of the type under consideration operate at an efficiency of about 35%; that is, for each unit of
heat extracted from the combustion process, 0.35 units are converted to electrical energy. From
this efficiency and the specified power output of 500 MW, determine:
(a) Coal feed rate in lbm/h.
(b) Air feed rate in
(ii) Standard cubic feet/min.
(c) The flow rate of each component in the gas leaving the furnace.
CS 2.9. How much additional coal is consumed in the boiler because of the addition of limestone?
CS 2.10. Calculate the feed rate of liquid to each scrubber in lbm/h.
CS 2.11. Estimate the composition and flow rates of the gas and liquid streams leaving the scrubber. Are the
EPA requirements satisfied?
CS 2.12. Determine the rate at which water (fresh water and recycled water from the pond) must be mixed
with the effluent from the scrubber to reduce the solids content (ash, CaO, CaSO4) to 10 wt%.
CS 2.13. If essentially all the solids in the waste stream fed to the settling pond are precipitated, and if the
pond surface area is such that half of the water in the waste stream is evaporated, at what rate, in
gallons per minute, must fresh water be fed to the process?
CS 2.14. Determine the temperature of the gas stream as it leaves the heat exchanger following the boiler.
CS 2.15. What is the water circulation rate through the heat-recovery loop; that is, the flow rate of the stream
that cools the gases entering the scrubber and heats the absorber effluent? What is the minimum
pressure at which this cycle can operate with liquid water? To what temperature is the gas leaving
the scrubber reheated before it is mixed with gas from other trains?
CS 2.16. One of the design specifications for power plant boilers is the amount of excess air used in burning
coal. Evaluate the heat removed from the boiler for 20% and 100% excess air if the temperature of
the exit gases and slag is 890°F. What are the ramifications of altering the ratio of air to coal?
CS 2.17. At first glance it might appear that there is no need to split the exit gases into three streams, only to
remix them later in the process. However, for the scrubbers under consideration, the maximum
allowable velocity of the gas through the empty column is given by
v_m (ft/s) = 0.15 [(ρ_L − ρ_G)/ρ_G]^0.5
where ρ_G and ρ_L are the densities of the gas and liquid phases. Estimate the minimum column
diameters for one-, two-, and three-train operations. Why is the three-train operation used? (A numerical sketch of this calculation appears after the problem list.)
CS 2.18. Why is the gas leaving the scrubber reheated before it is sent to the stack? (Think about it—the
answer is not contained in the process description.)
CS 2.19. If 2.5% of the heat removed from the boiler is lost to the surroundings, at what rate is steam generated?
CS 2.20. Neglecting kinetic and potential energy changes across the steam turbine, calculate the rate at
which work is produced in megawatts.
CS 2.21. What is the flow rate of cooling water through the steam condenser if the water temperature is
allowed to increase by 25°F?
CS 2.22. The pump that transports the steam condensate from the condenser to the boiler has an efficiency of
55% (i.e., 55% of the energy input to the pump is converted to useful work on the condensate).
Neglecting friction losses in the condensate flow and changes in kinetic and potential energy, what
is the required energy input to the pump in horsepower?
CS 2.23. It is estimated that the total capital costs for the SO2-removal portion of this plant will be $25
million. The lifetime of the plant is determined to be 25 years, assuming 7000 hours per year
operation. The annual operating costs, including labor, maintenance, utilities, and the like, are
estimated to be about $10.5 million. Using current costs for electrical energy, estimate the
incremental cost per kilowatt for the desulfurization process. (Note: The costs are based on 1978
estimates. The cost figures can be updated using available cost indices.)
Additional Problems for Study
CS 2.24. Power companies have objected to the wet-limestone scrubbing process, claiming it creates more
environmental problems than it solves. What environmental problems are created by this process?
CS 2.25. As pointed out earlier, many of the problems associated with boiler injection of limestone have
proved to be insurmountable. Discuss what you think some of the problems might be, and propose
an alternative processing scheme that retains the essential features of wet limestone scrubbing. Feel
free to use available references.
CS 2.26. There have been many changes in technology and regulations since this case study was originally
prepared. Search the internet for at least one modification in regulations and/or one new
technology associated with removing SO2 from power-plant stack gases.
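For readers who want a numerical feel for problems CS 2.8 and CS 2.17 before working them in full, the sketch below applies the stated 35% efficiency and the maximum-velocity correlation. The densities and total gas flow are assumed illustrative values, not case-study data.

```python
import math

# --- CS 2.8: coal feed implied by 500 MW output at 35% efficiency ---
power_mw, efficiency = 500.0, 0.35
heating_value = 13_240                        # Btu/lbm dry coal (Table CS 2.1)
heat_fired = power_mw * 3.412e6 / efficiency  # Btu/h into the boiler
dry_coal = heat_fired / heating_value         # lbm/h
wet_coal = dry_coal / (1 - 0.0458)            # 4.58 lbm moisture per 100 lbm wet coal
print(f"Wet coal feed: {wet_coal:,.0f} lbm/h")

# --- CS 2.17: v_max = 0.15 * ((rho_L - rho_G) / rho_G) ** 0.5 ---
rho_L, rho_G = 62.0, 0.055    # lbm/ft^3, assumed slurry and stack-gas densities
v_max = 0.15 * math.sqrt((rho_L - rho_G) / rho_G)   # ft/s

gas_flow = 2.0e6 / 3600       # ft^3/s, assumed 2e6 actual ft^3/h of total gas
for trains in (1, 2, 3):
    area = gas_flow / trains / v_max            # required cross-section, ft^2
    diameter = math.sqrt(4 * area / math.pi)    # ft
    print(f"{trains} train(s): minimum column diameter = {diameter:.1f} ft")
```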
|
This summer, as the temperatures rise, be sure to pay attention to what your body is telling you. Heat exhaustion is a result of your body overheating. Though not as serious as heatstroke, which can be fatal, heat exhaustion can be scary. It could also lead to heatstroke if untreated.
The Causes of Heat Exhaustion
According to the Mayo Clinic, the most common causes of heat exhaustion are exposure to hot weather – particularly when combined with high humidity – and strenuous physical activity. Dehydration, which reduces your body’s ability to sweat, is another cause.
All three disrupt your body’s ability to cool itself efficiently, causing symptoms such as dizziness, fatigue, nausea, heavy sweating, fainting and more.
Athletes who train outdoors during the summer are prone to heat exhaustion, as are people under 4 years old and over 65 because they are less tolerant of heat.
How to Prevent Heat Exhaustion
To protect yourself and your family from heat exhaustion, follow these prevention tips from the Mayo Clinic.
- Wear loose, lightweight clothing.
- Avoid sunburn by wearing sunglasses with UV protection and wide-brimmed hats, as well as applying broad-spectrum sunscreen with an SPF of at least 15.
- Stay hydrated by drinking plenty of fluids.
- Avoid strenuous physical activity in hot weather. Do it indoors if you can or schedule it for the cooler times of the day (usually morning or evening).
- Be mindful of medications that may increase your risk of having heat-related problems.
- Don’t leave anyone – pet or person – in a parked car for even the smallest amount of time. The temperature in a parked car can rise about 20 degrees in 10 minutes. Cracking the windows or parking in a shady area won’t keep the temperature down. Even a seemingly harmless, 70-degree day could quickly become dangerous.
If you or a loved one are experiencing symptoms of heat exhaustion, stop all activity, move to a cooler place and drink cool water or sports drinks with electrolytes. Contact your doctor if symptoms worsen or don’t improve within one hour.
Have any tips for staying cool during summer activities? Tell us in the comments below.
|
Weeping Willow Planting Instructions
The graceful weeping willow is an attention-getter in any landscape. Its flowing, drooping branches cascade around the tree and sweep the ground, making weeping willows popular as showy specimens or grouped for shade and privacy. The tree is native to central Asia and China; the story of the weeping willow’s origin in Babylon is referenced in the tree’s scientific name, Salix babylonica. Weeping willows are often found growing near water in U.S. Department of Agriculture plant hardiness zones 2 through 9.
Weeping willows are easily propagated from stem cuttings. Cuttings should be at least two feet long, cut at the base and taken from mature weeping willows when the trees are dormant, after leaves fall in autumn and night temperatures are consistently below 32 degrees Fahrenheit. Cuttings can be placed directly into soil in late winter or early spring. Keeping the soil moist throughout the growing season allows the cutting to develop healthy roots.
Weeping willows are fast-growing trees, adding up to 10 feet per year when young. They typically grow 30 to 50 feet tall and spread as wide, so they need a planting site where they will have adequate room. Their aggressive roots can spread out farther than the tree is tall, often along the surface of the ground, and they are attracted to water. Because of this, it's best to avoid planting weeping willows near water lines, sewer lines, septic tanks or sidewalks.
Soil, Light and Water Requirements
Weeping willow trees prefer to be planted in rich, moist soil but do tolerate a wide variety of soil types, from sandy loam to clay, acidic or alkaline, as long as the soil doesn’t drain too quickly. They are drought tolerant but need regular watering in dry conditions or they will lose some leaves. Weeping willow trees grow well in full sun to partial shade.
Pests and Diseases
Weeping willows can be plagued by gypsy moths, caterpillars, aphids, scales and borers. Chemical and organic pest controls are available at garden centers. Weeping willow trees are prone to canker diseases, often caused by fungi. Infected branches should be pruned out and leaf debris cleared away, but once the trunk is infected, the tree will likely die. However, weeping willows are resistant to black canker.
- Furman (S.C.) University: Weeping Willow
- University of Nevada Cooperative Extension: Tips for Successfully Planting Willows in a Riparian Area
- New Mexico State University: Starting willow trees from stem cuttings
- Washington State University Extension: Garden Tips: Worst Trees to Plant
- Arbor Day Foundation: Weeping Willow
KW Schumer is an award-winning newspaper editor, reporter and writer with more than 15 years of experience working for large, mid-sized and community newspaper companies. She also writes the food blog Chef HJ's Table with her husband, a professional chef and the director and chef-instructor of a culinary school.
|
By now, I think all educators can agree that phonics instruction is key in teaching kids to read. Decoding strategies should be phonics-based to ensure kids focus on the print first, rather than pictures or context clues. This will help them later on when pictures disappear with more advanced text.
I’ve written about decoding strategies before and want to share one of my favorite ones with you: chunking words with the Chunky Monkey reading strategy.
Sounding a word out letter by letter is a helpful decoding strategy and an important stage in reading. Read here how I help kids with step-by-step decoding.
The next step is recognizing chunks and using them to help with decoding words. Reading chunks can not only help kids decode a word faster than sounding out letter by letter, but it can help them decode more accurately (ex: recognizing digraphs, vowel teams, silent e words).
WHAT IS A CHUNK?
A chunk is any part of a word made up of more than 1 letter. A chunk can be a small word inside a bigger one, a digraph or blend, vowel team, suffix, word family, etc.
WHY IS CHUNKING HELPFUL?
When chunking, kids are still paying attention to all of the letters in a word, which is important to orthographically map out the word and store it in memory. But with chunking, they are recognizing that some letters work together to make one sound or a group of sounds in a word. They are more likely to blend together a long word by chunking than sounding out letter by letter.
For example, sounding out the word checking is made easier by recognizing the digraphs ch and ck and the suffix ing and reading those as parts rather than letter by letter.
Also, looking at several letters at a time is helpful because sometimes a vowel sound is dictated by the letter(s) that come after it. For example, in the word for, the o’s sound will change because it’s followed by “bossy r” (r-controlled vowel). Similarly, the o sound in stone will change because of the silent e.
WAYS TO TEACH AND PRACTICE CHUNKING
Start with a story. For example, tell kids how you were in a rush the other day and climbed a flight of stairs 2 at a time. This helped you get to the top a bit faster. Although each step was necessary to get you there, you could put 2 together at a time to get there faster. This reminded you of chunking words. Each letter in a word is important but combining letters to recognize parts may help you decode a word faster.
Write the word mat on the board and remind kids that sounding out each letter is an effective decoding strategy. This method works for some words but it’s not the most effective for longer words. Write a longer word, such as sheep. Model sounding out letter by letter, then by recognizing (and circling) the digraph sh and the vowel team ee. Now, blend the word parts together to read accurately.
Repeat with a word like called and point out that recognizing the suffix ed can help you by adding it to the base word. Recognizing the word chunk all can also help kids know that the a won’t make its short a sound. Now, they simply blend the parts together to read the word.
After introducing how chunks help, you can practice in many ways!
Here are some of my favorite ways to practice this strategy and they’re all included in my chunky monkey strategy pack.
Chunky Monkey Reading Strategy Pack
Like all of my decoding strategy packs, this one has a slideshow to introduce the strategy. I find kids are much more engaged when they can interact with a slideshow (plus it keeps me on track with my teaching!).
The slideshow starts with a teaching slide.
Then, I included three levels of practice so you can differentiate.
Next, students read sentences and find words and chunks.
The sentence appears without the picture first. The picture appears upon clicking.
The slideshow ends with a little song to help kids remember the strategy! Make sure to grab this freebie at the bottom of this post!
Independent Practice of the Strategy
After introducing the strategy and practicing together, I always like to have kids practice on their own. I model using a “chunky monkey chunk finder” with a big book or charted poem, then I give them each a small chunk finder and certificate to complete as they read. They read their own books with the chunk finder and jot down a word the strategy helped them read.
Chunky Monkey Practice Activities
Then, we work on practice activities during centers and small-group instruction. Here are some of our favorites.
Kids look for the small words in the big-word cards and complete the sentences. I copy these 2-sided and ask them to Read the Room: complete the other side by looking for chunks inside words around the room.
There’s something about adding a spinning element that makes anything fun. I like to use these transparent spinners because they are so easy, but if you don’t have them, a simple paper clip and pencil will do. Place the small end of the paper clip on the center dot and hold it in place with the tip of a pencil as you spin it.
These Build-a-Word pages are so much fun! Kids cut apart the word chunks and paste them where they complete each word.
These skill cards are a recent fun addition to my Chunky Monkey Reading Strategy Pack! There are several different activities for plenty of practice during small-group instruction. I love that they fit nicely into these plastic task card boxes and I included labels for organization.
Recently, I updated my entire Chunky Monkey Reading Strategy bundle and added digital components to make it easier to use during distance learning, or with technology in the classroom! There are 8 digital activities made for use with Google Slides.
I also love the book There’s an Ant in ANThony by Bernard Most (affiliate link). It helps kids see that although they may see a word or chunk inside a larger word, it may not make the sound(s) they expect, as in Anthony and elephant. I find this so helpful in emphasizing that we have to be flexible readers. When one strategy doesn’t work, try another!
Word chunking with the Chunky Monkey is my favorite reading strategy to teach because it helps kids transition from reading letter by letter to recognizing bigger chunks and using those to decode words faster. Plus, I just love when kids start finding chunks everywhere! In the hallway, during the read-aloud of a big book, around the classroom, your kids will be pointing out little words and chunks all over!
Learning to chunk words not only helps with reading, but it will also help with spelling. When kids learn about chunks, they can associate new words with the same chunks which can help them with spelling.
Make sure you download the freebie below! For more reading strategies, read how I teach decoding strategies here.
You can grab the Chunky Monkey Reading Strategy pack from my TpT shop here.
|
Wildfires have always been a natural part of forest and grassland ecosystems, but in recent years they have been getting bigger, more frequent, and more destructive. If you’re a landowner in a fire-prone area, it’s important to be aware of what you can do to reduce the risk of catastrophic loss and promote natural resource recovery in the event a fire does strike.
You may not be able to prevent wildfire from affecting your land, but you can minimize the risk of damage to your home by creating a defensive zone that’s less likely to go up in flames.
Reduce the amount of fuel within the defensible area by choosing plants with a high moisture content, such as grasses, flowers, and other herbaceous plants. Avoid flammable plants, such as juniper and sagebrush, and keep vegetation low to avoid creating “fire ladders”—plant arrangements where fire can easily jump from the ground to higher levels. Thin out dense shrubs and stands of trees to reduce the amount of concentrated fuel. Deciduous trees tend to contain more moisture and fewer flammable chemicals than evergreens.
Water plants well to maximize their resistance to fire, and keep your defensive zone clean by removing dry branches and dead vegetation. Consider adding hardscaping, such as patios, paved or gravel walkways and driveways, rock mulch, and boulders.
How quickly your land recovers after a wildfire will depend on a variety of factors, including climate, vegetation, soil type, available water, and the intensity of the fire. High-intensity fires destroy almost everything in their path, from ground level to the forest canopy, whereas low-intensity fires typically burn along the forest floor, perhaps singeing or scorching lower branches or small trees, but leaving the canopy undamaged, along with some of the ground cover.
Low-intensity fires can benefit plant communities by recycling nutrients and thinning out dense trees and underbrush, reducing the amount of flammable material, and minimizing the risk of future high-intensity fires. In contrast, high-intensity fires kill all the vegetation, leading to severe damage, especially from erosion. Under normal circumstances, the soil surface is protected by both standing vegetation and a layer of decaying plant litter that minimizes evaporation, absorbs rainfall, and reduces runoff. By removing ground cover, high-intensity wildfires leave the soil unable to absorb water and susceptible to erosion by wind and rain, which, in turn, can cause flash flooding and contamination of waterways.
Recovering from wildfire
As soon as possible after the fire, assess your land for damage by evaluating tree mortality and soils, and making particular note of any conditions that have the potential to cause further damage, such as flooding, erosion, or fire reoccurrence. Mature stands of forest may take decades to return to pre-fire size, whereas grasslands can often be returned to productivity within a year.
There are several erosion-control measures that can be taken during the first few years after a fire in order to protect the productivity of the land. The most important thing is to protect the soil surface from the impact of rain and improve its ability to absorb water in order to prevent runoff. Effective treatments include covering soil with mulch and planting it with grass or seedlings that will grow quickly and hold the soil in place. Slopes are at the greatest risk of erosion, so prioritize seeding in these areas, using a cover of mulch to keep seeds in place until they can germinate.
Spreading tree limbs, branches, or logs on bare soil can reduce rain impact and soil runoff. Other barriers that can be used to protect soil from erosion include landscape fabrics, straw bales, and straw wattles (tubes of plastic netting packed with straw or other biodegradable materials).
After assessing fire damage to forest land, you will need to decide whether or not to harvest burned trees. The potential for tree recovery will depend on a variety of factors, including the degree of damage to the crown, foliage, buds, bark, and roots.
Salvage logging can help recover some financial value from burned trees and improve the health of the remaining forest. Fire-damaged trees are at an increased risk of infestation by insects such as bark beetles and can serve as a conduit for problems to spread to healthy trees. Standing dead trees are potential fuel for future wildfire outbreaks, but removing too many trees after a fire can increase the risk of erosion and water quality issues. Leaving some dead trees in place benefits forests by providing wildlife habitat and returning nutrients and organic matter to the soil as they decay.
It’s best to hire a professional forester to evaluate your forest health and provide advice regarding salvage harvesting. This should be done soon after a fire since fire-damaged trees lose commercial value quickly through decay and fungal infections brought by insect pests. By the third year after a fire, burned trees have usually lost most of their value.
Depending on the damage, forests may regenerate naturally or may require active management to promote healthy new growth. Tree planting may be necessary where fire intensity was high.
Unlike forests, grasslands tend to recover rapidly after a wildfire, in part because fires typically move over such plants very quickly, minimizing their exposure to high temperatures. Most grassland species are adapted to fire and will recover with minimal management, provided the land is healthy and not stressed by overgrazing.
Annual broadleaf weeds are the first to re-colonize after a fire, providing essential soil cover that helps prevent erosion. Grasses will typically return in the spring following the fire once moisture conditions improve. As the land recovers, monitor it regularly to make sure troublesome weeds do not take over. If the burned pasture was not a productive one, reseeding may be necessary.
To allow burned grassland to regain its productivity, allow it to rest from grazing for a full growing season following a fire (longer in cases of severe damage). Returning livestock to a pasture too soon will put the land at risk from further damage. Avoid using heavy equipment that will compact burned and exposed soil.
If your property has been affected by fire, you may be eligible to deduct certain losses on your taxes, depending in part on whether your land is classified as personal property or an investment with profit intent. You may qualify for a casualty loss deduction or an investment tax credit for planting and reforestation costs.
However, the costs associated with collecting the data required to calculate the loss may exceed the tax benefits for which you qualify. It’s best to consult a specialized tax professional to help evaluate the situation and determine whether it is worth claiming the loss or expenses on your tax return.
|
By Dr. Murli Dharmadhikari and Tavis Harris
Note: This article has been written at the request of the industry. It is written for wine lab workers with no background in chemistry.
In a wine laboratory, analyzing wine for TA, VA and SO2 involves the use of a sodium hydroxide (NaOH) reagent. Winemakers usually buy sodium hydroxide solution of a known concentration (usually 0.1 Normal). This reagent is relatively unstable and its concentration changes over time. To ensure the accuracy of analytical results it is important to periodically check the concentration (Normality) of sodium hydroxide. If the concentration has changed then it must be readjusted to the original concentration or the new concentration (Normality) value needs to be used in calculations.
Sometimes a winemaker may wish to make his/her own NaOH solution instead of buying it. Whether making a new solution or checking the normality of an old solution it is important to know the procedure for making a standard (known concentration) solution of NaOH reagent. In the present article the standardization procedure along with the basic concept behind the titration procedure are explained.
Expressing concentration in solution
A solution consists of a solute and the solvent. Solute is the dissolved substance and solvent is the substance in which the solute is dissolved. A solute can be a solid or a liquid. In NaOH solution, sodium hydroxide (solid) is the solute and water (liquid) is the solvent. Note that the solute being a solid is measured in terms of weight (in grams) and the solvent water is measured in terms of volume. This is an example of expressing solution in weight per volume (w/v) basis.
In a solution consisting of two liquids the concentration is expressed in a volumes per volumes basis. For example the concentration of alcohol in wine is expressed as volume per volume. A 12% alcohol wine means it contains 12 ml of alcohol per 100 ml of wine.
Generally, in many solutions, the weight is given in grams and volume is given in milliliters or liters. At this point, it is important to establish the relation between the units of weight and volume. One kilogram (weight) of water at a temperature of maximum density and under normal atmospheric pressure has the volume of one liter. This means that one kilogram (weight) of water equals one liter of volume, and one gram of water by weight equals one milliliter of water by volume. Thus the units of weight (gram) and volume (ml) are similar and interchangeable.
The chemist expresses the concentration of a solution in various ways. The common expressions include Percent, Parts per million (ppm), Molar and Normal. It is important to have a clear understanding of these terms.
One of the simplest forms of concentration is the percent. This simply means units per 100 units, or parts per 100 parts. The percent concentration can be used in three ways. It can be weight per weight, volume per volume or weight per volume basis.
When winemakers use a °Brix hydrometer to measure sugars in grape juice they are essentially measuring grams of sugar per 100 grams of juice. A juice sample of 18 °Brix means 18 grams of sugar per 100 grams of juice, commonly referred to as 18%. In describing the alcohol content of a wine, percent alcohol content is expressed on a volume per volume basis. In many cases, including in a laboratory, a solution is made by dissolving a solid in a liquid, usually water. In such a case the concentration is expressed on a weight per volume basis.
Parts per million
When dealing with a very small amount of a substance in solution, the concentration is often expressed in terms of parts per million. A 20 ppm concentration means 20 parts of solute dissolved for every 1,000,000 parts of solution. The unit of measurement can be weight or volume. Generally the ppm concentration is used to indicate milligrams of solute per liter of solution.
A molar solution implies concentration in terms of moles/liter. One molar (1 M) solution means one mole of a substance (solute) per liter of solution. A mole means gram molecular weight, or the molecular weight of a substance in grams. So the molecular weight of a chemical is also its molar weight. To calculate the molecular weight one needs to add the atomic weights of all the atoms in the molecular formula unit. For example the molecule of NaOH consists of one atom each of sodium (Na), oxygen (O), and hydrogen (H). Their respective atomic weights are Na = 23, O = 16 and H = 1, so the molecular weight is 23 + 16 + 1 = 40. Thus 40 grams of NaOH equals one mole of NaOH, and a 1 molar solution of NaOH will contain 40 grams of the NaOH chemical per liter.
The other form of concentration used relatively frequently is normality, or N. Normality is expressed in terms of equivalents per liter, which means the number of equivalent weights of a solute per liter of a solution. The term normality is often used in acid-base chemistry. The equivalent weight of an acid is defined as the molecular weight divided by the number of reacting hydrogens of one molecule of acid in the reaction.
Understanding equivalents requires knowing something about how a reaction works, so let's start there. Below is a basic equation for an acid and a base.
HCl + NaOH → NaCl + H2O
Acid + base → salt + water
In our simple equation above you can see we have the acid and base reacting to form a salt and water, and that they react equally. The acid gives 1 H+ for every -OH given by the base. So for every mole of H+ one needs a mole of -OH. This reaction is one-to-one on a molar basis: one mole of acid has one reacting unit and one mole of base also has one reacting unit, so in the example above acid and base have equal (1:1) reacting units. As stated above, for acids we define an equivalent weight as the molecular weight divided by the number of H+ donated per molecule. Above, the HCl gave up 1 H+ (proton) to the reaction.
Equivalent weight of H2SO4 = molecular weight / # of protons given = 98.08 g / 2 protons = 49.04 grams per equivalent
Normality is the number of equivalents in a given volume of solution: the grams of solute divided by the grams per equivalent gives the number of equivalents, and equivalents per liter gives the Normality. For a 1 N solution we need 1 equivalent/liter. For hydrochloric acid (HCl) the equivalent weight is 36.46 grams. Therefore, for making a 1 Normal solution, 36.46 g/liter of HCl is needed. Note that a 1 M solution is also 36.46 g/L. For molecules that can give off or accept only one proton per molecule, the Normality is equal to the Molarity.
Table 1. Molecular and Equivalent weights of some common compounds.
In the case where a molecule can give off or accept more than one proton, you need to adjust your calculation. For example, sulfuric acid with a formula of H2SO4 donates 2 separate protons. Using the molar mass of sulfuric acid, and knowing that one molecule can donate 2 protons we can find the equivalent weight.
With a molar mass of 98.08 grams, a solution containing 98.08 g in 1 liter would have a Molarity of 1 M and a Normality of 2 N. This is because every 1 mole of sulfuric acid (H2SO4) carries 2 moles of H+.
Table 1 lists the molecular weights and equivalent weights of important acids and bases used in a wine laboratory.
Making 1 N solution of NaOH
From the discussion above, it should be clear that to make a 1 Normal solution we need to know the equivalent weight of NaOH, which is calculated by dividing the molecular weight by 1, that is, 40 divided by 1 = 40. So the equivalent weight of NaOH is 40. To make a 1 N solution, dissolve 40.00 g of sodium hydroxide in water to make the volume 1 liter. For a 0.1 N solution (used for wine analysis) 4.00 g of NaOH per liter is needed.
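The same arithmetic can be generalized in a few lines of code. This is a minimal sketch; the function name and the example inputs are our own, for illustration.

```python
# Grams of NaOH to dissolve for a target normality and volume.
# Equivalent weight of NaOH = molecular weight / 1 reacting unit = 40.00 g/equiv.
EQ_WT_NAOH = 40.00

def naoh_grams(normality: float, liters: float) -> float:
    """Mass of NaOH (g) needed for the requested solution."""
    return normality * EQ_WT_NAOH * liters

print(naoh_grams(1.0, 1.0))   # 40.0 g for 1 liter of 1 N
print(naoh_grams(0.1, 2.0))   # 8.0 g for 2 liters of 0.1 N, as used in wine analysis
```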
Before we begin titrating that wine sample we have one more important step, standardization of NaOH solution. Standardization simply is a way of checking our work, and determining the exact concentration of our NaOH (or other) reagent. Maybe our dilution was inaccurate, or maybe the balance was not calibrated and as a result the normality of our sodium hydroxide solution is not exactly 1 N as we intended. So we need to check it. This is achieved by titrating the NaOH solution with an acid of known strength (Normality). Generally 0.1 N HCl is used to titrate the base. The reagent, 0.1 N HCl solution, is purchased from a chemical supplier that is certified in concentration. That means it was standardized to a base of known concentration. "But isn't that going in circles?" you ask. No, because acids are standardized to a powdered base called KHP, or potassium hydrogen phthalate. This can be very accurately weighed out because it is a fine powder, and then is titrated with the acid.
To standardize NaOH, start by pipetting 10.0 ml of 0.1 N hydrochloric acid (HCl) into a flask. Add approximately 50 ml of water (remember, not tap water) and three drops of methyl red indicator. Fill a 25 ml buret with the 0.1 N sodium hydroxide solution and record the initial volume. Titrate the hydrochloric acid to the point at which a lemon yellow color appears and stays constant. Record the final volume.
Subtract the initial volume from the final to yield the volume of NaOH used, and plug that into the equation below.
Normality of NaOH = (Volume of HCl × Normality of HCl) / Volume of NaOH used
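As a worked example (the buret readings here are invented for illustration): if 10.0 ml of 0.1 N HCl is neutralized by 9.80 ml of NaOH, the normality is (10.0 × 0.1) / 9.80 ≈ 0.102 N. In code:

```python
def naoh_normality(v_hcl_ml: float, n_hcl: float, v_naoh_ml: float) -> float:
    """Normality of NaOH from titration against standardized HCl."""
    return v_hcl_ml * n_hcl / v_naoh_ml

# Invented example readings: 10.0 ml of 0.1 N HCl took 9.80 ml of NaOH
# (final minus initial buret volume).
print(f"{naoh_normality(10.0, 0.1, 9.80):.3f} N")   # -> 0.102 N
```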
Before conquering volumetric analysis totally, we need to discuss some titration techniques. First of all, handle the buret with care. Avoid damaging the tip and stopcock assembly because damage and leaks in these areas can and will alter performance. Also, be sure to always record your final and initial volume readings accurately by reading the bottom of the meniscus of the solution. Don't try to squeeze in that last sample and drain the buret past its lowest mark; take the time to refill it properly. For help in reading a buret, take a white index card and color a black square on it. Hold this card behind the buret scale when taking readings to aid in seeing the meniscus. Some burets actually come with a stripe painted on them for this reason.
Next, remember to stir your sample as you titrate. Whether using a stir plate (recommended) or stirring by swirling the flask manually, it is imperative that the solution be mixed. Be sure not to slosh the sample outside of the beaker/flask and don't allow the buret's contents to fall outside of the beaker. Also, lower your buret enough so that splatter from the sample does not exit the flask as you titrate. This is not only bad lab practice but can also be dangerous.
Safety is an important consideration when working with burets, acids and bases. Realize that you are handling corrosive chemicals and delicate glassware, treat it like an irreplaceable wine in the daintiest glass. That means deliberately and with respect. Wear safety glasses and a labcoat at least, and gloves are also recommended. When filling a buret, take it out of the stand and hold it at an angle with the tip above the sink. That way any spills will drain into the sink and you can stand safely on the floor, not a stool. Leaning over the buret while it is on the benchtop is dangerous.
Be sure to have access to an eyewash station or something that can supply a stream of water to your body and/or eyes for 15 minutes, the OSHA recommended treatment for chemical spills to the eyes and body. Remember you will have sodium hydroxide in the buret at and above eye level so make sure your equipment is attached to a steady base.
Good laboratory practices can help you monitor the quality of your wines more accurately and efficiently. Volumetric analysis by titration is one of the most common techniques the winemaker employs to analyze his product. Improving your skills in this area is important in the quest for excellent wines on a consistent basis.
*Previously published in Vineyard and Vintage View, Mountain Grove, MO.
|
The hepatitis A virus (HAV) is present in all parts of the world with very different consequences: In developed countries, infections occur only very rarely. In countries with a shortage of clean water and poor sanitation, the virus can lead to epidemics with countless infections. The virus is robust and survives even high temperatures and disinfectants. Around the globe, an estimated 1.4 million people are infected with hepatitis A each year. In Switzerland, 60 to 100 cases of hepatitis A were reported every year over the last few years. Because only some of the infected fall ill, the number is probably up to four times as high. An acute hepatitis A infection always leads to lifelong immunity. It never develops into a chronic illness.
Because the hepatitis A virus is eliminated via the digestive system, it is transmitted mainly through contaminated drinking water or food. Mussels or vegetables, which have been fertilised with faecal manure, are particularly dangerous. Further ways of transmission are close personal contact, in particular sexual contact (especially men who have sex with men), or inadequate hand hygiene. Infection occurs most often when travelling in countries with low hygienic standards. That is why hepatitis A is sometimes called “travellers’ hepatitis”.
Everyone who is not vaccinated against hepatitis A or has not gained immunity through a past infection can fall ill. Most at risk are people in developing countries with poor sanitation. In Switzerland, it was injecting drug users who typically became infected in the past. Today it is people who travel to tropical countries, the Mediterranean and Eastern Europe who are most likely to catch the virus. Men who have sex with men and employees who come into contact with faeces also face an increased risk of infection.
In small children, the infection usually does not present any symptoms. In 50 to 70 per cent of adults, an acute hepatitis A occurs, accompanied by nonspecific symptoms of the gastrointestinal tract. The incubation period is 15 to 50 days. Signs can be mild flu symptoms, fatigue, headaches and a slight fever, often accompanied by nausea, constipation or diarrhoea. Eyes and skin can become yellow. Most symptoms will subside after a few weeks. The infection is never chronic and always leads to lifelong immunity.
Raised levels of certain liver enzymes in the blood indicate a hepatitis A infection. Close examination of the hepatitis A virus antibodies shows if it is a recent infection or immunity from an old infection.
Only vaccination provides reliable protection against hepatitis A. The hepatitis A vaccine, as well as the combined hepatitis A and B vaccine, has been proven to be highly effective and safe. Two injections six months apart provide lifelong protection. Doctors recommend the vaccine before travelling to high-risk regions, for high-risk groups of people and for patients with chronic liver disease or another hepatitis infection.
You can reduce the risk of an infection with hepatitis A by avoiding contact with contaminated faeces and by maintaining the following hygiene standards when eating and drinking (especially in developing countries):
- Only drink bottled drinks
- Avoid ice cubes and ice cream
- Eat only fruit you peeled yourself
- Beware of salad, raw vegetables and seafood
- Wash your hands often and with soap, especially after using the toilet
When travelling, adhere to the motto: Cook it, boil it, peel it or leave it.
There is no antiviral therapy nor are there any other medicines available to treat hepatitis A. Recovery from an acute hepatitis A infection requires bed rest and can take several weeks or months of taking it easy. A balanced, low-fat, high carbohydrate diet and drinking sufficient fluids are important. Alcohol and liver-harming medicines should be avoided in order to not burden the liver further. Hepatitis A patients are not contagious, as long as the rules for hand hygiene are being followed when caring for or touching a patient.
|
What is the Role of the Judiciary?
The courts take action on a very large number of issues!
The judicial system of the country has the power to decide:
- That no teacher can beat students.
- Water disputes among states.
- The punishment given to people for their crimes.
To be more specific, the role of the judiciary can be divided broadly into:
- Dispute Resolution: The judicial system provides a mechanism for resolving disputes between citizens, between citizens and the government, between two state governments and between the centre and state governments.
- Judicial Review: As the final interpreter of the Constitution, the judiciary also has the power to strike down particular laws passed by the Parliament if it believes that these are a violation of the basic structure of the Constitution. This is called judicial review.
- Upholding the Law and Enforcing Fundamental Rights: Every citizen of India can approach the Supreme Court or the High Court if they believe that their Fundamental Rights have been violated.
|
Beavers are nature’s engineers, using their construction expertise to build huge dams that create large forest flood pools. Across the entire animal kingdom, only the African elephant can match the beaver’s power to shape its habitat, and together they are the only mammals capable of bringing down whole mature trees.
Leave it to beavers
The study compared two areas in southern Finland; Evo which has a resident beaver population and Nuuksio which does not. The two areas lend themselves particularly well to comparative analysis as they are very similar and both remain relatively untouched by human activity.
The results leave little room for ambiguity, with common teal pairs and broods clearly preferring areas where beavers are present. The researchers speculate that the flightless ducklings thrive in the beaver ponds as they offer a plentiful source of food in the form of insects. Adult common teals are more likely to move between ponds.
Data from just one beaver pond was enough to reveal the common teal ducklings’ preference, while the impact of beaver ponds on the behaviour of adult pairs became apparent upon larger-scale comparisons between beaver and non-beaver landscapes.
Beavering away at wetland restoration
Wetlands are ecological communities characterised by their diversity. They also offer important benefits to humans as they filter out pollutants and other impurities found in water and create habitats for young fish and broods of waterfowl. In the past century, anywhere between 60 and 90 per cent of all European wetlands have been lost, and there is now a huge need for their restoration.
The beaver is ideally suited to this important task: its activities serve to effectively restrict the flow of water and, as a highly adaptable animal, it is capable of adapting to a number of different habitat types – after all, the beaver used to be widespread before land was cleared for human use.
This study supports earlier findings suggesting that beaver reintroduction is an economically and practically viable method for wetland restoration.
Find out more:
- Original article published in Ibis: The effect of beaver facilitation on Common Teal: pairs and broods respond differently at the patch and landscape scales
- Blog post on the British Ornithologists’ Union website: Beavers facilitate Teals at different scales
|
There are four main types of disease affecting poultry: metabolic and nutritional diseases; infectious diseases; parasitic diseases; and behavioural diseases.
Metabolic and nutritional diseases
These are conditions caused by a disturbance of normal metabolic functions either through a genetic defect, inadequate or inappropriate nutrition or impaired nutrient utilisation. These include Fatty Liver Syndrome, Perosis (or slipped tendon), Rickets and Cage Layer Fatigue.
An infectious disease is any disease caused by invasion of a host by a pathogen which subsequently grows and multiplies in the body. Infectious diseases are often contagious, which means they can be spread directly or indirectly from one living thing to another. These include Avian Encephalomyelitis, Avian Influenza, Avian Tuberculosis, Chicken Anaemia Virus Infection (or CAV), Chlamydiosis, Egg Drop Syndrome (or EDS), Fowl Cholera (or Pasteurellosis), Fowl Pox, Infectious Bronchitis, Infectious Bursal Disease (or Gumboro), Infectious Coryza, Infectious Laryngotracheitis, Lymphoid Leukosis, Marek’s Disease, Mycoplasmosis, Necrotic Enteritis, Newcastle Disease and Salmonellosis.
Parasitic diseases are infections or infestations with parasitic organisms. They are often contracted through contact with an intermediate vector, but may occur as the result of direct exposure. A parasite is an organism that lives in or on, and takes its nourishment from, another organism. A parasite cannot live independently. These include Coccidiosis, Cryptosporidiosis, Histomoniasis, Lice and Mites, Parasitic Worms (or Helminths), Toxoplasmosis and Trichomoniasis.
Abnormal behavioural patterns can lead to injury or ill health of the abnormally behaving bird and/or its companions. These include Cannibalism (or aggressive pecking).
Diseases caused by Viruses
Big Liver and Spleen Disease
Chicken Anaemia Virus Infection (or CAV)
Egg drop syndrome (or EDS)
Inclusion Body Hepatitis (or Fowl adenovirus type 8)
Infectious Bursal Disease (or Gumboro)
Lymphoid Tumour Disease (Reticuloendotheliosis)
Marek’s Disease Virus or MDV
Runting/stunting and malabsorption syndromes
Viral Arthritis (Tenosynovitis)
Diseases caused by Chlamydia
Diseases caused by Mycoplasmas
Mycoplasmosis – MG (Mycoplasma gallisepticum; MG infection; Chronic Respiratory Disease)
Mycoplasmosis – MS (Mycoplasma synoviae; infectious synovitis)
Diseases caused by Bacteria
Fowl Cholera (or pasteurellosis)
Spirochaetosis (Avian Intestinal Spirochaetosis)
Tuberculosis (Avian Tuberculosis)
Diseases caused by Fungi
Moniliasis (Candidiasis; crop mycosis)
Diseases caused by Protozoa
Diseases caused by Internal Parasites
Diseases caused by External Parasites
Several types of louse (insect; plural – lice)
Stickfast flea (insect)
Several types of mite (acarid)
Diseases caused by Metabolic Disorders
Cage Layer Fatigue and Rickets
Fatty Liver Haemorrhagic Syndrome
Diseases caused by environmental factors
Cannibalism (or aggressive pecking)
The principles of poultry husbandry
There are a number of requirements by which animals should be managed so that the best performance is achieved in a way acceptable to those responsible for the care of the animals and to the community generally. These requirements are the keys to good management and may be used to test the standard of management of a poultry enterprise. These requirements are also called Principles.
The importance of each Principle changes with the situation and thus the emphasis placed on each may alter from place to place and from time to time. This means that, while the Principles do not change, the degree of emphasis and method of application may change. Every facet of the poultry operation should be tested against the relevant principle(s). The Principles of Poultry Husbandry are:
The quality and class of stock
If the enterprise is to be successful it is necessary to use stock known to be of good quality and of the appropriate genotype for the commodity to be produced in the management situation to be used. The obvious first decision is to choose a meat type for meat production and an egg type for egg production. However, having made that decision, it is then necessary to analyse the management situation and market to select a genotype that suits the management situation and/or produces a commodity suitable for that market. A good example is that of brown eggshells. If the market requires eggs to have brown shells, the genotype selected must be a brown shell layer. Another example would be to choose a genotype best suited for use in a tropical environment. The manager must know in detail the requirements of the situation and then select a genotype best suited to that situation.
The following are of major importance when considering the health, welfare and husbandry requirements for a flock:
Confine the birds
Confining the birds provides a number of advantages:
- Provides a degree of protection from predators
- Reduces the labour costs in the management of the birds
- Increases the number of birds that can be maintained by the same labour force
- Reduces the costs of production
- Better organisation of the stocking program
- Better organisation management to suit the type and age of the birds housed
Importantly, confining the birds at higher stocking densities also has a number of disadvantages, including:
- Increases the risk of infectious disease passing from one bird to another
- Increases the probability that undesirable behavioural changes may occur
- Increases the probability of a significant drop in performance
- Birds housed at very high densities can often attract adverse comments
Protection from a harsh environment
A harsh environment is defined as one that is outside the comfort range of the birds. In this context, high and low temperature, high humidity in some circumstances, excessively strong wind, inadequate ventilation and/or air movement and high levels of harmful air pollutants such as ammonia are examples of a harsh environment. Much effort is made in designing and building poultry houses that will permit the regulation of the environment to a significant degree.
It is the responsibility of those in charge, and responsible for, the day-to-day management of the birds that the environment control systems are operated as efficiently as possible. To this end, those responsible require a good knowledge of the different factors that constitute the environment and how they interact with each other to produce the actual conditions in the house and, more importantly, what can be done to improve the house environment.
A successful poultry house has to satisfy the welfare needs of the birds which vary with the class, age and housing system. Failure to satisfy these needs will, in many cases, result in lower performance from the birds. These needs include:
- The provision of adequate floor space with enough headroom
- The provision of good quality food with adequate feeding space
- The provision of good quality water with adequate drinking space
- The opportunity to associate with flock mates
- The elimination of anything that may cause injury
- The elimination of all sources of unnecessary harassment
The maintenance of good health
The presence of disease in the poultry flock is reflected by inferior performance. It is essential that the flock is in good health to achieve their performance potential. There are three elements of good health management of a poultry flock. These are:
- The prevention of disease
- The early recognition of disease
- The early treatment of disease
Prevention of disease
Preventing disease is a much more economical approach to health management than waiting for the flock to become diseased before taking appropriate action. There are a number of factors that are significant in disease prevention. These are:
1. Application of a stringent farm quarantine program:
- The isolation of the farm/sheds from all other poultry.
- The control of vehicles and visitors.
- The introduction of day-old chicks only onto the farm.
- The prevention of access to the sheds by all wild birds and all other animals including vermin.
- The provision of shower facilities and clean clothing for staff and visitors.
- The control of the movement of staff and equipment around the farm.
2. The use of good hygiene practices:
- The provision of wash facilities for staff, essential visitors and vehicles prior to entry.
- The use of disinfectant foot baths at the entry to each shed.
- The thorough cleaning and disinfection of all sheds between flocks.
- Maintaining the flock in a good state of well-being through good stockmanship, nutrition and housing.
3. The use of a suitable vaccination program.
4. The use of a preventive medication program.
5. The use of monitoring procedures to keep a check on the disease organism status of the farm, to check the effectiveness of cleaning and sanitation procedures, and to test immunity levels to certain diseases in the stock to check the effectiveness of the vaccination program.
The early recognition of disease
Early recognition of disease is one of the first skills that should be learned by the poultry flock manager. Frequent inspection of the flock to monitor for signs of sickness is required. Inspection of all the birds should be the first task performed each day, to monitor for signs of ill health, injury and harassment. At the same time, feeders, drinkers and other equipment can be checked for serviceability. If a problem has developed since the last inspection, appropriate action can be taken in a timely manner.
The early treatment of disease
If a disease should infect a flock, early treatment may mean the difference between a mild outbreak and a more serious one. It is important that the correct treatment is used as soon as possible. This can only be achieved when the correct diagnosis has been made at an early stage. While there are times when appropriate treatment can be recommended as a result of a field diagnosis, i.e. a farm autopsy, it is best if all such diagnoses are supported by a laboratory examination to confirm the field diagnosis and to ensure that other conditions are not also involved. When treating stock, it is important that the treatment is administered correctly and at the recommended concentration or dose rate. Always read the instructions carefully and follow them. Most treatments should be administered under the guidance of the regular flock veterinarian.
Nutrition for economic performance
Diets may be formulated for each class of stock under various conditions of management, environment and production level. The diet specification used to obtain economic performance in any given situation will depend on factors such as:
- The cost of the mixed diet
- The commodity prices, i.e. the income
- The availability, price and quality of the different ingredients
Maximising production is not necessarily the most profitable strategy, as the additional cost required to provide the diet that gives maximum production may be greater than the value of the increase in production gained. A lower quality diet, while resulting in lower production, may bring the greatest profit in the long term because of significantly lower feed costs. Also, the food given to a flock must be appropriate for that class of stock – good quality feed for one class of bird may well be unsuitable for another.
The following are key aspects in relation to the provision of a quality diet:
- The ingredients from which the diet is made must be of good quality.
- The weighing or measuring of all the ingredients must be accurate.
- All of the specified ingredients must be included. If one, e.g. a grain, is unavailable, the diet should be re-formulated. One ingredient is not usually a substitute for another without re-formulation.
- The micro-ingredients such as amino acids, vitamins, minerals and other similar materials should not be too old and should be kept in cool storage – many such ingredients lose their potency over time, particularly at high temperatures.
- Do not use mouldy ingredients – these should be discarded. Mould in poultry food may contain toxins that can affect the birds.
- Do not use feed that is too old or has become mouldy. Storage facilities such as silos should be cleaned frequently to prevent the accumulation of mouldy material.
The practice of good stockpersonship
The term “stockpersonship” is difficult to define because it often means different things to different people. However, it may be defined as ‘the harmonious interaction between the stock and the person responsible for their daily care’. There is no doubt that, under identical conditions, some stockpeople are able to obtain much better performance than others. The basis of good stockpersonship is the right attitude: a positive outlook, knowledge of the needs and behaviour of the stock under different circumstances and of management techniques, and a willingness to spend time with the stock so that adverse situations can be dealt with as they develop and stress kept to a minimum. The stockperson who spends as much time as possible with the stock from day old onward – moving among them, handling them and talking to them – will grow a much quieter bird that reacts less to harassment, is more resistant to disease and performs better.
The maximum use of management techniques
There are a number of different management techniques available for use by stockpersons that, while not essential for the welfare of the stock, do result in better performance. Examples of these are the regulation of day length, the management of live weight for age and of flock uniformity. The good manager will utilise these techniques whenever possible to maximise production efficiency and hence profitability of the flock.
The use of records
There are two types of records that need to be kept in a poultry enterprise:
Those required for financial management – for business and taxation reasons
Those required for the efficient physical management of the enterprise
For records to be of use in the management of the enterprise, they must be complete, current and accurate, and they must be analysed and then used in the decision-making process. Failure to use them means that all of the effort to gather the information will have been wasted and performance will not have been monitored. As a result, many problems that could have been fixed before they caused irreparable harm may not be identified until it is too late.
There are three important elements to good marketing practice:
Produce the commodity required by the consumer – this usually means continuous market research must be carried out to relate production to demand.
Be competitive – higher price is usually associated with good quality and/or specialised product. Therefore, it is necessary to relate price to quality and market demand and to operate in a competitive manner with the opposition.
Reliability – produce a commodity for the market and ensure that supply, price and quality are reliable.
The traditional methods of reducing microbial contamination in feed raw materials have been compromised within the EU recently with the ban on the incorporation of formaldehyde as a feed additive (PT4). Within the EU this is being linked to an increase in Salmonella isolations. With the potential for reduced chemical use to control microbial contamination in feed to be rolled out in other countries it is important that suitable alternative methods to control contamination are put in place.
One response by the industry to the ban on formaldehyde has been the introduction of new organic acid blends and novel compounds for feed sanitation. Organic acids are effective in reducing Salmonella challenges. However, they do not provide a clean, biosecure break in the mill and their performance is very reliant on formulation, the mixing performance in the mill and the time allowed for their action.
As legislation tightens and public awareness heightens, particularly for higher generation stock, the use of heat treatment for the decontamination of poultry feeds has become more popular, and heat treatment is now generally seen as an effective means of Salmonella destruction in raw materials.
The purpose of this document is to review the heat treatment of poultry feeds in terms of the specifications required to ensure decontamination and the types of equipment that can be successfully employed.
Heat treatment – objectives
There are many recommendations for the heat treatment of poultry feeds published in the literature. For broiler feeds, it has been reported that a moderate level of heat treatment, such as 80°C (176°F) for a two-minute retention time, is sufficient to kill Salmonella. In reality, it is likely to kill most of the Salmonella and damage any remaining Salmonella. By the time these damaged Salmonella start to repair and grow to reach an infective dose detectable by sampling, the broiler flock could be depleted and the flock considered negative. With breeder flocks, however, the Salmonella has a much longer time to grow to a detectable, infective dose level, and Salmonella outbreaks can occur later in the life of the flock. For these flocks, heat treatment temperatures for feed need to be higher for longer periods of time to ensure maximum Salmonella destruction.
So what are the specifications for successful heat treatment? Firstly, the incidence of Salmonella and the levels of contamination need to be considered. These will vary depending on the raw material(s). A review of the literature would generally tell us that the highest levels of Salmonella in crop-based raw materials (soya, rape meal and sunflower) would be in the region of 10⁵ per gram. To eliminate Salmonella, heat treatment needs to be sufficient to reduce this level to zero.
Secondly, there must be a balance between the need to destroy pathogenic organisms such as Salmonella and the effect of heat on, for example, vitamin inclusion in the finished feed, the levels of starch gelatinisation achieved, protein denaturation and other anti-nutritional factors.
Once the feed has been decontaminated it is equally important to ensure that measures are in place to prevent the re-contamination of feed after treatment. A feed mill using heat treatment to produce biosecure decontaminated feed must have distinct “dirty” (i.e., before heat treatment) and “clean” (i.e., after heat treatment) areas. Heat treatment is seen as a breakpoint in the feed mill production process where decontamination of the feed takes place at a boundary between the “dirty” and “clean” areas. The clean area must be constructed in a way that will protect the feed from being re-contaminated by the dirty area. Biosecure boundaries and procedures must be in place. This commonly includes separate staff for each area and filtered air, usually to HEPA standard, supplied to the clean area to prevent re-contamination after processing. A full hazard analysis critical control point (HACCP) plan is also required to set standards, provide monitoring and risk analysis of the clean feed, and monitor any deterioration in the mill structure or process conditions.
Ongoing work within Aviagen has established that heating at 86°C (187°F) for 6 minutes at 15 percent relative humidity is enough to destroy mesophilic bacterial populations at a level of 10⁵ per gram. Knowing this, it is possible to theoretically establish thermal processes (heat treatment) for different types of equipment. Different equipment will require different thermal processes to achieve the same level of reduction; because of this, it is important that new heat treatment equipment is validated for efficacy in destroying Salmonella.
Heat treatment effectiveness will also be affected by the moisture content of the raw materials, the quality of the steam (moisture content) being used during the process and how the feed passes through the equipment. The feed should pass through the heat treatment equipment on a first-in, first-out basis to ensure even heat treatment.
It should be noted that the heat applied to feed as part of the process of producing pelleted feed will not be enough to kill Salmonella and should not be considered as part of the heat treatment process.
Heat treatment technology
There are a range of different technologies available to achieve the desired standards of heat treatment described earlier (86°C/187°F for 6 minutes at 15 percent relative humidity); these are listed in Table 1.
In order to produce high-quality birds, backyard keepers should start their breeding plans by selecting candidates. This allows producers to evaluate the individuals in their flock and identify the strongest birds.
This will help backyard keepers produce high-performance poultry and will show them what traits to look out for and enhance with their pairings.
SOME RULES OF THUMB
The general rule for breeding is to select two candidate birds that have the traits you’re looking for. If you have two strong birds, their pairing will produce strong or above-average chicks. From there, monitor the chicks during grow out to make sure they don’t exhibit any defects. Once they reach breeding age, pairing those chicks with other strong birds will enhance the traits you’re looking for. Over time, the birds in your flock will start to become more uniform and look alike.
One of the most important aspects of successful poultry breeding is having a culling technique. According to NiceHatch incubators, it’s more about what you remove from your breeding stock than what you seek out. Breeders should be removing bad genes from their flocks and intensifying good traits in their birds. NiceHatch incubators tells listeners to familiarise themselves with the standards for the breed and make corrections if they identify traits that aren’t useful. If backyard breeders are trying to produce high-performance poultry, they need to be selective about the individuals in their breeding plan.
Once a year, keepers should evaluate the birds to see which ones are best suited for breeding. Keeping backyard birds can be expensive and time consuming – NiceHatch incubators tells listeners not to invest in sub-standard birds.
Things to avoid
“Never breed two birds with the same fault,” NiceHatch incubators says. This will only make the trait more pronounced in the chicks and lead to poorer results overall. NiceHatch incubators also warns listeners that breeding two birds with extreme but opposing qualities will not produce “normal offspring”. For example, breeding an overweight bird with an underweight hen will not produce normal-weight chicks. It will produce multiple underweight and overweight chickens.
In NiceHatch incubators’ view, breeders should put an emphasis on the birds’ vigour. He urges listeners to use chickens that “hustle around” as breeding candidates – don’t breed mediocre birds.
Both NiceHatch incubators and Schneider agree that if one of your birds has been sick in the previous year, it shouldn’t be a breeding candidate. In a similar vein, Schneider tells poultry keepers to avoid over-medicating their birds if they want to breed them.
In his experience, backyard keepers and poultry fanciers tend to give their birds antibiotics or other treatments if they exhibit any symptoms. He urges listeners to make sure that the birds are actually sick before medicating them. Symptoms like coughing, sneezing or dropped feathers can often be attributed to dust in the air, or the birds’ natural moulting process.
If owners are concerned about their birds’ health, Schneider recommends seeking the advice of a vet before administering medications.
Remember the 10 per cent rule
In NiceHatch incubators’ experience, for every 10 birds produced, only 1 is worth keeping. For every 100 birds, 10 will be good breeding stock. For every 1,000 birds, NiceHatch incubators says that 100 will be decent, and 1 will be an “absolute knock-out bird”.
If backyard keepers have been breeding poultry for a year or so but have lacklustre results, NiceHatch incubators urges listeners to keep this rule in mind. Building an outstanding backyard flock takes time and rarely happens with a “one and done” approach. It’s a circular operation – not a linear one.
However, he did emphasise that success in breeding doesn’t rely on a huge budget. It’s better that backyard keepers learn to apply their breeding skills and knowledge to their operations. “Anybody can buy a good bird, not everybody can breed, produce or grow out a good bird,” NiceHatch incubators says. “It’s part science, it’s part art and you have to love it.”
Choosing the right incubator can be a challenge. In this article, our experts at Nice Hatch Incubators simplify the process and take you through the key factors you should consider when choosing an ideal incubator that will work for you. It is important to note that, as with any other preference, each farmer’s ideal incubator needs may differ. Remember you can always contact us at any time for clarification or consultancy regarding your poultry challenges.
Egg incubator factors to consider
- Airflow in the incubator
Embryo development in the eggs requires oxygen. The hatching chicks are living organisms that not only consume a fair amount of oxygen rapidly but also produce a sizeable amount of carbon dioxide. Good airflow inside the incubator will therefore ensure this need is constantly met, providing the best egg-hatching conditions. A good incubator should have air vents that keep fresh air circulating. Some incubators contain inbuilt fans to accelerate airflow.
- Temperature Control of the incubator
Temperature regulation is an important condition in the egg-hatching process. The right temperature must be maintained at all times. Temperature regulation is critical since there may be weather changes as well as differences between day and night. The inside temperature of an incubator must be keenly observed, maintained and regulated to avoid fluctuation. Fluctuations in temperature will bring about poor egg-hatching rates and losses to the farmer. The common temperature control mechanisms in incubators include wafer thermostats and digital electronic control systems. Wafer-controlled incubators allow for more fluctuation in temperature and can contribute to more irregular hatches than electronically controlled incubators. Once the temperature in a wafer-controlled incubator is set, care must be taken to avoid accidental readjustment of the tuning knob or the adjusting screw. Temperatures for some electronically controlled incubators may have been preset for hatching chicken eggs by the manufacturer; you need to know what temperature is required for each type of poultry egg species and adjust accordingly.
It is recommended that you run the incubator for 24-48 hours before placing the eggs inside, to ensure that the optimum temperature is reached. Whenever in doubt, consult your manufacturer or Nice Hatch Incubators technicians.
- Humidity Control in the incubator
Egg hatching demands the correct amount of humidity throughout the incubation period. It is a requirement, therefore, that an incubator has moisture devices that enable this condition to be regulated to facilitate the development of egg embryos. These devices may be in the form of inbuilt troughs, external containers, removable trays, pans, or plastic liners, with either an automatic self-regulation system or a manual wet-bulb thermometer (hygrometer) that measures humidity levels. Read the incubator manufacturer’s instructions.
- Ability to Observe in the incubator
Some good models of incubators have transparent covers or observation windows that allow you to observe what is happening inside the incubator. This enables the farmer to keep track of the hatching process without having to open the incubator and disrupt the optimum conditions inside. The easier it is to observe inside, the better.
- Cleaning Ease of the incubator
Once the eggs have hatched, you will need to move the chicks to a brooder and clean up the incubator for the next hatching process. The easier it is to clean, the better.
- Cost of the incubator
Every venture has some costs to be incurred; your budget may determine your preferred choice of incubator.
Keeping turkeys is one of the greatest poultry farming choices, whether you are interested in small or large flocks. One key advantage of turkeys is that they can tolerate crowded conditions and still give you a maximum return on financial investment. Turkeys are increasingly becoming a dominant domestic bird in the East African region among peasant farmers with commercial ambitions. Commercial turkeys are reared for breast meat for the growing hotel industry and the affluent middle-class population. Turkeys provide inexpensive meat for a growing urban market eager to purchase it. Although they can be successfully raised in turkey “porches” and yards, they do best when they have range or pasture on which to forage. Poults can be raised in a poultry house on deep litter just like chickens. Avoiding contamination from droppings is essential. Wire mesh can effectively keep poults away from soiled litter in an enclosed environment.
How to get turkeys for start up
If you are a starter in poultry farming, www.nicehatchincubators.com recommends that you start in the least expensive way by buying day-old poults (chicks) from hatcheries or suppliers nearby. Alternatively, you can buy turkey hens and a gobbler (cock). Turkey eggs can be hatched naturally by turkey hens, by broody chickens, or in incubators. Turkey poults are fragile and need protection for the first two months. A mother will keep them warm and protect them, provided she herself has adequate feed, water, and shelter. The incubation period for eggs is 28 days. One turkey hen can brood up to 15 eggs. The more commercial and effective way is to brood artificially through the use of incubators.
Raising, feeding, watering turkeys
Turkey poults grow fast, so they need high-protein feed to keep up with this growth rate. Feed your poults on starter mash as they grow; after eight weeks their needs taper off and they can move to grower crumble or pellets with a lower percentage of protein. If turkeys are on pasture and not crowded, they will get some protein from the insects and worms they forage from your farmland.
Hanging feeders and waterers, adjusted to the height of the birds’ eyes as they grow, will reduce the amount of feed and water wasted as the birds dig around in them with their beaks. Sloppy waterers leave wet litter to ferment and foster disease-causing organisms. Feeders on the ground should not be filled more than half-full, to keep feed contained. Turkeys are perching birds that naturally roost in trees. Poults as young as two weeks old will look for a roost. They can be accommodated with 2-inch-diameter poles or branches several inches above the ground. Make an appropriate allowance of roost space per bird depending on the population you intend to have on your farm. Mature turkeys need stronger roosts that will support their weight and size. For mature turkeys you can use 2-inch-diameter poles, allowing at least 2 feet between each pole to give them ample room.
Turkeys do not require routine vaccinations; however, vaccines are available for several common diseases, including fowl cholera, turkey pox, and Newcastle disease. Check with local veterinarians to determine whether such protection is necessary in your area.
BEST EGG INCUBATOR COMPANY IN NAIROBI, KENYA
When incubating any bird egg it is important to control the same factors of temperature, humidity, ventilation, and egg turning.
1. Temperature
Temperature is the most critical environmental concern during incubation because the developing embryo can only withstand small fluctuations during the period. The embryo starts developing when the temperature exceeds physiological zero. Physiological zero is the temperature below which embryonic growth is arrested and above which it is reinitiated. The physiological zero for chicken eggs is about 75°F (24°C).
The optimum temperature for chicken eggs in the setter (for the first 18 days) ranges from 99.50 to 99.75°F, and in the hatcher (last 3 days) it is 98.50°F.
2. Humidity
Incubation humidity determines the rate of moisture loss from eggs during incubation. In general, humidity is recorded as relative humidity by comparing the temperatures recorded by wet-bulb and dry-bulb thermometers. Recommended incubation relative humidity for the first 18 days ranges between 55 and 60% (in the setter) and for the last 3 days ranges between 65 and 75%. Frequently there is confusion as to how the measurement of humidity is expressed. Most persons in the incubator industry refer to the level of humidity in terms of degrees F (wet-bulb) rather than percent relative humidity. The two terms are interconvertible, and the actual humidity depends upon the temperature (F) as measured with a dry-bulb thermometer. Conversion between the two humidity measurements can be made using a psychrometric table. Rarely is the humidity too high in properly ventilated still-air incubators. The water pan area should be equivalent to one-half the floor surface area or more. Increased ventilation during the last few days of incubation and hatching may necessitate the addition of another pan of water or a wet sponge. Humidity is maintained by increasing the exposed water surface area.
3. Ventilation
Ventilation is very important during the incubation process. While the embryo is developing, oxygen enters the egg through the shell and carbon dioxide escapes in the same manner. As the chicks hatch, they require an increased supply of fresh oxygen. As embryos grow, the air vent openings are gradually opened to satisfy increased embryonic oxygen demand. Care must be taken to maintain humidity during the hatching period. Unobstructed ventilation holes, both above and below the eggs, are essential for proper air exchange.
4. Turning of eggs
Eggs must be turned at least 4-6 times daily during the incubation period. Do not turn eggs during the last 3 days before hatching. The embryos are moving into hatching position and need no turning. Keep the incubator closed during hatching to maintain proper temperature and humidity. The air vents should be almost fully open during the latter stages of hatching. In a still-air incubator, where the eggs are turned by hand, it may be helpful to place an “X” on one side of each egg and an “O” on the other side, using a pencil. This serves as an aid to determine whether all eggs have been turned. When turning, be sure your hands are free of all greasy or dusty substances. Eggs soiled with oils suffer from reduced hatchability. Take extra precautions when turning eggs during the first week of incubation. The developing embryos have delicate blood vessels that rupture easily when severely jarred or shaken, thus killing the embryo.
5. Position of eggs
The eggs are initially set in the incubator with the large end up or horizontally with the large end slightly elevated. This enables the embryo to remain oriented in a proper position for hatching. Never set eggs with the small end upward.
In most African countries, poultry farmers practice poultry farming for:
- Home consumption
- Cultural reasons
- Income generation
Poultry farming is an income-generating project which provides quality food, energy and fertilizer, and is also a renewable asset. Income from poultry farming is used for food, school fees, buying clothing, constructing houses and unexpected expenses such as sickness and buying medicine.
Small-scale farmers in Africa face difficulties such as poor access to markets, goods and services. Some lack knowledge and skills in handling the birds, and contend with weak institutions and inappropriate technology. They also have poor poultry breeds and feeds, and poor structures for the birds. These factors affect the productivity of poultry farming and the quality of the breeds produced.
|
A team of German and Swiss scientists have detected the presence of microplastic particles in the snow of the Arctic and the Alps that seems to have been transported through the air to remote areas of the planet.
The study, carried out by the Alfred Wegener Institute, says these microplastic particles – less than five millimeters in size, and whose presence has repeatedly been documented in seas and animals – have also reached the snow.
The study was conducted by scientists at the German institute and at the Swiss WSL Institute for Snow and Avalanche Research SLF.
“The fact that our oceans are full of plastic litter has by now become common knowledge: Year after year, several million tonnes of plastic litter find their way into rivers, coastal waters, and even the Arctic deep sea,” the study said.
“Thanks to the motion of waves, and even more to UV (ultra violet) radiation from the sun, the litter is gradually broken down into smaller and smaller fragments – referred to as microplastic,” it explained.
Snow tests have been performed in different regions of Germany: in Bavaria and the northern coast, as well as in the Arctic and the Swiss Alps.
Until now, the presence of microplastic – one of the greatest threats to the environment and human health – has been studied in depth in rivers, seas and ocean sediments.
However, the possible transmission of microplastic particles through the atmosphere and its presence in the snow had been hardly analyzed, with the exception of some preliminary studies carried out on particles found in the Pyrenees and in French and Chinese urban centers, according to the study published by the German institute.
The highest concentrations of microplastic were detected in snow tests conducted along a Bavarian highway, at 154,000 particles per liter.
In the Arctic, by contrast, concentration levels stood at 14,400 particles per liter. EFE
|
Erosion is the process that occurs when soil and other land matter is disturbed by either human activity or natural conditions such as extreme weather. When land erodes, it is carried from its original location into streams and rivers, where it disrupts spawning areas, pollutes water, and reduces flood channel capacity. In addition to creating problems by its presence in streams, the land from which it originally came suffers from a lack of nutrients. Most eroded material is topsoil, which is necessary to sustain healthy plants. Once land erodes, it can take hundreds of years to reform naturally. Common human causes of erosion include poorly designed roads, inadequate drainage facilities, poor grading practices, no revegetation practices, and invasive plant species.
What Can I Do To Prevent Erosion?
Thankfully, there are several things you as a landowner can do to prevent erosion on your property. Below is a short list of erosion control tips to get you started:
Incorporate existing native vegetation into the landscaping plan for new developments.
Existing native vegetation requires the least care of any planting materials. Native plants require little or no watering or fertilizer and grow on difficult sites. Care should be taken in working around trees to prevent damage. Be sure to use native plants with roots at various depths to assist in stabilization. Though each site will be unique, consider incorporating plants that spread well or require less soil, e.g. bunchberry, sword fern, red-flowering currant, Pacific ninebark, nootka rose, and Oregon grape.
Plant grass seed or other vegetation before the fall rains begin.
Plant a grass/legume seed ground cover on all exposed areas and cut/fill slopes to create a vegetated buffer. Plant in fall, winter or early spring depending on the variety – make sure to check with the nursery providing vegetation for the best time to plant. On slopes greater than 20 percent use netting and straw mulch to hold the soil and prevent loss of grass seed while native plants are establishing. Straw mulch will provide erosion control and moisture conservation.
Do preserve trees, shrubs and ground cover in streamside areas.
Streamside vegetation can catch and hold sediment before it enters the stream. Roots of plants help hold the soil and reduce bank erosion. Streamside plants also provide food and shelter for wildlife as well as filter pollutants in stormwater runoff. Preserve streamside vegetation for its value in erosion control, wildlife habitat and pollution filtration.
Remove invasive plant species and replace with native plant species.
Many of the streams throughout Portland are being invaded by non-native invasive plant species like Himalayan blackberry and English ivy. These plants have weak root systems that do not provide ample erosion control. They also out-compete native plants and wreak havoc on our native ecosystems. Remove invasive plant species and replace them with a diverse stand of native plant species with varying root depths and densities for greater erosion control and wildlife habitat.
Adapted from Western Shasta Resource Conservation District (http://www.westernshastarcd.org/Erosion.htm)
|
This is the third post in a series about z-quads, a spatial coordinate system. The first one was about the basic construction, the second about how to determine which quads contain each other. This one is about conversions: how to convert positions to quads and back.
First, a quick reminder. A quad is a square(-ish) area on the surface of the earth. Most of the time we’ll use the unit square instead of the earth because it makes things simpler and you can go back and forth between lat/longs and unit coordinates just by scaling the value up and down. (There’s a section at the end about that.)
So, when I’m talking about encoding a point as a quad, what I mean more specifically is: given a point on the unit square (x, y) and a zoom level z, find the quad at zoom level z that contains the point. As it turns out, the zoom level doesn’t really matter so we’ll always find the most precise quad for a given point. If you need a different zoom level you can just have a step at the end that gets the precise quad’s ancestor at whatever level you want.
So far we’ve focused on quads as an abstract mathematical concept and there’s been no reason to limit how far down you can zoom. In practice though you’ll want to store quads in integers of finite size, typically 32-bit or 64-bit. That limits the precision.
Okay, now we’re ready to look at encoding. What we’ll do is take a unit point (x, y) and find the level 31 quad that contains it. Since we’ve fixed the zoom level we’ll immediately switch to talking about the scalar rather than the quad. That makes everything simpler because within one level we’re dealing with a normal z-order curve and conversion to and from those are well understood.
The diagram on the right illustrates something we’ve seen before: that scalars are numbered such that the children at each level are contiguous. In this example, the first two green bits say which of the four 2×2 divisions we’re in. The next two say which 1×1 subdivision of that we’re in. It’s useful to think of this the same way as the order of significance within a normal integer: the further to the left a pair of bits is in a scalar the more significant it is in determining where the scalar is.
Now things will have to get really bit-fiddly for a moment. Let’s look at just a single 2×2 division. Each subdivision can be identified with an (x, y) coordinate where each ordinate is either 0 or 1, or alternatively with a scalar from 0 to 3. The way the scalars are numbered it happens that the least significant scalar bit gives you the x-coordinate and the most significant one gives you y:
(This is by design by the way, it’s why the z-shape for scalars is convenient). This means that given x and y, each 0 or 1, you get the scalar they correspond to by computing
x + 2 y
What’s neat about that is that because of the way pairs of bits in the scalar increase in significance from right to left matches the significance of bits in an integer, you can do this for each bit of x and y at the same time, in parallel. To illustrate how this works I’ll take an example.
Let’s start with the point (2/5, 2/3) and find the level 5 scalar that contains it. First we’ll multiply both values by 32 (= 2^5) and truncate the result to an integer,
x = (2/5) × 32 ≈ 12 = 01100b
y = (2/3) × 32 ≈ 21 = 10101b
Now we’ll perform x + 2 y for each bit in parallel like so,
Looks bizarre right, but it does work. We spread out the bits from each input coordinate so they can be matched up pairwise and then add them together. The result is the scalar we’re looking for because the order of significance of the bit-pairs in the scalar match the significance of the individual bits in each of the inputs.
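To make the worked example concrete, here is a minimal Python sketch of this encoding (the function name is illustrative, not from any published z-quad library). It builds the scalar one bit-pair per level using the x + 2y rule:

```python
def encode_scalar(x, y, zoom):
    """Scalar of the zoom-level quad containing the unit-square point (x, y)."""
    ix = int(x * (1 << zoom))  # scale to the 2^zoom grid and truncate
    iy = int(y * (1 << zoom))
    scalar = 0
    for i in range(zoom):
        xb = (ix >> i) & 1                    # i'th bit of x
        yb = (iy >> i) & 1                    # i'th bit of y
        scalar |= (xb + 2 * yb) << (2 * i)    # x + 2y, one bit-pair per level
    return scalar

print(encode_scalar(2/5, 2/3, 5))  # -> 626, matching the example above
```

The explicit loop makes the bit-pair logic visible; the parallel version discussed below replaces it with a handful of shift-and-mask steps.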
Now, how can you tell that the result, scalar 626 on zoom level 5, is the right answer? This diagram suggests that it’s about right:
The bit-pairs in the scalar are [2, 1, 3, 0, 2] so we first divide and take subdivision 2, then divide that and take 1, then divide that, and so on. Ultimately we end up at the green dot. In this case the last step would be to add the appropriate bias, b5, to get a proper quad and then we’re all done.
Because this conversion handles all the levels in parallel there’s little benefit to converting to a higher zoom level than the most precise, typically 31. If you want level 5 do the conversion to level 31 and then get the 26th ancestor of the result.
Most of the work in performing a conversion is spreading out the scaled 31-bit integer coordinates. One way to spread the bits is using parallel prefix which takes log2 n steps for an n-bit word,
That’s just one way to do it, there are others.
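As a sketch of what such a branch-free spread might look like, here is one possible formulation for 31/32-bit inputs using the standard Morton-code masks (any equivalent variant works):

```python
def spread(v):
    """Spread a 32-bit value so that bit i of the input lands at bit 2*i."""
    v &= 0xFFFFFFFF
    v = (v | (v << 16)) & 0x0000FFFF0000FFFF
    v = (v | (v << 8))  & 0x00FF00FF00FF00FF
    v = (v | (v << 4))  & 0x0F0F0F0F0F0F0F0F
    v = (v | (v << 2))  & 0x3333333333333333
    v = (v | (v << 1))  & 0x5555555555555555
    return v

def encode_scalar31(x, y):
    """Level 31 scalar of a unit point, handling all bits in parallel."""
    ix = int(x * (1 << 31))
    iy = int(y * (1 << 31))
    return spread(ix) + 2 * spread(iy)
```

Each step halves the distance the remaining bits have to travel, which is where the log2 n step count comes from.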
The operation above gets you into quad-land and obviously you’ll want to be able to get out again. This turns out to just be a matter of running the encoding algorithm backwards. The main difference is that here we’ll stay on whatever zoom level the quad’s on instead of always going to level 31.
Given a quad q this is how you’d decode it:
- Subtract the bias to get the quad’s scalar.
- Mask out all the even-index bits to get the spread x value and the odd-index bits to get the spread y value. Shift the y value right by 1.
- Pack the spread-x and spread-y values back together to get a zq-bit integer value for each. Packing can be done similarly to spreading but in reverse.
- Floating-point divide the integer values by 2^zq. You now have your (x, y) coordinates between 0 and 1.
A quad is an area but the result here is a point, the top left corner of the quad. This is often what you’re interested in, for instance if what you’re calculating is the full area of the quad. There you can just add the width and height of the quad (which are both 2^-zq) to get the other corners. Alternatively you might be interested in the center of the quad instead. To get it you can modify the last step of the algorithm to do,
Apply (2v + 1) / 2^(zq+1) as a floating-point operation to each of the integer coordinates.
This is just a circuitous way to add 0.5 to v before dividing by 2^zq but this way we can keep the value integer until the final division. If you want to minimize the loss of precision when round-tripping a point through a quad you should use the center since it’s the point with the smallest maximum distance to the points that belong to that quad.
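Putting the decoding steps together gives a sketch like the following (again illustrative; `pack` is simply the spread from the encoding section run in reverse, and the input is assumed to already be a scalar, i.e. the bias has been subtracted):

```python
def pack(v):
    """Inverse of spread: gather every even-index bit into a dense integer."""
    v &= 0x5555555555555555
    v = (v | (v >> 1))  & 0x3333333333333333
    v = (v | (v >> 2))  & 0x0F0F0F0F0F0F0F0F
    v = (v | (v >> 4))  & 0x00FF00FF00FF00FF
    v = (v | (v >> 8))  & 0x0000FFFF0000FFFF
    v = (v | (v >> 16)) & 0x00000000FFFFFFFF
    return v

def decode(scalar, zoom, center=False):
    """Unit-square top left corner (or center) of a scalar at a zoom level."""
    ix = pack(scalar)       # even-index bits hold x
    iy = pack(scalar >> 1)  # odd-index bits hold y
    if center:
        d = 1 << (zoom + 1)
        return (2 * ix + 1) / d, (2 * iy + 1) / d
    return ix / (1 << zoom), iy / (1 << zoom)

print(decode(626, 5))  # -> (0.375, 0.65625), the corner of the example quad
```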
So, now we can convert floating-point coordinates to quads and back again with ease. Well, relative ease. In the last part I’ll talk briefly about something I’ve been hand-waving until now: converting from lat/long to the unit square and back.
Latitude and longitude
When you encounter a lat/long it is typically expressed using the WGS 84 standard. The latitude, a value between -90 and 90, gives you the north-south position. The longitude, a value between -180 and 180, gives you east-west.
A typical world map will look something like the one on the right here with America to the left and Asia to the right. The top left corner, northwest of North America, is lat 90 and long -180. The bottom right, south of New Zealand, has lat -90 and long 180. You could convert those ranges to an (x, y) on the unit square in any number of ways but the way I’ve been doing it follows the z-order curve the same way I’ve been using in this post: North America is in division 0, Asia in division 1, South America in division 2, and Australia in division 3. The conversion that gives you that is,
x = (180 + long) / 360
y = (90 – lat) / 180
This puts (0, 0) in the top left corner and (1, 1) in the bottom right and assigns the quads in that z-shaped pattern.
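In code the round trip is a direct transcription of those two formulas (a small sketch; it assumes inputs are already within the WGS 84 ranges and does no clamping or wrap-around handling):

```python
def latlong_to_unit(lat, long):
    """WGS 84 lat/long -> unit square (x, y), with (0, 0) at the northwest corner."""
    return (180 + long) / 360, (90 - lat) / 180

def unit_to_latlong(x, y):
    """Unit square (x, y) -> WGS 84 (lat, long)."""
    return 90 - 180 * y, 360 * x - 180
```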
There’s a lot to be said about map projections at this point. I limit myself to saying that while it’s convenient that there’s a simple mapping between the space most commonly used for geographic coordinates and the space of quads, this equirectangular mapping really is a wasteful way to map a sphere onto a square. For instance, a full third of the space of quads is spent covering the Southern Ocean and Antarctica even though they take up much less than a third of the surface of the globe. I’d be interested in any alternative mappings that make more proportional use of the space of quads.
Okay, so now there’s a way to convert between WGS 84 and quads. I suspect there’s a lot more you can do with z-quads but this, together with the operations from the two previous posts, what I needed for my application so we’re at to the end of my series of posts about them. Well, except for one final thing: string representation. I’ve always been unhappy with how unmemorable geographic positions are so I made up a conversion from quads to sort-of memorable strings which I’ll describe in the last post.
In the meantime I made a little toy site that allows you to convert between positions and quads at a given zoom level. Here is an example,
This is quad 167159423 at zoom level 14 which covers, as always, Århus. The URL is
which kind of explains itself. You can also do
Finally to see where a particular quad is try
If you want to explore the space of quads you can play around with the parameters yourself.
|
Algae can draw energy from other plants
November 20th, 2012 in Biology / Biotechnology
The alga Chlamydomonas reinhardtii is a single-cell organism. However, it can do something that other plants cannot do as biologists at Bielefeld University have confirmed. Credit: Bielefeld University
Flowers need water and light to grow. Even children learn that plants use sunlight to gather energy from earth and water. Members of Professor Dr. Olaf Kruse's biological research team at Bielefeld University have made a groundbreaking discovery that one plant has another way of doing this. They have confirmed for the first time that a plant, the green alga Chlamydomonas reinhardtii, not only engages in photosynthesis, but also has an alternative source of energy: it can draw it from other plants. This finding could also have a major impact on the future of bioenergy.
The research findings have been released on Tuesday 20 November in the online journal Nature Communications published by the renowned journal Nature.
Until now, it was believed that only worms, bacteria, and fungi could digest vegetable cellulose and use it as a source of carbon for their growth and survival. Plants, in contrast, engage in the photosynthesis of carbon dioxide, water, and light. In a series of experiments, Professor Dr. Olaf Kruse and his team cultivated the microscopically small green alga species Chlamydomonas reinhardtii in a low carbon dioxide environment and observed that when faced with such a shortage, these single-cell plants can draw energy from neighbouring vegetable cellulose instead.
The alga secretes enzymes (so-called cellulose enzymes) that 'digest' the cellulose, breaking it down into smaller sugar components. These are then transported into the cells and transformed into a source of energy: the alga can continue to grow. 'This is the first time that such a behaviour has been confirmed in a vegetable organism', says Professor Kruse. 'That algae can digest cellulose contradicts every previous textbook. To a certain extent, what we are seeing is plants eating plants'. Currently, the scientists are studying whether this mechanism can also be found in other types of alga. Preliminary findings indicate that this is the case.
In the future, this 'new' property of algae could also be of interest for bioenergy production. Breaking down vegetable cellulose biologically is one of the most important tasks in this field. Although vast quantities of waste containing cellulose are available from, for example, field crops, it cannot be transformed into biofuels in this form. Cellulose enzymes first have to break down the material and process it. At present, the necessary cellulose enzymes are extracted from fungi that, in turn, require organic material in order to grow. If, in future, cellulose enzymes can be obtained from algae, there would be no more need for the organic material to feed the fungi. And even though it is now confirmed that algae can use alternative nutrients, water and light suffice for them to grow under normal conditions.
Provided by University of Bielefeld
"Algae can draw energy from other plants." November 20th, 2012. http://phys.org/news/2012-11-algae-energy.html
|
Introduction: Discover the IMPACT OF SPELLING DIFFICULTIES on learners who are struggling to achieve success
1. Explore VARIED AND INNOVATIVE LEARNING OPPORTUNITIES to inspire interest, application, rehearsal and spelling success
2. Understand how to improve THE STORAGE AND RECALL OF SPELLING MEMORIES through the use of strategies and skills which engage and impress the mind
3. Apply investigative activities to identify the role of ENGLISH SPELLING RULES which provide an additional route into learning
4. PROMOTE CREATIVITY to engage and captivate ownership, interest and effective spelling memories
5. Uncover historical facts which have informed the fascinating EVOLUTION OF ENGLISH SPELLINGS
6. Plan, deliver and monitor STRUCTURED DELIVERY to support ongoing spelling development and progress
7. Solutions to spelling activities and challenges
Sally Raymond has worked as a school SENCo, a dyslexia tutor and a dyslexia course manager delivering SpLD teacher-training courses, and has also run courses for parents of dyslexic children and provided dyslexia consultancy advice to schools, colleges and the workplace. She has previously published with David Fulton & Sheldon Press.
|
In almost every science textbook, the origin of the term “vaccine” and its development is paired with the disease smallpox. The two are very closely linked, and the disease is surely one of the primary reasons we have the miraculous medical phenomenon of the vaccine, which saves countless lives every year. The story of how it all came to be is an interesting one and can be a source of hope in dire times like now, when we need not just one but several rays of hope to fight our way out of the crazy situation caused by the coronavirus.
So, let’s dive into the history of this disease and how keen observation by one talented physician led to one of humankind’s greatest achievements.
The exact origin of smallpox is not known, but the disease dates back thousands of years. Excavated Egyptian mummies dating to around the 3rd century BCE were found to have rashes similar to those of patients who had the disease. Descriptions have also been found over the course of history in India and China. Ten thousand years ago, this deadly disease, caused by one of two virus variants, Variola major and Variola minor, also created havoc in Africa. In 1350 B.C., an epidemic of smallpox hit during the Egypt-Hittite war. It spread from prisoners to the people around them and even killed the Hittite king. It continued to wreak havoc on other civilizations, aided by the extended trade routes and increased exploration of that time.
The virus causes lesions across the skin and body, rashes, and scars. According to the records, almost 30 percent of patients passed away; many recovered. Some survivors even faced the danger of losing their eyesight. The incubation period of the virus usually lasts a week to a fortnight, during which no apparent symptoms appear. Initially, diseased individuals developed fever and body ache, which later transitioned into rashes, which were contagious. Lesions and scars may also have fluid in the middle, causing extreme discomfort to the patient. Those who successfully fought it off had scabs that fell off, leaving low to zero chances of contagion.
Start of descent
Even though smallpox was quite a nasty disease, it did help a lot in the development of what we in the modern day call a “vaccine.” But, mind you, the process of its eradication didn’t start with vaccination. In 1022 A.D., a book called ‘The Correct Treatment of Smallpox’ mentioned taking smallpox scabs from a recovered patient and grinding them up to give to healthy individuals. This method was proposed by a Buddhist nun who developed it after noticing that individuals who recovered from the disease never acquired it again. The method was called “variolation” and was used for many years afterwards, as physicians made slight changes and honed it. It didn’t make everyone immune to the virus, but the rate of disease development decreased quite significantly.
The real success, however, is attributed to the work done by Edward Jenner. When he was 13 years old, Jenner worked as an assistant to a country surgeon in Sodbury and once heard a milkmaid claim that she would never have smallpox, as she had already had cowpox, and so would never go through the phase of having a face marked with lesions. This was an intriguing statement for the young boy.
Cowpox is another type of skin infection, one that infects cows. The cowpox virus belongs to the same family of viruses as smallpox, called Orthopoxvirus. Cowpox itself is very similar to smallpox but is a much less severe and contagious form. Jenner analyzed the milkmaid's statement later, when he became a physician himself, and noted that what she said was right. When the cowpox virus infects a host different from the original one, in this case humans, it is less virulent and not as deadly. He then decided to test whether it could be used in the treatment of smallpox. So, on the historic day of May 14, 1796, he tested fluid taken from the cowpox blister of a milkmaid, Sarah Nelmes, on the skin of an eight-year-old boy named James Phipps. The latter developed a fever for a few days but recovered fairly soon.
Some months later, Jenner injected the boy with matter from a smallpox sore, but remarkably the boy did not develop the disease. It meant that he was now safe from it and would never acquire smallpox again. This successful method was used for further experimentation, and the physician summarized his work in his treatise “On the Origin of the Vaccine Inoculation,” hoping that it would overthrow the deadly sickness. After long discussions and reviews by the health establishment, vaccinations were finally approved. In the following centuries, the procedure was further improved, and scientists started to create new vaccines to fight other diseases such as tetanus, measles, polio, and many more.
Extensive vaccination programs, which we commonly hear of today, were also initiated around the world to combat health scares. Various programs, such as those under the belt of the World Health Organization and regional and local governments, were launched to take control of such threats and ultimately eliminate them with combined efforts.
The World Health Organization designed and introduced a campaign in 1959 to remove the virus, but the plan suffered several setbacks. Over the next few years, outbreaks were still occurring, and many people were getting infected with the virus. A more organized program was initiated almost eight years later, and labs in endemic countries were tasked with producing higher quality vaccines, which they successfully delivered. Alongside widespread campaigns, improved surveillance systems and medical equipment also helped to alleviate the problem. Soon, countries across the regions of North America and Europe started to report good progress, and finally, by 1977, smallpox was annihilated.
On May 8, 1980, the world was officially declared free of this ailment by the WHO – indeed one of the biggest health conquests. But stocks of the virus are still held in some laboratories that claim to require them for research purposes. International consensus led to reducing and limiting the number of stocks and storing them only in centers with tight regulations and security, so as to avoid any potential use in bioterrorism. The two locations that hold official WHO licenses to handle and store the virus are the Centers for Disease Control and Prevention in Atlanta, Georgia, and the State Research Center of Virology and Biotechnology (VECTOR Institute) in Koltsovo, Russia.
The impact of vaccination on controlling diseases is very large and can’t be explained in a few words. Diseases like malaria, polio and measles that once threatened the lives of millions and took away many precious souls are now within our control, although some underdeveloped regions are still struggling – but the cause is more social and regional than medical. We are in a new age of mind-blowing technologies and advancements in fields that have elevated the level of services provided and improved the overall quality of life.
With the looming threat of coronavirus, it is in our nature to be scared and intimidated, but we should not forget the achievements this same nature unlocked in previous ages, bringing us to the most advanced period in history. If we work together, observe keenly, and put in our best efforts, without a doubt we can bring this coronavirus to its knees, just like smallpox and every other epidemic in history. It is a matter of the will to face it and the courage that should be kept ignited to show that WE CAN, and WE WILL, crush it!
Maham Maqsood is the Managing Editor at Scientia Pakistan. She is a senior at Quaid-i-Azam University, Islamabad studying Biochemistry. An avid reader and a freelance writer, Maham has worked for several organizations including Globalizon and MIT Technology Review Pakistan.
|
We all know that brushing our teeth, flossing regularly, and scheduling regular check-ups with the dentist are important parts of maintaining good oral hygiene. Everyone wants to avoid cavities and root canals. However, taking care of your mouth is crucial to protecting more than just your teeth and gums. Protect yourself and your loved ones by knowing the facts about how the health of your mouth, teeth and gums can affect your general health.
According to a 2015 oral health survey across US households, 97% of adults value oral health and agree that regular dental visits keep them healthy. However, only 37% actually visited the dentist in the past year.
WHAT’S THE CONNECTION BETWEEN ORAL HEALTH AND OVERALL HEALTH?
The mouth, like most areas of the body, is home to lots of bacteria. While most of the bacteria is harmless, there are harmful bacteria lurking that must be kept at bay. According to Dr. Michelle Crews of Uptown Dental, good oral care, such as daily brushing and flossing, can keep these bacteria under control. When good oral care is neglected, harmful bacteria multiply and breach healthy teeth and gums, causing infections, tooth decay and gum disease.
Keep in mind that a moist mouth is a healthy mouth. Certain medications, whether over the counter or prescription, often reduce saliva flow that leads to dry mouth. Saliva keeps oral bacteria in check by ridding the mouth of food and neutralizing acids. If you take decongestants, antihistamines, painkillers, diuretics or antidepressants, drink plenty of water throughout the day to keep your mouth moist.
Inflammation of any kind in the body can lead to disease. Growing evidence shows that gum disease and other inflammatory diseases of the mouth are strongly linked to the incidence of systemic diseases that affect the entire body rather than a single organ or body part. According to the Mayo Clinic, oral bacteria and the inflammation associated with periodontitis, a severe form of gum disease, might play a role in some diseases.
BEWARE OF CONDITIONS THAT MAY ALSO HAVE IMPACTS ON YOUR ORAL HEALTH
Numerous health conditions may have negative impacts on oral health, including these clinical conditions:
- OSTEOPOROSIS
Osteoporosis, a disease that causes bones to become weak and brittle, might be linked with jaw bone and tooth loss. Tooth loss can occur when the bone of the jaw becomes less dense. Women with osteoporosis are three times more likely to have tooth loss than those with normal bone density. Signs of osteoporosis include loose teeth, gums detaching from the teeth and receding gums.
- ALZHEIMER’S DISEASE
A 2013 study found that people with poor oral hygiene or gum disease could be at higher risk of developing Alzheimer’s compared with those who have healthy teeth. Additionally, worsening oral health is seen as Alzheimer’s disease or dementia progresses because providing oral care becomes more difficult as the patient and caregiver have other health challenges.
- DIABETES
Diabetes is a chronic condition that reduces the body’s resistance to infection, putting gums and teeth at risk. Tooth loss and gum disease appear more frequently and severely among people who have diabetes. People who suffer from diabetes and gum disease have trouble controlling their blood glucose levels. Good oral hygiene and regular dental visits can improve blood sugar control and health.
- OTHER HEALTH-RELATED CONDITIONS
Other conditions that might be linked to oral health include eating disorders, rheumatoid arthritis, head and neck cancers, and bacterial pneumonia.
Always inform your dentist of any medications you are taking or if you have had any changes in your overall health, especially the development of a chronic condition like diabetes.
POOR ORAL HEALTH HAS BEEN LINKED TO OTHER HEALTH CONDITIONS
Poor oral health may contribute to various diseases or simply be associated with these conditions, including:
- ENDOCARDITIS
Endocarditis is an infection of the inner lining of your heart, the endocardium. Endocarditis typically occurs when bacteria or germs from another part of your body, such as your mouth, spread through the bloodstream and attach themselves to damaged areas in the heart.
- CARDIOVASCULAR DISEASE
Research suggests that heart disease, clogged arteries and stroke may be linked to inflammation and infection caused by harmful bacteria in the mouth.
- BIRTH OUTCOMES
Gum disease has been linked to premature births and low birth weight. Dental care is important before and during pregnancy. If you are pregnant and have not seen the dentist recently, schedule an appointment for a check-up, and be sure to let your dentist know you are pregnant. Dr. Michelle Crews states, “Gingivitis is so common among pregnant women that some dental plans have started offering an additional cleaning for pregnant women. Patients should check to see if their policy has this feature.”
HOW TO MAINTAIN GOOD ORAL HEALTH
To protect your oral health, practice good oral hygiene every day.
- Brush your teeth twice a day with fluoride toothpaste.
- Floss daily.
- Eat a healthy diet and limit between-meal snacks.
- Avoid sugary beverages and sticky candy.
- Replace your toothbrush every three months, or sooner if bristles are flattened or frayed.
- Schedule regular dental checkups and cleanings.
- Avoid tobacco use.
The American Academy of Pediatric Dentistry recommends that infants visit a dentist when teeth begin to show or by six months of age, whichever comes first.
|
Those pristine-looking Alpine glaciers now melting as global warming sets in may explain the mysterious increase in persistent organic pollutants in sediment from certain lakes since the 1990s, despite decreased use of those compounds in pesticides, electric equipment, paints and other products.
When glaciers melt they set free chemicals which have been locked for decades in the "eternal ice." Researchers from Empa, the ETH Zurich and Eawag have analyzed sediment layers in the Oberaarsee and have been able to reconstruct the processes by which long-lived organic compounds have accumulated in the ice over the last sixty years. A study just published in the journal Environmental Science and Technology describes how shrinking glaciers have, for about ten years now, become a secondary source of pollutants which have long been banned and are no longer produced in industrial quantities.
When glaciers shrink due to the effects of global warming, the retreating tongues sometimes reveal things which have been buried in the ice mass for decades or even centuries. This includes chemical substances which have been banned for years and which really ought to be kept under lock and key anyway, such as those known as POPs -- short for persistent organic pollutants. These are organic environmental pollutants which take a long time to decompose and include for example chemicals used as plasticizers (softeners) in various synthetic materials, pesticides and also dioxins.
Many of these POPs are endocrine disrupters and carcinogenic, and are suspected of interfering with human and animal development. In addition they are extraordinarily long-lived, and can be transported great distances through the atmosphere. POPs can therefore be found all over the world, even in glaciers in environments high in the Alps, where ecosystems are extremely sensitive.
A drill core from a glacial lake
When glaciers melt the accumulated chemicals -- deposited years ago by air currents onto the snow layer and then frozen into the ice -- are carried by the runoff water into the nearest glacial lake. There, together with other matter suspended in the melt water, they sink to the bottom of the lake and accumulate in the sediment. This has taken place, for instance, in the Oberaarsee, an artificial reservoir at an elevation of 2300 meters near the Grimsel Pass in the Bernese Oberland region.
In the winter of 2006 sedimentologists from Eawag journeyed to the frozen mountain lake to extract drill core samples of sediment, each about a meter long and six centimeters in diameter. "We then took the cores and cut them into slices which we freeze dried," explains Peter Schmid, a chemist at Empa. Back in the laboratory, he and his team analyzed the various sediment layers for a range of chemicals, including POPs.
A history of POPs over the past half century
The researchers were able to read the sediment layers in the Oberaarsee drill cores like tree rings, layer by layer all the way back to 1953, when the dam which created the lake was first built. "Based on our analysis of the layers we were able to confirm that POPs were being produced in large quantities from 1960 to 1970, and deposited in alpine lakes," says Christian Bogdal, who completed his doctoral thesis at Empa on the polluting effects of these organic chemicals and who now conducts research in the field at the ETH Zurich. Equally clearly visible in the cores was the reduction in the quantity of pollutants at the beginning of the 1970s when many of these environmentally damaging substances were banned.
Just as impressive, and also somewhat surprising, was the renewed increase in POP concentrations in sediment layers which were only ten to fifteen years old, according to Bogdal. For example, the quantities of chlorine-containing chemicals found in sediment layers from the end of the 1990s were sometimes higher than those seen in the 60s and 70s. A possible explanation for this is that the lake is fed primarily by the runoff from the Oberaar glacier, the tongue of which has receded by 1.6 km since 1930. In the last ten years alone it has shrunk by more than 120 meters, and could therefore have released a relatively large amount of accumulated toxic substances. As environmental scientists have long suspected, and as this study now shows, glaciers represent a serious secondary source of POPs re-entering the environment.
Research into the "eternal ice" continues
This study is by no means the end of investigations into long-lived organic pollutants in glaciers. "In the meantime we have had results from other mountain lakes which confirm our data. There are still many other unanswered questions of great interest to us chemists, as well as the sedimentologists and glaciologists too, of course," says Peter Schmid. For instance not much is known about how POPs actually accumulate in glaciers, what paths they follow within the glacier and what chemical changes they undergo, if any, when they are exposed to intense UV light. "We also want to know if we should expect even larger quantities of pollutants to be released from glaciers," he adds. Bogdal and Schmid, together with glaciologists, chemists and sedimentologists from the ETH Zurich, the Paul Scherrer Institute and Eawag, are therefore currently submitting a project proposal to the Swiss National Science Foundation to investigate the path of pollutants in the "eternal ice".
- Bogdal et al. Blast from the Past: Melting Glaciers as a Relevant Source for Persistent Organic Pollutants. Environmental Science & Technology, 2009. DOI: 10.1021/es901628x
|
To use the law of sines we need a known side opposite a known angle. Sometimes we do not have that information, as when, for example, we know three sides and no angle. We can still solve such a triangle using the law of cosines.
Consider an oblique triangle ABC as shown in Fig. 8-27. As we did for the law of sines, we start by dividing the triangle into two right triangles by drawing an altitude h to side AC.
In right triangle ABD,

$$c^2 = h^2 + (AD)^2$$

But AD = b − CD. Substituting, we get

$$c^2 = h^2 + (b - CD)^2 \qquad (1)$$

Now, in right triangle BCD, by the definition of the cosine,

$$\cos C = \frac{CD}{a}, \quad\text{so}\quad CD = a \cos C$$

Substituting a cos C for CD in Equation (1) yields

$$c^2 = h^2 + (b - a \cos C)^2$$

Squaring, we have

$$c^2 = h^2 + b^2 - 2ab \cos C + a^2 \cos^2 C \qquad (2)$$

Let us leave this expression for the moment and write the Pythagorean theorem for the same triangle BCD.

$$a^2 = h^2 + (CD)^2$$

Again substituting a cos C for CD, we obtain

$$a^2 = h^2 + a^2 \cos^2 C, \quad\text{or}\quad h^2 = a^2 - a^2 \cos^2 C$$

Substituting this expression for h² back into (2), we get

$$c^2 = (a^2 - a^2 \cos^2 C) + b^2 - 2ab \cos C + a^2 \cos^2 C = a^2 + b^2 - 2ab \cos C$$

which is the law of cosines.
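As a quick numeric check of the result, here is a short Python sketch (the side lengths are assumed for illustration) that uses the law of cosines to solve the SSS case described at the start of this section, recovering all three angles from the three sides:

```python
import math

# Solve a triangle given three sides (the SSS case) with the law of
# cosines: c^2 = a^2 + b^2 - 2ab cos C, rearranged for the angle.
a, b, c = 5.0, 7.0, 9.0  # assumed example side lengths

C = math.acos((a**2 + b**2 - c**2) / (2 * a * b))
A = math.acos((b**2 + c**2 - a**2) / (2 * b * c))
B = math.pi - A - C      # the three angles must sum to 180 degrees

print([round(math.degrees(x), 1) for x in (A, B, C)])
```

For a = 5, b = 7, c = 9 this prints angles of roughly 33.6, 50.7, and 95.7 degrees, which sum to 180 as expected.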
|
What is cognitive development?
Cognitive development is the development of your baby’s abilities to see, hear, touch, feel, taste, and smell; to remember and learn; to understand language and then to say words; and to think. All of these processes take place in your baby’s brain, which undergoes tremendous development during the first year of life.
Your baby is constantly learning. Even in the womb, she was taking note of her surroundings and storing memories of events that occurred over and over again. Studies have shown that newborn babies can tell the difference between their mother’s amniotic fluid and someone else’s amniotic fluid. Newborns prefer the sound of their mother’s voice over that of another person.
Babies tend to live in the here and now because their memory capacity is limited and they are not able to sustain their attention. Newborns and young babies are easily distracted because they live "in the moment." Over the first year, your baby’s memory capacity will grow.
Right from the beginning of life, your baby is very sensitive to contingency, meaning that she notices when one event consistently follows another event closely in time. For example, she will figure out that the sound of your voice is usually followed by you picking up and holding her. She will notice that if she cries, she will soon hear you approaching.
Also from the start, your baby will begin to acquire a sense of self, that her body is separate from others. She will start to develop proprioception, which is a sense of where the parts of her body are in space and in relation to the other parts of her body. Try touching your nose with your finger with your eyes closed: the ability to locate the parts of your body in space is proprioception. Of course, your baby cannot do this yet; it will take some time. It is very important to your baby to know where she is in space, and to feel secure in that space. This is why, if you are jumpy when you hold your baby, she may feel insecure and start to become upset.
There is a continuing controversy about how much of a baby’s ability to learn is because of the internal wiring in her brain, and how much is due to learning through experience. There is also some disagreement with regard to when a baby acquires specific cognitive abilities. Some scientists think cognitive development in babies occurs in stages, and others think it is a more gradual, continuous process.
Researchers do agree on two things, though. First, the initial 12 months of life is a time of great learning, and a baby’s interactions with her caregivers is crucial to her cognitive development. Second, in a baby’s first year, many neural connections in the brain are wired together and some connections are lost; the creation and "pruning back" of these connections are affected by a baby’s experiences.
Babies cannot tell us what they are thinking. As a result, researchers have used careful observation of baby behaviour to try to figure out what is going on in babies’ minds. The pages in this section provide a general month-by-month description of what cognitive abilities a baby acquires in the first year of life. However, it is important to keep in mind that every baby develops at her own pace.
|
Mosquitoes are thin, long-legged, two-winged insects and are typically six to 12 millimetres in length. Both males and females have antennae and an elongated "beak" or proboscis three to four times longer than its head.
Range: There are thought to be 82 species of mosquitoes in Canada and over 2,500 species in the world. There are 10 main groups in Canada but only five of them have members that are significant pests of humans: Anopheles, Culex, Aedes (including Ochlerotatus), Mansonia (Coquillettidia) and a few species of Culiseta.
Habitat: Mosquito habitat varies for each species and can include natural areas such as rain puddles and ponds, decomposing material such as wet leaf matter, ditches and marshes. While healthy wetlands are habitat for mosquitoes, they are also home to mosquito predators.
Diet: Female mosquitoes of most species need to feed on blood to develop eggs. Male mosquitoes cannot bite and both sexes of mosquitoes use their long proboscis to feed on the nectar of flowers or other sugar sources like honeydew.
Mosquito breeding varies by species, but the common traits include eggs being laid in fresh or stagnant water; all mosquitoes need water to develop. The eggs hatch into larvae, which look like wiggling worms. As they moult, the larvae turn into comma-shaped pupae, which are busy growing wings and legs. When they are done growing, they break out of the pupal skin as adult mosquitoes.
More on this Species:
Hinterland Who's Who
|
Alan Turing was a brilliant mathematician, cryptographer, and logician, plus the father of computer science and artificial intelligence. He also worked in biology, and now, 58 years after his tragic death, science has confirmed one of his old biological hypotheses.
Turing's idea was that biological patterns - such as a tiger's stripes or a leopard's spots - are formed by the interactions of a pair of morphogens, which are the signaling molecules that govern tissue development. The particular pair that Turing proposed was an activator and an inhibitor. Turing proposed that the activator would form something like a tiger's stripe, but then interaction with the inhibitor would shut down its expression, creating a blank space. Then the process would reverse, and the next stripe would form. The interaction of these two morphogens would combine to create the full stripe pattern.
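To make the activator-inhibitor idea concrete, here is a minimal Python sketch of a reaction-diffusion system of the general kind Turing analyzed. It uses the Gray-Scott model as a stand-in, not the specific equations from Turing's paper or the King's College study, and every parameter value is an illustrative assumption:

```python
import numpy as np

# 1-D Gray-Scott reaction-diffusion: u is a substrate feeding the
# self-amplifying species v; v is removed at rate (F + k). Patterns
# emerge because v diffuses more slowly than u.
n, steps = 256, 10000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060   # classic pattern-forming regime

u, v = np.ones(n), np.zeros(n)
u[n//2 - 10:n//2 + 10] = 0.50             # seed a perturbation in the middle
v[n//2 - 10:n//2 + 10] = 0.25

def lap(a):
    """Discrete Laplacian with periodic boundaries."""
    return np.roll(a, 1) + np.roll(a, -1) - 2.0 * a

for _ in range(steps):
    uvv = u * v * v
    u += Du * lap(u) - uvv + F * (1.0 - u)
    v += Dv * lap(v) + uvv - (F + k) * v

# Locate the regularly spaced peaks that v settles into.
peaks = np.where((v > np.roll(v, 1)) & (v >= np.roll(v, -1)) & (v > 0.1))[0]
print(f"{len(peaks)} peaks at positions {peaks.tolist()}")
```

After enough steps the self-amplifying species concentrates into regularly spaced peaks separated by blank gaps, a one-dimensional analogue of the stripe-then-space pattern described above.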
This hypothesis remained mostly speculation until now: researchers at King's College London have tested the idea in the mouths of mice. The roofs of mice's mouths contain regularly spaced ridges, and the researchers identified the precise pair of morphogens working as activator and inhibitor to create the pattern, just as Turing suggested. What's more, when the researchers tampered with one morphogen or the other to increase or decrease its activity, the pattern of the ridges changed just as Turing's initial equations predicted it would. Researcher Dr. Jeremy Green adds:
"Regularly spaced structures, from vertebrae and hair follicles to the stripes on a tiger or zebrafish, are a fundamental motif in biology. There are several theories about how patterns in nature are formed, but until now there was only circumstantial evidence for Turing's mechanism. Our study provides the first experimental identification of an activator-inhibitor system at work in the generation of stripes – in this case, in the ridges of the mouth palate. Although important in feeling and tasting food, ridges in the mouth are not of great medical significance. However, they have proven extremely valuable here in validating an old theory of the activator-inhibitor model first put forward by Alan Turing in the 50s."
Green also suggested that this first experimental confirmation of Turing's activator and inhibitor model could prove useful in regenerative medicine, where this knowledge could allow us to rebuild structure and patterns when using stem cells to replace damaged tissues. And, as Green pointed out, it's a nice way to ring in the 100th anniversary of Turing's birth by proving that his biological theory was, indeed, right all along.
|
3D File: electronic file representing an object in three dimensions. It is designed by 3D modeling to enable printing of the desired object on a 3D printer. 3D files are created with CAD software.
3D Model: object obtained by 3D modeling.
3D modeling: the stage of three-dimensional computer graphics in which an object is created in 3D modeling software by adding, subtracting and modifying its components.
3D printing: additive manufacturing process. Several technologies coexist: FDM, SLA, selective laser sintering and others.
3D printing machine: machine for manufacturing three-dimensional parts by depositing successive layers of molten material.
Additive Manufacturing: manufacturing processes that build parts by adding material, usually under computer control.
CAD: Computer-Aided Design; all the software and geometric modeling techniques used to develop, virtually test and manufacture products and the tooling to make them.
Charmille: measure of grain for the grained (textured) surface of an object.
Fab Lab: FABrication LABoratory; a community digital workshop open to all (tinkerers, designers, artists, students, hackers, ...). It is equipped with computer-controlled machinery and must comply with a charter established by MIT.
FDM: Fused Deposition Modeling; wire deposition: plastic is deposited mechanically in successive layers. The machine extrudes a molten plastic filament through a nozzle.
GF: Glass Filled; material loaded with glass fibers.
HDPE: High-density polyethylene; a polymer plastic.
HST: Mineral Fiber Filled; material filled with mineral fibers.
Layer thickness: the height of each layer of material added in additive manufacturing or 3D printing processes based on layer stacking (a short worked example follows this glossary).
Layout: partial or full representation of an object (existing or planned), used to test and validate certain aspects of it. It can be built at a given scale, usually reduced or enlarged, for easier viewing.
PA: Polyamide; a polymer plastic.
PC: Polycarbonate; a polymer plastic.
Photosensitive: describes a material which reacts to light radiation.
Polymerization: chemical reaction or process whereby small molecules react together to form molecules of higher molecular weight.
PP: Polypropylene; a polymer plastic.
Prototype: an original model that has all the qualities and technical operating characteristics of a new product; sometimes an incomplete version of the eventual product.
Rapid Prototyping: computer-controlled fabrication method comprising a set of tools which, used together, produce intermediate representations of a product design: models, prototypes and pre-series.
RIM: Reaction Injection Molding; by creating an epoxy or silicone resin mold (from a master model previously made by stereolithography), a prototype can then take shape.
SLA: Stereolithography (Stereolithography Apparatus); polymerization of a photosensitive epoxy resin by an ultraviolet laser in layers of 0.10 to 0.15 mm.
SLS: Selective Laser Sintering; polyamide particles are agglomerated (sintered) by a CO2 laser.
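Several of the entries above (layer thickness, SLA, FDM) come down to the same simple arithmetic, sketched below with assumed numbers rather than any vendor's specifications:

```python
# Rough layer arithmetic for the "Layer thickness" entry above.
part_height_mm = 60.0
layer_thickness_mm = 0.15   # within the 0.10-0.15 mm SLA range noted above
seconds_per_layer = 8.0     # assumed machine-dependent figure

layers = round(part_height_mm / layer_thickness_mm)
hours = layers * seconds_per_layer / 3600
print(f"{layers} layers, about {hours:.1f} hours")
```

Halving the layer thickness doubles the layer count, and roughly the print time, which is the usual trade-off between surface finish and speed.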
|
Carotid artery disease, or carotid artery stenosis, refers to a narrowing within the carotid arteries that is usually caused by the buildup of plaque within the artery, called atherosclerosis.
The carotid artery supplies blood and oxygen to the brain as well as the head and neck. There are two common carotid arteries - one on each side of the neck - that split into two arteries: the internal and external carotid arteries. The internal carotid artery supplies blood and oxygen to the brain and the external carotid artery supplies blood and oxygen to the face, neck, and scalp.
For many people, carotid artery stenosis does not cause symptoms. However, when pieces of plaque break off (called emboli) and travel to the brain, blood flow to an area of the brain is blocked and causes a stroke. 30% to 50% of strokes are caused by carotid artery disease. Since stroke is the third leading cause of death in Canada, the diagnosis and treatment of this condition is critical.
People who smoke, are overweight, are inactive, or who have high cholesterol, high blood pressure, or high blood sugar levels (e.g., diabetes) are at an increased risk of carotid artery disease.
Because plaque can also build up in arteries other than the carotid arteries, people who have carotid artery disease may also have coronary artery disease, or heart disease.
Carotid artery disease is caused by narrowing of the carotid artery, usually caused by atherosclerosis. Atherosclerosis is a buildup of plaque within the artery.
Plaques, which consist of cholesterol and other material, start to build up when there is damage inside the arteries. When plaques in the arteries break open or crack, platelets stick to the crack and form a blood clot. This can partially or completely block the carotid artery. For some people, a small piece of the plaque can break off and travel to the brain, cutting off blood supply to a certain area of the brain and causing a stroke.
Symptoms and Complications
Most people have no symptoms in the early stages of carotid artery disease, but as more of the carotid artery is blocked, symptoms associated with a transient ischemic attack (TIA) or stroke can occur. For some people, the first symptoms of carotid artery disease are those of a stroke or a TIA.
Symptoms of a stroke include:
- sudden, severe headache
- sudden, severe dizziness or difficulty walking
- sudden difficulty speaking
- sudden blurred vision in one or both eyes
- sudden weakness or numbness of the arms, legs, or face
A TIA is also referred to as a mini-stroke. A TIA has the same symptoms as a stroke, but the symptoms go away within a day. If you experience symptoms of a stroke or a TIA, get immediate medical attention and do not drive yourself to the hospital. Early treatment is imperative to minimize damage to the brain and increase the chance that you will recover without permanent effects.
A stroke is the most serious complication of carotid artery disease.
Making the Diagnosis
Your doctor will perform a physical examination and ask you questions about your symptoms. As part of the examination, your doctor will listen to your carotid arteries with a stethoscope. If you have carotid artery disease, your doctor may hear bruits, which are swooshing sounds caused by changes in blood flow.
If your doctor suspects that you have carotid artery disease, they will order a test called a Doppler ultrasound, which evaluates the blood flow through the carotid arteries using sound waves.
Some people may require additional tests such as an angiogram, a computed tomography angiogram (CTA), or a magnetic resonance angiogram (MRA). Angiograms involve injecting a contrast agent ("dye") into a vein to evaluate the carotid arteries.
Treatment and Prevention
Treatment of carotid artery disease is aimed at reducing the risk of stroke and can include medications, lifestyle management, and surgery.
Medications that may be used to manage carotid artery disease include:
- medications to lower blood pressure
- "statin" medications (e.g., lovastatin, pravastatin, simvastatin, atorvastatin*) to lower cholesterol levels
- medications to reduce blood sugar levels
- antiplatelet medications such as acetylsalicylic acid (ASA) or clopidogrel that make platelets in the blood less likely to form a blood clot
Your doctor may also suggest that you eat a healthy diet, stop smoking, exercise, or lose weight to help reduce the risk of stroke. Your doctor and other health care professionals can help you implement lifestyle changes safely.
For some people at high risk of having a stroke, or who have symptoms due to carotid artery disease, a doctor may recommend a surgical procedure, such as:
- an endarterectomy, where plaques within the carotid artery are removed
- angioplasty with a stent, where the carotid artery is widened with a small balloon that is inflated at the end of a tube
Following the balloon procedure, a stent (mesh tube) is placed in the artery to keep the carotid artery open.
The best way to help prevent carotid artery stenosis is to manage risk factors. A healthy diet, exercise, and quitting smoking will all help to reduce the risk of developing carotid artery disease, as well as reduce the risk of stroke and heart disease. It is also important for people to control their blood sugar, cholesterol levels, and blood pressure to help reduce the risk of carotid artery disease and stroke.
*All medications have both common (generic) and brand names. The brand name is what a specific manufacturer calls the product (e.g., Tylenol®). The common name is the medical name for the medication (e.g., acetaminophen). A medication may have many brand names, but only one common name. This article lists medications by their common names. For information on a given medication, check our Drug Information database. For more information on brand names, speak with your doctor or pharmacist.
|
Fifth grade students have enhanced their recent gains in consciousness and grown more accustomed to being an isolated self, seeing the world in a new perspective. Yet, like third grade students, they are about to leave another phase of childhood and cross a new threshold of experience. The curriculum must therefore build on already established foundations, and introduce certain new elements to prepare them for this next step forward.
Until now, history has been taught in a purely pictorial and personal way. No attempt has been made to introduce exact temporal concepts or to proceed in strict sequences. Now, however, history becomes a special Main Lesson subject, as does geography. By telling of humans’ deeds and strivings, history stirs children to a more intense experience of their own humanness. Geography does exactly the opposite; it leads them away from themselves out into ever widening spaces. History brings children to themselves; geography brings them into the world.
In the fifth grade, ancient history starts with the childhood of civilized humanity in ancient India, where humans were dreamers. The ancient Persian culture that followed felt the impulse to transform the earth, till the soil, and domesticate animals, while helping the sun god conquer the spirit of darkness. The great cultures of Mesopotamia (the Chaldeans, the Hebrews, the Assyrians, and the Babylonians) reveal the origins of written language on clay tablets. The Egyptian civilization of pyramids and pharaohs precedes the civilization of the Greeks with whom ancient history ends.
Every means is used to give children a vivid impression of these five ancient cultures. They read translations of poetry, study hieroglyphic symbols of the Egyptians, sample arts and crafts of the various ancient peoples, trying their hands at similar creations. At this age, history is an education of children’s feelings rather than their memory for facts and figures, for it requires inner mobility to enter sympathetically into these ancient states of being so different from our own.
In contrast, American geography examines the earth’s physical features and links them with a study of the way human life has been lived in each region, including the use of natural resources. As a continuation of their study of the living earth, fifth grade students begin botany, the study of the plant world. After discovering some of the secrets of the plant life found in their own environment, the students’ attention is drawn to vegetation in other parts of the world.
Building on years of form drawing, freehand geometry is introduced in the fifth grade. Fractions and decimals continue to be emphasized in mathematics, along with mixed numbers and reciprocals.
In music, they study time values, harmony, and the major and minor scale. They sing rounds and canons, and participate in a school-wide chorus and one of two orchestras.
Foreign language study builds additional skills in reading simple texts, syntax, short talks, and descriptions. Sanskrit poems and Greek phrases are also learned. In addition to free geometric drawing, fifth grade students practice form drawing, watercolor painting, knitting, woodworking, carving and clay work.
Fifth grade students continue their study of eurythmy. Physical education includes rhythmic exercises, gymnastics, kickball and Greek sports such as javelin, discus, shot put, and high jump. The year culminates in a Greek Pentathlon in which students compete in mixed-school “city states” with fifth grade students from other Waldorf Schools.
Main Lesson Subjects
- Ancient civilizations: India, Mesopotamia, Persia, Egypt and Greece
- Language Arts: reading, grammar, composition
- Mathematics: fractions, decimals and introduction to geometry
- North American geography
- Greek mythology
- Research reports
- Olympic “Pentathlon” festival
|
The MBA Math Monday series helps prospective MBA students to self assess their proficiency with the quantitative building blocks of the MBA first year curriculum.
The first MBA Math economics exercise explained that marginal analysis discovers a firm’s optimal production quantity and profit. Marginal analysis problems can be posed equivalently in terms of tables, formulas, or charts. The first exercise used data in tables. This exercise uses formulas but without invoking calculus.
Economics is tricky because it takes a while to internalize the necessary chain of reasoning. Of course, it also has its own terminology (qth unit?) that is confusing at first. The goal is generally clear, at least in intro economics exercises. In this exercise, we want to know what level of production yields the highest profit. The challenge is to build a chain of reasoning from the problem as stated to the goal.
Once you’ve got a clear solution path in your head, the rest is simple algebra. Starting with algebra before you know where you’re headed is a common form of flailing for beginning (or returning) students.
Working with data in tables is more intuitive for many beginning students but, if you can accept that formulas represent the same information in shorthand, you can get more quickly to an answer with formulas. This one takes about 5 seconds once you know what you’re doing.
The last step in a marginal analysis is to introduce fixed costs to determine whether the optimal strategy results in a true profit or loss. This has been the Twilight Zone world of U.S. car companies recently. However much they’ve (debatably) been closing the gap in quality and marginal production costs, the industry-wide drop in demand combined with their massive fixed cost burdens put them in the position of working like hell to lose the least amount possible.
Suppose that you can sell as much of a product as you like at $92 per unit. Your marginal cost (MC) for producing the qth unit is given by:
If fixed costs are $350, what is the optimal output level?
Solution (with audio commentary): click here
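Because the marginal cost expression itself did not survive on this page (the formula after "given by" is missing), here is a hedged Python sketch of the full reasoning chain with a stand-in function, MC(q) = 4 + 2q, which is purely hypothetical:

```python
# Marginal analysis: produce every unit whose marginal cost is at most
# the selling price, then net out fixed costs to check for a true profit.
PRICE = 92          # selling price per unit (from the problem)
FIXED_COSTS = 350   # fixed costs (from the problem)

def mc(q):
    """Marginal cost of the qth unit (an assumed stand-in formula)."""
    return 4 + 2 * q

q = 0
while mc(q + 1) <= PRICE:   # step 1: expand output while MC <= price
    q += 1

variable_cost = sum(mc(i) for i in range(1, q + 1))
profit = PRICE * q - variable_cost - FIXED_COSTS   # step 2: net out fixed costs
print(f"optimal output: {q} units, profit: ${profit}")
```

With the assumed formula, production stops at 44 units (the last unit whose marginal cost stays at or below $92), and profit after the $350 of fixed costs is $1,542. The same two steps apply whatever the exercise's actual MC formula is.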
|
A new camera called the One Degree Imager, located at Kitt Peak National Observatory in Arizona, is responsible for this enticingly crisp image of the Bubble Nebula. Released on December 4, this closeup highlights the sphere of gas blown out by the nebula's central star, which is 45 times larger than our own sun.
Located in the constellation Cassiopeia, the Bubble Nebula is ten light-years across. Intense radiation from the massive star causes the otherworldly glow that lights up what looks like a celestial soap bubble.
Sheets of plasma—or superheated gas—break off from the surface of the sun in this image taken in extreme ultraviolet light. Before floating free, the plasma rose up and danced along pathways determined by magnetic fields. These unseen forces pushed and pulled the gas into contorted shapes.
Twin probes circling the moon—part of NASA's GRAIL mission—since 2011 have been recording changes in gravity as they fly over peaks and valleys on the lunar surface. As one probe flies over an area with a greater gravity field, the washing-machine-size instrument speeds up slightly, increasing the distance between it and its sister probe. These minute changes in position have enabled scientists to construct this very detailed view of local changes in the moon's gravity.
In this artist's conception of fine-grained particles around a brown dwarf—or a failed star—bits of debris are on their way to becoming a rocky planet.
Current thinking on Earthlike, rocky planet formation says that random collisions between tiny debris occur in the disc of material surrounding newborn stars. But the recent discovery of such a disc surrounding a brown dwarf has some astronomers rethinking that hypothesis. If planets can form from the debris around these failed stars, then rocky planets may be much more common than previously thought.
The Suomi NPP satellite captured this nighttime image of the aurora australis over the Antarctic coast south of South Africa. Despite the winter darkness and a waning crescent moon, the auroras produced enough light for sensors on board the new NASA and National Oceanic and Atmospheric Administration satellite to capture the boundary between the ice shelf and what some scientists call the Southern, or Antarctic, Ocean (dark line across the bright center swirl).
Launched last year, Suomi NPP enables scientists to capture nighttime images of the Earth's atmosphere and surface. The new sensor is so sensitive, it can capture the lights from one ship at sea.
Image courtesy Jesse Allen and Robert Simmon, Suomi NPP/NASA
In NASA's Landsat 5 satellite image, the Mergui Archipelago shows off the vibrant greens of its rain forests and the brilliant blues of the surrounding tropical ocean. Located along the southern border of Myanmar (Burma), next to Thailand, the area is known for valuable pearl oysters (Pinctada maxima) and a highly diverse concentration of plants and animals.
The Venusian atmosphere is made mostly of carbon dioxide, along with periodic spikes in sulfur. Recent research has found that volcanoes on the planet's surface, seen here in an artist's conception, are likely injecting that sulfur into the upper atmosphere. Direct confirmation is difficult, however, since Venus's thick atmosphere precludes a peek at its peaks.
|
The global ecosystem depends on both plants and animals to survive -- and as part of that ecosystem, plants and animals need each other. Although their symbiotic relationship is sometimes subtle, at other times their impact on one another is striking.
Plants and animals benefit each other as members of food chains and ecosystems. For instance, flowering plants rely on bees and hummingbirds to pollinate them, while animals eat plants and sometimes make homes in them. When animals die and decompose, they enrich the soil with nitrates that stimulate plant growth.
Many relationships between plants and animals are mutually beneficial. For instance, flowers need hummingbirds to pollinate them, just as hummingbirds need the flowers' nectar to refuel.
Plants provide a global benefit to animals by releasing oxygen into the atmosphere. Although pollination and food chains affect only a few local plants and animals, they frequently overlap to form larger food webs that contain hundreds or even thousands of wildlife species. This global significance means that when one plant or animal goes extinct, many others suffer as a result.
|
Modeling and teaching writing
Are you uncomfortable with the idea of teaching your kids to write? Maybe you think you can’t teach writing because you never really learned yourself. Or maybe you’re a confident writer, but you don’t have a clue how to pass that on to your kids.
One thing I do know: Regardless of skill or background, you can model and teach writing with confidence. Even though you may not believe it—you really do know more than your children.
Why Model and Teach Writing?
Simply, it’s unfair to expect our children to do something that hasn’t first been demonstrated.
Modeling writing in front of your children matters, but be encouraged that you don’t have to be perfect or have all the right answers. As homeschool parents, like it or not, our job is to teach and model the process until our children get it. They need to see and hear us thinking through our ideas. It’s good for them to watch us struggle to come up with a topic sentence or find the words to make up the lines of a poem. Why? Because they struggle too!
But let’s step out of writing mode for a moment.
Students learn geometry because you show them over and over how to do it, right? They rarely get it the first time. Or the second time. Or even third.
We’d never dream of throwing our kids to the math lion, yet when it comes to writing, we want to assign a topic and say “Go!”
For whatever reason, we just expect them to write intuitively. It’s pretty silly, really, because there are many strategies and skills involved with writing a good paragraph or story.
Model and teach through Guided Writing Practice to provide your young child with a daily, predictable, shared writing experience. Together, write several short sentences about simple, familiar topics such as animals, friends, the weather, or upcoming events.
During this time, you’re modeling important writing skills such as:
- Left-to-right progression
- Letter formation
- Correct spacing
- Punctuation and capitalization
Most importantly, Guided Writing gives your child the freedom to put together ideas without the limitations and fear of having to write them down himself.
A simple way to introduce writing skills is through predictable sentence starters. Young children thrive on repetition, so they’ll enjoy the consistency and routine of using the same sentence starter all week. Just draw out a different response each day.
Hello, _________. (Mommy, Jamie, Mittens)
Today is _________. (Tuesday, Friday, my birthday)
It is _________. (sunny, cloudy, foggy)
We are going to _________. (bake with Grandma, play Legos)
I think _________. (we will have fun, I will build a tower)
As your child’s writing skills increase, use your Guided Writing times to gradually introduce new concepts such as beginning, middle, and end; writing a friendly letter; or thinking of a problem and solution for a story.
This is often the point where moms drop off the grid: You go from nurturing the writing process to feeling guilty that you’re getting in the way of your child’s progress or creativity. Ironically, this is when most kids come to hate writing!
Instead, recognize that this is the phase of writing where you and your child can work together to produce the final project. Model and teach writing skills through examples and prompts. Keep things moving by continuing to do most or all of the writing, but share in the process. Because some of the work is yours and some is your child’s, it’s a collaborative effort. Let this free you instead of tether you to your guilt!
Middle and High School
Even if your teen is now working quite independently, you should still be modeling new writing skills and methods. As you work together, modeling helps familiarize her with the lesson’s expectations.
On a white board, demonstrate and teach writing skills through dialogues, prompts, and questions, but also show examples of the targeted writing. You and your student should both contribute to the paragraph.
Again, you’re not modeling a polished final draft, you’re modeling the thinking process. When your teen heads off to write her own paper, your time together will have set the stage.
At every age, your child needs your involvement in the writing process, not just to give editing feedback, but to instruct and model. Like teaching your child to make a bed, knit a scarf, or build a birdhouse, you remain involved until she is confidently and successfully progressing.
Collaborative writing takes time, too—to coax, encourage, ask questions, and discuss possibilities. Together, you and your child will grow comfortable with these writing sessions, and before you know it, you’ll watch her begin to apply the same thinking process when she works by herself.
So stay connected and involved. It’s crucial to your child’s writing success!
Copyright 2011 © Kim Kautzer. All rights reserved.
|
This 1767 engraving, published in Great Britain and attributed to Benjamin Franklin, warned of the consequences of alienating the colonies through enforcement of the Stamp Act. The act was a 1765 attempt by Parliament to increase revenue from the colonies to pay for troops and colonial administration, and it required colonists to purchase stamps for many documents and printed items, such as land titles, contracts, playing cards, books, newspapers, and advertisements. Because it affected almost everyone, the act provoked widespread hostility. The cartoon depicts Britannia, surrounded by her amputated limbs—marked Virginia, Pennsylvania, New York, and New England—as she contemplates the decline of her empire. Franklin, who was in England representing the colonists’ claims, arranged to have the image printed on cards that he distributed to members of Parliament.
Source: The Colonies Reduced. Design’d and Engrav’d for the Political Register, 2 3/8 x 3 7/8 inches, 1767—Prints and Photographs Division, Library of Congress.
|
Many speakers have sought to imbue the Thirteenth Amendment with a broad underlying principle, and Professor Rebecca Zietlow identified antisubordination as that principle. She focused on the precedents set forth by Congress, rather than by the courts. In this particular context, Professor Zietlow argued, legislative precedent has value because the Thirteenth Amendment was the first amendment to use an enforcement clause (§ 2) and because the framers of that amendment envisioned a broad grant of power. Congress acted then and since to effectuate an antisubordination principle. She gave several examples, of which I will list only a few. The Reconstruction Congress tried to help freed slaves enforce their rights, not only to remove slavery. The Anti-Peonage Act of 1867 acted against involuntary servitude regardless of race. Members of the New Deal Congress made arguments analogizing labor rights to freedom from slavery. And even in 2000, Congress passed the Trafficking Victims Protection Act.
|
Spelling words with "q"
In this spelling worksheet, learners practice spelling words beginning with qu by writing each of the ten words three times. Students write each word in a sentence.
3rd - 4th English Language Arts
Compound Word Trivia
Engage young learners in expanding their vocabulary with these fun games and activities. Children learn how compound words, root words, and affixes provide clues about the meaning of unfamiliar vocabulary. These six activities get...
1st - 4th English Language Arts CCSS: Adaptable
Building Vocabulary: Prefixes, Roots, and Suffixes
Word roots, prefixes, and suffixes can hold the key to determining the meaning of a host of different words. Included here are five pages of prefixes, roots, and suffixes paired with their meanings and example words.
3rd - 10th English Language Arts
Making Sense of Decoding and Spelling
Go over digraphs, vowel sounds, and affixes with a series of decoding and spelling lessons. Each lesson guides learners through a different reading and phonics skill, building on the lesson before, and challenging them with each step.
Pre-K - 3rd English Language Arts CCSS: Adaptable
Essential Reading Strategies for the Struggling Reader
Beneficial for beginning readers, struggling readers, and those in need of review, a set of language arts activities is a great addition to any foundational reading unit. Focusing on phonological awareness, fluency, instructional...
K - 5th English Language Arts CCSS: Adaptable
Greek and Latin Roots, Prefixes, and Suffixes
How can adding a prefix or suffix to a root word create an entirely new word? Study a packet of resources that focuses on Greek and Latin roots, as well as different prefixes and suffixes that learners can use for easy reference.
3rd - 8th English Language Arts CCSS: Adaptable
Improve Your Spelling with the Visual Thesaurus
Using Visual Thesaurus software, class members participate in a computer-based spelling bee. Then they work in groups to analyze the words and use deductive reasoning to infer spelling patterns. They then present one of their "rules" to...
3rd - 12th English Language Arts CCSS: Adaptable
|
Explore Teaching Examples
Earth System Topics: Solid Earth
Investigating Earthquakes: GIS Mapping and Analysis (College Level) part of Teaching with GIS:Examples
This is a college-level adaptation of a chapter from the Earth Exploration Toolbook. The students download global quake data over a time range and use GIS to interpret the tectonic context.
Geologic Puzzles: Morrison Formation part of Interactive Lectures:Examples
Images of faulted strata, tilted turbidites, and beach rocks bring the field into the classroom, giving students practice in doing what geoscientists do. These images are examples of geologic puzzles.
The Sleeping Mountain part of Role Playing:Examples
In this role-playing scenario, students represent townspeople whose lives and livelihoods are endangered by an active volcano which may or may not erupt in the near future.
See the activity page for details.
Characterizing Plate Boundaries part of Cutting Edge:Online Teaching:Activities for Teaching Online
Students examine maps showing four different types of geologic data along three specific plate boundaries, and document the patterns in the data along each boundary. Next, they compare their observations to the ...
Geologic Mapping Exercise part of Cutting Edge:Early Career:Previous Workshops:Workshop 2010:Teaching Activities
This exercise is designed to simulate some of the mapping aspects of a basic geological investigation. This mock geological investigation is a good wrap-up exercise because it incorporates a variety of geological ...
Determining Earthquake Recurrence Intervals from Trench Logs part of Rates and Time:GSA Activity Posters
Trench logs of the San Andreas Fault at Pallett Creek, CA are the data base for a lab or homework assignment that teaches about relative dating, radiometric dating, fault recurrence intervals and the reasons for uncertainty in predicting geologic phenomena. Students are given a trench log that includes several fault strands and dated stratigraphic horizons. They estimate the times of faulting based on bracketing ages of faulted and unfaulted strata. They compile a table with the faulting events from the trench log and additional events recognized in nearby trenches, then calculate maximum, minimum and average earthquake recurrence intervals for the San Andreas Fault in this area. They conclude by making their own prediction for the timing of the next earthquake.
Depositional Environments and Geologic History Labs part of Cutting Edge:Rates and Time:Teaching Activities
This is a pair of labs that incrementally prepare students to interpret the geologic history of a rock sequence. The first lab introduces students to depositional environments and fossils. The second lab presents a ...
Global Earthquakes: Teaching about Earthquakes with Data and 3D Visualizations part of Cutting Edge:Visualization:Examples
In this series of visualizations and accompanying activities, students visualize the distribution and magnitude of earthquakes and explore their distribution at plate boundaries. Earthquakes are visualized on a 3D ...
Visualizing Global Earthquakes Where and Why do Earthquakes Occur? part of Cutting Edge:Visualization:Examples
In this activity students visualize the distribution and magnitude of earthquakes at and below the surface of Earth and how their distribution is related to plate boundaries. Earthquakes are visualized on a 3D ...
Visualizing Earthquakes at Divergent Plate Margins part of Cutting Edge:Visualization:Examples
In this activity students visualize the distribution and magnitude of earthquakes at divergent plate boundaries. Earthquakes are visualized on a 3D globe, making it easy to see their distribution within ...
|
Radiocarbon dating calibration
Radiocarbon dating works by comparing the three different isotopes of carbon. Isotopes of a particular element have the same number of protons in their nucleus, but different numbers of neutrons. This means that although they are very similar chemically, they have different masses.
Radiocarbon dating has reshaped whole chronologies. Large domed tombs in Greece (known as tholos or beehive tombs) were once thought to predate similar structures such as Maeshowe in Orkney, Scotland, which supported the idea that the classical worlds of Greece and Rome were at the centre of all innovations. Some of the first radiocarbon dates produced, however, showed that the Scottish tombs were thousands of years older than those in Greece: the barbarians of the north were capable of designing complex structures similar to those in the classical world.
A major difficulty arises from the extremely low abundance of 14C, which makes it incredibly hard to measure and extremely sensitive to contamination. In the early years of radiocarbon dating, a sample's decay was measured directly, but this required huge samples. Many labs now use an Accelerator Mass Spectrometer (AMS), a machine that can detect and measure the presence of different isotopes, to count the individual 14C atoms in a sample. This method requires less than 1 g of bone, but few countries can afford more than one or two AMSs, which cost more than A$500,000. Australia has two machines dedicated to radiocarbon analysis, and they are out of reach for much of the developing world.
Vacuoles are non-cytoplasmic areas within the cytoplasm, each bounded by a single membrane. Large vacuoles are characteristic of mature plant cells.
A young or growing plant cell contains many small vacuoles, which coalesce to form a large central vacuole. Animal cells, by contrast, have either many very small vacuoles or none at all. In a mature plant cell the central vacuole may occupy about 90% of the cell volume.
As a result, the cytoplasm is pressed against the cell membrane as a thin layer and the nucleus is pushed to one side. The membrane surrounding the vacuole is known as the tonoplast, and the aqueous solution inside is called cell sap or vacuolar sap.
This sap contains digestive enzymes, ions, metabolites and waste products. The tonoplast is differentially permeable and regulates the movement of ions and metabolites into the vacuole. As in lysosomes, the pH inside the vacuole is slightly lower than that of the surrounding cytoplasm.
This lower pH is maintained by a hydrogen pump in the tonoplast membrane that pumps hydrogen ions into the vacuole. High concentrations of salts, sugars and many water-soluble pigments are present in the vacuole.
Pigments such as anthocyanin in the vacuole give plant parts their specific colors (deep purple or red). The vacuole originates from the fusion and enlargement of small vacuoles present in meristematic cells, which are believed to originate from the endoplasmic reticulum.
Functions
1. Storage of reserve food like sucrose.
2. Stores and concentrates minerals.
3. As they contain solute in high concentration water enters the vacuole resulting in an outward turgor pressure on the cytoplasm and the cell wall. This results in turgidity of the cell.
4. Store waste products
5. Contain water-soluble pigments to impart coloration to the plant parts.
6. Some plant vacuoles have hydrolytic enzymes acting at acidic pH. These vacuoles function like lysosomes.
7. Secondary metabolites like Tannin, latex etc. are stored in vacuoles.
8. Contractile vacuoles found in some protists and algal cells take part in osmoregulation and excretion.
9. Gas vacuoles or pseudo vacuoles or air vacuoles found in prokaryotes provide buoyancy and also mechanical strength.
The cytoskeleton is a cellular "scaffold" or "skeleton" contained in the cytoplasm of eukaryotic cells. It is a dynamic, extensively networked structure that acts as the skeleton and muscle of the cell, providing movement and stability.
It is also involved in the distribution and orientation of cell organelles and in cell division. Eukaryotic cells contain three main kinds of cytoskeletal filaments: microfilaments, intermediate filaments and microtubules.
Microfilaments / Actin filaments
Microfilaments are long, narrow, cylindrical protein filaments about 7 nm in diameter. Being the thinnest of the cytoskeletal filaments, they are called microfilaments.
These filaments are formed by a family of proteins called actins, and for this reason they are also known as actin filaments. Monomers of actin protein form long thin chains like "strings of pearls." Two chains of actin twine closely around each other to form a filament.
These filaments are mostly concentrated below the plasma membrane, where they maintain cellular shape and in some cases form cytoplasmic protuberances (such as pseudopodia and microvilli).
Functions
1. By forming a band below the plasma membrane they maintain the shape of the cell and also provide strength to the cell.
2. Generate locomotion in some cells like W.B.C. and amoeba (Pseudopodia formation)
3. Interact with myosin muscle fibers for muscle contraction.
4. Link transmembrane proteins (e.g. surface receptors) to cytoplasmic proteins.
5. Anchor centrosomes at the opposite poles of the cell during cell division.
6. Cytoplasmic streaming movement is caused by the action of microfilaments.
7. Form cleavage furrows at the time of cytokinesis.
Intermediate filaments
These are filaments 8-11 nanometers in diameter, more stable than actin filaments, and they form heterogeneous constituents of the cytoskeleton. These filaments are constituted by fibrous protein molecules twined together in an overlapping arrangement. There are four types of intermediate filaments:
Vimentins: common structural support of many cells. Provide mechanical strength to muscle and other cells.
Keratins: Found in skin cells, hair and nails; form the tonofibrils of desmosomes.
Neurofilaments: Found in the axons and dendrons of nerve cells; strengthen the long axons of neurons.
Lamins: give structural support to the nuclear envelope.
The nucleus in epithelial cells is held within the cell by a basket-like network of intermediate filaments made up of keratins. Different kinds of epithelial cells use different keratins to build up their intermediate filaments.
Functions
1. Provide support and strength to cell membrane and nuclear envelope.
2. Form a skeletal network in the cytoplasm.
3. Found as constituents of hair, nail and skin (Keratin).
4. Tonofibrils support the desmosomes.
5. Neurofilaments strengthen the axons.
6. Provide mechanical strength to muscle (vimentins)
|
Contour Trenching and Terraces
Contour trenches are trenches constructed on slope contours to detain water and sediment transported downslope by water or gravity; they are generally constructed with light equipment. They are also known as contour terraces or contour furrowing. They may be lined with geotextiles and filled with rock, stacked or placed to form an erosion-resistant structure.
Purpose: Contour trenches are used to break up the slope surface, to slow runoff and allow infiltration, and to trap sediment. Rills are stopped by the trenches. Trenches or terraces are often used in conjunction with seeding. They can be constructed with machinery (deeper trenches) or by hand (generally shallow). Width and depth vary with design storm, spacing, soil type, and slope.
Relative Effectiveness: Excellent-67% Good-33% Fair-0% Poor-0% (Replies = 3). Two of the three interviewees who rated trenching considered its effectiveness "excellent"; the other thought it "good". Trenches trap sediment and interrupt water flow, slowing runoff velocity. They work best on coarse granitic soils. When installed with heavy equipment, trenches may result in considerable soil disturbance that can create problems.
Implementation and Environmental Factors: Trenches must be built along the slope contour to work properly; using baffles or soil mounds to divide the trench reduces the danger of excessive flow if they are not quite level. Digging trenches requires fairly deep soil, and slopes of less than 70 percent are best. Trenches are hard to construct in heavy, clay soils and are not recommended for areas prone to landslides. Hand crews can install trenches much faster than log erosion barriers (a similarly effective hillslope treatment), and crew skill is not quite as important to effective installation. Trenches have high visual impact when used in open areas (and thus may be subject to controversy), but tend to disappear with time as they are filled with sediment and covered by vegetation. On the other hand, more extreme (wide, deep) trenches installed several decades ago are still visible on the landscape in some areas.
|
Laki craters are a wonderful result of the tremendous devastation that occurred in Iceland in the 18th century. A global catastrophe that killed thousands of people, animals, destroyed much property and was felt almost everywhere. Volcanic ash covered the sky causing famine even in Japan! Today, the Lakagígar crater area in south Iceland attracts many visitors from around the world who come to see unique natural phenomenon which is also a reminder to us all how fate lies in the hands of Mother Earth.
The Lakagígar eruption occurred in 1783 and was the largest volcanic eruption in Iceland since settlement. The lava flow it generated was the third largest on Earth since the last Ice Age. The Lakagígar region and line of Laki craters were formed over a period of eight months between 1783 and 1784, and were named after Laki, the highest mountain in the region. The eruption was preceded by earthquakes, which shook the region for several days. The eruption began on June 8, 1783, accompanied by thunderous explosions, the pungent smell of sulfur, ash and landslides.
Check our best sellers to the Central Of Iceland
The Lakagígar Craters are located along a line of ten crevasses, each of which is 2-5 km long. At the southern end of the line of craters is Hnúta Mountain, where the first crevasse appeared. Frequent earthquakes accompanied the eruptions, each one causing a new crevasse north of the previous one. There were probably at least 10 cycles of volcanic activity over the eight months, and as a result a line of craters was formed, the largest being in the middle, 135 craters in all. The Laki eruption changed the surface completely. The region facing Laki was covered with muddy earth from the river; in the south were green valleys with farms and swamps where the Skafta River flowed.
The eruptions were named Skaftáreldar, meaning "Fires of the Skafta River", because of the lava that flowed through and filled the river on the third day of the eruption. The Hverfisfljót River also filled with flowing lava, causing it to change course. These two rivers formed channels for the lava: the flow from the western crevasse is called Eldhraun and that from the eastern crevasse Brunuhraun. Lava filled the valleys, flowing at all levels; in certain places the lava flowed over 40 km and reached the coastline.
Two months later, in late July, the eruption subsided on the south side but continued more intensely in the northern areas. All activity ended on 7 February 1784, eight months after it began.
The answer to the question of whether a similar eruption could once again take place is, probably not, however additional eruptions could occur elsewhere in the active volcanic region. There is another line of lava cones located north of Laki, called Lambavatnsgígar but it is unclear when they erupted.
There are three types of craters in the Lakagígar region: scoria cones, spatter cones and ash rings. All are Tephra Craters in various shapes and sizes, some round and some are oblong. Most of them are lava craters; they are also the highest and soar to a height of 100 meters above the surrounding areas.
Most of the lava that seeped out from the western crevasse turned into jagged and grooved blocks, whilst the lava from the eastern crevasse looks like ropes of lava. The craters were formed in different shapes and made from different types of materials, due to differing levels of crater activity. The rate of gas release from the magma and its contact with water determine the shape and size of the crystallized slag. Slag fragments are between 5 and 10 cm, while spatter cones can be between one and two meters in diameter. The different colors of lava and stones result from the materials in the area around the time of eruption. Red stones contain iron oxides, i.e. rust, whilst black stones do not. Black sand, often called "volcanic ash", is carried by water and wind, filling cracks, cavities and holes.
Flora and Fauna
Eldhraun lava area is almost entirely covered with vegetation in plains near the coast and up to the crater crevasses, 650 m high. At the end of the eruption, the region was of course bare, but over time, vegetation has taken over. At Laki, you can see all stages of the process: first cooled and crystallized lava, then the growth of moss, grass, shrubs and trees. The vegetation development was affected by volcanic activity, greater-than-average rainfall and a relatively temperate climate. Even though it rains a lot, the water percolates through the lava fairly quickly therefore making it difficult for most of the plants to feed. Moss grows continually, and new layers grow and cover the dead and decomposing layers. The vegetation is not sufficient to provide for animals and birds, although there are a number of bird species that come to breed here. The Arctic fox is the only mammal living in the area.
- The lava that flowed from the craters covered 0.5% of the territory of Iceland.
- The lava flow reached a rate of 6,000 cubic meters per second, much greater than the flow of Iceland's strongest rivers.
- Lava volume: 15 km³, equivalent to a cube roughly 2.5 km on each side.
- Lava Zone Area: 600 sq. km
- The eruption released between 400 and 500 million tons of gas into the air.
- The lava fountains reached heights of between 800 and 1,400 meters, and the ash column reached up to 15 km.
As stated, the eruption was the largest destructive event since the settlement of Iceland. Toxic ash fell almost everywhere in the country and toxic gases spread through the air. Dark clouds of ash descended on the coastal area and brought darkness. Ash poisoned vegetation, causing it to wilt, and therefore affected the animals' food supply. The Icelandic winter brought cold and hunger, and because there was no vegetation and the animal population was so reduced, many people died of starvation.
Reports of the events reached the King of Denmark, who at the time ruled Iceland, in autumn, several months after the start of the eruption. He sent a ship with food supplies that left Denmark in November but was unable to reach Iceland until spring. Heavy clouds, cold weather and ice gathered around the coast of Iceland, preventing the departure and arrival of ships. When the supplies finally arrived, they were distributed slowly due to the weakened condition of the population. The situation was so bad that the king considered transferring the residents to Jutland in Denmark.
Two years after the eruption, the number of cattle in Iceland had fallen by half, the number of horses by one-third, and only one-fifth of the sheep survived. Ten thousand people died, a fifth of all residents.
In the Laki area, which at the time was mainly a farming region, half of the population died; 20 farms were completely covered in lava and 30 were abandoned for a long time.
An interesting story from this period is the "Fire Sermon". A month and a half after the beginning of the eruption, the priest of Kirkjubæjarklaustur village gathered his community in church on Sunday. He believed that the eruption was God's punishment for the sins of the community. The lava flowed towards the village, and explosions were heard everywhere. The priest turned to God in prayer, promised that the community would refrain from sin, and asked for their redemption. Surprisingly, the lava flow stopped just short of the village.
The devastating eruption did not only affect Iceland but was felt across the entire northern hemisphere. The gradually expanding cloud of sulfur gases spread and affected the climate. Two days after the eruption, clouds of gases formed a fog that reached the Faroe Islands, Norway and Scotland. A week later, it spread over the entire European continent. Black fog covered the sky in Finland and reached the Balkans. The European press described the sun as a blood-red disk at sunrise and sunset, and the pollution was reportedly so thick that you could look at the sun directly. A month later the fog reached Russia and China.
Apart from Iceland, the ecological effect was felt primarily in Scandinavia, Western Europe and the British Isles, where it affected crops. In July, the effect reached Russia, Siberia and China. Acid rain caused leaves to fall from the trees and killed the spring germination. Overall temperatures fell by 1.3 degrees. The cold weather lasted for three years with devastating global effects. In the summer of 1783, Japan's rice crop failed due to cold and wet weather; as a result, Japan entered one of the great famines in its history. Similar stories came from Alaska, where entire communities died of starvation. There is even a theory that the French Revolution was a result of the Laki eruption, which brought famine to France.
How to get there?
The nearest town is Kirkjubæjarklaustur, located on Route 1. After traveling approximately 6 km west, take the F206 to the north. The distance from there to Laki is around 45 km. The journey takes about two hours each way, and involves crossing several wide rivers. Another option is to take a bus trip that runs every morning from Skaftafell to Kirkjubæjarklaustur.
This trail leads to the Laki summit, at a height of 818 m, located in the center of the row of craters. You can easily see the crevasses created in the western and northern slopes. The view from the summit over the row of craters is spectacular. The Vatnajökull glacier and its surroundings can also be clearly seen from here. The trail is marked in red, takes about an hour and a half, and the climb is fairly steep.
Craters Trail at the foot of Laki
This trail leads from the foot of Laki to the low slopes of the craters, leading through them and showing how the ground erupted. Now the rocks and lava are covered with green moss. The moss is delicate and visitors are asked to avoid stepping on it or touching it. The trail is marked in blue, is easy and takes about half an hour.
Tjarnargígur - Eldborgafarvegur Crater Trail
The trail passes near the water-filled crater Tjarnargígur and continues through a series of craters. Large parts of the sides of the trail are covered in delicate moss that visitors should avoid stepping on. The trail is easy and takes approximately two hours.
Other interesting sites in the region
A wide and impressive canyon, 100 m deep and 2 km long. The canyon runs through soft rock covered and filled with layers of lava. The canyon was formed at the end of the last Ice Age, although the rock itself is about two million years old. When the ice cap melted, it created a large lake in the valley above the canyon. The water was kept in the lake by a large rock that blocked the outflow apart from a gentle trickle down into the canyon. Melting snow formed rivers that led to the rock's erosion; as a result, a massive jet of water created the canyon we see today. The lake refilled again and the river eroded deeper and deeper into the rock, a phenomenon that continues on a small scale to this day. The canyon is on the way to Laki.
The name means "beautiful waterfall", and its thin streams of water create a picturesque appearance. Signs from the parking lot lead up a small hill overlooking the waterfall and the channel in which it flows. The waterfall is located approximately 24 km from Kirkjubæjarklaustur on the F206.
|
Lessons & Classroom Games for Teachers
Recounts are purportedly factual accounts of events from those who actively participated in the occurrences. Students are often interested in recounts because they provide a voyeuristic opportunity to look into an event in which the student was not a participant. When teaching the recount genre to your students, you can use the lessons to encourage them to think critically about information and explore the reliability of the account as a whole. This practice promotes the development of critical thinking skills and careful consideration of both the text and the subtext of a written work.
Define the recount genre. Students cannot begin to learn the common elements of recounts if they do not understand what a recount is. Explain to students that a recount is a firsthand telling of an event from the point of view of the writer.
Read a grade-level-appropriate example of a recount. Many literature books contain recounts; if yours doesn't, look for a collection of personal narratives or firsthand short stories written at your students' academic level.
Discuss the purpose. Author's purpose is an important element of any recount. Discuss the author's reason for writing about the event with your students.
Explore how the age, gender and socioeconomic status of the author affects the recount. The information that you receive from a recount, and the way in which that information is shared, depends heavily on who the writer is. Look at recounts from different individuals and discuss how the recount style and content differs depending upon who wrote it.
Consider the reliability of the narrator. Explain to students that, in recounts, not all information should be trusted. Discuss the factors that influence narrator reliability, such as agenda and affiliations. Decide whether the author should be believed in each recount you read with your students.
Read recounts of the same event from different points of view and compare the accounts. Use a Venn diagram to complete the comparison. Discuss elements that are similar, as well as those that are different. Ask students to decide which recount is likely most truthful and explain the reasons behind their accuracy decision.
Engage students in the composition of their own recounts. Tell each student to select an event of importance to describe in detail. Ask students to base their recounts upon the recounts that they have read in class.
|
A spinning neutron star is tied to a mysterious tail - or so it seems. Astronomers using NASA's Chandra X-ray Observatory have found that this pulsar, known as PSR J0357+3205 (or PSR J0357 for short), apparently has a long, X-ray bright tail streaming away from it.
This composite image shows Chandra data in blue and Digitized Sky Survey optical data in yellow. The two bright sources lying near the lower left end of the tail are both thought to be unrelated background objects located outside our galaxy.
PSR J0357 was originally discovered by the Fermi Gamma Ray Space Telescope in 2009. Astronomers calculate that the pulsar lies about 1,600 light years from Earth and is about half a million years old, which makes it roughly middle-aged for this type of object.
If the tail is at the same distance as the pulsar, then it stretches for 4.2 light years. This would make it one of the longest X-ray tails ever associated with a so-called "rotation-powered" pulsar, a class of pulsar that gets its power from the energy lost as the rotation of the pulsar slows down. (Other types of pulsars include those driven by strong magnetic fields and still others that are powered by material falling onto the neutron star.)
The Chandra data indicate that the X-ray tail may be produced by emission from energetic particles in a pulsar wind, with the particles produced by the pulsar spiraling around magnetic field lines. Other X-ray tails around pulsars have been interpreted as bow-shocks generated by the supersonic motion of pulsars through space, with the wind trailing behind as its particles are swept back by the pulsar's interaction with the interstellar gas it encounters.
-Megan Watzke, CXC
|
Semicolons and Swift: Analyzing Punctuation and Meaning
Grades: 9 – 12
Lesson Plan Type: Standard Lesson
Estimated Time: Two 60-minute sessions
- Develop understanding of punctuation, particularly semicolons, by considering the use of semicolons in Swift's essay and investigating reader expectations of semicolons on usage sites
- Analyze how punctuation relates to meaning by investigating the rhetorical effects of semicolons in Swift's essay
- Apply what they have learned by using semicolons in their own writing
Session 1
1. Open with a discussion of "A Modest Proposal," which your students should have read and discussed (see Preparation, Step 2). If students are unaware, point out that multiple versions of the essay exist. Explain that in this lesson they will explore the use of semicolons in only one version of the essay because they are looking at how the effects of punctuation might affect a reader's interpretation, no matter who inserted the semicolon.
2. Put students into groups of two and have each pair open a new word-processing document. The pairings work best if you design partnerships so that at least one of the students is a strong reader. Have each pair of students put their names on the document and save it to a drive you designate. Give them a copy of the Effects of Semicolon Rubric and review it so students will know the expectations for the lesson's outcomes.
3. Next, have students access the online version of "A Modest Proposal" by Jonathan Swift. Tell them that their task will be to find all the sentences they can that contain semicolons. Each time they find one, they should copy and paste it into the document, making sure that they skip a line between each sentence. Model this process before students begin to do it on their own (see the sketch after these steps for one way such collection could be automated).
4. When students have found all the semicolons they can, have them use the highlighter function to highlight the six to eight words following each semicolon.
5. Using the Sample Semicolon Sentence Sets as a guide, model for students how to group the different types of semicolon use in the essay. Have students look at the highlighted words and read the portion of the sentence that follows the semicolon. They should then put the sentences in groups that have similar structures and patterns. Start them off by doing some with them. For example, you might show the following sentences and ask students which ones would go together:
6. Tell students they should work with their partners to group the sentences they found according to their similarities. There may be some sentences that students do not group because they aren't like any other sentences they collected. This is acceptable for the moment, as long as students have grouped the majority of sentences.
7. Once students have grouped the sentences that are alike, model how to descriptively label the groups so the labels explain what they see. For example, using the "and I" sentences from Step 5, ask students what names they could give the group of sentences that are like this. They might call them "And I" sentences or "And" sentences. Be sure to explain that there is not one correct way to label the groups, but that the labels need to be descriptive of the sentence pattern.
8. Students should then label the rest of their sentences with their partners. Note: If students are still struggling with this concept after you model it for them, you can choose to continue the labeling as a class. If students continue to work in partnerships to do the naming but would benefit from whole-class work, have them share their groups of sentences with the class to see if other groups have suggestions.
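For teachers who want to gather the semicolon sentences ahead of class, here is a minimal sketch of how the collection in Step 3 could be automated. This is not part of the original lesson; the filename and the naive sentence splitter are assumptions for illustration.

```python
import re

def semicolon_sentences(text: str) -> list[str]:
    """Return the sentences in text that contain at least one semicolon."""
    # Naive split on ., ! or ? followed by whitespace; adequate for
    # preparing classroom examples from a plain-text copy of the essay.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s.strip() for s in sentences if ";" in s]

# "modest_proposal.txt" is a placeholder for your own plain-text copy.
with open("modest_proposal.txt", encoding="utf-8") as f:
    for sentence in semicolon_sentences(f.read()):
        print(sentence, end="\n\n")  # blank line between sentences, as in Step 3
```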
Session 2
1. Have students access one or more of the websites from the Resources section and read about the "rules" for semicolons. In the same pairs as Session 1, have them summarize what they learned and each write a summary on an index card for reference.
2. Have students open up their documents from Session 1. As a whole class, review what the students found out about semicolons. Then ask students to look in their documents and find an example of a sentence that uses one of the rules they have identified. Write the sentence on the board. Discuss how it follows the rule.
3. Next, ask students to find a sentence that follows a similar pattern but does not use a semicolon (for example, find a compound sentence with "and" as the conjunction but punctuated more traditionally, with a comma). Help them to locate one if they are having trouble, to show them what you mean (see examples in the Sample Semicolon Sentence Sets). Have them paste the new sentences into the document. Select an example from what they find to compare to the sentence already written on the board, and write that sentence on the board as well. Ask students why they think Swift or an editor might have chosen to use semicolons in one sentence and not another. What effect does the semicolon have on how the sentence could be interpreted?
If students are struggling to answer this question, suggest some possible interpretations. For example, a compound sentence is usually punctuated with a comma to show the relationship of the ideas. A semicolon usually joins two independent clauses without a conjunction. Using both a conjunction and a semicolon (rather than creating two separate sentences) both ties two ideas together and yet separates them more distinctly than a comma would. For example, in the third example listed in Session 1, Step 5 the semicolon makes a break that emphasizes the second part of the sentence. That emphasis could make a reader see the irony in the second part of the sentence. Not only are the methods of cooking mentioned, the mention of specific dishes in that second part of the sentence emphasizes the irrationality of the suggestion.
4. After the group work of theorizing about the effects of the use of semicolons on meaning, have students individually write their conclusions about Swift's use of semicolons and how it contributes to what he's trying to say in his essay. In this writing, they should use a semicolon once in the way the rules indicate; they may use it one more time in a way Swift does, if they can use it to create the same effect he did. Refer to the Semicolon Writing Prompt for details. Note: You may choose to allow students to finish this work for homework if there is not sufficient time left at the end of this session.
- You may also choose to teach the lesson Every Punctuation Mark Matters: A Mini-Lesson on Semicolons, which looks at semicolon use in Martin Luther King's "Letter From a Birmingham Jail."
- Students might find it interesting to read "The Sissy Semicolon" by James J. Kilpatrick and discuss or debate the points it brings up about the uselessness of semicolons. You might even consider sharing this article just prior to giving the writing prompt in Session 2 as a way to allow some students evidence for a different perspective.
- Have students apply the same process they did in Session 2 to other forms of punctuation in other pieces of literature: 1) finding examples; 2) grouping and naming them; 3) looking up corresponding rules; 4) finding contrasting examples or nonexamples; 5) theorizing about the choices and the effects of those choices. A good option would be the use of dashes in a selection from The Woman Warrior by Maxine Hong Kingston (Vintage, 1989).
- Sentences in one version of "A Modest Proposal" that are punctuated by a semicolon are sometimes punctuated differently in other versions. Have students compare punctuation in different versions of Swift's essay to see how meaning can be affected by punctuation. Here are other possible online versions:
- Have students write a short satirical piece of their own using semicolons to create similar rhetorical effects as those they discovered in Swift's essay.
- Observe students’ participation as they work in pairs to collect their sentences and create the word-processing document. Check how they work independently during the individual portions of the lesson, and how effectively they cooperate during the partnered portions. If you choose to do more of the work as a whole class, also consider the level of participation during whole-class discussions.
- Have students turn in the highlighted document they created in Session 1, the semicolon rules summary, and their individual paragraphs. Use the Effects of Semicolon Rubric to evaluate their work.
|
Fluent Readers are Fabulous!
Growing Independence In Fluency
While reading, we should aim to read fluently: smoothly, quickly and with expression. Reading fluently has four indicators: reading faster, reading more expressively, reading silently, and reading voluntarily. This lesson's focus is on reading faster. The goal in fluent reading is automatic word recognition. While we are reading, we want our words to flow just as when we are talking to our friends. We want our readers not to have to decode, but to recognize words automatically. A timer will be used during repeated readings to track students' reading.
-A Job for Zach
-Checklist for times
1.Start the lesson by explaining to students what fluency means, and why it is so important.
Say: "Today we are going to be working on fluency. Fluency means just reading words smoother, faster, and with expression. It's just like you're having a conversation with your friends. Fluency is so important because you will be able to read so much more and be able to comprehend the story and enjoy reading it at the same time.
2.While reading today, it is important to go back and reread words until our sentence is as smooth as if we were talking to our friends. A strategy we are going to work on today is cross-checking. Cross-checking means going back to an unfamiliar word and using strategies to figure it out. The more times we read a word, the easier the word becomes to read.
3.Use modeling to help students understand the concept of fluency.
Say: "When I'm reading a new book, sometimes I come across words that I have never seen before. Have you ever noticed that your reading pace slows down when this happens, and you sometimes even forget what you just read? I'm going to show you how to fix this problem when you come to an unfamiliar word while reading."
Read this passage slow and struggling with words:
"He set down a bill for the buns."
"He s-a-t (sound out word slowly and then go back and figure out it is set) oh, set d-o-n don down a b-i-l-l f-o-r t-h-e b-o-n bon oh, b-u-n-s"
Say: "Did you see how much trouble I had with reading these words? Did it sound smooth when I read? Did it sound like I was talking to my friends? Since this passage was so hard for me to read, I'm going to read it again and really focus on reading fluently."
Read passage again making few mistakes:
"He set d-o-w-n a bill for the b-u-n-s"
Say: "This time I didn't make any mistakes, but I still had to sound out some words. I was able to read it faster because I recognized those unfamiliar words that slowed me down the first time." Now I'm going to read this sentence again but try not to sound out any words and not make any pauses."
"He set down a bill for the buns"
Say: "The more times I read the sentence, the better my fluency got. Re reading sentences helps us build fluency.
4.Say: "Now we are going to read the book A Job for Zach. This book is about a boy named Zach that has to run an errand for his mom. While on his errand, he sees a box fall out of a delivery truck. Zach decides it is his duty to get the box back to the owner. Let's read to find out what trouble Zach gets into. First, we are going to read by ourselves to page 10. Then we will pair up with partners and time ourselves.
5.Pair the students up and have them read together.
Say: "This time we are going to be taking turns while reading. One of you reads, while the other times you. Each of you should read to page 10 three times. You are going to record the times after each reading on the paper I hand out to you. While your partners reads make sure they are reading with fluency, like they are talking with a friend.
6.Say: "Great job class! See how much easier it is to read fluently?"
The paired reading records will be collected and words per minute will be calculated based on the children's reading times. There will also be a writing component, with students answering three questions about the story.
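As a minimal illustrative sketch (not part of the original lesson plan), the words-per-minute arithmetic could be done as follows; the passage word count and the reading times here are hypothetical:

```python
def words_per_minute(word_count: int, seconds: float) -> float:
    """WPM = words read divided by minutes taken."""
    return word_count * 60.0 / seconds

# Hypothetical values: a 180-word passage read three times.
times = [95.0, 78.0, 62.0]  # recorded seconds for reads 1-3
for i, t in enumerate(times, start=1):
    print(f"After read {i}: {words_per_minute(180, t):.0f} WPM")
```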
- Reading Genie Website: http://www.auburn.edu/academic/education/reading_genie/fluency.html
-Blast off with Reading by: Mallie Frasier
-Speeding on the seam foam with fluency by: Andrew Brown
-A Job for Zach,Matt Sims. High Noon Books, 2002.
After 1st read: _______
After 2nd read: _______
After 3rd read: ________
|
When we think of tobacco, a lot of health issues immediately spring to mind – lung cancer, throat cancer, cardiovascular disease, tooth decay, asthma. But tobacco products also wreak havoc on our environment, a problem that is often overlooked.
Tobacco is harmful to the environment throughout the product cycle – all the way from acquiring the materials to post-consumer waste. Tobacco farming, manufacture, industry waste, transport, use, and post-consumer waste all have a negative impact on the environment; all for a product that is deadly and has no benefit to society.
Some of the many harms in the life cycle of tobacco include:
• 5% of global deforestation is due to tobacco farming: 900,000 acres a year.
• Tobacco growing is dependent on chemical inputs like fertilizer and pesticide, which causes soil degradation and water pollution, and can have negative health impacts on laborers.
• Over a million pounds of toxic chemicals were released by tobacco product manufacturing facilities in a single year. The top five chemicals released were ammonia, nicotine, hydrochloric acid, methanol, and nitrate compounds.
• Smokers litter cigarette butts rather than disposing of them properly 65% of the time, which results in approximately 845,000 tons (1.69 BILLION pounds) of cigarette butts as toxic trash each year.
There are numerous policy options that can be considered to combat the negative impact of tobacco on the environment.
Implement best practice policies from the WHO Framework Convention on Tobacco Control (FCTC)
The global tobacco treaty includes an Article (number 18) which addresses tobacco and the environment. Parties are bound to have “due regard” for the environment in their tobacco control policies. This Article has largely been ignored by the Parties, but should be brought to the forefront of the tobacco control discussion going forward.
Extended Producer Responsibility
These programs would require tobacco corporations to monitor their environmental impact, reduce waste, recycle and cleanup any waste. They could also require tobacco corporations to reimburse local communities for cleanup costs associated with post-consumer waste.
Legislation and taxation
Countries, states or localities can pass laws that can help. For example, jurisdictions have considered legislation that bans filters, as well as taxes on cigarette butts.
Finally, a key step is public education. Many people – smokers, tobacco control advocates and the general public alike – don’t consider the environmental impact of tobacco. But with public education, many of these people could become passionate allies. Together, we can shine a light on this issue and protect our world from the toxic and hazardous impacts of tobacco.
|
I’m yet to find a really accurate, original, and dare I say it, interesting, explanation of inductive learning, so I thought I’d write one. Firstly, it’s necessary to examine inductive and deductive reasoning.
Reasoning is how we structure our thinking to arrive at answers and truths, using facts and logic as the basis for higher conclusions.
Inductive reasoning means conjecturing an answer, using the information you have as a platform for further inference.
Deductive reasoning means finding the answer contained in the information you have been given.
Out of these five fruits: Banana, Apple, Tomato, Peach, Pear, the tomato is the odd one out because it is the only one eaten or cooked with salt.
The others can be, but in reality almost always aren’t.
Out of these five fruits: Banana, Apple, Tomato, Peach, Pear, the tomato is different as it can also be classed as a vegetable.
This is a basic fact and cannot be falsified.
You can see from the examples that inductive reasoning takes far more imaginative and cognitive work than simple Aristotelian recall. In other words, inductive reasoning is a deeper form of reasoning that involves speculating, calling on a wider field of information, and taking an intellectual risk, while deductive reasoning involves a certain degree of playing it safe.
I feel there is not enough promotion of reasoning in schools and academia generally. We use reasoning every day but formal and deep argument I believe is something of a forgotten skill. Instead, people are encouraged to simply review and reference literature on a subject and use hedged and equivocal language to arrive at a typically tame conclusion.
In the same vein, inductive learning has become less favoured in education than its easier, less original brother, deductive learning. That is, the promotion of knowledge over understanding. Both are required in education and in life, but the two call on very different thought processes.
In Language Learning
Lying at the root of reasoning in language learning are the questions we ask our learners. For instance, a comprehension question about a text is deductive, while asking the reader's opinion is inductive.
Deductive vocabulary teaching
Inductive vocabulary teaching
Deductive grammar teaching
Inductive grammar teaching
[Don’t get me started on pron. That’s for another post...]
The inductive approach has in many contexts gone out of fashion. It is viewed as a relic from the Direct Method, where learners were felt to be overly challenged and often struggled to grasp a concept. Inductive learning is thus considered an uphill approach. Despite the Direct Method being such a natural method, attempting to mirror the L1 acquisition process, it is also natural that teachers are very quick to provide the answers and make learning as easy as possible. Sometimes people pay lip service to thorough practice, but in reality take the easy route of teaching at the students and to the test.
Deductive elicitation does indeed have its limits but inductive elicitation has a lot more mileage in constructing understanding. The basic premise of this is that as language teachers, we are teaching a skill more than we are teaching a knowledge. Ergo, the emphasis ought to be on the skill of communicative risk-taking above the ability to recall declarative facts. The latter relies on tedious repetition, the former on discovery. Knowing a rule or a word is something altogether different than being able to use it in normal interaction. For that, we do need repetition but more so we need independent and somewhat unscaffolded practice. The truth is, people surprise themselves and grow when they try to do something new.
Inductive reasoning and learning require people to make generalisations. These are often wrong, but that shouldn't be an issue in a language classroom. The wrong answer is fodder for further reasoning and talk. Communicative risk-taking is so important to develop in learners. When a learner does not know how to say a word she sees on the page, the default reaction for many is to stop and ask how it is said. But if the learner just attempts the word, fifty percent of the time they get it right! And of the remaining fifty percent, half the time they get it half right! This is how we make quick learners out of people and dispel a deficit mentality in teachers. Inductive approaches like this make better real-world communicators out of us too.
Some ways to encourage inductive learning
1. Aristotle was the first to comprehensively outline deductive reasoning. As well as this, his theories of classification and universals all relate to deductive inference.
2. Which is why I’ve written a book on the subject.
4. For an overview of the Direct Method, its value and its failings, read this.
5. One of these days I’ll do a post on the Silent Way.
6. For example: these assertions on the website of Transparent Language Learning deprecating inductive learning.
7. See the activity ‘Attributes’.
|
Technetium: the essentials
Since its discovery, searches for the element technetium in terrestrial materials have been made without success. Technetium has been found in the spectrum of S-, M-, and N-type stars, and its presence in stellar matter is leading to new theories of the production of heavy elements in the stars.
Technetium is a silvery-grey metal that tarnishes slowly in moist air. Until 1960, technetium was available only in small amounts. The chemistry of technetium is related to that of rhenium.
Technetium: historical information
Element 43 (technetium) was predicted on the basis of the periodic table by Mendeleev. He suggested that it should be very similar to manganese and gave it the name ekamanganese. Technetium was erroneously reported as having been discovered in 1925, at which time it was named masurium. The element was actually discovered by C. Perrier and Emilio Gino Segre in Italy in 1937. It was found in a sample of molybdenum bombarded by deuterons. Technetium was the first element to be produced artificially and all its isotopes are radioactive. It is named after the Greek technetos, artificial.
Technetium: physical properties
Technetium: orbital properties
Isolation: it is never necessary to make a sample of technetium anywhere other than specialist laboratories, because technetium is radioactive. Technetium is a byproduct of the nuclear industry and is a product of uranium decay. Alternatively, it can be made by the bombardment of molybdenum targets with deuterium nuclei.
Because of the scale of the nuclear industry it is possible to make quite large quantities of technetium (kilograms). The metal itself may be made by the reaction of the sulphide Tc2S7 with hydrogen at 1100°C, or of the pertechnetate NH4TcO4 with hydrogen.
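As an illustration, the sulphide route described above would balance as follows, assuming complete reduction of the sulphide to the metal with hydrogen sulphide as the by-product (this balanced form is an inference, not stated in the source):

$$\mathrm{Tc_2S_7 + 7\,H_2 \xrightarrow{\;1100^{\circ}\mathrm{C}\;} 2\,Tc + 7\,H_2S}$$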
|
In 1654, Chevalier de Mere, a French gambler, wrote to Pierre Fermat and Blaise Pascal, two of France's mathematical giants, with a number of problems concerning the odds of particular combinations of numbers occurring, when several dice are rolled. This event is considered to be the birth of probability theory.
Let's investigate a simple question that Chevalier de Mere could have asked. Suppose we roll two dice. We can get a sum of 4 in two different combinations: (1,3) and (2,2). We can get a sum of 5 in two different combinations also: (1,4) and (2,3). Why is it that in de Mere's practice 5 appears more often than 4?
The answer is the following: the combinations (1,3) and (2,2) are not equiprobable. We have a probability of 1/6 that the first die rolls 2, and a probability of 1/6 that the second die rolls 2, thus making a combination (2,2) with the probability 1/36. By a similar argument we see that the probability that the first die rolls 1 and the second die rolls 3 is 1/36. The probability that the first die rolls 3 and the second die rolls 1 is also 1/36. Hence, the combination (1,3) is rolled with probability 2/36 = 1/18.
In the table below, the numbers in the left column show what is rolled on the first die and the numbers in the top row show what is rolled on the second die. We will color in blue the cells corresponding to the sum of 4, and in pink the cells corresponding to the sum of 5.
Probabilities for Two Dice

      1   2   3   4   5   6
  1   2   3   4   5   6   7
  2   3   4   5   6   7   8
  3   4   5   6   7   8   9
  4   5   6   7   8   9  10
  5   6   7   8   9  10  11
  6   7   8   9  10  11  12
Now we can see that the sum 4 will be rolled with probability 3/36 = 1/12, and the sum 5 with probability 4/36 = 1/9.
Below you can check our random "roll of dice" generator. It will count for you the total number of rolls and the total for each sum. To set the count back to 0, press the "Start Over" button.
Random Dice Generator
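To make the arithmetic above concrete, here is a minimal simulation sketch in Python (illustrative only, not the page's actual generator) that tallies how often sums of 4 and 5 appear:

```python
import random
from collections import Counter

def roll_sums(n_rolls: int, seed: int = 0) -> Counter:
    """Roll two fair dice n_rolls times and tally the resulting sums."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_rolls):
        counts[rng.randint(1, 6) + rng.randint(1, 6)] += 1
    return counts

n = 100_000
counts = roll_sums(n)
# Expected relative frequencies: 3/36 = 1/12 for a sum of 4, 4/36 = 1/9 for 5.
print("P(sum=4) ~", counts[4] / n)  # close to 0.0833
print("P(sum=5) ~", counts[5] / n)  # close to 0.1111
```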
|
Birds: Designers, Engineers, and Builders of Nests
This video is incorporated into the Birds: Designers, Engineers, and Builders of Nests lesson plan. With this lesson students explore the nest-building practices of various bird species. Using video, discussion questions, hands-on exploration, and writing assignments, students will gain an understanding of how and why birds design, engineer and build their nests to meet their needs and how their nesting is influenced by both biological and environmental factors.
|
Properly drying onions prior to storage is key to their preservation and prevents the development of bacteria, mold, and freezing of the onions. Drying is especially important in wet climates or if the onions have been exposed to extended periods of moisture through the harvesting season. The perishability of onions is directly related to their respiration rate; less moisture present at the time of storage means a longer shelf life (Sargent, Talbot, & Brecht, 1988).
It is extremely important that the onions are dry before they are lifted from the field and that all excess moisture is gone before they are stored. A dry outer layer of skin protects and maintains onion freshness and quality during storage. Bulbs harvested for storage require a total of 14 to 20 days of drying and curing prior to being stored (Opara, 2003). An onion is dried correctly if the neck is tight and outer scales are of uniform color and dry to the touch (Adamicki, n.d.). Proper technique prevents shrinkage and sloughing off of the onion caused by excessive drying (Matson, 1985).
Curing, like drying, requires heat application (natural or artificial) before the onions are placed in a storage facility. Proper curing optimizes maximum storage life and quality of onions, protects them from damage and disease, and promotes natural dormancy. Curing processes vary from grower to grower in each geographical growing region. Cost effectiveness and sustainability are dependent on weather conditions and available resources.
Artificial curing can reduce the incidence of neck rot and spoilage of the onions and may be necessary if onions are exposed to significant amounts of moisture, humidity, or low temperatures during the harvesting season. Artificial curing methods involve blowing hot air over onions placed on large pallets. The hot air, near 115°F (approximately 46°C), must blow at approximately 3-4 feet per second for a period of 16-24 hours. It is important to monitor the onions to avoid cooking or overheating, as temperatures exceeding 125°F (52°C) for 24 hours or 115°F (46°C) for 48 hours can severely damage onions in storage (Vaughan, Cropsey, & Hoffman, 1964).
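As an illustrative sketch only (the function and its encoding of the guidance are assumptions, not from the cited source), the damage thresholds above could be written as a simple monitoring check:

```python
def curing_damage_risk(temp_f: float, hours: float) -> bool:
    """True if the cited damage thresholds are exceeded: air hotter than
    125 F sustained for 24 h, or hotter than 115 F sustained for 48 h
    (per the figures attributed to Vaughan, Cropsey, & Hoffman, 1964)."""
    return (temp_f > 125 and hours >= 24) or (temp_f > 115 and hours >= 48)

# Typical regimen from the text: air near 115 F for 16-24 hours.
print(curing_damage_risk(115, 20))  # False: within the normal curing window
print(curing_damage_risk(128, 24))  # True: severe damage risk
```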
Natural curing may be beneficial if onions are grown in a climate hot and dry enough for them to be cured outside. In natural curing, onions are typically windrowed, topped, and left to dry in the field in bags or crates for a period of at least 5 days. During this time, precipitation could disrupt the curing process. If weather does not permit windrowing, other drying methods may be necessary. It has been noted that for optimal curing, the use of bulk pallets or wooden crates should be considered (Vaughan, Cropsey, & Hoffman, 1964). It should also be noted that if curing in crates or pallets is not possible, curing onions in burlap bags is suggested (Vaughan, Cropsey, & Hoffman, 1964). An onion has been correctly cured when the neck is dry and shrunken (Matson, 1985).
Other natural curing methods minimize handling by allowing crops to cure in place. In the cure-in-place method, water supply to onions is cut 1 to 2 weeks before lifting, working best in areas with dry, warm harvesting seasons. Once fields have been dried, onions can be undercut (removal of roots) by a machine, then lifted, topped, and loaded in a one step process for immediate bagging or storing. This minimal handling process maximizes efficiency by reducing costs, energy usage, and the amount of harvesting processes.
|
Education for Sustainable Development
Climate change, the global food crisis and the ongoing financial and economic crisis are examples of sustainability issues our societies have to cope with in a globalized world.
By conducting pilot projects to better prepare children and young people to tackle effectively the challenges of an increasingly interdependent world, ASPnet schools play an important part in the United Nations Decade of Education for Sustainable Development (2005-2014). In fact, ASPnet plays a vital role in pilot-testing, developing and implementing ESD methods that are eventually documented and provide examples of good practice for other schools.
Education is the foundation for sustainable development. It is a key instrument for bringing about changes in values and attitudes, skills, behaviours and lifestyles consistent with sustainable development within and among countries.
The concept of sustainable development includes the key areas of society, environment and economy, with culture as an underlying dimension. The values, diversity, knowledge, languages and worldviews associated with culture influence the way Education for sustainable development is implemented in specific national contexts.
Education for sustainable development is a tool for addressing interlinked objectives such as:
Society: to increase understanding of social institutions and their role in change and development, to promote social justice, gender equality, human rights, democratic and participatory systems, and health care (including HIV/AIDS)
Environment: to increase awareness of the resources and fragility of the physical environment, the effects of human activity on the environment, climate change, environmental protection (including water education), and biodiversity
Economy: to create sensitivity to the potential and the limits of economic growth, its impact on society and the environment, responsible and sustainable consumption, and rural development
In addition to reflection in the classroom, schools often conduct community-oriented projects. This serves not only to meet immediate local needs, but also to equip students with the necessary skills to transform themselves and society.
Therefore ESD should not be seen narrowly as another subject or concern to be added onto the formal education system. It is as much about the content as about the method. ESD is a broad teaching and learning process that encourages an interdisciplinary and holistic approach and promotes critical and creative thinking in the educational process.
- Baltic Sea Project
An ASPnet flagship project for the Baltic Sea countries
- Sandwatch Project
An ASPnet flagship project on the protection of coastal areas
- Great Volga River Route Project (2004-2007)
An ASPnet flagship project on strengthening world heritage education for Education for Sustainable Development
- Water Education in Arab States
An ASPnet flagship project on water conservation in eight Arab countries
Back to top
|
An article in the Los Angeles Times says that scientists have built a working computer model of the human brain:
A new computer model of the brain can perceive, process and act on visual information, such as questions from an IQ test. The model, published Thursday in the journal Science, only simulates about 2.5 million of the estimated 60 billion to 100 billion cells in the brain, but it is the first to connect simulated activity of those cells to actual behaviors.
Over the last decade, an increasingly ambitious group of neuroscience researchers has focused its energies on using computers to model the activity of the brain. For the most part, those researchers have attempted to expand the number of cells the models include while maintaining the biological accuracy of the simulations.
But the new model takes a different tack. Rather than focus on how many cells the model includes, the researchers have focused on getting from simulated brain activity to observable actions. And instead of consisting solely of a set of computer chips, the model includes a camera to "see" and a robotic arm to "act."
The artificial brain can use these tools to carry out eight different tasks, including counting, memorizing numbers, and answering questions from an IQ test. The research team, led by Chris Eliasmith of the University of Waterloo in Canada, named the model Spaun.
What makes the model different from your iPhone or laptop, which can also take in information, process it, and act? While hardware and software engineers couldn't care less about how our brains work, the model's creators designed their model to replicate the properties of a small group of brain areas and the neurons contained within them, making all information processing and behavior the result of simulated brain cell activity. The areas were chosen because they are directly involved in visual perception, decision-making and movement.

While this is a notable, or at least interesting, scientific and engineering achievement, I am not sure where it will lead in the advancement of humanity. Not all scientific or engineering feats make us better human beings; some make us worse, and thinking and acting non-humans are the chief narrative components of dystopian science fiction.
Leaving that argument aside for now, I find it puzzling that computer scientists would want to replicate the human brain, with all its imperfections and limitations. Or perhaps it might be that science allows that human imperfection is an acceptable and perhaps an ideal state.
You can read the rest of the article at [LA Times].
|
Aims of the subject:
To provide an engaging curriculum that will enable students to appreciate the intricacies of language, including the beauty and versatility that language has in shaping meaning. By the end of Key Stage 4, students should be able to:
- read a wide range of high-quality and challenging texts, fluently and with good understanding
- read critically, and use knowledge gained from wide reading to inform and improve their own writing
- write effectively and coherently using Standard English appropriately
- use grammar correctly, punctuate and spell accurately
- acquire and apply a wide vocabulary, alongside a knowledge and understanding of grammatical terminology, and linguistic conventions for reading, writing and spoken language
- listen to and understand spoken language, and use spoken Standard English effectively
GCSE Examination Board: AQA
Assessments comprise each part of the AQA specification:
- Language Component 1A: Reading 20th or 21st Century Literature
- Language Component 1B: Prose Writing
- Language Component 2A: Reading 19th Century and either 20th or 21st Century Non-Fiction
- Language Component 2B: Transactional Writing
- Literature Component 1A: Shakespeare extract and essay
- Literature Component 1B: 19th Century novel
- Literature Component 2A: Modern texts
- Literature Component 2B: Poetry
- Literature Component 2C: Unseen Poetry
Year / What will I learn? / Assessment
This will involve study of the whole play, focusing on key scenes in detail, with a particular focus on key characters, relationships, themes and ideas.
A Christmas Carol
This will involve the study of the whole novella, focusing on key extracts in detail, with a particular focus on key characters, relationships, themes and ideas.
English Language Paper 1
This will involve the reading and close analysis of a wide range of challenging literary extracts from the 20th and 21st Centuries. You will practise a wide range of reading skills and question types, from short simple comprehension to more lengthy questions relating to structure, language choices and the writer’s craft. You will also practise the skills of creative writing – both narrative and descriptive.
Power and Conflict Poetry
Study of a selection of poems from the Power and Conflict cluster of the AQA Anthology.
Revision of Language and Literature Paper 1.
Power and Conflict Poetry
Study of a further selection of poems from the Power and Conflict cluster of the AQA Anthology.
Shakespeare extract and essay (Literature 1A)
19th Century novel extract and essay (Literature 1B)
19th, 20th and 21st Century Literature Reading (Language 1A) and non-fiction Reading (Language 2A)
Prose Writing (Language 1B) and Transactional Writing (Language 2B)
Poetry comparison (Literature 2B), Unseen Poetry response (Literature 2C).
English Language Paper 2
This will involve reading and close analysis of a wide range of challenging non-fiction texts from the 19th, 20th and 21st Centuries. You will practise a range of reading skills, from information retrieval to inference and deduction, and will become more confident in your understanding and appreciation of such texts. This will also involve revisiting transactional writing and being prepared to write more confidently and competently about a serious topic on which you have a point of view.
An Inspector Calls
This will involve the study of the play (including social and historical context), focusing on key extracts in detail, with a particular focus on key characters, relationships, themes and ideas.
This will also involve the practice of English Language Paper 2 Skills as part of the study of the text and surrounding social and historical contexts.
Power and Conflict Poetry
Study of the remaining poems from the Power and Conflict cluster of the AQA Anthology.
Completion of English Language and Literature course and revision of all components for the examinations prior to the external examinations.
Revision of all components for the examinations prior to the external examinations.
Modern Text essay based on whole text (Literature 2A)
20th and 21st Century Literary extracts Reading (Language 1A)
External examinations of all components
- Theatre trips and visits relating to set texts
- Revision sessions and workshops
- Creative writing competitions
How you can support your child’s progress
- As the English Language and Literature courses both require a great deal of reading (both whole texts, old and new, and shorter fiction and non-fiction texts), actively encouraging your child to read more often and more widely can only benefit them as they encounter a wider range of texts and become more confident in their approach to less familiar texts. Suggested texts include online and print broadsheet news articles, magazine articles, travel writing, autobiographies and good quality fiction, including novels, plays and poetry. Not only will a love of reading improve your child’s creativity, it will also impact positively on their overall literacy.
- In terms of writing, actively encourage your child to proof-read their work to find any errors or areas for improvement. This will not only help them practise a skill needed for their English examinations, but also develop a life skill for the future.
- Encourage your child to speak using Standard English, with grammatically correct sentences.
- Encourage your child to attend extra-curricular opportunities such as revision sessions, workshops and subject specific visits to gain more thorough understanding and appreciation of texts.
- Encourage your child to regularly access the vast array of online resources made available to them, including GCSE Bitesize, Educake and Seneca, and/or to use revision guides to supplement the work completed in class and for homework.
- Encourage your child to revise the texts progressively along the course so that the knowledge is being consolidated regularly to improve their overall understanding by the end of the course.
- Encourage your child to learn key quotations from the set texts, so that they can use them confidently in their work.
- Encourage your child to plan answers to a range of exam questions, as practice is an invaluable form of revision in English.
Atlas was the first king of Atlantis and the son of Poseidon according to Plato’s account of Atlantis. In traditional Greek mythology, however, Atlas was the son of the Titan Iapetus, often identified with the biblical Japheth, and the nymph Clymene. This apparent contradiction can be explained by the fact that the name Atlas is applied to more than one figure in Greek legend.
Atlas is usually portrayed kneeling with the world on his shoulders. However, in the earliest known statue of Atlas, the 2nd century Farnese Atlas(c), a Roman copy of an older Greek statue, the sphere he bears is the sky, mapped with the stars and constellations known to the Ancient Greeks, who represented them as objects, animals and mythological creatures and characters. 16th century cartographers assumed that the globe represented the Earth rather than the sky, and since then it has been depicted accordingly.
Edwin Björkman noted the opinion that the name Atlas has no Greek root and is generally thought to be of Semitic origin. He also suggested the possibility that the name may derive from thalassa, one of the Greek words for the sea.
However, Peter James points out[047.190] that the name has a clear etymology in the Greek root ‘tlaô’, which can mean ‘to bear’, ‘to endure’ or ‘to dare’. Atlas has also been identified with both the Egyptian god Shu and the biblical Enoch, the latter being the more controversial claim. Lewis Spence went further and identified the Meso-American deity Quetzalcoatl with Atlas!
A somewhat more conventional view was offered by Thorwald Franke, who has written a convincing paper(a) identifying Atlas with King Italos of the Sicels, who gave their name to Sicily and were among the earliest groups to inhabit the island.
A more radical view has been put forward by Brit-Am writer John R. Salverda, who claims that the biblical Adam is the Atlas of Plato’s Atlantis narrative. A similar theory was proposed by Roger M. Pearlman in a 2018 booklet. In this small, difficult-to-read book, the author suggests a linkage between the destruction of Sodom & Gomorrah and Atlantis, places Atlantis in the Jordan Valley, and equates Abraham with Atlas: “If Atlas as described in Plato’s work was based on a historic figure, Abraham alone meets key criteria.” In a more recent paper(d), Pearlman suggests that Göbekli Tepe was founded by Noah (Noach) and his sons!
Moving further east, the Hittites had an equivalent, if not the original, version of Atlas in the form of Tantalus. The Hittites in turn may have derived the identity from the Hurrian god Ubelleris. It was this Anatolian figure that led Peter James to his conclusion that Atlantis had been located in Turkey. Tantalus had a son, Pelops, whom some consider Phrygian, and according to Herodotus the Phrygians were the oldest race on earth.
An even more extreme idea has been proposed by Sean Griffin: that the yogic concept of ‘Kundalini’ is contained within part of Plato’s Atlantis story(b). Griffin begins his explanation by pointing out that ‘atlas’ is the medical term for the topmost of the 33 vertebrae of the human spine!
Introduction and epidemiology
Cirrhosis is a chronic liver disease characterised by replacement of normal liver tissue with scar tissue. It is an irreversible end-stage of chronic hepatitis which causes significant morbidity and mortality. Functional liver tissue is continuously lost, which is initially asymptomatic, as the remaining liver can compensate. However, acute insults can precipitate decreases in liver function, causing hepatic decompensation and the development of dramatic, life-threatening complications. Cirrhosis is also an important risk factor for hepatocellular carcinoma.
Cirrhosis is a common condition worldwide. Causes include:
- Alcoholic liver disease
- Metabolic associated fatty liver disease
- Chronic viral hepatitis (C, B or D)
- Biliary obstruction (primary biliary cholangitis, primary sclerosing cholangitis)
- Autoimmune hepatitis
- Wilson’s disease
- Heart failure
- Alpha-1 antitrypsin deficiency
The most common causes are alcoholic liver disease and hepatitis C infection.
Continuous necrosis and regeneration of the liver parenchyma replaces functioning parenchyma with fibrosis. The fibrosis eventually forms fibrous septa; the more prominent these become, the more the liver takes on a nodular, cirrhotic appearance. Histology shows pseudolobules separated by fibrosis. These pseudolobules can be distinguished from normal hepatic lobules by the fact that they don’t have a central vein.
The liver will now be a brown, shrunken, nonfatty organ composed of cirrhotic nodules.
We can distinguish multiple types of cirrhosis:
- Micronodular cirrhosis (Laennec cirrhosis) – caused by alcoholism
- Macronodular cirrhosis – caused by chronic hepatitis B or C or by autoimmune hepatitis
- Pigment cirrhosis – caused by haemochromatosis (iron accumulation) or Wilson’s disease (copper accumulation)
- Biliary cirrhosis – caused by damage to the biliary tree
Cirrhosis causes hepatic failure, the symptoms of which you can read about below, but the most significant are jaundice, ascites, and hepatosplenomegaly. Laboratory findings in cirrhosis show:
- Elevated serum aminotransferases (ALT and AST) and alkaline phosphatase
- Hypoproteinaemia (globulins, albumins and clotting factors)
- Elevated INR – due to decreased production of clotting factors
Hepatic or liver failure, also called hepatic decompensation, describes the condition where the liver is unable to perform its normal functions. This occurs because the liver parenchyma is so damaged or replaced by scar tissue that the remaining liver cannot compensate for the loss. It may be acute or chronic, but chronic is much more common. Chronic hepatic failure is almost synonymous with cirrhosis, as cirrhosis is its most common cause.
- Acute hepatic failure
- Acute viral hepatitis (A, E)
- Drug effects
- Poisonous mushrooms, phosphorous, CCl4 and halothane
- Paracetamol overdose
- Chronic hepatic failure (cirrhosis)
The complications of hepatic failure are many, and they can be divided into two types:
- Parenchymal decompensation – due to decreased function of the parenchyma
- Hypoalbuminaemia -> oedema, ascites
- Decreased production of clotting factors -> coagulopathy -> excessive bleeding
- Hyperbilirubinaemia -> jaundice
- Hepatic encephalopathy -> confusion and altered mental status
- Hepatorenal syndrome
- Hepatopulmonary syndrome
- Vascular decompensation – due to congestion of the portal circulation
- Portal hypertension
- Oesophageal varices -> can cause significant bleeding
- Caput medusae
You can read more about these consequences in pathophysiology 2.
Alcoholic liver disease
Alcoholic liver disease and non-alcoholic fatty liver disease don’t have their own topics, but they’re important disorders to know.
Introduction and epidemiology
Alcoholic liver disease (ALD) is an umbrella term for liver conditions caused by significant and chronic alcohol abuse. It initially causes liver steatosis, which progresses to alcoholic hepatitis and then cirrhosis unless alcohol consumption stops.
Almost all who abuse alcohol develop liver steatosis, which is reversible, but only a few progress to hepatitis and cirrhosis. Hepatitis C is often found in chronic alcoholics and accelerates the progression of alcoholic liver disease.
Excessive ethanol consumption causes more than 60% of chronic liver disease in Western countries and is the fifth leading cause of death there. Alcoholic liver disease is a major cause of liver transplantation.
Alcoholic liver disease is caused by significant alcohol consumption over long periods of time. The risk increases proportionally with the amount of alcohol consumed. There is no threshold above which ALD invariably develops, as this varies from person to person, but most people with ALD have been drinking ~10 units daily for decades.
When talking about alcohol consumption, it’s important to define how much a unit of alcohol is. In Europe, one unit is generally 10 g of alcohol, while in the US it is 14 g. This corresponds to one 0.33 L beer, one glass of wine, or one small glass of hard liquor.
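Since the unit definition differs between Europe and the US, it can help to see the arithmetic written out. Below is a minimal sketch in Python, assuming the European (10 g) and US (14 g) definitions given above and the approximate density of pure ethanol (~0.789 g/mL); the example drink size and ABV are illustrative assumptions, not clinical guidance.

```python
# Unit-of-alcohol arithmetic; an illustrative sketch, not clinical guidance.
ETHANOL_DENSITY_G_PER_ML = 0.789  # approximate density of pure ethanol (assumed)
GRAMS_PER_UNIT_EU = 10.0          # one European unit, as defined above
GRAMS_PER_UNIT_US = 14.0          # one US "standard drink", as defined above

def grams_of_alcohol(volume_ml: float, abv_percent: float) -> float:
    """Grams of pure ethanol in a drink of given volume and strength."""
    return volume_ml * (abv_percent / 100.0) * ETHANOL_DENSITY_G_PER_ML

def units(volume_ml: float, abv_percent: float,
          grams_per_unit: float = GRAMS_PER_UNIT_EU) -> float:
    """Number of units in a drink, per the chosen unit definition."""
    return grams_of_alcohol(volume_ml, abv_percent) / grams_per_unit

# A 0.33 L (330 mL) beer at an assumed ~4% ABV:
print(f"{grams_of_alcohol(330, 4.0):.1f} g of ethanol")      # ~10.4 g
print(f"{units(330, 4.0):.2f} European units")               # ~1.04
print(f"{units(330, 4.0, GRAMS_PER_UNIT_US):.2f} US units")  # ~0.74
```

This shows why a 0.33 L beer corresponds to roughly one European unit: 330 mL × 4% × 0.789 g/mL ≈ 10.4 g of ethanol.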
Some “fun facts” about alcohol drinking: women are more susceptible to alcohol-induced liver injury, and binge drinking may be as harmful as daily drinking.
Hepatocellular steatosis results from substrates being shunted away from catabolism and towards lipid synthesis: alcohol dehydrogenase generates so much NADH from the ethanol that the metabolism favours lipid synthesis. You can read more about this in pathophysiology 2.
The causes of alcoholic hepatitis are most likely toxic products and metabolites from ethanol metabolism, like:
- Reactive oxygen species generated during oxidation of ethanol
- Cytokine-mediated inflammation and cell injury
- Alcohol itself
The progression of alcoholic liver disease is as follows:
- Hepatocellular steatosis
- Steatosis refers to fatty change of the liver. Fat accumulates in the hepatocytes, with the centrilobular ones the first to show the change; the lipid accumulation then spreads outward from the central veins to the midlobular and periportal hepatocytes. The accumulating lipid droplets expand the cells and displace the nucleus.
- Macroscopically, the liver is greatly enlarged (4–6 kg or more), soft, yellow and greasy.
- Steatohepatitis refers to the presence of both inflammation and steatosis.
- Hepatocyte ballooning, where single or scattered foci of hepatocytes undergo swelling and necrosis.
- Mallory-Denk bodies, which consist of damaged intermediate filaments and appear as strongly eosinophilic inclusion bodies in degenerating hepatocytes
- Neutrophil infiltration – neutrophils accumulate around the degenerating hepatocytes, especially around Mallory-Denk bodies
- A distinctive pattern of scarring, which appears first as central vein sclerosis and spreads outwards, encircling individual hepatocytes or small clusters of them; under the microscope it looks like a chicken-wire fence
The area around the central veins is most susceptible to toxic injury because the generation of acetaldehyde and free radicals is greatest there. Pericellular and sinusoidal fibrosis also develop in this area. Both steatosis and alcoholic hepatitis can be reversible if the patient stops drinking alcohol.
Alcohol-induced liver damage usually shows an increased AST/ALT ratio on labs, commonly higher than 2.
Metabolic associated/non-alcoholic fatty liver disease
Introduction and epidemiology
Metabolic associated fatty liver disease (MAFLD), previously known as non-alcoholic fatty liver disease (NAFLD), refers to liver disease which develops due to obesity and type 2 diabetes mellitus. The pathology and progression are similar to those of ALD, but without alcohol abuse: it progresses from steatosis to non-alcoholic steatohepatitis (NASH) to cirrhosis.
Recently (2020), it has been proposed that NAFLD needs a new name that better describes its pathogenesis, namely metabolic associated fatty liver disease (MAFLD).
It’s a very common condition in the Western world.
Insulin resistance results in accumulation of triglycerides in hepatocytes due to these mechanisms:
- Impaired oxidation of fatty acids
- Increased synthesis and uptake of fatty acids
- Decreased hepatic secretion of VLDL
The fat-loaded hepatocytes are very sensitive to lipid peroxidation products generated by oxidative stress, which can damage mitochondria and plasma membranes, eventually leading to hepatocyte apoptosis. Oxidative stress and release from visceral adipose tissue also increase the levels of TNF and IL-6, contributing to liver damage and inflammation.
Ducks are one of the world’s most loved animals, with hundreds of different species, each as colorful and charismatic as the last.
But did you know that ducks have some pretty impressive evolutionary features that have allowed them to adapt to living in the water and some pretty extreme weather conditions?
Here, we’re going to take an in-depth look at ducks’ feet.
You’ll find some fantastic facts and accompanying pictures to help you discover more about these beautiful water birds.
They Walk on Their Toes
Unlike humans and most other mammals, ducks walk on their toes rather than on the soles of their feet.
This is something that most birds do, and it’s known as “digitigrade” locomotion. But how do they manage this without falling over all the time?
Well, it’s all down to evolution. Some of the lower bones in a duck’s foot are fused together.
This forms an entire segment of their leg called the “tarsometatarsus”.
This sits just above the section we consider to be their toes and keeps them strong and stable as they walk.
Their Legs and Feet Can Change Color
It’s not just the famous chameleon that can change color!
A mallard’s feet go from a pale orange to a bright orange during the breeding season, and it’s believed that this is a way for them to attract a mate.
Hormones cause this change of color, and the brighter orange their feet become, the better their chance of attracting a female mallard.
As soon as the mating season is over, their feet go back to their usual pale orange. It’s not just orange-footed ducks that can do this either.
Ducks with blue or gray feet can also intensify their colorings with hormones to attract a mate during mating season.
Their Feet Act as Shock Absorbers
If you’ve ever dived foot-first into water you’ll know that the soles of your feet can start stinging with the impact shock.
Ducks don’t have this problem. When a duck lands on water, its feet act like water skis, absorbing the shock of the landing and “skiing” along the surface of the water.
This also allows them to slow down before coming to a complete stop.
They Have Four Toes
Just like many other birds, ducks only have four toes.
These are arranged with three toes pointing forward and one smaller toe pointing backward.
This arrangement is called “anisodactyly.”
Ducks also have a claw on the end of each toe to help them keep a steady grip in muddy conditions and navigate through waters with lots of plant growth beneath the surface.
Their Webbed Feet Have a Name
We all know that ducks have webbed feet. But did you know that the webbing actually has a name?
When two or more toes are fused together like this, it is known as “syndactyly.”
This is common in all aquatic birds and mammals, as the webbing helps them easily swim through the water.
They Can Run on Water
Unlike many non-aquatic birds, ducks can’t simply take off from a stationary position.
Instead, they rely on their webbed feet to help them run across the water and give them the momentum they need to start flying.
Their wings play a role here, too. A duck’s wings are generally smaller than average for its size, and this gives it the ability to dive under the surface of the water in search of food.
The downside to this, however, is increased difficulty with getting into the air. But, thanks to their webbed feet, it’s much easier for them to get up to speed.
They Use Their Feet to Steer Themselves
Ducks have extra skin on their back toes, providing them with more webbing.
As such, they can swim and steer themselves through the water efficiently.
They also paddle their feet constantly when feeding to counteract their buoyancy, allowing them to hold their position in the water and, when diving, to get back up to the surface.
Their Feet Don’t Freeze
Amazingly, ducks use their feet to regulate their temperature.
This means that they can retain a lot of heat in their feet and, as such, can stand on snow-laden and frozen surfaces without their feet freezing.
They also have many intertwined arteries and veins in their feet (an arrangement known as counter-current heat exchange).
This allows heat retained in the body to be passed back down to the feet through the bloodstream, again enabling them to stand on frozen surfaces without freezing.
They Push Their Feet Downwards and Backwards While Swimming
A duck’s webbed feet are great for helping them glide through the water, but ducks also know how to use this webbing to their best advantage.
By pushing downward and backward at the same time, they can stretch the webbing further.
This gives them maximum forward propulsion, and on the forward (return) stroke they fold the webbing to minimize water resistance. Clever stuff!
Their Feet and Legs Are Set in Different Positions Depending on Habitat
One evolutionary trick ducks have adopted is the placement of their feet and legs according to where they live and feed.
Diving ducks have legs and feet that are set quite far back along their bodies. This gives them the most efficient swimming performance possible.
Ducks that spend more time on land than in the water have legs and feet that are more centralized to their body.
This allows them to keep a better balance as they walk.
As you can see, ducks have evolved some pretty remarkable feet and legs over thousands of years.
So, next time you see a duck bobbing on the surface of the water, take a moment to think about what you can’t see – its remarkable legs that help it stay afloat, keep it warm, and allow it to hunt with maximum efficiency!